Policy, process changes needed to safely integrate AI into clinical workflows

A new report from the Duke-Margolis Center for Health Policy explores some of the policy changes that should be made to enable safer and more effective deployment of artificial intelligence in healthcare.

As AI and machine learning become de facto ingredients in many key clinical technologies, the study, “Current State and Near-Term Priorities for AI-Enabled Diagnostic Support Software in Health Care,” aims to build a better understanding of how they can best be leveraged for optimal analytics and decision support.

WHY IT MATTERS
The Duke report takes stock of the existing legal and regulatory landscape for algorithm-based clinical decision support (CDS) and diagnostic support software, and lays out some essential priorities to work toward in the years ahead to ensure safe deployment of AI in clinical settings.

These aren’t just theoretical concerns. AI and ML are making inroads all over healthcare, of course, and current legislation and regulatory policy – whether it’s the massive 21st Century Cures Act or FDA’s new updates to the Software Pre-Cert Pilot Program – are adequate but still not optimal for a future that promises to evolve at a dizzying pace.

The Duke-Margolis paper, meant as a “resource for developers, regulators, clinicians, policy makers, and other stakeholders as they strive to effectively, ethically, and safely incorporate AI as a fundamental component in diagnostic error prevention and other types of CDS,” looks at some of the major challenges and opportunities facing AI in the years ahead.

Stakeholders like those listed above will need to grapple with big questions, more than a dozen researchers and authors write, such as:

  • Making a case for the value of more widespread adoption of these technologies. Such evidence would include how the software improves patient outcomes, boosts quality and lowers cost of care, and gives clinicians relevant information in a manner they find “useful and trustworthy.”
  • Assessing the potential risk of using those products in clinical settings. “The degree to which a software product comes with information that explains how it works and the types of populations used to train the software will have significant impact on regulators’ and clinicians’ assessment of the risk to patients when clinicians use this software,” said Duke researchers. “Product labeling may need to be reconsidered and the risks and benefits of continuous learning versus locked models must be discussed.”
  • Seeing to it that such systems are deployed in a way that’s both flexible and ethical. More and more health systems will need to develop best practices that can mitigate any bias that could be introduced by the training data used to develop software, they explained. That’s the only way to ensure that “data-driven AI methods do not perpetuate or exacerbate existing clinical biases.”

Also, these organizations will have to think hard about the data implications as the products scale up into settings that may be different from initial use cases. And, of course, “new paradigms are needed for how to best protect patient privacy,” according to the report.

THE LARGER TREND
As the technological capabilities and clinical applications of AI-enabled decision support continue to expand, the Duke researchers said more regulatory clarity is needed from agencies such as the FDA – which has signaled an appetite for much wider approval of machine learning apps – to protect patients from wanton use of the “black box” algorithms that many have warned about.

In addition, there are other major areas that need ironing out. Among them: proper allowances for patient privacy and data access, and the ability for these fast-emerging technologies to demonstrate value and ROI for providers. In all of those, hospitals and health systems have an active role to play.

Then there are all sorts of other technical questions that haven’t necessarily been answered, certainly not on a consistent or widespread basis, such as: how new approaches to labeling different software might improve understanding of its inner workings; how to weigh the relative risks and benefits of locked versus continuously learning AI models; how to evaluate performance over time most effectively; how to mitigate data bias; how to assess “algorithmic adaptability” and more.

ON THE RECORD
“AI is now poised to disrupt health care, with the potential to improve patient outcomes, reduce costs, and enhance work-life balance for health care providers, but a policy process is needed,” said Greg Daniel, deputy director for policy at Duke-Margolis, in a statement.

“Integrating AI into healthcare safely and effectively will need to be a careful process, requiring policymakers and stakeholders to strike a balance between the essential work of safeguarding patients while ensuring that innovators have access to the tools they need to succeed in making products that improve the public health,” he said.

“AI-enabled clinical decision support software has the potential to help clinicians arrive at a correct diagnosis faster, while enhancing public health and improving clinical outcomes,” added Christina Silcox, managing associate at Duke-Margolis and co-author of the report. “To realize AI’s potential in health care, the regulatory, legal, data, and adoption challenges that are slowing safe and effective innovation need to be addressed.”

Twitter: @MikeMiliardHITN
Email the writer: [email protected]

Healthcare IT News is a publication of HIMSS Media.