
Report: Policy Rethink Needed On Bringing AI Into Clinical Decision Making

A new white paper identifies the policy steps needed to incorporate artificial intelligence (AI) into diagnostic and other types of clinical decision support software while supporting effective innovation, regulation and patient protections.

In the white paper, the Duke-Margolis Center for Health Policy seeks to address the major challenges currently hindering safe, effective AI health care innovation.

Giving innovators the tools

Greg Daniel, Deputy Director for Policy at Duke-Margolis, said AI is now poised to disrupt health care, with the potential to improve patient outcomes, reduce costs, and enhance work-life balance for healthcare providers, but that a deliberate policy process is needed.

“Integrating AI into healthcare safely and effectively will need to be a careful process, requiring policymakers and stakeholders to strike a balance between the essential work of safeguarding patients while ensuring that innovators have access to the tools they need to succeed in making products that improve the public health,” Daniel said.

The report, titled “Current State and Near-Term Priorities for AI-Enabled Diagnostic Support Software in Health Care,” uses clinical decision support (CDS) software as an example of how AI can improve health care.

“This project examines the potential benefits and challenges when AI is incorporated into CDS software, particularly software that supports improved clinical diagnosis, as well as barriers that may be preventing development and adoption of this software. Improved CDS could be useful in reducing diagnostic errors, which account for almost 60 percent of all medical errors and an estimated 40,000 to 80,000 deaths each year,” the report says.

The white paper says AI-enabled diagnostic support software — a subset of CDS software — has the potential to augment clinicians’ intelligence, support their decision-making processes, help them arrive at the correct diagnosis faster, reduce unnecessary testing and treatments otherwise resulting from misdiagnosis, and reduce pain and suffering by starting treatments earlier.

Working with a group of experts from across the health care and artificial intelligence ecosystem, the Duke-Margolis Center for Health Policy said the report is meant to foster innovation, create incentives for the adoption of safe and effective AI-enabled diagnostic support software, and provide an overview of AI-enabled CDS software and the regulatory and policy environment surrounding the use of CDS.

FDA regulations stifle innovation

One concern raised by the report’s authors is that, under current U.S. Food and Drug Administration (FDA) regulation, some types of CDS are not considered medical devices. “The most impactful regulatory update affecting AI-enabled CDS software is the FDA’s proposed precertification program for software that are regulated as medical devices,” the report said.

Christina Silcox, a managing associate at Duke-Margolis and co-author on the white paper, said: “AI-enabled clinical decision support software has the potential to help clinicians arrive at a correct diagnosis faster, while enhancing public health and improving clinical outcomes. To realize AI’s potential in health care, the regulatory, legal, data, and adoption challenges that are slowing safe and effective innovation need to be addressed.”

The report then points out priority concerns around the development, regulation, and adoption of safe and effective AI-enabled diagnostic support software that stakeholders will need to address:

  • Evidentiary needs for increased adoption of these technologies. This evidence will include the effect of the software on patient outcomes, care quality, total costs of care, and workflow; the usability of the software and its effectiveness at delivering the right information in a way that clinicians find useful and trustworthy; and the potential for reimbursement for use of these products by payers.
  • Effective patient risk assessment of these products. The degree to which a software product comes with information explaining how it works and the types of populations used to train it will have a significant impact on regulators’ and clinicians’ assessments of the risk to patients when clinicians use this software. Product labeling may need to be reconsidered, and the risks and benefits of continuous learning versus locked models must be discussed.
  • Ensuring AI systems are ethically trained and flexible. Best practices to mitigate bias that may be introduced by the training data used to develop software are critical to ensuring that software developed with data-driven AI methods does not perpetuate or exacerbate existing clinical biases. In addition, developers will need to assess how the data inputs required by their software may affect the scalability of their products to settings that differ from the original setting that provided the training data. Finally, best practices and, potentially, new paradigms are needed to determine how best to protect patient privacy.
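
To make the bias concern above concrete, here is a minimal sketch of the kind of subgroup performance audit a developer might run on held-out validation data. The column names, subgroup definition, and flagging threshold are hypothetical assumptions for illustration, not anything specified in the white paper.

```python
# Minimal sketch of a subgroup performance audit for a diagnostic model.
# Assumes a pandas DataFrame with hypothetical columns: "subgroup" (e.g., an
# age band or sex), "label" (true diagnosis, 0/1), and "pred" (model output, 0/1).
import pandas as pd

def subgroup_report(df: pd.DataFrame) -> pd.DataFrame:
    """Compute sensitivity and specificity per subgroup to surface
    performance gaps introduced by unrepresentative training data."""
    rows = []
    for name, grp in df.groupby("subgroup"):
        tp = ((grp["pred"] == 1) & (grp["label"] == 1)).sum()
        fn = ((grp["pred"] == 0) & (grp["label"] == 1)).sum()
        tn = ((grp["pred"] == 0) & (grp["label"] == 0)).sum()
        fp = ((grp["pred"] == 1) & (grp["label"] == 0)).sum()
        rows.append({
            "subgroup": name,
            "n": len(grp),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return pd.DataFrame(rows)

# Hypothetical usage: flag subgroups whose sensitivity lags the best-served
# subgroup by more than 5 percentage points.
# report = subgroup_report(validation_df)
# print(report[report["sensitivity"] < report["sensitivity"].max() - 0.05])
```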

The authors of the report said it was meant to serve as a “resource for developers, regulators, clinicians, policy makers, and other stakeholders as they strive to effectively, ethically, and safely incorporate AI as a fundamental component in diagnostic error prevention and other types of CDS.”

The report also points out the major challenges currently hindering safe, effective AI health care innovation and includes near-term priorities:

Regulatory clarity – The 21st Century Cures Act removed certain types of clinical decision support software from FDA authority, depending on whether the software systems explain how the input data is analyzed to arrive at a care recommendation. Software that directly diagnoses or treats patients is considered to be of higher risk than software that acts as a support or resource for a clinician’s decision-making. Greater regulatory clarity is needed regarding how the FDA would assess risks to patients when providers use so-called “black box” software versus software that gives more information about how the product uses input data to come to a recommendation.
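
To illustrate the “black box” distinction the report raises, the hedged sketch below contrasts a recommendation returned with no supporting information against one that also reports how each input contributed to the score. The model form, feature names, weights, and clinical framing are all hypothetical, not drawn from the white paper or any real product.

```python
# Illustrative contrast only: an opaque recommendation vs. one that exposes how
# the input data contributed to it. All names and numbers below are made up.
import numpy as np

FEATURES = ["age", "temp_c", "wbc_count", "crp"]   # hypothetical inputs
WEIGHTS = np.array([0.02, 0.4, 0.0003, 0.05])      # hypothetical learned weights
BIAS = -42.0                                       # hypothetical intercept

def black_box_recommendation(x: np.ndarray) -> str:
    """Return only a yes/no flag, with no insight into the reasoning."""
    score = 1 / (1 + np.exp(-(WEIGHTS @ x + BIAS)))
    return "flag for review" if score > 0.5 else "no flag"

def transparent_recommendation(x: np.ndarray) -> dict:
    """Return the same flag plus the per-feature contributions behind it."""
    contributions = WEIGHTS * x
    score = 1 / (1 + np.exp(-(contributions.sum() + BIAS)))
    return {
        "recommendation": "flag for review" if score > 0.5 else "no flag",
        "score": round(float(score), 3),
        "basis": dict(zip(FEATURES, contributions.round(3))),
    }
```

The second function is the kind of output the report suggests clinicians and regulators can more readily assess, because the basis for the recommendation is visible rather than hidden.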

Data access and privacy — Software innovators need access to large volumes of clinical data to “train” the software. But this data must be consistent with the quality of input data used in the real world, because one would not want to train an autonomous vehicle on an empty racetrack when it would be expected to drive down crowded city streets. It will be critical to improve data standards and increase the interoperability of data, while upholding patient privacy protections.
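
In the spirit of the racetrack analogy above, here is a minimal sketch of how a developer might check that training data resembles the data a deployed product actually receives. The column names and both DataFrames are hypothetical, and a real interoperability and data-quality review would go far deeper than these summary statistics.

```python
# Minimal sketch of a training-vs-deployment data comparison. Large gaps in
# means or missingness suggest the software was "trained on the racetrack" and
# may not generalize to the messier data seen in routine care.
import pandas as pd

def drift_summary(train: pd.DataFrame, deploy: pd.DataFrame, cols: list) -> pd.DataFrame:
    """Compare basic statistics and missingness of shared input columns."""
    rows = []
    for col in cols:
        rows.append({
            "column": col,
            "train_mean": train[col].mean(),
            "deploy_mean": deploy[col].mean(),
            "train_missing_pct": train[col].isna().mean() * 100,
            "deploy_missing_pct": deploy[col].isna().mean() * 100,
        })
    return pd.DataFrame(rows)

# Hypothetical usage:
# print(drift_summary(training_df, live_feed_df, ["age", "temp_c", "wbc_count"]))
```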

Demonstrating value — Public and private coverage and reimbursement to the provider will drive adoption and increase the return on investment for these technologies. But AI-enabled clinical decision support software must be able to demonstrate improvements in provider system efficiency and enable providers to meet key outcome and cost measures. A useful first step would be to establish which clinical decision support software features and performance outcomes will be most valued by payers, as well as the types of evidence that will be required to prove performance gains.
