Publication addresses barriers to and advantages of using AI in medical diagnosis

Precision Health member Cornelius James, MD, was a co-author on the recent National Academy of Medicine Discussion Paper “Meeting the Moment: Addressing Barriers and Facilitating Clinical Adoption of Artificial Intelligence in Medical Diagnosis.” The paper examines key factors for the successful adoption of AI diagnostic decision support (DDS) tools, as well as how issues of bias and equity affect provider trust in and adoption of these tools. The authors then discuss the policy implications of adopting AI-DDS systems and suggest priorities for collaboratively supporting the success of these tools in delivering safe, efficient, and equitable diagnoses.

James says, “It was a privilege working with an interdisciplinary team on this paper. This included lawyers, clinicians, developers, researchers, learners, and educators with vast experience in this space.” James is a clinical assistant professor of internal medicine and pediatrics. He also leads the Data Augmented Technology Assisted Medical Decision-making (DATA-MD) curriculum initiative. Here, James discusses the paper, its significance, and what the authors hope will come out of sharing this information.

Who are the intended audiences for this paper? What do you hope readers get out of this discussion paper?
The intended audience here is broad. The paper addresses potential barriers to clinicians adopting artificial intelligence-based diagnostic decision support (AI-DDS) tools in their practice. While all the barriers described are clinician-centered, we acknowledge that many individuals and groups will have an impact on clinicians’ use of AI-DDS. This ranges from health system leadership and policy makers to developers and patients. This paper is relevant to all key stakeholders in this space.

Why is now the right time to have this discussion?
We have two important movements occurring at this time. Since the National Academy of Medicine published the Improving Diagnosis in Health Care report in 2015, increased attention has been given to diagnostic safety and decreasing diagnostic errors. At the same time, there have been major advances in health care AI and ML [machine learning], especially in the area of diagnostics. In addition, the amount of data impacting clinical decision making is becoming unmanageable, and clinicians are becoming increasingly overwhelmed by tasks that threaten to decrease meaningful time spent with patients. Therefore, we are seeing an opportunity to not only augment clinicians’ diagnostic decision making using AI/ML, but also to delegate tasks to AI to decrease administrative burdens and improve clinician wellness.

The paper suggests, “Software developers should integrate human clinical diagnosticians at all phases of software development, design, validation, implementation, and iterative improvements.” Is this happening? Can you give an example of it happening at Michigan Medicine?
I am aware of some of the collaborative efforts of developers and clinicians at Michigan Medicine. Specifically, I think about the work that Jenna Wiens is doing with C. diff. More broadly, we have interprofessional groups like E-HAIL, MiCHAMP, and the DATA-MD team that come together to foster collaboration and benefit from the knowledge and skills of key stakeholders. These groups include developers, clinicians, researchers, educators, and others.

The authors point to reasons, means, methods, and desires to use as four key drivers for adoption of AI-DDS tools—which of these drivers poses the most challenges?
That is a great question. I really can’t say that there is one domain that poses a greater challenge than the others. Each of them will certainly require a lot of careful thought and consideration. However, I do believe that all of the issues addressed in the paper will inevitably impact clinician trust, which falls under the “desire to use” domain. Ultimately, clinicians will need to have trust on a number of levels before these tools become common in diagnostic decision making.

The authors focus on concerns about health equity in deploying AI-DDS tools. What are the major concerns, and how do we overcome them?
The main concerns related to equity have been well described in the literature. They include models perpetuating biases against historically marginalized populations, limited generalizability of model outputs, and unequal access to models. Some of these issues may be addressed by training models on representative data. Others may be addressed by training a more diverse workforce of clinicians, developers, and researchers. These issues will require intentional interventions at every stage of an AI-DDS tool’s life cycle, from development to implementation.