Member Spotlight: Nikola Banovic

Precision Health member Nikola Banovic is an Assistant Professor of Computer Science and Engineering who specializes in Human-Computer Interaction (HCI). He is an investigator on a recently awarded R01 grant studying “Human-AI Collaborations to Improve Accuracy and Mitigate Bias in Acute Dyspnea Diagnosis.”

What are your research interests, broadly?
I primarily focus on computational modeling approaches to understanding human behavior and creating behavior-aware user interfaces. Leveraging computational models to study human behavior requires that those models be both explainable and interpretable, so I also have a strong interest in Explainable AI (XAI).

To what fields have you applied this research?
My primary field of research is HCI, but my group has applied our work in the domains of health care, computational law, and algorithmic management.

Please talk a little about how the Precision Health Investigators Award project “Precision Diagnosis in Patients with Acute Dyspnea by Linking Imaging and Clinical Data” led to additional NIH funding.
The Artificial Intelligence (AI) technologies on which precision health is based have the potential to provide accurate and timely diagnoses of each patient’s health condition. However, the explainability of such AI-based decision support systems is of critical importance to technology-driven innovation and to broader adoption of AI in the health care domain. A Precision Health Investigators Award provided seed funding for preliminary work, which showed early evidence that explanations play an important role in how clinicians perceive the results of AI-based diagnostic systems. Those early findings were at the core of our NIH grant.

What are the goals of this research? With whom are you collaborating on the project?
The main goal of this research is to study the effects that communicating uncertainty about an AI model’s competence has on clinicians’ decision making. Transparency about the uncertainty of AI-based automated diagnoses (a form of explanation) could invite interaction between the clinician and the AI. For example, when a highly competent AI is confident about its diagnosis, transparency aids its trustworthiness. When it is uncertain, transparency could invite the clinician to look more closely at the diagnosis and provide their expert opinion. On the flip side, communicating uncertainty could also allow clinicians to identify an incompetent AI and exclude it from their clinical workflows.

Our team is composed of clinicians (who bring clinical expertise and the perspective of the end-users who will ultimately interact with Precision Health technology) and computer scientists with backgrounds in machine learning (ML), computer vision (CV), and HCI, which makes us uniquely positioned to study this interesting problem. Our work will contribute new scientific knowledge about empirically validated mechanisms for delivering explainable precision diagnosis tools to clinicians.

How will this work help people?
The main application of this work is in precision diagnosis, which will help clinicians provide timely and accurate diagnoses to their patients. It will ultimately have a positive impact on patients, because clinicians will have more time to focus on the difficult cases where competent AI-based diagnostic systems report high uncertainty in their decisions.

What are the most challenging aspects of this research?
The most challenging aspect of this research is creating and delivering explanations that clinicians can actually make sense of. Current Explainable AI (XAI) research mostly assumes that the end-users are math-savvy computer scientists, which is not the case for most clinicians. Instead, our work will lay the foundation for carefully designed user interfaces and interactions that enable clinicians to understand AI decisions and to ask for the explanations that are of interest to them.

What do you like to do when you aren’t doing research?
I have developed a passion for mentoring students on how to improve their graduate school applications. I realized that many great students, especially those from groups historically marginalized in computer science, do not always have access to mentors who can teach them the nuances of preparing graduate school applications (e.g., writing a strong statement of purpose, or whom to ask for recommendation letters and how). That is why I have been involved in organizing the annual Explore Graduate Studies in Computer Science workshop, which teaches participants about graduate school, graduate student life, and how to prepare their application materials. I also greatly enjoy team sports like soccer and basketball, which I now have the pleasure of playing with my son.