By Kavya and Neeyanth Kopparpu
Ever since the term artificial intelligence (AI) was coined by computer scientist John McCarthy more than 60 years ago, researchers have applied the powerful technology to a variety of applications, including many in health care. Despite some obstacles, health care and AI have the makings of a great partnership. The abundance of patient data—from doctors’ notes and MRI scans to gene sequences—has allowed artificial intelligence to swoop in and help doctors and researchers make highly accurate predictions.
But for patients, something is still missing. If a doctor determined you had terminal brain cancer, your first question would probably be “Why?” Unfortunately, because most powerful AI models are unable to explain their decisions, your doctor would be stuck saying, “because a computer told me.”
Recently, advances in the field of AI have made possible, if not yet an explanation, at least a kind of interpretation. These AI models can provide additional information about what is important in the data they are given.
As the creators of some of these AI models, we have seen firsthand that an additional step of interpretability adds to an AI’s usefulness. For example, when we created a model to detect early signs of Parkinson’s disease, we picked up on problems with it only after we looked at how the model interpreted MRI scans. Originally, the model claimed that random points inside and outside the brain were important to its decision-making process. After realizing this, we were able to fix the model to more accurately detect early signs of the disease.
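The kind of check described above is often done with a saliency map: for each pixel, measure how strongly it influences the model’s output, then see whether the influential pixels actually fall on anatomically meaningful regions. As a minimal sketch (not the authors’ actual model), here is gradient-based saliency for a toy linear scorer over a small “scan”; the weights and the 4×4 size are hypothetical stand-ins for a real network and a real MRI volume.

```python
import numpy as np

# Hypothetical stand-in "model": a linear scorer over a flattened 4x4 "scan".
# A real MRI model would be a deep network; the idea is the same.
rng = np.random.default_rng(0)
weights = rng.normal(size=16)

def score(scan):
    """Model output for one scan (higher = stronger predicted signal)."""
    return float(scan.ravel() @ weights)

def saliency(scan):
    """Gradient-based saliency map: |d(score)/d(pixel)| for each pixel.

    For a linear model that gradient is just the weight vector, so the
    map shows which pixels drive the decision regardless of the input.
    """
    grad = weights  # exact gradient for the linear stand-in
    return np.abs(grad).reshape(scan.shape)

scan = rng.normal(size=(4, 4))
sal = saliency(scan)

# If the brightest pixels of `sal` sit outside the brain region,
# the model is keying on artifacts rather than anatomy.
top = np.unravel_index(np.argmax(sal), sal.shape)
print("most influential pixel:", top)
```

For a deep network the gradient would be computed with the framework’s autodiff (e.g., backpropagating the output score to the input image), but the sanity check is the same: overlay the saliency map on the scan and confirm the highlighted regions make anatomical sense.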
Most important, interpretability may build trust by giving doctors insight into how an AI reaches its decisions. It brings us closer to a future of human/machine teams, in which doctors can begin to understand why the AI made its call.
And we need both: Without AI, we lose the ability to draw on vast amounts of medical data that could lead to more accurate diagnoses. Without human doctors, we lose empathy, reliability, and a natural patient experience. Integrating interpretable results into AI systems will be an important step toward making AI a more accurate, cheaper, and more accessible tool in the doctor’s office.