AI EDUCATION: What Is Explainable Artificial Intelligence?


Each week we find a new topic for our readers to learn about in our AI Education column. 

I love math—and I don’t mean that ironically. I was always good at math in grade school—don’t worry, this is not another Swiss Army knife story—and that added to my enjoyment. In particular, I loved when I could take math out of workbooks and off of the paper and do it all in my head. Not just simple operations like basic arithmetic, but also algebra and basic calculus. 

I liked it so much in my grammar school and high school years that I ended up in a bit of trouble as I made my way through advanced math courses. See, with calculators and graphing calculators proliferating among students, teachers were no longer convinced that students with the right answer had arrived at that answer in a proper manner. Those of us who practiced mental math were asked to “show our work,” detailing each step we took to arrive at the correct answer to a mathematical problem.

Many AI models work like us mental mathematicians. When prompted, they arrive at an answer that at least looks correct at first glance, and most often actually is correct. But they don’t show their work. These models are said to operate in a black box: they do the work but share only the answer, so those of us querying the AI don’t really know how that answer came about.

Enter Explainable AI 

Explainable AI, sometimes abbreviated as XAI, is distinct from AI in general in that it shows its work—all of it. Per IBM: “Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms.” 

Think of a car’s computer and the warning lights on its dashboard. When a warning light comes on, it means the car’s computer has identified a problem via one of the many sensors distributed throughout the vehicle. A driver or mechanic can access the car’s computer to home in on which sensor triggered the light, but the computer can’t tell you why that sensor tripped the warning. Answering that requires lifting the hood, or looking underneath the vehicle, to try to identify the specific problem, and it may take some trial and error to see what clears the computer’s warning. Figuring out what’s going on can go far beyond the warning light. And so it goes with AI and its outputs.

Let’s consider ChatGPT for a moment. No matter which version of ChatGPT you’re using, when you query the AI there’s actually quite a bit of transparency. ChatGPT, unlike many high school and college students, is not really a plagiarist: it can cite its sources, so you have some idea of where its responses come from. What’s missing from most ChatGPT responses, however, is a why. Why and how does the AI choose its sources? Explainable AI lifts the black box from around these models so that we can begin to answer the why.

Why Do We Need a Why? 

If an AI is spitting out responses that look reasonably correct, what’s the big deal about XAI? Shouldn’t we be happy with what we get? Well, yes and no. In some cases, like if you’re using Google’s Gemini AI to write a fan letter to an Olympic hurdler, it probably wouldn’t matter how the AI chose the words and sentences for your letter. In other cases, like when AI is being used to formulate a new drug or offer a treatment plan for a patient, it’s important for practitioners to know why the AI made its recommendations. 

In criminal justice, XAI is necessary because work conducted by artificial intelligence as part of an investigation might be deemed inadmissible in court if the AI’s operations can’t be explained.

In the financial world, AI will soon be used almost universally to create plans, build investment portfolios and manage the full spectrum of a client’s financial needs. But there will still be a person and a business responsible for meeting the regulatory compliance requirements for those portfolios and plans, and they’ll need to be able to explain how the tools they use to make recommendations and execute plans came up with their responses. In fields like medicine and finance, most AI should be XAI: doctors and financial advisors, not to mention health care administrators and financial executives, can’t afford the risks of using black box technology.

But, in addition to building trust and meeting regulatory requirements, XAI is better for the technology itself. If programmers and technologists have access to an AI model’s step-by-step reasoning, they can better understand why a model might hallucinate or produce suboptimal responses, and they can more easily tweak a model in real time, even after it’s deployed commercially, to improve its reasoning and performance. XAI is key, then, to developing better AI.

Explainable AI Versus Interpretable AI 

Interpretable AI, according to IBM, is distinct from explainable AI because it is only concerned with whether an observer or user can understand the AI’s decisions or predict how it will make decisions—basically, it’s general knowledge of how an AI model works. Explainable AI involves understanding, within a specific process, what an AI model is doing at every step along the way because, again, it shows us its work. 

Without diving into the highly technical elements of how AI works, we can say that explainable AI relies on processes that aren’t necessarily present in artificial intelligence in general. An explainable model will have in place some system for testing the accuracy of its outputs or predictions, and it will offer users a way to trace its decision-making process from query to output.
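To make that idea concrete, here is a minimal sketch in Python of what tracing a decision can look like, using an inherently interpretable model from the scikit-learn library. The loan-style data, feature names and approval labels are made-up assumptions for illustration, not a real credit model or any particular vendor’s XAI system; the point is simply that the model’s rules, and the path from a given query to its answer, can be printed out and read by a person.

# A tiny, inherently interpretable model: a shallow decision tree whose
# rules can be printed and audited. All data here is hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["credit_score", "annual_income", "debt_to_income"]

# Made-up applicants: [credit score, income in dollars, debt-to-income ratio]
X = [
    [720, 85000, 0.20],
    [580, 40000, 0.55],
    [690, 60000, 0.35],
    [610, 30000, 0.60],
    [750, 120000, 0.15],
    [640, 52000, 0.45],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = approve, 0 = deny

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# "Showing the work": every rule the model uses, readable by a human reviewer.
print(export_text(model, feature_names=feature_names))

# Tracing a single decision from query to output.
applicant = [[655, 58000, 0.40]]
print("decision:", "approve" if model.predict(applicant)[0] == 1 else "deny")

A large neural network can’t be read off this way, which is why XAI research also includes tools that approximate this kind of rule-and-trace explanation for more complex models.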

Why We Like XAI 

Our world is increasingly described and governed by algorithms, but to date most of us have little insight into what these algorithms are actually doing. In most AI applications we’ve encountered so far, this is of concern only to the obsessively curious among us: why did Amazon recommend a particular product, or why did our streaming service pick a particular song or movie? However, as AI and its algorithms take more control over the more serious parts of our lives, like our health, safety and financial security, explainability becomes more important for building and maintaining trust.

Emerging generative artificial intelligence models might be the most brilliant mental mathematicians ever conceived—but there’s nothing wrong with expecting them to show us their work.