AI EDUCATION: What Is AI TRiSM?

Each week we find a new topic for our readers to learn about in our AI Education column. 

So artificial intelligence is eventually going to be our doctor, our banker, our lawyer, our financial advisor, an employee, an employer—and maybe, for some of us, our best friend and mate.

Let’s hope we can count on it! 

Is there really any guarantee that AI won’t screw us over—and if it does, who or what can be held accountable? What if an AI model starts to hallucinate and diagnoses us with a cancer we don’t have and prescribes harmful interventions we don’t need? What if it drains our investment accounts and leaves us at zero? What if AI gives bad advice or we’re otherwise harmed by an AI model’s biased results?  

Can we rely on AI at all if we have to ask all these questions? Why bother with this seemingly troublesome technology? 

Is There AI We Can Trust? 

Trust: We’re talking about trust. Again. Since we’re going to continue writing about AI and finance, we’ll keep on talking about trust for some time to come. If you were with us last week, we ran down the body of knowledge known as data ethics and gave some examples of competing frameworks for the ethical collection and use of data—which, as it turns out, is one component of this week’s topic, AI TRiSM. 

AI TRiSM, or AI Trust, Risk, and Security Management, is a framework proposed by Gartner that goes beyond data ethics to offer a more holistic way of thinking about the safe and responsible use of generative artificial intelligence. A quick "how we got here" is in order. To be clear, we aren't favoring Gartner's framework over any other comparable framework for thinking about AI safety; last week we offered two different approaches to data ethics, and this week we're going to look at one approach to ensuring "governance, trustworthiness, fairness, reliability, robustness, efficacy and data protection," in Gartner's own words. 

AI TRiSM has come up time and again in our news sweeps for financial artificial intelligence news, and it dovetails well with our recent discussions regarding trust in technology—so here we are. 

And yes, there is AI we can trust, as long as we understand what we’re using and how it works.

What Is AI TRiSM? 

Generative AI should be safe and reliable; that's what AI TRiSM is all about. It's how we find ways to assure ourselves, and the people we do business with, that our AI models are not going to hurt anyone. To help companies address all of the potential risks around the use of artificial intelligence models, AI TRiSM encompasses three core principles: 

Trust—Ensuring that stakeholders, including users, can be confident in the performance of an AI system. Trust comes from stability, transparency and fairness in decision-making. 

Risk—Understanding and identifying vulnerabilities and potential threats that might impact privacy, security and performance. 

Security Management—Protecting the sanctity of data, not just from misuse or manipulation, but also from unauthorized access altogether. 

Sometimes a fourth principle, Compliance, is included in the framework, meaning companies also need to keep up with the intersecting rules governing AI models issued by governments and independent regulators. 

What Problems Is AI TRiSM Intended To Address? 

We'll start in financial services, where AI is already being deployed for purposes including trading, fraud detection, loan approval and underwriting. Financial enterprises need to be able to understand and explain the securities trading decisions of automated systems. They need to be certain that there is no undue bias in the systems underwriting insurance policies and approving loans; the financial industry has already taken great pains to address underlying biases disadvantaging specific demographic groups in those areas, and it can ill afford a relapse due to AI models trained on biased data. And, of course, all the information gathered from stakeholders needs to be kept safe and secure, especially in fraud protection and prevention. 
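
To make the bias point concrete, here is a minimal sketch of what an automated fairness check might look like: it compares loan approval rates across two demographic groups and flags a possible disparate impact. The toy data and the four-fifths threshold are illustrative assumptions, not a compliance standard.

```python
# A minimal sketch of a demographic-parity check on loan decisions.
# The data and the 80% rule-of-thumb threshold are illustrative only.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   1,   0,   1],
})

# Approval rate per group, then the ratio of the worst-off group
# to the best-off group.
rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"Approval-rate ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Warning: possible disparate impact; investigate the model.")
```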

Healthcare, as we often note, faces similar challenges with regard to information security and privacy, but it also faces the potential for greater calamity should an AI model start to hallucinate or make bad decisions, which could put the health or lives of patients at risk. As in finance, healthcare settings must be able to explain the choices made by their AI models, and their practitioners must be able to rely on those models to make good decisions with good data. 

AI TRiSM carries over into newer AI applications like autonomous vehicles, which need to be kept safe from hackers, and identification and recognition technologies—facial, speech, biometric—which must address bias issues of their own while dealing with public distrust. 

Four Pillars of AI TRiSM 

Explainability and Monitoring—Not only should we know what our AI models are doing, we should be able to understand why they do what they do: how they learned from training data, and how they process new information and use it to make decisions. 
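
As a concrete illustration, here's a minimal sketch of one common explainability technique, permutation importance, using scikit-learn. The synthetic data and the loan-style feature names are hypothetical stand-ins for a real model's inputs.

```python
# A minimal sketch of model explainability via permutation importance:
# shuffle one feature at a time and measure how much accuracy drops.
# The bigger the drop, the more the model leans on that feature.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history",
                 "employment_years", "savings"]  # hypothetical labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>18}: {score:.3f}")
```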

Model Operations (or ModelOps)—An AI model is not completely autonomous, nor is it something we can, to quote inventor and master marketer Ron Popeil, "set it and forget it." We need systems and processes in place to manage our AI models from development through deployment and across their active lifecycle, providing continuous maintenance. We also need to make sure our AI infrastructure is maintained and kept up to date so our models always run efficiently. 
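
For a taste of what continuous maintenance can mean in practice, here's a minimal sketch of a data-drift monitor, one small piece of a ModelOps pipeline. It uses a Kolmogorov-Smirnov test to compare a live feature's distribution against its training-time distribution; the synthetic incomes and the alerting threshold are illustrative assumptions.

```python
# A minimal sketch of data-drift monitoring for a deployed model.
# If live inputs no longer look like the training data, the model's
# decisions may no longer be trustworthy and it should be reviewed.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_income = rng.normal(55_000, 12_000, 10_000)  # seen at training time
live_income = rng.normal(61_000, 15_000, 2_000)       # seen in production

stat, p_value = ks_2samp(training_income, live_income)
if p_value < 0.01:  # hypothetical alerting threshold
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}): "
          "flag the model for review and possible retraining.")
else:
    print("Live data still matches the training distribution.")
```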

AI Application Security (or AI AppSec)—AI data needs to be kept safe and secure from cyberattacks in general, but particularly from certain AI-specific attacks. By manipulating the data an AI model is trained on, bad actors can maliciously influence its behavior downstream. Thus, AI data and other model files must be encrypted throughout a model's lifecycle, with access controlled through permissions. 
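
Here's a minimal sketch of one such control: encrypting a serialized model at rest so that only processes holding the key can load it. The toy model and in-memory key handling are illustrative assumptions; in production, keys belong in a secrets manager gated by permissions.

```python
# A minimal sketch of encrypting a model file at rest with the
# `cryptography` package (pip install cryptography).
import pickle
from cryptography.fernet import Fernet

model = {"weights": [0.12, -0.8, 0.33]}  # stand-in for a real model
plaintext = pickle.dumps(model)

key = Fernet.generate_key()              # store in a secrets manager
fernet = Fernet(key)

with open("model.pkl.enc", "wb") as f:   # only ciphertext touches disk
    f.write(fernet.encrypt(plaintext))

# Only a process holding the key (i.e., granted the right permissions)
# can restore the model:
with open("model.pkl.enc", "rb") as f:
    restored = pickle.loads(fernet.decrypt(f.read()))
assert restored == model
```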

Privacy—Enterprises should meet user expectations and comply with regulations concerning data privacy, and keep abreast of any updates or changes. We should also be transparent about what data we're collecting, why, how it is being stored, and how it is being protected. Not only that, but since the developers of AI models are often using third-party data to train those models, we need to make sure that anyone we're getting data from has robust privacy practices of their own. And yes, AI AppSec standards should hold true for everyone in a data set's chain of custody.
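
One small, concrete privacy practice is pseudonymizing direct identifiers before a data set changes hands. Here's a minimal sketch; the salt and field names are hypothetical, and a real pipeline would also have to handle quasi-identifiers like ZIP codes and birth dates.

```python
# A minimal sketch of pseudonymizing direct identifiers with a salted,
# one-way hash before sharing a record with a third party.
import hashlib

SALT = b"rotate-me-and-store-me-securely"  # illustrative only

def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted SHA-256 digest."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

record = {"name": "Jane Doe", "account_id": "ACCT-4411", "balance": 1250.00}
shared = {**record,
          "name": pseudonymize(record["name"]),
          "account_id": pseudonymize(record["account_id"])}
print(shared)  # safer to hand off for analysis or model training
```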