AI EDUCATION: What Is AI Sovereignty?

Each week we find a new topic for our readers to learn about in our AI Education column. 

As businesses, states and individuals ramp up their use and development of artificial intelligence, questions of control are emerging. According to some onlookers, the world is moving towards a new technological “Cold War” driven by artificial intelligence, one that will inevitably pit technology developed by the so-called Free World of the West against political adversaries led by China. 

Hence, the rapid rise of DeepSeek, the Chinese AI powerhouse, was interpreted by many political and economic analysts as a “shot across the bow” for technology firms and policymakers in the West. 

While much of the focus has been on the competition to develop next-generation artificial intelligence, underpinning this competition are questions of trust and control. Who controls the data that AI models are trained on? Who vets the data to make sure it is accurate? Who ensures that sensitive information remains secure? Are developers allowed to create unbiased and open AI models, or are they forced to conform to political and social standards for speech and etiquette? 

These questions are responsible for much of the debate around AI sovereignty.  

But What Is AI Sovereignty? 

I’m skipping my usual “How We Got Here” section this week because the mere mention of DeepSeek serves as enough of an explainer. AI sovereignty is in the news every day, even if it isn’t mentioned explicitly. Much of that discussion happens at the national level—that is, where U.S. interests conflict or align with those of economic adversaries like China, or with strategic allies in Europe and the Americas. But AI sovereignty can also devolve to a more local level: cities, states, streets, companies and individuals.

According to Oracle, “Sovereign AI refers to a government’s or organization’s control over AI technology,” including the software and hardware used to build and run AI, the people responsible for building and running it, and the policies that govern both the people and the technologies.

Sovereign AI, or AI sovereignty, derives from the concept of digital sovereignty, which Oracle defines as the rules and regulations around how organizations deploy and manage digital assets, including their use of the cloud. I would describe it as the ability to control technology, including hardware, software and data. Like AI sovereignty, it can exist at a national, regional or global geopolitical level, but it can also devolve down to the business or the individual.

Digital Sovereignty on Two Levels 

I suggest we think of digital and AI sovereignty the same way we might think about trade regulation and protection. People, goods and services now cross international borders with relative ease, recent tariffs and other enforcement moves aside, but data is even more unencumbered. There’s never really been a customs check on information; anything you learn in a foreign country is pretty easy to carry back to your country of origin, right?

Well, now data doesn’t even need to move with a person. Thanks to the internet, and now the cloud, data has been detached from our brains and voices and can be passed from computer to computer with no human intermediary. Digital sovereignty is, in part, an attempt to get a handle on that free flow of data.

But we also need to acknowledge that, as technology users, we’re often not as free as we think. We store our most sensitive information—bank accounts, investments, credit cards and loans, Social Security numbers and other identification data, and information about our families and friends—in the hardware of major technology providers, and more often than not we don’t even know their names. Most of us rely on a handful of companies to move our data around the globe, placing an awful lot of trust in multinational providers who are not easily regulated. Unseen by us, our personal data makes the rounds through a set of backchannels, allowing private entities to reach us without our permission. Digital sovereignty also describes attempts to bring all this data under control, or at least to force disclosure of where it has been and where it is going.

Bringing It Back to AI 

So, on one level, sovereign AI refers to a country’s ability to develop and control artificial intelligence using its own resources, ensuring that external actors do not bias the products of AI or misuse the data made available to AI models. Much of AI development, like recent technology development in general, has occurred in a world in which barriers between nation-states were dropping and a more global outlook was emerging. Today, the winds are blowing in the other direction, and more guardrails are being built between countries and regions.

In an economic context, sovereign AI is critical to deploying generative AI in the financial services industry and other highly regulated spaces, where volumes of personal data and infrastructure must be protected from theft, fraud and other security breaches.

This discussion probably has a lot of readers thinking of superpowers like Russia and China, or rogue states like North Korea and Iran, which are often identified as bad actors when it comes to data and technology. But while our recent lurch towards protectionism might shade sovereign AI as primarily an American interest, the concept is also a response to encroaching U.S. technological hegemony. The most prolific provider of advanced AI chips, Nvidia, is a U.S.-based company. Most of the biggest standalone providers of AI models are U.S. companies, and the mega-cap multinational technology providers in the vanguard of AI development are almost all companies with roots in the U.S. Sovereign AI is thus also a way for countries to protect their own citizens, data and culture from rising American influence.