By Greg Woolf, AI RegRisk Think Tank
Before financial institutions can fully embrace generative AI, regulators are sending a clear warning: not so fast. The Treasury Department, the Federal Reserve, and FINRA have all raised concerns about the risks of sending sensitive financial data into external, cloud-based AI models. This isn’t just about compliance checkboxes. It’s a fundamental reassessment of risk and control that is reshaping how firms think about AI adoption.
The Problem with “Black Box” AI
Cloud-based AI platforms such as OpenAI's GPT models or Google's Gemini deliver remarkable performance. But regulators are increasingly concerned that these systems introduce risks that financial institutions cannot fully mitigate. As MIT’s Dr. Sandra Black has documented, the “black box” nature of advanced models makes their decision-making processes difficult — often impossible — to explain, leaving firms vulnerable to risks they cannot fully audit or control.
- Treasury Department: In its 2024 report on AI and financial stability, Treasury warned that “unauthorized access, misuse, or compromise” of data in cloud-based systems could pose systemic risk.
- Federal Reserve: The Fed is concerned about whether firms can validate how decisions are made and adequately manage the associated model risk.
- FINRA: FINRA has cautioned that generative AI can produce inaccurate or biased outputs, urging firms to ensure robust oversight of third-party AI vendors.
Why Sovereign AI Is Rising
Sovereign AI refers to models deployed entirely within an institution’s own infrastructure — private cloud, on-prem data center, or tightly controlled hosting — ensuring data never leaves the firm’s environment. Joe Merrill, CEO of OpenTeams, a company building open-source infrastructure to help organizations run their own AI, argues that AI should be treated not as SaaS but as core infrastructure that firms must control rather than rent from third parties: “AI is the tool that makes the thing that makes you special go into hyperdrive — but only if you control it.”
The benefits are clear:
- Data residency: Sensitive financial information remains under institutional control, addressing regulators’ core concern.
- Transparency and auditability: Institutions gain full visibility into how their models are trained and how decisions are generated.
- Open-source security: Contrary to popular belief, open-source software is often more secure because its code is open to scrutiny, so vulnerabilities are spotted and patched quickly.
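To make the data-residency benefit concrete, here is a minimal sketch of a pre-flight guard that screens a prompt for client identifiers before it could ever reach an external AI endpoint. The patterns and function names are illustrative assumptions, not a production PII scanner; real deployments would use far more robust detection.

```python
import re

# Illustrative patterns only; a real guard would use a vetted PII library.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN-style number
    re.compile(r"\b\d{9,17}\b"),                     # long digit run (account-like)
    re.compile(r"account\s*(number|#)", re.IGNORECASE),
]

def violates_data_residency(prompt: str) -> bool:
    """Return True if the prompt appears to contain client identifiers."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)

def route_prompt(prompt: str) -> str:
    """Keep flagged prompts on in-house infrastructure; others may go out."""
    return "sovereign" if violates_data_residency(prompt) else "cloud"
```

A guard like this is a simple default-closed control: anything that looks sensitive stays inside the firm's environment, which is exactly the property regulators are asking institutions to demonstrate.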
The Trade-Offs
However, Sovereign AI isn’t a free lunch. Training and operating purpose-built models in-house requires:
- High Capital and Talent Requirements: Implementing sovereign AI can be costly, though open-source platforms like Hugging Face lower the barrier by providing open models that can be fine-tuned into smaller, purpose-built systems. The real challenge lies in the expertise required for initial configuration, as well as ongoing training and maintenance as models evolve.
- Operational Complexity: Institutions must assume full responsibility for patching, scaling, monitoring, and ensuring uptime, rather than relying on the managed services provided by hyperscalers.
- Performance Considerations: Large frontier models aim to solve every problem; smaller, task-specific sovereign models often produce more accurate and reliable results within the defined scope of financial use cases.
For most institutions, the strategic advantage lies not in adopting the largest possible model, but in deploying fit-to-purpose systems that balance accuracy, control, and cost.
A Hybrid Future
For most wealth and investment asset managers, the pragmatic path may be a hybrid approach:
- Sovereign AI for sensitive workloads such as client data analysis, regulatory reporting, and proprietary trading strategies.
- Cloud AI for lower-risk applications like productivity tools, research summaries, or general communications.
This approach allows firms to demonstrate control to regulators while still benefiting from the raw power of frontier AI models where appropriate. The emphasis should be on fit-to-purpose deployment — using smaller sovereign models for critical, high-fidelity tasks, and reserving cloud-based tools for generic or non-sensitive functions. Firms may increasingly run as much as possible locally, but “burst” into hyperscalers for compute when needed — always within their own segregated environment. In this model, institutions use the cloud on their own terms, without surrendering sovereignty.
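The hybrid policy above can be sketched as a simple routing table. The workload category names below are illustrative assumptions drawn from the examples in this section, not a standard taxonomy; the point is the default-closed rule at the end, which keeps anything unclassified inside the firm's environment.

```python
# Hypothetical mapping of workload categories to deployment targets,
# following the hybrid split described above.
SOVEREIGN_WORKLOADS = {
    "client_data_analysis",
    "regulatory_reporting",
    "proprietary_trading",
}
CLOUD_WORKLOADS = {
    "productivity",
    "research_summary",
    "general_communications",
}

def select_deployment(workload: str) -> str:
    """Route a workload to 'sovereign' or 'cloud' infrastructure."""
    if workload in SOVEREIGN_WORKLOADS:
        return "sovereign"   # stays entirely within the firm's environment
    if workload in CLOUD_WORKLOADS:
        return "cloud"       # lower-risk; may use a frontier model
    return "sovereign"       # default-closed: unknown workloads stay in-house
```

Encoding the policy in code, rather than leaving it to individual judgment, also gives examiners an auditable artifact of how the firm decides what may leave its environment.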
Navigating the Next Phase
The future of AI in finance will not be defined by who has the flashiest model. It will be shaped by transparency, accountability, and governance. Regulators are making it clear: firms cannot simply plug into a black-box GenAI system and call it innovation.
Sovereign AI provides a credible path to satisfy examiners and protect clients, but it comes with real costs and operational responsibilities. As open-source models improve and infrastructure becomes easier to manage, the case for sovereignty will strengthen. For now, financial institutions must resist the urge to rush ahead with unchecked adoption. Innovation has to move at the speed of trust — and today, regulators are telling firms to slow down until control and oversight are firmly in place.
Author’s Note:
For this column, instead of using ChatGPT or Gemini, I experimented by running a local Sovereign AI model on a modest work laptop. The performance was surprisingly strong — in some areas even surpassing cloud-based models. It’s an early sign that open-source AI can deliver results without sending sensitive data into external systems. The technology is moving fast, but the lesson remains: proceed carefully and keep control close to home.
Greg Woolf is an accomplished innovator and AI strategist with over 20 years of experience in founding and leading AI and data analytics companies. Recognized for his visionary leadership, he has been honored as AI Global IT-CEO of the Year, received the FIMA FinTech Innovation Award, and was a winner of an FDIC Tech Sprint. Currently, he leads the AI Reg-Risk™ Think Tank, advising financial institutions, FinTech companies, and government regulators on leveraging AI within the financial services industry. https://airegrisk.com