AI REGS & RISKS: Meta Says ‘No’ to the EU


By Greg Woolf, AI RegRisk Think Tank

On July 18, Meta publicly declined to sign the European Commission’s new AI Code of Practice, the voluntary framework meant to prepare providers of frontier models for the EU AI Act. Meta’s global‑affairs chief Joel Kaplan called the document “over‑reach” that would “throttle the development and deployment of frontier AI models in Europe.”


A Broader Wave of Resistance

Meta is not alone in its push‑back. Google, Microsoft, Airbus, ASML, and start‑up Mistral AI have all urged Brussels to slow implementation of the Act, warning of legal uncertainty, high compliance costs, and the risk of driving innovation offshore. The Commission has so far refused to change its timeline.

The Real Signal

Industry protest is not aimed at governance itself; it targets a top‑down rulebook that struggles to keep pace with fast‑moving model development. As Scott Helfrich, an industry veteran and advisor to the AI RegRisk® Think Tank, puts it, “But here’s the paradox: this isn’t a rejection of governance. It’s a rejection of top‑down, inflexible, unilateral governance—especially when it’s decoupled from the pace and reality of frontier AI development.” The message from builders is that prescriptive mandates drafted far from the code base may quickly become obsolete and stifle legitimate progress.

Community-Led Governance Model

A new consensus is emerging that governance must be community‑based and industry‑led. In practice that means voluntary red‑team exercises, open evaluation harnesses, independent readiness assessments, and continuous risk management, all crafted by those who build and deploy the systems and then validated by the people affected by them. Readiness needs to be built into the software lifecycle, not bolted on after release, and regulators, developers, downstream integrators, and end users must all share ownership to give the framework legitimacy.

Will It Control Big Tech Dominance?

Skeptics reasonably wonder whether self‑governance can hold powerful actors accountable, whether a patchwork of voluntary arrangements can ever become interoperable, and whether “community” will truly include small developers and civil‑society voices rather than serving as a new mask for Big Tech dominance. Leaning into these questions—rather than dismissing them—is the only path to a credible alternative to rigid regulation.

Next Month’s Deadline

On August 2, the EU AI Act’s obligations for general‑purpose AI (GPAI) models take legal effect. Providers must file detailed technical documentation, publish summaries of copyrighted training data, perform adversarial and safety testing, and set up continuous incident‑reporting programs. Models deemed to pose “systemic risk” face annual independent audits. Failure to comply can draw fines of up to €35 million or seven percent (7%) of global annual turnover, whichever is higher.

Ireland’s Data‑Privacy Freeze

The practical stakes became clear in June 2024 when Ireland’s Data Protection Commission ordered Meta to pause training its Llama models on European Facebook and Instagram posts. Meta froze the EU launch of Meta AI and slowed hiring at its Dublin hub while it negotiated a path forward; analysts warned that another stop‑order could chill billions in planned data‑center investment and thousands of high‑value jobs. The standoff illustrates how national privacy rules can disrupt both AI roadmaps and local economies.

Centralized vs. Decentralized Governance

Europe is betting on a single, centralized statute enforced by EU‑wide regulators. The United States, by contrast, just abandoned a proposed federal moratorium on new state AI laws. After the Senate stripped the moratorium from the “One Big Beautiful Bill,” hundreds of state‑level AI bills are in play, creating an increasingly fragmented compliance landscape. Businesses deploying AI in the U.S. must now track a growing mosaic of state requirements, while EU‑bound products face one robust—but uniform—rulebook.

What DWN Readers Need to Know

In the near term, firms will navigate an increasingly complex AI regulatory environment: binding EU rules, softer industry codes, and a patchwork of U.S. state laws. Participating early in voluntary frameworks shortens the gap to statutory compliance and builds credibility with supervisors. Financial‑services institutions fine‑tuning GPAI models for credit, KYC, or robo‑advice should map the Act’s systemic‑risk duties to those models now. Boards that treat AI oversight as a core enterprise‑risk pillar—on par with cyber, liquidity, and climate—will commercialize AI faster and more safely than peers waiting for regulators to hand them a finished rulebook.

Bottom line: AI Governance must become a clearly defined, implemented, and ongoing program rather than a one‑time dictate. If we succeed in building collaborative, adaptive oversight mechanisms, regulation can scale with innovation instead of being left behind by it.


Greg Woolf is an accomplished innovator and AI strategist with over 20 years of experience in founding and leading AI and data analytics companies. Recognized for his visionary leadership, he has been honored as AI Global IT-CEO of the Year, received the FIMA FinTech Innovation Award, and was a winner of an FDIC Tech Sprint. Currently, he leads the AI Reg-Risk™ Think Tank, advising financial institutions, FinTech companies, and government regulators on leveraging AI within the financial services industry. https://airegrisk.com