AI REGS & RISKS: California AI Regulations Are Back – And They Might Survive

By Greg Woolf, AI RegRisk Think Tank

Remembering the Failed SB-1047

You may remember the rise and fall of California’s Senate Bill 1047 (SB-1047) last year, an initial attempt to regulate AI in response to the rapid advancement of the foundational large language models that underpin Generative AI. California holds significant sway over the AI industry, since most foundation model, AI hardware, and cloud infrastructure companies are either based in the state or at least have significant operations there. It was a well-intentioned effort, but it quickly became clear that it was too rigid and too focused on just one factor – the size of the AI model.

After Governor Newsom vetoed the bill, he convened the 2025 Working Group, which kicked off its study in late 2024 and recently published a report aiming to build a much more practical and adaptable framework. They’re essentially saying, ‘Let’s learn from the mistakes of the past and create regulatory oversight using an agile governance model that can adapt to the pace of AI evolution.’

The 2025 Working Group’s Approach

The Working Group is proposing a huge shift from SB-1047. They’re not trying to build a single, monolithic rulebook. Instead, they’re designing a system that responds to the changing landscape and risks of AI.

Think of it like this: they’re not saying, ‘If your AI is bigger than X, you must do Y.’ They’re saying, ‘Let’s look at what the AI is being used for, and how risky that use is.’

Here’s a breakdown of their plan:

  • Broadening the Scope: They’re moving beyond just model size – even the smallest models could cause significant harm if used irresponsibly. They’re focusing on the application of AI: if an AI is being used to make critical decisions – especially in regulated industries like healthcare or finance – it warrants closer scrutiny.
  • Guidance, Not Mandates (Initially): They’re recommending a ‘guidance-based’ approach, encouraging best practices – like external ‘red-teaming’ to assess powerful models before they’re deployed – rather than mandating specific technical measures like a ‘kill switch’ right away. They believe that would be too restrictive and could stifle innovation.
  • Transparency is Key: They’re proposing a robust system for reporting adverse events. Imagine a public database, similar to those used in healthcare, finance, or aviation, where companies would regularly publish information on their AI models – including their limitations and any misuse or harm discovered (a rough sketch of what one such disclosure record might contain follows this list). And crucially, they’re advocating for strong whistleblower protections, even offering incentives for employees or external researchers to report vulnerabilities. This creates a feedback loop, allowing them to adapt policy as needed.
  • Leveraging Existing Power: They’re not trying to create a brand new regulatory agency. Instead, they’re leaning on existing industry regulators – the folks who already have infrastructure, resources and policies to oversee critical industries like finance or healthcare – to handle AI-related harms. Any new regulations would be built on this foundation.
  • Oversight Through Disclosure: The system relies heavily on transparency. If a company fails to disclose risks or has a serious incident, they’ll face scrutiny, reputational damage, and potentially targeted regulatory action. It’s about accountability, not constant policing, aligning strongly with best practices advocated by the SEC and other industry regulatory bodies.
  • Adaptability – The Core Principle: This is perhaps the most important thing. They’re building in the ability to update thresholds and metrics over time. As AI technology evolves, so too will their framework. They’re embracing a ‘learning-oriented’ approach – policies will be refined based on empirical data and scientific progress.
  • Innovation-Friendly: Ultimately, the Working Group wants to foster innovation. By starting with lighter initial burdens – focusing on transparency and voluntary best practices – they aim to prevent overly restrictive rules that could stifle development. Involving industry experts in shaping standards could also increase buy-in and compliance.
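To make the transparency proposal a bit more concrete, here is a minimal sketch of what a single adverse-event disclosure record in such a public database might look like. The Working Group’s report does not prescribe a schema, so every field name below (model_id, severity, and so on) is a hypothetical illustration, loosely modeled on adverse-event reporting in healthcare and aviation.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical sketch of an adverse-event disclosure record for a public
# AI incident database. The report does not define a schema; every field
# here is an illustrative assumption, not a proposed standard.

@dataclass
class AdverseEventReport:
    model_id: str                 # which model/version was involved
    developer: str                # company responsible for the model
    reported_on: date             # when the event was disclosed
    domain: str                   # application area, e.g. "healthcare", "finance"
    description: str              # what happened, in plain language
    severity: str                 # e.g. "low", "moderate", "severe"
    known_limitations: List[str] = field(default_factory=list)  # disclosed model limits
    mitigations: List[str] = field(default_factory=list)        # steps taken in response
    reported_by_whistleblower: bool = False  # flag for protected internal/external reports

# Example: one hypothetical disclosure entry
report = AdverseEventReport(
    model_id="example-model-v2",
    developer="ExampleAI",
    reported_on=date(2025, 6, 1),
    domain="finance",
    description="Model produced systematically biased credit recommendations.",
    severity="moderate",
    known_limitations=["limited training data for thin-file applicants"],
    mitigations=["model rollback", "independent bias audit commissioned"],
    reported_by_whistleblower=True,
)
print(report.model_id, report.severity)
```

In practice, any real schema would be set by regulators and standards bodies; the point is simply that structured, machine-readable disclosures are what make the feedback loop described above possible.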

Likelihood of Adoption

Ultimately, the Working Group’s report represents a significant shift in thinking – a move away from heavy-handed regulation towards a more collaborative and evidence-based approach, giving it a much better chance of adoption. However, the path to implementation isn’t guaranteed to be smooth; any proposed AI legislation will face considerable headwinds. While the Working Group’s emphasis on industry collaboration and data-driven decision-making is designed to mitigate opposition, the sheer power of AI tech giants like Google, Microsoft, OpenAI and Meta could be a major obstacle. These companies have deep pockets and a vested interest in shaping the regulatory narrative. They’ll almost certainly advocate for a more cautious approach, emphasizing the potential risks of overly restrictive regulations and arguing for a ‘wait-and-see’ strategy.

We’re likely to see strong support from consumer advocacy groups, ethicists, and some segments of the academic community, who will champion the Working Group’s recommendations and argue for greater accountability and transparency. Ultimately, the success of the Working Group’s report hinges on a delicate balancing act – convincing lawmakers and industry leaders that responsible AI is possible without stifling innovation – which presents the significant challenge of weighing technological progress against safety, ethics, and societal impact.

Conclusion

In a fundamentally different approach from SB-1047, the 2025 Working Group is proposing an AI governance system that’s flexible, adaptable, and supportive of responsible innovation. It’s a recognition that AI is a powerful tool, and we need to use data and transparency to manage its risks while still allowing it to unlock incredible potential.


Greg Woolf is an accomplished innovator and AI strategist with over 20 years of experience in founding and leading AI and data analytics companies. Recognized for his visionary leadership, he has been honored as AI Global IT-CEO of the Year, received the FIMA FinTech Innovation Award, and was a winner of an FDIC Tech Sprint. Currently, he leads the AI Reg-Risk™ Think Tank, advising financial institutions, FinTech companies, and government regulators on leveraging AI within the financial services industry. https://airegrisk.com