AI REGS & RISK: Controversial US Regulation of AI is Gaining Momentum


By Greg Woolf, AI RegRisk Think Tank

The controversy surrounding California’s SB 1047 and its attempt to regulate AI is gaining steam—from both supporters and detractors.

This week, the bill, now titled the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” is poised for a critical vote in the California state assembly. The bill would require companies building large-scale AI models to implement safety testing, create shutdown capabilities, and maintain detailed safety documentation. These requirements would be backed by annual audits and civil penalties for non-compliance.

Key Amendments to SB 1047

As U.S. AI regulation takes tangible steps forward, SB 1047 stands as a significant example. Recent amendments have addressed concerns from the tech industry, with key changes including:

  • Removal of the Frontier Model Division: Responsibilities for overseeing AI safety have been shifted to the California Government Operations Agency, eliminating the need for a new regulatory body.
  • Adjusted Legal Standards: The compliance standard has been relaxed from “reasonable assurance” to “reasonable care,” aligning more closely with existing legal frameworks.
  • Narrowed Scope of Enforcement and Harm: Civil penalties are now limited to situations where actual harm has occurred or there is an imminent threat to public safety.
  • Open Source Carveout: Developers who spend less than $10 million fine-tuning AI models are exempt from the bill’s requirements.

OpenAI vs. Anthropic’s Support of SB 1047

OpenAI recently broke its silence on SB 1047, voicing strong opposition in a letter to California state officials. They argue that the bill would stifle innovation and drive talent out of California, advocating instead for federal regulation to ensure clarity and competitiveness. OpenAI’s Chief Strategy Officer, Jason Kwon, emphasized the risks to California’s position as a global leader in AI.

In contrast, Anthropic, founded by former OpenAI researchers, has cautiously endorsed the bill. They view it as a necessary step towards embedding safety protocols within AI systems, aligning with their mission of developing transparent and ethical AI. Their support signals a recognition within the AI community of the need for practical, enforceable regulations that move beyond theoretical discussions to implement actionable safeguards.

Moving Beyond Concepts: Practical AI Risk Assessment

With these amendments and cautious endorsements from firms like Anthropic, SB 1047 is part of a larger movement toward practical AI regulation in the U.S. But how do we ensure these regulations are effective? The first step is to define what could go wrong and how to measure it—this is where institutions like MIT are leading the way.

The MIT AI Risk Database serves as a concrete tool for identifying and mitigating the risks associated with AI. It catalogs over 700 potential risks, offering a practical resource for companies and regulators alike. MIT’s work moves beyond the conceptual, providing a structured approach to understanding and managing the real-world impacts of AI. It’s not about halting AI development; rather, it’s about ensuring that as AI becomes more integrated into our lives, it does so with the necessary safeguards in place.

Next Steps for SB 1047

The next step for SB 1047 is a vote in the California state assembly by August 31, 2024. If it passes, the bill will be sent to Governor Gavin Newsom for final approval by the end of September. Governor Newsom has not yet indicated whether he plans to sign the legislation.

The evolving landscape of AI regulation is doing more than just imposing rules—it’s helping to demystify the potential harms associated with AI. By translating abstract risks into concrete guidelines, these regulations provide companies and their boards with the tools they need to move forward responsibly.

The Importance of AI Readiness

As the pace of AI adoption increases, so do the risks and concerns surrounding it. Companies determined to expand their AI strategies are preparing to move forward. To help them navigate this transition, we’re hosting a free webinar offering insights and tools to assess organizational AI readiness. Whether you’re just starting to explore AI or looking to scale, this webinar will offer practical advice on how to proceed safely and strategically.

Registration Link: Join us on September 5th for a Free Webinar to Help Determine Your AI Readiness.


Greg Woolf is an accomplished innovator and AI strategist with over 20 years of experience in founding and leading AI and data analytics companies. Recognized for his visionary leadership, he has been honored as AI Global IT-CEO of the Year, received the FIMA FinTech Innovation Award, and was a winner of an FDIC Tech Sprint. Currently, he leads the AI Reg-Risk™ Think Tank, advising financial institutions, FinTech companies, and government regulators on leveraging AI within the financial services industry. https://airegrisk.com