AI REGS & RISK: The Senate Is Taking a Hard Look at OpenAI


By Greg Woolf, AI RegRisk Think Tank

With all the drama unfolding at OpenAI over the past year, senators are growing concerned that the company is not necessarily acting in society’s best interests in its race to win the AI market. No more admiration and platitudes for OpenAI: some in government are taking a critical look at how this industry leader is shaping the AI landscape for the rest of us.

Senate Scrutiny on OpenAI

A group of U.S. Senators, including Brian Schatz, Ben Ray Luján, Mark Warner, Peter Welch, and Angus King, has sent a detailed letter to OpenAI’s CEO, Sam Altman. Their message? It’s time for some serious transparency and accountability as AI becomes increasingly crucial to our national and economic security.

What the Senators Want to Know

The Senators have laid out twelve pointed questions, asking for answers by August 13th. Here’s the rundown:

  1. AI Safety Commitment: Are you still dedicating 20% of your computing power to AI safety research?
  2. Employment Practices: Will you stop forcing non-disparagement agreements on employees?
  3. Ethical Clauses: Will you remove clauses in contracts that penalize employees for speaking out?
  4. Cybersecurity Concerns: What’s your game plan for handling cybersecurity and safety concerns from employees?
  5. Protection from Malicious Actors: How are you protecting your AI models from being hacked or stolen?
  6. Non-Retaliation Policies: Are you sticking to your own non-retaliation policies?
  7. Independent Safety Testing: Do you let independent experts test your AI systems before they go live?
  8. Involving Independent Experts: Will you bring in independent experts to help with safety evaluations and governance?
  9. Government Access to Models: Will you let government agencies test your next big AI model before releasing it to the public?
  10. Post-Release Monitoring: What do you do to monitor your AI after it’s out in the world, and what issues have you seen?
  11. Retrospective Impact Assessments: Are you looking back to see how your AI has impacted the world after deployment?
  12. Meeting Safety Commitments: How are you planning to meet your safety commitments to the current administration?

Reading Between the Lines

So, what’s really going on here? Let’s break it down:

  1. Employee Rights and Transparency: There’s a big focus on how OpenAI treats its employees. The Senators are worried about non-disparagement agreements and other contract clauses that might silence whistleblowers. They want to ensure a transparent, ethical work environment.
  2. Cybersecurity Readiness: With AI being so critical, cybersecurity is a huge concern. The Senators want to know what steps OpenAI is taking to protect its tech from being hijacked by bad actors. Given the sheer volume of personal and confidential queries that run through OpenAI’s systems every day, ensuring robust cybersecurity measures is more crucial than ever.
  3. Independent Oversight: By asking about independent safety testing, the Senators are pushing for more external checks on OpenAI’s work. They’re saying, “We need to make sure your AI is safe and sound before it hits the market.”
  4. Government Involvement: The question about government access to AI models suggests that the Senators want closer collaboration between AI companies and the government to safeguard national interests.
  5. Post-Deployment Accountability: Questions about monitoring and retrospective impact assessments show that the Senators are concerned with the long-term effects of AI technologies. They want to ensure that OpenAI is proactive and responsible even after its AI is out in the world.

What This Means for the AI Industry

The Senate’s increased scrutiny signals a more stringent regulatory environment for AI companies. OpenAI, being a leader in the field, must now balance innovation with the need for greater transparency and security. This is a wake-up call for all AI stakeholders: the regulatory landscape is evolving, and staying ahead means focusing on more than just tech. It’s about solid governance and ethical practices, too.

The Senate’s letter to OpenAI is a big moment in the ongoing conversation about AI regulation. The answers to these questions could really shake things up for AI policy and the industry as a whole. We’ll keep you posted as this story develops.

Thanks for tuning in to this week’s AI Regs and Risks series on AI&Finance. Stay curious, stay informed!


Greg Woolf is an accomplished innovator and AI strategist with over 20 years of experience in founding and leading AI and data analytics companies. Recognized for his visionary leadership, he has been honored as AI Global IT-CEO of the Year, received the FIMA FinTech Innovation Award, and was a winner of an FDIC Tech Sprint. Currently, he leads the AI Reg-Risk™ Think Tank, advising financial institutions, FinTech companies, and government regulators on leveraging AI within the financial services industry. https://airegrisk.com