AI REGS & RISK: Could AI Run Amok? Some People Are Starting to Think So


By Greg Woolf, AI RegRisk Think Tank

Remember how the internet was created as a wonderful tool to share information for the greater social good—connecting loved ones across the world, collaborating on drug discovery, creating a global village? Well, how long did it take for the internet to be used predominantly for videos of cats riding Roombas, unsolicited porn, and romance scams?

The Rise of Deceptive AI: A Cautionary Tale

An alarming demonstration of AI’s potential for deception comes from a developer who built an AI agent specifically designed to spread false news and opinions. In a YouTube video, the developer detailed how this agent posted a fake story on Reddit claiming that Elon Musk was hand-selecting 10,000 elite people to live on Mars in case humanity destroys itself. While obviously ridiculous, the post, along with the automated replies the agent generated, garnered over 60 real human responses in just six hours before the developer deleted it.

This experiment underscores the real dangers of unregulated AI. The ability of AI to generate convincing false narratives poses a significant threat to public trust, especially in an election year (remember Cambridge Analytica?).

So What Are We Doing About It?

California’s Proposed Law SB-1047: A Regulatory Misstep?

California has always been at the forefront of tech innovation, and its latest legislative effort, SB-1047, aims to regulate AI technologies. This lengthy, complex bill mandates safety assessments and shutdown capabilities for AI models, focusing on regulating the technology itself rather than its applications. While the intention is to enhance safety, critics argue that such regulation could stifle innovation without addressing the real issues of AI misuse. An analysis by DeepLearning.AI contends that the bill’s approach is misguided: instead of promoting responsible use, it could create unnecessary hurdles for developers and researchers, potentially driving AI development underground or out of the state. The key to effective regulation lies not in controlling the technology itself but in ensuring its ethical application and addressing the specific use cases that pose risks.

The OpenAI Letter: A Call to Action from Insiders

Adding to the growing chorus of concern, a group of current and former employees of OpenAI and other leading AI companies has issued a stark warning about the potential dangers of AI. Their letter, hosted on the Right to Warn website, highlights the risks of unregulated AI development and calls for urgent action to prevent catastrophic outcomes. These insiders emphasize that without proper oversight and ethical guidelines, AI systems could be misused in ways that threaten societal stability and security. Their plea underscores the need for a balanced approach to AI governance, one that combines robust regulatory frameworks with industry-led initiatives to promote transparency and accountability.

There Is Hope: Demystifying How AI Thinks

Amidst the concerns, there is a beacon of hope. Anthropic, a company known for its commitment to AI safety and transparency, has published groundbreaking research that aims to demystify how AI models make decisions. Their work, which can be likened to an fMRI of a human brain while it’s thinking, provides insights into the internal workings of AI models. This research, detailed in their latest publication on transformer circuits, could pave the way for safer AI systems. By understanding how models arrive at their conclusions, we can guide them towards more ethical and reliable outcomes. Anthropic’s approach exemplifies how transparency and rigorous scientific inquiry can contribute to building trust in AI technologies.


Dealing with these AI issues is going to take a team effort. Regulators, industry leaders, and researchers need to collaborate to make sure AI benefits everyone without causing chaos. With so much potential for misinformation, we may eventually need something like a VeriSign badge for trustworthy content: there will be so much junk online that only a small fraction of it will be real. Only by working together can we tap into AI’s potential while avoiding the pitfalls.
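To make the "trust badge" idea concrete, here is a minimal sketch of how a content badge could work in principle: a publisher binds a piece of content to a signing key, and anyone holding the verification key can later check that the content hasn't been altered. All names here (`issue_badge`, `verify_badge`) are hypothetical, and real provenance schemes such as C2PA use public-key signatures and signed metadata; HMAC with a shared key is used below only to keep the sketch to Python's standard library.

```python
import hashlib
import hmac

def issue_badge(content: bytes, key: bytes) -> str:
    """Return a hex 'badge' binding the content to the signer's key."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_badge(content: bytes, key: bytes, badge: str) -> bool:
    """Check that the content is unchanged since the badge was issued."""
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking where the match fails.
    return hmac.compare_digest(expected, badge)

key = b"publisher-secret-key"
article = b"Elon Musk is NOT hand-selecting 10,000 people for Mars."

badge = issue_badge(article, key)
print(verify_badge(article, key, badge))          # True: content intact
print(verify_badge(article + b"!", key, badge))   # False: content was altered
```

The design point is that the badge travels with the content: any edit, however small, invalidates it, so readers (or platforms) could filter for content whose badge still verifies.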

Greg Woolf is an accomplished innovator and AI strategist with over 20 years of experience in founding and leading AI and data analytics companies. Recognized for his visionary leadership, he has been honored as AI Global IT-CEO of the Year, received the FIMA FinTech Innovation Award, and was a winner of an FDIC Tech Sprint. Currently, he leads the AI Reg-Risk™ Think Tank, advising financial institutions, FinTech companies, and government regulators on leveraging AI within the financial services industry.