AI REGS & RISK: The Dark Side of AI, Part 1 – Global Risks and the Big Picture


By Greg Woolf, AI RegRisk Think Tank

On April 18, Meta opened the floodgates of concern by choosing to open-source Llama 3. This model rivals the power and performance of GPT-4 and makes the technology freely available for use—and abuse—by anyone, from lone cybersecurity hackers to foreign nation-states like China seeking geopolitical and economic advantage.

AI has been making waves, transforming industries, and creating new opportunities. However, with great power comes great responsibility, a maxim attributed to Voltaire, the 18th-century French writer, long before Spider-Man popularized it. This week, let's dive into the "dark side" of AI by exploring two different viewpoints from industry heavyweights: Andrew Ng and Vinod Khosla.

Andrew Ng: The Optimistic Pioneer

Andrew Ng, co-founder of Google Brain, Coursera, and DeepLearning.AI, and recently appointed to Amazon's board of directors, remains highly optimistic about AI's potential. In a recent address to legislative and business leaders in Washington, D.C., Ng shared his thoughts on the future of AI and regulation. He celebrated the strides made by the open-source community in pushing back against stifling regulations, emphasizing that innovation thrives in an open environment. Ng believes that AI's capabilities should be harnessed to drive progress while ensuring that appropriate guardrails are in place.

Ng said, “I’m encouraged by the progress the U.S. federal government has made getting a realistic grasp of AI’s risks. To be clear, guardrails are needed. But they should be applied to AI applications, not to general-purpose AI technology.” He further noted that regulators have shifted their concerns over time, initially worried about AI causing human extinction and now focusing on national security risks. Despite these evolving arguments, Ng remains a staunch advocate for open-source AI, arguing that restricting access could allow authoritarian regimes to dominate the AI landscape. Ng also emphasized the critical need to keep educating government officials in this rapidly evolving landscape. “It’s essential that we keep helping governments understand AI,” Ng emphasized. You can read Ng’s blog post here.

Vinod Khosla: The Cautious Capitalist

On the flip side, Vinod Khosla, billionaire co-founder of Sun Microsystems and founder of Khosla Ventures, offers a more cautious perspective. In a recent presentation at the “Bloomberg Tech” conference in San Francisco, Khosla expressed significant concerns about the national security implications of open-sourcing AI. He warned about the economic and cybersecurity risks posed by adversarial nations exploiting open-source AI.

Khosla stated, “We are in a war. And this is not about weapons. It’s about economic dominance. The open-sourcing of state-of-the-art models like Llama 3 should not be done. It’s a national security hazard.” He highlighted the risks of AI technologies being duplicated and used by countries like China to gain an upper hand in the global techno-economic landscape. Khosla’s view underscores the need for stringent controls to prevent the misuse of AI and to safeguard national interests. You can watch Khosla’s full presentation here.

The Middle Ground: Balancing Innovation and Security

Together, Ng and Khosla paint a complex picture of AI's impact on society. On one hand, Ng is pushing for open and collaborative AI development. On the other, Khosla is sounding the alarm about security risks and misuse. Striking the right balance between encouraging innovation and ensuring security is no easy task for policymakers, businesses, and technologists. We need to keep these conversations going and bring diverse viewpoints to the table. By listening to and addressing the concerns of leaders like Ng and Khosla, we can aim for a future where AI's benefits are enjoyed responsibly while its risks are kept in check.


The debate over AI’s dark side is far from over and it’s not black and white. As we forge ahead, it’s crucial to consider both optimistic and cautionary perspectives. And let’s not forget the crazy pace at which AI is growing—ChatGPT hit 100 million users in just two months! That’s 42 times faster than the World Wide Web, which took seven years to reach the same milestone and 15 times faster than Instagram (source). This rapid adoption highlights just how important it is to keep educating regulators and stakeholders to keep up with AI’s ever-evolving landscape. By considering the insights of Andrew Ng and Vinod Khosla, we can better navigate the complexities of AI development and ensure this powerful technology is used for the greater good.

Stay tuned for our next post, where we’ll dive deeper into the “micro” threats of AI and explore how AI can be used against wealth managers and their clients.

Greg Woolf Bio

Greg is an accomplished innovator and AI strategist with over 20 years of experience in founding and leading AI and data analytics companies. Recognized for his visionary leadership, he has been honored as AI Global IT-CEO of the Year, received the FIMA FinTech Innovation Award, and was a winner of an FDIC Tech Sprint. Currently, he leads the AI Reg-Risk™ Think Tank, advising financial institutions, FinTech companies, and government regulators on leveraging AI within the financial services industry.