AI REGS & RISK: Industry-wide Insights on AI


By Greg Woolf, AI RegRisk Think Tank

We recently moderated a roundtable at an AI for Executives conference, an event that attracted over 3,000 attendees eager to explore AI’s potential for business. Instead of the usual lecture format, the organizers opted for a roundtable to engage participants from a range of regulated industries in a lively discussion of their expectations and concerns about AI in regulated environments.

With more than three times the expected turnout, it was clear we had struck a chord with professionals from wealth management, insurance, asset management, healthcare, government, and even nuclear power. Diverse as the participants were, they shared a common goal: figuring out how to leverage AI across their sectors while navigating a complex landscape of regulations.

Ethical Considerations and Model Accuracy

Ethical concerns, especially regarding fairness and privacy, dominated the roundtable discussions. Participants shared ongoing initiatives to develop robust model risk management frameworks, illustrating how these efforts align with regulatory expectations to keep models transparent and accurate.

Cybersecurity Concerns

As AI becomes increasingly embedded in core operational processes, cybersecurity has surfaced as a paramount concern. Participants discussed the vulnerabilities inherent in AI systems, such as risks to data privacy and the potential for model poisoning, highlighting the urgent need for robust protective measures.

Learning from Health Insurance: A Governance Blueprint

The head of Responsible AI at a prominent insurance company detailed their proactive strategy for safeguarding AI deployments within the rapidly evolving compliance landscape. Key components of their governance blueprint include:

  1. Developing risk management frameworks linked to existing model governance.
  2. Monitoring legislative changes to ensure continuous compliance.
  3. Collaborating with legal and procurement teams to manage contracts and ensure adherence to laws and ethical standards.
  4. Conducting rigorous vetting of third-party AI vendors.
  5. Regularly testing AI models for biases to maintain ethical standards.
  6. Implementing strict protocols to prevent inappropriate content from generative AI applications.
  7. Applying tiered security monitoring tailored to specific AI use cases to enhance cybersecurity measures.
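To make item 5 concrete, bias testing is often automated with fairness metrics such as the widely used "demographic parity gap": the difference in approval rates between demographic groups. The sketch below is purely illustrative and not from the insurer's actual program; all function names, data, and the alert threshold are assumptions.

```python
# Minimal sketch of an automated bias check using the demographic parity
# gap: the spread in approval rates across groups. Names, data, and the
# threshold are illustrative, not the insurer's actual implementation.

def selection_rates(outcomes, groups):
    """Approval rate per group; outcomes are 1 (approved) / 0 (denied)."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def parity_gap(outcomes, groups):
    """Largest difference in approval rates between any two groups."""
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

# Example: model decisions for applicants tagged with group A or B.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = parity_gap(outcomes, groups)
THRESHOLD = 0.2  # hypothetical policy threshold for triggering review
if gap > THRESHOLD:
    print(f"ALERT: parity gap {gap:.2f} exceeds threshold {THRESHOLD}")
```

In practice, a check like this would run on a schedule against production decisions, with alerts routed to the governance team described above; which metric and threshold are appropriate depends on the use case and applicable regulation.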

Unique Challenges for Mission Critical Infrastructure

A nuclear power industry expert emphasized the importance of air-gapped systems, explaining, “If anything goes wrong in a power plant, people are going to notice.” This setup ensures operational integrity and the security of critical data, underscoring the need for transparent and reproducible models.

Government Insights on Precision and Protection

Representatives from federal and state agencies underscored the importance of precision and robust security in AI applications. They discussed the critical role of detailed architecture planning and internal model evaluations, which are essential for maintaining safety and compliance in government AI projects.



As the discussion concluded, it was universally acknowledged that AI’s role is both transformative and inevitable. Despite potential risks, the consensus was clear: the cost of not deploying AI could outweigh the challenges, impacting business competitiveness and customer satisfaction across industries.

To explore these topics further, we’ll be co-hosting an event with the Boston RegTech Group titled “Safeguarding AI for Financial Services Adoption.” Join us on May 16th at 5:30 PM in the Boston office of Ernst & Young to discuss secure and effective AI implementation in financial services.

Greg Woolf Bio

Greg is an accomplished innovator and AI strategist with over 20 years of experience in founding and leading AI and data analytics companies. Recognized for his visionary leadership, he has been honored as AI Global IT-CEO of the Year, received the FIMA FinTech Innovation Award, and was a winner of an FDIC Tech Sprint. Currently, he leads the AI Reg-Risk™ Think Tank, advising financial institutions, FinTech companies, and government regulators on leveraging AI within the financial services industry.