By Greg Woolf, AI RegRisk Think Tank
The Senate’s 99-1 vote on July 1 to strip a ten-year freeze on new state AI laws from the “One Big Beautiful Bill” was billed as a win for innovation. By dawn, the floodgates were open: every statehouse is now free to legislate at will, and policy trackers count well over a thousand AI bills in play this year.
For wealth management firms leveraging AI for everything from robo-advisory and portfolio allocation to client risk-profiling and personalized marketing, this fractured landscape creates immediate and complex challenges. A single AI-driven financial planning tool might now face competing disclosure and bias audit requirements across state lines.
The Moratorium: Smart Idea, Doomed Execution
Venture investors and national-security hawks backed the pause to spare fast-moving AI firms from fifty clashing rulebooks while Congress wrote a single federal standard. Three forces sank the effort:
- States’-rights backlash – Thirty-seven attorneys general warned the ban would wipe out child-safety and deep-fake laws already on the books.
- Self-inflicted wound – Tennessee’s Senator Marsha Blackburn realized her own ELVIS Act, protecting musicians’ voices, would be frozen; she drafted the amendment that killed the moratorium.
- Populist megaphone – Steve Bannon blasted the compromise version as “tech oligarch amnesty,” collapsing last-minute talks.
The Lone “No” Vote
The lone dissent in the 99-1 tally came from retiring Senator Thom Tillis of North Carolina, who had promised to oppose every amendment on procedural grounds. His vote was a blanket protest against the spending package, not a stand on AI policy.
The Stampede Is Real
Every state, plus D.C. and two territories, now has at least one AI bill filed for 2025. Twenty-eight states have enacted seventy-plus measures this year. Three new “comprehensive” acts already demand attention—Texas’s TRAIGA, Colorado’s SB 205, and Utah’s AI Policy Act—while single-issue laws such as Tennessee’s voice-clone ban and Montana’s “Right to Compute” add more layers.
A single chatbot for client onboarding might need Colorado impact audits to check for algorithmic bias in accreditation checks, Texas “intent” affidavits for its automated investment suggestions, Utah bot disclosures to clarify it isn’t a human advisor, and a Tennessee voice license if its text-to-speech mimics a real person’s voice. Each obligation runs on its own timeline.
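One way to keep that patchwork tractable is to treat each state’s obligations as data rather than prose, so a tool’s feature set can be matched against every duty it triggers. Here is a minimal sketch in Python; the requirements, feature tags, and deadlines below are illustrative placeholders, not legal guidance:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Obligation:
    """One state-level compliance duty that an AI feature can trigger."""
    state: str
    requirement: str       # e.g. "impact audit", "bot disclosure"
    applies_to: set        # feature tags that trigger the duty
    next_deadline: date    # when the next audit/filing is due

# Illustrative entries only; real duties and dates come from counsel.
OBLIGATIONS = [
    Obligation("CO", "algorithmic-bias impact audit", {"accreditation_check"}, date(2026, 2, 1)),
    Obligation("TX", "intent statement for automated advice", {"investment_suggestions"}, date(2026, 1, 1)),
    Obligation("UT", "bot disclosure: not a human advisor", {"chat"}, date(2025, 5, 1)),
    Obligation("TN", "voice-likeness license", {"text_to_speech"}, date(2025, 7, 1)),
]

def duties_for(feature_tags: set) -> list:
    """Every obligation the tool's features trigger, soonest deadline first."""
    hits = [o for o in OBLIGATIONS if o.applies_to & feature_tags]
    return sorted(hits, key=lambda o: o.next_deadline)

# A chatbot that onboards clients, checks accreditation, suggests portfolios, and speaks:
for duty in duties_for({"chat", "accreditation_check", "investment_suggestions", "text_to_speech"}):
    print(duty.next_deadline, duty.state, duty.requirement)
```

Sorting by deadline makes the “different timelines” problem visible at a glance, and adding a new state becomes a data entry rather than a code change.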
Centralized vs. Decentralized: A Double-Edged Sword
Centralized (Federal) Approach
Why it helps: One rulebook cuts compliance costs, ends forum shopping, and gives U.S. companies a level playing field against China and the EU.
Why it’s risky: If Congress stalls, a legal vacuum invites abuse. Worse, a single rushed statute could bake in unintended biases or lock today’s tech assumptions into law for decades.
Decentralized (State) Approach
Why it helps: Fifty laboratories let lawmakers iterate fast, reflect local values, and pressure-test ideas before a national rollout. More eyes and expertise can surface edge cases Congress might miss.
Why it’s risky: Fragmentation breeds conflicting definitions of “harm,” overlapping audits, and compliance whiplash—especially for firms that cross state lines daily. A race to the bottom is possible if one state offers lax rules to lure business.
The tightrope: America now needs a coordinated harmonization layer—cross-walks that translate one state’s requirements into another’s, plus a light federal “floor” that blocks the worst abuses without freezing innovation.
The SEC and FINRA Aren’t Waiting
While the legislative branch grapples with a federal standard, financial regulators are already tightening the screws on AI use. In its 2025 examination priorities, the SEC has made it clear it will focus on how investment advisers use AI in their operations, specifically reviewing for adherence to fiduciary standards of conduct. This means ensuring that AI models used for portfolio management, trading, or marketing do not create conflicts of interest that place the firm’s interests ahead of clients’.
Similarly, FINRA’s 2025 Regulatory Oversight Report highlights AI as a key area of focus, urging firms to establish robust governance and supervision frameworks. FINRA expects firms to be responsible for all AI-generated communications and to ensure they are fair, accurate, and properly recorded. This regulatory focus on fiduciary duty and conflicts of interest in the context of AI is a significant federal “floor” that exists independently of the state-by-state legislative push.
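FINRA’s expectation that AI-generated communications be fair, accurate, and properly recorded suggests a record-then-send pattern: archive the draft, with provenance, before anything reaches a client. The sketch below is a hypothetical illustration of that pattern, not FINRA-prescribed tooling; the function name, fields, and storage are all assumptions:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_ai_communication(archive: list, client_id: str, draft: str, model_id: str) -> str:
    """Archive an AI-generated client message before it is sent.

    `archive` stands in for an append-only (WORM) store; a plain list
    keeps the sketch runnable.
    """
    entry = {
        "client_id": client_id,
        "model_id": model_id,                    # which model produced the draft
        "content": draft,
        "sha256": hashlib.sha256(draft.encode()).hexdigest(),  # tamper-evidence
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "supervision_status": "pending_review",  # a principal still has to approve
    }
    archive.append(json.dumps(entry))            # record first ...
    return entry["sha256"]                       # ... then the caller may send

archive = []
record_ai_communication(archive, "client-042",
                        "Your portfolio has drifted 3% from its target allocation.",
                        "advisor-llm-v2")
```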
Strategic Takeaways for DWN Readers
- Operationalize AI Governance. Now is the time to build a flexible, expansive AI governance and compliance program. That means moving beyond static checklists to a dynamic program that can adapt as the landscape shifts, embedding state-specific requirements, such as Colorado-style impact audits or Texas intent statements, into the entire AI model lifecycle—from design and testing to deployment and monitoring (see the sketch after this list).
- Budget for legal arbitrage. Launch pilots in business-friendly jurisdictions with regulatory sandboxes, like Texas’s thirty-six-month program, to refine models. Then, pressure-test those models against the stricter audit and transparency requirements of states like Colorado before a full-scale nationwide rollout.
- Turn risk into a moat. FinTechs and wealth platforms that master this patchwork first will hold a durable lead when rivals scramble to retrofit controls. An advisory firm that can prove its AI model for client risk tolerance passes the toughest state fairness audits will win business from rivals relying on less rigorous approaches.
- Watch Washington’s next move. Any future federal bill will likely have to “meet or beat” the strongest existing state protections to gain support. By aligning your AI governance policies with the most stringent state requirements today—particularly around transparency and bias audits—you are effectively future-proofing your business against tomorrow’s federal mandates.
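To make the first takeaway concrete: one way to embed state-specific requirements into the model lifecycle is promotion gates, where a model advances to the next stage only when the evidence that stage requires exists. The stage names and checks below are illustrative assumptions, not the actual statutory tests:

```python
# Each gate inspects a model card (a dict of compliance evidence) and
# returns True when its requirement is satisfied. The checks are
# illustrative stand-ins for state-specific obligations.
LIFECYCLE_GATES = {
    "design": [lambda card: "intended_use" in card],                  # TX-style intent statement
    "testing": [lambda card: card.get("bias_audit_passed") is True],  # CO-style impact audit
    "deploy": [lambda card: card.get("bot_disclosure") is True],      # UT-style disclosure wired in
    "monitor": [lambda card: "drift_alerting" in card],               # ongoing supervision evidence
}

def promote(card: dict, to_stage: str) -> None:
    """Block promotion until every gate for the target stage passes."""
    unmet = [gate for gate in LIFECYCLE_GATES[to_stage] if not gate(card)]
    if unmet:
        raise RuntimeError(f"promotion to {to_stage!r} blocked: {len(unmet)} gate(s) unmet")

card = {"intended_use": "client risk profiling", "bias_audit_passed": True}
promote(card, "design")    # passes
promote(card, "testing")   # passes
# promote(card, "deploy")  # would raise: no bot_disclosure evidence yet
```

The point is structural: compliance evidence travels with the model, so a missing audit blocks deployment instead of surfacing later in an examination.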
Chaos, yes—but also opportunity. Teams that make compliance a dynamic, AI-assisted program will outrun both regulators and rivals while the United States decides whether it wants one rulebook or fifty.
Greg Woolf is an accomplished innovator and AI strategist with over 20 years of experience in founding and leading AI and data analytics companies. Recognized for his visionary leadership, he has been honored as AI Global IT-CEO of the Year, received the FIMA FinTech Innovation Award, and was a winner of an FDIC Tech Sprint. Currently, he leads the AI Reg-Risk™ Think Tank, advising financial institutions, FinTech companies, and government regulators on leveraging AI within the financial services industry. https://airegrisk.com