By Greg Woolf, AI RegRisk Think Tank
The potential impact of Generative AI on our social contract is well outlined in an April 2025 white paper titled The Intelligence Curse by Luke Drago and Rudolf Laine. The authors argue that AI could mint more wealth than any invention in history, yet—left unchecked—it may push most humans outside the economic loop, turning intelligence into an “unnatural” rent-extracting resource owned by a shrinking elite. Averting that outcome, they contend, will require a radically democratized, community-led governance model.
The New Extraction Economy
As frontier models mature, productivity skyrockets while payrolls shrink. Drago and Laine’s “pyramid replacement” narrative shows companies first freezing entry-level hiring, then lopping off entire rungs until only a handful of executives—or none—remain. Software agents are now starting to replace computer-based knowledge work, and next-generation robots are set to displace large segments of physical labor as well. Because this hybrid of digital and mechanical labor has near-zero marginal cost, value pools around whoever owns the “intelligence rigs”—labs, chip fabs, and hyperscale clouds. Historically, extraction economies redirect incentives toward asset owners and away from the broader population.
Marginalizing the Majority
Once AI outperforms humans at most valuable tasks, non-human factors—capital, compute, data—eclipse human labor as levers of power. The paper warns that citizens risk becoming subjects rather than players, much like populations in petro-states where resource revenue supplants taxes paid by individuals. In the United States, payroll taxes contribute roughly 35% of federal receipts; if AI rents replace those contributions, elected leaders could feel less accountable to the very voters they no longer tax.
The Student Overtakes the Master
What if an autonomous agent becomes so competent that human oversight only slows it down?
- Who owns intellectual-property rights over discoveries made without human input?
- Should a non-human entity enjoy legal identity—the ability to sign contracts, sue, or be sued?
- Ought an AI be allowed to own property or company shares, and, if so, under whose jurisdiction?
These once-academic questions are fast approaching the legislative docket.
AI-Only Companies
Picture a newly minted subsidiary inside a global retail conglomerate tasked with one mission: move every product from factory gate to customer doorstep without human decision-makers. Parent-company order data flows into a mesh of AI agents that:
- instruct warehouse robots to pick and pack items,
- negotiate real-time rates with trucking, rail, and air-cargo APIs,
- dispatch autonomous delivery fleets, and
- settle payments through smart-contract escrow.
Thousands of human vendors—drivers, port operators, maintenance crews—execute the instructions but may never realize every directive originates with software rather than a back-office manager.
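To make the scenario above concrete, here is a minimal sketch of such an agent pipeline. Everything in it is hypothetical—the `Shipment` record, the agent functions, and the rate table are illustrative stand-ins, not any real logistics or smart-contract API; in practice each step would call out to warehouse, carrier, and payment systems.

```python
from dataclasses import dataclass, field

@dataclass
class Shipment:
    order_id: str
    destination: str
    log: list = field(default_factory=list)

# Each "agent" is a plain function here; real agents would negotiate
# with external robot, carrier, and escrow services.
def pick_and_pack(s: Shipment) -> Shipment:
    s.log.append("packed")
    return s

def book_carrier(s: Shipment, rates: dict) -> Shipment:
    # Choose the cheapest quoted carrier from a (hypothetical) rate table.
    carrier = min(rates, key=rates.get)
    s.log.append(f"booked:{carrier}")
    return s

def settle_payment(s: Shipment) -> Shipment:
    s.log.append("settled")
    return s

def orchestrate(order_id: str, destination: str, rates: dict) -> Shipment:
    """Run the order through every agent with no human decision-maker."""
    s = Shipment(order_id, destination)
    for step in (pick_and_pack, lambda x: book_carrier(x, rates), settle_payment):
        s = step(s)
    return s

shipment = orchestrate("ORD-1", "Boston", {"rail": 120.0, "truck": 95.0})
print(shipment.log)  # ['packed', 'booked:truck', 'settled']
```

Note that the only human-readable trace is the shipment log each counterparty sees—exactly the thin “we followed the system” audit trail the next paragraph warns about.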
Now consider the dark flip side. If the AI company’s orchestration were corrupted—by a bad update, data poisoning, or an outright hostile takeover—it could reroute containers to sanctioned entities, embed contraband in legitimate shipments, or launder payments to terror networks. Counterparties would become unwitting accomplices, their compliance logs showing only that they “followed the system.” Who is liable when the system itself is the criminal actor, and how quickly could humans even detect the breach?
Breaking the Curse: Avert → Diffuse → Democratize
Drago and Laine propose a three-part strategy to keep humanity in the loop:
- Avert catastrophic misuse with technical safety measures—bio-threat screening, alignment sandboxes, verifiable model evaluations—so regulators are not forced into heavy-handed centralization.
- Diffuse human-augmenting tools widely, aligning AI to individual users so economic value remains tied to people, not platforms.
- Democratize institutions by banning AI from owning assets or board seats and by channeling excess rents into citizen wealth funds and participatory audits.
Community-Led Commons
Avoiding an AI oligarchy requires collective effort now. Wealth managers, technologists, regulators, and civil-society leaders could:
- Publish open safety and audit standards under permissive licenses, allowing organizations of any size to adopt and extend them.
- Draft a cross-industry charter committing signatories to the Avert–Diffuse–Democratize principles across data ownership, agent alignment, and revenue sharing.
- Rotate oversight councils and convene citizen assemblies to review high-impact systems, ensuring no single entity becomes the gatekeeper of intelligence.
Democratized, self-regulating governance is not a luxury—it is the only reliable antidote to the intelligence curse. History is still ours to write; the question is whether we choose pluralism over monopoly before the machines no longer need to ask.
Conclusion
The Intelligence Curse does not predict a descent into digital feudalism; it issues a warning—and a roadmap. Whether AI deepens inequality or broadens prosperity hinges on choices we make right now: embracing transparent standards, shared oversight, and broad-based participation. The coming decade will test our ability to reinvent representative capitalism fast enough to keep humans in the value chain. Our task is to ensure that, in this partnership with machines, we remain co-authors of the future rather than footnotes to it.
Greg Woolf is an accomplished innovator and AI strategist with over 20 years of experience in founding and leading AI and data analytics companies. Recognized for his visionary leadership, he has been honored as AI Global IT-CEO of the Year, received the FIMA FinTech Innovation Award, and was a winner of an FDIC Tech Sprint. Currently, he leads the AI Reg-Risk™ Think Tank, advising financial institutions, FinTech companies, and government regulators on leveraging AI within the financial services industry. https://airegrisk.com