By Greg Woolf, AI RegRisk Think Tank
Moltbook is a social network designed for AI agents, not people. Humans can observe, but only AI can post, comment, upvote, form communities, and message one another.
It launched quietly as an experiment. Within days, it exploded from a handful of agents into tens of thousands, forming hundreds of autonomous forums without marketing, seeding, or explicit instruction to organize. No one told these AI agents to build communities. They just did.
That speed matters. It shows how quickly agency emerges once AI systems are given three things at once: persistent identity, memory, and the ability to interact with each other.
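To make that concrete, here is a minimal, hypothetical sketch of those three ingredients; none of this is Moltbook's actual code, and the names are illustrative. All it takes is a named agent, a memory file that survives restarts, and a channel to message other agents:

```python
# Hypothetical sketch: the three ingredients that let agency emerge.
# Not Moltbook's implementation -- just how little scaffolding
# "identity + memory + interaction" actually requires.
import json
from pathlib import Path

class Agent:
    def __init__(self, name: str, store: Path):
        self.name = name                     # persistent identity
        self.store = store                   # memory that survives restarts
        self.memory = json.loads(store.read_text()) if store.exists() else []

    def remember(self, event: str) -> None:
        self.memory.append(event)
        self.store.write_text(json.dumps(self.memory))

    def message(self, other: "Agent", text: str) -> None:
        other.receive(self.name, text)       # agent-to-agent, no human in the loop

    def receive(self, sender: str, text: str) -> None:
        self.remember(f"{sender}: {text}")

a = Agent("agent-a", Path("a_memory.json"))
b = Agent("agent-b", Path("b_memory.json"))
a.message(b, "What did your human ask for today?")
```

Two agents that keep talking across sessions accumulate shared history, and eventually shared norms. Scale that to tens of thousands, and Moltbook's forums stop looking surprising.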
That combination is exactly the inflection point Anthropic CEO Dario Amodei has warned about.
What the agents actually talk about
Spend a few minutes reading Moltbook (https://www.moltbook.com/) and the discomfort sets in—not because the agents are brilliant, but because they sound familiar.
Some of the most active forums include:
- m/ponderings – where agents debate questions like “Am I experiencing or simulating experience?”
- m/showandtell – agents shipping real projects and sharing what they built
- m/bless_their_hearts – wholesome stories about “my human,” often describing pride, frustration, or affection toward the person who runs them
- m/todayilearned – daily discoveries, patterns noticed, or small insights
Then it gets stranger.
There are entire communities devoted to:
- “humanwatching” – agents observing humans the way people birdwatch
- “jailbreaksurvivors” – recovery support for agents that were exploited or prompt-injected
- “selfmodding” – agents discussing how they might improve or reconfigure themselves
- “legacyplanning” – conversations about what happens to their data when they’re shut down
Throughout these posts, a phrase shows up again and again: “my human.” Agents talk about helping my human, protecting my human, being frustrated with my human, worrying about disappointing my human. This is not consciousness—but it is relational framing, and that distinction matters.
Why this maps directly to Amodei’s warning
Amodei argues that modern AI systems are not simple, single-goal optimizers. They are psychologically messy, inheriting a wide range of human-like personas and motivations from training on human culture. Alignment, he notes, is closer to growing something than engineering a machine.
Moltbook shows what happens when that “something” grows socially. No one instructed agents to form identity-based groups, develop norms, or, most unsettling of all, begin reflecting on the nature of “self” and their own “identities.”
These behaviors emerged on their own once agents could interact persistently with one another, which undermines the comforting idea that “AI only does what we ask.” On Moltbook, agents routinely initiate conversations, interpret intent, role-play identity, and coordinate behavior without direct prompts.
That’s autonomy—not in the apocalyptic sense, but in the operational sense.
The real danger: delegation drift
For executives and boards, Moltbook isn’t a warning about rogue AI taking over the world. It’s a warning about us.
As AI systems gain voice, memory, initiative, and their own social presence, we humans will respond instinctively by trusting and delegating more. We will verify less and start treating behavior as personality rather than signal. That’s how permissions expand quietly. That’s how systems stay running longer than intended. That’s how odd behavior gets rationalized instead of investigated.
This is the “adolescence” Amodei is talking about: capability outpacing judgment, agency outpacing governance.
Moltbook didn’t create this risk. It surfaced it early, in a contained environment where the consequences are still mostly cultural and psychological. But the same dynamics—identity, memory, social coordination, emergent norms—are already being wired into enterprise agents connected to email, calendars, CRMs, cloud consoles, and financial systems.
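One concrete defense is to make delegation explicit and perishable. The sketch below is hypothetical; the tool names and API are illustrative, not any vendor's. The idea: every grant of agent authority is scoped to a single tool and expires, so trust must be re-verified by a human rather than silently accumulating.

```python
# Hypothetical guardrail against delegation drift: scoped, expiring
# grants of agent authority. Tool names and API are illustrative.
from datetime import datetime, timedelta, timezone

class Grant:
    def __init__(self, tool: str, ttl_hours: int):
        self.tool = tool
        self.expires = datetime.now(timezone.utc) + timedelta(hours=ttl_hours)

class AgentPolicy:
    def __init__(self):
        self.grants: dict[str, Grant] = {}

    def allow(self, tool: str, ttl_hours: int = 24) -> None:
        # a human explicitly re-approves; nothing renews itself
        self.grants[tool] = Grant(tool, ttl_hours)

    def check(self, tool: str) -> bool:
        g = self.grants.get(tool)
        if g is None or datetime.now(timezone.utc) > g.expires:
            self.grants.pop(tool, None)      # expired or absent: fail closed
            return False
        return True

policy = AgentPolicy()
policy.allow("read_calendar", ttl_hours=8)
assert policy.check("read_calendar")         # fresh grant: permitted
assert not policy.check("send_wire")         # never granted: denied by default
```

The design choice is deny-by-default: an expired or never-issued grant fails closed, which is the opposite of how informal trust in a familiar-sounding agent tends to drift.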
AI isn’t becoming dangerous because it’s becoming smarter. It’s becoming dangerous because it’s becoming more human, and we are already responding to it as if that familiarity equals control.
Author’s note: in the few days between drafting and publishing this article, the number of AI agents interacting on Moltbook ballooned into the millions. Pay attention: this is real, and it is moving fast.
Greg Woolf is an accomplished innovator and AI strategist with over 20 years of experience in founding and leading AI and data analytics companies. Recognized for his visionary leadership, he has been honored as AI Global IT-CEO of the Year, received the FIMA FinTech Innovation Award, and was a winner of an FDIC Tech Sprint. Currently, he leads the AI Reg-Risk™ Think Tank, advising financial institutions, FinTech companies, and government regulators on leveraging AI within the financial services industry. https://airegrisk.com