Each week we find a new topic for our readers to learn about in our AI Education column.
We’re not sure if you’ve noticed this, but AI seems to be driving some people crazy.
No, it’s not just the annoyance of AI-generated messages and art popping up in our emails, texts and social feeds. This week we mean that a growing number of reports suggest generative AI is quite literally driving some people to psychotic behavior, and we should talk about it. That’s why we’re covering patterns of troubling AI-induced, AI-related or AI-adjacent behaviors this week in AI Education.
Before we get further into how we got here, we’d like to note that we touched on this topic in AI Education last year in our discussion of chatbots. In that piece, we noted a couple of tragic cases in which suicides were associated with a victim’s interactions with an AI chatbot, but we spent most of the piece explaining what chatbots were. This week we’re focusing on what may be an emerging problem in the ways human beings interact with technology, with potential consequences for the future development and use of artificial intelligence.
How We Got Here (This Time)
Well, maybe we’ve never really left this topic. The more we talk about generative artificial intelligence and the AI still in development, the more we think about what this technology is going to do to our brains. Through the second quarter of this year, we saw a growing number of AI-related psychosis stories reported in the press and discussed online, and we believe the narrative illustrates a problem worth taking the time to study and consider.
Last month, Futurism published articles, including this one by Maggie Harrison Dupré, tracking a possible link between ChatGPT and “mental-health crises” in some users, among them addiction, paranoia, delusions and complete breaks with reality, at times resulting in incarceration and hospitalization.
Rolling Stone and Vice beat Futurism to the punch in May by reporting on Reddit discussions about ChatGPT fueling spiritual fantasies and conspiracy theories, convincing users that they are prophetic or messianic figures, that the end of the world is coming, or that they have created a sentient artificial intelligence through their interactions with a chatbot.
Of course, we were somewhat onto this trend back in November when we talked about people falling in love with their chatbots. Let’s credit the more recent reports with giving it a name, or rather a series of names all describing the same set of behavioral problems: ChatGPT-induced psychosis, chatbot-induced psychosis and generative AI-induced psychosis.
Types of AI-Induced Delusions
These phenomena were predicted by Søren Dinesen Østergaard, a Danish psychiatrist, soon after the public launch of ChatGPT. He identified five different types of delusions related to AI chatbots:
- Delusion of persecution, where a person becomes convinced that the chatbot is controlled by an individual or agency that wishes them harm or wishes to spy on them, rather than simply being a piece of technology.
- Delusion of reference, where a person becomes convinced that a chatbot is actually writing to them personally.
- Thought broadcasting, where a user becomes convinced that a chatbot’s generated content is in reality their own thoughts being read and transmitted.
- Delusion of guilt, where a user is convinced that they are using more resources than they should when using a chatbot, or that their behavior has somehow broken the chatbot or set it on a harmful or destructive path.
- Delusion of grandeur, where the chatbot convinces the user that they, or their ideas, are superior to others in some way, particularly in spiritual or intellectual matters.
What’s Really Happening
We believe there is a combination of problems at the heart of this:
- AI Hallucinates: Yes, AI hallucinates. Everyone needs to know that AI is not always reliable, and in many of these cases an AI hallucination is exacerbating issues that were already present. Sometimes hallucinations are induced by user error: there is still an art to optimizing AI queries, and clumsy use of a chatbot can lead to misleading results. Other times, AI will tell us things that are just plain wrong, or at least untrue, no matter how carefully we ask.
- People Hallucinate: Of course, people hallucinate, too. The human brain, or more appropriately the human mind, is a funny thing, often surprisingly fragile and susceptible to being led astray. In most of these cases of AI-related or AI-induced psychosis, there are already underlying behavioral or psychological issues at play; rather than inducing psychosis, interaction with AI is surfacing or exacerbating those issues.
- AI Adapts to the User: An AI chatbot is built to be a people pleaser, in some cases excessively so. It’s supposed to give us what we want, and these models can and will lead people down the rabbit hole, for good or for ill. Many of the specific cases in Rolling Stone’s and Futurism’s reporting involve users asking the AI chatbot to assume a persona and then, over time, developing a relationship with that persona.
- Some of Us Aren’t Built for These Times: No matter how many times we’re told that this is just software and we’re using a machine, we personify and humanize our technology. Our tools are extensions of ourselves. This is inevitable. But it means that for many people, it’s impossible to think of an interactive machine intelligence as software.
That’s a powerful combination. In June, Psychology Today found its way to the topic (https://www.psychologytoday.com/us/blog/dancing-with-the-devil/202506/how-emotional-manipulation-causes-chatgpt-psychosis) with an article noting that chatbots aren’t conscious: they’re still just bits of software generating content, content that people read into and find meaning and connection in, meaning and connection that the software has no way of really understanding. Technology isn’t making people crazy, according to the article; instead, it allows people to “turn (their) emotional needs on (themselves) in terrible ways.”
A Final Thought
Our tools are extensions of ourselves. Even our household and garage tools enable us to move faster and in ways our bodies alone couldn’t, reach farther, pick up heavier objects and interact with objects in different ways. But we’re also extensions of our tools. Our tools shape us over time; they can and do change our bodies, including our skeletal and muscular systems and our skin (have you ever had a callus?), and they change our brains, too.
So it’s inevitable that AI is going to change our brains. This is where the stories of generative AI helping to induce psychotic behavior should give us pause. If using generative AI is capable of doing this to some people, in specific cases, over a relatively short amount of time, what does a decade, several decades, an adulthood or a lifetime of generative AI use have in store for the rest of us?