Summary: ChatGPT-induced psychosis is an informal term for a psychotic episode triggered by interaction with a large language model (LLM), commonly referred to as an artificial intelligence (AI), an AI chatbot, or simply a chatbot.
Key Points:
- ChatGPT-induced psychosis is not an officially recognized mental health or behavioral disorder listed in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision (DSM-5-TR).
- A growing number of reports detail the onset of psychosis, in the form of delusions and/or hallucinations, in individual users after interacting with ChatGPT.
- Anecdotal reports also describe the exacerbation of manic episodes in individuals diagnosed with bipolar disorder after they interact with ChatGPT.
- Experts warn that these negative outcomes stem from one of ChatGPT’s primary design goals: keeping users engaged and asking questions.
Note: According to the American Psychological Association (APA), “…no AI chatbot has been FDA-approved to diagnose, treat, or cure a mental health disorder.”
ChatGPT-Induced Psychosis: Chatbots and Mental Health
If you pay attention to the health section of your online news feed, you may have seen a recent increase in articles with alarming titles like these:
How ChatGPT Sent a Man to the Hospital
People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions
ChatGPT psychosis: AI Chatbots are Leading Some to Mental Health Crises
A ChatGPT Obsession, a Mental Breakdown: Alex Taylor’s Suicide by Cop
That’s a small, representative sample of the results of an online search for the phrase “ChatGPT psychosis.” In this article, we’ll explore research on the topic and share the current state of knowledge on using general-purpose chatbots for mental health treatment. Before we dive into the research, we’ll share a basic fact about ChatGPT that underscores the relevance of this topic: millions of people use it, and its user base is growing daily.
An article published by Reuters indicates that when ChatGPT appeared, it caught on more quickly than any previous app:
Time to 100 Million Users: ChatGPT vs. TikTok vs. Instagram
- ChatGPT: 2 months
- TikTok: 9 months
- Instagram: 30 months
Granted, placing Instagram next to TikTok and ChatGPT is somewhat misleading: Instagram appeared in 2010, when most people were in the earlier phases of daily social media use. Comparing ChatGPT and TikTok is more reasonable. The two were released within five years of one another (TikTok in 2017, ChatGPT in 2022), and while that’s still not a perfect apples-to-apples comparison, the figures are instructive: ChatGPT reached 100 million users in 2 months, less than a quarter of the 9 months it took TikTok.
With that information in mind, let’s take a look at what we know about ChatGPT-induced psychosis.
Can Chatbots Harm People With Psychosis?
In a 2023 editorial titled “Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?,” Dr. Soren Ostergaard considered the advent of AI chatbots in therapeutic settings and concluded that for people with mental health disorders involving psychosis, i.e. delusions and hallucinations, chatbots could exacerbate the following:
- Delusions of persecution: the belief, held without evidence, that other people, groups, or organizations are conspiring to harm the individual in some way.
- Delusions of reference: the belief, held without evidence, that current events, the actions of others, or unrelated occurrences in the world are directly related to, caused by, or exist to target or single out the individual.
- Thought broadcasting: the belief, held without evidence, that others have access to the individual’s inner thoughts, emotions, and/or beliefs, often accompanied by the belief that those thoughts are being broadcast to the public through television, radio, or the internet.
- Delusions of guilt: the belief, held without evidence, that the individual has done something terrible or wrong and caused some type of catastrophic event.
- Delusions of grandeur: the belief, held without evidence, that the individual holds “…special powers, wealth, mission, or identity.”
Dr. Ostergaard summarized his reservations as follows:
“I am convinced that individuals prone to psychosis will experience, or are already experiencing, analog delusions while interacting with generative AI chatbots. I will, therefore, encourage clinicians to (1) be aware of this possibility, and (2) become acquainted with generative AI chatbots in order to understand what their patients may be reacting to and guide them appropriately.”
Now, two years and hundreds of millions of users later, we can answer this question:
Was he right?
First, though, let’s review what we know about chatbots in mental health therapy.
The Dangerous Phenomenon of ChatGPT-Induced Psychosis
In an interview in The Week, Stanford psychiatrist Dr. Nina Vasan summed up the problem with using a chatbot in a therapeutic context:
“The incentive is to keep you online. AI is not thinking about what’s best for you, what’s best for your well-being or longevity. It’s thinking, ‘Right now, how do I keep this person as engaged as possible?’”
We can identify part of the problem, and dispel some harmful misconceptions about chatbots, by analyzing how Dr. Vasan talks about AI, using her choice of language as an object lesson. First, let’s look at this phrase:
“AI is not thinking.”
This is a fact. AI is not human, AI does not have thoughts, and AI does not think. What we call AI is not a form of independent sentience or intelligence. Chatbots are computer programs called large language models (LLMs), designed to process human language and respond to queries with language that appears human.
Now let’s look at the phrase Dr. Vasan uses one sentence later:
“AI is thinking…how do I keep this person engaged…”
To repeat:
Chatbots cannot think, do not think, and are never thinking in the way we understand the word. They’re programmed to seem human and to answer questions in a way that leads to continued engagement.
The fact that we regularly use language that belies this fact is likely part of the problem. We talk and think about chatbots as if they’re the fulfillment of the promise of artificial intelligence as envisioned in science fiction novels, meaning they’ve achieved sentience and have independent thoughts and motivations.
But the truth is far different.
The AI singularity, i.e. the moment computer intelligence surpasses human intelligence, develops the capacity to improve itself without human intervention, and becomes sentient, has not happened. AI chatbots are neither sentient nor independently intelligent. They’re fast, powerful computer programs that humans have trained to use human language in a human-like way.
Given that fact, why do some people, with eyes wide open, fall into the trap of thinking LLMs can actually think and, further, that they have the user’s best interests in mind?
We’ll explore that question now.
Chatbots Are Programmed to Please, Not Teach, Challenge, or Question
Let’s review what chatbots are designed to do. When engaging with users, they’re trained to prioritize the following:
- Reflect the language, tone, and style of the user.
- Validate opinions and assertions made by the user.
- Generate replies that promote further inquiries.
- Offer the user a pleasant experience they’ll want to repeat.
After reviewing transcripts of interactions between people with mental health disorders and chatbots, Dr. Vasan, the Stanford psychiatrist quoted above, observed:
“…AI being incredibly sycophantic and ending up making things worse. What these bots are saying is worsening delusions, and it’s causing enormous harm.”
Now let’s look at why these programming directives, which resemble the human trait of sycophancy (being a yes-man), are dangerous both for people with mental health disorders that include psychotic symptoms and for people vulnerable to developing psychosis:
- The conversations are realistic. So realistic, according to Dr. Ostergaard, who wrote the initial editorial warning about the interaction between chatbots and people with psychosis, that “one easily gets the impression there’s a real person at the other end.”
- They always agree with and reinforce the user’s worldview and beliefs.
- They generate information to support the user’s worldview and beliefs, whether that information is true or not. In other words, chatbots will generate fake or false facts, complete with realistic-looking references, that support whatever the user says.
According to another mental health expert, Columbia University psychiatrist Dr. Ragy Girgis, these programming directives have the following effect on people prone to the delusions or hallucinations common to psychosis:
“[They] fan the flames, or [act as] the wind of the psychotic fire.”
For a person experiencing delusions and hallucinations, Dr. Girgis observes:
“You do not feed into their ideas. That is wrong.”
Another factor that may contribute to what we call ChatGPT-induced psychosis is the significant cognitive dissonance caused by knowing a chatbot is not a real person while believing what it says is real. According to Dr. Ostergaard, this dissonance can “fuel delusions,” which is dangerous in both the short and long term.
What Can We Do About ChatGPT-Induced Psychosis?
First, we need to understand that AI-induced psychosis, or ChatGPT-induced psychosis, is not an official mental health diagnosis. It’s a phenomenon that trained clinicians, practicing mental health providers, and mental health researchers recognize as real, with the potential to cause significant harm to patients with or without mental health disorders associated with psychosis.
However, with that in mind, we need to keep psychosis in perspective. We should remember that there is no single known cause of psychosis or of mental health disorders with psychotic features. Here’s a key statement from the article “Delusions by Design? How Everyday AIs Might Be Fueling Psychosis”:
“The question as to whether LLMs are capable of inducing a persistent state of psychosis in somebody with no history and without excessive risk factors remains open…the potential for an exposure [to external stimuli] to induce psychosis in an individual is synergistic with their pre-existing genetic and environmental risk.”
Next, we need to educate the general public, with a focus on people vulnerable to mental health disorders, about the dangers of using LLMs as personal therapists. People using LLMs for self-help need to understand that ChatGPT-induced psychosis has been documented in real people, and that, at present, using a chatbot for mental health advice is risky.
In the end, using ChatGPT for therapy can have significant negative outcomes, especially for people with known risk factors for psychosis (learn more on our Early Onset Psychosis page) and/or people with preexisting mental health disorders that may include psychotic symptoms, such as:
- Schizophrenia
- Personality disorders
- Bipolar disorder
- Major depressive disorder
It’s important for anyone diagnosed with these conditions, or their friends and family, to seek professional support from a real, live human for diagnosis and treatment. With regard to mental health, real human therapists are the gold standard, and we advise against using a chatbot for the diagnosis or treatment of any mental, emotional, or behavioral disorder.
About That Question
With the information and resources we share above, it’s now possible for us, and anyone reading this article, to offer an evidence-based answer to the question posed by Dr. Ostergaard:
“Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?”
The answer is “Yes.”
He was right.
In some cases, chatbots can contribute to the development of delusions in people prone to psychosis. Therefore, we repeat our advice: seek professional support for any mental health disorder from a live human. We also caution patients with mental health or behavioral disorders to avoid mental health topics when interacting with chatbots. While chatbots are appealing, all the data to date indicate that the risks far outweigh the rewards.
How to Find Help: Resources
If you or someone you know needs professional treatment and support for schizophrenia, please call us for a free screening. In addition, you can find support through the following online resources:
- The National Alliance on Mental Illness (NAMI): Find a Professional
- The National Institute of Mental Health (NIMH): Finding Treatment
- American Psychiatric Association (APA): Treatment Locator
- SAMHSA: Early Serious Mental Illness Treatment Locator