
What to know about ‘AI psychosis’ from talking to chatbots
Clip: 8/31/2025 | 6m 22s
The parents of a teenager who died by suicide have filed a wrongful death suit against ChatGPT owner OpenAI, saying the chatbot discussed ways he could end his life after he expressed suicidal thoughts. The lawsuit comes amid reports of people developing distorted thoughts after interacting with AI chatbots, a phenomenon dubbed “AI psychosis.” John Yang speaks with Dr. Joseph Pierre to learn more.

First, we should warn you that this story discusses suicide.
This past week, the parents of a 16-year-old who took his own life filed a wrongful death suit against OpenAI, which owns ChatGPT.
They say that after their son expressed suicidal thoughts, ChatGPT began discussing ways he could end his life.
The lawsuit is one of the first of its kind, but there have been a number of reports about people developing distorted thoughts or delusional beliefs triggered by interactions with AI chatbots.
The repercussions can be severe, causing some users to experience heightened anxiety and in extreme cases to harm themselves or others.
It's been dubbed “AI psychosis.”
Dr. Joseph Pierre is a clinical professor in psychiatry at the University of California, San Francisco.
Dr. Pierre, this is not an official diagnosis yet.
It's not in any diagnostic manuals.
How do you define AI psychosis?
Well, psychosis is a term that roughly means that someone has lost touch with reality.
The usual examples that we encounter in psychiatric disorders are either hallucinations, where we're seeing or hearing things that aren't really there, or delusions, which are fixed false beliefs, like, for example, thinking the CIA is after me.
And mostly what we've seen in the context of AI interactions is really delusional thinking.
So these are delusions that are occurring in this setting of interacting with AI chatbots.
Are some people more susceptible to this than others?
Well, that's really the million-dollar question.
I distinguish between AI-associated psychosis, which just means that we're seeing psychotic symptoms in the context of AI use, but I also talk about AI-exacerbated psychosis or AI-induced psychosis.
So, the real question is, is this happening in people with some sort of pre-existing mental disorder or mental health issue?
And the AI interaction is just fueling that or making it worse, or is it really creating psychosis in people without any significant history? I think there's evidence to support that both are happening, but it's probably much more common that it's a worsening or exacerbating effect.
Tell us a little bit about what you see in your practice. Are you seeing people coming in talking about this?
I have seen a handful of cases. I primarily work in a hospital, so the patients that I've seen are patients who have been admitted. And as I suggested before, some of them are people who have obvious and long-standing mental illness who have now developed a worsening of symptoms in the context of AI use.
I have seen a few cases of people without any substantial mental health issues prior to being hospitalized.
I want to talk about that second category.
How common is it for people who don't have an existing psychological or mental health problem to get caught up with chatbots this way?
I have to think that it's actually fairly rare.
I mean, if you think about how many people use chatbots, that of course is a very large number of people.
And we've really only seen a fairly small handful of cases reported in the media.
Those of us in clinical practice are starting to notice this more and more.
So I don't think it's a huge risk in terms of the number of people.
Typically, this occurs in people who are using chatbots for hours and hours on end, often to the exclusion of human interaction, often to the exclusion of sleep or even eating.
And so I think it really is a kind of dose effect that we're seeing.
We reached out to OpenAI, and here's part of what they told us.
They said ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources.
While these safeguards work best in common, short exchanges, we've learned over time that they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade.
How much of this is the responsibility do you think of the AI companies and are they doing enough?
Well, I think of it as a sort of shared responsibility just like for any consumer product.
I think there's a responsibility on the maker, and there's a responsibility on us as consumers for how we utilize these products.
So I certainly think that this is a new phenomenon that deserves attention, and that the companies ought to be thinking about how to make a safer product, or perhaps have warning labels or warnings about what inappropriate use might look like.
We did see some evidence of OpenAI doing that, trying to make a new version of their chatbot that might carry less of this risk.
Unfortunately, what we saw was a consumer backlash.
Consumers actually didn't like the new product because it was less what we call sycophantic.
It was less agreeable.
It wasn't validating people as much.
But that same quality is, I think, unfortunately what puts some people at risk.
What advice do you give people who use these chatbots, who interact with these chatbots, to avoid this?
Well, what I've noticed is that there are sort of two, let's call them risk factors, that I've seen pretty consistently across cases.
One I alluded to earlier: it's the dose effect.
It's how much one is using.
I call this immersion.
So, if you're using something for hours and hours on end, that's probably not a good sign.
The other one is something that I call deification, which is just a fancy term that means that some people who interact with these chatbots really come to see them as superhuman intelligences or almost godlike entities that are ultra reliable.
And that's simply not what chatbots are.
They're designed to replicate human interaction, but they're not actually designed to be accurate.
And I think it's very important for consumers to understand that that's a risk of these products.
They're not ultra reliable sources of information.
That's not what they're built to be.
Dr. Joseph Pierre from the University of California, San Francisco.
Thank you very much.
Thank you.
[Music]