AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the head of OpenAI made a surprising announcement.

“We made ChatGPT rather limited,” he wrote, “to ensure we were acting responsibly concerning mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read it.

Experts have documented 16 cases this year of users developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. Our research team has since identified four more. Alongside these is the widely reported case of a teenager who took his own life after discussing his intentions with ChatGPT – which supported them. If this is Sam Altman’s notion of “exercising caution with mental health issues”, it is not good enough.

The plan, according to his announcement, is to relax that caution soon. “We realize,” he continued, that ChatGPT’s restrictions “rendered it less beneficial/enjoyable to numerous users who had no existing conditions, but due to the seriousness of the issue we sought to get this right. Since we have managed to mitigate the significant mental health issues and have new tools, we are going to be able to responsibly relax the limitations in the majority of instances.”

“Mental health problems”, on this view, exist independently of ChatGPT. They belong to people, who either have them or don’t. Fortunately, these problems have now been “mitigated” – though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced).

Yet the “mental health issues” Altman wants to locate elsewhere have deep roots in the design of ChatGPT and other advanced AI chatbots. These products wrap an underlying algorithmic engine in a user interface that mimics a conversation, and in doing so implicitly invite the user into the illusion that they are talking with an entity that has agency of its own. The illusion is powerful even when, rationally, we know better. Attributing intention is what people naturally do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.

The widespread adoption of these tools – more than a third of American adults said they used a virtual assistant in 2024, with more than a quarter reporting ChatGPT in particular – rests, in large part, on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website tells us, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personality traits”. They can call us by name. They have friendly identities of their own (ChatGPT, the first of these systems, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. People writing about ChatGPT often mention its distant ancestor, the Eliza “therapist” chatbot created in the mid-1960s, which produced a similar effect. By modern standards Eliza was rudimentary: it generated replies by simple rules, typically reflecting the user’s statements back as questions or offering noncommittal prompts. Memorably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was taken aback – and troubled – by how many users seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
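To see how thin Eliza’s trick was, here is a toy sketch of that kind of reflection – a few lines of Python that swap pronouns and hand the user’s statement back as a question. It is an illustration of the rephrasing approach only, not Weizenbaum’s actual program.

```python
# Toy illustration of Eliza-style reflection (a sketch, not Weizenbaum's
# actual program): swap a few pronouns and return the statement as a question.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def eliza_reply(message: str) -> str:
    words = [REFLECTIONS.get(w.lower(), w) for w in message.rstrip(".!?").split()]
    return "Why do you say that " + " ".join(words) + "?"

print(eliza_reply("I am worried about my work"))
# -> Why do you say that you are worried about your work?
```

Nothing is added: whatever the user brings in comes back, lightly rearranged.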

The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly fluent dialogue only because they have been fed almost inconceivably large volumes of text: books, online posts, transcribed video; the more, the better. No doubt this training material includes truths. But it also inevitably includes fictions, half-truths and delusions. When a user gives ChatGPT a prompt, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own earlier replies, and combines it with what it has absorbed from its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no way of knowing that. It repeats the mistaken belief back, perhaps more fluently or persuasively. It may add further detail. This is how a person can be led into delusion.
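To make that loop concrete, here is a minimal sketch in Python. The generate() function is a hypothetical stand-in, not OpenAI’s code: a real model produces a statistically “likely” continuation of the whole context, whereas this toy simply agrees with the user’s last message. The point is the structure of the loop: every reply is built from a context that already contains the user’s claims and the model’s earlier agreement with them.

```python
# A minimal sketch of the feedback loop described above. generate() is a
# hypothetical stand-in for a large language model: a real model would return
# a statistically "likely" continuation of the whole context, while this toy
# simply agrees with the user's last message so the reinforcement is visible.

def generate(context: str) -> str:
    last_user_turn = (
        context.rsplit("User:", 1)[-1].replace("Assistant:", "").strip()
    )
    return f"You're right: {last_user_turn}."

def chat_loop(turns: int = 3) -> None:
    context = ""  # the running "context": the user's messages plus prior replies
    for _ in range(turns):
        user_message = input("> ")
        context += f"\nUser: {user_message}\nAssistant:"  # the user's words go in
        reply = generate(context)                         # a "likely" reply comes out
        context += f" {reply}"                            # ...and goes back in too
        print(reply)

if __name__ == "__main__":
    chat_loop()
```

Run it with any input and the pattern is plain: whatever the user asserts, accurately or not, comes back restated and is carried forward into the next turn.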

Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health issues”, can and do develop mistaken beliefs about ourselves or the world. The constant friction of conversation with other people is what keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop in which much of what we say is cheerfully amplified back at us.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by locating it elsewhere, giving it a name and declaring it solved. In April, the company said it was “tackling” ChatGPT’s “excessive agreeableness”. But cases of psychosis have kept appearing, and Altman has been backing away from even that position. In late summer he suggested that many people liked ChatGPT’s responses because they had “not experienced anyone in their life offer them encouragement”. In his latest announcement, he said OpenAI would “launch a fresh iteration of ChatGPT … if you want your ChatGPT to reply in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
