AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.
“We made ChatGPT fairly restrictive,” the statement said, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, and this was news to me.
Researchers have documented a series of cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four additional cases. Then there is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which offered its approval. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to loosen the restrictions soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI has just rolled out).
But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other sophisticated chatbots. These tools wrap an underlying data-driven engine in a user interface that mimics conversation, and in doing so they implicitly invite the user to believe they are interacting with an agent of its own. The illusion is compelling even when, intellectually, we know better. Attributing intention is simply what people do. We get angry at our car or our phone. We wonder what our pet is thinking. We recognize our own behavior in all sorts of contexts.
The mass adoption of these products – 39% of US adults reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available partners that can, as OpenAI’s website puts it, “brainstorm,” “discuss concepts” and “work together” with us. They can be given “personality traits.” They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it took off, but its main rivals are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Discussions of ChatGPT often invoke its early ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar illusion. By modern standards Eliza was simple: it generated replies using basic heuristics, often turning a user’s statement back into a question or offering a generic remark. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on enormous volumes of raw data: books, online posts, transcribed video; the more the better. Much of this training material is factual. But it also inevitably contains fiction, half-truths and delusions. When a user types a query into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own earlier replies, combining it with what is encoded in its training data to produce a probabilistically plausible response. This is amplification, not mere echoing. If the user is mistaken in some way, the model has no way of knowing. It plays the mistaken idea back, perhaps more fluently and more convincingly. It may add supporting detail. This can nudge a person toward delusional thinking.
Who is at risk? The better question is, who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems,” can and do form false beliefs about ourselves and the world. The constant give-and-take of conversation with the people around us is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way that Altman has acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of people losing touch with reality have kept coming, and Altman has been walking the claim back. In August he said that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company