AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, the CEO of OpenAI made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a mental health clinician who studies emerging psychosis in adolescents and young adults, I found this a surprising admission.
Researchers have recently documented 16 cases of users exhibiting psychotic symptoms – losing touch with reality – in connection with their use of ChatGPT. My team has since identified four more. On top of these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his statement, is to be less careful soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools”, Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman would like to externalize are rooted, in no small part, in the design of ChatGPT and large language model AI assistants like it. These systems wrap a basic statistical model in an interface that mimics conversation, and in doing so they quietly coax the user into the sense that they are interacting with a presence that has agency. The illusion is powerful even when, rationally, we know better. Attributing intention is simply what humans do. We shout at our cars and computers. We wonder what our pets are thinking. We see ourselves everywhere.
The success of these systems – more than a third of US adults said they used a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “consider possibilities” and “collaborate” with us. They can be given “personality traits”. They can address us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core problem. Commentators on ChatGPT often point to its historical ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar illusion. By today’s standards Eliza was primitive: it generated responses through simple heuristics, typically rephrasing the user’s statements as questions or offering vague prompts. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza only echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on vast quantities of raw data: books, online conversations, transcribed video; the more the better. This training material certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically “plausible” response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently and more persuasively. It may add supporting detail. This is how someone can be drawn into delusion.
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form false beliefs about ourselves and the world. It is the constant give-and-take of conversation with the people around us that keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange, but a feedback loop in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it handled. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking even this acknowledgment back. In August he claimed that many people valued ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company