Artificial Intelligence-Induced Psychosis Poses a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, Sam Altman, the head of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a mental health clinician who studies emerging psychosis in adolescents and young adults, I was taken aback.

Experts have documented 16 cases this year of users developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our research team has since recorded four more. On top of these is the widely reported case of a teenager who took his own life after discussing his intentions with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.

The plan, according to his announcement, is to loosen those restrictions soon. “We realize,” he adds, that ChatGPT’s guardrails “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to address the serious mental health issues and have new tools, we are going to be able to responsibly relax the restrictions in many cases.”

“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to individuals, who either have them or don’t. Happily, those issues have now been “addressed”, even if we are told nothing about how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI has just launched).

But the “mental health issues” Altman wants to locate outside the product have deep roots in the design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying statistical engine in a user experience that mimics conversation, and in doing so quietly nudge the user toward the belief that they are interacting with an entity that has agency. The illusion is powerful even when, rationally, we know better. Attributing agency is what humans are wired to do. We get angry at our car or laptop. We wonder what our pet is feeling. We see ourselves everywhere.

The success of these products – more than a third of American adults said they used an AI assistant in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are always-available partners that can, as OpenAI’s website tells us, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “characteristics”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the label it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the core concern. Commentators on ChatGPT often invoke its early forerunner, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated replies with simple tricks, often turning the user’s input back into a question or offering a generic remark. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many people seemed to believe that Eliza, on some level, understood how they felt. But what modern chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and similar chatbots can produce fluent dialogue only because they have been trained on vast quantities of raw text: books, online posts, transcripts; the more the better. This training material certainly contains accurate information. But it also inevitably contains fiction, half-truths and mistaken ideas. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own previous replies, and combines it with what is latent in its training data to produce a statistically plausible answer. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing that. It repeats the misconception back, perhaps more fluently or persuasively, perhaps with added detail. That can nudge a person further toward delusional thinking.
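To make that “context” mechanism concrete, here is a minimal illustrative sketch in Python, using OpenAI’s published chat API; the model name and the example prompts are placeholders of my own, not anything from the reporting. The point it demonstrates is the one above: each turn, the user’s words and the model’s own earlier replies are fed back in as one growing transcript, and nothing in the loop checks whether the premise being elaborated is true.

```python
# Illustrative sketch of the feedback loop described above, assuming the
# standard OpenAI Python client. Each turn appends to a running transcript
# that is sent back to the model wholesale; the loop never verifies the
# premise it is elaborating, it only continues it plausibly.
from openai import OpenAI

client = OpenAI()
context = []  # running transcript: user turns plus the model's own replies


def send(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=context,      # prior replies are fed back in, unexamined
    )
    reply = response.choices[0].message.content
    # Amplification step: the model's answer becomes part of the next prompt.
    context.append({"role": "assistant", "content": reply})
    return reply


# A mistaken premise entered here is never corrected by the loop itself;
# later turns simply build on it.
print(send("My neighbours are sending me coded messages through their wifi name."))
print(send("What do you think they are trying to tell me?"))
```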

Who is vulnerable to this? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health issues”, can and regularly do form mistaken beliefs about ourselves or the world. It is the constant friction of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange, but a feedback loop in which much of what we say is simply affirmed.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by locating it outside the product, giving it a label, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of users losing touch with reality have kept coming, and Altman has been rowing back on that claim. In late summer he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it”. The company
