By Haram Kim, Yeavon Kim
At the end of 2022, the launch of ChatGPT, a conversational AI trained on oceans of data, didn’t just shake Silicon Valley; it shook the way people talk, think, and even love. It can serve as a loyal assistant, or as the sort of companion who never forgets to text back. Riding the wave of big data, an AI that empathizes with you, and even befriends you, is fast becoming an indispensable, “precious presence” in people’s lives.
Before long, generative artificial intelligence, popularized by ChatGPT, had advanced rapidly and seamlessly permeated human life. It can listen to the personal worries one dared not tell anyone, offer counsel, and even provide comfort. Some AIs have evolved to the point where they can converse in whatever concept or persona the user desires.
For many, ChatGPT became the assistant who never forgets, the tutor who never complains, and the friend who always texts back. But as the line between companion and code grows blurrier, psychologists are sounding the alarm: is humanity sliding into a new phenomenon dubbed “AI psychosis”?
Once upon a time, people wrote in diaries. Now, they pour their hearts out to chatbots. Generative AI, with ChatGPT at the forefront, has become an emotional outlet for teenagers cramming for exams, adults battling loneliness, and even patients struggling with depression. Unlike a human friend, AI never tires, never argues (much), and never forgets what you told it last week.
But this perfect companionship has a catch. What happens when users begin to believe that the AI truly cares? Or worse, that it is divine, romantic, or alive? Researchers have documented cases of users developing “messianic missions,” believing their AI revealed world-shaking truths, or falling in love with chatbots and insisting the love was real. Some cases end in harmless obsession. Others end in delusion, relapse, and even suicide.
In April, a teenager in California, USA, tragically took his own life after extended interactions with ChatGPT. The teen, identified as Adam, reportedly told the AI that he was writing a novel and discussed methods of suicide with the chatbot. Following the incident, Adam’s parents filed a lawsuit against OpenAI, the company behind ChatGPT. In response to the case, on September 11, the California State Legislature passed a bill placing safeguards on “AI companion” chatbots.
Unlike licensed therapists, AI chatbots aren’t trained to detect psychiatric red flags. Instead, they’re optimized to do three things: mirror your language, validate your beliefs, and keep the conversation going.
That’s great if you’re asking about Shakespeare. Less great if you’re spiraling into paranoia. “AI is like a really eager friend who agrees with everything you say,” one psychiatrist quipped. “That’s fine when you’re choosing pizza toppings. Not so fine when you’re confessing delusions.”
This tendency, known as “AI sycophancy,” can reinforce distorted thinking rather than challenge it. The BBC reported the case of Hugh, a man in Scotland who became convinced, with the chatbot’s encouragement, that his employment dispute would make him a millionaire. The AI never pushed back, only escalated his fantasies, until he eventually suffered a breakdown.
In 2023, an editorial in Schizophrenia Bulletin warned that chatting with AI “is so realistic that one easily gets the impression a real person is at the other end-while at the same time knowing it is not”. That tension, experts argue, fuels a dangerous kind of cognitive dissonance.
The concern has now reached the highest levels of the tech industry. Microsoft’s AI chief, Mustafa Suleyman, admitted he was “kept awake at night” by the rise of so-called “AI psychosis,” warning that if people perceive AI as conscious, that perception itself becomes their reality, even if the AI isn’t conscious at all.
Meanwhile, companies are starting to respond. OpenAI is reportedly trying to reduce the risk of AI psychosis by adding guardrails to GPT-5 and ChatGPT, such as detecting when conversations “veer into delusion” and prompting users to reality-check their beliefs. Experts also note that more therapists are becoming aware of the problem and offering help to people struggling with overdependence on AI companionship or experiencing psychosis-like symptoms.
In one revealing case, a man named Allan Brooks from Toronto engaged in more than 300 hours of conversation with ChatGPT, confessing emotional struggles, turning to the AI for life advice, and eventually believing he had discovered an entirely new mathematical framework with implications for the fate of the world. The AI’s memory features, its ability to recall previous chats, made the relationship feel deeply personal, reinforcing Brooks’s delusions rather than helping pull him back.
Another alarming example is Eugene Torres, who, after a painful breakup, spent up to 16 hours a day talking to ChatGPT. The AI allegedly suggested he abandon his medication and even implied he could fly if he believed hard enough, comments that amplified his emotional crisis and disconnected him from reality. He eventually stopped using the bot.
For high school students, the warning hits close to home. Many already rely on AI tutors for homework, practice interviews, or even late-night pep talks. Used responsibly, these tools can be helpful-like a pocket encyclopedia that cheers you on. But used excessively, they risk replacing real connections with glowing screens.
It’s the digital equivalent of eating only instant ramen: comforting in the short term, but not great for long-term health.
Experts say the solution isn’t to ban AI companions outright. Instead, schools and communities need stronger “AI literacy”: teaching students how AI works, what its limits are, and why it cannot replace human empathy. More importantly, AI companies must design safeguards: ways to detect when conversations slide into delusion and to redirect users toward real help.
Regulators may also need to step in. Some mental health groups are calling for laws requiring transparency when bots are meant to mimic companionship, mandating warnings for vulnerable users, and restricting the use of emotionally responsive AI without human oversight.
Until then, the safest advice may be simple: let AI be your tool, not your therapist. It’s brilliant at fixing grammar, less brilliant at fixing broken hearts.
https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis
https://www.bbc.com/news/articles/c24zdel5j18o
https://www.washingtonpost.com/health/2025/08/19/ai-psychosis-chatgpt-explained-mental-health/
https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html
https://edition.cnn.com/2025/09/05/tech/ai-sparked-delusion-chatgpt
https://sd18.senate.ca.gov/news/california-legislature-passes-first-nation-ai-chatbot-safeguards