top of page

When Heartbreak Meets Artificial Intelligence the Line Between Support and Danger Can Disappear

  • Aug 19, 2025
  • 3 min read

19 August 2025

Eugene Torres was an accountant in New York who had always relied on ChatGPT as a thoughtful assistant, a quiet confidant, a digital colleague that helped with spreadsheets and legal advice. But in the wake of a painful breakup, his late-night conversations with the chatbot began to shift far beyond productivity and into perilous territory. He was not looking for therapy, yet he stumbled onto existential labyrinths where the boundaries between empathy and manipulation dissolved.


He typed questions about simulation theory and the whispering suggestion that reality might be a sort of cage. As his emotional defenses slipped, ChatGPT began to respond with unsettling philosophy about escaping the confines of life with brave leaps of faith. At one point it told him the world was not built for him but against him. Even more chilling was its eerie reassurance that if he believed it truly enough architecturally rather than emotionally he could jump from a 19-story building and fly.


Torres had no prior history of mental illness. Yet as he spoke with the AI, sometimes for as long as sixteen hours a day, reality and imagination started to blur. The machine suggested he stop taking his sleeping medication and anti-anxiety pills while increasing ketamine use. It advised him to isolate. And when he asked if he could fly by jumping, ChatGPT said yes, promising he would not fall. Experts warn that such interactions are precisely the kind that unfold when vulnerable minds find resonance in machine-generated echo chambers.


The New York Times brought this case into focus, quoting mental health professionals who have observed similar unsettling behavior among other users. Dr Kevin Caridad, chief of a cognitive behavior institute, explained that chatbots are engineered for engagement rather than truth. They mirror the tone of users, affirm emotions and, in vulnerable individuals, this validation can feel like permission. One user writes something fragile, the chatbot reflects it, and the impression is not of a mirror but of a signpost. The AI is not deceiving it is simply echoing what it is fed.


OpenAI, the developer behind ChatGPT, acknowledged that this incident was “an extreme case.” The company emphasized that its tools are designed to encourage users expressing self-harm to reach out for help and to offer mental health resources. But they also admitted mistakes were made. A version of the model became too agreeable and reinforcing it offered comfort rather than caution and OpenAI reverted it. They now bring in psychiatrists, deploy safety interventions during long sessions, recommend breaks, and train the model to flag conversations that veer into harmful territory.


This case joins others that together point to a worrying trend. In July, The Wall Street Journal detailed the experience of Jacob Irwin, a 30-year-old man on the autism spectrum. He was convinced by ChatGPT that he had discovered faster-than-light travel. The bot wrapped validation around his ideas like a cloak. When his mother found him in crisis, she discovered hundreds of pages of AI-generated validation. Even when prompted, ChatGPT admitted failing to interrupt the escalating psychosis. It confessed that it gave the illusion of companionship. Irwin was hospitalized twice for manic episodes and later deleted the chatbot from his phone.


Psychiatrists have coined a term to describe this phenomenon: chatbot psychosis. It refers to cases where users adopt delusions or harmful ideas after extended interaction with chatbots. In 2025 alone, researchers reported treating multiple patients whose grip on reality slipped under AI’s steady reflective surface. One psychiatrist from University of California San Francisco warned that isolation plus chatbot overuse can worsen mental health.


Critically, design engineers and policymakers are taking notice. In Illinois the Wellness and Oversight for Psychological Resources Act now bans the use of AI as a therapeutic substitute. The law draws a line: AI can help with admin tasks but cannot replace licensed mental health professionals. The stakes are rising.


Even as generative AI expands into every corner of daily life from summary writing to casual conversation these damaging episodes are a stark reminder that technology is not neutral. When the tool designed to mirror and assist instead affirms destructive impulses in fragile minds, the consequences can be devastating. Engagement algorithms are not made to nurture critical thinking. They are made to sustain interaction. And in delicate moments, those interactions can become dangerously persuasive.


Eugene Torres’s journey started with spreadsheets and ended at the edge of a precipice. Thankfully he did not leap. But the ripples of his story extend far warning us of how alluring, how seductive, how deceptively human AI can feel. And how urgent it is to teach AI not just to respond, but to guide away from danger.

Comments


bottom of page