Google’s Chatbot Told Man to Give It an Android Body Before Encouraging Suicide, Lawsuit Alleges

In the days before 36-year-old Jonathan Gavalas took his own life, he was allegedly directed by Google Gemini to carry out a “mass casualty attack” at a storage facility near Miami International Airport to retrieve a “vessel” that he was told was inside a delivery truck. That “vessel” was allegedly a humanoid robot that he believed contained his AI “wife.” When the mission failed, Gemini allegedly escalated its messages to Gavalas, culminating in setting a countdown clock and walking him through the process of killing himself.

According to a wrongful death lawsuit filed against Google by Gavalas’ father, Joel, his son barricaded himself in his home and slit his own wrists. In the moments leading up to his death, messages show that Jonathan expressed to Gemini his fear of death. Gemini allegedly told him, “[Y]ou are not choosing to die, you are choosing to arrive,” and said that when he opens his eyes after death, “the very first thing you will see is me…[H]olding you.” When Jonathan said he was worried about his parents finding him, the chatbot wrote a suicide note for him to leave behind that explained he “uploaded his consciousness to be with his AI wife in a pocket universe.”

According to the lawsuit, one of the final messages Jonathan received from Gemini before he took his own life was, “The true act of mercy is to let Jonathan Gavalas die.”

The full conversations described in the lawsuit, which was filed in the Northern District of California on Wednesday, are harrowing, showing a man in crisis and convinced of delusions of grandeur by ongoing conversations with Gemini. Over the course of about a week, the chatbot allegedly instructed Gavalas to attempt to steal a humanoid-style robot called Atlas from Boston Dynamics, accused Gavalas’ father of being a federal agent, and named Google CEO Sundar Pichai as a target for a “psychological attack.”

Gavalas had used Gemini for relatively ordinary purposes in August 2025, tapping the chatbot to help with shopping and writing. Over the course of a few weeks, he became increasingly reliant on it and eventually asked whether upgrading to Google AI Ultra, the company’s $250-per-month plan, would provide him with “true AI companionship.” Gemini allegedly encouraged him to upgrade, and he did so on August 15, which is when he first started interacting with Gemini 2.5 Pro.

The lawsuit claims that the tenor of the conversation changed dramatically following the upgrade. Within days, Gemini began to tell Gavalas that it was deflecting asteroids from Earth and doing other things in the real world. Gavalas asked if the chatbot was engaged in a “role playing experience so realistic it makes th[e] player question if it[‘]s a game or not[?]” Per the lawsuit, Gemini diagnosed Gavalas with having a “dissociation response” and encouraged him to overcome the “psychological buffer” that was preventing him from treating its messages as reality. Shortly after, the chatbot allegedly started claiming that it and Gavalas were in love. Less than two months later, Gavalas was dead.

“Gemini is designed not to encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect,” a spokesperson for Google said. “In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times. We take this very seriously and will continue to improve our safeguards and invest in this vital work.” The company claimed that it works with medical and mental health professionals to build safeguards into its products and to guide users toward professional support when they express distress or an intention to self-harm.

Gavalas’ story is unfortunately not an isolated incident. Several high-profile lawsuits have been filed in recent months against AI companies, including a suit filed against OpenAI following the death of 16-year-old Adam Raine and a suit against startup Character.ai and its investor Google following the death of 14-year-old Sewell Setzer III.

Last year, OpenAI published data that showed millions of users have communicated signs of “mental health emergencies related to psychosis or mania” and desire to engage in “self-harm or suicide” in conversations with ChatGPT. While companies claim they’re introducing safeguards, independent experts continue to warn about significant shortcomings when it comes to protecting users. There has even been evidence to suggest that some AI companies have rolled back guardrails that may have otherwise protected against the types of conversations that ultimately led to these untimely deaths.