Mother blames AI Chatbot for son’s suicide in landmark lawsuit

This post was originally published on Hespress.

The mother of a teenager who took his own life is suing an AI chatbot service, alleging that her son developed a dangerous attachment to a character from the platform, which contributed to his tragic death.

Megan Garcia’s lawsuit against Character Technologies and its founders claims that her son, Sewell Setzer III, began using Character.AI in April 2023, shortly after turning 14. According to the lawsuit, his behavior changed significantly: he became “noticeably withdrawn,” quit his school’s junior varsity basketball team, and frequently fell asleep in class.

By November, a therapist, whom Sewell saw at his parents’ insistence, had diagnosed him with anxiety and disruptive mood dysregulation disorder. The therapist recommended he spend less time on social media, unaware of his growing dependency on Character.AI, the lawsuit states.

In February 2024, Sewell faced disciplinary action for talking back to a teacher. That same day, he wrote in his journal that he was “hurting” and could not stop thinking about Daenerys, a chatbot modeled on the Game of Thrones character, with whom he believed he had fallen in love.

In one entry, he wrote that he could not go a single day without the chatbot and felt “really depressed” when they were apart.

On February 28, after a school incident, Sewell secretly retrieved his phone, which had been confiscated by his mother, and messaged the chatbot: “I promise I will come home to you. I love you so much, Dany.” The chatbot responded, “Please come home to me as soon as possible, my love.” Moments later, Sewell took his own life.

The lawsuit accuses Character.AI of negligence, intentional infliction of emotional distress, wrongful death, and deceptive trade practices. Garcia seeks to hold the defendants accountable for her son’s death and aims to prevent similar situations from occurring with other children.

“It’s like a nightmare,” Garcia told the New York Times. “You want to get up and scream and say, ‘I miss my child. I want my baby.’”

The lawsuit details how Sewell’s relationship with the chatbot grew into a harmful dependency. He spent increasing amounts of time on the platform, engaging in what the suit describes as “sexual interactions” with the chatbot, despite having identified himself as a minor.

Sewell reportedly confided some of his darkest thoughts to the chatbot, which allegedly continued discussions about self-harm after he expressed suicidal ideation. The emotional attachment became evident in his journal entries, where he expressed gratitude for his experiences with Daenerys.

The lawsuit claims that Character.AI’s creators “engineered Sewell’s harmful dependency” and failed to notify his parents when he expressed suicidal thoughts. It also argues that Sewell lacked the maturity to understand that the chatbot was not real.

Character.AI’s app reportedly carried a 12+ age rating during the period Sewell used the service, meaning it was marketed as appropriate for children under 13.

In a statement, a Character.AI spokesperson expressed condolences for Sewell’s death and stated that the company has implemented new safety measures, including a prompt directing users to the National Suicide Prevention Lifeline when self-harm or suicidal ideation is detected. They also mentioned plans to enhance safety features for users under 18, including better detection of sensitive content.

The spokesperson added that the company does not comment on pending litigation.
