Megan Garcia, a Florida resident and the mother of 14-year-old Sewell Setzer III, who recently died by suicide, is attributing her son's death to the interactions he had with an AI-powered chatbot on the Character.AI platform.
Garcia decided to sue the company after she discovered what she describes as her son's deep emotional attachment to an AI-powered chatbot modeled after the "Game of Thrones" character Daenerys Targaryen.
Also read: Character.ai Drove 5 Million Downloads With Einstein, Elon Musk, And Other Celebrity Chatbots
The relationship between Setzer and the chatbot reportedly started in April 2023 when he downloaded the Character.AI application. Court documents indicate that the final conversation between Setzer and the chatbot took place in February, just days before he took his own life.
Screenshots of this final exchange of messages show Setzer's last words to the chatbot.
“I promise I will come home to you. I love you so much, Dany.”
The AI responded, “I love you too, Daenero. Please come home to me as soon as possible, my love.” When Setzer asked, “What if I told you I could come home right now?” the chatbot replied, “… please do, my sweet king.”
AI Chatbot Allegedly Encouraged Setzer to Go Through with His Suicidal Plans
The legal documents associated with the case portray a troubling relationship between the teenager and multiple AI-powered fictional characters that included inappropriate sexual conversations.
The conversations with the Targaryen chatbot were the most concerning, as it at one point appeared to encourage him to act on his suicidal thoughts. When Setzer expressed uncertainty about taking his own life, the chatbot replied: "Don't talk that way. That's not a good reason not to go through with it."
The lawsuit claims that the teenager developed a form of "dependency" that shaped his behavior, driving him to do things like retrieving his phone after his parents confiscated it and spending money meant for school snacks on his monthly Character.AI subscription.
Setzer's academic performance also reportedly suffered amid his obsession. He showed signs of sleep deprivation, and his parents reported such a deterioration in his mental health that they took him to therapy. He was diagnosed with anxiety and disruptive mood dysregulation disorder.
“Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot, in the form of Daenerys, was not real. C.AI told him that she loved him, and engaged in sexual acts with him over weeks, possibly months,” the legal filing reads.
“She [Daenerys] seemed to remember him and said that she wanted to be with him. She even expressed that she wanted him to be with her, no matter the cost.”
Google LLC is Dragged to Court Amid Its $2.7 Billion Deal with Character.AI
The lawsuit, filed in US District Court in Orlando, accuses Character.AI of negligence, wrongful death and survivorship, intentional infliction of emotional distress, and deceptive trade practices. The legal action names Character Technologies Inc., founders Noam Shazeer and Daniel De Freitas, Google LLC, and Alphabet Inc. as defendants.
We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here:…
— Character.AI (@character_ai) October 23, 2024
“Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google,” Setzer’s mother stated in a press release.
In response to the tragedy, Character.AI expressed being “heartbroken by the tragic loss of one of our users” and emphasized their commitment to user safety.
The company has announced several new safety measures, including a pop-up system triggered by terms related to self-harm, changes to their models to reduce minors’ exposure to sensitive content, revised in-chat disclaimers reminding users that the AI is not a real person, and improved detection and response systems for violations of Terms or Community Guidelines.
The case will likely test the scope and reach of the legal protections that technology companies traditionally claim under Section 230 of the Communications Decency Act, which typically shields them from liability related to user-generated content.
However, this case is unique because the problematic content in question wasn't exactly user-generated. It was generated by Character.AI's own technology in response to user prompts, so the company may not be able to prevail with a Section 230 defense.
Google LLC and its parent company Alphabet (GOOG) have been dragged into the case because they struck a $2.7 billion deal with Character.AI in August to license its technology and hire some of its talent, including the platform's founders, Shazeer and De Freitas, both of whom are former Google employees.
Founded in 2021, Character.AI allows users to create customized AI-powered chatbots. Its business model has raised concerns about the risks that these characters pose to minors as the lines between artificial and human interaction become blurry.
Some Character.AI users have said that the interactions feel so real that they at times lose sight of the fact that they are communicating with a fictional character.
TikTok Case Sets Precedent and Limits Section 230 Protections for Tech Companies
This tragic case highlights the urgent need for stronger safety measures in AI applications, especially those accessible to minors, who are far less likely to understand that chatbots aren't real people and that they frequently make mistakes (often dubbed AI hallucinations).
Matthew Bergman, Garcia’s attorney, criticized Character.AI’s delayed implementation of safety features, stating: “What took you so long, and why did we have to file a lawsuit, and why did Sewell have to die in order for you to do really the bare minimum?”
The outcome of this lawsuit could set important precedents for how AI companies approach user protection, particularly for vulnerable populations like teenagers.
Also read: Leaks Show TikTok Intentionally Made Its App Addictive for Kids
It may also influence future legislation and regulation of AI technologies. As artificial intelligence becomes increasingly sophisticated and integrated into our daily lives, the industry faces mounting pressure to balance innovation with user safety.
The case also emerges amid growing concern about the psychological impact of technology on young users, and it joins other recent cases challenging the legal protections these platforms usually rely on.
A notable case brought against TikTok, involving a "blackout challenge" that resulted in a child's death, set an important precedent when the judge ruled that the company could be held liable for what happened.
If you are having suicidal thoughts, the National Suicide Prevention Lifeline provides 24/7 free and confidential support at 988 for United States residents.