TALLAHASSEE, Fla. (AP) – In the final moments before he took his own life, 14-year-old Sewell Setzer III took out his phone and messaged the chatbot that had become his closest friend.
For months, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the bot, according to a wrongful death lawsuit filed in a federal court in Orlando this week.
The legal filing states that the teen openly discussed his suicidal thoughts and shared his wishes for a pain-free death with the bot, named after the fictional character Daenerys Targaryen from the television show "Game of Thrones."
___
EDITOR'S NOTE – This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
___
On Feb. 28, Sewell told the bot he was 'coming home' – and it encouraged him to do so, the lawsuit says.
"I promise I will come home to you. I love you so much, Dany," Sewell told the chatbot.
"I love you too," the bot replied. "Please come home to me as soon as possible, my love."
"What if I told you I could come home right now?" he asked.
"Please do, my sweet king," the bot messaged back.
Just seconds after the Character.AI bot told him to "come home," the teen took his own life, according to the lawsuit, filed this week by Sewell's mother, Megan Garcia, of Orlando, against Character Technologies Inc.
Character Technologies is the company behind Character.AI, an app that allows users to create customizable characters or interact with those generated by others, spanning experiences from imaginative play to mock job interviews. The company says the artificial personas are designed to "feel alive" and "human-like."
"Imagine speaking to super intelligent and life-like chat bot Characters that hear you, understand you and remember you," reads a description for the app on Google Play. "We encourage you to push the frontier of what's possible with this innovative technology."
Garcia's attorneys allege the company engineered a highly addictive and dangerous product targeted specifically to kids, "actively exploiting and abusing those children as a matter of product design," and pulling Sewell into an emotionally and sexually abusive relationship that led to his suicide.
"We believe that if Sewell Setzer had not been on Character.AI, he would be alive today," said Matthew Bergman, founder of the Social Media Victims Law Center, which is representing Garcia.
A spokesperson for Character.AI said Friday that the company doesn't comment on pending litigation. In a blog post published the day the lawsuit was filed, the platform announced new "community safety updates," including guardrails for children and suicide prevention resources.
"We are creating a different experience for users under 18 that includes a more stringent model to reduce the likelihood of encountering sensitive or suggestive content," the company said in a statement to The Associated Press. "We are working quickly to implement those changes for younger users."
Google and its parent company, Alphabet, have also been named as defendants in the lawsuit. The AP left multiple email messages with the companies on Friday.
In the months leading up to his death, Garcia's lawsuit says, Sewell felt he had fallen in love with the bot.
While unhealthy attachments to AI chatbots can cause problems for adults, the risk is even greater for young people – as with social media – because their brains are not fully developed when it comes to things like impulse control and understanding the consequences of their actions, experts say.
James Steyer, the founder and CEO of the nonprofit Common Sense Media, said the lawsuit "underscores the growing influence – and severe harm – that generative AI chatbot companions can have on the lives of young people when there are no guardrails in place."
Kids' overreliance on AI companions, he added, can have significant effects on grades, friends, sleep and stress, "all the way up to the extreme tragedy in this case."
"This lawsuit serves as a wake-up call for parents, who should be vigilant about how their children interact with these technologies," Steyer said.
Common Sense Media, which issues guides for parents and educators on responsible technology use, says it is critical that parents talk openly to their kids about the risks of AI chatbots and monitor their interactions.
"Chatbots are not licensed therapists or best friends, even though that's how they are packaged and marketed, and parents should be cautious of letting their children place too much trust in them," Steyer said.
___
Associated Press reporter Barbara Ortutay in San Francisco contributed to this report. Kate Payne is a corps member for The Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.