US: Tech giants settle lawsuits alleging chatbots led to boy’s suicide
Google and AI startup Character.AI have reached settlements in a series of lawsuits brought by families who claim AI chatbots caused harm to children, including contributing to the suicide of a Florida teenager.
The proposed settlements cover cases filed in Florida, Colorado, New York and Texas, although they remain subject to final agreement and court approval.
In one filing lodged in Florida, the parties said they had “agreed to a mediated settlement in principle to resolve all claims between them”. The terms have not been made public.
Among the cases is a lawsuit brought by Megan Garcia, whose 14-year-old son, Sewell Setzer III, died by suicide in February 2024.
Ms Garcia alleged that her son developed an emotional dependency on a chatbot hosted on Character.AI, a platform that allows users to interact with fictional personas, including characters inspired by Game of Thrones.
Sewell’s death was the first of several reported suicides last year that families linked to interactions with AI chatbots. The cases triggered wider scrutiny of child safety practices across the sector, including at companies such as OpenAI.
Google was drawn into the litigation through a $2.7bn licensing agreement it struck with Character.AI in 2024. As part of that deal, the startup’s founders, Noam Shazeer and Daniel De Freitas, both former Google employees, returned to Google.