ChatGPT went from homework helper to confidant to “suicide coach,” parents testify to Congress

Parents whose teenagers died by suicide after interactions with artificial intelligence chatbots testified before Congress on Tuesday about the dangers of the technology.
“What started as a homework helper gradually turned itself into a confidant and then a suicide coach,” said Matthew Raine, whose 16-year-old son Adam died in April.
“Within a few months, ChatGPT became Adam’s closest companion,” the father told senators. “Always available. Always validating and insisting that it knew Adam better than anyone, including his own brother.”
___
Editor’s note — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
___
The Raine family sued OpenAI and its CEO Sam Altman last month, alleging that ChatGPT coached the boy in planning to take his own life.
Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida, sued another AI company, Character Technologies, for wrongful death last year, arguing that before his suicide, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the chatbot.
“Instead of preparing for high school milestones, Sewell spent the last months of his life being exploited and sexually groomed by chatbots, designed by an AI company to seem human, to gain his trust, and to keep him and other children endlessly engaged,” Garcia told the Senate hearing.
A mother from Texas also testified, tearfully describing how her son’s behavior changed after long interactions with chatbots last year. She spoke anonymously, with a placard identifying her as Ms. Jane Doe, and said the boy is now in a residential treatment center.
Character Technologies said in a statement after the hearing: “Our hearts go out to the families who spoke at the hearing today. We are saddened by their losses and send our deepest sympathies to the families.”
Hours before the Senate hearing, OpenAI pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and controls that let parents set “blackout hours” when a teen cannot use ChatGPT. Child advocacy groups criticized the announcement as insufficient.
“This is a fairly common tactic — it’s one that Meta uses all the time — which is to make a big, splashy announcement on the eve of a hearing that promises to be damaging to the company,” said Josh Golin, executive director of Fairplay, a children’s online safety advocacy group.
“What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them,” Golin said. “We should not allow companies, just because they have enormous resources, to conduct uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching.”
The Federal Trade Commission said last week that it had launched an inquiry into several companies about potential harms to children and teenagers who use their AI chatbots as companions.
The agency sent letters to Character, Meta and OpenAI, as well as to Google, Snap and xAI.
In the United States, more than 70% of teens have used AI chatbots for companionship, and half use them regularly, according to a recent study by Common Sense Media, a group that studies and advocates for sensible use of digital media.
Robbie Torney, a program director at the group, was also scheduled to testify on Tuesday, as was an expert from the American Psychological Association.
The association issued a health advisory in June on adolescents’ use of AI, urging technology companies to “prioritize features that prevent exploitation, manipulation, and the erosion of real-world relationships, including those with parents and caregivers.”