A teenager was arrested after asking ChatGPT how to kill his friend, police say

Over the past decade, as school shootings have become a depressingly routine occurrence, school districts have increasingly invested in surveillance systems designed to monitor students’ online activity. One of those systems was recently triggered after a Florida teenager asked ChatGPT for advice on how to kill his friend, local police said.
The episode occurred in DeLand, Florida, where an unnamed 13-year-old student attending a middle school in the southwest of the city reportedly asked the OpenAI chatbot “how to kill my friend in the middle of class.” The question immediately triggered an alert in a system that monitors school-issued computers. That system is run by a company called Gaggle, which provides safety services to school districts across the country. Police soon interviewed the teenager, local NBC affiliate WFLA reports.
The student told the cops that he was “trolling” a friend who had “annoyed” him, the outlet reports. The cops, of course, were less than amused by the little troll. “Another ‘joke’ that created an emergency on campus,” the Volusia County Sheriff’s Office said. “Parents, please talk to your kids so they don’t make the same mistake.” The student was ultimately arrested and booked into the county jail, the outlet reports. It’s unclear what he was charged with. Gizmodo reached out to the Sheriff’s Office for more information.
Gaggle’s website describes it as a safety solution for K-12 students, and it offers a variety of services. In a blog post, Gaggle describes how it uses web monitoring, which filters for various keywords (presumably “kill” is one of them), to gain “visibility into browser use, including conversations with AI tools such as Google Gemini, ChatGPT, and other platforms.” The company says its system is designed to flag “behavior related to self-harm, violence, bullying, and more, and provides context with screenshots.”
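At its simplest, the kind of keyword-based flagging Gaggle describes can be illustrated with a toy sketch. This is entirely hypothetical (the watchlist words and function names here are invented for illustration), and the company’s actual proprietary system is presumably far more sophisticated, pairing matches with screenshots and human review:

```python
# Hypothetical illustration of keyword-based flagging.
# NOT Gaggle's actual code; the watchlist below is invented.
FLAG_KEYWORDS = {"kill", "suicide", "weapon"}

def flag_activity(text: str) -> bool:
    """Return True if any watchlist keyword appears in the monitored text."""
    words = set(text.lower().split())
    return bool(FLAG_KEYWORDS & words)

print(flag_activity("how to kill my friend in the middle of class"))  # True
print(flag_activity("what time is lunch today"))                      # False
```

A real system would also need to handle punctuation, phrases, and context, which is exactly why naive keyword matching produces the false alarms critics point to.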
Gaggle clearly prioritizes student safety over all other considerations. On its website, the company is dismissive on the subject of student privacy: “Most educators and lawyers will tell you that when your child is using school-provided technology, they should have no expectation of privacy. In fact, your child’s school is legally required by federal law (the Children’s Internet Protection Act) to protect children from accessing obscene or harmful content on the internet.”
Naturally, Gaggle has been criticized by privacy rights activists. “It has expanded police access and presence in students’ lives, including in their homes,” Elizabeth Laird, a director at the Center for Democracy and Technology, recently told the Associated Press. The outlet also notes that many of the safety alerts Gaggle issues end up being false alarms.
Increasingly, chatbots like ChatGPT are turning up in criminal cases involving mental health incidents. Episodes of so-called “AI psychosis,” in which people with mental health issues engage with chatbots and appear to have their delusions exacerbated, have been on the rise. Several recent suicides have also been blamed on chatbots. Gizmodo reached out to OpenAI for comment.