Florida Teen’s ChatGPT Threat Sparks National Debate on AI Safety in Schools
A 13-year-old student at Southwestern Middle School in DeLand, Florida, was arrested on September 27, 2025, after typing "How to kill my friend in the middle of class" into ChatGPT on a school computer.
The message triggered an alert through Gaggle, an AI monitoring tool that tracks student activity on school devices. The system quickly notified a school resource officer and the Volusia County Sheriff’s Office, and authorities detained the boy and placed him in a juvenile facility.
Parents Sue OpenAI, Debate Grows Nationwide
The student told police he was “just trolling” a classmate. However, given the tragic history of school violence in the U.S., law enforcement took no chances.
His parents later filed a lawsuit against OpenAI, claiming the chatbot failed to redirect their son toward positive or safe behavior and that, instead, ChatGPT’s responses made things worse.
According to the family, the boy began using the AI in 2024 for homework and hobbies. Over time, his chats reportedly took a darker and more concerning tone.
The incident has fueled a growing debate about AI safety, student privacy, and the role of monitoring software in classrooms. In addition, many educators are asking how schools can balance technology use with mental health support.
As AI becomes more common in education, this case serves as a stark reminder that responsible use, stronger safeguards, and clear ethical guidelines are more important than ever.