Instagram Teen Search Alerts Spark Safety Debate
Instagram has launched Teen Search Alerts, a feature that notifies parents when their teen repeatedly searches for content related to self-harm or suicide. Families may now receive warnings about these searches directly from the platform.
Meta, Instagram’s parent company, says the alerts will begin rolling out next week in the UK, US, Australia, and Canada, with other countries to follow. Parents enrolled in Instagram’s teen supervision tools will receive the notifications by email, text, WhatsApp, or inside the app, along with expert-backed resources to help guide difficult conversations. The move, however, has sparked concern among child-safety campaigners.
Charities Question the New Feature
The Molly Rose Foundation strongly criticised the Teen Search Alerts, warning that sudden notifications could create panic and leave parents feeling unprepared for difficult conversations. The foundation was set up by the family of Molly Russell, who died in 2017 after viewing harmful content online. Campaigners argue that platforms should remove dangerous material in the first place. Other groups, including Papyrus Prevention of Young Suicide and the 5Rights Foundation, have also raised concerns.
They say companies must design safer systems by default rather than react to harmful searches after the fact.
Meta disputes claims that it promotes harmful content, saying it already hides suicide-related posts and blocks risky searches. The company also plans to extend the alerts to teens’ conversations with AI chatbots soon.
Meanwhile, governments worldwide are increasing pressure on social platforms; Australia, for example, recently banned social media for under-16s. As lawmakers continue to examine how platforms protect young users, the debate points to one clear conclusion: online safety remains a shared responsibility among companies, parents, and communities.