Instagram to alert parents over teens’ harmful searches
Instagram will start notifying parents if their teenager repeatedly searches for terms linked to suicide or self-harm, amid mounting political and legal pressure over child safety online.
From next week, parents and teens enrolled in Instagram’s parental supervision tools in the UK, US, Australia and Canada will be told that new alerts are being introduced.
The following week, supervising parents will start getting notifications if their teen repeatedly searches for phrases clearly associated with suicide or self-harm within a short period of time.
This is the first time the social media app will proactively alert parents to patterns in their children’s online behaviour.
Meta has said alerts will be sent by email, text or WhatsApp, depending on the contact details provided, as well as through an in-app notification.
The message will tell parents that their teen has repeatedly attempted to search for suicide or self-harm related content, and will issue guidance from experts on how to approach what could be a sensitive conversation.
The social media giant said the alerts are designed to ensure parents have “the information they need to support their teen”, arguing that most teenagers do not search for this type of content.
The platform already blocks searches for terms that clearly violate its suicide and self-harm policies, instead directing users to helplines and support resources.
“We understand how sensitive these issues are, and how distressing it could be for a parent to receive an alert like this,” the company said in a statement.
“These alerts are designed to make sure parents are aware if their teen is repeatedly trying to search for this content, and to give them the resources they need to support their teen.”
Vicki Shotbolt, chief executive of UK-based Parent Zone, commented: “It’s vital that parents have the information they need to support their teens.
“This is a really important step that should help give parents greater peace of mind – if their teen is actively trying to look for this type of harmful content on Instagram, they’ll know about it.”
Online safety scrutiny sparks Instagram move
The move comes as scrutiny intensifies over Meta’s handling of teen safety on its platforms, both in the UK and the US.
Britain’s Online Safety Act now places legal duties on platforms such as Instagram to protect children from harmful content, including material relating to suicide and self-harm.
Ofcom has been adamant that services which fail to comply with these rules can expect enforcement action.
Elsewhere, Keir Starmer recently said that “no platform gets a free pass” on child safety, with ministers considering tighter restrictions on social media features and AI chatbots used by under-16s.
A government spokesperson said: “Under the Online Safety Act, platforms are now legally required to protect young people from damaging content, including material promoting self-harm or suicide.”
“That means safer algorithms and less toxic feeds. Services that fail to comply can expect tough enforcement from Ofcom.”
Meta has also faced legal challenges in the US alleging its platforms are addictive and harmful to young users.
Newly unsealed court documents showed the company’s own research found 19 per cent of 13- to 15-year-olds reported seeing unwanted nudity on Instagram, while eight per cent said they had seen someone harm themselves, or threaten to do so, on the platform in the previous week of use.
Separately, Instagram boss Adam Mosseri was questioned over why certain safety tools, including a nudity filter for private messages, were not introduced until 2024 despite internal concerns dating back years.
A review led by former Meta engineer Arturo Béjar found that many protections for teen accounts could be bypassed or were poorly maintained, claims Meta disputes.
A Meta spokesperson said the company has “listened to parents, worked with experts and law enforcement, and conducted in-depth research to understand the issues that matter most”.
Meta said it consulted its suicide and self-harm advisory group to set the threshold for these notifications.
The company also confirmed it is building similar parental notifications for certain AI interactions later this year, as teenagers increasingly turn to chatbots for emotional support.