Instagram has announced a new feature that will notify parents if their teenage children repeatedly search for terms related to suicide or self-harm within a short period. The move comes as pressure mounts on governments to emulate Australia’s ban on social media use for individuals under 16.
The platform, owned by Meta Platforms Inc., said it will begin sending alerts to parents enrolled in its optional supervision setting when their children attempt to access content related to suicide or self-harm. The alerts will begin next week for users in Canada, the United States, Britain, and Australia.
Instagram emphasized that the notifications extend its existing efforts to shield teens from potentially harmful content on the platform. The company maintains strict policies against content that promotes or glorifies suicide or self-harm; its current protocol is to block such searches and direct users to support resources.
Governments worldwide are increasingly focused on protecting children from online harm, spurred by incidents such as the AI chatbot Grok generating non-consensual sexualized images. Following Australia’s lead in December, countries including Britain, Spain, Greece, and Slovenia have signaled plans to impose their own restrictions to protect minors online.
In the UK, measures intended to keep children off pornography sites have raised privacy concerns for adults and stoked diplomatic tension with the US over free-speech limits and the boundaries of regulation.
Instagram’s “teen accounts” for users under 16 require parental consent to change settings. With their teenager’s agreement, parents can opt into additional monitoring tools, and the accounts restrict access to “sensitive content,” including sexually suggestive or violent material.

