Instagram teen accounts still show suicide content, study claims

Imran Rahman-Jones, Technology reporter, and
Liv McMahon, Technology reporter

Instagram tools designed to protect teenagers from harmful content are failing to stop them from seeing suicide and self-harm posts, a study says.
Researchers also said the social media platform, owned by Meta, encouraged children "to post content that received highly sexualized comments from adults".
The tests, by child safety groups and cyber researchers, found that 30 of the 47 safety tools for teenagers on Instagram were "significantly ineffective or no longer exist".
Meta disputed the research and its findings, saying its protections have led to teenagers seeing less harmful content on Instagram.
"This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens use them today," a Meta spokesperson told the BBC.
"Teen accounts lead the industry because they provide automatic safety protections and straightforward parental controls."
The company introduced Instagram teen accounts in 2024, saying they would add better protections for young people and allow more parental oversight.
They were extended to Facebook and Messenger in 2025.
A government spokesperson told the BBC that requirements for platforms to tackle content which could harm children and young people mean tech companies "can no longer look the other way".
"For too long, tech companies have allowed harmful material to devastate young lives and tear families apart," they told the BBC.
"Under the Online Safety Act, platforms are now legally required to protect young people from damaging content, including material promoting self-harm or suicide."
The study into the effectiveness of its teen safety measures was carried out by the US research center Cybersecurity for Democracy and experts, including whistleblower Arturo Béjar, on behalf of child safety groups including the Molly Rose Foundation.
The researchers said that after setting up fake teen accounts, they found significant problems with the tools.
In addition to finding that 30 of the tools were ineffective or no longer existed, they said nine tools "reduced harm but came with limitations".
The researchers said only eight of the 47 safety tools they analyzed were working effectively, meaning teenagers were being shown content that broke Instagram's own rules about what should be shown to young people.
This included posts describing "degrading sexual acts", as well as autocomplete suggestions for search terms promoting suicide, self-harm or eating disorders.
"These failings point to a corporate culture at Meta that puts engagement and profit before safety," said Andy Burrows, chief executive of the Molly Rose Foundation, which campaigns for stronger online safety laws in the UK.
The foundation was set up after the death of Molly Russell, who took her own life at the age of 14 in 2017.
At an inquest held in 2022, the coroner concluded she died while suffering from the "negative effects of online content".
'PR-driven'
The researchers shared screen recordings of their findings with BBC News, some of which included young children who appeared to be under 13.
In one video, a young girl asks users to rate her attractiveness.
The researchers said Instagram's algorithm encourages children under 13 to engage in "risky sexualized behavior for likes and views".
They said it "encourages them to post content that receives highly sexualized comments from adults".
The study also found that teen account users could send "offensive and misogynistic messages to one another" and were suggested adult accounts to follow.
Mr Burrows said the findings suggested Meta's teen accounts were "a PR-driven performative gesture rather than a clear and concerted attempt to fix long-running safety risks on Instagram".
Meta is one of many large social media companies that have faced criticism over their approach to children's online safety.
In January 2024, chief executive Mark Zuckerberg was among several tech bosses grilled in the US Senate over their safety policies, and he apologized to a group of parents who said their children had been harmed by social media.
Since then, Meta has introduced a number of measures to try to improve the safety of children who use its apps.
But "these tools have a long way to go before they are fit for purpose," said Dr Laura Edelson, co-director of the report's authors, Cybersecurity for Democracy.
Meta told the BBC the research fails to understand how its content settings for teens work, and said it misrepresents them.
"The reality is that teens who were placed into these protections saw less sensitive content, experienced less unwanted contact and spent less time on Instagram at night," a spokesperson said.
They added that the features gave parents "robust tools at their fingertips".
"We will continue to improve our tools, and we welcome constructive feedback, but this report is not that," they said.
The company also pointed out that the Cybersecurity for Democracy research describes tools such as "take a break" reminders for managing time in the app as no longer available to teen accounts, when they have been rolled into other features or implemented elsewhere.
