Facebook’s Nick Clegg said last night that the social media giant will roll out a range of new features to help safeguard children on its Instagram platform. The announcement follows allegations that Instagram harms children.
New features include prompting teenagers to take a break from using the photo-sharing app, and “nudging” them if they are repeatedly viewing the same content that may not be good for their wellbeing.
Facebook is also planning to introduce new optional controls to allow parents and guardians to supervise their children’s online activity.
The initiatives, announced on Sunday by Sir Nick Clegg, Facebook’s vice president for global affairs, come after the tech company revealed late last month that it was pausing work on its Instagram for Kids project.
But critics say the company has acted only after pressure from outside.
Others have said the plan lacks detail and they are sceptical about the effectiveness of the new features.
Sir Nick told CNN’s State Of The Union programme: “We are constantly iterating in order to improve our products.
“We cannot, with a wave of the wand, make everyone’s life perfect. What we can do is improve our products, so that our products are as safe and as enjoyable to use as possible.”
Sir Nick said Facebook has invested 13 billion dollars (£9.5 billion) over the past few years in trying to help keep the platform safe and that the company has 40,000 people working on these issues.
Whistleblower Frances Haugen, a former data scientist with Facebook, went before the United States Congress last week. She accused the social media platform of failing to make changes to Instagram after internal research showed apparent harm to some teenagers, and of being dishonest in its public fight against hate and misinformation.
Haugen’s accusations were accompanied by tens of thousands of pages of internal research documents she secretly copied before leaving her job in the company’s civic integrity unit.
Ian Russell branded the announcement “a crisis comms initiative”. He set up the suicide prevention charity the Molly Rose Foundation after his 14-year-old daughter Molly took her own life in 2017, having viewed disturbing content on Instagram.
He said: “Since Molly’s death, nearly four years ago, I have grown used to hearing the response of the social platforms when they tragically fall short and their practices are inevitably challenged.
“Their usual PR reaction is to announce a small change to their guidelines or processes; speak fine words indicating their understanding and empathy; and then to go about their business as close to usual as possible.
“Every positive change that makes the platforms safer is of course welcome but, as time goes by, I am left asking, ‘Why do the platforms wait before reacting? Why don’t they use their unique influence and extraordinary wealth to innovate and lead?’”
Andy Burrows, head of child safety online policy at the NSPCC, said: “It is no surprise that yet again Facebook has committed to introducing child safety measures only in response to media pressure, as opposed to making sure their sites are safe by design in the first place.
“In this instance, this announcement has come a week after damning evidence revealed that Facebook has been fully aware of the ways it causes harm to children and has failed to act.
“It is time for tech firms to stop treating children’s safety as a public relations exercise, which is why we need to see an Online Safety Bill that forces companies to finally treat children as a priority.”