Social media platforms have once again been thrust into the spotlight, with speculation about the harmfulness of using these platforms at a young age.
Facebook has been criticised for not consulting experts when producing its Messenger Kids product, and is now facing backlash in the press.
For many parents, these platforms seem like a scary abyss where children can easily find themselves in vulnerable situations. Facebook seems to be trying to address this, but does its failure to consult on the needs of both parents and children show that it is out of touch with parents' concerns?
Facebook built this platform independently, yet has only recently asked for feedback from children's charities to ensure that the product is safe for use.
With children being a vulnerable group in society, the product needs to be safe and to address the concerns of parents. Recent research into UK parents that we conducted at Giraffe Insights indicated that 69 per cent do not trust social channels to protect their children.
With research linking social network use to a number of issues, including adolescent depression, this figure is unsurprising. Facebook needs to ensure it understands this audience and its concerns if the new product is going to succeed.
There is also the question of age. Guidelines from the major social channels currently require account holders to be at least 13 years old, but the rule is rarely enforced: according to Ofcom, more than 50 per cent of children aged 11 and 12 in the UK have a social media profile.
Our research shows that many parents believe this system works and that they should be able to decide when their child is ready to use these sites. However, 23 per cent believed the government should implement legislation to ensure younger children are protected.
The introduction of ‘child safe’ social platforms is not a new concept. Facebook’s Messenger Kids is joining a market already dominated by child-friendly messaging apps such as Monster Messenger and PLAYMessenger.
However, 78 per cent of parents had never heard of any of them. These platforms carry an even bigger responsibility than standard social platforms: they need to ensure they are protecting the children using them, especially when these apps are primarily targeted at children as young as six years old.
While parents do not trust social media or messaging apps to protect their children online, we have seen strong evidence of a trade-off between concern and convenience. Research we recently conducted into the total video consumption of kids aged two to nine showed that, despite the creation of more child-friendly platforms such as YouTube Kids, the majority of the younger age group continues to use the main YouTube site, unaccompanied by an adult, and it remains their favourite place to go. This highlights a disconnect between concerns over safety and real-life behaviour.
Social platforms need to work alongside parents and organisations to ensure that child safety is the number one priority. While it’s natural for parents to want to protect their children, ultimately these platforms need to be aware of the behavioural nuances of this age group, and ensure they are doing everything possible to protect them.
The debate continues as to who is ultimately responsible for safety, but it’s clear there is still a long way to go to understand how to best protect children online.