New research has found that just one in six young people report harmful content they encounter online.
It comes as the UK pushes forward with its Online Safety Bill, which looks to protect young people in the digital world.
Culture Secretary Nadine Dorries has said it will make the UK “the safest place in the world for our children to go online”.
Media regulator Ofcom will enforce the new laws, and has already started regulating video-sharing platforms established in the UK, such as TikTok, Snapchat and Twitch.
However, according to Ofcom, two thirds of teens and young adults have recently encountered at least one potentially harmful piece of content online, but only around one in six go on to report it.
Data revealed the most common potential harms young people came across online were offensive or ‘bad’ language (28 per cent); misinformation (23 per cent); scams, fraud and phishing (22 per cent); unwelcome friend or follow requests (21 per cent); and trolling (17 per cent).
A significant number of youngsters (14 per cent) also encountered bullying, abusive behaviour and threats; violent content; and hateful, offensive or discriminatory content, targeted at a group or individual based on their specific characteristics.
Following the results, the watchdog has launched a social media campaign aiming to reach young people on the sites and apps they use regularly.
However, the Government is facing calls to “slim down” the Online Safety Bill as it makes its way through Parliament, amid concerns over its impact on people’s freedoms and privacy, as well as on innovation.
The new internet laws will hand ministers “unprecedented” censorship powers, with significant implications for free speech, new research warns.
The legislation will, for the first time, legally require platforms to protect users from harmful content, with penalties for breaching the new rules including fines that could run into billions of pounds for larger companies.