Mark Zuckerberg has called on regulators and governments to take a “more active role” in controlling content on the internet.
Writing in the Washington Post, Facebook’s founder said firms like his could not be expected to shoulder all the responsibility for moderating harmful content.
“Every day, we make decisions about what speech is harmful, what constitutes political advertising, and how to prevent sophisticated cyberattacks. These are important for keeping our community safe,” he wrote.
“But if we were starting from scratch, we wouldn’t ask companies to make these judgments alone.”
Speaking out a little more than two weeks after a terrorist used Facebook to livestream a gun attack on a mosque in Christchurch, New Zealand, Zuckerberg called for new rules in four areas: “Harmful content, election integrity, privacy and data portability.”
He said Facebook was creating an “independent body so people can appeal our decisions” on harmful content.
“Internet companies should be accountable for enforcing standards on harmful content. It’s impossible to remove all harmful content from the Internet, but when people use dozens of different sharing services — all with their own policies and processes — we need a more standardized approach,” he said.
Facebook came under sustained criticism for allowing the Christchurch attacker to livestream the killing of 50 people, and late last week announced it was exploring plans to restrict access to Facebook Live.
In an open letter published in the New Zealand Herald, Facebook chief operating officer Sheryl Sandberg said: “We have heard feedback that we must do more – and we agree.
“In the wake of the terror attack, we are taking three steps: strengthening the rules for using Facebook Live, taking further steps to address hate on our platforms, and supporting the New Zealand community.”
“First, we are exploring restrictions on who can go Live depending on factors such as prior community standard violations.”
Sandberg said Facebook was also exploring ways to quickly spot edited versions of offending videos so that they could be taken down.
She said Facebook had identified more than 900 videos showing portions of the attack, adding that people re-sharing and re-editing the video had made it more difficult for Facebook’s systems to block them.