“Who watches the watchmen?” asked the Roman poet Juvenal. To Big Tech watchers, the question is simpler: have the watchmen been watching at all?
Technology is remarkably under-regulated. For years, regulators blithely waved through a series of mega-mergers, failing to act for lack of proof that free products can harm consumers.
Meanwhile, a single piece of legislation – Section 230 of the US’s Communications Decency Act of 1996 – absolved internet firms of responsibility for what their users publish. As a result, tweets are far less strictly policed than columns like this one.
But the clamour for regulation is growing. The Biden administration is spoiling for a fight, and the tech firms find few allies on the right to protect them. The behemoths fear enforced breakups, while the social media companies fear a repeal of Section 230.
Industries under pressure from regulation have tended to follow the same path. Very publicly, and rather performatively, they self-regulate. So it is that organisations like Drinkaware and Gambleaware were formed by the industries they police.
Big Tech, the newest member of this sinner’s club, has started to do the same. Last year, Facebook launched its Oversight Board. Staffed with worthies from across the world, including a former Danish Prime Minister and a former Guardian editor, the group passes binding judgements on Facebook’s own content moderation decisions.
The problem with self-regulation, however, is that it has a habit of being too obviously self-serving. Instagram Kids, recently proposed and then scrapped by Facebook, fell into precisely this trap.
Today, only those aged thirteen and over are supposed to be able to create an Instagram account. In truth, many under-thirteens do have accounts. Facebook’s response was Instagram Kids: a moderated version of the platform, without adverts and with parental controls.
This week, development of the platform was “paused”, presumably indefinitely. Adam Mosseri, Head of Instagram, was sent out to explain. Dressed in full tech saviour uniform – tight jeans, baggy white jumper, gold chain, thick black glasses – Mosseri both defended the idea and abandoned it.
There are two schools of thought when considering the abandoned idea. To one, it appeared foolish. To the other, malicious.
The foolish school would argue that Instagram Kids represented a heartfelt attempt to deal with a genuine problem. A safer platform, after all, is surely better than an unsafe one. The logic founders, however, on an admission that Mosseri made in his announcement video: that Facebook is unable to stop under-thirteens from accessing its platform.
If that is true, Instagram Kids serves no real purpose. Any pre-teen worth their salt will follow the path more rebellious and sign up for adult Instagram anyway. The solution is therefore foolish on two counts. First, because it simply won’t work. Second, because it admits that Facebook has a problem that it cannot control.
The malicious school suggests there is more at play than misplaced good intentions. Instagram Kids was Facebook’s attempt to hook pre-teens early, so they remain loyal into adulthood. If that sounds too nefarious to believe, it is worth reading the words of a former Facebook employee, Sam Saliba, who once worked on a similar idea called Messenger Kids. “Losing the Teen audience was considered an ‘existential threat’”, he wrote on Twitter this week. At Facebook in his day, “teens and kids” were seen “as a growth lever”. According to a leaked Facebook report, seen by The Wall Street Journal, tech bosses considered using “playdates” to tempt younger and younger children onto their platforms.
Social media users are fickle beasts. Each generation has a habit of finding a new platform of choice, and rejecting its predecessor. So Myspace gave way to Facebook, before Facebook gave way to Instagram. Now Instagram is losing ground to new platforms like TikTok and Twitch. In the past, Facebook could simply buy the emerging platform, as it did with Instagram and WhatsApp. Today, with antitrust regulators circling, that option isn’t open. By this logic, Instagram Kids was an attempt to compete for pre-teens, not protect them.
You don’t have to decide whether Facebook was foolish or malicious to see the problem with self-regulation. If foolish, Instagram Kids was an ineffective means of dealing with a real problem. If malicious, it was a smokescreen for a commercial strategy. Whichever is true, and it is quite conceivable that both are, the failure of Instagram Kids contains a lesson for regulators: it is time to act.