“Have you ever met Mark Zuckerberg?” the culture select committee asks, wasting no time in getting to the real issues about the spread of fake news during the coronavirus crisis.
Facebook’s UK public policy manager, a bespectacled millennial sat predictably next to a potted plant, looks nonplussed via webcam. “I have seen him,” he says without a hint of irony. “I haven’t personally met him.”
Cue scoffs from the distinguished panel of backbench MPs who, for the remainder of the session, simply cannot fathom why the Silicon Valley tech giant hasn’t sent someone more important.
It’s an inauspicious start for Facebook in what will end up being an astounding masterclass in incompetence and an all-round waste of everyone’s Thursday morning.
After thoughtful evidence from a slate of academics and experts about the problem of disinformation on social media, the DCMS committee turns to the bogeymen themselves — or at least three people who’ve gone through the rigorous bogeyman media training programme.
First up is Twitter, which is hugely exciting for the committee. This is the one they actually use, not just the one their constituents use or that weird dancing video one their teenage children always go on about.
It soon becomes clear what they’re worried about: bots. Surely Twitter should be cracking down on those pesky automated accounts that lay into MPs — sorry, I mean that spread disinformation about Covid-19?
“We’re very proud of the progress we’ve made over the past couple of years…” begins Twitter’s representative, who is UK head of government, public policy and — inexplicably — philanthropy.
“Well I don’t know why,” retorts John Nicolson, who is verified on Twitter and has more than 29,000 followers, but has not yet located the unmute button on Zoom.
Clive Efford, who’s lagging behind on 13,000, says: “Can I just clarify one or two things you said about blue tick accounts?”
It all seems to be missing the point. Bots may be an issue, but perhaps the key concern is that normal people are mistakenly spreading false information?
The MPs don’t ask this. But it wouldn’t matter if they did anyway, because the Twitter executive is about as likely to deviate from her prepared script as the Queen is to rip up her Christmas speech and say: “Sod this, I’m winging it.”
Committee chair Julian Knight, who looks more and more exasperated by the minute, goes in for the kill one final time. Has Twitter ever blocked or restricted the spread of one of Donald Trump’s tweets?
The question is expertly avoided with vague talk of “world leaders” and some feigned concern about Brazil.
“I didn’t actually detect an answer there,” says Knight, who’s not angry, just disappointed.
Then comes the turn of the Facebook whippersnapper who — despite his alleged inexperience — has already mastered the art of, ahem, making sure the questions match whatever it was he was planning to say anyway. You almost have to admire the semantic acrobatics employed to ensure that nothing meaningful is said about the firm’s duty of care to users.
The committee then asks whether it’s really in Facebook’s commercial interests to stop misinformation when this is what creates the most engagement, and for a moment we almost seem to be approaching something interesting.
But the moment soon passes. “Just give us a rough guess,” cries an MP at one point, with admirable commitment to the farce.
Last up is Google’s public policy wonk, who bowls everybody over by actually answering the first couple of questions. But as the talk turns to why Google serves ads to websites that peddle conspiracy theories, her responses drift back into platitudes about “rigorous enforcement”.
Knight, who by now has resorted to shouting “yes or no?” after every question, finally musters the compassion to put the session out of its misery.
But if you’re worried the tech titans are off the hook, fear not. The committee will be writing to all three companies to express its “displeasure” at the lack of clear answers. The fear in the Zoom is palpable.