You are worth £142 to Facebook.
That is the financial figure attached to your scrolling through self-indulgent statuses and pictures of your friends’ babies. The same goes for the guy sitting next to you on the Tube, smugly counting the likes on his latest selfie with someone slightly famous.
It’s a simple calculation: company value divided by total users. Facebook published its earnings for the fourth quarter of 2018 this week, revealing a record 2.32bn people on the platform.
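The arithmetic behind that figure is easy to sketch. The company value used here (roughly £330bn) is not stated in the article; it is simply what the £142-per-user figure implies when combined with the reported 2.32bn users:

```python
# Back-of-the-envelope sketch of the per-user valuation.
# The user count comes from the article; the company value
# is an assumption implied by the £142 figure, not a quoted number.
company_value_gbp = 330e9   # assumed total company value, in pounds
total_users = 2.32e9        # users reported for Q4 2018

value_per_user = company_value_gbp / total_users
print(f"£{value_per_user:.0f} per user")  # prints "£142 per user"
```

The same division works in reverse: multiply the per-user figure by the user base and you recover the scale of the business being built on that data.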
It is these users – nearly one third of the world’s population – that are Facebook’s commodity, and ultimately its revenue stream.
This all comes after a year of data scandals, congressional hearings, and accusations that Facebook influenced elections. Yet the company has beaten revenue expectations and shown genuine growth. The smiling adverts and messianic speeches from senior executives proclaiming that they “will do better” clearly had the intended effect.
Trust is being rebuilt, with careful messaging around data privacy and security. Facebook is reformed, Facebook is redeemed – or so it would have us believe.
I gave a talk at Davos last month on the relationship between ethics and artificial intelligence (AI). While there, I was fortunate enough to watch Sheryl Sandberg, Facebook’s chief operating officer, speak about trust.
It was a classic case of style over substance, showing the disconnect between what’s disclosed publicly and what’s discussed behind closed doors in Californian boardrooms.
Just this Tuesday, Facebook was caught offering teen users money to download a “research” app that gathered sensitive data, including private messages and location information.
Like the interface of the platform, which is immaculately tuned to lure us in, Facebook’s recent overtures mask something more concerning.
The dirty truth is that creating protection mechanisms for Facebook users’ data is much more difficult than the company’s senior team is letting on. If you build a platform and set of algorithms that prioritise engagement and monetisation above everything else, you can’t simply click your fingers (or a mouse) to give autonomy or privacy back to the user.
Facebook is insatiable. Every part of the platform is designed to farm people’s data and sell it as effectively as possible.
I am not saying that Facebook or its executives are evil, nor that they began with malevolent intentions. They just happened to design a business model that now dictates the company’s direction.
This may seem like shouting pointlessly into the void, especially given Facebook’s monopoly. I have left the platform three times, enraged by encroachments on privacy that the company routinely buries in two dozen pages of terms and conditions. And each time, I return out of social necessity.
There is hope, however. The tech community is learning that companies built on the bedrock of data monetisation are set to have problematic futures.
Some of us in the tech world, from the inventor of the internet Sir Tim Berners-Lee to my own team at Friend, are working to break this cycle, creating transparent platforms that put users first.
It is only by aligning with users that genuine trust can be built, enabling us to safely integrate AI that can help us merge our lives with technology.
Big Tech will argue that the more they know, the more they can help us. But as billions of people come online (in 2016 the United Nations pledged to make internet access a universal human right), the infrastructure of the internet needs to be supportive of privacy.
If data is such a valuable commodity, why are we – the producers – not reaping the financial benefits? We must be given the choice of what data we share, and what we want to keep private.
Our memories may be worth £142 to an internet giant – but they are far more valuable than that to us.