Three ways to prevent the UK from falling into data anarchy
No business will admit to using personal data in ways that are unethical or discriminatory.
But when every social media click, phone call, card payment, and gym visit pings into existence a new nugget of data, how can any of us be sure that our data is being used ethically?
There are many excellent examples of how these nuggets are being aggregated into clouds of big data and used to improve lives around the world.
Take the crowdsourcing platform Ushahidi, for example, which uses big data to coordinate disaster relief activities. Data is volunteered by individuals and anonymised, before being used for specific activities that benefit others. Very few would argue that this sort of use of data is unacceptable.
At the other end of the spectrum, we see personalisation as a product offered to advertisers. Personal profiles are auctioned off to third parties as part of online advertising, creating revenue for buyers and sellers while denying individuals their right to opt out or control how their information is used.
No ethical business wants to be caught in that space. But there is a grey area in which many businesses unintentionally and embarrassingly find themselves, such as when female customers are given lower credit card limits than their husbands because of the way a finance provider used data, or when insurance premiums vary based on gender or ethnicity.
Cases like this get blamed on insufficiently trained algorithms. That, though, is a symptom of an over-reliance on bought-in “black box” systems: a credit card application goes in, and a credit limit number pops out, with no explanation of how one became the other.
As this technology becomes more widespread, these issues are garnering more attention. The Information Commissioner’s Office has now proposed that, in order to comply with GDPR, companies must be able to explain every decision made by automated systems.
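To make this concrete, here is a minimal, hypothetical sketch in Python of what that kind of explainability could look like. The rules, thresholds, and function names are all invented for illustration, not any real provider’s method: the point is simply that instead of returning only a number, the system returns the number together with the human-readable reasons that produced it, so a reviewer or a customer can see and challenge how the decision was made.

```python
# Hypothetical sketch of an "explainable" credit limit decision.
# All rules and thresholds below are illustrative assumptions only.

def credit_limit_with_reasons(income: float, existing_debt: float) -> tuple[float, list[str]]:
    """Return a proposed credit limit alongside the reasons behind it."""
    reasons = []

    # Assumed baseline rule: limit starts at 20% of declared annual income.
    limit = income * 0.2
    reasons.append(f"Baseline limit set to 20% of declared income: {limit:.0f}")

    # Assumed risk rule: halve the limit if existing debt is over half of income.
    if existing_debt > income * 0.5:
        limit *= 0.5
        reasons.append("Halved: existing debt exceeds 50% of income")

    # Equally important is what is absent: no gender, ethnicity, or proxy
    # variables enter the calculation, and an auditor reading the attached
    # reasons can verify that for every decision made.
    return limit, reasons


limit, reasons = credit_limit_with_reasons(income=40_000, existing_debt=25_000)
print(f"Proposed limit: {limit:.0f}")
for reason in reasons:
    print(" -", reason)
```

Real credit scoring is of course far more complex than this toy example, but the underlying principle, pairing every automated output with the reasons that produced it, is exactly what a requirement to explain automated decisions demands.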
Documenting automated decisions in this way, and mapping out where such analyses happen across the business, will ultimately make it easier for companies to critically review how they use customer data and ensure that their actions align with their ethics.
In addition to this more formal oversight, we often propose that companies apply a “tweet test” when they use customers’ data. They have to ask themselves how the public or media would react if the company tweeted about its analysis: praise or backlash? In other words, can they properly explain and justify what they are doing?
By integrating this simple empathy experiment into their work, everyone from contracted developer to chief executive can sense-check how data is used.
That said, preventing data anarchy is not the task of businesses alone. The government also has a role to play. Legislation like GDPR, along with guidance such as the European Commission’s Ethics Guidelines for Trustworthy AI, is starting to show businesses what is and is not acceptable.
But enforcement has so far focused on cyber security breaches and privacy invasions. This must expand to cover all GDPR requirements, which should be thought of as an important part of our digital human rights.
The final way to prevent data anarchy is via consumers, arguably the weakest link in the chain. When we accept free wifi in public spaces without reading the terms and conditions, or click “accept cookies” on a website without a second thought, we are giving our data away in return for less than its market worth.
To decide whether we are getting a fair deal, we need to become better educated about the value of our data. Once we understand what our clicks are worth, we can start to change the data landscape.
At present, we don’t do enough to protect our data, companies don’t always make it easy for us to understand or opt out, and the government needs to tighten up legislation and its enforcement. Only when all three stakeholders work together will data anarchy be prevented.
We’ve had a taste of the potential consequences if we get this wrong. It will be tough, but it is possible to take back control of our data.