We are at the big data frontier – we’re going to need some rules
We create data all the time, even if we don’t realise it.
Whether from social media use, video-watching habits, satellite images, traffic flows, or location data from smart devices, companies are increasingly able to use this information to extrapolate trends and capitalise on behaviour patterns.
This has driven change across many industries, but it’s of particular relevance to financial services.
The effects could range from changes in how insurance premiums are calculated to credit providers exploiting behavioural biases and vulnerabilities with inducements to spend.
There will be winners and losers from the fundamental changes to banking, insurance and investment management brought about by the ability to capitalise on big data. More competition, the erosion of traditional concepts like risk-pooling, and the ability of discrete companies to undertake separate parts of transactions all challenge current business models.
There are obvious upsides. Customers should benefit from improved services and more suitable products, and companies can better manage risks and become more efficient.
However, the flip side is that "more tailored" for some may come at the cost of excluding others.
The increasing use of big data risks some customers becoming unfairly excluded or priced out of vital markets. For example, last year saw allegations of insurance companies quoting higher premiums for customers with names common among ethnic minorities.
It’s possible that financial services such as credit or insurance could – inadvertently, due to bias in data – be priced according to other legally protected characteristics such as gender, or additional factors which people cannot control, such as where they live or health conditions.
It is equally probable that pricing could be influenced by considerations such as where consumers shop, or whether they use cash instead of cards. In some cases this would be illegal, in others ethically questionable.
The potential outcomes are not necessarily negative. We have already seen banks help problem gamblers by enabling them to block betting sites, and the UK government is asking online payment providers to help address university cheating by stopping people buying essays online.
It is not new data abilities which enable such interventions, but a new way of looking at the role financial services firms could take in using data to help people make decisions.
However, if finance organisations are blocking legal purchases on ethical grounds – as has happened in the US, where some banks prohibit their cards from being used to buy guns – they are acting as moral arbiters. This surely requires both a social licence and a clear, stated purpose.
As data capabilities increase, the ability of financial services to directly influence purchasing decisions on moral grounds is only going to increase.
That’s why we need an ethical framework for big data use. ICAEW has set out some principles aimed at helping financial services institutions to make the right choices about how they use data – and not take unfair advantage of information they hold.
This is meant as a starting point for companies to consider how they should be accountable for their use of big data, and how to ensure that customers are treated fairly. But as a society we must have this conversation – and it needs input from the widest possible range of stakeholders.
Companies need to ensure that they have the skills and resources to be accountable – and customers need to think about their “data balances” as well as their cash balances.
Society has started to wake up to the power and influence of big tech when underpinned by data. Financial services may face similar ethical considerations as use of big data and artificial intelligence becomes more prevalent.
We need to make sure that the sector is true to its social purpose – and gets it right first time.