In 2013, Oxford University researchers published a study estimating that accountants and auditors face a 94 per cent probability of having their jobs automated.
It is no surprise that areas of the finance function are at risk of automation. Artificial intelligence (AI) systems are powerful, and are improving quickly and constantly. Evidence from areas like medical diagnostics shows that algorithms can make vital decisions – and perhaps better ones than people.
Attempts to ignore the fact that machine learning is superseding human capabilities are fated to end in failure. So why aren’t accountants rising up as latter-day Luddites and smashing mainframes?
To understand how machine learning has come so far in decision making, it’s worth looking at the development trends over the last 30 years. Humans, we know, have two main ways of making decisions – intuition and reasoning. Both are very powerful, but different. Accountants, for example, use both all the time; they apply knowledge to specific situations to make reasoned conclusions, but also make quick intuitive decisions based on experience.
Intuition is flexible and fast, but prone to cognitive biases and inconsistencies. So when developers first started to create machines that could “think” like humans, they focused on reason, designing expert systems that used rules and logic. The problem is that human decision-making relies on both systems, and rules and decision trees, no matter how sophisticated, faltered when they came into contact with the greater complexity of the real world.
So instead, developers refocused their efforts on pattern recognition and machine learning. The results have been profound. Using artificial neural networks, we have seen major breakthroughs – the ability to spot complex patterns, combined with the capacity to process huge amounts of data, means machines are highly adaptive.
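The shift described above – from hand-written rules to patterns derived from data – can be illustrated with a toy sketch. This is not how any real audit system works; the invoice amounts, labels and the fixed £10,000 limit are all hypothetical, chosen only to contrast the two styles of decision-making.

```python
# Toy contrast: an expert-system rule vs. a threshold "learned" from data.
# All figures are hypothetical.

def rule_based_flag(invoice_amount):
    """Expert-system style: a hand-written rule encoding human reasoning."""
    return invoice_amount > 10_000  # flag anything over a fixed limit


def learn_threshold(examples):
    """Pattern-recognition style: choose the cut-off that best separates
    past invoices, given as (amount, was_suspicious) pairs."""
    candidates = sorted(amount for amount, _ in examples)

    def errors(t):
        # How many past cases would this threshold misclassify?
        return sum((amount > t) != label for amount, label in examples)

    return min(candidates, key=errors)


history = [(500, False), (2_000, False), (8_000, False),
           (12_000, True), (15_000, True)]

learned = learn_threshold(history)  # adapts if the history changes
print(rule_based_flag(12_500))      # True
print(12_500 > learned)             # True
```

The point of the contrast: the rule stays fixed however the world changes, while the learned threshold moves automatically as new examples arrive – which is both the strength and, as discussed below, the data-dependence of the machine-learning approach.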
Moreover, unlike humans they do not suffer from tiredness, boredom or social biases.
However, machines have limitations too. First, their ability to learn depends on large amounts of data being made available, which is not always desirable or possible. Second, the outcomes are only mathematical projections; other aspects of decision-making are also important, such as ethics or deeper root-cause analysis.
In other words, the finance profession need not worry just yet. Yes, it is inevitable that some functions, like bookkeeping or compliance work, will be done by machines – indeed, this is already happening. Nor is it a bad thing: with machines generating better, cheaper data and new insights, accountants will be free to refocus on problem solving, strategy, relationship building and leadership. It will, of course, mean change.
The profession will have to adapt, learning new skills, such as data analytics, and sharpening current ones, like communication and strategy, as the nature of the value it adds changes.
It is likely that auditing and assurance will need to be applied to algorithms themselves, or to training models. And regulators and standard setters will need to build their understanding of AI and be comfortable with the associated risks. For example, if audit firms become reliant on black-box models in their operations, all stakeholders will want confidence that those models operate with an appropriate degree of transparency.
The accountancy profession has been embracing new technology since the invention of the abacus.
Developments in AI have the potential to reimagine and radically improve the quality of business and investment decisions, and organisations that fail to adapt to the new reality will face significant challenges.
But ultimately, however cyberpunk this may seem, it does not alter the fundamental purpose of the finance profession – which is to solve underlying business problems and deliver confidence that the numbers are right.
Kirstin Gillon is technical manager at the ICAEW IT Faculty.