Wednesday 23 September 2020 5:00 am

DEBATE: Can we trust facial recognition technology?

James Fisher is chief product officer at Qlik. Robert Hoyle Brown is vice president of Cognizant’s Center for the Future of Work.


James Fisher, chief product officer at Qlik, says YES.

We need to move away from discussing whether we can trust a technology as if it has an innate “goodness” or “badness”. Arguing that we cannot trust it implies that we should dispose of it. As a result, we risk souring public opinion and missing out on the potential benefits as the technology matures. For example, did we all trust internet banking in the 90s? The answer is no, of course not.

Instead, what we should be asking ourselves is: “Do I trust the data that is fuelling facial recognition algorithms, and who is using it?” This is where the risk of bias is introduced, and it is a question we need to become better at asking.

When we question the data and the potential outcomes of its analysis, we can call upon organisations – whether the companies creating these solutions or those implementing them – to make the necessary improvements and take responsibility for governance. This is where we really need to focus.

Governance and testing of all forms of artificial intelligence are key, as the recent exam results crisis proved.


Robert Hoyle Brown, vice president of Cognizant’s Center for the Future of Work, says NO.

The error rates for facial recognition technologies remain very high, especially across different racial groups. Given the potential for those errors to compound – at scale – in the context of law enforcement, this undermines the principle of being innocent until proven guilty.

To overcome these challenges, tech companies need to conduct stringent performance testing. As a further safeguard, one job of the future we anticipate is the “algorithm bias auditor”: someone who ensures human bias is eliminated from a technology – its objectives, its inputs and outputs, related value judgments, and consequences – before it goes live.

For facial recognition to become fully mainstream, we will also have to navigate a thorny mix of standards, laws, regulations, and ethics. A cornerstone of democratic government is consent of citizens – not digital-political overlords – to ultimately control who “watches the watchers”. 

For now, there should be a moratorium on these kinds of technologies until the highest thresholds of certainty are met, and citizens feel they can fully trust the technology.


City A.M.'s opinion pages are a place for thought-provoking views and debate. These views are not necessarily shared by City A.M.