In August this year, Sophie Linden, London's Deputy Mayor for Policing and Crime, approved a £3m contract for the development of retrospective facial recognition. This investment will bolster the Metropolitan Police Service's surveillance capabilities and enable authorities to "exploit investigative opportunities."
But legal experts view this technology as a form of exploitation that erodes freedoms, and have urged government officials to cease using it. While this push for moratoriums is well-intentioned, it fails to address the issues that underpin mass surveillance policies. Facial recognition is harmful in part because it is underdeveloped, but the root of the problem is how it is being abused.
Over the past few years, the Metropolitan Police Service has tested live facial recognition cameras at major events like the Notting Hill Carnival. The AI-powered cameras scan faces in real time, without consent, and have garnered intense scrutiny because of their high error rates. According to one study by the University of Essex, the systems' matches were inaccurate 80 per cent of the time, which resulted in wrongful arrests. UK Information Commissioner Elizabeth Denham recently expressed deep concern that LFR might be used "inappropriately, excessively or even recklessly."
Retrospective facial recognition differs from its live counterpart in that it applies to images and videos that have already been captured. But the two share a common flaw: both scan and match faces against large databases, a technique that is prone to algorithmic bias and misidentification, particularly when the subject is female, Black, or Asian. Consequently, both systems pose risks to society when applied to decision-making processes. Police use of facial recognition is one of the most troubling applications, since an incorrect match can wrongfully take away someone's freedoms.
London is at the centre of a global reckoning over facial recognition. For now, police use of the technology is legally permissible, but the tide is quickly turning around the world. In the United States, many cities have adopted moratoriums and, just last week, a majority of members in the European Parliament called for a ban on both government and business use of the technology.
Prohibitions may assuage public concerns, but only temporarily. They erroneously depict facial recognition itself as the menace without properly evaluating how it has been misused. When officials take this route, they place inordinate blame on the technology rather than on their own actions.
Facial recognition enables governments and businesses to collect and sort through mass amounts of information without permission, which in turn enables them to pick out and target individuals. In some countries, it is used to chill religious activities and censor political speech. Moratoriums and bans on the technology that do not address these human rights and civil liberties violations will do little to protect the public.
But mass surveillance didn't start with facial recognition, and it won't end with it. Outright bans give a false sense of safety: governments already have vast technologies at their disposal to monitor citizens, and businesses to track their customers. If we focus on the symptom of mass identification without consent rather than on its cause, these problems will only persist.