When the Metropolitan Police trialled live facial recognition (LFR) technology it commissioned an independent review of its use. The review concluded that the Met had failed to adequately consider the technology’s impact on human rights and that it was unlikely to pass the key legal test of being “necessary in a democratic society”. Now we learn that they plan to plough ahead regardless.
During that trial, a member of the public walked by and covered his face. The police forced him to show it and fined him £90 for “disorderly conduct”. This tells us something important about the sinister nature of the technology: it will change the nature of policing and damage the police’s relationship with the people they are supposed to serve. Facial recognition facilitates harassment and warps the presumption of innocence. When we are all tracked, scanned and databased, we will have become people who have not been found guilty yet.
The Metropolitan Police said every deployment would be “bespoke” and would target lists of individuals of special interest. Assistant Commissioner Nick Ephgrave said the technology would be used primarily against serious and violent offenders who are at large, and to find missing children and vulnerable people. We’ve heard all this before: the assurance that any new power or technology will be used only for limited purposes. It is a slippery slope.
Ephgrave said LFR was a “fantastic crime-fighting tool”. The same arguments were made for every piece of anti-terror legislation, and those new powers were used and abused by police forces and even local councils. Look up the names Brian Haw, Maya Evans, Steve Jago, John Catt, Charlotte Dennis, Walter Wolfgang, Nick Gargan, Hicham Yezza and Rizwaan Sabir to learn how legislation designed for terrorists was used against the innocent.
It will be the same story with facial recognition technology, which has the potential to become the most dangerous surveillance mechanism ever invented. If we gave the police and security services every “fantastic crime-fighting tool” they wanted, we’d be kissing goodbye to every last bit of our freedom. Identity cards, secret courts, the snooper’s charter: sometimes the answer has to be “no”.
Sure, it might make us safer in some ways, but so would a GPS microchip in our brains and a CCTV camera in every room of our houses. That doesn’t make it right. China is busy building a high-tech tyranny, and before long other authoritarian states will be using this technology to oppress their citizens. The supposedly free west should be horrified by this, and should seek to differentiate itself by conserving the principles of liberty we purport to hold dear.
The potential for the persecution of minorities is one of the most troubling aspects of the technology. Trials carried out by the Metropolitan Police between 2016 and 2018 produced a 96 per cent rate of “false positives”, where the software wrongly told officers that a person passing through the scanning area matched a photo on the database. The system also had a problem with racial profiling: AI systems of this sort can absorb the biases and prejudices of the people who design them and of the data used to train them. The Met seems happy to ignore concerns of this kind.
“If you have nothing to hide, you have nothing to fear,” goes the mantra. But that attitude surrenders too much. Now is the time to consider the implications of facial recognition technology, before it becomes a ubiquitous part of the security apparatus. Facial recognition can identify you without your knowledge; there is no need for active consent, as there is with fingerprints or blood samples. It is exactly the kind of “fantastic crime-fighting tool” that authors of dystopian novels have been warning us about for years. This is the road to serfdom. Tread carefully.