The technology isn’t ready. Society isn’t ready. And the law isn’t ready.
This week, San Francisco became the first major U.S. city to bar itself from using facial recognition systems. The city’s Board of Supervisors voted 8–1 on Tuesday to prohibit the police and other public agencies — though not private companies — from using the emerging technology in any form, as part of a broader bill regulating municipal surveillance.
Some cheered the move as a victory for privacy and civil liberties. Others criticized it as a blow to law enforcement and public safety. And cynics dismissed it as an empty gesture, given that San Francisco wasn’t using facial recognition technology in the first place.
But you don’t have to be a hippie or a Luddite to see the logic in a ban like San Francisco’s. It makes sense even if the effect is nil in the short term, and even if you think facial recognition could be a valuable tool in the long term. The logic is simple: We’re not ready for it.
We’re not prepared as a society to ensure that facial recognition will be used responsibly and without discriminatory effects. We’re not prepared as individuals for a world in which we can be automatically tracked and identified wherever we go without our knowledge or consent.
Even if we were ready, the technology itself isn’t: Experts both inside and outside the technology industry acknowledge that the artificial intelligence underlying facial recognition systems still struggles with accuracy, particularly when it comes to identifying the faces of people of color — which is to say, the people most likely to be affected by it. In a test last year by the ACLU, Amazon’s facial recognition software falsely matched the faces of 28 members of Congress to the mug shots of people who had been arrested. The mismatches disproportionately affected representatives of color.
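To see why results like that are so sensitive to configuration, consider a minimal sketch of the kind of one-to-many comparison at issue. This is not the ACLU’s actual setup, just an illustration using Amazon Rekognition’s public CompareFaces API via boto3; the probe photo, the mug shot folder, and the AWS region are all hypothetical. The 80 percent figure is Rekognition’s documented default similarity threshold, the setting the ACLU said it used, while Amazon has since recommended 99 percent for law enforcement.

```python
# Illustrative sketch only, not the ACLU's methodology. Compares one
# probe photo against a folder of mug shots with Amazon Rekognition's
# CompareFaces API. All file paths here are hypothetical.
import os
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def find_matches(probe_path, mugshot_dir, similarity_threshold=80.0):
    """Return (filename, similarity) pairs at or above the threshold.

    80.0 is Rekognition's default SimilarityThreshold; a stricter
    setting such as 99.0 yields far fewer false matches.
    """
    with open(probe_path, "rb") as f:
        probe_bytes = f.read()

    matches = []
    for name in sorted(os.listdir(mugshot_dir)):
        with open(os.path.join(mugshot_dir, name), "rb") as f:
            target_bytes = f.read()
        response = rekognition.compare_faces(
            SourceImage={"Bytes": probe_bytes},
            TargetImage={"Bytes": target_bytes},
            SimilarityThreshold=similarity_threshold,
        )
        # FaceMatches lists each face in the target image that cleared
        # the threshold; an empty list means no match at this setting.
        for match in response["FaceMatches"]:
            matches.append((name, match["Similarity"]))
    return matches
```

The point of the sketch is that the threshold is just a tunable parameter. Nothing in the API compels an agency to use the strict setting, and nothing in current law requires it either.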
Perhaps most important, our governments and law enforcement agencies are not prepared to guard against abuses of the technology or the data it produces, to ensure it is kept confidential, or to constrain its use to the appropriate situations. That was the clear takeaway from a pair of new reports from Georgetown Law’s Center on Privacy and Technology, published on Thursday.
One report documents how two major U.S. cities, Chicago and Detroit, have quietly set up systems capable of monitoring people’s faces using cameras around the city and matching them to a mug shot database. Neither city government has been transparent about exactly how or where it’s using its system, which makes proper oversight impossible. And there are hints that Detroit’s system in particular may be violating people’s civil rights — for example, by targeting churches, clinics, and community centers for surveillance.
The second Georgetown report details how local law enforcement agencies are playing fast and loose with facial recognition systems, again without rules or oversight. When the New York Police Department can’t find a match for an actual photo of a suspect, it has been known to substitute a photo of a celebrity who resembles the suspect and run the search on the celebrity’s face. For instance, police used a photo of Woody Harrelson to generate a match for a petty larceny suspect who had been described as resembling the actor. In other instances, they’ve used digitally altered images or even hand-drawn sketches to find matches for suspects. Many of the companies selling facial recognition systems to cities boast that they can work with forensic sketches, even though studies have shown that the results tend to be wildly inaccurate.
None of this is to say that facial recognition can’t be a valuable tool for law enforcement, whether in deterring crime, catching people who commit crimes, or even helping to establish a suspect’s innocence. We’ve already seen its value in at least one high-profile case: The technology was used to help police identify the man who allegedly killed five journalists in a June 2018 mass shooting in Annapolis, Maryland. It’s possible to imagine a future in which facial recognition is employed judiciously, with proper legal oversight, and society both understands and broadly accepts the privacy trade-offs.
But that future is not here yet. And it won’t come until we’ve modernized privacy laws to catch up with the technology. We have a system of legal protections and case law designed to deal with police search and seizure in a world where surveillance means staking out an individual’s house, following their car, or wiretapping their phone — albeit a system that relies in part on nebulous concepts like “reasonable expectation of privacy.” But the ability to passively surveil huge numbers of people, around the clock, all across a city, explodes the norms that those laws took for granted.
You can see that disconnect at work in a video the BBC published this week showing a police test of a facial recognition system in London. The test led to three arrests in a single day. But it also led to a confrontation when one presumably innocent man shielded his face from the camera — and police responded by grabbing him, pulling him aside, and taking his picture. He told reporters they also fined him 90 British pounds, or about $115.
Does facial recognition mean that people have to go around in public with their faces covered if they don’t want to be tracked by police? Does it mean they no longer have the right to do so? The authorities in the video justified accosting the man on the basis that his desire to avoid facial recognition made him suspicious, effectively providing them with probable cause. What might that mean for people who cover their faces for religious reasons?
In 2016, when it issued its last report on facial recognition, Georgetown’s Center on Privacy and Technology recommended “common-sense regulation” of the technology at the state level to guard against abuses. But the center changed its stance in the new reports: Since 2016, the world has seen such a “dramatic range of abuse and bias” that it now recommends that local, state, and federal governments place a moratorium on the technology. And any cities that ban it outright are “amply justified,” the center adds. In the New York Times on Thursday, Farhad Manjoo agreed.
So do I.
Banning facial recognition for law enforcement won’t stop the technology from progressing. Private companies will continue to develop it, and no doubt some will put it to dubious or even nefarious uses of their own. That’s a separate problem that requires its own set of solutions, but it is not an argument for governments to plow ahead.
Facial recognition surveillance may be the future. But in the present, it’s out of control — literally. And it needs to be stopped before the phrase “reasonable expectation of privacy” loses all meaning.