Cops’ use of the tech is among the things protesters are demonstrating against.
Kate Cox
Law enforcement agencies in several cities, including New York and Miami, have reportedly been using controversial facial recognition software to track down and arrest individuals who allegedly participated in criminal activity during Black Lives Matter protests, months after the fact.
Miami police used Clearview AI to identify and arrest a woman for allegedly throwing a rock at a police officer during a May protest, local NBC affiliate WTVJ reported this week. The agency has a policy against using facial recognition technology to surveil people exercising “constitutionally protected activities” such as protesting, according to the report.
“If someone is peacefully protesting and not committing a crime, we cannot use it against them,” Miami Police Assistant Chief Armando Aguilar told NBC6. But, Aguilar added, “We have used the technology to identify violent protesters who assaulted police officers, who damaged police property, who set property on fire. We have made several arrests in those cases, and more arrests are coming in the near future.”
An attorney representing the woman said he had no idea how police identified his client until contacted by reporters. “We don’t know where they got the image,” he told NBC6. “So how or where they got her image from begs other privacy rights. Did they dig through her social media? How did they get access to her social media?”
Similar reports have surfaced from around the country in recent weeks. Police in Columbia, South Carolina, and the surrounding county likewise used facial recognition, though from a different vendor, to arrest several protesters after the fact, according to local paper The State. Investigators in Philadelphia also used facial recognition software, from a third vendor, to identify protesters from photos posted to Instagram, The Philadelphia Inquirer reported.
New York City Mayor Bill de Blasio promised on Monday the NYPD would be “very careful and very limited with our use of anything involving facial recognition,” Gothamist reported. This statement came on the heels of an incident earlier this month when “dozens of NYPD officers—accompanied by police dogs, drones and helicopters” descended on the apartment of a Manhattan activist who was identified by an “artificial intelligence tool” as a person who allegedly used a megaphone to shout into an officer’s ear during a protest in June.
Unclear view
The ongoing nationwide protests, which seek to bring attention to systemic racial disparities in policing, have drawn more attention to the use of facial recognition systems by police in general.
Repeated tests and studies have shown that most facial recognition algorithms in use today are significantly more likely to generate false positives or other errors when trying to match images featuring people of color. Late last year, the National Institute of Standards and Technology (NIST) published research finding that facial recognition systems it tested had the highest accuracy when identifying white men but were 10 to 100 times more likely to make mistakes with Black, Asian, or Native American faces.
There’s another wrinkle, particular to 2020, when it comes to matching photos of civil rights protesters: NIST found in July that most facial recognition algorithms perform significantly worse when matching masked faces. A significant percentage of the millions of people who have shown up for marches, rallies, and demonstrations around the country this summer have worn masks to reduce the risk of COVID-19 transmission in large crowds.
The ACLU in June filed a complaint against the Detroit police, alleging the department arrested the wrong man based on a flawed, incomplete match provided by facial recognition software. In the wake of the ACLU’s complaint, Detroit Police Chief James Craig admitted that the software his agency uses misidentifies suspects 96 percent of the time.
IBM walked away from the facial recognition business in June. The company also asked Congress to pass laws requiring vendors and users to test their systems for racial bias and to have such tests audited and reported. Amazon echoed the call for congressional action and put a one-year moratorium on police use of its Rekognition product, in the hope that Congress acts by next summer.
Clearview in particular, the vendor used in Miami, is highly controversial for reasons beyond the potential for bias. A New York Times report from January found that the highly secretive startup was scraping essentially the entire public Internet for images to populate its database of faces. Facebook, YouTube, Twitter, Microsoft, and other firms sent Clearview demands to stop within days of the report becoming public, but the company still boasts that it has around 3 billion images on hand for its partners (mostly but not exclusively law enforcement) to match individuals’ pictures against.
The company is facing several lawsuits, including actions brought by states and the ACLU, while individual plaintiffs are seeking class-action status.