(CNN Business) As George Floyd's death sparked protests in cities across the country, six federal agencies turned to facial-recognition software in an effort to identify people in images of the civil unrest, according to a new report from a government agency.
The agencies used facial-recognition software from May to August of last year "to support criminal investigations related to civil unrest, riots, or protests," according to a report released on Tuesday by the US Government Accountability Office, based on a survey of 42 federal agencies. The US Postal Inspection Service, for instance, told the GAO that it used software from Clearview AI, a controversial facial-recognition system, to help track down people suspected of crimes such as stealing and opening mail, and stealing from Postal Service buildings.
Floyd's death in May 2020, and the racial reckoning that followed, prompted prominent tech companies such as Amazon and Microsoft to stop providing facial-recognition tools to law enforcement. But as the GAO report shows, this technology had by then already spread throughout the US government, with uses ranging from conducting criminal investigations to identity verification.
Moreover, agencies were often unaware of how their employees or contractors were using the technology. One agency, for example, first told the GAO that its employees didn't use facial-recognition systems from outside the federal government — such as ones from state police or private companies — but a poll at that agency later found that employees had used such a system to conduct more than 1,000 facial-recognition searches.
At least 20 federal agencies used or owned facial-recognition software between January 2015 and March 2020, according to the report, including the FBI, Secret Service, US Immigration and Customs Enforcement, US Capitol Police, Federal Bureau of Prisons, and the Drug Enforcement Administration. In addition to being used to monitor civil unrest following Floyd's death, the report indicated that three agencies used the technology to track down rioters who participated in the attack on the US Capitol in January.
The use of facial-recognition technology to identify people in the wake of Floyd's death, in particular, concerns Lindsey Barrett, a Georgetown University Law Center fellow and adjunct law professor who studies privacy and surveillance. Barrett said she worries about the possibility that protesters could be subject to "unwarranted scrutiny for expressing" their "First Amendment rights."
"It's pretty frightening," she told CNN Business.
While 14 agencies told the GAO they used facial-recognition technology from outside the federal government to help with criminal investigations, only one of them "has awareness of what non-federal systems are used by employees," the GAO report said. That agency is US Immigration and Customs Enforcement, which told the GAO that in late 2020 it was building a list of approved facial-recognition technologies for employee use, the report said.
"However, the other 13 agencies do not have complete, up-to-date information because they do not regularly track this information and have no mechanism in place to do so," the report said, citing the IRS's Criminal Investigation Division as saying it doesn't track non-federal systems that employees use "because it is not the owner of these technologies."
"There's clearly a complete lack of oversight of the use of facial-recognition services by many federal agencies," Jeramie Scott, senior counsel for the Electronic Privacy Information Center, or EPIC, told CNN Business.
The report, which is a public version of a "sensitive" one the GAO initially issued in April, recommends that more than a dozen agencies track how their employees use facial-recognition systems that come from outside the federal government and assess the privacy and accuracy risks.
Facial-recognition systems have spread swiftly across the United States in recent years, as they can be used for everything from helping identify criminals and ensuring only certain people can get into an office building to tracking your face across the internet. Yet the technology has been vociferously opposed by civil rights groups over privacy concerns and other potential dangers it presents. For example, it has been shown to be less accurate when identifying people of color, and at least several Black men have been wrongfully arrested due to the use of facial recognition.
There are currently no federal laws governing the use of such technology, though some states and local governments have set their own rules limiting how it can be deployed, and legislation related to the technology has been introduced in Congress.
Ten agencies used Clearview AI between April 2018 and March 2020, according to the report, including the FBI, Secret Service, and DEA. Clearview is perhaps the best-known and most controversial facial-recognition company, having amassed a database of billions of images of people's faces pulled from social networks. The company limits its use to law enforcement (Clearview has said it has hundreds of such customers).
Five federal agencies used facial-recognition software from another company, Vigilant Solutions. Agencies also reported using an assortment of other companies' facial-recognition products in that same time frame, such as Amazon's Rekognition software. Ten agencies told the GAO that they used facial-recognition software from companies on a free-trial basis; the US Postal Inspection Service was one of them, as it reportedly used Vigilant Solutions for free for 10 months in 2017.
Because of its scope and timeline, the report doesn't detail some of the ways federal agencies are currently using facial recognition. For instance, the IRS's Criminal Investigation Division reported using the technology between April 2018 and March 2020, and the IRS recently began employing facial-verification technology from the company ID.me to allow taxpayers to opt out of the child tax credit. Facial verification compares two pictures to confirm they show the same person.