San Francisco (CNN Business) Hundreds of Amazon employees, groups of investors, and dozens of civil rights groups and academic researchers are upset that the online retailer is selling facial-recognition software to governments. Yet on Wednesday, Amazon shareholders rejected a proposal that asked the company to stop the practice.
The resolution requested that Amazon's board stop selling its Rekognition software to governments unless a third-party evaluation determines the tool "does not cause or contribute to actual or potential violations of civil and human rights." Its failure was announced at the company's annual meeting in its hometown of Seattle, along with that of a similar proposal asking Amazon to enlist an outside group to study the risks of using Rekognition.
Amazon previously tried to prevent stockholders from considering the facial-recognition proposals, but the Securities and Exchange Commission said in April that it had to allow a vote.
Shareholder resolutions rarely pass; in fact, 10 others on issues including climate change, food waste and hate speech were also voted down by Amazon shareholders. Even when such resolutions are approved, they are typically toothless, since companies are not required to abide by them, so they represent more of a symbolic victory for supporters.
Yet the Amazon measures represented a growing backlash against facial-recognition software, which is increasingly used everywhere from police departments to rock concerts to homes, stores and schools. The systems are designed to identify specific people from live video feeds, recorded video footage or still photos, often by comparing their features with a set of faces (such as mugshots).
As the technology grows in popularity, it has come under increased scrutiny, with concerns mounting over how it is deployed, how accurate it is, and even where the faces used to train the systems come from. And despite the vote, the worries surrounding the technology are unlikely to dissipate.
"Even if perfectly accurate, face surveillance changes the balance of power between government and individuals. None of us can change our face print," said Shankar Narayan, director of the Technology and Liberty Project at the American Civil Liberties Union of Washington, who presented the resolution asking Amazon not to sell Rekognition to government agencies.
Speaking in advance of the meeting, Sister Pat Mahoney — a member of the Sisters of St. Joseph of Brentwood, New York, which invests in Amazon and supported the proposals — told CNN Business that her goal was to raise awareness about facial recognition.
"We're hoping for consciousness raising and an awareness in a broader base of shareholders," she said.
And while the resolutions were voted down, she said she will continue working to address the underlying issues surrounding the use and sale of facial-recognition technology.
There are currently no federal laws addressing how facial-recognition systems can be used, but a handful of states and local governments have passed or are considering their own rules for this and other surveillance technologies. San Francisco, for instance, banned the use of facial-recognition technology by city government just last week. Concern about the technology is also mounting on Capitol Hill.
On Wednesday, as Amazon's meeting unfolded in Seattle, the House Committee on Oversight and Reform held a hearing in Washington, D.C., on facial-recognition technology and its impact on civil rights and liberties. There, legislators asked questions about such issues as how developers can make the technology more accurate.
AI researchers and civil rights groups such as the American Civil Liberties Union are particularly worried about accuracy and bias in facial-recognition systems; there are concerns that the systems are less accurate at correctly identifying people of color and women. One reason is that the datasets used to train the software may be disproportionately male and white.
Joy Buolamwini, a witness at the hearing who founded the Algorithmic Justice League to combat bias in artificial intelligence, said her research indicates that better training data for these systems can help.
"But we have to be very cautious because even if you make accurate facial-recognition systems, they will be abused without regulations," she said.