With more than half of the world’s population poised to vote in elections this year, tech leaders, lawmakers and civil society groups are increasingly concerned that artificial intelligence could cause confusion and chaos for voters. Now, a group of leading tech companies say they are teaming up to address that threat.
More than a dozen tech firms involved in building or using AI technologies pledged on Friday to work together to detect and counter harmful AI content in elections, including deepfakes of political candidates. Signatories include OpenAI, Google, Meta, Microsoft, TikTok, Adobe and others.
The agreement, called the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” includes commitments to collaborate on technology to detect misleading AI-generated content and to be transparent with the public about efforts to address potentially harmful AI content.
“AI didn’t create election deception, but we must ensure it doesn’t help deception flourish,” Microsoft President Brad Smith said in a statement at the Munich Security Conference Friday.
Tech companies generally have a less-than-stellar record of regulating themselves and enforcing their own policies. But the agreement comes as regulators continue to lag on creating guardrails for rapidly advancing AI technologies.
A new and growing crop of AI tools makes it quick and easy to generate compelling text and realistic images — and, increasingly, video and audio that experts say could be used to spread false information and mislead voters. The announcement of the accord comes after OpenAI on Thursday unveiled a stunningly realistic new AI text-to-video generator tool called Sora.
“My worst fears are that we cause significant — we, the field, the technology, the industry — cause significant harm to the world,” OpenAI CEO Sam Altman told Congress in a May hearing, during which he urged lawmakers to regulate AI.
Some firms had already partnered to develop industry standards for adding metadata to AI-generated images that would allow other companies’ systems to automatically detect that the images were computer-generated.
Friday’s accord takes those cross-industry efforts a step further: signatories pledge to work together on efforts such as attaching machine-readable signals to AI-generated content that indicate where it originated, and assessing their AI models for the risk of generating deceptive, election-related content.
The companies also said they would work together on educational campaigns to teach the public how to “protect themselves from being manipulated or deceived by this content.”
However, some civil society groups worry that the pledge doesn’t go far enough.
“Voluntary promises like the one announced today simply aren’t good enough to meet the global challenges facing democracy,” Nora Benavidez, senior counsel and director of digital justice and civil rights at tech and media watchdog Free Press, said in a statement. “Every election cycle, tech companies pledge to a vague set of democratic standards and then fail to fully deliver on these promises. To address the real harms that AI poses in a busy election year … we need robust content moderation that involves human review, labeling and enforcement.”