President Joe Biden’s 2024 campaign has assembled a special task force to ready its responses to misleading AI-generated images and videos, drafting court filings and preparing novel legal theories it could deploy to counter potential disinformation efforts that technology experts have warned could disrupt the vote.
The task force, which is composed of the campaign’s top lawyers and outside experts such as a former senior legal adviser to the Department of Homeland Security, is exploring what steps Biden could take if, for example, a fake video emerged of a state election official falsely claiming that polls are closed, or if an AI-generated image falsely portrayed Biden as urging non-citizens to cross the US border to cast ballots illegally.
The effort aims to produce a “legal toolkit” that would allow the campaign to respond quickly to virtually any scenario involving political misinformation, particularly AI-created deepfakes: convincing audio, video or images made with artificial intelligence tools.
“The idea is we would have enough in our quiver that, depending on what the hypothetical situation we’re dealing with is, we can pull out different pieces to deal with different situations,” said Arpit Garg, deputy general counsel for the Biden campaign, adding that the campaign intends to have “templates and draft pleadings at the ready” that it could file in US courts or even with regulators outside the country to combat foreign disinformation actors.
In recent months, the campaign has spun up the internal task force, dubbed the “Social Media, AI, Mis/Disinformation (SAID) Legal Advisory Group,” part of a broader effort across the campaign to counter all forms of disinformation, TJ Ducklo, a senior adviser to the Biden campaign, told CNN.
The group, which is led by Garg and the campaign’s general counsel Maury Riggan alongside outside volunteer experts, has already begun drafting some legal theories as it continues to research others, Garg said. It aims to have enough prepared to run a campaign-wide tabletop exercise in the first half of 2024.
The scramble highlights the vast legal gray area covering AI-generated political speech, and how policymakers are struggling to respond to the threat it could pose to the democratic process. Without clear federal legislation or regulation, campaigns such as Biden’s are being forced to take matters into their own hands, trying to devise ways to respond to images that might falsely portray candidates or others saying or doing things they never did.
Leveraging old laws to fight a new threat
Absent a federal ban on political deepfakes, lawyers for the Biden campaign have begun considering how they could use existing voter protection, copyright and other laws to compel or persuade social media and other platforms to remove deceptive content.
Campaign officials said the team is also considering how new laws against disinformation in the European Union could be invoked if a disinformation campaign is launched from or hosted on a platform based there. A recently passed EU law known as the Digital Services Act imposes tough new transparency and risk-mitigation requirements on large tech platforms; violations can lead to billions of dollars in fines.
The group is taking legal inspiration from a recent case in which a Florida man was convicted under a Reconstruction-era law for sharing fraudulent claims on social media about how to vote. The law in question criminalizes conspiracies to deprive Americans of their constitutionally guaranteed rights and has previously been used in human trafficking cases. Another law the group is examining is a federal statute that makes it a misdemeanor for a government official to deprive a person of his or her constitutional rights — in this case, the right to vote, Garg said.
Existing US election law prohibits campaigns from “fraudulently misrepresenting other candidates or political parties,” but whether that prohibition extends to AI-generated content is an open question. In June, Republicans on the Federal Election Commission blocked a move that could have made clear that the law covers AI-created depictions; the agency has since begun considering the idea but has not reached a decision.
As part of that proceeding, the Republican National Committee told the FEC in public comments last month that while it “was concerned about the potential misuse of artificial intelligence in political campaign communications,” it believes a current proposal that would explicitly give the FEC oversight of political deepfakes would exceed the commission’s authority and “would raise serious constitutional concerns” under the First Amendment.
The Democratic National Committee, meanwhile, has urged the FEC to crack down on intentionally misleading uses of AI, arguing the technology enables “a new level of deception by quickly fabricating hyperrealistic images, audio, and video” that could mislead voters.
Lack of guardrails around AI
Despite rising alarm about AI among members of Congress, US lawmakers are still in the early stages of grappling with the issue and do not appear close to finalizing any AI-related legislation. Beginning this summer, Senate Majority Leader Chuck Schumer convened a series of closed-door forums for lawmakers to get up to speed on the technology and its implications, covering topics such as AI’s impact on workers, intellectual property and national security. Those sessions are ongoing.
Schumer has signaled that with the election looming, he could seek to fast-track an election-focused AI bill before turning to legislation addressing the technology’s other effects. But he has also emphasized the need for a deliberate process, saying to expect results in months, not days or weeks. A separate bill introduced in September by a bipartisan group of senators would ban the deceptive use of AI in political campaigns, but it has yet to advance.
With no promise of regulatory clarity on the horizon, Biden’s team has been forced to grapple with the threat directly.
Some of the campaign’s anti-disinformation efforts, including coordination with DNC officials, have been in place since the 2018 midterm elections. But the rapid surge in the availability of sophisticated AI tools over the past year makes AI a unique factor in the 2024 race, Hany Farid, a digital forensics expert and professor at the University of California, Berkeley, told CNN.
In response, tech companies such as Meta — the parent company of Facebook and Instagram — have announced restrictions and requirements for AI in political speech on their platforms. This month, Meta said it would prohibit political advertisers from using the company’s new artificial intelligence tools that help brands generate text, backgrounds and other marketing content. Any political advertiser that uses deepfakes in ads on Facebook or Instagram will need to disclose that fact, it said.
Concerns about the use of AI technology extend beyond the creation of fake video and audio. Darren Linvill, a professor at Clemson University’s Media Forensic Hub, said AI can also be used to mass-produce articles and online comments designed to support or attack a candidate.
In a report released Thursday anticipating threats ahead of the 2024 election, Meta’s security team warned AI could be used by nefarious groups to create “larger volumes of convincing content,” but also expressed optimism that advances in AI can help root out coordinated disinformation campaigns.
The Meta report details how some social media platforms are grappling with how to handle deceptive uses of AI.
“Whereas foreign interference campaigns using AI-created content (or any other content for that matter) are seen as uncontroversially abusive and adversarial, authentic political groups and other domestic voices leveraging AI can quickly fall into a ‘gray’ area where people will disagree about what is permissible and what isn’t,” the report read.
Meta pointed specifically to an advertisement released by the RNC in April that used AI to create deepfake images imagining a dystopian United States if Biden were reelected.