- Campaigns fear AI-powered deepfakes will interfere with elections
- Legal remedies are limited and campaigns lack funds to fight
Political lawyers are gearing up for a contentious election cycle featuring a new generation of attack ads—AI-assisted and largely unregulated deepfake clips of their clients.
Lack of federal regulation, limited litigation strategies, and potential action from the Federal Election Commission are creating a volatile legal landscape for political lawyers, who are anticipating an onslaught of sophisticated AI-generated video or audio clips of a client doing or saying something that never happened.
It’s the “Wild West,” Caleb Burns, a lawyer at Wiley Rein who works for the Republican Party, warns his clients.
Some of these deceptive videos have already hit voters’ screens.
A clip circulating on social media was manipulated to depict Sen. Elizabeth Warren (D-Mass.) saying GOP votes “could threaten the integrity of the election.” The clip had no clear author. Florida Gov. Ron DeSantis’ presidential campaign posted an attack ad in June featuring a deepfake of rival Donald Trump hugging and kissing former White House medical adviser Anthony Fauci. The Republican National Committee released a deepfake political ad in April positing an apocalyptic America during President Joe Biden’s second term.
The bipartisan board of the American Association of Political Consultants in May unanimously agreed to condemn the use of generative deepfake content in campaigns. The organization also encouraged media to refuse to carry or deliver ads using deepfake generative AI content.
“The tools that AI is providing recently has upped the ante because they’ve made the product much better looking, much more believable, and much more targeted,” said Stuart Gerson, former US acting attorney general, who is now a member of Epstein Becker & Green’s litigation practice and a board member of the Campaign Legal Center.
With still more than a year until the next presidential election, most campaigns don’t have the time or resources to dive into deepfake strategies yet, said Kate Belinski, a partner at Ballard Spahr’s political law practice.
“Campaigns don’t have the money to litigate these cases,” Belinski said.
“It’s always hard at the outset of a cycle to kind of anticipate how these things will play out,” said Claire Rajan, who leads Allen & Overy’s political law group. “But I think it’s safe to say there’s not going to be a new rule at least from the FEC, maybe it’ll be a new statute.”
Can Candidates Sue?
Lawyers working for campaigns, PACs, and national parties say their clients’ legal remedies to combat falsified ads are limited.
Privacy law, copyright law, and defamation claims could all be potential avenues for litigation, whether for candidates who find themselves targeted in a political ad featuring deepfake technology or those who want to defend their use of the technology, according to Burns.
Questionable use of AI is often untraceable to its original source, which makes legal strategies even more complicated, said Adam Bonin, who represents Democrats in state and federal campaigns.
“Whether it’s a person—especially if it’s an opposing campaign—you could file a complaint in court to try to stop it,” said Bonin, who runs a solo practice specializing in political law compliance and advocacy.
Candidates at both ends of the political spectrum are worried about AI-powered interference, which could make it easier to get regulation through the door.
The Federal Election Commission on Aug. 10 requested public comments on possible regulation of deepfakes in political ads. A total of 50 lawmakers, including Reps. Adam Schiff (D-Calif.) and Katie Porter (D-Calif.), wrote a letter urging the FEC to step in.
Rep. Yvette Clarke (D-N.Y.) introduced a bill that would require disclosure of AI generation in political ads, but it’s expected to die in the Republican-controlled House. A handful of states, including California, Texas, and Minnesota, have passed their own AI regulations, and other states are following suit.
But political lawyers and reform advocates say lawmakers likely won’t act until an egregious deepfake forces a strong reaction, and that won’t happen until after the election. The FEC is typically hesitant to regulate candidates’ speech.
“The courts are generally reluctant to interfere with speech in the middle of the campaign,” Bonin said. “This is the heart of the First Amendment and judges don’t want to play referee between candidates.”
Any new agency regulation or law from Congress will likely come only in response to a damaging AI political ad hitting the market, said Kenneth Gross, senior political counsel at Akin Gump.
“I’m not optimistic, not at all optimistic” about the prospects of imminent regulation from Congress or the FEC, Gross said.
High Stakes
AI-powered deepfake political ads aren’t just a threat to candidates and their races, they also threaten government stability, according to Catherine Powell, a professor at Fordham University School of Law.
“This has serious risks for knowing the truth and for democracy,” Powell said.
The AI-backed attacks aren’t limited to domestic political actors, Gerson at Epstein Becker said.
“We are in the moral equivalent of a cyber war with Russia, China, Iran and North Korea,” Gerson said.