Judge blocks California law that targeted deepfake campaign ads

Vice President Kamala Harris, shown at the Democratic National Convention in Chicago, is one of the politicians targeted by AI-manipulated campaign material this year.
(Robert Gauthier / Los Angeles Times)
With deepfake video and audio making their way into political campaigns, California enacted its toughest restrictions yet in September: a law prohibiting, within 120 days of an election, political ads that include deceptive, digitally generated or altered content unless the ads are labeled as “manipulated.”

On Wednesday, a federal judge temporarily blocked the law, saying it violated the 1st Amendment.

Other laws against deceptive campaign ads remain on the books in California, including one that requires candidates and political action committees to disclose when ads use artificial intelligence to create or substantially alter content. But the preliminary injunction granted against Assembly Bill 2839 means there will be no broad prohibition against individuals using artificial intelligence to clone a candidate’s image or voice and portray them falsely without revealing that the images or words are fake.

The injunction was sought by Christopher Kohls, a conservative commentator who has created a number of deepfake videos satirizing Democrats, including the party’s presidential nominee, Vice President Kamala Harris. Gov. Gavin Newsom cited one of those videos — which showed clips of Harris while a deepfake version of her voice talked about being the “ultimate diversity hire” and professing both ignorance and incompetence — when he signed AB 2839, but the measure actually was introduced in February, long before Kohls’ Harris video went viral on X.

When asked on X about the ruling, Kohls said, “Freedom prevails! For now.”

Deepfake videos satirizing politicians, including one targeting Vice President Kamala Harris, have gone viral on social media.
(Darko Vojinovic / Associated Press)

The ruling by U.S. District Judge John A. Mendez illustrates the tension between efforts to protect against AI-powered fakery that could sway elections and the strong safeguards in the Bill of Rights for political speech.

In granting a preliminary injunction, Mendez wrote, “When political speech and electoral politics are at issue, the 1st Amendment has almost unequivocally dictated that courts allow speech to flourish rather than uphold the state’s attempt to suffocate it. ... [M]ost of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”

Countered Robert Weissman, co-president of Public Citizen, “The 1st Amendment should not tie our hands in addressing a serious, foreseeable, real threat to our democracy.”

Robert Weissman of the consumer advocacy organization Public Citizen says 20 other states have adopted laws similar to AB 2839, but there are key differences.
(Nick Wass / Associated Press)
Weissman said 20 states had adopted laws following the same core approach: requiring ads that use AI to manipulate content to be labeled as such. But AB 2839 had some unique elements that might have influenced Mendez’s thinking, Weissman said, including the requirement that the disclosure be displayed as large as the largest text seen in the ad.

In his ruling, Mendez, an appointee of President George W. Bush, noted that the 1st Amendment extends to false and misleading speech too. Even on a subject as important as safeguarding elections, he wrote, lawmakers can regulate expression only through the least restrictive means.

AB 2839 — which required political videos to continuously display the required disclosure about manipulation — did not use the least restrictive means to protect election integrity, Mendez wrote. A less restrictive approach would be “counter speech,” he wrote, although he did not explain what that would entail.

Responded Weissman, “Counter speech is not an adequate remedy.” The problem with deepfakes isn’t that they make false claims or insinuations about a candidate, he said; “the problem is that they are showing the candidate saying or doing something that in fact they didn’t.” The targeted candidates are left with the nearly impossible task of explaining that they didn’t actually do or say those things, he said, which is considerably harder than countering a false accusation uttered by an opponent or leveled by a political action committee.

For the challenges created by deepfake ads, requiring disclosure of the manipulation isn’t a perfect solution, he said. But it is the least restrictive remedy.

Liana Keesing of Issue One, a pro-democracy advocacy group, said the creation of deepfakes is not necessarily the problem. “What matters is the amplification of that false and deceptive content,” said Keesing, a campaign manager for the group.

Alix Fraser, director of tech reform for Issue One, said the most important thing lawmakers can do is address how tech platforms are designed. “What are the guardrails around that? There basically are none,” he said, adding, “That is the core problem as we see it.”
