AI-generated media today is of astonishingly high quality, producing images and audio that are nearly indistinguishable from reality, and video isn’t far behind. But with this progress comes a new wave of legal and ethical battles.
Lawmakers are alarmed by deepfakes – synthetic media that mimic reality – fearing their potential to destroy reputations, especially in high-stakes election campaigns. Yet some of the new state deepfake laws raise serious First Amendment concerns.
While “political disinformation” has become a focus of the Democratic Party in recent years, Republicans also object to AI-enabled media used opportunistically to damage the reputation of their candidates. Deepfake fears have led to rare bipartisan action: almost a third of the states have passed laws regulating the use of deepfakes during elections.
Most laws against deepfakes are limited to civil penalties, but Texas and Minnesota go a step further, criminalizing synthetic media intended to “influence elections.” The Texas law resembles a defamation statute, and violations can mean up to a year in jail.
Minnesota’s law is even stricter: simply “spreading” a deepfake – resharing it on social media could suffice – can trigger liability, and repeat offenders can face up to five years in prison. Furthermore, a government official or nominee found guilty of spreading a deepfake can be removed from office.
From vague terms (“deepfake,” “distribute”) to harsh criminal penalties, these laws conflict with First Amendment protections, especially because they fail to exempt parody or satire.
Fortunately, in September, a state appeals court declared the Texas law facially unconstitutional. Regarding the scope of the Texas law, the court said, “Given that influencing elections is the essence of political speech, it is difficult to imagine what speech would not be covered by the statute.”
But even state civil liability laws have many of the same problems. It’s worth exploring California’s new deepfake law, AB 2839, which prohibits the dissemination of altered political media that could deceive a “reasonable person,” provided it is done “maliciously.” The law extends broadly to popular political content. California Governor Newsom has made clear, for example, that the banned media include everyday memes and edited videos.
The California law requires creators of parodies or satire to label their media as such. There are exceptions for broadcasters and newspapers, but no express exceptions for social media companies. Because social media companies “spread” political memes and deepfakes, it appears they too could be held liable for damages.
A controversial twist in AB 2839 is its “bounty hunter” provision, which allows any “recipient of materially misleading content” to sue “the person, committee, or other entity” that distributed the content. The prevailing party also recovers attorneys’ fees, so the law invites a potential litigation frenzy over digital content.
The California law essentially invites millions of social media users to sue people who create political memes and edited videos. Even someone who merely shares a post on social media could be liable because “distribution” remains undefined.
Like the Minnesota and Texas laws, California’s law has serious First Amendment problems. It is apparently designed to function as a prior restraint on online political media. As a nonprofit official who helped draft the law told TechCrunch:
The real purpose is actually neither compensation nor injunctive relief. It’s just to keep people from doing it in the first place. That would actually be the best outcome… to ensure that these deepfakes do not fraudulently impact our elections.
AB 2839 was signed into law in September. Christopher Kohls, the conservative meme maker whose edited satirical video was singled out by Governor Newsom, sued to block the law. In early October, a federal judge blocked enforcement of almost the entire law in that case, Kohls v. Bonta.
Some of these laws may survive, especially those that require only clear and simple disclosures. However, the Minnesota and Texas laws still raise serious First Amendment concerns because they criminalize election-related content.
In the words of one federal judge, these deepfake laws often act “like a hammer instead of a scalpel,” chilling far too much speech.