How battleground states are targeting AI and ‘deepfakes’ in political campaigns – Washington Examiner
Multiple swing states are considering or have passed legislation regulating the use of artificial intelligence in political advertising, a novel medium that is expected to wield growing influence over future elections.
“Deepfakes” (videos, audio recordings, photos, and other content created through AI to impersonate someone without their consent) have raised concerns over their ability to interfere with elections. Such technology has been used to impersonate President Joe Biden and Taylor Swift, for example.
Here’s how battleground states are regulating the use of AI in the political arena.
Swing states considering laws
Pennsylvania, North Carolina, and Ohio are considering legislation that aims to regulate the use of AI in political elections.
Several Pennsylvania state representatives have spearheaded AI bills. These proposals include instituting a task force to study the matter, creating a registry of companies developing AI software, requiring disclosures of any AI content, and banning videos using AI-created deepfakes without permission.
Republican state Sen. Tracy Pennycuick recently led a bipartisan measure to prohibit “the dissemination” of deepfakes, according to a press release. Pennycuick told Technical.ly last month that her bill looks to “strike a balance between free speech and artificially putting words into someone’s mouth.”
North Carolina is also considering regulating AI’s use in political advertising. This comes as a parody campaign wielding AI-generated satirical content against Republican gubernatorial candidate Mark Robinson raises eyebrows in the state.
Included within Senate Bill 88 are limited AI regulations that would require any “advertisements in print media, on radio, or on television” produced using AI to include disclaimers. Offenders would be charged with a Class 1 misdemeanor.
However, the bill has stalled in the state House and faces opposition from some Democrats because it would also cover signature-matching provisions for mail-in ballots.
Ohio lawmakers have also considered legislation that requires AI content to be labeled with disclaimers.
Democratic Ohio state Rep. Joe Miller introduced House Bill 410 in February. His bill mandates disclosures on all political ads using AI and permits people harmed by deepfakes to take civil action. The legislation is now sitting in committee.
Swing states that have passed laws
Minnesota, Wisconsin, Michigan, and Arizona have already passed legislation regulating AI in the political arena.
While Minnesota hasn’t historically been a swing state, it has risen to the forefront in recent years after former President Donald Trump lost the state by only a small margin in 2016.
The North Star State passed AI legislation in 2023 that carries harsh consequences for offenders. It penalizes not only the creator but also the broadcaster, making media companies potentially liable for all AI content on their platforms.
Democratic-Farmer-Labor Party state Rep. Zack Stephenson is pushing another bill that would enact even tougher penalties. HF3625 would institute consequences such as forfeiture of office if a candidate is convicted of a deepfake crime.
Meanwhile, Wisconsin’s AI law primarily oversees only the creators of the content. A bill signed into law this year requires campaign materials that use AI-generated media to include a disclaimer at both the start and end of an ad. Offenders face a fine of up to $1,000. An amendment attached to the bill excludes broadcasters from liability.
In Michigan, when Gov. Gretchen Whitmer (D-MI) signed House Bill 5141 into law last December, the Great Lakes State became the fifth state to regulate the political use of AI.
The law stipulates that people who create political media using AI must provide disclaimers. However, it also provides a caveat that exempts broadcasters from liability if they “adopt and make available to advertisers a policy that prohibits the use of AI in political ads without complying with the disclosure or other requirements imposed by the state law.”
Arizona’s AI law, which was signed into law in May, carries some nuances unique to the state.
Republican state Rep. Alexander Kolodin’s House Bill 2394 protects candidates who are depicted nude or engaged in a sexual act, artificially shown committing a crime, or expected to suffer personal or financial hardship from deepfakes. The law also states that if the publisher of the deepfake removes it from the platform within 21 days of being asked by the court, no damages are available to the victim.
Kolodin said he took a “modest approach” because he isn’t interested in trying to “overregulate” deepfakes.
“You miss out on a lot of poignant satire that might illuminate things about politics for the public,” Kolodin told 13 News. “You miss out on a lot of good criticism and valid criticism of elected officials and you don’t want to do that.”
Another Arizona law enacted in May requires that deepfakes distributed within 90 days before an election must include a disclaimer. The law does not apply to content created as satire or parody.
Swing states that have no laws
Nevada, Georgia, and Virginia have yet to pass laws that govern how AI is used in political advertising.
But elected officials in Nevada are showing an interest in legislating the policy, according to the Las Vegas Review-Journal. Nevada Democratic Secretary of State Cisco Aguilar is working with the state’s attorney general to provide recommendations to lawmakers for the next legislative session.
Because the Nevada legislature only meets for six months out of every two years and all of its Assembly members are up for election every two years, it is difficult to predict what regulations could come from the swing state. A spokeswoman for Gov. Joe Lombardo (R-NV) said she isn’t aware of any AI proposals ahead of the next legislative session.
Georgia is also looking to pass harsh penalties for creating deepfakes. Earlier this year, a bipartisan group of lawmakers backed SB 392, which aimed to criminalize as “fraudulent election interference” the publication of “materially deceptive media within 90 days of an election.”
The Senate bill would make it a felony to use, create, or request a deepfake. Offenders could receive up to a five-year sentence in prison and up to a $50,000 fine. The American Civil Liberties Union of Georgia has fiercely opposed the bill over concerns that it violates the First Amendment right to freedom of speech.
A House version of the bill spearheaded by Republican state Rep. Brad Thomas passed by overwhelming margins in February. But both bills died in the state’s General Assembly this year. A spokesperson for Thomas said the legislature would likely take components of the bills and reincorporate them into fresh legislation in the next legislative session.
Virginia has yet to pass any laws regarding the use of AI in political campaigns. Though the state legislature considered a handful of bills on the matter earlier this year, none of them passed before the end of the 2024 legislative session.
However, Gov. Glenn Youngkin (R-VA) signed an executive order establishing a task force to “examine the use of artificial intelligence by public bodies” in the state.
On the national stage, after a deepfake video targeted Rep. Rob Wittman (R-VA) last December, Youngkin told the Washington Examiner that he’s pushing Congress to establish “guidelines for responsible use of artificial intelligence.” He’s also pushing two bills, including the bipartisan NO FAKES Act, which aims to “protect the voice and likeness of all individuals from unauthorized, computer-generated recreations from generative AI and other technologies.”