By Zachary Roth & John Cole | Pennsylvania Capital-Star

This year's presidential election will be the first since generative AI (a form of artificial intelligence that can create new content, including images, audio, and video) became widely available. That's raising fears that millions of voters could be deceived by a barrage of political deepfakes.
But while Congress has done little to address the issue, states are moving aggressively to respond, though questions remain about how effective any new measures to combat AI-created disinformation will be.
"I think we're at a point where we really need to keep an eye on AI being exploited by bad faith actors to spread election misinformation," Pennsylvania Secretary of the Commonwealth Al Schmidt told the Capital-Star.
While there could be potential benefits from AI down the road when it comes to voter education, he added, "we saw in 2020 how easily lies spread simply from a tweet, or an email, or a Facebook post. AI has the potential to be far more convincing when it comes to misleading people. And that's a real concern of mine."
Last year, a fake, AI-generated audio recording of a conversation between a liberal Slovakian politician and a journalist, in which they discussed how to rig the country's upcoming election, offered a warning to democracies around the world.
Here in the United States, the urgency of the AI threat was driven home in February, when, in the days before the New Hampshire primary, thousands of voters in the state received a robocall with an AI-generated voice impersonating President Joe Biden, urging them not to vote. A Democratic operative working for a rival candidate has admitted to commissioning the calls.
In response to the call, the Federal Communications Commission issued a ruling restricting robocalls that contain AI-generated voices.
Some conservative groups even appear to be using AI tools to assist with mass voter registration challenges โ raising concerns that the technology could be harnessed to help existing voter suppression schemes.
"Instead of voters looking to trusted sources of information about elections, including their state or county board of elections, AI-generated content can grab the voters' attention," said Megan Bellamy, vice president for law and policy at the Voting Rights Lab, an advocacy group that tracks election-related state legislation. "And this can lead to chaos and confusion leading up to and even after Election Day."
Disinformation worries
The AI threat has emerged at a time when democracy advocates already are deeply concerned about the potential for "ordinary" online disinformation to confuse voters, and when allies of former president Donald Trump appear to be having success in fighting off efforts to curb disinformation.
But states are responding to the AI threat. Since the start of last year, 101 bills addressing AI and election disinformation have been introduced, according to a March 26 analysis by the Voting Rights Lab.
Pennsylvania state Rep. Doyle Heffley (R-Carbon) sent out a memo on March 12 seeking co-sponsors for legislation that would prohibit the use of artificially generated voices for political campaign purposes and establish penalties for those who violate the ban.
He told the Capital-Star his legislation isn't about disallowing robocalls from campaigns, but instead would prohibit using AI to make voters think they are having personalized conversations with the candidates.
"This is brand new and emerging technology," Heffley added. "So I think we need to set boundaries about what is ethical and what isn't."
A bill from state Rep. Chris Pielli (D-Chester) that would require a disclosure on content generated by artificial intelligence was passed by the House Consumer Protection, Technology & Utilities Committee by a 21-4 margin last week.
"This really is bipartisan or should be perceived as a bipartisan issue," Pielli told the Capital-Star. "I mean there's nothing more sacred than keeping our elections fair and free and not being tampered with. And you know, with just three seconds of your voice recorded, current AI technology can have you doing a political speech that you've never done."
Heffley said he was concerned that it would be difficult to get AI legislation passed this session, given the divided Legislature. He added that heโs willing to work with anybody on the matter.
Pielli was a bit more optimistic. "This is a threat. This is a clear and present danger to our republic, to our democracy, our elections," he said. "And I think both sides will be able to see this and I'm hoping that we will pull together like we always have in the past to face this threat and to protect our citizens."
There may be models from other states for Pennsylvania to follow.
On March 27, Oregon became the latest state (after Wisconsin, New Mexico, Indiana and Utah) to enact a law on AI-generated election disinformation. Florida and Idaho lawmakers have passed their own measures, which are currently on the desks of those states' governors.
Arizona, Georgia, Iowa and Hawaii, meanwhile, have all passed at least one bill (in the case of Arizona, two) through one chamber.
As that list of states makes clear, red, blue, and purple states all have devoted attention to the issue.
States urged to act
Meanwhile, a new report on how to combat the AI threat to elections, drawing on input from four Democratic secretaries of state, was released March 25 by the NewDEAL Forum, a progressive advocacy group.
"(G)enerative AI has the ability to drastically increase the spread of election mis- and disinformation and cause confusion among voters," the report warned. "For instance, 'deepfakes' (AI-generated images, voices, or videos) could be used to portray a candidate saying or doing things that never happened."
The NewDEAL Forum report urges states to take several steps to respond to the threat, including requiring that certain kinds of AI-generated campaign material be clearly labeled; conducting role-playing exercises to help anticipate the problems that AI could cause; creating rapid-response systems for communicating with voters and the media, in order to knock down AI-generated disinformation; and educating the public ahead of time.
Secretaries of State Steve Simon of Minnesota, Jocelyn Benson of Michigan, Maggie Toulouse Oliver of New Mexico and Adrian Fontes of Arizona provided input for the report. All four are actively working to prepare their states on the issue.
Loopholes seen
Despite the flurry of activity by lawmakers, officials, and outside experts, several of the measures examined in the Voting Rights Lab analysis appear to have weaknesses or loopholes that may raise questions about their ability to effectively protect voters from AI.
Most of the bills require that creators add a disclaimer to any AI-generated content, noting the use of AI, as the NewDEAL Forum report recommends.
But the new Wisconsin law, for instance, requires the disclaimer only for content created by campaigns, meaning deepfakes produced by outside groups but intended to influence an election (hardly an unlikely scenario) would be unaffected.
In addition, the measure is limited to content produced by generative AI, even though experts say other types of synthetic content that don't use AI, like Photoshop and CGI (sometimes referred to as "cheap fakes"), can be just as effective at fooling viewers or listeners, and can be more easily produced.
For that reason, the NewDEAL Forum report recommends that state laws cover all synthetic content, not just content that uses AI.
The Wisconsin, Utah, and Indiana laws also contain no criminal penalties (violations are punishable by a $1,000 fine), raising questions about whether they will work as a deterrent.
The Arizona and Florida bills do include criminal penalties. But Arizonaโs two bills apply only to digital impersonation of a candidate, meaning plenty of other forms of AI-generated deception โ impersonating a news anchor reporting a story, for instance โ would remain legal.
And one of the Arizona bills, as well as New Mexico's law, applies only in the 90 days before an election, even though AI-generated content that appears before that window could still affect the vote.
Experts say the shortcomings exist in large part because, since the threat is so new, states donโt yet have a clear sense of exactly what form it will take.
"The legislative bodies are trying to figure out the best approach, and they're working off of examples that they've already seen," said Bellamy, pointing to the examples of the Slovakian audio and the Biden robocalls.
"They're just not sure what direction this is coming from, but feeling the need to do something."
"I think that we will see the solutions evolve," Bellamy added. "The danger of that is that AI-generated content and what it can do is also likely to evolve at the same time. So hopefully we can keep up."
Schmidt noted that Pennsylvania's Department of State has a page on its website focused on answering voters' questions, but that it was incumbent on officials to be proactive.
"I'm under no impression that millions of people in Pennsylvania are waking up every day to check the Department of State's website," Schmidt said. "It's important we not be silent. It's important that we rely on others operating in good faith who want to do their part to strengthen our democracy by encouraging voter participation and education."
Pennsylvania Capital-Star is part of States Newsroom, a network of news bureaus supported by grants and a coalition of donors as a 501(c)(3) public charity. Pennsylvania Capital-Star maintains editorial independence. Contact Editor Kim Lyons for questions: info@penncapital-star.com. Follow Pennsylvania Capital-Star on Facebook and Twitter.


