Elections

As AI threatens election security, the search for solutions takes on new urgency

A panel on artificial intelligence and its threat to the American electoral process focused on high-tech challenges and ways to solve them.

Panelists (L-R) Lara Putnam, Bret Schafer, Philip Hensley-Robin

Hilary Danailova

When Philadelphia City Councilmember-at-Large Rue Landau and the Committee of Seventy, a nonpartisan good-government organization, convened a panel of experts on Thursday to discuss artificial intelligence and election security, it was not just to highlight high-tech challenges; it was also to explore potential solutions.

After an introduction from state Rep. Tarik Khan – the Philadelphia Democrat who co-sponsored a recent bill regulating artificial intelligence in campaign materials – the gathering, held at Philadelphia’s National Liberty Museum, heard from Omar Sabir, chair of the Philadelphia City Commissioners; Lara Putnam, a University of Pittsburgh professor who co-leads the Civic Resilience Initiative of Pitt’s Institute for Cyber Law, Policy and Security; Philip Hensley-Robin, executive director of Common Cause Pennsylvania; and Bret Schafer, a senior fellow at the Alliance for Securing Democracy, where he heads the information manipulation team.

From the outset, Landau and Khan made the topic’s urgency clear. AI “is going to revolutionize how we learn, how we communicate, how we express ourselves,” the state representative told several dozen attendees. “With any technology, we have to make sure that it’s being properly regulated and that we’re not letting it become a monster … And it’s extremely important, as an elected official, that we’re making sure that we’re protecting our elections.”

Reminiscing about the obviously fake explosions in the movies of his ’90s youth, Khan drew a contrast with today’s deepfakes – such as a believably manipulated image from Florida Gov. Ron DeSantis’ Republican presidential primary campaign that purported to show former President Donald Trump embracing the pandemic-era public-health czar Anthony Fauci, a bête noire of anti-vaxxers.

“Election deniers invested big time in social media,” Sabir said, describing a Wild West of bad actors – from Russian saboteurs to domestic conspiracy theorists – all trying to influence American voters: “Anybody can say anything on there, and it will not be checked.”

Schafer, whose work is sponsored by the German Marshall Fund of the United States, told of how just a decade or so ago, Russians had to master English to be convincing to Americans online. “Now, the technology has allowed basically anyone to run ... a targeted local disinformation campaign,” he said. “You don’t need to have a troll factory.” Such campaigns are particularly effective in influencing down-ballot races, about which people tend to be less informed – and therefore more susceptible to false claims.

The panelists agreed that AI – and online influencers – had the potential for positive uses as well. Asked by Committee of Seventy Executive Director Lauren Cristella, the panel’s moderator, whether AI “is inherently bad or scary,” Hensley-Robin offered: “Any transformative technology like this has enormous potential, I think, for good or ill,” adding that while generative AI – technology that generates new content in response to prompts – can be used to deceive, it can also be employed to disseminate accurate information.

In Putnam’s view, education is a cornerstone of any anti-misinformation strategy. “We’re particularly emphasizing to people: You need to actively curate the sources of information that are reaching you,” she said, recommending that people follow reliable sources to keep their social media feeds accurate. Her other tip: “Don’t engage with false information or a false or inflammatory post. You’re just adding fuel to the fire in terms of its acceleration and reach ... Take that as a sign that there’s an information void out there, and push out accurate information from reliable sources instead.”

Sabir expressed optimism about increased investment in technology awareness and in recruiting younger Americans – even influencers – to educate their peers. “Organically, having conversations with younger people – they really know how to do the work,” he said.

For his part, Hensley-Robin put his faith in the power of American jurisprudence – both prophylactic regulation, such as Khan’s campaign materials bill, and legal recourse to handle violations. He predicted that legislation would evolve within the framework of the First Amendment, requiring “clear and conspicuous disclosure” of technological manipulation in political contexts.

As Putnam pointed out, efforts to control the most dangerous ramifications of AI are likely to be local and incremental rather than comprehensive and widespread. “There aren’t going to be any easy, individual, one-off fixes,” she said. Drawing a comparison with the public regulation of clean drinking water, Putnam added: “We need to have a shared public responsibility for creating a healthy information ecosystem – which our democracy needs.”