
Tracking Political Deepfakes: New Database Aims to Inform, Inspire Policy Solutions

Political groups, campaigns, and even candidates themselves are posting and sharing deepfakes (digitally altered audio, images, and videos) in an attempt to influence voters ahead of the US presidential election. As these generative artificial intelligence (AI) technologies become more common, researchers are now tracking their proliferation through a database of political deepfakes.

The creators of the Political Deepfakes Incidents Database (Christina Walker, a Purdue University PhD candidate in political science; Daniel Schiff, Purdue assistant professor of technology policy; and Kaylyn Jackson Schiff, Purdue assistant professor of political science) won the inaugural Northwestern Center for Advancing Safety of Machine Intelligence (CASMI) AI Incidents and Best Practices Paper Award and will present their findings at the Conference on Innovative Applications of Artificial Intelligence (IAAI-24) on Feb. 23 in Vancouver, Canada. The work is detailed in their forthcoming paper, “Merging AI Incidents Research with Political Misinformation Research: Introducing the Political Deepfakes Incidents Database.”

“This work is an example of exactly what we need to do if we are to establish genuine awareness of the problems and potential harms caused by our technologies,” said Dr. Kristian Hammond, CASMI Director and Bill and Cathy Osborn Professor of Computer Science. “Resources like this provide policymakers with real examples of real harms that they can respond to with real solutions instead of vague concerns that result in unfocused rules and off-point regulations.”

The purpose of the database is to inform policymakers, researchers, journalists, and the public about deepfakes to enable further research, raise awareness, and aid in regulation. Deepfakes must have social or political significance to be admitted into the database.

“It was a big motivator for us when we started seeing more deepfakes and AI-generated content in political ads, considering there are laws such as the Communications Act of 1934 stating that broadcast networks cannot refuse to run a campaign ad,” Walker said. “This really does matter for politics, and it might matter in a different way than the general misinformation we’ve seen in recent years.” 

Classifying the Data: ‘Subjectivity Is an Important Consideration’ 

While data collection began in June 2023, the database includes deepfake incidents dating back to 2017. So-called “cheapfakes,” or non-AI manipulations of multimedia content, are also in the database. The listed deepfakes are posted in English and primarily hail from the United States, but there is a plan to expand to other languages and countries.

The database, considered a work in progress, is currently hosted on the platform Airtable. Undergraduate students who serve as volunteer coders manually enter and analyze each deepfake incident, filling out a standardized form to categorize the data. One possibility the team is exploring is automating parts of this process as the database is refined. As of Jan. 26, 2024, there are 114 deepfake incidents listed in the database.

The database includes descriptors for each deepfake, such as the URL and how widely it was seen or shared on social media. It also lists the original source, the sharer, the person or group the deepfake targeted, and theoretical indicators. These sometimes-subjective theoretical indicators include context about the deepfake (was it shared at a politically convenient time?), the type of harm it may cause, and the policy narrative (who is painted as the hero, and who is the villain?).
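Based on those descriptors, a single coded record might look roughly like the following. This is a minimal illustrative sketch in Python; the field names, types, and example values are assumptions for clarity, not the database's actual Airtable schema.

```python
from dataclasses import dataclass

@dataclass
class DeepfakeIncident:
    """Hypothetical shape of one coded incident record (illustrative, not the real schema)."""
    url: str                   # where the deepfake was posted
    shares: int                # how widely it was seen or shared on social media
    original_source: str       # who created or first posted the content
    sharer: str                # who spread it
    target: str                # person or group the deepfake targeted
    # Sometimes-subjective "theoretical indicators"
    politically_timed: bool    # shared at a politically convenient time?
    potential_harm: str        # type of harm it may cause
    narrative_hero: str        # who is painted as the hero
    narrative_villain: str     # who is painted as the villain
    coder_notes: str = ""      # interpretation notes kept for transparency

# Hypothetical example entry; all values are invented for illustration.
incident = DeepfakeIncident(
    url="https://example.com/post/123",
    shares=50_000,
    original_source="anonymous account",
    sharer="political action group",
    target="a presidential candidate",
    politically_timed=True,
    potential_harm="reputational",
    narrative_hero="the candidate's opponent",
    narrative_villain="the candidate",
    coder_notes="timing coincided with a primary debate",
)
```

The `coder_notes` field in this sketch mirrors the transparency practice the team describes below: keeping written interpretations and evidence so third parties can evaluate the coding.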

“The subjectivity is an important consideration,” said Daniel Schiff, co-director of the Governance and Responsible AI Lab (GRAIL). “For example, one discussion we had was about a deepfake in which former President Donald Trump is arrested by police. We had a discussion debating who the hero was and who the villain was.

“One way that we thought to approach subjectivity is through transparency. We are keeping notes of our own interpretations and the evidence we used. This makes it tractable for third parties to evaluate the evidence.”  

This was a lesson the authors learned from consulting other databases, like the AI Incident Database. They also learned about design improvements they can make to the website using software engineering.   

The Political Deepfakes Incidents Database also tracks false allegations of deepfakes that have political significance. For example, one genuine photo showing smoke rising from an Israeli attack in Gaza was wrongly depicted on social media as being AI-generated.

“It's really troubling that there's a worry that any image or video could be seen as a deepfake,” said Kaylyn Jackson Schiff, co-director of GRAIL. “People don't know what to trust. They don’t know, when they see an image or video now, whether it’s real or faked.” 

To assist with understanding the larger context surrounding a deepfake, the database has a communication goal category, which distinguishes deepfakes created with malicious intent from those with potentially benevolent intent. It shows that most of the deepfakes fall under the satire, entertainment, and reputational harm categories. And while some (potential) deepfakes are verified as authentic or false by external parties, the authenticity of a third of them (38) is either disputed or unknown.
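As a concrete illustration of those categories, the sketch below encodes the communication-goal and authenticity labels named in this article and checks the reported figure: 38 of 114 incidents, roughly a third, have disputed or unknown authenticity. The enum values are assumptions; the team's actual coding scheme may use different or additional labels.

```python
from enum import Enum

class CommunicationGoal(Enum):
    # Categories named in the article; the full coding scheme may include more.
    SATIRE = "satire"
    ENTERTAINMENT = "entertainment"
    REPUTATIONAL_HARM = "reputational harm"

class Authenticity(Enum):
    VERIFIED_AUTHENTIC = "verified authentic"
    VERIFIED_FALSE = "verified false"
    DISPUTED = "disputed"
    UNKNOWN = "unknown"

# Figures reported above: 114 incidents as of Jan. 26, 2024,
# 38 of which have disputed or unknown authenticity.
total_incidents = 114
unresolved = 38
print(f"{unresolved / total_incidents:.1%} disputed or unknown")  # 33.3% -- about a third
```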

“We've seen a shift in deepfakes going from more strategic or malicious to more entertainment,” Walker said. “It's become more recognized, more prevalent, and more accepted to share them as memes or satire.” 

How Anyone Can Use the Political Deepfakes Incidents Database 

The creators of the Political Deepfakes Incidents Database said it could be a valuable research tool. Journalists can use it to identify trends in online content. Political scientists can use it to measure whether, and to what extent, certain deepfakes might impact elections. The authors of the paper may also conduct follow-up studies or experiments to measure people’s attitudes and beliefs about the deepfakes they saw.

Kaylyn Jackson Schiff noted that deepfakes are currently receiving a lot of policy attention. The European Union (EU) is working to finalize its landmark AI Act, which would require disclosures regarding AI-generated content. China has rules restricting deepfakes. In the US, the White House secured a commitment from leading AI companies to watermark AI content, and several members of Congress have proposed federal legislation related to deepfakes. GRAIL was recently invited to join the new US National Institute of Standards and Technology (NIST) AI Safety Institute Consortium, which is also tasked with addressing these issues.

“The database can help policymakers who want information about the impacts, the spread, the types of policy sectors that might be implicated by these technologies, and the types of narratives they might need to combat in how these images and videos are presented,” Kaylyn Jackson Schiff said. 

Members of the public can use the database to check the validity of questionable online content. Daniel Schiff said one of their goals is to improve people’s digital literacy. Another is to increase awareness of the benefits of tools like the database.

“In misinformation communications research, we know that technical fixes aren’t enough,” Daniel Schiff said. “This is part of the conversation. We're trying to expand beyond technical detection.”

Update (2/27/24): CASMI Director Kristian Hammond congratulated the winners of the AI Incidents and Best Practices Paper Award in a YouTube video.
