Trump's Deepfake Strategy: Eroding Truth with a World of Lies
We live in a strange world. On one side, companies, research groups, and governments are working tirelessly and spending massive amounts of money to identify and stop deepfakes—digitally manipulated content. Their goal? To protect us from a flood of false information, whether it's text, images, or videos. On the other side, we have a presidential candidate who is exploiting this technology for his own political gain.
The deepfakes shared by former President Donald Trump are outrageously obvious. Whether it's an image of Kamala Harris giving a speech in front of a crowd brandishing Soviet-era hammer-and-sickle banners or some other outlandish scenario, these fakes are so over the top that it's clear they're not real. But herein lies the danger: once we grow used to accepting things that are obviously false, it becomes easier to slip in things that aren't so obvious. That's where the real threat lies.
There will always be people who can't tell whether an image is AI-generated. Different people can be fooled at different times, by different images. But Trump's use of deepfakes highlights a terrifying way in which this technology can be wielded strategically to sow confusion in the information wars.
But it's not the technology that's the problem; it's the people using it. Trump's use is a perfect example: a clever two-pronged strategy. First, post the AI-generated images you find online, giving people content they can share. Then, whenever you see an image you don't like, dismiss it as a deepfake. Share the lie. Dismiss the truth. It's a modern version of "fake news," a strategy designed to erode our confidence in what we see, hear, and read. Aggressively pushing out falsehoods while labeling truths as lies blurs the lines between reality and fiction. At scale.
Platforms like X (formerly Twitter) are playing a role in this too. With tools like Grok, which has very few guardrails, it's possible to generate images that other systems might block. Grok is more flexible, but flexibility bends both ways. When you take the guardrails off, you can have more fun, but those with malicious intent can do even more damage.
Unfortunately, we can't depend on regulations to fix this anytime soon; there's no legislation that can be put in place in real time. So, what can we do? It's easy. Every time you see a deepfake, instead of reposting it, download it. Then stamp a big red "Fake" on it and share that. In a world flooded with falsehoods, we need people who are willing to call out lies, and we need to be those people.
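For readers who want to automate that stamp, here is a minimal sketch using Python's Pillow library. The filenames, the stamp_fake helper, and the DejaVuSans-Bold font are all assumptions for illustration, not part of any particular tool.

```python
from PIL import Image, ImageDraw, ImageFont

def stamp_fake(input_path: str, output_path: str) -> None:
    """Overlay a large red 'FAKE' label on an image before resharing it."""
    img = Image.open(input_path).convert("RGB")
    draw = ImageDraw.Draw(img)

    # Scale the label to roughly a quarter of the image width.
    font_size = max(24, img.width // 4)
    try:
        # Hypothetical font choice; any bold TrueType font will do.
        font = ImageFont.truetype("DejaVuSans-Bold.ttf", font_size)
    except OSError:
        # Fall back to Pillow's small built-in bitmap font if it isn't installed.
        font = ImageFont.load_default()

    text = "FAKE"
    # Measure the rendered text so the stamp can be centered.
    bbox = draw.textbbox((0, 0), text, font=font)
    text_w, text_h = bbox[2] - bbox[0], bbox[3] - bbox[1]
    position = ((img.width - text_w) // 2, (img.height - text_h) // 2)

    draw.text(position, text, fill=(255, 0, 0), font=font)
    img.save(output_path)

# Hypothetical filenames for the downloaded fake and the labeled copy.
stamp_fake("deepfake.jpg", "deepfake_labeled.jpg")
```

Centering a large label over the middle of the image is a deliberate choice: a stamp in a corner can be cropped out before the image is reshared.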
Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program