Meta's Nick Clegg downplays AI election fears

April 11, 2024

During a recent Meta AI Day event in London, Meta’s President of Global Affairs, Nick Clegg, tried to calm fears about the use of generative AI to influence political views. The concern is that AI could be used to generate text, speech, images, and video that would then be injected into the political discourse.

Deepfakes of politicians doing things they never did or saying things they never said could be mixed in with other content. Clegg’s argument is that we haven't seen much of this so far, so we shouldn’t be that worried about it.

Remember the early days of the pandemic? There were states saying, “I know you're worried that COVID-19 will spread exponentially, but so far, we haven't seen it here.” It feels like the same argument is being made here. 

The fact that we haven’t seen a tsunami of generated political content doesn’t mean we shouldn't scrutinize very, very carefully the content that is being produced. In fact, we have already seen some political deepfakes, and they have been somewhat effective. We need to assume that anything we see may be AI-generated and be skeptical of everything we look at.

According to Meta’s chief AI scientist Yann LeCun, open-source models will fix this problem. That is, by putting the technology in everybody's hands, as opposed to locking it up in proprietary models such as OpenAI's, we'll be able to build tools that recognize when content is false or machine-generated. This is somewhat self-serving, in that openness is part of Meta's business approach. They want us to think about building an ecosystem of tools that can recognize false information and protect us from it.

Another interesting thing that LeCun said was that everything we do online will be mediated through AI assistants. I actually believe this. The question now is, where are the dangers? Where are the possible harms? Because if everything is mediated through a set of AI systems, and those systems have points of view or cultural biases, how are we going to make sure that we're not skewing people's interactions with the machine based upon those perspectives?

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program