Deepfakes – AI in the Hands of Propaganda

Written by HWAG/UCMC analyst Anastasiia Ratieieva

Artificial intelligence has ushered in a new era of data interaction, and deepfakes are one of the most visible and dangerous areas of AI application. This technology not only enables impressive digital impersonations but also raises ethical concerns and fears about the manipulation of public perception. Deepfakes have become a tool for exerting influence and spreading uncertainty in a world where it is becoming ever more challenging to distinguish between reality and fiction.

This article will investigate how Russia employs fake news and artificial intelligence in its propaganda, as well as the methods and potential consequences of such interventions. In a world where deepfakes carry hidden threats to democratic values, we will also see how a fabricated reality can influence political decisions, public opinion, and security.

Deepfakes for everyone: technology in various cultural spheres

Interestingly, writer Viktor Pelevin introduced the concept of a political deepfake to Russian mass culture in 1999. “Reagan was already animated in his second term,” says the hero of his novel Generation P during a tour of a fake-video production facility.

Deepfake technology has advanced far beyond science fiction or artistic allegory and is now prevalent in popular culture, including the film industry. For example, in the latest installment of the legendary Star Wars saga, Princess Leia’s face was generated by AI after actress Carrie Fisher died before filming commenced.

Moreover, pornography has been and continues to be the most prolific source of deepfakes. Deepfake pornography primarily affects and targets women, and it raises profound concerns because it violates victims’ fundamental online safety and jeopardizes their reputation and psychological well-being. In 2019, 96% of all deepfake videos were pornographic in nature. This highlights the importance of both strengthening measures against deepfake pornography and developing technologies for detecting and tracking this type of content to protect users online.

“Another Body” is a documentary about a young woman who had to deal with the consequences of her face being used in a pornographic deepfake.
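For technically minded readers, one well-known line of detection research observes that generator upsampling often leaves telltale traces in the high-frequency part of an image’s spectrum. The sketch below is a toy illustration of that idea, not a working detector: the random placeholder arrays and classifier settings are assumptions standing in for a real labeled corpus.

```python
# Toy sketch of frequency-spectrum deepfake detection.
# Placeholder data only; a real detector needs a labeled image corpus.
import numpy as np
from sklearn.linear_model import LogisticRegression

def spectral_profile(gray_img: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Azimuthally averaged log power spectrum of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(gray_img))
    power = np.log1p(np.abs(f) ** 2)
    h, w = power.shape
    y, x = np.indices(power.shape)
    r = np.hypot(y - h // 2, x - w // 2)        # radial frequency per pixel
    edges = np.linspace(0, r.max() + 1e-9, n_bins + 1)
    return np.array([power[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Hypothetical training data: random arrays stand in for real/generated images.
rng = np.random.default_rng(0)
images = rng.random((200, 128, 128))            # placeholder "images"
labels = rng.integers(0, 2, size=200)           # 1 = generated, 0 = real

features = np.stack([spectral_profile(img) for img in images])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("suspicion score:", clf.predict_proba(features[:1])[0, 1])
```

The intuition behind this design is that a generator’s upsampling layers distort high frequencies in ways a simple classifier can pick up, which is why such methods need no access to the generating model itself.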

Technologies and tools for creating deepfakes are becoming more accessible and user-friendly, partly because their development has piqued the interest of both large technology companies and individual developers. For example, OpenAI’s DALL-E, which generates unique images from a text description, has gained enormous popularity among meme creators and artists. Midjourney, a paid service that lets users create high-quality, realistic artificial images, is another example.

Parrot AI is a website that lets you have text read aloud in the voice of a public figure. For an additional fee, the site will remove the disclosure that the audio is artificially generated.

Deepfakes can already be created without leaving Instagram or Facebook. In September 2023, Meta launched artificial intelligence integrated into its social networks (so far, the feature is only available in the United States). Furthermore, Meta has created 28 non-existent personas with the faces of famous people; they can be contacted via WhatsApp, Messenger, and Instagram and respond in line with their assigned roles.

Why are deepfakes so dangerous?

According to legal scholars Bobby Chesney and Danielle Citron, the development of deepfake technology is distinguished by the active participation of cutting-edge technology companies, academia, and government agencies.

The power of these tools rests not only on material resources but also on knowledge and advanced machine learning methods such as generative adversarial networks (GANs). In this context, the decisive factor is open access to knowledge, which allows even non-professional users to create forgeries. That openness is what makes deepfake technology so widely available and lets it spread and develop so rapidly.
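To make the mechanism behind GANs concrete, below is a minimal PyTorch sketch of the adversarial training loop that gives the method its name: a generator learns to produce fakes while a discriminator learns to expose them, each improving against the other. The architecture, dimensions, and hyperparameters are illustrative assumptions, not a description of any specific deepfake system.

```python
# Minimal GAN training loop (illustrative; toy fully-connected networks).
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28   # e.g. flattened 28x28 grayscale images

# Generator: maps random noise to a fake image.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: outputs a logit scoring how "real" an image looks.
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Train the discriminator to separate real from generated images.
    fake = G(torch.randn(n, LATENT_DIM)).detach()  # detach: update D only
    d_loss = loss_fn(D(real_batch), ones) + loss_fn(D(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(n, LATENT_DIM))
    g_loss = loss_fn(D(fake), ones)    # generator wants "real" verdicts
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The key point for this article’s argument is how little such a loop requires: it fits in a few dozen lines, and public tutorials and pretrained models lower the bar further, which is exactly the accessibility described above.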

Technology alone, however, is not enough for significant impact; cognitive biases do the rest. According to studies, fake stories and hoaxes spread several times faster than true news reports, making them highly effective tools of influence.

A study conducted during the 2016 US election campaign revealed an intriguing pattern: those who consume fake news and those who engage with fact-checking and debunking articles are often different audiences. This illustrates that a compelling fake news story only needs to reach the right audience to be believed.

Furthermore, studies show that people tend to perceive and disseminate information that supports their own beliefs, resulting in filtered information ‘bubbles.’ When deepfake technology is used in the context of cognitive biases, people are more likely to accept false information that fits their beliefs and reject true information that contradicts their stereotypes. 

In such cases, the skilled application of AI can significantly undermine trust in information, make distinguishing between fact and fiction difficult, and increase the risk of socio-political division in society. As a result of the advancement of deepfake technology, there is a need to comprehend and address its potential negative implications for the modern information space.

The Kremlin’s AI arsenal

In the development of artificial intelligence, Moscow is attempting to catch up with the global leaders. Vladimir Putin has repeatedly expressed support for AI (though it is unlikely that the Russian leader, who does not use the Internet or a mobile phone, understands what he is talking about). The Russian Federation conducts research both to develop deepfakes and to counter them. It also promotes political deepfakes: at one technology exhibition, for example, Putin was shown AI achievements demonstrated through a deepfake of Olaf Scholz.

The Russian Federation’s State Duma considered a draft law on criminal liability for deepfakes in June 2023, but the bill was not passed. Moscow appears to deliberately avoid regulating deepfakes in order to leave room for manipulation in its own propaganda.

For example, the Liberal Democratic Party of Russia (LDPR) is using artificial intelligence in an attempt to stay afloat after the death of its charismatic leader. For this purpose it runs the NeuroZhirinovsky Telegram channel, which answers voters’ questions with generated audio files in the late Vladimir Zhirinovsky’s characteristically odious manner, with content to match. Beyond preserving the party’s position, the deepfake ruse acts as a propaganda mouthpiece, producing messages in line with Kremlin rhetoric.

Deepfakes as an information warfare tool

Deepfakes are a weapon Russia uses for information operations. Such videos are realistic, voice Kremlin narratives through the mouths of its adversaries, and are intended to influence public consciousness and beliefs. Among the most recent “serious” deepfakes is one featuring the “participation” of Valeriy Zaluzhnyi, Commander-in-Chief of Ukraine’s Armed Forces.

The death of his aide, Hennadiy Chastiakov, sparked a flood of messages from Russian propagandists about a “confrontation” between Zelensky and Zaluzhnyi. This narrative is now central to the Kremlin’s rhetoric: propaganda is attempting to create the myth of a split among Ukrainian political elites. In support of this, a deepfake dummy in the image of Zaluzhnyi was created and distributed. The video’s text is replete with classic Russian propaganda narratives:

“I have accurate information that yesterday Zelensky eliminated my assistant. He is going to take me out and then betray the country. Zelensky is an enemy of our state. Because of him, we completely failed the counteroffensive, losing half a million of the nation’s best sons.”

The video encourages the public to protest and prompts the military to disobey orders and march on Kyiv, effectively advocating a military coup. In it, Zelensky is branded a “traitor” and a “liar,” with claims that he owns all Ukrainian media outlets.

A still from the phony video featuring Zaluzhnyi

The Telegram channels created for propaganda purposes initially distributed this video as genuine but later “recognized” it as a deepfake. According to Russian war correspondent Aleksey Zhivov, the video “was the work of some volunteers” but “made a lot of noise in the Ukrainian segment of social networks” (this claim is false, though bots did attempt to spread the deepfake in Telegram comments).

Another deepfake, using Maia Sandu’s face, aims to paint politicians inconvenient to the Russian regime as dangerous.

A video was recently shared online in which Maia Sandu is asked in English about possible mobilization in Moldova. Sandu allegedly responds in the affirmative, confirming that work is underway to strengthen the defense sector and that specialists are undergoing joint training with the Romanian military. She also allegedly discusses a possible Russian army invasion of Odesa and a readiness to accept refugees.

According to RT, pranksters posing as Ukrainian Prime Minister Shmyhal put questions to Moldova’s president. Sandu “said” that the US was assisting the Moldovan government in its fight against the opposition and “announced” that she was willing to give Ukraine the village of Giurgiulesti “for use for several years.”

Intelligence is artificial; laughter is real

The Kremlin also employs AI to generate fake memes that mock (sometimes with cruel irony) specific political figures and their decisions. Memes are an effective component of Russian propaganda because, as units of information, they are emotional and easily contextualized. Deepfakes can create meaning through irony, serving as a new type of caricature. Read the article “Very Black Humor: Memes as a Tool of Russian Propaganda” for more on the use of humor in disinformation.

A Russia Today promotional video released in 2020, during RT’s coverage of the US presidential election, is an example of such a spoof. The video shows Donald Trump walking around Moscow and discussing his relationship with the Kremlin.

A still from RT’s video showing Trump’s deepfake speaking while pointing to the Kremlin: “I love working with them. They pay me hundreds of millions of dollars.”

Another example of political opponents being mocked with artificial intelligence is a video in which figures from Europe and the United States decide what additional sanctions to impose on Russia. In the video, Joe Biden bangs his head against a wall, Olaf Scholz consults ChatGPT, and Rishi Sunak spins a wheel to determine what should be banned from Russia.

Russian propaganda uses mocking videos, and openly acknowledges some of them as deepfakes, to show its own population and its political opponents that it commands cutting-edge technology and knows how to use it. Moscow is proud of its propaganda’s effectiveness and frequently exaggerates its significance.

On her channel, Simonyan, for example, quoted The Daily Beast: “RT’s capabilities with the use of deepfake technology may surprise many.” Following the reaction to the new sanctions video, RT wrote that “Moscow is hitting the boundaries between reality and fiction with a sledgehammer.” The editorial staff covers every mention of its video, from articles dedicated to it to a mention in a report on internal threats by the US Department of Homeland Security.

According to the aforementioned war correspondent Zhivov, deepfakes should be used as a new psychological weapon: “You can use every information occasion in the Ukrainian segment and bombard them with deepfakes on any occasion. Ukrainian politicians’ weakness is their love of excessive publicity. It can be actively used to demoralize and mislead the enemy.” Furthermore, Zhivov employs the classic Russian propaganda tactic of mirroring: “The Americans are well aware of the deepfake community’s power and potential. That’s why, at the start of Election Day, they digitized Zelensky and the entire Kvartal 95 [the comedy show in which Zelensky appeared prior to his political career – note].”

To Summarise

Artificial intelligence and its products are already making their way into almost every aspect of people’s online lives, from popular culture to politics. Deepfakes are widespread and easily accessible, and they can be an effective weapon against ordinary citizens and top politicians alike.

Most people’s cognitive biases amplify the impact of realistic fake content on the beliefs of entire societies. Russia is attempting to exploit this in its hybrid war against Ukraine by creating and maintaining deepfake ruses with the “participation” of the Kremlin’s political adversaries. 

Deepfakes are one of Moscow’s tools for spreading disinformation. They aim, in particular, to reinforce the Kremlin’s narratives that are already popular among the population and to demonstrate the Kremlin’s command of cutting-edge technology.

The Russian Federation also uses AI to mount reputational attacks on its opponents by mocking them. Combating threats such as political deepfakes and fake news should include both the development of technologies for detecting artificial content and efforts to improve critical thinking. Refuting fake information after the fact achieves only partial success; complete success requires that the falsehood not be believed in the first place.