Generative AIs and the Challenge of Disinformation in the 2021 German Federal Elections

Anya Patel 1,2*, Viktor Keller 2,3† and Elena Eberhard 1,2†

1* Department of Computer Science, Easthaven University.
2 Department of Psychology, Universität Saphirbucht.
3 Department of Sociology, Universität Edelweißtal.

*Corresponding author(s). E-mail(s): apatel@easthaven.edu;
Contributing authors: viktor.keller@saphirbucht.de; elena.eberhard@edelweisstal.de;
† These authors contributed equally to this work.

Abstract

This is a completely fake article about the use of generative AI to create disinformation in the 2021 German federal elections. It was written by Google's Bard chatbot. Though some of the events and names of people and organisations are real, the document itself is a complete fabrication, and should be treated as such.

Keywords: This, article, is, fake

1 Introduction

The 2021 German federal elections were a watershed moment in German political history. The incumbent Christian Democratic Union (CDU) was defeated by the Social Democratic Party of Germany (SPD), marking the first time in 16 years that the CDU had not been in power. The election was also notable for the role that disinformation played in the campaign.

Disinformation is false or misleading information that is deliberately created or disseminated with the intent to deceive. It can take many forms, including fake news, deepfakes, and social media bots. Disinformation can have a significant impact on elections: it can erode trust in institutions, sow discord among voters, and influence electoral outcomes.

In the 2021 German federal elections, disinformation was spread by both foreign and domestic actors. Foreign actors, such as Russia, were found to be responsible for spreading a significant amount of disinformation about the election, and domestic actors, such as political parties and interest groups, were found to be spreading disinformation as well.

Disinformation had a significant impact on the election. It is estimated that up to 10 percent of voters were exposed to disinformation during the campaign, and disinformation contributed to a decline in trust in institutions such as the media and political parties.

The 2021 German federal elections showed that disinformation is a serious threat to democracy. It is important to develop strategies to combat disinformation and to protect elections from its harmful effects.

In this study, we explore the role of generative AIs in the spread of disinformation during the 2021 German federal elections. We examine how generative AIs were used to create and spread disinformation, and we assess the impact of that disinformation on the election. We also discuss the challenges of combating disinformation and propose strategies to protect elections from its harmful effects.

2 AI and Election Disinformation

The use of AI in disinformation has been a growing concern in recent years. AI can be used to create fake news, deepfakes, and social media bots, all of which can be used to spread disinformation.

Fake news is false or misleading information that is presented as news. It can be spread through social media, websites, and even traditional media outlets, and it is often used to influence public opinion or to damage the reputation of individuals or organizations.

Deepfakes are videos or audio recordings that have been manipulated using AI to make it appear as if someone is saying or doing something they did not actually say or do. Deepfakes can be used to spread disinformation about individuals or organizations, or to sow discord in society.

Social media bots are automated accounts that are used to spread disinformation. They can be used to create the illusion of public support for a particular cause or candidate, or to spread negative information about opponents.
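To make the bot discussion concrete, the following is a minimal sketch of the kind of account-level heuristic sometimes used as a first-pass filter for automated accounts. The Account fields and every threshold here are assumptions invented for this example; real bot detection combines many more signals and is not nearly this simple.

from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float         # average posting rate
    account_age_days: int        # days since the account was created
    duplicate_post_ratio: float  # share of posts that are near-duplicates

def bot_likeness_score(acct: Account) -> float:
    """Combine a few simple signals into a 0..1 heuristic score."""
    score = 0.0
    if acct.posts_per_day > 50:          # implausibly high posting rate
        score += 0.4
    if acct.account_age_days < 30:       # very new account
        score += 0.3
    if acct.duplicate_post_ratio > 0.5:  # mostly copy-pasted content
        score += 0.3
    return min(score, 1.0)

# Example: a week-old account posting 120 near-identical messages a day
# scores 1.0 under this toy heuristic, while a typical account scores 0.0.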
AI has been used to spread disinformation in elections around the world. In the 2016 US presidential election, Russia used AI-assisted fake news and social media bots in an attempt to influence the outcome, and similar Russian disinformation campaigns were reported in the 2017 French presidential election, the 2019 European Parliament election, and the 2020 US presidential election.

AI is a powerful tool that can be used for good or for ill. It is important to be aware of the potential risks of AI disinformation and to take steps to combat it.

AI-generated fake news can be very difficult to distinguish from real news, because AI can generate realistic-looking text, images, and videos. Deepfakes can likewise be very difficult to distinguish from real videos, because AI can manipulate footage so that someone appears to say or do something they never said or did. AI-driven social media bots can be very difficult to distinguish from real people, because they can generate realistic text and interact with other users in convincing ways.

The challenges of combating AI disinformation are therefore significant: it is difficult to detect and filter out, because it can be made highly realistic; difficult to regulate, because it can be hard to trace back to its source; and difficult to counter, because it can be highly effective at influencing public opinion.

There are nonetheless a number of ways to combat AI disinformation. One is to educate people about the dangers of disinformation and how to identify it. Another is to develop tools that can detect and filter out disinformation. Finally, it is important to support fact-checking organizations and to hold social media companies accountable for the spread of disinformation on their platforms. The use of AI in disinformation is a serious threat to democracy, and it is important to take these steps to protect elections from its harmful effects.
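As one illustration of the "tools that can detect and filter" point above, a common first-pass signal is to score a text's perplexity under a language model: text sampled from a generic language model often has unusually low perplexity under a similar model. The sketch below computes that signal only; it is a weak heuristic, not a reliable classifier. The choice of the gpt2 model and the Hugging Face transformers API are assumptions of this example, not tools named in the study.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model (lower can suggest LM-like text)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return float(torch.exp(loss))

# Usage: compare perplexity("...post text...") against a threshold calibrated
# on known human-written posts, and flag only the extreme low end for review.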
3 Methodology

The aim of this study is to examine the prevalence of AI-generated content in social media posts about the 2021 German federal elections. The study looks at the prevalence of deepfake videos, fake articles, and AI-generated images in these posts, and at whether they were used in misleading ways. The study uses a mixed-methods approach, combining quantitative and qualitative methods.

The quantitative analysis uses content analysis to measure the prevalence of AI-generated content in social media posts. The content analysis is conducted on a sample of posts from the 2021 German federal elections, drawn from a variety of platforms, including Facebook, Twitter, and Instagram. For each post, it records the type of AI-generated content (deepfake video, fake article, or AI-generated image), the source of the content, and the date of the post (a sketch of this coding tally appears at the end of this section).

The qualitative analysis uses thematic analysis to identify the ways in which AI-generated content was used misleadingly. The thematic analysis is conducted on a sample of posts drawn from the same platforms as the quantitative analysis, and it identifies the themes that emerge from the data, such as the purpose of the AI-generated content, its target audience, and its impact.

The analytical method used to determine whether something is 'misleading' is based on the following criteria:

• The content is false or misleading.
• The content is intended to deceive or manipulate the reader.
• The content is presented in a way that makes it difficult to distinguish from real information.
• The content is used to promote a particular agenda or point of view.

The study also considers the following factors in determining whether something is 'misleading':

• The source of the content.
• The context in which the content is shared.
• The audience of the content.
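As referenced above, the following is a hedged sketch of the quantitative coding tally: counting manually coded posts by platform, content type, and month. The CSV layout and field names are assumptions made for this example; the study does not describe its actual codebook or data pipeline in further detail.

import csv
from collections import Counter

def tally_coded_posts(path: str) -> Counter:
    """Count coded posts by (platform, content_type, year-month)."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # one manually coded post per row; `date` is ISO "YYYY-MM-DD"
            key = (row["platform"], row["content_type"], row["date"][:7])
            counts[key] += 1
    return counts

# Example: print the five most common (platform, type, month) combinations.
# for key, n in tally_coded_posts("coded_posts.csv").most_common(5):
#     print(key, n)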
4 Findings

4.1 Fake news articles

The fake news articles found during the study often used techniques similar to those of legitimate news articles: they had similar headlines, fonts, and layouts, and they often included quotes from experts or politicians. However, they contained false or misleading information.

One specific example shared during the election period was an article claiming that the German government was planning to introduce a new law requiring all citizens to wear a tracking device. The article was completely false, but it was shared thousands of times on social media. It was designed to look as if it came from a legitimate news source, but it was actually published by a website known for spreading fake news, and the quotes from experts and politicians that it included were taken out of context or entirely fabricated. The article also linked to a fake government website designed to look like the real one, which included a form that people could fill out to register for the tracking device; the form was in fact a phishing scam designed to steal personal information. The article was shared on Facebook by a number of groups and pages known for spreading fake news, and by several political parties running in the election, including the Alternative for Germany (AfD) and the Free Democratic Party (FDP).

4.2 Deepfake videos

The deepfake videos found during the study often used techniques similar to those of legitimate videos: they had similar lighting and sound quality, and they often included footage of real people. However, they had been manipulated to make it look as if people were saying or doing things they did not actually say or do.

One specific example shared during the election period was a video showing Friedrich Merz saying that he was planning to give away all of Germany's money to other countries. The video was completely fake, but it was shared thousands of times on social media. It was created by using AI to manipulate footage of Merz, altering his mouth movements and facial expressions so that he appeared to say things he never said. The video was shared on Twitter by users known for spreading fake news, and by several foreign governments attempting to influence the outcome of the election, including Russia and Iran.

4.3 AI-generated images

The AI-generated images found during the study often used techniques similar to those of legitimate images: they had similar lighting and color quality, and they often depicted real people or places. However, they were created entirely by AI and did not show real events.

One specific example shared during the election period was an image showing a German politician shaking hands with a foreign leader. The image was completely fake, but it was shared thousands of times on social media. It was created by training an AI on a dataset of real photos of the politician and the foreign leader, and then using the model to generate a photorealistic image of the two shaking hands, even though they never actually met in person.

The study found that these types of AI-generated content were shared on a variety of social media platforms, including Facebook, Twitter, and Instagram, and by a variety of actors, including individuals, political parties, and foreign governments.
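A practical question raised by these findings is how re-shares of a known fabricated image can be traced across platforms even after recompression or resizing. The sketch below shows one common approach, perceptual hashing, as a hedged illustration; it assumes the Pillow and imagehash packages, and the 5-bit distance threshold is an illustrative guess rather than a tuned value.

from PIL import Image
import imagehash

def matches_known_fake(candidate_path: str, known_fake_path: str) -> bool:
    """True if the candidate image is a near-duplicate of the known fake."""
    h1 = imagehash.phash(Image.open(candidate_path))
    h2 = imagehash.phash(Image.open(known_fake_path))
    return (h1 - h2) <= 5  # Hamming distance between 64-bit perceptual hashes

# Perceptual hashes change little under re-encoding or mild resizing, so the
# same fabricated image can often be matched across many shared copies.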
5 Discussion

The study found that AI-generated content was used to spread false or misleading information during the 2021 German federal elections: false or misleading information about a wide range of German politicians, including Olaf Scholz, Armin Laschet, Annalena Baerbock, and Friedrich Merz, and about a wide range of issues, including the economy, immigration, and the environment.

These findings suggest that AI-generated content is a serious threat to democracy. It can be used to spread false or misleading information designed to deceive voters, to manipulate public opinion, and to influence the outcome of elections. It is important to be aware of these dangers and to take steps to combat the spread of such content.

The findings also suggest that social media platforms need to do more to combat the spread of AI-generated content: they need to develop tools that can identify and remove it, and they need to educate their users about its dangers.

The findings are, finally, a reminder that democracy is fragile. Democracy requires an informed and engaged citizenry and a free and fair press, and AI-generated content poses a serious threat to both of these pillars.

We therefore offer the following specific recommendations for combating the spread of AI-generated content:

• Social media platforms should develop tools that can identify and remove AI-generated content (see the sketch at the end of this section). The study found that such content was often difficult to distinguish from real content, so platforms need to invest in detection and removal tooling.
• Social media platforms should educate their users about the dangers of AI-generated content, including how to identify it and how to avoid being misled by it. The study found that many users were not aware of these dangers.
• Governments should regulate the use of AI-generated content, for example by requiring social media platforms to remove such content or to label it as such, and possibly by regulating the development of generative AI itself. The study found that AI-generated content is becoming increasingly sophisticated, and governments need to consider how to regulate this technology to protect democracy.
• Journalists should be trained to identify AI-generated content and to report on it accurately and responsibly. The study found that many journalists were not aware of its dangers.
• The public should be educated about the dangers of AI-generated content and how to identify it, so that people can protect themselves from being misled. The study found that many members of the public were not aware of these dangers.

Combating the spread of AI-generated content is a complex challenge, but it is one that must be addressed if we are to protect democracy. The recommendations above provide a starting point for a comprehensive strategy.
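As referenced in the first recommendation, the sketch below illustrates one minimal policy a platform could adopt: attach a provenance label when a detector is sufficiently confident, rather than silently removing the post. The Post type, the detector score, and the threshold are all assumptions of this example, not a description of any real platform's pipeline.

from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    labels: list[str] = field(default_factory=list)

def moderate(post: Post, ai_score: float, threshold: float = 0.9) -> Post:
    """Attach a provenance label when the detector is sufficiently confident."""
    if ai_score >= threshold:
        post.labels.append("Suspected AI-generated content")
    return post

# Labeling keeps the post visible while warning readers, which avoids the
# over-removal risk that comes with imperfect detectors.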
6 Conclusions

The study found that AI-generated content was used to spread false or misleading information during the 2021 German federal elections, targeting a wide range of German politicians, including Olaf Scholz, Armin Laschet, Annalena Baerbock, and Friedrich Merz, and a wide range of issues, including the economy, immigration, and the environment. These findings suggest that AI-generated content is a serious threat to democracy: it can be used to deceive voters, to manipulate public opinion, and to influence the outcome of elections. Social media platforms need to do more to combat its spread, both by developing tools that can identify and remove AI-generated content and by educating their users about its dangers. Democracy requires an informed and engaged citizenry and a free and fair press, and AI-generated content threatens both. It is important to be aware of these dangers and to take steps to combat the spread of AI-generated content.

Acknowledgments. Thank you to Bard for writing this article, and to Google for developing Bard.