Digital Disinformation in the Age of AI

Published on 20.08.2024

Digital disinformation has many names and faces: alternative facts, fake news and deepfakes are just a few. AI-supported tools are making manipulated content ever more sophisticated and difficult to recognise, creating a whole new challenge when it comes to protecting digital identities.

Disinformation becomes especially volatile during a US presidential election campaign. This is particularly evident in the much-discussed image manipulations involving Republican candidate Donald Trump. Trump recently gave the impression that the prominent US pop singer Taylor Swift and some of her fans, known as “Swifties”, had already decided on a presidential candidate: he published a poster on his online service Truth Social that shows the singer calling on people to vote for him. Other pictures show women wearing T-shirts that read “Swifties for Trump”. The problem is that, according to experts, the Swift poster and other images were created or manipulated with artificial intelligence. Swift herself has criticised Trump in the past and campaigned for the current US President Joe Biden in 2020.

The case is an example of how content manipulated with the help of artificial intelligence represents a new dimension of digital disinformation: deliberately disseminated false information that is shared online thousands of times and blurs the distinction between reality and fake. It is becoming increasingly difficult to judge which sources and content are trustworthy and which are not. Using manipulated videos, it is even possible now to put false information into the mouths of real people. This is a threat to democracy and society – and to our digital identities.

What Disinformation Means

Disinformation is defined as the deliberate dissemination of false or misleading information. While the terms disinformation and misinformation are often used interchangeably in public discourse, there is a clear distinction between the two. The fundamental difference lies in the intention behind it: misinformation is circulated by mistake. The classic example is an inadvertent misreport in a newspaper, for instance due to a transposed digit, which is usually corrected.

In contrast, disinformation is based on intent. Its aim is to influence recipients’ opinions in a targeted manner and thus manipulate processes – even to the point of reinforcing social divisions. Disinformation is therefore playing an increasingly important role, especially in the political arena.

Manipulated Truths: Disinformation as a Political Instrument

The term “fake news” became popular during the 2016 US election campaign – not least because election winner Donald Trump appropriated it for his own purposes. Following the presidential election, a study by the University of Southern California showed that at least 400,000 bots had spread messages via Twitter during the election campaign. According to the report, around 20 per cent of all tweets about the election came from accounts that were not run by people but by automated programs – programs that deliberately spread falsehoods and disinformation via fake profiles. Many real users passed the bots’ messages on within social networks, failing to realise that the supposed Twitter users were not real people. These studies on disinformation in the US election campaign sparked a public debate about the influence of untrue information and the large number of fake accounts on election results.

“Especially in a digital world like today, anyone can post information and statements online thanks to the low threshold. Fake news has therefore become something of a threat to our society. It can influence political moods and cause lasting damage to social institutions, which is why we need to keep a close eye on it.”

Bettina Stark-Watzinger, Federal Research Minister (photo: Deutscher Bundestag / Achim Melde)

The fact is, disinformation can create mistrust in governments, distort public perception and establish “false truths”. Even journalists are finding it increasingly difficult to verify facts. Are leaked images truly genuine? Is the source trustworthy? Reporters often find it very difficult to answer such questions, largely because the creators of disinformation now use highly complex AI tools to create and conceal manipulated content – for example “deepfakes”: video recordings created or manipulated using artificial intelligence that make it possible to depict a person saying or doing things they have never actually said or done.

A lot has happened since that US election: AI-controlled bots have become even more authentic – and deepfakes even more realistic. Distinguishing fake profiles from real users has become almost impossible without technical aids. With a view to the 2024 US presidential election and the 2025 German federal election, the following questions are therefore once again becoming ever more urgent: What forms of disinformation is society increasingly confronted with? And which technologies help to expose AI-supported manipulation?

Types of Disinformation

There are now many different forms of disinformation, all of which are constantly changing and evolving. Much of the widespread disinformation is now circulating on social networks, which means that new formats for untrue claims are constantly emerging and spreading at enormous speed.

The best-known types of disinformation include the following:

Fake News

The term fake news is often used as a synonym for disinformation. At its core, fake news is simply disinformation disguised as serious journalistic reporting, giving recipients the impression that they are consuming trustworthy news.

Deepfakes

Deepfakes are images, videos and voice recordings of a person that have been falsified or manipulated with the help of artificial intelligence. For example, words can be put into people’s mouths, or actions can be attributed to them or placed in a different context. For a deepfake, a large amount of data is first collected in the form of images and videos of a person in order to train an algorithm – usually what is referred to as a “neural network”. In this way, the system learns to realistically imitate a person’s face and voice. This imitated version is then superimposed on footage of another person. The manipulated content is now so convincing that the human eye can hardly tell what is real and what is fake without technical aids.
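
To make the training-and-imitation idea concrete, here is a highly simplified sketch of the classic autoencoder face-swap approach: two decoders share one encoder, and swapping the decoders at inference time transfers one person’s pose and expression onto another’s face. The architecture, sizes and PyTorch usage below are illustrative assumptions, not any specific production system.

```python
# Minimal sketch of the classic autoencoder face-swap idea (PyTorch).
# Assumes aligned 64x64 face crops for persons A and B; real pipelines
# add face detection, alignment, GAN losses and heavy post-processing.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, shared_encoder: nn.Module):
        super().__init__()
        self.encoder = shared_encoder          # shared across identities
        self.decoder = nn.Sequential(          # one decoder per identity
            nn.Linear(256, 64 * 64 * 3), nn.Sigmoid()
        )

    def forward(self, x):
        return self.decoder(self.encoder(x.flatten(1))).view(-1, 3, 64, 64)

encoder = nn.Sequential(nn.Linear(64 * 64 * 3, 256), nn.ReLU())
model_a = Autoencoder(encoder)   # trained only on faces of person A
model_b = Autoencoder(encoder)   # trained only on faces of person B

# After training, the swap: encode a frame of person A, but decode it
# with B's decoder -- the output shows B's face with A's pose/expression.
frame_of_a = torch.rand(1, 3, 64, 64)
fake_frame = model_b(frame_of_a)
```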

Social Bots

These independent computer programs – the oft-cited bots – act as fake profiles in social networks and comment, like or create posts with untrue claims. It is often very difficult for other users to recognise whether the profile is actually that of a human or a bot.

Uncovering Disinformation

Irrespective of technical solutions, there are some basic strategies for distinguishing disinformation from trustworthy content. The following tips provide an initial overview of the assistance available; a comprehensive guide can be found on the website of the German Federal Government.

  1. First Verify, Then Share

    Before sharing content, it is advisable to check for obvious contradictions.
  2. Check the Authorship

    If the originator of a message is not trustworthy, the same will often apply to the content. Real accounts use clear names and have complete imprint details.
  3. Compare Sources

    If other, reputable media have also reported a news item, its level of credibility is higher.
  4. Fake Images? A Reverse Search Is Useful

    You can test whether an image matches the content of a message by using a reverse image search (a minimal example follows this list). Missing or incomplete image credits can also point to inconsistencies.
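
As a concrete illustration of the reverse-search idea in tip 4, the following sketch compares two images with a perceptual hash – one building block behind reverse image search services. It assumes the Pillow and imagehash Python packages are installed; the file names and threshold are placeholders.

```python
# Minimal sketch: comparing two images with a perceptual hash, one
# building block behind reverse image search. File names are placeholders.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original.jpg"))
suspect = imagehash.phash(Image.open("shared_on_social_media.jpg"))

# Small Hamming distance => visually near-identical images; a large
# distance suggests cropping, re-staging or heavier manipulation.
distance = original - suspect
print(f"Hamming distance: {distance}")
if distance > 10:   # threshold is a heuristic, tune per use case
    print("Images differ noticeably - check the context and source.")
```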

Manipulation of Digital Identities

However, AI-based forgery tools are not only used to spread disinformation in a political context. They are being used more and more to misuse digital identities as well. Disinformation is increasingly being published in the name of well-known personalities. With the help of deepfakes, it is now possible to create content that imitates a target person in real time, thereby overcoming biometric systems. This makes it possible for cyber criminals to open an account under a false name.

Disinformation in the Age of AI: Bundesdruckerei’s Involvement

Without suitable tools and systems, it is almost impossible to expose identities and content manipulated by AI. Researching the technological foundations and developing solutions is therefore essential to effectively combat digital disinformation. Current solutions turn the originators’ own means against them: platforms and tools that likewise operate with the aid of artificial intelligence. They tackle the threat from three directions – deep-learning-based multimodal analysis, biometrics and content credentials. The Bundesdruckerei Group is working intensively on AI applications and is involved in research projects and collaborations in all three areas in order to achieve security and trust in the digital space:


The publicly funded FAKE-ID research project employs both linear and AI-based methods to detect deepfakes. Special detectors have been developed as a proof of concept to make it easier to recognise false and manipulated identities. The research project has been running since May 2021 and is being funded by the Federal Ministry of Education and Research until the end of 2024. The project team consists of Bundesdruckerei GmbH, the Fraunhofer Heinrich Hertz Institute, the Berlin School of Economics and Law and BioID GmbH. In the first step, the consortium analysed possible attacks on and forgeries of video material and formally described the features and characteristics of real and fake identities. The second step involved a two-pronged approach: first, the recognition of individual features was trained; then artificial intelligence was trained on videos annotated as either “genuine” or “falsified” to classify them correctly. The result is a demonstrator in the form of a software platform that analyses the degree to which the tested moving-image material is authentic. The platform includes various deepfake detectors and is currently undergoing user tests to assess its practical suitability.
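
The following sketch shows what the second, AI-based step could look like in principle: a small binary classifier trained on frames annotated as “genuine” or “falsified”. The architecture, hyperparameters and data handling are illustrative assumptions, not the consortium’s actual models.

```python
# Hedged sketch: training a binary classifier on annotated video frames.
# Everything here is illustrative, not the FAKE-ID consortium's models.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                    # logit: > 0 leans "falsified"
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4)

def training_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """frames: (N, 3, H, W); labels: 1.0 = falsified, 0.0 = genuine."""
    optimizer.zero_grad()
    logits = classifier(frames).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```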

The demonstrator analyses image and video data streams at runtime, using trained algorithms to identify various suspicious indicators. These include, for example, artefacts of fake generators that are invisible to humans, anomalous eye movements such as unnatural eyelid closure, and insufficient depth of detail in the mouth and teeth. If the FAKE-ID demonstrator detects such inconsistencies or indications of manipulation, they are visualised and made comprehensible to the human user in the form of a risk and suspicion map. Human judgement is supported in this way, but not replaced.
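
To illustrate the fusion step, the sketch below combines several per-frame detector scores into a single suspicion score with a per-detector breakdown, loosely mirroring the risk and suspicion map idea. The detector names, weights and stand-in scores are assumptions for illustration only.

```python
# Illustrative fusion of detector scores into one per-frame risk value.
from typing import Callable

Frame = object  # placeholder type for a decoded video frame

def fuse_scores(frame: Frame,
                detectors: dict[str, Callable[[Frame], float]],
                weights: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Each detector returns a score in [0, 1]; 1 = highly suspicious."""
    scores = {name: det(frame) for name, det in detectors.items()}
    total = sum(weights[name] * s for name, s in scores.items())
    total /= sum(weights.values())
    return total, scores  # overall risk plus per-detector breakdown

# Usage with stand-in detectors (real ones would be trained models):
detectors = {
    "generator_artifacts": lambda f: 0.8,
    "blink_plausibility":  lambda f: 0.3,
    "mouth_detail":        lambda f: 0.6,
}
weights = {"generator_artifacts": 2.0, "blink_plausibility": 1.0,
           "mouth_detail": 1.0}
risk, breakdown = fuse_scores(None, detectors, weights)
print(f"frame risk: {risk:.2f}", breakdown)
```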

“The project shows that we can also use AI to counter the threat posed by AI to digital identities. In the consortium, we have developed a demonstrator with detectors that can at least pre-filter a large part of the video material. However, this method will not provide one hundred per cent certainty. We can only achieve this if we start signing the source, such as by linking a video to a clear author who has identified themselves with the eID function – and has signed it accordingly.”

Florian Peters, Fellow at Bundesdruckerei 

Images falsified or generated outright by AI are becoming an ever greater problem for journalistic reporting as well: manipulation is becoming ever easier, while it is becoming increasingly difficult to determine whether an image was published by a reputable publisher and whether it may have been altered. In order to guarantee the authenticity of image material, Bundesdruckerei’s subsidiary D-Trust provides special signature certificates. As soon as a photo is captured, device certificates in cameras can add a digital signature to the content and the attached set of metadata comprising authorship, location, date and time. A personal certificate can secure subsequent processing steps, such as a photographer’s edits in Photoshop. D-Trust supplies the appropriate certificates for the various use cases. In future, content credentials could be used to protect a wide range of digital image and media material from manipulation, making it easier for editors to check the origin of media files and recognise any manipulation carried out for disinformation purposes.
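
The signing principle itself can be shown in a few lines. The sketch below signs an image together with its metadata using an ECDSA key, as a stand-in for a key held in a camera’s security chip; it uses the Python `cryptography` package and is an illustration of the concept, not D-Trust’s actual API or certificate products.

```python
# Conceptual sketch: signing an image plus metadata with an ECDSA key.
# The key here stands in for one protected by a device certificate.
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())

image_bytes = open("photo.jpg", "rb").read()  # placeholder file name
metadata = json.dumps({
    "author": "Jane Doe", "location": "Berlin",
    "timestamp": "2024-08-20T10:00:00Z",
}, sort_keys=True).encode()

signature = private_key.sign(image_bytes + metadata,
                             ec.ECDSA(hashes.SHA256()))

# Anyone holding the corresponding public key can later verify that
# neither the pixels nor the metadata were altered (raises on mismatch):
private_key.public_key().verify(signature, image_bytes + metadata,
                                ec.ECDSA(hashes.SHA256()))
```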

“Device certificates bring about trust in the authenticity of content, even in times of limitless digital image processing. This is essential for serious news reporting – not to mention a functioning democracy.”

Jochen Felsner, D-Trust Managing Director

In response to the growing challenges posed by image manipulation, many technology and media companies have joined forces in the Content Authenticity Initiative (CAI). The primary aim of the CAI is to support the industry standard of the Coalition for Content Provenance and Authenticity (C2PA), which helps confirm the authenticity of digital content by means of content credentials. The Leica M11-P is the world’s first camera to sign photo metadata in accordance with the C2PA standard. This is made possible by a public key infrastructure (PKI) from D-Trust, which creates the relevant device certificates individually during the camera’s production process and integrates them into its security chip.
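
For a rough idea of what issuing such a per-device certificate involves, the sketch below builds one with the Python `cryptography` package’s X.509 API. All names, keys and validity periods are invented for illustration; this is not D-Trust’s actual production process.

```python
# Hedged sketch: a PKI issuing an individual device certificate.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

ca_key = ec.generate_private_key(ec.SECP256R1())      # PKI's issuing key
device_key = ec.generate_private_key(ec.SECP256R1())  # generated per camera

subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                        "camera-serial-0001")])
issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                       "Example Device CA")])

cert = (x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(issuer)
        .public_key(device_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.now(datetime.timezone.utc))
        .not_valid_after(datetime.datetime.now(datetime.timezone.utc)
                         + datetime.timedelta(days=365 * 10))
        .sign(ca_key, hashes.SHA256()))
# The certificate and device key would then be written to the camera's
# security chip, binding every signed photo to this specific device.
```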

The SENSIBLE-AI project also deals with defending against deepfake attacks. As part of this publicly funded research project, Bundesdruckerei GmbH, neXenio GmbH and Darmstadt University of Applied Sciences, under the consortium leadership of the Fraunhofer Institute for Applied and Integrated Security AISEC, researched the integrity and authenticity of AI systems. Over a total of three years, the consortium partners classified AI systems, determined their protection requirements and identified suitable protective measures for characteristic use cases. The focus of the project was on Android systems and hardware as well as on how to protect particularly confidential information such as medical data or trade secrets.

During the research project, Bundesdruckerei GmbH developed a prototype for the real-time detection of deepfake attacks in video conferences. It utilises “Self-ID” technology, previously developed in an innovation project by Bundesdruckerei GmbH, which uses visual recognition of one’s own face as a biometric identification mechanism. Put simply, the technology records the eye movements of video conference participants and analyses the extent to which their behaviour corresponds to natural self-observation. If there are anomalies, the system reports a suspected attack. The SENSIBLE-AI project was funded by the Federal Ministry for Economic Affairs and Climate Action (BMWK). The project was completed in spring 2024; its results can be read in detail on the project’s public website.
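
One signal such an approach could plausibly use is blink behaviour. The sketch below tracks the eye aspect ratio (EAR) over time and derives a blink rate; the landmark layout follows Soukupová and Čech’s well-known formulation, while the thresholds are heuristic assumptions and none of this is the actual Self-ID implementation.

```python
# Illustrative blink-plausibility check via the eye aspect ratio (EAR).
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye (Soukupova & Cech)."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def blink_rate(ear_series: list[float], fps: float,
               threshold: float = 0.2) -> float:
    """Blinks per minute from a per-frame EAR series."""
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks, below = blinks + 1, True
        elif ear >= threshold:
            below = False
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# Humans blink roughly 10-20 times per minute; rates far outside that
# band over a longer video segment would raise a suspicion flag.
```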

Another reason why it is so difficult to expose disinformation is that the trustworthiness of websites and their authors is not always easy to determine in the digital space. Qualified website authentication certificates – QWACs for short – can therefore play an important role in combating digital disinformation. QWACs verify the identity of domain owners and make it transparent to users, who can thus be sure that they are interacting with an authentic and trustworthy source. As a trust service under the European Union’s eIDAS Regulation, QWACs may only be issued by strictly regulated qualified trust service providers (QTSPs) – Bundesdruckerei subsidiary D-Trust is one of them.
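
The identity information a website certificate carries can be read with standard tooling. As a rough illustration, the Python sketch below fetches a site’s TLS certificate and prints its subject and issuer; note that whether a given certificate is actually a qualified QWAC would have to be checked against the EU trusted lists, which this snippet does not do.

```python
# Minimal sketch: inspecting a website's TLS certificate -- the same
# mechanism through which a QWAC exposes a verified identity.
import socket
import ssl

hostname = "example.com"  # placeholder domain
context = ssl.create_default_context()
with socket.create_connection((hostname, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

# getpeercert() returns the subject/issuer as nested (key, value) tuples.
subject = dict(item for field in cert["subject"] for item in field)
issuer = dict(item for field in cert["issuer"] for item in field)
print("Site identity:", subject.get("organizationName",
                                    subject.get("commonName")))
print("Issued by:   ", issuer.get("organizationName"))
```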

True or Fake? What the Future Has to Offer

Distinguishing deepfakes and the like from trustworthy content will be even more difficult in future than it is today. The range of different forms of disinformation is growing, as is the complexity of the techniques used to spread it. At the same time, however, a wide variety of technological solutions are being developed to help expose disinformation campaigns. Fast responses to new technologies therefore play a decisive role in the fight against digital disinformation, requiring basic research, awareness campaigns and technological expertise. Political institutions can provide citizens with practical assistance for recognising disinformation. Yet it is becoming increasingly important for political institutions to protect their own content as well. After all, the use of AI in public administration also requires new protection mechanisms for sensitive data and processes.

Media professionals are also facing new challenges as a result of digital disinformation. To ensure reliable reporting and not jeopardise the trust of their recipients, journalists need to find new ways to check sources and verify content. Content Credentials can play a decisive role in this.

Bundesdruckerei is helping to counteract digital disinformation with its products and research projects, in particular through the protection of digital identities. Along with existing solutions – such as qualified certificates – the Bundesdruckerei Group is working with strong partners from politics, science and business to develop new technologies and AI applications that provide security and trust in the digital world.
