
November 8, 2025

Detecting Deepfakes – Supporting bpb Student Competition

Education

How can we distinguish between real and fake when videos and voices can be perfectly imitated? A group of students from Gymnasium Dresden-Plauen explored the topic of deepfakes as part of a student competition organized by the Bundeszentrale für politische Bildung (bpb) – and discussed the risks, opportunities, and responsibilities with Prof. Michael Färber. Deepfakes are AI-generated, deceptively real video and audio fakes that are increasingly influencing our perception of reality.

Risks of deepfakes

The discussion focused primarily on the most striking negative consequences of deepfakes. A major topic was the risk of reputational damage and of personal harm such as revenge porn, blackmail, and cyberbullying; those affected can suffer considerable psychological stress as a result. Another key point was the impact on democracy and public opinion: manipulated political videos can influence election campaigns, discredit opponents, and be used deliberately for propaganda, polarization, and destabilization. The general loss of trust in the media was also highlighted. In a world where AI can create perfect fakes, it is becoming increasingly difficult to distinguish what is real from what is not.

Potentials of deepfakes

In addition to the risks, positive applications of deepfake technologies were also discussed. In the film and entertainment industry, they make it possible to create realistic scenes without costly reshoots and to digitally de-age actors and actresses. Deepfakes can also be useful in education, for example by bringing historical figures or scientists to life virtually so they can explain complex topics in a clear and vivid way.

Responsibility and regulation

During the discussion, Prof. Färber emphasized that platforms and software providers bear particular responsibility when it comes to dealing with deepfakes. In his view, measures such as mandatory labeling with watermarks, visible symbols, or robust provenance metadata would help journalists verify the authenticity of a video. At the same time, the private sphere remains difficult to control, as the relevant programs are now readily available and technically mature: deepfakes can be created with little effort on an ordinary laptop, which makes it ever harder to distinguish real from fake. Regulation is therefore necessary, but not sufficient on its own; education and media literacy are just as important.
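To make the idea of provenance metadata a little more concrete, here is a minimal, hypothetical Python sketch. It is not Prof. Färber's proposal and does not implement any real standard such as C2PA; it merely illustrates the principle of recording a cryptographic fingerprint of a video in a small manifest and later checking whether the file still matches it. The file names, manifest fields, and signing key are assumptions made for illustration only.

```python
import hashlib
import hmac
import json
from pathlib import Path

# Hypothetical shared key for the demo. A real provenance scheme would use
# public-key signatures issued by the capture device or editing software.
SIGNING_KEY = b"demo-key-not-for-production"


def fingerprint(video_path: Path) -> str:
    """Return the SHA-256 hash of the raw video bytes."""
    h = hashlib.sha256()
    with video_path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_manifest(video_path: Path, manifest_path: Path) -> bool:
    """Check that a (hypothetical) provenance manifest still matches the video."""
    manifest = json.loads(manifest_path.read_text())
    expected_hash = manifest["sha256"]
    expected_sig = manifest["signature"]

    # 1. Has the video been altered since the manifest was written?
    if fingerprint(video_path) != expected_hash:
        return False

    # 2. Was the manifest itself issued with the trusted key?
    recomputed = hmac.new(SIGNING_KEY, expected_hash.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(recomputed, expected_sig)


if __name__ == "__main__":
    ok = verify_manifest(Path("clip.mp4"), Path("clip.manifest.json"))
    print("provenance intact" if ok else "provenance broken or missing")
```

A journalist (or a newsroom tool) could run such a check before publishing a clip: if the hash or signature no longer matches, the video has been modified since the manifest was issued, or the manifest never came from a trusted source in the first place.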

Why is this relevant?

Prof. Michael Färber's research focuses on large language models and on the question of how text, speech, and video can be reliably linked together. He is fascinated by the fact that AI can generate realistic videos from a simple text instruction (a prompt). At the same time, he emphasizes that this is exactly why it is so important to use such technologies responsibly. Trust and transparency can only arise if it remains clear how and on what basis AI generates content.

In conclusion, Prof. Michael Färber warmly thanked the students for their keen interest and thoughtful questions. The exchange was lively and dealt with a topic that will concern us all even more in the future.
