The dog that never barked
Deepfakes have the potential to seriously harm people’s lives and to erode trust in democratic institutions. They also continue to make headlines. How dangerous are they really?
Deepfakes, although characterized by some as “the dog that never barked”, do in fact have the potential to seriously harm people’s lives and to erode trust in democratic institutions.
Deepfakes continue to make headlines – the latest news at the time of writing being Donald Trump’s Independence Day deepfake video, which also raised important legal and ethical issues, almost three years after the term “deepfake” was first coined in the news. Behind the headlines, synthetically generated media content (also known as deepfakes) has even more serious consequences for individual lives – and especially for the lives of women. Deepfakes are also expected to be increasingly weaponized; combined with other trends and technologies, they are expected to heighten security and democracy challenges in areas like cyber-enabled crime, propaganda and disinformation, military deception, and international crises.
“Technical approaches are useful until synthetic media techniques inevitably adapt to them. A perfect deepfake detection system will never exist.”
– Sam Gregory, program director of WITNESS
It’s a race
Researchers, academics, and industry are all working on deepfake detection algorithms, but the field advances in both directions: as detection algorithms get better, so do the available tools for creating deepfakes. As Sam Gregory, program director of WITNESS, puts it: “Technical approaches are useful until synthetic media techniques inevitably adapt to them. A perfect deepfake detection system will never exist.”
Verifying synthetically generated media content is still part of traditional verification and fact-checking, and should be approached in the context of those existing methods. Even though technology cannot provide a yes-or-no answer to the question “Is this video fake?”, it can greatly aid journalists in assessing the authenticity of a suspected deepfake. That’s why we at the Digger team are working hard to provide journalists with tools that can help them determine whether a certain video is real or synthetic. Stay tuned for our how-to article coming up soon!
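To make that idea concrete, here is a minimal sketch (not the Digger team’s actual tooling) of how a detection tool can support, rather than replace, journalistic judgment: it scores a clip frame by frame with a pretrained classifier and reports a distribution of scores instead of a verdict. The model file detector.onnx, its assumed 224×224 input, and the clip name suspect_clip.mp4 are hypothetical placeholders.

```python
# Hedged sketch: frame-level scoring with a (hypothetical) pretrained
# deepfake classifier exported to ONNX. Model name and input shape are
# assumptions, not a real published model.
import cv2                  # pip install opencv-python
import numpy as np
import onnxruntime as ort   # pip install onnxruntime

session = ort.InferenceSession("detector.onnx")   # hypothetical model file
input_name = session.get_inputs()[0].name

cap = cv2.VideoCapture("suspect_clip.mp4")        # hypothetical input clip
scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess: resize and normalize to the (assumed) model input format.
    img = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    batch = img.transpose(2, 0, 1)[np.newaxis]    # HWC -> NCHW
    score = float(session.run(None, {input_name: batch})[0].squeeze())
    scores.append(score)
cap.release()

# Report per-frame score statistics, not a yes-or-no answer: the output
# is one signal among many for a journalist to weigh.
if scores:
    print(f"frames scored: {len(scores)}, "
          f"mean fake-score: {np.mean(scores):.2f}, "
          f"max fake-score: {np.max(scores):.2f}")
```

The design choice matters more than the model: the script deliberately surfaces a range of per-frame scores for a human to interpret alongside traditional verification techniques, rather than collapsing them into a single “fake/real” label.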
Don’t forget: be active and responsible in your community – and stay healthy!
Related Content
In-Depth Interview – Sam Gregory
Sam Gregory is Program Director of WITNESS, an organisation that works with people who use video to document human rights issues. WITNESS focuses on how people create trustworthy information that can expose abuses and address injustices. How is that connected to deepfakes?
Audio Synthesis, what’s next? – Parallel WaveGAN
The Parallel WaveGAN is a neural vocoder that produces high-quality audio faster than real time. With this speed of progress, are personalized vocoders possible in the near future?
In-Depth Interview – Jane Lytvynenko
We talked to Jane Lytvynenko, a senior reporter with BuzzFeed News focusing on online mis- and disinformation, about how big the synthetic media problem actually is. Jane has three practical tips for us on how to detect deepfakes and how to handle disinformation.