All sorts of video manipulation

What is the difference between a ‘face swap’, a ‘speedup’ or even a ‘frame reshuffling’ in a video? At the end of the day, they are all manipulations of video content. We want to take a closer look at the different kinds of manipulation – whether they involve audio changes, face swapping, visual tampering, or simply taking content out of context.

In Digger we look at synthetic media and how to detect manipulation in all its forms.

This is not a tutorial on how to manipulate video. We want to highlight the different technical sorts of manipulation and raise awareness, so that you might recognise one when it crosses your path. Let’s start with:

Tampering of visuals and audio

Do you remember the Varoufakis finger?! Did he show it, or didn’t he?

This clip was manipulated by pasting in a layer showing another person’s arm. It is possible to crop out or add almost any element in a video.

The same can be done with audio: specific parts of a speech or conversation can be deleted to mislead you, and background noise can be added to change the whole context of a scene. It is therefore important to find the original version, so you can compare the two videos with each other – one simple way to do that programmatically is sketched below.
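
If you do find a presumed original, a rough way to compare the two clips is to sample frames from both and compare perceptual hashes: sampled frames where the hashes diverge sharply are candidates for pasted-in or removed elements. This is a minimal sketch, assuming Python with the opencv-python, Pillow and imagehash packages installed; the file names and the distance threshold are placeholders, not part of any Digger tool.

```python
import cv2
import imagehash
from PIL import Image

def frame_hashes(path, step=30):
    """Return a perceptual hash for every `step`-th frame of a video."""
    cap = cv2.VideoCapture(path)
    hashes = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        index += 1
    cap.release()
    return hashes

# Compare a suspect clip against the presumed original, sample by sample.
pairs = zip(frame_hashes("original.mp4"), frame_hashes("suspect.mp4"))
for n, (a, b) in enumerate(pairs):
    if a - b > 10:  # Hamming distance; the threshold is a rough assumption
        print(f"Sample {n}: frames differ noticeably, inspect manually")
```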

Synthetic audio and lip synchronisation

Imagine being able to say anything fluently in seven different languages, like David Beckham did.

It is incredible, but the larger part of this video is completely synthetic. The producers created a 3D model of Beckham’s face and reanimated it. That means a machine learned what David looks like and how he moves when speaking, in order to reproduce him saying anything in any language. One tip from Hany Farid: watch the mouth and lip movements and compare them with natural human behaviour. This is one example of English-speaking lip movements; a rough way to chart mouth movement yourself is sketched below.
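
To make that tip more concrete: you can track how far the mouth opens over time and then judge whether the motion plausibly matches the audio. A minimal sketch, assuming Python with the opencv-python and mediapipe packages installed; the file name is a placeholder, and using Face Mesh landmarks 13 and 14 (inner upper and lower lip) as the mouth-opening measure is our assumption, not an official recipe.

```python
import cv2
import mediapipe as mp

# Track the vertical distance between inner upper and lower lip,
# frame by frame, to get a "mouth opening" curve for the clip.
cap = cv2.VideoCapture("suspect_clip.mp4")  # placeholder file name
openings = []
with mp.solutions.face_mesh.FaceMesh(static_image_mode=False) as mesh:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            lm = result.multi_face_landmarks[0].landmark
            openings.append(abs(lm[13].y - lm[14].y))  # assumed inner-lip landmarks
        else:
            openings.append(None)  # no face detected in this frame
cap.release()

# A stretch of frames where the mouth barely moves while speech is clearly
# audible (or the reverse) is a hint that the lips were synthesised.
print(openings[:50])
```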

Cloned voices are already offered online, so make sure you search for the original version (yes, again), look for trusted media reports, or try to get access to an official transcript if it was a public speech.

Shallowfakes or Cheapfakes

Just by slowing down or speeding up a video, the whole context can change. In this example the video has been slowed down: Nancy Pelosi, US Speaker of the House and Democratic congresswoman, appears to be drunk in an interview.

Because slowing down the video also lowers the voice, the pitch has been turned up to compensate. All this effort was made to make you believe that Nancy Pelosi was drunk during an interview.

In the case of Jim Acosta, part of a video was sped up in order to suggest that he was making an aggressive movement while a microphone was being taken away from him.

This shows that low-tech manipulations can also do harm and be challenging to detect. How can you detect them? Again, find the original and compare. Try playing around with the speed in your own video player, for example with the VLC player – or re-time the clip yourself, as in the sketch below.
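
Beyond the playback controls in VLC, you can re-time a suspect clip and check whether voice and movement look natural again at the corrected speed. A minimal sketch, assuming Python and an ffmpeg binary on the PATH; the file names and the speed factor are placeholders.

```python
import subprocess

def change_speed(src: str, dst: str, factor: float) -> None:
    """Re-time a clip: factor > 1 speeds it up, factor < 1 slows it down.
    setpts rescales the video timestamps; atempo changes the audio tempo
    without altering pitch (ffmpeg accepts atempo values from 0.5 to 2.0
    per filter instance)."""
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-filter:v", f"setpts={1 / factor}*PTS",
        "-filter:a", f"atempo={factor}",
        dst,
    ], check=True)

# e.g. a clip suspected of being slowed to 75% speed, restored to full speed:
change_speed("slowed_clip.mp4", "restored_clip.mp4", 4 / 3)
```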

Face swap or Body swap 

Imagine dancing like Bruno Mars or Beyonce Knowles without any training – a dream come true.

This highly intelligent system captures the poses and motions of Bruno Mars and maps them onto the body of an amateur. Copying dance moves – arms and legs, torso and head all at once – is still challenging for artificial intelligence. If you focus on the details, you will be able to see the manipulation. It’s still far from perfect, but it’s possible, and it is just a matter of time until the technology is trained better.

Synthetic video and synthetic voice

You can tamper with video and audio separately, but what happens when you do all of it in one video – when you are able to generate video completely synthetically? You could even recreate a person who died many years ago. Please meet Salvador Dalí, anno 2019:

Hard to believe, right? Therefore, always ask yourself whether what you see could be true. Check the source and search for more context on the video. Maybe a trustworthy media outlet has already reported on it. If you cannot find anything, just do not share it.

The Liar’s Dividend

We also need to be prepared for people claiming that a video or audio recording is manipulated when it actually isn’t. This is called “The Liar’s Dividend”.

When accused of having said or done something they actually said or did, liars may generate and spread altered sound or images to create doubt – or even claim that the authentic footage is a deepfake.

Make sure you have your facts checked. Ask colleagues or experts for help if needed and always watch a video more than twice. 

Have you recently watched a music video? Musicians seem to be among the first professional customers of the deepfake industry. Have a look: this is where the industry is currently being built up.

Did we forget any techniques for video manipulation? Let us know and we will add them to our collection in this article.

The Digger project aims:

  • to develop a video and audio verification toolkit, helping journalists and other investigators to analyse audiovisual content, in order to be able to detect video manipulations using a variety of tools and techniques.
  • to develop a community of people from different backgrounds interested in the use of video and audio forensics for the detection of deepfake content.


About The Digger Team

The team working on the Digger Project consists of Fraunhofer IDMT (audio forensics technology), Athens Technology Center (product development) and Deutsche Welle (project lead and concept development). The project is co-funded by Google DNI.

This collaboration between a technology company, a research institute and the innovation unit of a broadcaster is quite unique. We combine different perspectives and expertise to arrive at an efficient and user-centered solution. This is also what verification is all about: collaboration.

 

Ruben Bouwmeester (Project Lead, DW)

What is your motivation to work on the Digger project?

The technology to create synthetic video content could be a threat to society, especially once it becomes available to a larger audience – the Deepnude app was just a wake-up call. Digger, to me, is a contribution to our quest for the truth: a toolset for journalists to verify online content and put it into perspective for the audience at large, and possibly a source of better understanding for those who want to learn more about deepfakes and how they affect society.

How can the Digger project help to detect manipulated or synthetic media?

Within Digger we are creating a toolset that ranges from visual video verification assistants to audio forensic tools for detecting video manipulation. Our challenge is to make sure these tools are user-centered and easy to understand and work with, so that future users can actually detect video manipulation with ease.

 

Julia Bayer (Journalist, Project Manager, DW)

What is your motivation to work on the Digger project?

Manipulated videos like shallowfakes and synthetic media like deepfakes are a big challenge that we journalists already have to deal with. Verification is a crucial part of our job, and while the process we need for it is not changing, it is getting more technical. With the Digger project we can support journalists and investigators, giving them a technical toolset to debunk manipulated content and be prepared for the challenge.

How can the Digger project help to detect manipulated or synthetic media?

Digger helps to understand the architecture and technology behind synthetic media. We share our knowledge with the public and collaborate with experts to create the best accessible software for verifying manipulated video and synthetic media.

 

Patrick Aichroth (Head of Media Distribution and Security, Fraunhofer IDMT)

What is your motivation to work on the Digger project?

Free and democratic societies depend on science, rational argument and reliable data. Being able to distinguish between real and fake information is essential for our societies and for our freedom. However, fakes can be created more and more easily, and both “shallow” and “deep” fakes represent a growing threat that requires the use of various measures, including technical solutions. Detecting audio fakes and manipulation is one important element of that, and we are happy to provide and further improve our audio forensics technologies for this purpose.

How can the Digger project help to detect manipulated or synthetic media?

Digger provides a unique opportunity to explore and establish the combination of human- and machine-based detection of manipulated and synthetic material. Adapting and integrating automatic analysis into real-life verification workflows for journalists is a big challenge, but is also key for the future of content verification.

 

Luca Cuccovillo (Audio Forensic Specialist, Fraunhofer IDMT)

What is your motivation to work on the Digger project?

Audio forensics is a discipline which, when the fate of a person is at stake in a courtroom, helps judges in their quest for the truth. Deepfakes, however, have their largest impact not in the courtroom, but on the Internet and in the news.

I decided to participate in the Digger project to offer my expertise not only to judges, but to society as a whole, in the hope that journalists and ordinary citizens may use our platform to defend themselves against fake content and ‘fake news’.

How can the Digger project help to detect manipulated or synthetic media?

There is a large gap between the analysis possibilities available in a courtroom and those available to journalists and ordinary citizens: most of the forensic methods available at present would be inconclusive or not applicable for the public.

Digger is our best chance to fix this issue and reduce the gap. If we succeed, plenty of analysis methods developed in recent years will become applicable, and we will finally be able to fight disinformation on equal ground.

Stratos Tzoannos (Head Developer, ATC)

What is your motivation to work on the Digger project?

As a software developer, I realize that the power of technology is scary when it comes to deepfakes. On the other hand, it is a huge opportunity to understand and use the same technology for social good, trying to tackle such cases and save people from manipulation. From a technical point of view, I find it extremely challenging to work with advanced algorithms and tools in order to take a step forward in the fight against misinformation.

How can the Digger project help to detect manipulated or synthetic media?

The first step is to understand what deepfakes are and how they are synthesised. After gaining a good knowledge of this technology, the next step is to apply reverse engineering in order to produce detection algorithms for deepfakes. Moreover, implementing different tools to facilitate multimedia fact-checking could add more weapons to the fight against malicious uses of synthetic media.

 

Danae Tsabouraki (Project Manager, ATC)

What is your motivation to work on the Digger project?

Both from a personal and a professional perspective, I feel that manipulated and synthetic media content like deepfakes may pose a serious danger to democracy, but also to other aspects of our private and public life. At the same time, most journalists lack widely accessible commercial tools that help them detect deepfakes. By participating in the Digger project along with Deutsche Welle and Fraunhofer IDMT, we aspire to contribute further to the ongoing effort of developing and providing technologies and tools that help journalists tell truth from fiction.

How can the Digger project help to detect manipulated or synthetic media?

Digger puts a twist on manipulated/synthetic media detection by adding audio forensics into the mix. By using state-of-the-art audio analysis algorithms along with video verification features, Digger will provide journalists with a complete toolkit for detecting deepfakes. Digger also aims to bring together a community of experts from different fields to share their knowledge and experience, leading to a better understanding of the challenges we face.
