
From Rocket-Science to Journalism

In the Digger project we aim to implement scientific audio forensic functionalities in journalistic tools to detect both shallow fakes and deepfakes. At the Truth and Trust Online Conference 2020 we explained how we are doing this.

Today we presented the Digger project and our approach at the Truth & Trust Online Conference. We explained all the audio forensic functionality we are looking into and how it can help us detect video manipulation as well as deepfakes. The presentation is available via the conference website, but we did not want to keep it from you, so below you will find the full 11-minute presentation.

If you are interested in learning more or have questions, please get in contact with us, either by commenting on this article or via our Twitter channel.

We hope you liked it! Happy Digging and keep an eye on our website for future updates!

Don’t forget: be active and responsible in your community – and stay healthy!


The dog that never barked

Deepfakes have the potential to seriously harm people’s lives and to erode people’s trust in democratic institutions. They also continue to make the headlines. How dangerous are they really?

Deepfakes, although characterized by some as “the dog that never barked”, do in fact have the potential to seriously harm people’s lives and to erode people’s trust in democratic institutions.

Deepfakes continue to make the headlines – the latest news at the time of writing being Donald Trump’s Independence Day deepfake video, which also raised important legal and ethical issues, almost three years after the term “deepfake” was first coined in the news. Behind the headlines, synthetically generated media content (also known as deepfakes) has even more serious consequences for individual lives – and especially for the lives of women. Deepfakes are also expected to be increasingly weaponized; combined with other trends and technologies, they are likely to heighten security and democracy challenges in areas like cyber-enabled crime, propaganda and disinformation, military deception, and international crises.


It’s a race

Researchers, academics, and industry are all working on deepfake detection algorithms, but the field advances in both directions: as detection algorithms improve, so do the tools available to create deepfakes. As Sam Gregory, program director of WITNESS, puts it: “Technical approaches are useful until synthetic media techniques inevitably adapt to them. A perfect deepfake detection system will never exist.”

Verifying synthetically generated media content remains part of traditional verification and fact-checking and should be approached in the context of these existing methods. Even though technology cannot give a yes-or-no answer to the question “Is this video fake?”, it can greatly aid journalists in assessing a video’s authenticity. That’s why we at the Digger team are working hard to provide journalists with tools that help them determine whether a certain video is real or synthetic. Stay tuned for our how-to article coming up soon!

Don’t forget: be active and responsible in your community – and stay healthy!


ICASSP 2020 International Conference on Acoustics, Speech, and Signal Processing

Here is what we think are the most relevant upcoming audio-related conferences – and which sessions you should attend at ICASSP 2020.

To keep up to date with the latest audio technology for our software development, we follow other researchers’ studies and usually visit many conferences. Sadly, this time we cannot attend them in person. Nevertheless, we can visit them virtually, together with you. Here is what we think are the most relevant upcoming audio-related conferences:

Let’s take a more detailed look at one of them:

ICASSP 2020 International Conference on Acoustics, Speech, and Signal Processing

Date: 4th – 8th of May, 2020
Live schedule: https://2020.ieeeicassp.org/program/schedule/live-schedule/

This is a list of sessions we recommend at ICASSP 2020:

Date: Tuesday 05th of May 2020

  • Opening Ceremony (9:30 – 10:00h)
  • Plenary by Yoshua Bengio on “Deep Representation Learning” (15:00 – 16:00h)
    • Note: may be pretty technical – one for the deep learning enthusiasts
    • Note: he is one of the fathers of deep learning

Date: Wednesday 06th of May 2020

Date: Thursday 07th of May 2020

We’re looking forward to seeing you there!

The Digger project aims:

  • to develop a video and audio verification toolkit that helps journalists and other investigators analyse audiovisual content and detect video manipulation using a variety of tools and techniques.
  • to develop a community of people from different backgrounds interested in the use of video and audio forensics for the detection of deepfake content.


Digger – Detecting Video Manipulation & Synthetic Media

What happens when we cannot trust what we see or hear anymore? First of all: don’t panic! Question the content: Could that be true? And when you are not 100 percent sure, do not share, but search for other media reports about it to double-check.

How do professional journalists and human rights organisations handle this? Any video out there could be manipulated: with video editing software, anyone can alter footage.

It is challenging to verify content that has been mislabeled or staged. Verifying content that has been technically modified is even more complex. We roughly see two kinds of manipulation:

  1. Shallow fakes: manipulated audiovisual content (image, audio, video) created with ‘low-tech’ techniques such as cut & paste or speed adjustments.
  2. Deepfakes: artificial (synthetic) audiovisual content (image, audio, video) generated with technologies like machine learning.

Deepfakes are among the most feared phenomena in journalism today. The term describes audio and video files that have been created using artificial intelligence. Synthetic media is non-realistic media, and it is currently often referred to as deepfakes. Using algorithms, it is possible to create or swap faces, places and digital synthetic voices that realistically mimic human speech and facial expressions but do not actually exist. That means machine-learning technology can fabricate a video, with audio, that makes people appear to do and say things they never did or said. Such synthetic media can be extremely realistic and convincing, yet entirely artificial.
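To make the underlying mechanism a little more concrete, here is a minimal, deliberately toy-sized sketch of the adversarial training step behind most deepfake generators. It uses PyTorch; the network sizes, the batch of random stand-in “real” images and all hyperparameters are illustrative assumptions, not any production deepfake model.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

# The generator maps random noise to a fake image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# The discriminator scores how "real" an image looks.
D = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# Stand-in for a batch of real training images (an assumption for this sketch).
real_batch = torch.rand(16, image_dim) * 2 - 1

# One adversarial step: the discriminator learns to tell real from fake...
fake_batch = G(torch.randn(16, latent_dim))
d_loss = loss(D(real_batch), torch.ones(16, 1)) + \
         loss(D(fake_batch.detach()), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# ...then the generator learns to fool the discriminator.
g_loss = loss(D(fake_batch), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Scaled up to deep convolutional networks and trained on hours of footage of a single face or voice, this same tug-of-war is what makes synthetic faces and voices increasingly realistic.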

Detection of synthetic media

Face or body swapping, voice cloning and modifying the speed of a video are new forms of content manipulation, and the technology is becoming widely accessible.

At the moment the real challenge is posed by so-called shallow fakes. Remember the video in which Nancy Pelosi appeared to be drunk during a speech? It turned out the video had simply been slowed down, with the pitch turned up to cover up the manipulation. Video manipulation and the creation of synthetic media are not the end of truth, but they make us more cautious before using content in our reporting.
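To illustrate how cheap such a shallow fake is to produce, here is a minimal sketch of a Pelosi-style slowdown with pitch compensation using librosa; the file name speech.wav, the 75 percent playback rate and the exact pitch correction are assumptions for illustration, not the parameters of the actual video.

```python
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("speech.wav", sr=None)
rate = 0.75  # play back at 75% of the original speed

# Naive slowdown: resample so that playback at the original sample
# rate runs slower - like a slowed tape, this also lowers the pitch.
y_slow = librosa.resample(y, orig_sr=sr, target_sr=int(sr / rate))

# Raise the pitch back up to hide the telltale deepened voice.
n_steps = 12 * np.log2(1 / rate)  # roughly +5 semitones
y_fake = librosa.effects.pitch_shift(y_slow, sr=sr, n_steps=n_steps)

sf.write("speech_shallowfake.wav", y_fake, sr)
```

The good news for forensics is that both steps leave traces: the pitch correction in particular introduces spectral artifacts that audio analysis can look for.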

On the technology side it is an arms race. Forensic journalism can help detect altered media. DW’s Research & Cooperation team works together with ATC, a technology company from Greece, and the Fraunhofer Institute for Digital Media Technology to detect manipulation in videos.

Digger – Audio forensics

In the Digger project we focus on using audio forensics technologies to detect manipulation. Audio is an essential part of video: with the synthetic voice of a politician or the tampered sound of a gunshot, a story can change completely. Digger aims to provide functionalities to detect audio tampering and manipulation in videos.

Our approach makes use of:

  1. Microphone analysis: analysing the device used to record the audio.
  2. Electrical Network Frequency (ENF) analysis: detecting edits (cut & paste) in audio by extracting ENF traces – see the sketch after this list.
  3. Codec analysis: following the digital footprint of the audio through its encoding history.
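As a rough illustration of the ENF idea, here is a minimal sketch that extracts a 50 Hz mains-hum trace from a recording and flags abrupt jumps in it; the file name recording.wav, the filter band and the jump threshold are illustrative assumptions, not the Digger implementation.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt, stft

rate, audio = wavfile.read("recording.wav")
audio = audio.astype(np.float64)
if audio.ndim > 1:          # mix stereo down to mono
    audio = audio.mean(axis=1)

# Band-pass filter around the 50 Hz mains hum (use 60 Hz in the Americas).
sos = butter(4, [49.5, 50.5], btype="bandpass", fs=rate, output="sos")
hum = sosfiltfilt(sos, audio)

# Long STFT windows (8 s) give the fine frequency resolution ENF needs.
f, t, Z = stft(hum, fs=rate, nperseg=8 * rate, noverlap=7 * rate)

# Track the dominant frequency near 50 Hz over time: this is the ENF trace.
band = (f >= 49.5) & (f <= 50.5)
enf = f[band][np.abs(Z[band]).argmax(axis=0)]

# Abrupt jumps in the trace are candidate cut-and-paste edit points
# (the 0.3 Hz threshold is an assumption for this sketch).
jumps = np.abs(np.diff(enf))
print("suspect edit points (s):", t[1:][jumps > 0.3])
```

Because the mains frequency drifts randomly around its nominal value, an ENF trace acts like a hidden timestamp: discontinuities suggest edits, and matching the trace against power grid records can even indicate when and where a recording was made.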

Synthetic media in reality

Synthetic media technologies can have a positive as well as a negative impact on society.

It is exciting and scary at the same time to think about the ability to create audiovisual content the way we want it, rather than the way it exists in reality. Voice synthesis will allow us to speak hundreds of languages in our own voice. (Hyperlink: Video David Beckham)

Or we could bring the master of surrealism back to life.

With the same technology you can also make politicians say things they never said, or place people in scenes they have never been in. These technologies are widely used in pornography, but their unimaginable impact is also showcased in short clips in which actors are placed in films they have never acted in. Possibly one of the most harmful effects is that perpetrators can easily claim “that’s a deepfake” in order to dismiss any contested information.

How can the authenticity of information be proven reliably? This is exactly what we aim to address with our project Digger.

Stay tuned and get involved

We will publish regular updates about our technology and external developments, and we will interview experts to learn about ethical, legal and hands-on expertise.

The Digger project is developing a community to share knowledge and initiate collaboration in the field of synthetic media detection. Interested? Follow us on Twitter @Digger_project and send us a DM or leave a comment below. 
