In-Depth Interview – Jane Lytvynenko

Jane Lytvynenko is a senior reporter with BuzzFeed News, based in Canada, who focuses primarily on online mis- and disinformation. You can check out her work here. We talked to Jane about how big the synthetic media problem actually is, and she shares practical tips on how to detect deepfakes and how to handle disinformation.

Jane, what is your definition of a deepfake?

Oh, that is a tricky one. It is very different from a video that was just slowed down or manipulated in a basic way, for example by cutting and pasting scenes.

For me, a deepfake is using computer technology to make it look like a person said something or took an action they did not say or do.

Where do you mostly encounter deepfakes at the moment?

We do not encounter deepfakes a lot on the day-to-day. Because the technology is not widely accessible at the moment, we are much more worried about cheapfakes than we are about deepfakes. They spread much faster. We find deepfakes mostly in satire; we have seen a lot of that come up in the last little while. We also see GAN-generated images (generative adversarial networks) being used for fake personas: essentially, faces generated by a computer that are used to present a persona on social media that does not actually exist in real life.

Do you expect an increase in synthetic video media ahead of the US elections?

No. The reason why I say no is because deepfakes are still fairly difficult to create for people who do not have a lot of tech knowledge. But cheapfakes you can make in iMovie. You can make things using very basic tools that are also convincing.

There is of course always a fear in the back of my head of the ‘Big One’. What is going to be the big deepfake that fools everybody?

Another fear I have is a deepfake inserted among legitimate footage, with only one small portion manipulated rather than the whole video being a deepfake.

Where do deepfakes have their biggest impact?  

Right now, we see deepfakes mostly used for harassment of women, in pornography in particular. They are primarily targeted at women, but sometimes also at men. Deepfake technology is also being used in Hollywood movies. And, as I mentioned previously, for fairly high-production satire.

But when it comes to the field of politics, we’ve seen a couple here and there but in North America we haven’t seen deepfakes that are so convincing that they uproot the political conversation.

Should journalists be concerned?

Do all journalists need to be trained in verification? Or is that a task for experts?

I think it needs to be both. In order to send something to a researcher you need to first understand what it is you are looking for and why you are sending it to a researcher.

So, we need to have basic training. We need reporters to understand what a deepfake looks like. What a GAN-generated image looks like. Get them in on the basics of verification of all types of content. If something extremely technical comes up, they can just send it to the researchers.

For journalists it is very important to have that source. It is very important to have an expert opinion as a second verification. That is part of the practice of journalism. But if you do not know how to ask the right question, you are not going to be able to get that second opinion.

Which tools are you still missing?

There are a few things. The biggest problem is social media discovery, especially when it comes to video. If there is a video going viral, it is fairly difficult to trace back where it came from. There are some tools that break a video down into thumbnails; you can then reverse image search them and try to find your way back. But for me right now, there is no way to tell where a video originated. Part of that is a lack of cross-platform searches. Take Instagram Stories, for example: it is one of the most popular Facebook products right now, but stories disappear within 24 hours. If somebody downloads that Instagram story and cuts off the banner that says who posted it, they can upload it to Twitter or Facebook, and I will not know where the video came from. It doesn’t allow reporters to see the bigger picture.

Right now, we do not necessarily have the tools to both look at video cross-platform and to look at these videos in terms of when they were shot, what time, by whom, from what angle, at what location. It is a challenge that requires a lot of time that reporters just do not have.

The other thing is, we do not really have a strong way of mapping video spread. So, when reporters do content analysis, they generally focus on text. The reason for that is because text is machine-readable. We have the tools to sort of map out the biggest account that posted this, the smaller accounts that came from it and the sort of audience that looked at this. We do not have similar tools for video even though analyzing the spreading of information is one of the most useful things we do as disinformation reporters. It allows us to see the key points where disinformation traveled. It allows us to understand where to look next time. It allows us most importantly to understand which communities were most impacted.

Is collaboration with researchers and platforms essential in fighting disinformation?

Definitely collaboration with researchers. Platforms are a bit on-and-off in terms of what kind of information they are willing to provide us with. Sometimes they are willing to confirm findings, but they rarely help us do research independently.

This is where researchers, analysts, and third parties that specialize in this are really key for reporters.

How should we report about disinformation and deepfakes?

At Buzzfeed News we always try to put correct information first. We repeat the accurate information before you get to the inaccurate information. There are two different approaches you can take. One is reporting on the content of the video and the other is reporting on the existence of the video as well as any information you have in terms of where it came from, who posted it and why.

We generally focus on the second approach as the primary presentation of facts.

That is how we frame a lot of these things. Because the key aim of a manipulated video is to get the message across and if we put the message at the top then they still get the message across. What you want to do is describe the techniques, describe how they are attempting to manipulate the audience. And then explain the other part of manipulation which is the message.

Do you have a specific workflow in verifying digital content?

We do have best practices. Putting accurate information first is definitely the top priority. We also make sure to never put up an image or a screenshot without stamping it or crossing it out in some way. That gives a visual clue to anybody who comes across it that it is false. But more importantly, if a search engine scrapes that image and somebody comes across it on Google or Bing they are immediately able to see that it is false.

[Image: BuzzFeed News' "false" stamp overlaid on a debunked image]

In terms of workflow for verification, the key part is documentation. We archive everything that we come across and take a screenshot. We are essentially making sure that we are able to retrace our steps. It is kind of a scientific process: we want to make sure that if anybody repeated our steps, they would come to the same conclusion. A lot of the time, when we do pure debunks, that is what we focus on. Not only does it increase trust and show how we got to the conclusion, it also teaches our audience some of the techniques that we are using so that they can use them in the future as well.

Can you still trust what you are seeing, or do you always have this critical view?

The short answer is yes, especially with a lot of videos. I really fear missing something because sometimes the manipulation is so subtle you can’t quite tell that it is a manipulation from the first or the second look. If somebody is just scrolling on their feed, they are not looking very closely at those details. They might not even listen to the audio and hear that it sounds off. They might read the subtitles instead. They might not notice that the mouth in a deepfake is a little bit imperfect because those are little details. We are bombarded with information in our news feed so we might just not notice it.

I’m always extremely suspicious, and sometimes I’m more suspicious than I should be. Sometimes I look at a video and think: was that slowed down by 0.7 of a second, or am I losing my mind?

What are the three main tips for a news consumer to detect synthetic media?

Tip #1
My first tip: if you see a video that is extremely viral, that sparks a lot of emotion, or that just feels a little bit off, pause.

From there the number one thing you can do is search a couple of key terms in a search engine with the words fact check to see if somebody has already picked up on what you are seeing. You can also read the comments, very often in the comments people will explain what is going on in the video.

Tip #2
If you are unsure about the video really do play the audio and look at the key features that make people people. Look at the eyes, look at the mouth. What are the mannerisms of the person that you are seeing in the video and do they match with what you understand about that person? Ask yourself if the voice of the person sounds like their real voice.

A lot of people when they see a GAN-generated photo for example, they have a gut feeling that something is wrong. They feel that they are not looking at a real person, but they can’t quite explain why. So, just really trust that feeling and start looking for those little signs that something is wrong. If it is a photo usually the best thing to look at are the earlobes. If a person has glasses look at the glasses. Eyebrows are not generally perfect if a photo is computer-generated and teeth are always a little off. Those are the things that I would look for.

Tip #3

The final tip is: do not share anything you are not sure of. Do not pass it on to your network.

We all have created a small online community around us, whether it is friends, family, acquaintances or sort of strangers that we met on the internet. And most of that community really trusts us. Even if you are not a public figure your friends are going to trust what you post. So, take that responsibility seriously and try to not pass on anything that you are unsure of to that online community.

If not deepfakes, what else will be our biggest challenge in disinformation?

My biggest worry when it comes to disinformation is not necessarily synthetic media. It is humans trying to convince other humans.

Look at the most insidious falsehoods that we see right now in the US: the QAnon mass delusion, as we call it at BuzzFeed News. People who believe in this are very often brought on board by other people they know. So what I really worry about is the continuing creation of online communities where people bring one another along for the ride, except the ride is extremely false.

I think that manipulated images and manipulated videos and fake news articles are all just tools. They are all parts of the problem. But the problem itself I think is a community problem and I definitely foresee that community problem growing beyond synthetic media.

Many thanks, Jane! If you are interested in learning more or have questions, please get in touch with us, either by commenting on this article or via our Twitter channel.

We hope you liked it! Happy Digging and keep an eye on our website for future updates!

 Don’t forget: be active and responsible in your community – and stay healthy!


From Rocket-Science to Journalism

In the Digger project we aim to implement scientific audio forensic functionalities in journalistic tools to detect both shallow- and deepfakes. At the Truth and Trust Online Conference 2020 we explained how we are doing this.

Today we presented the Digger project and our approach at the Truth & Trust Online Conference. We explained the audio forensic functionality we are looking into, and how it can help us detect video manipulation as well as deepfakes. The presentation is available via the conference website, but we did not want to keep it from you, so below you will find the full 11-minute presentation.

If you are interested in learning more or have questions, please get in touch with us, either by commenting on this article or via our Twitter channel.

We hope you liked it! Happy Digging and keep an eye on our website for future updates!

 Don’t forget: be active and responsible in your community – and stay healthy!


The dog that never barked

Deepfakes have the potential to seriously harm people’s lives and to erode people’s trust in democratic institutions. They also continue to make the headlines. How dangerous are they really?

Deepfakes, although characterized by some as “the dog that never barked”, do in fact have the potential to seriously harm people’s lives and to erode people’s trust in democratic institutions.

Deepfakes continue to make the headlines. The latest news at the time of writing this article is about Donald Trump’s Independence Day deepfake video, which also raised important legal and ethical issues, almost three years after the term “deepfake” was first coined in the news. Behind the headlines, synthetically generated media content (also known as deepfakes) has even more serious consequences for individual lives, and especially for the lives of women. Deepfakes are also expected to be increasingly weaponized; combined with other trends and technologies, they are expected to heighten security and democracy challenges in areas like cyber-enabled crime, propaganda and disinformation, military deception, and international crises.

“Technical approaches are useful until synthetic media techniques inevitably adapt to them. A perfect deepfake detection system will never exist.” (Sam Gregory, Program Director of WITNESS)

 

It’s a race

Researchers, academics, and industry are all working towards developing deepfake detection algorithms, but developments in the field occur both ways, and as new detection algorithms get better, so do available tools to create deepfakes. As Sam Gregory, program director of WITNESS puts it, “Technical approaches are useful until synthetic media techniques inevitably adapt to them. A perfect deepfake detection system will never exist”. 

Verification of synthetically generated media content is still part of traditional verification and fact-checking techniques and should be approached in the context of these existing methods. Even though technology cannot provide a yes-or-no answer to the question “Is this video fake?”, it can greatly aid journalists in the process of assessing a video’s authenticity. That is why we at the Digger team are working hard to provide journalists with tools that can help them determine whether a certain video is real or synthetic. Stay tuned for our how-to article coming up soon!

 

Don’t forget: be active and responsible in your community – and stay healthy!


ICASSP 2020 International Conference on Acoustics, Speech, and Signal Processing

Here is what we think are the most relevant upcoming audio-related conferences. And which sessions you should attend at the ICASSP 2020.

To keep up to date with the latest audio technology for our software development, we follow other researchers’ studies and usually visit many conferences. Sadly, this time we cannot attend them in person. Nevertheless, we can visit them virtually, together with you. Here is what we think are the most relevant upcoming audio-related conferences:

Let’s take a more detailed look at one of them:

ICASSP 2020 International Conference on Acoustics, Speech, and Signal Processing

Date: 4th – 8th of May, 2020
Location: online, see the live schedule at https://2020.ieeeicassp.org/program/schedule/live-schedule/

This is a list of panels we recommend during the ICASSP 2020:

Date: Tuesday 05th of May 2020

  • Opening Ceremony (9:30 – 10:00h)
  • Plenary by Yoshua Bengio on “Deep Representation Learning” (15:00 – 16:00h)
    • Note: may be fairly technical, for deep learning enthusiasts
    • Note: he is one of the fathers of deep learning

Date: Wednesday 06th of May 2020

Date: Thursday 07th of May 2020

We’re looking forward to seeing you there!

The Digger project aims:

  • to develop a video and audio verification toolkit, helping journalists and other investigators to analyse audiovisual content, in order to be able to detect video manipulations using a variety of tools and techniques.
  • to develop a community of people from different backgrounds interested in the use of video and audio forensics for the detection of deepfake content.


All sorts of video manipulation

What is the difference between a ‘face swap’, a ‘speed-up’ or even a ‘frame reshuffling’ in a video? At the end of the day, they are all manipulations of video content. We want to take a closer look at the different kinds of manipulation, whether it is audio changes, face swapping, visual tampering, or simply taking content out of context.

In Digger we look at synthetic media and how to detect manipulation in all its forms.

This is not a tutorial on how to manipulate video. We want to highlight the different technical sorts of manipulation and raise awareness, so that you might recognise one when it crosses your path. Let’s start with:

Tampering of visuals and audio

Do you remember the Varoufakis finger? Did he show it, or didn’t he?

This clip has been manipulated by pasting in a layer containing another person’s arm. It is possible to crop and add any element in a video.

The same goes for deleting specific parts of the audio track in a speech or conversation to mislead you. Be careful: background noises can also be added to change the whole context of a scene. It is therefore important to find the original version, so you can compare the videos with each other.

Synthetic audio and lip synchronisation

Imagine being able to say anything fluently in seven different languages, like David Beckham did.

It is incredible, but the larger part of this video is completely synthetic. They created a 3D model of Beckham’s face and reanimated it. That means a machine learned what David looks like and how he moves when speaking, in order to reproduce him saying anything in any language. One tip from Hany Farid: watch the mouth and lip movements and compare them with natural human behaviour.

Cloned voices are already offered online, so make sure you search for the original version (yes, again), trusted media reports or try and get access to an official transcript if it was a public speech. 

Shallowfakes or Cheapfakes

Just by slowing down or speeding up a video the whole context can change. In this example the speed of the video has been lowered. Nancy Pelosi, US Speaker of the House and Democrats Congresswoman, seems to be drunk in an interview.

To compensate for the lower-sounding voice, the pitch has been turned up. All this effort was made to make you believe that Nancy Pelosi was drunk during an interview.

In the case of Jim Acosta part of a video has been sped up in order to suggest that he is making an aggressive movement in the situation where a microphone is being taken away from him.

It shows that non-high-tech manipulations can also do harm and be challenging to detect. How can you detect low-tech manipulations? Again, find the original and compare. Try playing around with the speed in your own video player, for example with the VLC player.

Face swap or Body swap 

Imagine dancing like Bruno Mars or Beyoncé Knowles without any training: a dream come true.

This highly intelligent system captures the poses and motions of Bruno Mars and maps them onto the body of an amateur. Copying dance moves, arms and legs, torso and head all at once is still challenging for artificial intelligence. If you focus on the details, you will be able to see the manipulation. It is still far from perfect, but it is possible, and it is just a matter of time until the technology is trained better.

Synthetic video and synthetic voice

You can change and tamper with video and audio, but what happens when you do all of it in one video, when you can generate video completely synthetically? You could recreate a person who died many years ago. Please meet Salvador Dalí, anno 2019:

Hard to believe, right? Therefore, always ask yourself if what you see could be true. Check the source and search for more context on the video. Maybe a trustworthy media outlet already reported about it. If you cannot find anything, just do not share it.

The Liar’s Dividend

We also need to be prepared for people claiming that a video or audio recording is manipulated when it actually isn’t. This is called “the liar’s dividend”.

When accused of having said or done something that they actually said or did, liars may generate and spread altered sound or images to create doubt, or even claim that the authentic footage is a deepfake.

Make sure you have your facts checked. Ask colleagues or experts for help if needed and always watch a video more than twice. 

Have you recently watched a music video? Musicians seem to be among the first professional customers for the deepfake industry. Have a look, this is where the industry is currently being built up.

Did we forget any video manipulation techniques? Let us know and we will add them to our collection in this article.

The Digger project aims:

  • to develop a video and audio verification toolkit, helping journalists and other investigators to analyse audiovisual content, in order to be able to detect video manipulations using a variety of tools and techniques.
  • to develop a community of people from different backgrounds interested in the use of video and audio forensics for the detection of deepfake content.


Digital verification – Lessons learned from social distancing

Rules are in place to prevent the spread of the Coronavirus. One of them is called “social distancing” which helps to stop the transmission of Covid-19. What are the rules concerning Coronavirus information online?

Rules have been introduced across the globe about how to behave in order to prevent the spread of the Coronavirus. One of these rules is called “social distancing”, which helps to stop the transmission of Covid-19. It recommends we avoid crowds, take public transport at off-peak hours and keep physical distance from other people.

What are the rules concerning Coronavirus information online? Daily updates about the spread of Covid-19 and how to protect yourself are crucial and easy to find online, but not all of them are true. Sharing misinformation online is like refraining from social distancing: tempting, but it could harm people. So how can you contribute to stopping the transmission of dis- and misinformation about Covid-19? These are the rules on how to deal with dubious information online:

1. Ask yourself: Does this information make sense? 

Subsequent questions are: What sources does the information rely on? Where do the numbers come from?

2. Double-check the information with reliable sources such as quality journalism, fact-checkers and relevant experts. Here’s how to do that:

  • Google the claim using the main keywords or the headline.
  • Golden rule: reverse image search a photo or video.
  • Use Google Dorks (a search string that uses advanced search operators) to search on a specific news site or in a specific timeframe (see the example queries after this list).
  • Check Twitter lists with reliable sources on Covid-19, like these:
    • Tutorial: search your Twitter lists in TweetDeck
    • Here is a Twitter list curated by Journalism.Co with reliable journalists and media
    • Here is a Twitter list curated by Jeff Jarvis with medical experts
    • Here is a Twitter list curated by IFCN with national fact-checking organisations
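As an illustration of what such search strings can look like, here are two hypothetical example queries; the sites, keywords and dates are placeholders, not recommendations. The quotation marks, site:, before: and after: operators are standard Google search operators.

```
"chloroquine cure" site:reuters.com
"5G coronavirus" after:2020-03-01 before:2020-04-01
```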

3. Still nothing? Wait or reach out to your doctor.

Remember: Share only what is fact-checked with your family and friends in your FB, WhatsApp and other communities!

Verification of videos and synthetic media

For verification of videos there are some specific rules. Here are the most important ones:

Golden rule: reverse image search a video 

A reverse image search enables a simple and quick check of whether a video has been published online before, possibly in another context. This way you might also be able to retrieve the original source of the video. You can use the InVID plugin to select several thumbnails from the video and reverse search them with different tools like Google and Yandex, or you can take a screenshot and do the same via Google Images.
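If you prefer to script this step yourself, the sketch below shows the same idea in Python, assuming OpenCV (opencv-python) is installed; the file name and frame count are hypothetical, and this is only an illustration, not one of the Digger tools. It saves a handful of evenly spaced frames that you can then upload to a reverse image search engine.

```python
import cv2  # pip install opencv-python

def extract_thumbnails(video_path, out_dir, count=5):
    """Save `count` evenly spaced frames from the video as JPEG thumbnails."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    for i in range(count):
        # Jump to an evenly spaced position and grab a single frame
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(total * i / count))
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(f"{out_dir}/thumb_{i:02d}.jpg", frame)
    cap.release()

# Hypothetical file name, for illustration only
extract_thumbnails("suspicious_video.mp4", ".")
```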

Visual content verification

Most video manipulation is still visible to the naked eye. Look at the small details visible in the video. If you think you are watching a fake, you might want to check:

  • strange cuts and non-fluent frames, which can indicate manipulation,
  • whether the body actually fits the face of the person and whether the body language matches the facial expression,
  • whether the person in the video shows natural behaviour, e.g. blinking eyes, movement of the eyes, movement of the hands.

To make it a bit easier, you can step through the video frame by frame with the VLC player and check whether the colours and shadows change in a consistent way that makes sense to you. A small script can also point you to frames worth a closer look, as in the sketch below.
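The following is a minimal sketch of that kind of consistency check, again assuming Python with OpenCV and a hypothetical file name; it is not one of the Digger tools. It flags frames where the average brightness jumps abruptly, which can point to hard cuts or spliced segments you may want to inspect frame by frame.

```python
import cv2  # pip install opencv-python

def flag_brightness_jumps(video_path, threshold=15.0):
    """Print the frame indices where the average brightness changes abruptly."""
    cap = cv2.VideoCapture(video_path)
    prev_mean, idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mean = float(gray.mean())
        if prev_mean is not None and abs(mean - prev_mean) > threshold:
            print(f"Frame {idx}: mean brightness jumped from {prev_mean:.1f} to {mean:.1f}")
        prev_mean, idx = mean, idx + 1
    cap.release()

# Hypothetical file name, for illustration only
flag_brightness_jumps("suspicious_video.mp4")
```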

If you have a precise eye for detail, you can use the InVID/WeVerify verification plugin’s magnifier functionality, which enhances the quality of the zoomed area in video stills.

If there are shadows visible and you are really getting into it, you can determine at what time of day the video was filmed with tools like SunCalc. Here is a detailed tutorial on using shadows.

Technical video verification

The devil is in the detail, and manipulation technology keeps evolving. For well-made synthetic video you would need elaborate algorithms to check for manipulation; we are trying to develop those in the Digger project. What you can try is to focus on the audio: check whether the acoustics of the video correlate with the recorded scene. Is it outdoors, and are there matching background noises? Are people talking in the background, and does that match the video? And obviously, if the audio does not match the lip movements because of poorly implemented lip synchronisation, it is more likely a fake. Still not sure? Consult a forensic expert, like anyone on this deepfake forensics Twitter list.

Reliable sources

Social Media platforms like Facebook, Twitter, Youtube, Pinterest and more have taken steps against misinformation about the Coronavirus such as directing users to official sources when they search for Covid-19. 

Here are some helpful, trustworthy sources for every one of us:

  • World Health Organization – The WHO offers daily updates on the pandemic, guidance, and data on the spread.
  • FirstDraft Resources for Reporters – Guides on how to verify, and a searchable archive of Coronavirus debunks.
  • Sifting Through the Pandemic – Mike Caulfield’s simple and effective educational website teaches how to navigate online information. His approach runs counter to the news reader’s natural instincts. Mike has updated this post specifically for the Coronavirus.
  • Fighting the Infodemic – The #CoronaVirusFacts / #DatosCoronaVirus Alliance unites more than 100 fact-checkers around the world in publishing, sharing and translating facts surrounding the Coronavirus.

Be active and responsible in your community – and stay healthy!


About The Digger Team

The team working on the Digger Project consists of Fraunhofer IDMT (audio forensics technology), Athens Technology Center (product development) and Deutsche Welle (project lead and concept development). The project is co-funded by Google DNI.

This joint collaboration between a technology company, a research institute and the innovation unit of a broadcaster is quite unique. We combine different perspectives and expertise to arrive at an efficient and user-centered solution. This is also what verification is all about: collaboration.

 

Ruben Bouwmeester (Project Lead, DW):

What is your motivation to work on the Digger project?

The technology to create synthetic video content could be a threat to society, especially once it becomes available to a larger audience; the DeepNude app was just a wake-up call. Digger, to me, is a contribution to our quest for the truth: a toolset for journalists to verify online content and put it into perspective for the audience at large, and possibly some better understanding for those who want to learn more about deepfakes and how they affect society.

How can the Digger project help to detect manipulated or synthetic media?

Within Digger we are creating a toolset that ranges from visual video verification assistants to audio forensic tools for detecting video manipulation. Our challenge is to make sure these tools are user-centered and easy to understand and work with, so that future users can actually detect video manipulation easily.

 

Julia Bayer (Journalist, Project Manager, DW)

What is your motivation to work on the Digger project?

Manipulated videos like shallowfakes and synthetic media like deepfakes are a big challenge we journalists already have to deal with. Verification is a crucial part of our job, and the process we need for this is not changing, but it is getting more technical. With the Digger project we can support journalists and investigators and give them a technical toolset to debunk manipulated content and be prepared for the challenge.

How can the Digger project help to detect manipulated or synthetic media?

Digger helps to understand the architecture and technology behind synthetic media. We share our knowledge with the public and collaborate with experts to create the best accessible software for verifying manipulated video and synthetic media.

 

Patrick Aichroth (Head of Media Distribution and Security, Fraunhofer IDMT)

What is your motivation to work on the Digger project?

Free and democratic societies depend on science, rational argument and reliable data. Being able to distinguish between real and fake information is essential for our societies and for our freedom. However, fakes can be created more and more easily, and both “shallow” and “deep” fakes represent a growing threat that requires the use of various measures, including technical solutions. Detecting audio fakes and manipulation is one important element to that, and we are happy to provide and further improve our audio forensics technologies for this purpose.

How can the Digger project help to detect manipulated or synthetic media?

Digger provides a unique opportunity to explore and establish the combination of human- and machine-based detection of manipulated and synthetic material. Adapting and integrating automatic analysis into real-life verification workflows for journalists is a big challenge, but is also key for the future of content verification.

 

Luca Cuccovillo (Audio Forensic Specialist, Fraunhofer IDMT)

What is your motivation to work on the Digger project?

Audio forensics is a discipline which, when the fate of a person is at stake in a courtroom, helps the judges in their quest for truth. Deepfakes, however, have their largest impact not in the courtroom, but on the internet and in the news.

I decided to participate in the Digger project to provide my expertise not only to judges, but to society as a whole, with the hope that journalists and ordinary citizens may use our platform to defend themselves against fake content and ‘fake news’.

How can the Digger project help to detect manipulated or synthetic media?

There is a large gap between the analysis possibilities available to a courtroom and those available to citizens and journalists: most of the forensic methods available at present would be inconclusive or not applicable for the public.

Digger is the best chance we have to fix this issue and reduce the gap. If we succeed, plenty of analysis methods developed in recent years would become applicable, and we could finally fight disinformation on equal ground.

Stratos Tzoannos (Head Developer, ATC)

What is your motivation to work on the Digger project?

As a software developer, I realize that the power of technology is scary when it comes to deepfakes. On the other hand, it is a huge opportunity to understand and use the same technology for social good, trying to tackle such cases and save people from manipulation. From a technical point of view, I find it extremely challenging to work with advanced algorithms and tools to take a step forward in the fight against misinformation.

How can the Digger project help to detect manipulated or synthetic media?

The first step is to understand what deepfakes are and how they are synthesized. After gaining a good knowledge of this technology, the next step is to apply reverse engineering in order to produce detection algorithms for deepfakes. Moreover, the implementation of different tools to facilitate multimedia fact-checking could also add more weapons to the fight against malicious actions involving synthetic media.

 

Danae Tsabouraki (Project Manager, ATC)

What is your motivation to work on the Digger project?

Both from a personal and a professional perspective, I feel that manipulated and synthetic media content like deepfakes may pose a serious danger to democracy, but also to other aspects of our private and public life. At the same time, most journalists lack widely accessible commercial tools that can help them detect deepfakes. By participating in the Digger project along with Deutsche Welle and Fraunhofer IDMT, we aspire to contribute further to the ongoing effort of developing and providing technologies and tools that help journalists distinguish truth from fiction.

How can the Digger project help to detect manipulated or synthetic media?

Digger takes a new approach to manipulated and synthetic media detection by adding audio forensics into the mix. By using state-of-the-art audio analysis algorithms along with video verification features, Digger will provide journalists with a complete toolkit for detecting deepfakes. Digger also aims to bring together a community of experts from different fields to share their knowledge and experience, leading to a better understanding of the challenges we face.

The Digger project aims:

  • to develop a video and audio verification toolkit, helping journalists and other investigators to analyse audiovisual content, in order to be able to detect video manipulations using a variety of tools and techniques.
  • to develop a community of people from different backgrounds interested in the use of video and audio forensics for the detection of deepfake content.
