
In-Depth Interview – Sam Gregory

Sam Gregory is Program Director of WITNESS, an organisation that works with people who use video to document human rights issues. WITNESS focuses on how people create trustworthy information that can expose abuses and address injustices. How is that connected to deepfakes? We talked to Sam about the development and challenges of new ways to create mis- and disinformation, specifically those that make use of artificial intelligence. We discussed the impact of shallow- and deepfakes, and the essential questions around developing tools to detect such synthetic media.

The following has been edited and condensed. 

Sam, what is your definition of a deepfake?

I use a broad definition of a deepfake. I use the phrase synthetic media to describe the whole range of ways in which you can manipulate audio or video with artificial intelligence. 

We look at threats, and in our search for solutions we look at how you can change audio, how you can change faces, and how you can change scenes, for example by removing or adding objects more seamlessly.

What is the difference between deepfakes and shallowfakes?

We use the phrase shallowfake in contrast to deepfake to describe what we have seen for the past decade at scale, which is people primarily miscontextualizing videos like claiming a video is from one place when it is actually from another place. Or claiming it is from one date when it is actually from another date. Also when people do deceptive edits of videos or do things you can do in a standard editing process, like slowing down a video, we call it a shallowfake.

The impact can be exactly the same but I think it’s helpful to understand that deepfakes can create these incredibly realistic versions of things that you haven’t been able to do with shallowfakes. For example, the ability to make someone look like they’re saying something or to make someone’s face appear to do or say something that they didn’t do. Or the really seamless and much easier ability to edit within a scene. All are characteristics of what we can do with synthetic media. 

We did a series of threat modeling and solution prioritization workshops globally, in Europe, the US, Brazil, Sub-Saharan Africa, and South and Southeast Asia. People kept saying that we have to view both types of fakes as a continuum and look for solutions across it. We also need to really think about the wording we use, because it may not make much difference to an ordinary person receiving a WhatsApp message whether it is a shallowfake or a deepfake. What matters is whether it is true or false.

Where do you encounter synthetic media the most at the moment?

Indisputably, the greatest range of malicious synthetic media is targeting women. We know that from the research that has been done by organizations like Sensity. We also have to remember that synthetic media is a category of non-malicious, but potentially malicious, usages. There is an explosion of apps that enable very simple creation of deepfakes. We are seeing deepfakes starting to emerge along those parody lines, a kind of appropriation of images. And at what point does software become readily available to lots of people to make moderately good deepfakes that could be used in satire, which is a positive usage, but can also be used in gender-based violence?

Where is the highest impact of deepfakes at the moment?

It is on the individual level: the impact on individual women and their ability to participate in the public sphere, tied to the increasing patterns of online and offline harassment that journalists and public figures face.

In our meetings with journalists, civic activists, movement leaders and fact-checkers, four threat areas were identified that people were really concerned about in each region:

  1. The liar's dividend, which is the idea that you can claim something is false when it is actually true, which forces people to prove that it is true. This happens particularly in places where there is no strong, established media. The ability to just call out everything as false benefits the powerful, not the weak. 
  2. There is no media forensics capacity amongst most journalists and certainly no advanced media forensics capacity. 
  3. Targeting of journalists and civic leaders using gender-based violence, as well as other types of accusations of corruption or drunkenness.
  4. Emphasis on threats from domestic actors. In South Africa we learned that the government is using facial recognition, harassing movement leaders or activists. 

These threats have to be kept in mind with the development of tools for detection. Are they going to be available to a community media outlet in the favelas in Rio facing a whole range of misinformation? Are they going to be available to human rights groups in Cambodia who know the government is against them? We have to understand that they cannot trust a platform like Facebook to be their ally.

Can synthetic media be used as an opportunity as well?

I come from a creative background. At WITNESS the center of our work is the democratization of video, the ability to film and edit. Clearly these are potential areas that are being explored commercially to create video without requiring so much investment.

I think if we do not have conversations about how we are going to find structured ways to respond to malicious usages, the positive usage of these technologies will be outweighed by the malicious usage. And I think there is a little bit too much of an "it will all work itself out" approach being described by many of the people in this space.

We need to look closely at what we expect of the people who develop these technologies: Are they making sure that they include a watermark? That they have a provenance tree that can show the original? Are they thinking about consent from the start?

Although I enjoy playing with apps that use these types of tools, I don't want to deny that I think 99% of the usage of these is malicious.

We have to recognize that the malicious part of this can be highly damaging to individuals and highly disruptive to the information ecosystem.

Should we use synthetic media in satire for media literacy? 

We have been running a series of webtalks called Deepfakery. One of the main questions is, what are the boundaries around satire? Satire is an incredibly powerful weapon of the weak against the powerful. For example, in the US we see the circulation of shallowfakes and memes made on sites that say very clearly at the top that this is satire. But of course no one ever sees that original site. They just see the content retweeted by President Trump, in which case it looks like a real claim.

So satire is playing both ways. I do think the value of satire is to help people understand the existence of this and to push them to sort of responsibly question their reaction to video.

I think the key question in the media literacy discussion is: how do we get people to pause? Not to dismiss everything but to give them the tools to question things. Give them the tools to be able to pause emotionally before they share.

From a technology point of view, what are we still missing to detect synthetic media?

Synthesis of really good synthetic media is still hard. Synthesizing a really good faceswap, or a convincing scene, is still hard. What is getting easier is the ability to use apps to create something that is impactful but perhaps not believable. I think sometimes people overestimate how easy it is to create a deepfake.

We’re not actually surrounded by convincing deepfakes at this point. 

A lot of our work has been thinking about detection and authentication. How do you spot evidence of media manipulation, which could be detection of a deepfake or detection of a shallowfake? How do you spot that a video has been miscontextualized and there is an original or an earlier version that has different edits? Then authentication: how do we trace a video over time to see its manipulations?

At the moment the detection of synthetic media is, and this is the nature of the technology, an arms race between the people who develop the detection tools and those who use them to test and enhance their new synthesis tools. The results of detection tools are getting better, but they are not yet at a level where you could do it at scale.

The meta question for us on detection is actually who to make this accessible to. If it is only the BBC, Deutsche Welle, France 24 and New York Times, that leaves out 90% of the world as well as ordinary people who may be targeted by this in an incredibly damaging way.

Do all journalists need to be trained in using advanced forensic technology?

One of the things we have learned as we have been working on deepfakes is that we shouldn't exclusively focus on media forensics. I think it is important to build the media forensic skills of journalists, and it is a capacity gap for almost every journalist to do any kind of media forensics with existing content. But I do not think we can expect that every journalist will have that skill set. We also need to consider how we invest in, for example, regional hubs of expertise.

The bigger backdrop is that we need to build a stronger set of OSINT skills in journalism. We need to be careful not to turn this purely into a technical question around media forensics at a deep level because it is a complicated and specialist skill set.

We identified a range of areas that need to be addressed to develop tools that plug into journalistic workflows. For example, journalists are not going to rely on tools easily. They do not need just a confidence number; they need software to explain why it is coming up with this result. So I think we need a constant interchange between journalists, researchers, tool developers and the platforms to say what the tools are that we really need as this gets more pervasive. And we need tools that potentially provide information to consumers and community-level activists to help them do the kind of rapid debunking and rapid challenging of the digital wildfire of rumors that journalists frankly often do not get to. Often community leaders are talking about things that circulate very rapidly in a favela or a township, and journalists never get to them in a timely way. So we need to focus on journalists, but also on community leaders.

What are your three tips for consumers to deal with synthetic media? 

  1. Pause before you share the content.
  2. Consider the intention behind why people are trying to encourage you to share it.
  3. Take an emotional pause when consuming media and try to understand its context. This is supported by a range of tools like the SIFT methodology or the SHEEP acronym.

I don’t think it is a good idea to encourage people to think that they can spot deepfakes.

The clearest and most consistent demand we heard, primarily from journalists and fact-checkers, is to show them whether a video is mis-contextualized so that they can then just clearly say: no, this video is from 2010 and not from 2020.

Therefore reverse video search, or finding similar videos, is pretty important, because that shallowfake problem remains the most prevalent.
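
As a rough illustration of the idea behind reverse video search, the sketch below samples keyframes from a clip and computes perceptual hashes that can be compared against frames of a suspected original. It is only an assumption of how such a comparison could be prototyped (the file names are placeholders, and it relies on the opencv-python, Pillow and imagehash libraries); it is not one of the Digger tools.

    # Minimal sketch: compare keyframes of two versions of a video via
    # perceptual hashes. Assumes opencv-python, Pillow and imagehash;
    # file names are placeholders. Not one of the Digger tools.
    import cv2
    import imagehash
    from PIL import Image

    def keyframe_hashes(path, every_n_seconds=5):
        """Sample one frame every N seconds and return its perceptual hash."""
        cap = cv2.VideoCapture(path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25
        step = max(1, int(fps * every_n_seconds))
        hashes, index = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0:
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                hashes.append(imagehash.phash(Image.fromarray(rgb)))
            index += 1
        cap.release()
        return hashes

    suspect = keyframe_hashes("suspect_clip.mp4")
    original = keyframe_hashes("candidate_original.mp4")

    # Small Hamming distances suggest the two clips share (re-used) footage.
    for i, h in enumerate(suspect):
        closest = min(h - o for o in original)
        print(f"suspect keyframe {i}: closest distance to original = {closest}")

In practice, the same kind of hashes can be matched against large indexes of previously published footage, which is roughly how reverse image and video search services surface earlier versions of a clip.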

Many thanks, Sam! Here's the 'Ticks or it didn't happen' report that Sam mentioned. If you are interested in learning more or have questions, please get in contact with us, either by commenting on this article or via our Twitter channel.

We hope you liked it! Happy Digging and keep an eye on our website for future updates!

 Don’t forget: be active and responsible in your community – and stay healthy!


All sorts of video manipulation

What is the difference between a ‘face swap’, a ‘speedup’ or even a ‘frame reshuffling’ in a video? At the end of the day they are all manipulations of video content. We want to take a closer look at the different kinds of manipulations, whether that is audio changes, face swapping, visual tampering, or simply taking content out of context.

In Digger we look at synthetic media and how to detect manipulations in all of its forms. 

This is not a tutorial on how to manipulate video. We want to highlight the different technical sorts of manipulation and raise awareness so that you might recognise one when it crosses your path. Let’s start with:

Tampering of visuals and audio

Do you remember the Varoufakis finger?! Did he show it, or didn’t he?

This clip has been manipulated by pasting in a layer with another person’s arm. It is possible to crop out and add any element in a video. 

The same goes for audio: specific parts of a speech or conversation can be deleted to mislead you. Be careful: background noises can also be added to change the whole context of a scene. Therefore it is important to find the original version so you can compare the videos with each other.

Synthetic audio and lip synchronisation

Imagine being able to say anything fluently in seven different languages, like David Beckham did.

It is incredible, but the larger part of this video is completely synthetic. They created a 3D model of Beckham’s face and reanimated it. That means a machine learned what David looks like and how he moves when speaking, in order to reproduce David saying anything in any language. One tip from Hany Farid: watch the mouth and lip movements and compare them with your own human behaviour. This is one example of English-speaking lip movements.
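
To make Farid’s tip a bit more concrete, here is a minimal sketch of how mouth movement could be tracked over time, so that the resulting curve can be compared with natural speaking behaviour. It assumes the MediaPipe FaceMesh model (landmark indices 13 and 14 are the inner upper and lower lip in its 468-point mesh) and a placeholder file name; it is an illustration, not a Digger or WITNESS tool.

    # Minimal sketch: track how far the mouth opens in each frame of a video,
    # so the resulting curve can be compared with natural speaking behaviour.
    # Assumes the mediapipe and opencv-python packages; the file name is a
    # placeholder. Illustration only, not a Digger or WITNESS tool.
    import cv2
    import mediapipe as mp

    def mouth_opening_series(path):
        """Return one mouth-opening value per frame in which a face is found."""
        cap = cv2.VideoCapture(path)
        series = []
        face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                                    max_num_faces=1)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue
            lm = result.multi_face_landmarks[0].landmark
            # Indices 13 and 14 are the inner upper and lower lip in the
            # 468-point FaceMesh model; their vertical gap approximates how
            # wide the mouth is open (in normalised image coordinates).
            series.append(abs(lm[14].y - lm[13].y))
        cap.release()
        face_mesh.close()
        return series

    opening = mouth_opening_series("speech_clip.mp4")
    if opening:
        print(f"analysed {len(opening)} frames, max mouth opening {max(opening):.3f}")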

Cloned voices are already offered online, so make sure you search for the original version (yes, again), trusted media reports or try and get access to an official transcript if it was a public speech. 

Shallowfakes or Cheapfakes

Just by slowing down or speeding up a video, the whole context can change. In this example the speed of the video has been lowered. Nancy Pelosi, US Speaker of the House and Democratic Congresswoman, seems to be drunk in an interview.

To cover up the lower-sounding voice, the pitch has been turned up. All this effort was made to make you believe that Nancy Pelosi was drunk during an interview.

In the case of Jim Acosta, part of a video has been sped up in order to suggest that he makes an aggressive movement as a microphone is being taken away from him.

It shows that low-tech manipulations can also do harm and be challenging to detect. How can you detect them? Again, find the original and compare. Try playing around with the speed in your own video player, for example with the VLC player. 
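
If you manage to find both versions of a clip, one quick, rough check for a re-timed video is to compare how long each version takes to play a similar number of frames. The sketch below only illustrates that idea using OpenCV’s metadata readers; the file names are placeholders, and this is not a Digger tool.

    # Minimal sketch: flag a possible slow-down or speed-up by comparing the
    # nominal frame rate, frame count and resulting duration of two versions
    # of the same footage. Assumes opencv-python; file names are placeholders.
    import cv2

    def timing(path):
        cap = cv2.VideoCapture(path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
        cap.release()
        seconds = frames / fps if fps else 0.0
        return fps, frames, seconds

    for label, path in [("original", "original.mp4"), ("suspect", "suspect.mp4")]:
        fps, frames, seconds = timing(path)
        print(f"{label}: {frames:.0f} frames at {fps:.2f} fps = {seconds:.1f} s")

    # If both files cover the same event with a similar frame count but clearly
    # different durations, the slower one has probably been re-timed.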

Face swap or Body swap 

Imagine dancing like Bruno Mars or Beyoncé Knowles without any training. A dream come true.

This highly intelligent system captures the poses and motions of Bruno Mars and maps them onto the body of an amateur. Copying dance moves, arms and legs, torso and head all at once is still challenging for artificial intelligence. If you focus on the details you will be able to see the manipulation. It is still far from perfect, but it is possible, and it is just a matter of time until the technology is trained better. 

Synthetic video and synthetic voice

You can change and tamper with video and audio, but what happens when you do all of it in one video and generate it completely synthetically? You could recreate a person who died many years ago. Please meet Salvador Dalí, anno 2019:

Hard to believe, right? Therefore, always ask yourself whether what you see could be true. Check the source and search for more context on the video. Maybe a trustworthy media outlet has already reported on it. If you cannot find anything, just do not share it.

The Liar’s Dividend

We also need to be prepared that people might claim that a video or audio is manipulated which actually isn’t. This is called “The Liar’s Dividend”. 

When accused of having said or done something that they actually said or did, liars may generate and spread altered sound or images to create doubt, or even claim that the authentic footage is a deepfake.

Make sure you have your facts checked. Ask colleagues or experts for help if needed and always watch a video more than twice. 

Have you recently watched a music video? Musicians seem to be among the first professional customers of the deepfake industry. Have a look: this is where the industry is currently being built up.

Did we forget any techniques for video manipulation? Let us know and we will add them to our collection in this article.

The Digger project aims:

  • to develop a video and audio verification toolkit, helping journalists and other investigators to analyse audiovisual content, in order to be able to detect video manipulations using a variety of tools and techniques.
  • to develop a community of people from different backgrounds interested in the use of video and audio forensics for the detection of deepfake content.


Digger – Detecting Video Manipulation & Synthetic Media

What happens when we cannot trust what we see or hear anymore? First of all: don’t panic! Question the content: Could that be true? And when you are not 100 percent sure, do not share, but search for other media reports about it to double-check.

How do professional journalists and human rights organisations do this? Every video out there could be manipulated. With video editing software anyone can edit a video.

It is challenging to verify content which has been edited, mislabeled or staged. What is even more complex is to verify content that has been modified. We roughly see two kinds of manipulation:

  1. Shallow fakes: manipulated audiovisual content (image, audio, video) generated with ‘low tech’ technologies like Cut & Paste or speed adjustments. 
  2. Deepfakes: artificial (synthetic) audiovisual content (image, audio, video) generated with technologies like Machine Learning.

Deepfakes and synthetic media are among the most feared phenomena in journalism today. The term describes audio and video files that have been created using artificial intelligence. Synthetic media is non-realistic media and is currently often referred to as deepfakes. Generated by algorithms, it is possible to create or swap faces, places and digital synthetic voices that realistically mimic human speech and facial expressions but do not actually exist and are not real. That means machine-learning technology can fabricate a video with audio to make people appear to do and say things they never did or said. Such synthetic media can be extremely realistic and convincing, but are actually artificial.

Detection of synthetic media

Face or body swapping, voice cloning and modifying the speed of a video are new forms of manipulating content, and the technology is becoming widely accessible.

At the moment the real challenge is the so-called shallow fakes. Remember the video where Nancy Pelosi appeared to be drunk during a speech? It turned out the video was just slowed down, with the pitch turned up to cover up the manipulation. Video manipulation and the creation of synthetic media are not the end of truth, but they make us more cautious before using content in our reporting. 

On the technology side it is an arms race. Forensic journalism can help detect altered media. DW’s Research & Cooperation team works together with ATC, a technology company from Greece, and the Fraunhofer Institute for Digital Media Technology to detect manipulation in videos. 

Digger – Audio forensics

In the Digger project we focus on using audio forensics technologies to detect manipulation. Audio is an essential part of video, and with the synthetic voice of a politician or the tampered sound of a gunshot a story can change completely. Digger aims to provide functionalities to detect audio tampering and manipulation in videos. 

Our approach makes use of:

  1. Microphone analysis: analysing the device used to record the audio. 
  2. Electrical Network Frequency (ENF) analysis: detecting edits (cut & paste) by extracting ENF traces from the recording (a minimal sketch of this idea follows below).
  3. Codec analysis: following the digital footprint that the codecs used leave in the audio.
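
As a very rough sketch of the ENF idea, the snippet below band-pass filters a recording around the European 50 Hz mains frequency and follows the dominant frequency over time; abrupt jumps or gaps in that track can hint at cut-and-paste edits. It is an illustrative assumption built on NumPy and SciPy, with a placeholder file name, and not the Digger implementation.

    # Minimal sketch of ENF (electrical network frequency) tracking: filter
    # the recording around the 50 Hz mains hum and follow its dominant
    # frequency over time. Abrupt jumps or gaps can hint at splices.
    # Assumes numpy and scipy; the file name is a placeholder. Illustration
    # only, not the Digger implementation.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfiltfilt, spectrogram

    rate, audio = wavfile.read("recording.wav")
    audio = audio.astype(np.float64)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)          # mix down to mono

    # Band-pass around the European mains frequency (use ~60 Hz in the Americas).
    sos = butter(4, [49.0, 51.0], btype="bandpass", fs=rate, output="sos")
    hum = sosfiltfilt(sos, audio)

    # Track the dominant frequency in consecutive two-second windows.
    freqs, times, power = spectrogram(hum, fs=rate, nperseg=int(rate) * 2)
    band = (freqs >= 49.0) & (freqs <= 51.0)
    enf_track = freqs[band][np.argmax(power[band, :], axis=0)]

    print(np.round(enf_track[:20], 2))      # first values of the ENF track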

Synthetic media in reality

Synthetic media technologies can have a positive as well as a negative impact on society.

It is exciting and scary at the same time to think about the ability to create audio-visual content the way we want it, and not the way it exists in reality. Voice synthesis will allow us to speak in hundreds of languages in our own voice, as the David Beckham video above shows.

Or we could bring the master of surrealism back to life:

With the same technology you can also make politicians say something they never said, or place people in scenes they have never been in. These technologies are used a lot in pornography, but the unimaginable impact is also showcased in short clips in which actors are placed in films they have never acted in. Possibly one of the most harmful effects is that perpetrators can easily claim “that’s a deepfake” in order to dismiss any contested information. 

How can the authenticity of information be proven reliably? This is exactly what we aim to address with our project Digger.

Stay tuned and get involved

We will publish regular updates about our technology and external developments, and interview experts to learn about ethical, legal and hands-on expertise. 

The Digger project is developing a community to share knowledge and initiate collaboration in the field of synthetic media detection. Interested? Follow us on Twitter @Digger_project and send us a DM or leave a comment below. 
