
In-Depth Interview – Sam Gregory

Sam Gregory is Program Director of WITNESS, an organisation that works with people who use video to document human rights issues. WITNESS focuses on how people create trustworthy information that can expose abuses and address injustices. How is that connected to deepfakes?

We talked to Sam about the development and challenges of new ways to create mis- and disinformation, specifically those making use of artificial intelligence. We discussed the impact of shallow- and deepfakes, and the essential questions around developing tools to detect such synthetic media.

The following has been edited and condensed. 

Sam, what is your definition of a deepfake?

I use a broad definition of a deepfake. I use the phrase synthetic media to describe the whole range of ways in which you can manipulate audio or video with artificial intelligence. 

We look at threats, and in our search for solutions we look at how you can change audio, how you can change faces, and how you can change scenes, for example by removing or adding objects more seamlessly.

What is the difference between deepfakes and shallowfakes?

We use the phrase shallowfake, in contrast to deepfake, to describe what we have seen at scale for the past decade: people primarily miscontextualizing videos, claiming a video is from one place when it is actually from another, or from one date when it is actually from another. We also call it a shallowfake when people make deceptive edits or do things possible in a standard editing process, like slowing down a video.

The impact can be exactly the same, but I think it is helpful to understand that deepfakes can create incredibly realistic versions of things that you have not been able to do with shallowfakes. For example, the ability to make someone's face appear to do or say something that they did not do. Or the really seamless and much easier ability to edit within a scene. All are characteristics of what we can do with synthetic media.

We did a series of threat modeling and solution prioritization workshops globally. In Europe, the US, Brazil, Sub-Saharan Africa, and South and Southeast Asia, people keep saying that we have to view both types of fakes as a continuum and look for solutions across it. We also need to really think about the wording we use, because to an ordinary person receiving a WhatsApp message it may not make much difference whether it is a shallowfake or a deepfake. What matters is whether it is true or false.

Where do you encounter synthetic media the most at the moment?

Indisputably, the greatest range of malicious synthetic media is targeting women. We know that from the research that has been done by organizations like Sensity. We also have to remember that synthetic media as a category includes non-malicious but potentially malicious usages. There is an explosion of apps that enable very simple creation of deepfakes, and we are seeing deepfakes starting to emerge along those parody lines, a kind of appropriation of images. At what point does software become readily available for lots of people to make moderately good deepfakes that could be used in satire, which is a positive usage, but also in gender-based violence?

Where is the highest impact of deepfakes at the moment?

It is on the individual level: the impact on individual women and their ability to participate in the public sphere, related to the increasing patterns of online and offline harassment that journalists and public figures face.

In our meetings with journalists, civic activists, movement leaders and fact-checkers, we identified four threat areas that people in each region were really concerned about:

  1. The liar's dividend: the idea that you can claim something is false when it is actually true, which forces people to prove that it is true. This happens particularly in places where there is no strong established media. The ability to just call out everything as false benefits the powerful, not the weak.
  2. There is no media forensics capacity amongst most journalists, and certainly no advanced media forensics capacity.
  3. Targeting of journalists and civic leaders using gender-based violence, as well as other types of accusations, such as corruption or drunkenness.
  4. Emphasis on threats from domestic actors. In South Africa we learned that the government is using facial recognition and harassing movement leaders and activists.

These threats have to be kept in mind when developing detection tools. Are they going to be available to a community media outlet in the favelas of Rio facing a whole range of misinformation? Are they going to be available to human rights groups in Cambodia who know the government is against them? We have to understand that they cannot trust a platform like Facebook to be their ally.

Can synthetic media be used as an opportunity as well?

I come from a creative background. At WITNESS the center of our work is the democratization of video: the ability to film and edit. Clearly these are areas being explored commercially to create video without requiring so much investment.

I think if we do not have conversations about how we are going to find structured ways to respond to malicious usages, the positive usages of these technologies will be outweighed by the malicious ones. And I think there is a little too much of an "it will all work itself out" approach being described by many of the people in this space.

We need to look closely at what we expect of the people who develop these technologies: Are they making sure that they include a watermark? That they have a provenance tree that can show the original? Are they thinking about consent from the start?

Although I enjoy playing with apps that use these types of tools, I don't want to deny that I think 99% of the usage of these is malicious.

We have to recognize that the malicious part of this can be highly damaging to individuals and highly disruptive to the information ecosystem.

Should we use synthetic media in satire for media literacy? 

We have been running a series of web talks called Deepfakery. One of the main questions is: what are the boundaries around satire? Satire is an incredibly powerful weapon of the weak against the powerful. In the US, for example, we see the circulation of shallowfakes and memes made on sites that state very clearly at the top that this is satire. But of course no one ever sees that original site. They just see the content retweeted by President Trump, in which case it looks like a real claim.

So satire cuts both ways. I do think the value of satire is to help people understand that this exists and to push them to responsibly question their reaction to video.

I think the key question in the media literacy discussion is: how do we get people to pause? Not to dismiss everything, but to give them the tools to question things, the tools to be able to pause emotionally before they share.

From a technology point of view, what are we still missing to detect synthetic media?

Synthesizing really good synthetic media is still hard: a really good faceswap or a convincing scene takes effort. What is getting easier is using apps to create something that is impactful but perhaps not believable. I think people sometimes overestimate how easy it is to create a deepfake.

We’re not actually surrounded by convincing deepfakes at this point. 

A lot of our work has been thinking about detection and authentication. How do you spot evidence of media manipulation, whether that is detecting a deepfake or detecting a shallowfake? How do you spot that a video has been miscontextualized and that there is an original or an earlier version with different edits? Then authentication: how do we trace a video over time to see its manipulations?

At the moment the detection of synthetic media is, by the nature of the technology, an arms race between the people who develop the detection tools and those who use them to test and enhance their new synthesis tools. The results of detection tools are getting better, but they are not at a level where you could apply them at scale.

The meta question for us on detection is actually who to make this accessible to. If it is only the BBC, Deutsche Welle, France 24 and the New York Times, that leaves out 90% of the world, as well as ordinary people who may be targeted in an incredibly damaging way.

Do all journalists need to be trained in using advanced forensic technology?

One of the things we have learned while working on deepfakes is that we should not focus exclusively on media forensics. I think it is important to build the media forensics skills of journalists, and doing any kind of media forensics on existing content is a capacity gap for almost every journalist. But I do not think we can expect every journalist to have that skill set. We also need to consider how we invest in, for example, regional hubs of expertise.

The bigger backdrop is that we need to build a stronger set of OSINT skills in journalism. We need to be careful not to turn this purely into a technical question of deep-level media forensics, because that is a complicated and specialist skill set.

We identified a range of areas that need to be addressed to develop tools that plug into journalistic workflows. Journalists are not going to rely on tools easily: they do not need just a confidence number, they need software that explains why it is coming up with a result. So I think we need a constant interchange between journalists, researchers, tool developers and the platforms about which tools we really need as this gets more pervasive. We also need tools that provide information to consumers and community-level activists, to help them do the kind of rapid debunking and rapid challenging of the digital wildfire of rumors that journalists frankly often do not get to. Community leaders are often talking about things that circulate very rapidly in a favela or a township, and journalists never reach them in a timely way. So we need to focus on journalists, but also on community leaders.

What are your three tips for consumers to deal with synthetic media? 

  1. Pause before you share the content.
  2. Consider why people are trying to encourage you to share it.
  3. Take an emotional pause when consuming media and try to understand its context; a range of tools, like the SIFT methodology or the SHEEP acronym, support this.

I don’t think it is a good idea to encourage people to think that they can spot deepfakes.

The clearest and most consistent demand we heard, primarily from journalists and fact-checkers, is to show them whether a video is miscontextualized, so that they can clearly say: no, this video is from 2010 and not from 2020.

Therefore reverse video search, or finding similar videos, is pretty important, because that shallowfake problem remains the most prevalent.

Many thanks, Sam! Here's the 'Ticks or it didn't happen' report that Sam mentioned. If you are interested to learn more or have questions, please get in contact with us, either by commenting on this article or via our Twitter channel.

We hope you liked it! Happy Digging and keep an eye on our website for future updates!

 Don’t forget: be active and responsible in your community – and stay healthy!


In-Depth Interview – Jane Lytvynenko

Jane Lytvynenko is a senior reporter with BuzzFeed News, based in Canada, focusing primarily on online mis- and disinformation. You can check out her work here. We talked to Jane about how big the synthetic media problem actually is, and she gives us practical tips on how to detect deepfakes and how to handle disinformation.

Jane, what is your definition of a deepfake?

Oh, that is a tricky one. It is very different from a video that was just slowed down or manipulated in a very basic way, e.g. by cutting and pasting scenes.

For me a deepfake is using computer technology to make it look like a person said something or took an action they did not say or do.

Where do you mostly encounter deepfakes at the moment?

We do not encounter deepfakes a lot on the day-to-day. Because the technology is not widely accessible at the moment, we are much more worried about cheapfakes than about deepfakes; they spread much faster. We find deepfakes mostly in satire, and we have seen a lot of that come up in the last little while. We also see images generated by GANs (Generative Adversarial Networks) being used for fake personas: essentially faces generated by a computer, used to present a persona on social media that does not actually exist in real life.

Do you expect an increase in synthetic video during the US elections?

No. The reason I say no is that deepfakes are still fairly difficult to create for people who do not have a lot of tech knowledge. But cheapfakes you can make in iMovie; you can make convincing things using very basic tools.

There is of course always a fear in the back of my head of the ‘Big One’. What is going to be the big deepfake that fools everybody?

Another fear I have is a deepfake inserted among legitimate videos: one small portion being a deepfake, rather than the whole video.

Where do deepfakes have their biggest impact?  

Right now we see deepfakes mostly used for the harassment of women, in pornography in particular. They are primarily targeted at women, but sometimes also at men. Deepfake technology is also being used in Hollywood movies and, as I mentioned previously, for high-level production satire.

But when it comes to politics, we have seen a couple here and there, but in North America we have not seen deepfakes so convincing that they uproot the political conversation.

Should journalists be concerned?

Do all journalists need to be trained in verification? Or is that a task for experts?

I think it needs to be both. In order to send something to a researcher you need to first understand what it is you are looking for and why you are sending it to a researcher.

So we need basic training. We need reporters to understand what a deepfake looks like, what a GAN-generated image looks like, and to know the basics of verification for all types of content. If something extremely technical comes up, they can send it to the researchers.

For journalists it is very important to have that source. It is very important to have an expert opinion as a second verification; that is part of the practice of journalism. But if you do not know how to ask the right question, you are not going to be able to get that second opinion.

Which tools are you still missing?

There are a few things. The biggest problem is social media discovery, especially when it comes to video. If a video is going viral, it is fairly difficult to trace back where it came from. There are some tools that break a video down into thumbnails; you can then reverse image search them and try to find your way back. But for me right now, there is no way to tell where a video originated. Part of that is a lack of cross-platform search. Take Instagram stories: it is one of the most popular Facebook products right now, but stories disappear within 24 hours. If somebody downloads an Instagram story and cuts off the banner that says who posted it, they can upload it to Twitter or Facebook, and I will not know where the video came from. It does not allow reporters to see the bigger picture.

Right now we do not necessarily have the tools to look at video cross-platform and to look at these videos in terms of when they were shot, at what time, by whom, from what angle, at what location. It is a challenge that requires a lot of time that reporters just do not have.

The other thing is that we do not really have a strong way of mapping how a video spreads. When reporters do content analysis, they generally focus on text, because text is machine-readable. We have the tools to map out the biggest account that posted something, the smaller accounts that picked it up, and the audience that looked at it. We do not have similar tools for video, even though analyzing the spread of information is one of the most useful things we do as disinformation reporters. It allows us to see the key points where disinformation traveled, it allows us to understand where to look next time, and most importantly it allows us to understand which communities were most impacted.

Is collaboration with researchers and platforms essential in fighting disinformation?

Definitely collaboration with researchers. Platforms are a bit on-and-off in terms of what kind of information they are willing to provide us with. Sometimes they are willing to confirm findings, but they rarely help us do research independently.

This is where researchers, analysts and third parties that specialize in this are really key for reporters.

How should we report about disinformation and deepfakes?

At BuzzFeed News we always try to put correct information first: we repeat the accurate information before you get to the inaccurate information. There are two different approaches you can take. One is reporting on the content of the video; the other is reporting on the existence of the video, along with any information you have about where it came from, who posted it and why.

We generally focus on the second approach as the primary presentation of facts.

That is how we frame a lot of these things. The key aim of a manipulated video is to get its message across, and if we put the message at the top, it still gets across. What you want to do is describe the techniques, describe how they are attempting to manipulate the audience, and then explain the other part of the manipulation, which is the message itself.

Do you have a specific workflow in verifying digital content?

We do have best practices. Putting accurate information first is definitely the top priority. We also make sure never to put up an image or a screenshot without stamping it or crossing it out in some way. That gives a visual clue to anybody who comes across it that it is false. More importantly, if a search engine scrapes that image and somebody comes across it on Google or Bing, they are immediately able to see that it is false.

[Image: BuzzFeed News' "false" stamp on a debunked screenshot]
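For illustration, here is a minimal sketch of that kind of stamping, assuming the Pillow imaging library; the file names and font are placeholders, and this is not BuzzFeed's actual tooling:

```python
# Minimal sketch (not BuzzFeed's actual tooling): overlay the word FALSE
# on a screenshot before publishing it in a debunk.
# Assumes Pillow (pip install Pillow); file names are placeholders.
from PIL import Image, ImageDraw, ImageFont

shot = Image.open("claim_screenshot.jpg").convert("RGBA")
layer = Image.new("RGBA", shot.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(layer)
# Adjust the font path for your system
font = ImageFont.truetype("DejaVuSans-Bold.ttf", size=shot.width // 6)
draw.text((shot.width // 8, shot.height // 3), "FALSE",
          fill=(255, 0, 0, 180), font=font)
stamped = Image.alpha_composite(shot, layer.rotate(20))  # tilt the stamp
stamped.convert("RGB").save("claim_screenshot_false.jpg")
```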

In terms of workflow for verification, the key part is documentation. We archive everything we come across and take screenshots, essentially making sure we are able to retrace our steps. It is kind of a scientific process: we want to make sure that if anybody repeated our steps, they would come to the same conclusion. A lot of the time, when we do pure debunks, that is what we focus on. Not only does it increase trust and show how we got to the conclusion, it also teaches our audience some of the techniques we are using, so that they can use them in the future as well.

Can you still trust what you are seeing, or do you always have this critical view?

The short answer is yes, I always have that critical view, especially with videos. I really fear missing something, because sometimes the manipulation is so subtle that you cannot quite tell it is a manipulation at first or second look. If somebody is just scrolling their feed, they are not looking very closely at those details. They might not even listen to the audio and hear that it sounds off; they might read the subtitles instead. They might not notice that the mouth in a deepfake is a little bit imperfect, because those are small details. We are bombarded with information in our news feeds, so we might just not notice it.

I'm always extremely suspicious, and sometimes more suspicious than I should be. Sometimes I look at a video and think: was that slowed down by 0.7 of a second, or am I losing my mind?

What are the three main tips for a news consumer to detect synthetic media?

Tip #1
My first tip: if you see a video that is extremely viral, that sparks a lot of emotion, or that just feels a little bit off, pause.

From there, the number one thing you can do is search a couple of key terms in a search engine together with the words "fact check" to see if somebody has already picked up on what you are seeing. You can also read the comments; very often people will explain in the comments what is going on in the video.

Tip #2
If you are unsure about a video, really do play the audio and look at the key features that make people people. Look at the eyes, look at the mouth. Do the mannerisms of the person you are seeing in the video match what you understand about that person? Ask yourself if the voice sounds like their real voice.

A lot of people, when they see a GAN-generated photo, have a gut feeling that something is wrong. They feel that they are not looking at a real person, but they cannot quite explain why. Really trust that feeling and start looking for the little signs that something is wrong. In a photo, usually the best things to look at are the earlobes. If a person has glasses, look at the glasses. Eyebrows are generally not perfect in a computer-generated photo, and teeth are always a little off. Those are the things I would look for.

Tip #3

The final tip is: do not share anything you are not sure of. Do not pass it on to your network.

We have all created a small online community around us, whether it is friends, family, acquaintances or strangers we met on the internet. Most of that community really trusts us: even if you are not a public figure, your friends are going to trust what you post. So take that responsibility seriously and try not to pass on anything you are unsure of to that online community.

If not deepfakes, what else will be our challenge in disinformation?

My biggest worry when it comes to disinformation is not necessarily synthetic media. It is humans trying to convince other humans.

Look at the most insidious falsehoods we see right now in the US: the QAnon mass delusion, as we call it at BuzzFeed. People who believe in it are very often brought on board by other people they know. So what I really worry about is the continuing creation of online communities where people bring one another along for the ride, except the ride is extremely false.

I think manipulated images, manipulated videos and fake news articles are all just tools; they are all parts of the problem. But the problem itself is a community problem, and I definitely foresee that community problem growing beyond synthetic media.

Many thanks, Jane! If you are interested to learn more or have questions, please get in contact with us, either by commenting on this article or via our Twitter channel.



Video verification step by step

What should you do if you encounter a suspicious video online? Although there is no golden rule for video verification and each case may present its own particularities, the following steps are a good way to start. 

Pay attention and ask yourself these basic questions

Start by asking some basic questions: "Could what I am seeing here be true?", "Who is the source of the video and why am I seeing/receiving this?", "Am I familiar with this account?", "Has the account's content and reporting been reliable in the past?" and "Where is the uploader based, judging by the account's history?". Thinking through the answers to such questions may raise red flags about why you should be skeptical of what you see. Also, watch the video at least twice and pay close attention to the details; this remains your best shot at identifying fake videos, especially deepfakes. Careful viewers may be able to detect inconsistencies in the video (e.g. non-synchronized lips or irregular background noises) or signs of editing/manipulation (e.g. areas of a face that are blurry, or strange cuts in the video). Most video manipulation is still visible to the naked eye. If you want to read more about how to deal with dubious claims in general, see our previous blog post.

Capture and reverse search video frames

When encountering a suspicious image, reverse searching it on Google or Yandex is one of the first steps to take in order to find out whether it was used before in another context. For videos, although reverse video search tools are not commercially available yet, there are ways to work around that in order to examine the provenance of a video and see whether similar or identical videos have circulated online in the past. Tools like Frame-By-Frame enable users to view a video frame by frame and to capture and save any frame; if you have the VLC player installed, that works as well.
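If you prefer to script this step, here is a minimal sketch, assuming the OpenCV library; the file name and sampling interval are placeholders:

```python
# Minimal sketch: extract every 25th frame from a video so the stills
# can be fed into a reverse image search (Google, Yandex, TinEye, ...).
# Assumes OpenCV (pip install opencv-python); the file name is a placeholder.
import cv2

video = cv2.VideoCapture("suspicious.mp4")
frame_index = 0
saved = 0
while True:
    ok, frame = video.read()
    if not ok:                    # end of stream
        break
    if frame_index % 25 == 0:     # roughly one still per second at 25 fps
        cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
        saved += 1
    frame_index += 1
video.release()
print(f"Saved {saved} stills for reverse searching")
```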

Cropping certain parts of a frame or flipping it (flipping images is one method disinformation actors use to make it harder to find the original source through reverse image search) before doing a reverse search may sometimes yield unexpected results. Also, searching several reverse search engines (Google, Yandex, Baidu, TinEye, Karma Decay for Reddit, etc.) increases the possibility of finding the original video. The InVID-WeVerify plugin can help you verify images and videos with a set of tools including contextual clues, image forensics, reverse image search, keyframe extraction and more.
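Along the same lines, a minimal sketch of mirroring and cropping a captured still before reverse searching it, assuming a recent version of Pillow (file names and crop box are placeholders):

```python
# Minimal sketch: mirror and crop a captured still before reverse searching,
# to counter the horizontal-flip trick described above.
# Assumes Pillow >= 9.1 (pip install Pillow); file names are placeholders.
from PIL import Image

frame = Image.open("frame_0000.jpg")

# Undo a possible horizontal flip
frame.transpose(Image.Transpose.FLIP_LEFT_RIGHT).save("frame_0000_flipped.jpg")

# Crop a distinctive region (left, upper, right, lower) and search it separately
frame.crop((100, 50, 400, 300)).save("frame_0000_cropped.jpg")
```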

Examine the location where the video was allegedly filmed

Although in some instances it is very difficult or nearly impossible to verify the location where a video was shot, at other times the existence of landmarks, reference points or other distinct signs in the video may reveal its filming location. For example, road signs, shop signs, and landmarks like mountains, distinct buildings or other structures can help you corroborate the video's filming location.

Tools like Google Maps, Google Street View, Wikimapia and Mapillary can be used to cross-check whether the actual filming location matches the alleged one. Checking historical weather conditions for that particular place, date and time is another way to verify a video. Shadows visible in the video should also be cross-checked to determine whether they are consistent with the sun's trajectory and position on that particular day and time. SunCalc is a tool that helps users check whether shadows are correct by showing the sun's movement and sunlight phases during a given day and time at a given location. And sometimes it helps to stitch together several keyframes to narrow down the location; you may check this great tutorial by Amnesty.
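To give an idea of what SunCalc computes, here is a minimal sketch using the pysolar library (our assumption for illustration; SunCalc itself is a web tool), with example coordinates and time:

```python
# Minimal sketch of the SunCalc idea: compute the sun's position for an
# alleged place and time, then check it against the shadows in the video.
# Assumes pysolar (pip install pysolar); coordinates and time are examples.
from datetime import datetime, timezone
from pysolar.solar import get_altitude, get_azimuth

lat, lon = 52.5200, 13.4050                               # alleged location: Berlin
when = datetime(2020, 7, 4, 14, 0, tzinfo=timezone.utc)   # alleged recording time

print("Sun elevation:", get_altitude(lat, lon, when))     # degrees above horizon
print("Sun azimuth:  ", get_azimuth(lat, lon, when))      # degrees clockwise from north
# Shadows should point away from the azimuth; their length follows the elevation.
```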

Video metadata and image forensics 

Even though most social media platforms remove content metadata once a video or image is uploaded, if you have the source video you can use your computer's native file browser or tools like ExifTool to examine the video's metadata. Also, with tools like Amnesty International's YouTube DataViewer you can find out the exact day and time a video was uploaded to YouTube. If the above steps do not yield confident results and you are still unsure about the video, you can try some more elaborate ways to assess its authenticity. With tools like the InVID-WeVerify plugin or FotoForensics you can examine an image or a video frame for manipulation with forensics algorithms like Error Level Analysis (ELA) and Double Quantization (DQ). These algorithms may reveal signs of manipulation, like editing, cropping, splicing or drawing. Nevertheless, a level of familiarity with image forensics is required to understand the results and draw safe conclusions while avoiding false positives.
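As a quick illustration, here is a minimal sketch of reading metadata with ExifTool from a script. It assumes the exiftool binary is installed and on your PATH; the file name and the tags printed are just examples:

```python
# Minimal sketch: dump a source file's metadata as JSON via ExifTool.
# Remember that most social platforms strip this metadata on upload,
# so this is only meaningful for a source file.
import json
import subprocess

result = subprocess.run(
    ["exiftool", "-json", "source_video.mp4"],   # file name is a placeholder
    capture_output=True, text=True, check=True,
)
metadata = json.loads(result.stdout)[0]
for tag in ("CreateDate", "GPSPosition", "Make", "Model"):  # example tags
    print(tag, "->", metadata.get(tag, "not present"))
```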

A critical mind and an eye for detail

As mentioned above, there is no golden rule for verifying videos. The steps above are by no means exhaustive, but they can be a good start. As new methods of detection are developed, so are new manipulation methods, in a game that does not seem to end. The commercialization of the technology behind deepfakes through openly accessible applications like Zao or Doublicat is making matters worse, driving the "democratization of propaganda". What remains most important, independent of the tools used to detect manipulated media, is to approach any kind of online information (especially user-generated content) with a critical mind and an eye for detail. Traditional steps in the verification process, such as checking the source and triangulating all available information, remain central.

In the effort to tackle mis- and disinformation, collaboration is key. In Digger we work with Truly Media to provide journalists with a working environment where they can collaboratively verify online content. Truly Media is a collaborative platform, developed by Athens Technology Center and Deutsche Welle, that helps teams of users collect and organise content relevant to an investigation and decide together how trustworthy the information they have found is. To make the verification process as easy as possible for journalists, Truly Media integrates many of the tools and processes mentioned above, while offering a set of image and video tools that aid users in verifying multimedia content. Truly Media is a commercial platform; for a demo go here.

How to get started?

If you are a beginner in verification or if you would like to learn more about the whole verification process, we would suggest reading the first edition of the Verification Handbook, the Verification Handbook for Investigative Reporting, as well as the latest edition published in April 2020.

Stay tuned and get involved

We will publish regular updates about our technology and external developments, and interview experts to learn about ethical, legal and hands-on expertise.

The Digger project is developing a community to share knowledge and initiate collaboration in the field of synthetic media detection. Interested? Follow us on Twitter @Digger_project and send us a DM or leave a comment below.


The dog that never barked

Deepfakes have the potential to seriously harm people’s lives and to deter people’s trust in democratic institutions. They also continue to make the headlines. How dangerous are they really?

Deepfakes, although characterized by some as "the dog that never barked", do in fact have the potential to seriously harm people's lives and to erode people's trust in democratic institutions.

Deepfakes continue to make the headlines, the latest news at the time of writing being about Donald Trump's Independence Day deepfake video, which also raised important legal and ethical issues, almost three years after the term "deepfake" was first coined in the news. Behind the headlines, synthetically generated media content (also known as deepfakes) has even more serious consequences for individual lives, and especially for the lives of women. Deepfakes are also expected to be increasingly weaponized; combined with other trends and technologies, they are expected to heighten security and democracy challenges in areas like cyber-enabled crime, propaganda and disinformation, military deception, and international crises.


It's a race

Researchers, academics and industry are all working towards developing deepfake detection algorithms, but developments in the field occur both ways: as new detection algorithms get better, so do the available tools to create deepfakes. As Sam Gregory, Program Director of WITNESS, puts it: "Technical approaches are useful until synthetic media techniques inevitably adapt to them. A perfect deepfake detection system will never exist."

Verification of synthetically generated media content is still part of traditional verification and fact-checking, and it should be approached in the context of these existing methods. Even though technology cannot provide a yes-or-no answer to the question "Is this video fake?", it can greatly aid journalists in assessing the authenticity of suspect footage. That is why we at the Digger team are working hard to provide journalists with tools that can help them determine whether a certain video is real or synthetic. Stay tuned for our how-to article coming up soon!

 



Digital verification – Lessons learned from social distancing

Rules have been introduced across the globe about how to behave in order to prevent the spread of the Coronavirus. One of these rules is called "social distancing", which helps to stop the transmission of Covid-19: it recommends we avoid crowds, take public transport at off-peak hours and keep physical distance from other people.

What are the rules concerning Coronavirus information online? Daily updates about the spread of Covid-19 and protection against it are crucial and easy to find online, but not all of them are true. Sharing misinformation online is like refraining from social distancing: tempting, but it could harm people. So how can you contribute to stopping the transmission of dis- and misinformation about Covid-19? These are the rules for dealing with dubious information online:

1. Ask yourself: Does this information make sense? 

Subsequent questions are: What sources does the information rely on? Where do the numbers come from?

2. Double-check the information with reliable sources like quality journalism, fact-checkers and relevant experts. Here's how to do that:

  • Google the claim using the main keywords or the headline.
  • Golden rule: reverse image search a photo or video.
  • Use Google dorks (search strings that use advanced search operators) to search a specific news site or a specific timeframe, for example: coronavirus cure site:bbc.co.uk
  • Check Twitter lists with reliable sources on Covid-19, like these:
    • Tutorial: search your Twitter lists on TweetDeck
    • Here is a Twitter list curated by Journalism.Co with reliable journalists and media
    • Here is a Twitter list curated by Jeff Jarvis with medical experts
    • Here is a Twitter list curated by IFCN with national fact-checking organisations

3. Still nothing? Wait or reach out to your doctor.

Remember: share only what is fact-checked with your family and friends in your Facebook, WhatsApp and other communities!

Verification of videos and synthetic media

For verification of videos there are some specific rules. Here are the most important ones:

Golden rule: reverse image search a video 

A reverse image search is a simple and quick check of whether a video has been published online before, possibly in another context. This way you might also be able to retrieve the original source of the video. You can use the InVID plugin to select several thumbnails from the video and reverse search them with different tools like Google and Yandex, or take a screenshot and do the same via Google Images.

Visual content verification

Most video manipulation is still visible to the naked eye. Look at the small details in the video. If you think you are watching a fake, you might want to check:

  • whether there are strange cuts or non-fluent frames, which can indicate manipulation,
  • whether the body actually fits the face of the person, and whether the body language matches the facial expression,
  • whether the person in the video shows natural behaviour, e.g. blinking and movement of the eyes and hands.

To make it a bit easier, you can check the video frame by frame with the VLC player and see whether the colours and shadows change in a consistent way that makes sense to you.
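If you want to automate a first pass, here is a minimal sketch, assuming OpenCV, that flags abrupt changes in the colour distribution between consecutive frames. The threshold is a rough guess, and any hit only marks a place worth inspecting manually:

```python
# Minimal sketch: flag abrupt frame-to-frame changes (possible hard cuts or
# splices) by comparing colour histograms of consecutive frames.
# Assumes OpenCV (pip install opencv-python); the file name is a placeholder.
import cv2

video = cv2.VideoCapture("suspicious.mp4")
prev_hist = None
index = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    if prev_hist is not None:
        similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
        if similarity < 0.6:          # rough threshold for a sudden change
            print(f"Possible cut near frame {index}")
    prev_hist = hist
    index += 1
video.release()
```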

With a precise eye for detail, you can use the InVID/WeVerify verification plugin's magnifier functionality, which enhances the quality of the zoomed area in video stills.

If shadows are visible and you are really getting into it, you can determine at what time of day the video was filmed with tools like SunCalc. Here is a detailed tutorial on using shadows.

Technical video verification

The devil is in the detail, and manipulation technology evolves. For a well-made synthetic video you would need elaborate algorithms to check for manipulation; we are trying to develop those in the Digger project. What you can try is to focus on the audio: check whether the acoustics of the video correlate with the scene recorded. Is it outside, and thus with background noises? Are people talking in the background, and does that match the video? And obviously, if the audio does not match the lip movements because of poorly implemented lip synchronization, it is more likely a fake. Still not sure? Consult a forensic expert, like anyone on this deepfakes forensics Twitter list.

Reliable sources

Social media platforms like Facebook, Twitter, YouTube, Pinterest and others have taken steps against misinformation about the Coronavirus, such as directing users to official sources when they search for Covid-19.

Here are some helpful, trustworthy sources for every one of us:

  • World Health Organization – The WHO offers daily updates on the pandemic, guidance, and data on the spread. 
  • FirstDraft Resources for Reporters – Guides on how to verify, and a searchable archive of Coronavirus debunks.
  • Sifting Through the Pandemic – Mike Caulfield's simple and effective educational website teaches how to navigate online information. His approach runs counter to a news reader's natural instincts. Mike has updated this post specifically for the Coronavirus.
  • Fighting the Infodemic: The #CoronaVirusFacts Alliance – The #CoronaVirusFacts / #DatosCoronaVirus Alliance unites more than 100 fact-checkers around the world in publishing, sharing and translating facts surrounding the Coronavirus.

Be active and responsible in your community – and stay healthy!


Digger – Detecting Video Manipulation & Synthetic Media

What happens when we cannot trust what we see or hear anymore? First of all: don’t panic! Question the content: Could that be true? And when you are not 100 percent sure, do not share, but search for other media reports about it to double-check.

How do professional journalists and human rights organisations do this? Every video out there could be manipulated: with video editing software, anyone can edit a video.

It is challenging to verify content that has been edited, mislabeled or staged. Even more complex is verifying content that has been modified. We roughly see two kinds of manipulation:

  1. Shallowfakes: manipulated audiovisual content (image, audio, video) generated with 'low-tech' techniques like cut & paste or speed adjustments.
  2. Deepfakes: artificial (synthetic) audiovisual content (image, audio, video) generated with technologies like machine learning.

Deepfakes and synthetic media are among the most feared things in journalism today. The term describes audio and video files that have been created using artificial intelligence; synthetic media is non-realistic media, and it is often referred to as deepfakes at the moment. With algorithms it is possible to create or swap faces, places and digital synthetic voices that realistically mimic human speech and facial expressions but do not actually exist and are not real. That means machine-learning technology can fabricate a video with audio that makes people do and say things they never did or said. Such synthetic media can be extremely realistic and convincing, but are actually artificial.

Detection of synthetic media

Face or body swapping, voice cloning and modifying the speed of a video are new forms of manipulating content, and the technology is becoming widely accessible.

At the moment the real challenge is the so-called shallowfakes. Remember the video in which Nancy Pelosi appeared to be drunk during a speech? It turned out the video was just slowed down, with the pitch turned up to cover up the manipulation. Video manipulation and the creation of synthetic media are not the end of truth, but they make us more cautious before using content in our reporting.

On the technology side it is an arms race. Forensic journalism can help detect altered media. DW's Research & Cooperation team works together with ATC, a technology company from Greece, and the Fraunhofer Institute for Digital Media Technology (IDMT) to detect manipulation in videos.

Digger – Audio forensics

In the Digger project we focus on using audio forensics technologies to detect manipulation. Audio is an essential part of video: with the synthetic voice of a politician or the tampered noise of a gunshot, a story can change completely. Digger aims to provide functionalities to detect audio tampering and manipulation in videos.

Our approach makes use of three techniques (a sketch of the ENF idea follows the list):

  1. Microphone analysis: analysing the device used to record the audio.
  2. Electrical Network Frequency (ENF) analysis: detecting edits (cut & paste) in audio by extracting ENF traces.
  3. Codec analysis: following the digital footprint of the audio through its encoding history.
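To illustrate the second technique, here is a minimal sketch of the ENF idea, assuming SciPy/NumPy and an extracted WAV track; our actual pipeline is considerably more involved:

```python
# Minimal sketch of the ENF idea (not the Digger pipeline): track the mains-hum
# frequency around 50 Hz over time; jumps or gaps in the track can indicate
# cut & paste edits. Assumes SciPy/NumPy; the file name is a placeholder.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

rate, audio = wavfile.read("recording.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)               # mix down to mono

# Short-time spectrum with fine frequency resolution around the mains hum
freqs, times, spec = stft(audio, fs=rate, nperseg=rate * 4)
band = (freqs >= 49.5) & (freqs <= 50.5)     # 50 Hz grid; use ~60 Hz in the US
enf_track = freqs[band][np.abs(spec[band]).argmax(axis=0)]

# Abrupt jumps in the ENF track are candidate edit points
jumps = np.where(np.abs(np.diff(enf_track)) > 0.05)[0]
print("Candidate edit points (seconds):", times[jumps])
```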

Synthetic media in reality

Synthetic media technologies can have a positive as well as a negative impact on society.

It is exciting and scary at the same time to think about the ability to create audiovisual content in the way we want it, and not only in the way it exists in reality. Voice synthesis will allow us to speak in hundreds of languages in our own voice, as a well-known video featuring David Beckham demonstrated.

Or we could bring the master of surrealism back to life.

With the same technology you can also make politicians say things they never said, or place people in scenes they have never been in. These technologies are used a lot in pornography, but their unimaginable impact is also showcased in short clips in which actors are placed in films they never acted in. Possibly one of the most harmful effects is that perpetrators can simply claim "that's a deepfake" in order to dismiss any contested information.

How can the authenticity of information be proven reliably? This is exactly what we aim to address with our project Digger.

