Digger Deepfake Detection
The Digger project aims to use visual verification and audio forensic technologies to detect both shallowfakes and deepfakes, or synthetic media as we call them.
Shallowfakes are manipulated audiovisual content (image, audio, video) created with ‘low-tech’ techniques like cut-and-paste or speed adjustments. Often taken out of context, they can be extremely convincing.
Deepfakes / synthetic media are artificial audiovisual content (image, audio, video) generated with technologies like machine learning, and they can be extremely realistic.
Here are what we think are the most relevant upcoming audio-related conferences, and the sessions you should attend at ICASSP 2020.
Rules are in place to prevent the spread of the coronavirus. One of them, “social distancing”, helps to stop the transmission of Covid-19. But what are the rules concerning coronavirus information online?
What is the difference between a ‘face swap’, a ‘speedup’ or even a ‘frame reshuffling’ in a video? At the end of the day, they are all manipulations of video content. We want to...
The team working on the Digger Project consists of Fraunhofer IDMT (audio forensics technology), Athens Technology Center (product development) and Deutsche Welle (project lead and concept development). The project is co-funded by Google DNI.
What happens when we can no longer trust what we see or hear? First of all: don’t panic! Question the content: could this be true? And if you are not 100 percent sure, do not share it; instead, search for other media reports about it to double-check.
Your opinion and expertise matter to us.
Please get involved via comments on the articles here or via Twitter. Thanks!
We would love to see you on Twitter!