
Deepfake detection contest winner is still wrong one time in three


It’s still hard to detect fake videos of people and that doesn’t bode well for the upcoming US elections.

It’s hard for a computer to distinguish a genuine video from a deepfake, despite tons of practice, a competition run by Facebook has shown. Some 2,114 participants submitted more than 35,000 deepfake detection models to the competition; the average accuracy at spotting a fake was 70%, with the best model reaching 83%.

These scores were based on a public data set of 100,000 videos that Facebook had created for the competition, which was called the Deepfake Detection Challenge (DFDC). When a separate set of 100,000 previously unseen videos was used, along with some extra techniques to make them harder to judge, the best these models could manage was 65% accuracy.

AI has automated many tasks that were once hard for computers, such as transcribing human speech, screening spam emails and applying filters to selfies. But a downside of AI has been its use to create deepfake videos, which superimpose one person’s face and voice onto video of another. Such a video could be used for propaganda or fake news and go viral before it can be detected and debunked.

The DFDC was launched in September 2019 by Microsoft, Facebook and Amazon, along with universities including Oxford, MIT, Cornell and the University of California at Berkeley. The organisers paid 3,500 actors to record videos, which were then modified to produce the 100,000 public videos that contestants used to train their AI models. The actors were chosen to represent a variety of genders, ethnicities, ages and skin tones.

The DFDC results are significant, and troubling, in the run-up to the US elections scheduled for November. Though most deepfakes are unconvincing and easily detected, the mere existence of deepfake videos could lead voters to doubt the credibility of any video they come across and disrupt campaigns, experts say.

Can cues help detect deepfakes?

"As the research community looks to build upon the results of the challenge, we should all think more broadly and consider solutions that go beyond analyzing images and videos. Considering provenance and other signals may be the way to improve deepfake detection models," Facebook researchers said in a blog post.

Research into deepfakes is ongoing, and the competition organisers plan to release the raw source videos to support new work in the field. "This will help AI researchers develop new generation and detection methods to advance the state of the art in this field," Facebook said.
