Revealed: deepfake contest run by Facebook shows that top algorithm was only able to spot a doctored video 65% of the time
- A contest found the best algorithm had a 65 percent success rate
- More than 2,000 participants entered, submitting more than 35,000 algorithms for testing
- Facebook will release the winning algorithm as open-source code for researchers
- It will continue to work on its own algorithm as well
Algorithms designed as part of a contest run by Facebook struggled to reliably identify digitally manipulated videos called deepfakes.
According to the company, which recently announced the results of its contest, the best performing algorithm in its competition – which had more than 2,000 participants – was able to identify ‘challenging real world examples’ of deepfakes about 65 percent of the time on average.
While the results of the contest are far from perfect, Facebook’s chief technology officer, Mike Schroepfer, told journalists in a press call that the contest exceeded expectations.
‘Honestly the contest has been more of a success than I could have ever hoped for,’ said Schroepfer according to The Verge.
The winning technology will be released to the public as open-source code for researchers.
In the contest, more than 2,000 participants submitted more than 35,000 different algorithms, which were tasked with identifying manipulated videos from a data set of more than 100,000 clips.
The clips themselves were created by 3,000 actors hired by Facebook who were told to recreate conversations and other naturalistic movements.
As noted by The Verge, when evaluated on clips from the training data, the algorithms had about an 82 percent detection rate, but when presented with ‘black box’ data they hadn’t previously seen, the detection rate fell to 65 percent.
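To make the 82 percent versus 65 percent comparison concrete, here is a minimal sketch (not Facebook’s actual evaluation code; all names and data below are illustrative) of how a detection rate is computed: the fraction of clips an algorithm labels correctly, measured separately on familiar and unseen data.

```python
def detection_rate(predictions, labels):
    """Fraction of clips where the predicted label (True = deepfake)
    matches the ground-truth label."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# Illustrative numbers only: an algorithm that scores well on clips
# drawn from the public training set can drop sharply on a held-out
# 'black box' set it has never seen.
seen_preds    = [True, True, False, False, True]
seen_labels   = [True, True, False, False, False]  # 4 of 5 correct
unseen_preds  = [True, False, False, True, False]
unseen_labels = [False, False, True, True, True]   # 2 of 5 correct

print(detection_rate(seen_preds, seen_labels))      # 0.8
print(detection_rate(unseen_preds, unseen_labels))  # 0.4
```

The gap between the two numbers is the point of the ‘black box’ test: it measures whether a detector generalizes beyond the examples it was built against.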
Facebook says it has its own technology under development that it won’t release to the public for fear that it could be reverse engineered.
‘We have deepfake detection technology in production and we will be improving it based on this contest,’ Schroepfer said on the call, according to The Verge.
Facebook insists that though deepfakes aren’t currently a pressing issue, the technology is meant as a preventative measure against manipulation in the future.
‘The lesson I learned the hard way over the last couple of years, is I want to be prepared in advance and not be caught flat-footed,’ said Schroepfer according to The Verge.
‘I want to be really prepared for a lot of bad stuff that never happens rather than the other way around.’