Facebook cracks down on deepfake videos ahead of US election
Facebook is tightening the noose around deepfakes and other kinds of manipulated videos on its platform. The company said on Tuesday that it is strengthening its policy to tackle manipulated content that misleads people by looking real.
Going forward, Facebook will remove a video if it misleads someone into thinking that a subject in the video said words they never actually said. Facebook will also take down misleading videos that are the product of technologies such as AI or machine learning that "merges, replaces or superimposes content on to a video, making it appear to be authentic."
The social networking company also clarified that the updated policy doesn't apply to content that is "parody or satire, or video that has been edited solely to omit or change the order of words."
"This approach is critical to our strategy and one we heard specifically from our conversations with experts. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we're providing people with important information and context," Facebook said in a post.
Deepfakes, created using sophisticated techniques such as deep learning, are an emerging problem for social networking platforms. Experts believe miscreants could misuse deepfakes to propagate fake news.
In the run-up to the US presidential election later this year, social networking companies are under pressure to fight fake news, especially misinformation propagated through manipulated media.
Facebook last year announced a partnership with Microsoft and MIT to fight deepfakes. The companies are investing $10 million to develop open source tools to make it easier to spot doctored videos.