Facebook wants to use AI to screen content, but fairness issues remain
One of Facebook Inc.'s biggest challenges in trying to stop the spread of fake news on its platform is finding good examples of truth and falsehood on which to train its algorithms.
'Often there is not common agreement on whether something is false news or not,' Joaquin Quinonero Candela said in a phone interview ahead of his talk at the F8 developer conference in San Jose, California. 'At our scale, there are not enough professional fact-checkers in the world to do it.'
Facebook has been under pressure from governments and users around the world for not doing enough to check the spread of misinformation, extremist propaganda and hate speech on its platform.
The company in April unveiled new artificial intelligence tools to help flag posts potentially containing false information by letting users point to trusted sources that contradict the post. But Candela acknowledged such a system could potentially be gamed, particularly in countries where most news sources have political biases, or by users teaming up to flag an accurate piece of information as false.
'This is a huge concern,' he said. 'It is very important not to let the bias flow into the labels themselves.'
Alongside developing its AI, Facebook has sought to address the issue by hiring thousands of human reviewers, often through contractors. But the company has been repeatedly caught out, for instance failing to block the live video transmission of the gunman who attacked two mosques in New Zealand in March, and then struggling to prevent the same video from being reposted.
Mark Zuckerberg, Facebook's chief executive officer, has repeatedly told US lawmakers that artificial intelligence would soon be able to automatically filter content from Facebook's two billion users and flag objectionable posts. But the technology today remains too immature to do this well.
Candela said that even if ground truth could be determined, Facebook needed to guard against bias in the way the algorithm classified content, and in how moderators chose to act when confronted with content flagged as false, extreme or hateful.
'Our community reviewers bring personal opinions and biases to the process themselves and we want to make sure all content is being treated the same no matter where it is coming from,' he said.
This ambition may prove too difficult for Facebook. Candela said that in deciding when the algorithm should flag content to a reviewer, the company wants to apply the same rules to all content. But he acknowledged that this definition of fairness, treating all groups the same statistically, might not satisfy all users.
For instance, certain language may be unacceptable for an outsider to use to refer to a member of a particular group, but suitable within that same group.
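Candela's notion of treating all groups the same statistically corresponds roughly to what machine-learning researchers call statistical or demographic parity. The sketch below is purely illustrative and is not Facebook's system; the toy classifier, blocklist and sample posts are all assumptions, shown only to make concrete what comparing flag rates across groups looks like.

```python
# Illustrative sketch only: a statistical-parity style check, not Facebook's system.
# The classifier, blocklist and sample data are hypothetical.
from collections import defaultdict

def flag_rates_by_group(posts, classifier):
    """Return the fraction of posts flagged for review, broken down by poster group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for post in posts:
        total[post["group"]] += 1
        if classifier(post["text"]):
            flagged[post["group"]] += 1
    return {group: flagged[group] / total[group] for group in total}

# Hypothetical classifier: flags any post containing a blocklisted word,
# regardless of who wrote it or in what context.
BLOCKLIST = {"slur_a", "slur_b"}
def toy_classifier(text):
    return any(word in BLOCKLIST for word in text.lower().split())

posts = [
    {"group": "A", "text": "perfectly ordinary post"},
    {"group": "A", "text": "contains slur_a"},
    {"group": "B", "text": "another ordinary post"},
    {"group": "B", "text": "also ordinary"},
]

print(flag_rates_by_group(posts, toy_classifier))
# {'A': 0.5, 'B': 0.0} -- unequal rates would fail a statistical-parity check
```

A check like this only measures whether groups are flagged at equal rates; as the in-group language example above suggests, equal rates alone do not capture whether individual decisions feel fair to the people affected.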
Candela said the company knows there are no easy answers to these questions. Referencing the time he spent learning the complex mathematics that underpins machine-learning algorithms and comparing it to the thorny problem of content moderation, Candela said, 'I feel like when I was doing super-complicated math, that felt a lot easier than this.'