Fairness Flow: Facebook builds new system for AI neutrality
Facebook has built a system called Fairness Flow that can measure potential biases for or against particular groups of people.
Facebook wants to ensure that its Artificial Intelligence (AI) systems treat everyone neutrally, so that nobody is discriminated against in anything they do on the platform - from receiving job recommendations to having posts removed for violating the social network's policies.
The company has built a system called Fairness Flow that can measure potential biases for or against particular groups of people, research scientist Isabel Kloumann said at Facebook's F8 developer conference on Wednesday, CNET reported.
"We wanted to ensure jobs recommendations weren't biased against some groups over others," Kloumann said.
Facebook also announced that it was using AI to remove posts from its platform that involve hate speech, nudity, graphic violence, terrorist content, spam, fake accounts and suicide. "We view AI as a foundational technology, and we've made deep investments in advancing the state of the art through scientist-directed research," Facebook said in a statement on Wednesday.
At F8, its AI research and engineering teams shared a recent breakthrough: the teams successfully trained an image recognition system on a data set of 3.5 billion publicly available photos, using the hashtags on those photos in place of human annotations.
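The exact pipeline Facebook used is not described in the announcement, but the core idea - treating a photo's hashtags as noisy training labels instead of human annotations - can be sketched as follows. The hashtag-to-label mapping here is a made-up example, not Facebook's taxonomy.

```python
# Hypothetical mapping from hashtags to class labels (illustrative only):
HASHTAG_LABELS = {
    "#dog": "dog",
    "#puppy": "dog",
    "#cat": "cat",
    "#kitten": "cat",
}

def weak_labels(hashtags):
    """Map a photo's hashtags to candidate class labels, dropping unmapped tags."""
    labels = {HASHTAG_LABELS[h] for h in hashtags if h in HASHTAG_LABELS}
    return sorted(labels)

# A photo tagged #puppy and #sunset yields the single weak label "dog";
# the unmapped #sunset tag contributes nothing.
labels = weak_labels(["#puppy", "#sunset"])  # ["dog"]
```

Because such labels are noisy (people tag photos inconsistently), this approach trades label quality for scale - the 3.5-billion-photo dataset is far larger than anything humans could annotate by hand.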
"We've already been able to leverage this work in production to improve our ability to identify content that violates our policies," the statement added.
The announcements came even as the company found itself under increased scrutiny over its data protection practices. On the inaugural day of the two-day developer conference, Facebook CEO Mark Zuckerberg promised more steps to stop abuse of its services.