Facebook to remove misinformation that leads to violence

Facebook has been roundly criticized over the way its platform has been used to spread hate speech and false information that prompted violence.

By: THE NEW YORK TIMES
Updated on: Aug 20, 2022, 00:03 IST
The logo for Facebook appears on screens at the Nasdaq MarketSite in New York's Times Square. (AP Photo)

Facebook, facing growing criticism for posts that have incited violence in some countries, said Wednesday that it would begin removing misinformation that could lead to people being physically harmed.

The policy expands Facebook's rules about what type of false information it will remove, and is largely a response to episodes in Sri Lanka, Myanmar and India in which rumors that spread on Facebook led to real-world attacks on ethnic minorities.

"We have identified that there is a type of misinformation that is shared in certain countries that can incite underlying tensions and lead to physical harm offline. We have a broader responsibility to not just reduce that type of content but remove it," said Tessa Lyons, a Facebook product manager.

Facebook has been roundly criticized over the way its platform has been used to spread hate speech and false information that prompted violence. The company has struggled to balance its belief in free speech with those concerns, particularly in countries where access to the internet is relatively new and there are limited mainstream news sources to counter social media rumors.

In Myanmar, Facebook has been accused by United Nations investigators and human rights groups of facilitating violence against Rohingya Muslims, a minority ethnic group, by allowing anti-Muslim hate speech and false news.

In Sri Lanka, riots broke out after false news pitted the country's majority Buddhist community against Muslims. Near-identical social media rumors have also led to attacks in India and Mexico. In many cases, the rumors included no call for violence, but amplified underlying tensions.

The new rules also apply to Instagram, one of Facebook's other big social media properties, but not to WhatsApp, where false news has also circulated. In India, for example, false rumors about child kidnappers spread through WhatsApp have led to mob violence.

In an interview published Wednesday by the technology news site Recode, Mark Zuckerberg, Facebook's chief executive, tried to explain how the company is trying to differentiate between offensive speech — the example he used was people who deny the Holocaust — and false posts that could lead to physical harm.

"I think that there's a terrible situation where there's underlying sectarian violence and intention. It is clearly the responsibility of all of the players who were involved there," Zuckerberg told Recode's Kara Swisher, who will become an Opinion contributor with The New York Times later this summer.

The social media company already has rules under which direct threats of violence and hate speech are removed, but it has been hesitant to take down rumors that do not directly violate its content policies.

Under the new rules, Facebook said it would create partnerships with local civil society groups to identify misinformation for removal. The rules are already in effect in Sri Lanka, and Lyons said the company hoped to introduce them in Myanmar soon, then expand elsewhere.

Zuckerberg's example of Holocaust denial quickly created an online furor, and on Wednesday afternoon he clarified his comments in an email to Swisher. "I personally find Holocaust denial deeply offensive, and I absolutely didn't intend to defend the intent of people who deny that," he said.

He went on to outline Facebook's current policies around misinformation. Posts that violate the company's community standards, which ban hate speech, nudity and direct threats of violence, among other things, are immediately removed.

The company has also started identifying posts that independent fact checkers have categorized as false. Facebook will "downrank" those posts, effectively moving them lower in each user's News Feed so that they are not widely promoted across the platform.

The company has also started adding information boxes under demonstrably false news stories, suggesting other sources of information for people to read.

But expanding the new rules to the United States and other countries where objectionable speech is still legally protected could prove tricky, as long as the company uses free speech laws as the guiding principles for how it polices content. Facebook also faces pressure from conservative groups that argue the company is unfairly targeting users with a conservative viewpoint.

When asked in an interview how Facebook would distinguish misinformation that could lead to harm and should be removed from material it would simply downrank as objectionable, Lyons said, "There is not always a really clear line."

"All of this is challenging — that is why we are iterating," she said. "That is why we are taking serious feedback."


First Published Date: 19 Jul, 17:07 IST