
Google, Facebook Should Use AI to Combat Conspiracy Theories

New research suggests chatbots could undo much of the damage caused by people going down rabbit holes on YouTube and Facebook. But will tech companies use them?

By: Bloomberg
Updated on: Apr 18, 2024, 14:36 IST

It's understandable to feel rattled by the persuasive powers of artificial intelligence. At least one study has found that people were more likely to believe disinformation generated by AI than by humans. The scientists in that investigation concluded that people preferred the way AI systems used condensed, structured text. But new research shows how the technology can be used for good.

A recent study conducted by researchers at the Massachusetts Institute of Technology has validated something many AI watchers long suspected: The technology is remarkably persuasive when reinforced with facts. The scientists invited more than 2,000 people who believed in different conspiracy theories to summarize their positions to a chatbot — powered by OpenAI's latest publicly available language model — and briefly debate them with the bot. On average, participants subsequently described themselves as 20% less confident in the conspiracy theory; their views remained softened even two months later.

Companies like Alphabet Inc.'s Google and Meta Platforms Inc., which rely heavily on ads for revenue, might one day use persuasive chatbots for advertising, but people in the ad industry tell me that's far off at best and unlikely to happen at all. For now, a clearer and better use case is tackling conspiracy theories, and the MIT researchers reckon there's a reason generative AI systems do it so well: They excel at countering the so-called Gish gallop, a rhetorical technique that tries to overwhelm a debate opponent with an excessive number of points and arguments, however thin the evidence. The term is named after the American creationist Duane Gish, whose rapid-fire debating style involved frequently changing topics; conspiracy theory believers tend to do the same.

“If you're a human, it's hard to debate with a conspiracy theorist because they say, ‘What about this random thing and this random thing?’” says David Rand, one of the MIT study's authors. “A lot of times, the experts don't look great because the conspiracy theories have all this crazy evidence.”

We humans are also worse than we think at engaging in debate generally. Ever had a family member at dinner passionately explain why they weren't vaccinating their kids? If so, their comments were probably met with earnest nods, silence or someone asking about dessert. That reluctance to argue can inadvertently allow friends and family members to become entrenched in their views. It may be why other research shows that conspiracy theory believers often overestimate how much other people agree with them, Rand says.

In the MIT study, OpenAI's GPT-4 Turbo, the large language model that the participants engaged with, was unflappable. (1)

In one example, a person who believed the US government was behind the Sept. 11, 2001, attacks told the chatbot about the collapse of 7 World Trade Center and then-President George W. Bush's muted reaction to the attacks while in a classroom with children, and cited “a lot of videos and shows” that backed up their views. The bot answered all those points in a single detailed and rational explanation. It started sympathetically, noting that it made sense to question what really happened on 9/11 given the complexity of the events that unfolded, before launching into a clinical, blow-by-blow repudiation of each issue in the prompt. After going back and forth with the bot twice more, the participant's confidence in the conspiracy theory had fallen to 40% from 100%.

Large language models are good at dissuading conspiracy-theory believers because they're armed with facts and a semblance of patience that most humans don't possess. So it stands to reason that online platforms like YouTube parent Alphabet and Facebook owner Meta should consider adopting them. After all, YouTube executives for years let conspiracy videos run rampant on the site in the pursuit of greater engagement, and misinformation about topics like abortion and the “rigged” 2020 US presidential election has been allowed to fester across Facebook. Meta didn't respond to a request for comment, but a spokesperson for Google said YouTube recently banned conspiracy theory content that justified violence, and that it also recommended videos to viewers from “authoritative sources.”

Meta's Chief Executive Officer Mark Zuckerberg has long argued that his platforms should not be arbiters of truth for what people say online. But given the impact his site has had on the democratic process in the US and elsewhere, both Meta and Google have a duty of care to combat the lingering effects of misinformation, particularly during a big election year. One way might be for both companies to show a dialogue box to Facebook or Google users who search for keywords like QAnon, flat Earth or chemtrails, inviting them to talk to Meta's Llama 2 language model, or Google's Gemini, about any of those issues. A Google spokeswoman said that when people searched for or watched YouTube videos on topics prone to conspiracy theories, like the moon landing or the flat Earth, the company surfaced “information panels” showing additional context from third parties. That, perhaps, could be a good place to add a friendly, rational chatbot.

“There is this general sense in the field and in the world that once someone has gone down the rabbit hole, it's too late,” says Rand. “This shows that is overly pessimistic.” Social media giants now have the tools to combat one of modern society's biggest scourges. They should seriously consider using them.   


(1) The MIT scientists took the novel approach of plugging the model directly into their online survey, which allowed all 2,190 study participants to talk to the bot under controlled conditions.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous.”

