Can Social Media Fix Its Misinformation Problem?

A Q&A with researcher Renee DiResta on how anti-vax campaigners took over Facebook and what government regulators can learn from the “Birds Aren’t Real” fake-conspiracy theory phenomenon.

By Bloomberg | Updated on: Aug 21 2022, 22:48 IST
In conversation with Renee DiResta, technical director of the Stanford Internet Observatory. (REUTERS)

This is one of a series of interviews by Bloomberg Opinion columnists on how to solve the world's most pressing policy challenges. It has been edited for length and clarity. This interview was broadcast Dec. 15 on Twitter Spaces.

Parmy Olson: Among the biggest public-policy questions facing the technology industry is whether social-media companies should be held accountable for the spread of harmful content online. You're the technical director of the Stanford Internet Observatory and have spent years studying how misinformation and conspiracy theories gain traction, including Russia's influence campaign during the 2016 U.S. presidential election. It would be great to learn a little bit about how you got into this area of research. You previously worked in venture capital and then helped run a logistics startup in San Francisco. Then, almost as a side hobby, you started tracking misinformation online and were advising Congress. How did you develop this obsession?

Renee DiResta, technical director, Stanford Internet Observatory: By accident, is the honest answer. I was a derivatives trader for about seven or eight years. When I decided I was done with Wall Street, I came out to Silicon Valley and got a job as a junior venture capitalist. This was the age of mining data, and the job turned into seeing what entrepreneurs were doing with that tech. So I started doing data science at night for fun. Twitter had gotten quite popular by that point, and I thought we could do network analysis of conversations and understand how ideas spread.
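
[For readers who want a concrete picture of the kind of network analysis DiResta describes, here is a minimal sketch in Python. The accounts and retweet pairs are invented for illustration, and the networkx library is assumed; this is not her actual tooling.]

```python
# Build a directed graph from retweet pairs and look at which accounts
# ideas spread out from. The input format is illustrative, not a real
# Twitter API schema.
import networkx as nx

retweets = [
    ("user_a", "influencer_1"),   # user_a retweeted influencer_1
    ("user_b", "influencer_1"),
    ("user_c", "user_a"),
    ("user_d", "influencer_2"),
]

G = nx.DiGraph()
G.add_edges_from(retweets)

# Accounts that are retweeted most often (highest in-degree) are the hubs
# a given idea fans out from.
spreaders = sorted(G.in_degree(), key=lambda pair: pair[1], reverse=True)
print(spreaders[:3])
```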

This was when the Disneyland measles outbreak happened. I was trying to put my child into preschool and so started looking at measles vaccination rates. I was really troubled by this growing use of what were called “personal belief” exemptions. I felt that you should not be able to just opt your child out of getting a measles shot, particularly for public school. So I became an activist and started mapping the anti-vaccine conversation online. I worked with a data scientist and we looked at conversations on Twitter around a particular bill that was moving through the California legislature to eliminate vaccine opt-outs. That was what started it all.

PO: For the data nerds among us, how did you map that data? Do you put it all in a spreadsheet or a Google doc?

RD: There was a tool called Scale Model that allowed you to see clusters of [Twitter] accounts linked by a particular type of interest, then analyze them by who they were following, to understand relationships between the accounts themselves. This was called “social listening” and it was a thing that companies were beginning to use to understand conversations about their brands.
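
[As an illustration of that clustering idea, here is a small Python sketch: treat "who follows whom" as edges in a graph and let a community-detection algorithm pull out interest clusters. The accounts and follow edges are invented, and networkx is assumed; Scale Model's actual method is not documented here.]

```python
# Cluster accounts by follow relationships so that groups organized around
# a shared interest fall out as communities. All names are made up.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

follows = [
    ("parent_1", "vax_skeptic_blog"), ("parent_2", "vax_skeptic_blog"),
    ("parent_1", "parent_2"),
    ("nurse_1", "public_health_org"), ("nurse_2", "public_health_org"),
    ("nurse_1", "nurse_2"),
]

G = nx.Graph()  # treat a follow as an undirected tie for clustering
G.add_edges_from(follows)

for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"cluster {i}: {sorted(community)}")
```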

PO: What happened when you started following these anti-vax groups yourself?

RD: So when I started following these groups on Twitter, the recommendation engine started promoting not only anti-vaccine content to me but also chemtrails groups and then Flat Earth groups. And I thought, “You've gotta be kidding me. Are there really Flat Earthers in 2015?” The answer, it turns out, is yes. As time went on, it got a lot more disturbing. I got Pizzagate groups, and then following that, I got QAnon groups. What's happening is you have systems creating links between people based on an evaluation of similarity between them, in either their behavior, their interests, what they click on and who they know. The system facilitates connection where no connection would've been made previously. That was the real “Oh” moment. That sensationalism was not only hyper-visible, but also very easily gameable.
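
[A toy version of the similarity-based recommendation DiResta is describing, sketched in Python: represent each user by the groups they have joined, find the most similar other user, and suggest that user's remaining groups. The users, groups and memberships are invented; real recommendation systems are far more elaborate.]

```python
# If alice and bob overlap on two fringe interests, bob's third interest
# gets suggested to alice -- the connection that wouldn't otherwise be made.
import numpy as np

groups = ["anti_vax", "chemtrails", "flat_earth", "gardening"]
memberships = {                      # 1 = member of that group
    "alice": np.array([1, 1, 0, 0]),
    "bob":   np.array([1, 1, 1, 0]),
    "carol": np.array([0, 0, 0, 1]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(user):
    vec = memberships[user]
    # most similar other user by interest vector
    neighbor = max((u for u in memberships if u != user),
                   key=lambda u: cosine(vec, memberships[u]))
    # suggest groups the neighbor is in but the user is not
    return [g for g, mine, theirs in zip(groups, vec, memberships[neighbor])
            if theirs and not mine]

print(recommend("alice"))  # -> ['flat_earth']
```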

PO: So in effect, these social platforms were creating whole new networks. It seems that many of them exist now as Facebook Groups. People talk a lot about Facebook's News Feed and how it recommends potentially toxic content, but how big an issue is the News Feed versus Groups when it comes to misinformation?

RD: It's hard to separate them. It used to be that when you joined Facebook, you brought your real social network, right? People you knew in real life, people in your church. What changed with Groups was that Facebook recognized that based on interests, it could connect people into communities. It could push people into these groups. Oftentimes the groups were very highly active; if you joined a group, you probably really cared about the topic. And those posts would dominate your news feed.

For many years, the recommendation engines were not really cognizant of what they were suggesting. If people joined and formed connections, that was seen as a positive thing. You know, Facebook was “connecting the world.” But Facebook was also recommending these deeply toxic communities that began to veer into the realm of advocating violence or becoming quite cult-like. Facebook's internal research showed that 64% of people who joined extreme groups did so through the recommendation engine.

PO: If Facebook is doing so much of the work connecting these groups, do the groups really need to do much in terms of coordinating an influence campaign? Can they just let the recommendation system do the work for them?

RD: Well, starting in about 2019, Facebook started to remove certain topics from the recommendation engine. Anti-vaccine groups were no longer recommended. So in 2019, we started to see some changes. The challenge, though, is that the networks are established. The connections are made, the communities are formed and so their dismantling is [seen] as an act of censorship. Some of these communities just go and constitute themselves on a platform like Telegram. What do you do with those ingrained connections?

PO: It's like once you've put the cream into the coffee and stirred it around, you can't take the cream out anymore. This is a good moment to segue to the topic of solutions. I'm going to start with something a little bit irreverent, but have you heard of the “Birds Aren't Real” phenomenon?

RD: (Laughs) I have. It was a group of younger people who created this conspiracy theory satire that birds weren't real, that the government had secretly replaced them with surveillance drones. It gently mocks some of the foundational tropes of many, many conspiracy theories. It attempts to call attention to the ridiculousness of certain types of conspiracy theories.

PO: And it's become hugely popular. It strikes me as a cultural approach to attacking the problem. Do you think that could be as effective as something like regulation or changes by the social-media companies themselves?

RD: I don't think regulation is the direction to look for if we want to defuse conspiratorial beliefs. There are deeply problematic free speech implications for the government weighing in on this sort of thing. [But] there are ways to use innovative content where you're kind of in on the joke. I think that actually is a very good educational tool. My eight-year-old son's school sent a module on how to make sure your kids don't believe everything they see on YouTube. It's good that we have media literacy in elementary school now.

PO: What about the efforts tech companies have made? Facebook uses machine learning algorithms to flag hate speech and bullying behavior, along with thousands of human moderators. Do they just need to hire way more human moderators to help?

RD: There are some real trade-offs there. When speech is taken down that shouldn't have been taken down, even if it was taken down by AI, it still precipitates a feeling that you've been censored. What is the implication of that versus leaving more things up? Where should that line be? I don't think we have a particularly refined understanding of that. Back in 2015 and 2016, many of us thought maybe more fact checks would help. I am much less confident in that belief now. We've reached a point where there's a politicization of the fact check itself.

The conversation around content moderation is often quite binary. “Do we take it down or leave it up?” But there are other interventions the platforms have at their disposal. There's also deciding what to put into a recommendation engine, what to surface and how to surface a trend. Moderation efforts have to be tied to a degree of harm; if the false information is believed, what is the impact of that?
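
[To make the "more options than take down or leave up" point concrete, here is a hypothetical sketch in Python of interventions graded by an assessed level of harm. The tiers, thresholds and harm score are invented for illustration and do not describe any platform's actual policy.]

```python
# Graded interventions: leave low-harm content alone, add context to
# mid-harm content, stop amplifying it before removing it outright.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    harm_score: float  # 0..1, from some upstream classifier or human review

def intervention(post: Post) -> str:
    if post.harm_score >= 0.9:
        return "remove"                        # concrete, imminent harm
    if post.harm_score >= 0.6:
        return "exclude_from_recommendations"  # leave up, stop amplifying
    if post.harm_score >= 0.3:
        return "label_with_context"            # attach a fact-check or context panel
    return "no_action"

print(intervention(Post("dubious health claim", harm_score=0.7)))
```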

PO: It's interesting that you say that. The U.K.'s upcoming Online Safety Bill wouldn't target specific content but would force companies like Facebook to carry out risk assessments on harm caused by content on their sites. Is that a reasonable way to tackle harmful content without infringing free speech?

RD: Yes. I think the content-agnostic approach is right. In the U.S., we've not really had any legislation. It's remarkable to have an industry with so much power and so little consequence. This question of how we preserve freedom of expression without having the government weighing in on content, that's the sort of structural direction that we need to be looking at. One U.S. bill that I'm personally most excited about is the Platform Accountability and Transparency Act. [It would allow scientists to apply to access data on how, for instance, social media companies' algorithms affect people at scale.]

PO: Let's do some future gazing and talk about the metaverse. I've personally tried socializing on social VR apps and found a lot of creepy behavior that made me uncomfortable. It made me think some of the issues that have been around for years, particularly for women in gaming, are going to hit social VR pretty hard. What are your thoughts on the kinds of problems we might see when people move on to the metaverse, if they do at all?

RD: I was on Second Life back in the day. Okay. I'm that old, which is why I'm having a hard time getting excited about the metaverse…

PO: Right, because Second Life didn't actually grow that big. It plateaued and just had its core followers and that was it.

RD: Look, there are moments when a company really nails a user experience. Going back to the very start of this conversation, we were talking about my days in venture capital 10 years ago. Virtual reality was always just around the corner then, too. I actually really liked it for first-person gaming. I thought it was a lot of fun. But the question is, will there be harassment in the metaverse? Absolutely. But that's because harassment is a function of participation in human society. Then the question becomes: How do you create structures for creating safe communities online? What are the terms of service? Are the platforms thinking ahead about that? One of the things I loved about Silicon Valley was the excitement and belief that you could create something beautiful and give people experiences. But there was always this lack of thinking adversarially: How is this going to be abused by the worst possible people? Because when you put groups of people together, there is going to be anti-social behavior, guaranteed.

Parmy Olson is a Bloomberg Opinion columnist covering technology. She previously reported for the Wall Street Journal and Forbes and is the author of 'We Are Anonymous.'

First Published Date: 19 Dec, 00:53 IST