Could AI Help Us Humans Trust Each Other More?

AI has the potential to expose more people to different cultures, which tends to make us more tolerant and trusting.

By: Bloomberg | Updated on: May 14, 2023, 07:04 IST
[Image: artificial intelligence. Could the AI revolution somehow be harnessed to bring about a resurgence in social trust, rather than its further collapse? (Pexels)]

America is undergoing a crisis of social trust, whether it be in government, the media, the Federal Reserve or simply in people with opposing political views. America is also witnessing a revolution in artificial intelligence, as AI transforms everything from Google's business model to childhood.

These two developments got me thinking: Could the AI revolution somehow be harnessed to bring about a resurgence in social trust, rather than its further collapse?

It's a tall order, I admit. Social media, the most recent technology to take over the internet, has often been linked with a rise in misinformation and thus a decline in social trust. And much of the recent commentary on AI points to this same risk: Large language models (LLMs) can be used to create vast quantities of propaganda, possibly swamping the internet.

That is a real risk, but it does not sound so different from the status quo. Tyrants and bad-faith actors already hire humans to flood the internet with bad content.

The more hopeful news is less about content than about curation. The major current LLMs, such as those from Anthropic and OpenAI, are trained on internet data, yet when asked questions about Russia or China, they offer relatively objective answers. Users can also steer answers to be more factual or academic simply by asking.

The point is not that LLMs won't be used to create propaganda (they will), but that they offer users another option to filter it out. With LLMs, users can get the degree of objectivity they desire, at least once they learn how the models work.

Still, this is a safe prediction: Within a year or two, there will be a variety of LLMs, some of them open source, and people will be able to use them to generate the kinds of answers they want. You might wonder how this is an improvement on the status quo, which now affords viewers a choice of polarized media sources such as Fox News or MSNBC, not to mention voices of all sorts on Twitter.

I hold out some hope for improvement in part because LLMs operate on a “pull” basis — that is, you ask them for what you want. Even if you are working with a “right-wing” LLM, you can always ask it for a left-wing or centrist perspective, and vice versa. It would be like watching Fox and having a button on your remote that you can click to get an opposing or contrasting view — within seconds. This is a vast improvement over cable TV, in terms of immediacy if nothing else. LLMs also make it very easy to generate a debate or a “compare and contrast” answer on just about any issue.
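For readers who want to see what that "pull" looks like in practice, here is a minimal sketch of requesting a compare-and-contrast answer programmatically. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt wording and the contrasting_views helper are all illustrative, not something the column prescribes.

```python
# A minimal sketch of "pull"-based balance: ask one model to argue both
# sides of an issue and then compare them. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY environment variable; the
# model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

def contrasting_views(issue: str) -> str:
    """Ask the model for a compare-and-contrast answer on an issue."""
    prompt = (
        "Present the strongest left-leaning case and the strongest "
        "right-leaning case on the following issue, then summarize "
        f"where they agree and disagree: {issue}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(contrasting_views("raising the federal minimum wage"))
```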

Again, there is no knowing how much balance people will want. But AI will at least make balance easier to get for those who choose to seek it. That seems better than having a particular cable TV channel or a pre-established Twitter feed as the default.

The impersonal nature of many LLMs may also be a force for ideological balance. Currently, many left-wingers don't want to switch to Fox (or visit their neighbor) to hear a different perspective because they find the personalities offensive or obnoxious. LLMs offer the potential to sample points of view in their driest, least provocative and most analytically argued forms.

LLMs could also boost trust by making translation across languages nearly seamless. Might some people who oppose immigration trust Latino immigrants more if they could sample Spanish-language news and media and better grasp the very real problems those migrants face in their home countries? Call me naïve, but I see more potential for upside than downside. People with more exposure to different cultures tend to be more tolerant and trusting of them.
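As a rough sketch of that translation step, under the same illustrative assumptions as above (OpenAI Python SDK, hypothetical helper and model name):

```python
# A minimal sketch of LLM translation, under the same assumptions as
# above (OpenAI Python SDK, illustrative model name and prompt).
from openai import OpenAI

client = OpenAI()

def translate_news(text: str, source_language: str = "Spanish") -> str:
    """Render a foreign-language news excerpt into English."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{
            "role": "user",
            "content": (
                f"Translate this {source_language} news excerpt into "
                f"English, preserving tone and idiom:\n\n{text}"
            ),
        }],
    )
    return response.choices[0].message.content
```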

LLMs might also increase trust in areas beyond politics. Doctor visits, for example, might become more meaningful and productive if an LLM has helped you prepare good questions about your health problems. You and your doctor might establish a better and more trusting relationship.

America's crisis in social trust has complex causes, not all of them known, and AI is by no means a panacea. But at the very least, LLMs have the potential to remove current biases against balance and make a wider range of views more readily available.

Of course, we humans will still have to decide just how much we really want to trust each other. If we can't get that one right, no amount of AI, no matter how smartly deployed, can help us.

Tyler Cowen is a Bloomberg Opinion columnist. He is a professor of economics at George Mason University and writes for the blog Marginal Revolution. He is coauthor of “Talent: How to Identify Energizers, Creatives, and Winners Around the World.”


First Published: May 14, 2023, 06:41 IST