
AI Experts Aren’t Always Right About AI

Creating a system of checks and balances for artificial intelligence systems is a task for social and political scientists.

By: Bloomberg
Updated on: May 07, 2023, 12:40 IST
In their defense, public-health officials are trained to prioritize public safety above all else. (iStockphoto)

One of the lasting consequences of the Covid-19 pandemic has been a decline of trust in public-health experts and institutions. It is not hard to see why: America botched Covid testing, kept the schools closed for far too long, failed to vaccinate enough people quickly enough, and inflicted far more economic damage than was necessary — and through all this, public-health experts often had the dominant voice.

In their defense, public-health officials are trained to prioritize public safety above all else. And to their credit, many now recognize that any response to a public-health crisis needs to consider the tradeoffs inherent in any intervention. As Dr. Anthony Fauci recently told the New York Times, “I'm not an economist.”

As it happens, I am. And my fear is that we are about to make the same mistake again — that is, trusting the wrong experts — with artificial intelligence.

Some of the greatest minds in the field, such as Geoffrey Hinton, are speaking out against AI developments and calling for a pause in AI research. Earlier this week, Hinton left his AI work at Google, declaring that he was worried about misinformation, mass unemployment and future risks of a more destructive nature. Anecdotally, I know from talking to people working on the frontiers of AI that many other researchers are worried too.

What I do not hear, however, is a more systematic cost-benefit analysis of AI progress. Such an analysis would have to consider how AI might fend off other existential risks — deflecting that incoming asteroid, for example, or developing better remedies against climate change — or how AI could cure cancer or otherwise improve our health. And these analyses often fail to take into account the risks to America and the world if we pause AI development.

I also do not hear much engagement with the economic arguments that, while labor market transitions are costly, freeing up labor has been one of the major modes of material progress throughout history. The US economy has a remarkable degree of automation already, not just from AI, and currently stands at full employment. If need be, the government could extend social protections to workers in transition rather than halt labor-saving innovations.

Each of these topics is so complicated that there are no simple answers (even if we ask an AI!). Still, within that complexity lies a lesson: True expertise on the broader implications of AI does not lie with the AI experts themselves. If anything, Hinton's remarks about AI's impact on unemployment — “it takes away the drudge work,” he said, and “might take away more than that” — make me downgrade his judgment.

Yet Hinton is acknowledged to be the most important figure behind recent developments in AI neural networks, and he has won the equivalent of a Nobel Prize in his field. And he is now doubting whether he should have done his research at all. Who am I to question his conclusions?

To be clear, I am not casting doubt on either his intentions or his expertise. But I would ask a different question: Who, today, is an expert in modeling how different AI systems will interact with each other to create checks and balances, much as decentralized human institutions do? These analyses are not very far along, much less tested against data. They would require an advanced understanding of the social sciences and political science, not just AI and computer science, and it is not obvious who exactly is capable of pulling off such a synthesis — especially in an era of hyper-specialists.

It almost goes without saying that there are different kinds of expertise. National security specialists, for example, confront dangerous risks to America all the time, and they have to develop a synthetic understanding of how to respond. How many of them have resigned from the establishment to become AI Cassandras? I haven't seen a flood of protests, and these are people who have studied how destructive actions can amplify through a broader social and economic order. Perhaps they are used to the idea that serious risks are always with us.

Albert Einstein helped to create the framework for mobilizing nuclear energy, and in 1939 he wrote President Franklin Roosevelt urging him to build nuclear weapons. He later famously recanted, saying in 1954 that the world would be better off without them. He may yet be proved right, but so far most Americans see the tradeoffs as acceptable, in part because they have created an era of US hegemony and ensured that US leaders cannot easily escape the costs of major wars. Nuclear disarmament still exists as a movement, but it has the support of no major political party in any nuclear nation. (If anything, Ukraine regrets having given up its nuclear weapons.)

The lesson is clear: Experts from other fields often turn out to be more correct than experts in the “relevant” (quotes intentional) field — with the qualification, as the Einsteins of 1939 and 1954 show, that all such judgments are provisional.

When it comes to AI, as with many issues, people's views are usually based on their priors, if only because they have nowhere else to turn. So I will declare mine: decentralized social systems are fairly robust; the world has survived some major technological upheavals in the past; national rivalries will always be with us (thus the need to outrace China); and intellectuals can too easily talk themselves into impending doom.

All of this leads me to the belief that the best way to create safety is by building and addressing problems along the way, sometimes even in a hurried fashion, rather than by having abstract discussions on the internet.

So I am relatively sympathetic to AI progress. I am skeptical of arguments that, if applied consistently, also would have hobbled the development of the printing press or electricity.

I also believe that intelligence is by no means the dominant factor in social affairs, and that it is multidimensional to an extreme. So even very impressive AIs probably will not possess all the requisite skills for destroying or enslaving us. We also tend to anthropomorphize non-sentient entities and to attribute hostile intent where none is present.

Many AI critics, unsurprisingly, don't share my priors. They see coordination across future AIs as relatively simple; risk-aversion and fragility as paramount; and potentially competing intelligences as dangerous to humans. They deemphasize competition among nations, such as with China, and they have a more positive view of what AI regulation might accomplish. Some are extreme rationalists, valuing the idea of pure intelligence, and thus they see the future of AI as more threatening than I do.

So who exactly are the experts in debating which set of priors is more realistic or useful? The question isn't quite answerable, I admit, but neither is it irrelevant. Because the AI debate, when it comes down to it, is still largely about priors. At least when economists debate the effects of the minimum wage, we sling around broadly commensurable models and empirical studies. The AI debates are nowhere close to this level of rigor.

No matter how the debates proceed, however, there is no way around the genuine moral dilemma that Hinton has identified. Let's say you contributed to a technological or social advance that had major implications, and a benefit-to-cost ratio of 3 to 1. The net gain would be very high, but so would the (gross) costs. And those costs would be imposed because of your labor.

How easily would you sleep knowing that your work, of which you had long been justifiably proud, was leading to so many cyberattacks and job losses and suffering? Would seeing the offsetting gains make you feel better? What if the ratio of benefit to cost were 10 to 1? How about 1.2 to 1?

There are no objective answers to these deeply normative questions. How you respond probably depends on your personality type. But the question of how you feel about your work is not the same as how it affects society and the economy. Progress shouldn't feel like working in the triage ward, but sometimes it does.

Tyler Cowen is a Bloomberg Opinion columnist. He is a professor of economics at George Mason University and writes for the blog Marginal Revolution. He is coauthor of “Talent: How to Identify Energizers, Creatives, and Winners Around the World.”


