A guide to the AI safety debate after Sam Altman’s ouster

Artificial intelligence can be dangerous, but it can also help us reduce the many existential risks we already face.

| Updated on: Nov 19 2023, 10:54 IST
Sam Altman, right, then CEO of ChatGPT maker OpenAI, and Mira Murati, chief technology officer, appear at OpenAI DevDay, OpenAI's first developer conference, on Monday, Nov. 6, 2023 in San Francisco. (AP)

When it comes to artificial intelligence, one of the most hotly debated issues in the technology community is safety — so much so that, according to Bloomberg News, it helped lead to the ouster of OpenAI co-founder Sam Altman.

And those concerns boil down to a truly unfathomable question: Will AI kill us all? Allow me to set your mind at ease: AI is no more dangerous than the many other existential risks facing humanity, from supervolcanoes to stray asteroids to nuclear war.

I am sorry if you don't find that reassuring. But it is far more optimistic than the view of someone like the AI researcher Eliezer Yudkowsky, who believes humanity has entered its final hour. In his view, AI will be smarter than us and will not share our goals, and soon enough we humans will go the way of the Neanderthals. Others have called for a six-month pause in AI progress, so we humans can get a better grasp of what's going on.

AI is just the latest instance of the many technological challenges humankind has faced throughout history. The printing press and electricity brought both benefits and abuses, but it would have been a mistake to press the “stop” or even the “slow down” button on either.

AI worriers like to start with the question: “What is your ‘p' [probability] that AI poses a truly existential risk?” Since “zero” is obviously not the right answer, the discussion continues: Given a non-zero risk of total extinction, shouldn't we be truly cautious? You can then weigh the potential risk against the forthcoming productivity improvements from AI, as one Stanford economist does in a recent study. You still end up pretty scared.

One possible counterargument is that we can successfully align the inner workings of AI systems with human interests. I am optimistic on that front, but I have more fundamental objections to how the AI pessimists are framing their questions.

First, I view AI as more likely to lower than to raise net existential risks. Humankind faces numerous existential risks already. We need better science to limit those risks, and strong AI capabilities are one way to improve science. Our default path, without AI, is hardly comforting.

Those risks — supervolcanoes, asteroids, nuclear war — may not kill each and every human, but they could deal civilization as we know it a decisive blow. China or some other hostile power attaining super-powerful AI before the US does is yet another risk, not quite existential but worth avoiding, especially for Americans.

It is true that AI may help terrorists create a bioweapon, but thanks to the internet that is already a major worry. AI may help us develop defenses and cures against those pathogens. We don't have a scientific way of measuring whether aggregate risk goes up or down with AI, but I will opt for a world with more intelligence and science rather than less.

Another issue is whether we should confront issues probabilistically or by thinking at the margin. The AI doomsayers tend to ask the question this way: “What is your ‘p' for doom?” A better way might be this: “We're not going to stop AI, so what should we do?” The obvious answer is to work to make it better, safer, and more likely to lower risks.

It is very hard to estimate AI or indeed any other existential risk in the abstract. We can make more progress by considering a question in a specific real-world context.

Note that the pessimistic arguments are not supported by an extensive body of peer-reviewed research — not in the way that, say, climate-change arguments are. So we're being asked to stop a major technology on the basis of very little confirmed research. In another context, this might be called pseudo-science.

Furthermore, the risk of doom does not show up in market prices. Risk premiums are not especially high at the moment, and most economic variables appear to be well-behaved and within normal ranges. If you think AI is going to end the world, there is likely some intermediate period when you could benefit by going long on volatility and short on the market. If nothing else, you could give away money and alleviate human suffering before the final curtain falls. Yet that is not a bet that many seasoned traders are willing to make.

When I ask AI pessimists if they have adjusted their portfolio positions in accord with their beliefs, they almost always say they have not. At the end of the day, they are too sensible to think probabilistically and de novo about each and every life decision. The best ones are working to make AI safer — and that is a project we should continue to encourage.

Tyler Cowen is a Bloomberg Opinion columnist, a professor of economics at George Mason University and host of the Marginal Revolution blog.


First Published Date: 18 Nov, 20:42 IST