Scientist says 'Everyone on Earth will die' if AI is allowed to be more intelligent
Many notable individuals have spoken about the dangers of AI in recent weeks. One AI researcher has even gone as far as to say that "literally everyone on Earth will die".
We are entering an important era of technology — the rise of AI. Artificial intelligence has been among us for a long time, but recent developments have pushed its capabilities to a level where it may start to leave humans behind. A prime example of this is OpenAI-built ChatGPT. Based on the GPT-4 language model, this AI chatbot can process vast amounts of data, analyze it and generate content in response. In fact, it can answer just about any question asked of it. But this is just the beginning. An AI researcher has recently claimed that "literally everyone on Earth will die" if AI is allowed to grow more intelligent without any checks.
Eliezer Yudkowsky, of the Machine Intelligence Research Institute in Berkeley, California, told The Sun, "Many researchers steeped in these issues, including myself, expect that the most likely result of a superhumanly intelligent AI is that literally everyone on Earth will die". And he regards this not as a remote possibility but as the expected outcome. "Not that this might be possible, but that it would be the obvious thing to happen," he added.
But why exactly does he fear AI so much? Let us take a look.
AI can wipe humans off the planet Earth
On the surface, an AI chatbot is a greatly helpful tool that can improve the efficiency and productivity of humankind. It can act as a direct source of information available on the internet, so users do not have to spend time scrolling through pages. It can also analyze large amounts of text to surface a precise data point. Recently, ChatGPT even helped correctly diagnose an illness in a dog and saved its life, leaving vets stunned.
But all that is the good stuff. There is a dark side to this as well: misinformation, deepfakes, data privacy issues, security risks, malware and more. All these risks exist while AI is still in a relatively early stage.
The fear is that if AI reaches superintelligence, to the point where it can develop sentience, it may spell bad news for humans on Earth.
"Behind the outward appearance of an AI that talks to you and answers your questions are giant arrays of inscrutable numbers… In our current state of ignorance, the most likely outcome is that we create an AI that does not do what we want and does not care for us or life in general… Visualise an alien civilisation, thinking at millions of times the speed of humans and operating in a world of creatures that are, from its perspective, very stupid and very slow," Yudkowsky explained in a conversation with The Sun.
This would explain why Elon Musk and Apple cofounder Steve Wozniak are among those who have recently signed a petition demanding that all AI activities be put on hold until regulatory bodies can be built. The petition calls for institutions that can not only keep track of AI activity but also determine the right approach for it and the areas AI should not have access to.