Just before Sam Altman was fired, OpenAI researchers warned the board of a major AGI breakthrough
According to a report, OpenAI researchers made a significant breakthrough toward creating artificial general intelligence (AGI) just a day before Sam Altman was fired.

By: HT TECH | Updated on: Nov 23 2023, 08:47 IST
The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI). (REUTERS)

Just when everyone thought the OpenAI saga was done and dusted, a report brought shocking information to the surface. According to Reuters, right before Sam Altman was fired by the OpenAI board, a team of researchers at the company had sent the directors a letter warning of a powerful artificial intelligence (AI) discovery that, they said, could even threaten humanity. The breakthrough is believed to concern artificial general intelligence (AGI), otherwise known as superintelligence.

For the unaware, AGI, or AI superintelligence, refers to a machine whose computing capabilities exceed those of humans. Such a system could solve complex problems faster than people, especially problems that require creativity or innovation. This is still far from sentience, a hypothetical stage at which an AI gains consciousness and can operate without any input, beyond the knowledge contained in its training material.

OpenAI researchers made a breakthrough toward AGI

The previously unreported letter and AI algorithm were a key development ahead of the board's ouster of Altman, the poster child of generative AI, two sources told Reuters. Before his unexpected return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman's firing. Reuters was unable to review a copy of the letter. The researchers who wrote the letter did not immediately respond to requests for comment.

According to one of the sources, long-time executive Mira Murati mentioned the project, called Q*, to employees on Wednesday and said that a letter was sent to the board prior to this weekend's events.

After the story was published, an OpenAI spokesperson said Murati told employees what media were about to report, but she did not comment on the accuracy of the reporting.

The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.

Reuters highlighted that it could not independently verify the capabilities of Q* claimed by the researchers. 

In response to the Reuters report suggesting that Mira Murati told employees the letter "precipitated the board's actions" to fire Sam Altman last week, The Verge posted a statement from OpenAI spokesperson Lindsey Held Bolton, who said: "Mira told employees what the media reports were about, but she did not comment on the accuracy of the information."

The path towards AI superintelligence

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
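The "statistical next-word prediction" described above can be sketched with a toy bigram model: count which word tends to follow each word in a training corpus, then predict the most frequent successor. This is a deliberate simplification for illustration only, not a description of OpenAI's actual architecture, which uses large neural networks rather than raw counts.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """For each word, count which words follow it and how often."""
    words = corpus.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word: str) -> str:
    """Return the statistically most likely next word seen in training."""
    if word not in model:
        return ""
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

The limitation the article points to is visible even here: the model picks a *likely* continuation, not a *correct* one, which is why the same question can yield different answers, whereas a math problem has exactly one right answer.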

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn, and comprehend.

In their letter to the board, researchers flagged the AI's prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter. Computer scientists have long debated the danger posed by superintelligent machines, for instance, whether they might decide that the destruction of humanity was in their interest.

Against this backdrop, Altman led efforts to make ChatGPT one of the fastest-growing software applications in history, and drew from Microsoft the investment - and computing resources - necessary to get closer to superintelligence, or AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a gathering of world leaders in San Francisco that he believed AGI was in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman. 

(With inputs from Reuters)


First Published Date: 23 Nov, 07:43 IST