Regarded as one of the biggest boardroom coups in recent tech history, the OpenAI-Sam Altman saga is a high-stakes drama that continues to surprise. Sam Altman, co-founder of OpenAI and the brain behind the life-changing ChatGPT, was ousted as the company’s CEO on November 17, 2023, and then reinstated just five days later. At one point, OpenAI had three CEOs in three days. The board that fired Altman was in turn fired and replaced upon his return.

Could this get any more dramatic? Altman’s return came after employees threatened to quit en masse and Microsoft and OpenAI’s other investors applied pressure. His four-day exile prompted numerous speculations about the cause, from disagreements with board members over products and inconsistent communication to differences over AI safety. One of the reasons behind Sam Altman’s sacking as OpenAI CEO by the ChatGPT creator’s board could be a project known as Q*.


In a blog post from February 2023, Sam Altman described AGI as “AI systems that are generally smarter than humans.” He also wrote that, if created successfully, AGI could help elevate humanity by boosting the global economy. However, Sam did mention the cons of this superintelligence: “AGI would also come with serious risk of misuse, drastic accidents, and societal disruption.”

Yet he defended it by saying, “Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”

What Is Project Q*?

Several staff researchers had reportedly written to the board of directors about the discovery of a powerful AI that could potentially threaten the existence of humanity. According to Reuters, many people at OpenAI believe Q* (Q-Star) to be the San Francisco-based startup’s breakthrough in its search for what is known as ‘artificial general intelligence’ (AGI).


OpenAI describes AGI as ‘autonomous systems that surpass humans in most economically valuable tasks.’ According to a report in The Information, earlier this year, a team led by OpenAI’s chief scientist Ilya Sutskever made a breakthrough with AI. That breakthrough later allowed them to build a new model named Q* (read: Q-star), which was reported to be able to solve basic mathematical problems.

However, this technological breakthrough also triggered some fears among staff who felt that the AI company did not have enough safeguards in place to ‘commercialise’ such an advanced model. Mira Murati, the AI research firm’s Chief Technology Officer (CTO), had acknowledged the existence of Q* in an internal email to employees and alerted them to ‘certain media stories’ without commenting on their accuracy.

Q*’s Abilities

Q* is basically an algorithm that is capable of solving elementary mathematical problems by itself, including those that are not part of its training data. This makes it a significant leap towards the much-anticipated Artificial General Intelligence (AGI) – a hypothetical form of AI able to perform any intellectual task the human brain can. The breakthrough is credited to Sutskever and has been further developed by Szymon Sidor and Jakub Pachocki.


Also, unlike a calculator, which can perform only a limited set of operations, AGI can generalise, learn, comprehend, and demonstrate advanced reasoning capabilities similar to humans. Reportedly, the breakthrough is part of a larger initiative by an AI scientist team formed by combining the Code Gen and Math Gen teams at OpenAI. The team focuses on enhancing the reasoning capabilities of AI models for scientific tasks.

The Warning Signs

While the OpenAI staff researchers highlighted the model’s prowess, they also flagged potential safety concerns. Computer scientists have long debated the risks posed by highly intelligent machines, for instance, scenarios in which the machines might decide that the ‘destruction of humanity’ was in their interest.

The staff also expressed concerns about the ‘AI scientist team’ that was exploring ways to optimise existing AI models to improve their reasoning and eventually have them perform scientific tasks, and they questioned the adequacy of the safety measures deployed by OpenAI. According to Reuters, the model provoked an internal outcry, with staff stating that it could threaten humanity.

This warning is believed to be one of the major reasons behind Sam Altman’s sacking. At the APEC CEO Summit, he reportedly spoke about a recent technological advance, describing it as something that allowed them to “push the veil of ignorance back and the frontier of discovery forward.” Since the OpenAI boardroom saga, that comment has been read as Altman hinting at this breakthrough model.


Why Project Q* Could Be A Threat

Reports so far suggest that Q* has the ability to understand abstract concepts and apply logical reasoning, a tremendous leap, as no AI model to date has convincingly demonstrated that capability. On a practical level this is a breakthrough, but it could also lead to unpredictable behaviours or decisions that humans may not be able to foresee.

Sophia Kalanovska, a researcher, told Business Insider that Project Q* is likely a fusion of two known AI methods, Q-learning and A* search. She said that the new model could combine deep learning with rules programmed by humans, which may make it more powerful and versatile than any other current AI model. Essentially, this could lead to an AI model that not only learns from data but also applies human-like reasoning, which makes it difficult to control or predict.
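For readers unfamiliar with those two methods, below is a minimal Python sketch of each on a toy grid world. This is purely illustrative: the grid, rewards, and hyperparameters are invented for this example, and nothing public confirms how, or whether, OpenAI actually combines these techniques in Q*.

```python
# Illustrative sketch only: tabular Q-learning and A* search on a toy 4x4
# grid world. These are the two classic methods Kalanovska names; the grid,
# rewards, and hyperparameters below are assumptions made for explanation,
# not anything known about OpenAI's Q*.
import heapq
import random

GRID = 4                                      # 4x4 grid of (x, y) states
START = (0, 0)
GOAL = (GRID - 1, GRID - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up


def step(state, action):
    """Apply an action, clamping to the grid; reward 1.0 only at the goal."""
    x = min(max(state[0] + action[0], 0), GRID - 1)
    y = min(max(state[1] + action[1], 0), GRID - 1)
    nxt = (x, y)
    return nxt, (1.0 if nxt == GOAL else 0.0)


def q_learning(episodes=300, alpha=0.5, gamma=0.9, eps=0.2):
    """Learn action values from experience with the classic update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    q = {}
    for _ in range(episodes):
        s = START
        while s != GOAL:
            if random.random() < eps:          # explore occasionally
                a = random.choice(ACTIONS)
            else:                              # otherwise exploit best guess
                a = max(ACTIONS, key=lambda act: q.get((s, act), 0.0))
            s2, r = step(s, a)
            best_next = max(q.get((s2, b), 0.0) for b in ACTIONS)
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - q.get((s, a), 0.0))
            s = s2
    return q


def a_star(start=START):
    """A* search: expand states by cost-so-far plus a Manhattan-distance
    heuristic, so the search is guided toward the goal instead of blind."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        _, s, path = heapq.heappop(frontier)
        if s == GOAL:
            return path
        if s in seen:
            continue
        seen.add(s)
        for a in ACTIONS:
            s2, _ = step(s, a)
            h = abs(GOAL[0] - s2[0]) + abs(GOAL[1] - s2[1])
            heapq.heappush(frontier, (len(path) + h, s2, path + [s2]))
    return None


print("A* shortest path:", a_star())
print("Best learned Q-value:", max(q_learning().values()))
```

The speculated appeal of such a fusion is that Q-learning contributes learning from trial-and-error feedback, while A* contributes goal-directed search guided by a heuristic; applied to reasoning rather than grid cells, that could mean searching over chains of intermediate steps instead of moves on a board.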

Presently, AI models primarily repeat patterns from existing information, but Q* would be a milestone if it can generate new ideas and solve problems before they arise. The advanced capabilities of Q* could also lead to misuse or unintended consequences: even if someone deploys it with good intentions, the complexity of Q*’s reasoning and decision-making could well lead to outcomes that prove damaging to humanity.

These concerns underscore the need for thoughtful consideration as well as strong ethical and safety frameworks in the development of such advanced AI technologies.