Earlier this year, OpenAI’s CEO Sam Altman sparked a wave of speculation with a succinct Reddit post: “AGI has been achieved internally.” AGI, or Artificial General Intelligence, represents the pinnacle of AI research—a form of intelligence akin to human cognition, capable of reasoning, creative thought, and potentially consciousness.
While Altman later dismissed the post as a joke, recent events surrounding his departure from OpenAI have reignited questions about its validity. Reports suggest that OpenAI’s board was alerted to a major breakthrough just before Altman’s exit, hinting at a revolutionary discovery tied to a new model, cryptically named Q*.
The intrigue lies in Q*'s reported ability to perform basic math, a seemingly mundane feat but a groundbreaking one for Large Language Models (LLMs). Historically, the symbolic reasoning and deterministic answers that mathematical operations require have been challenging for neural networks, whose outputs are probabilistic by nature. Q*'s reported success in this area suggests a novel approach that bridges the gap between symbolic reasoning and neural network capabilities.
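To make that distinction concrete, here is a toy Python sketch, entirely my own illustration with made-up probabilities and no relation to how any real model computes, of why sampling from a probability distribution makes exact arithmetic unreliable, while a symbolic calculator is exact by construction:

```python
import math
import random

# Toy contrast: a distribution-sampling "model" vs. exact symbolic arithmetic.
# The logits below are invented for illustration only.

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Pretend final-digit scores for "7 + 5 = 1?": the model strongly favours
# the correct digit "2" but assigns nonzero probability to the others.
logits = [0.1, 0.3, 4.0, 0.4, 0.2, 0.1, 0.1, 0.1, 0.1, 0.1]
probs = softmax(logits)

random.seed(0)  # fixed seed so the demo is repeatable
samples = [random.choices(range(10), weights=probs)[0] for _ in range(1000)]
accuracy = samples.count(2) / len(samples)

# Most samples are correct, but not all of them; a symbolic adder, by
# contrast, returns (7 + 5) % 10 == 2 every single time.
symbolic_digit = (7 + 5) % 10
```

Even when the correct answer dominates the distribution, sampling occasionally emits a wrong one; determinism has to be engineered in, not hoped for.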
If the reports are accurate, OpenAI might have developed a hybrid system, Q*, that combines neural networks with symbolic computation, mimicking the symbolic "virtual machine" that cognitive scientist Paul Smolensky has argued neural networks can implement. Such a hybrid system, if realized, could offer the advantages of both intuitive, creative reasoning and symbolic, deterministic problem-solving.
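Since nothing about Q*'s architecture is public, the following is a purely hypothetical sketch of what a minimal neuro-symbolic "router" could look like: a crude stand-in for a learned classifier decides whether a query is arithmetic and, if so, hands it to a deterministic symbolic evaluator rather than a generative model. Every name and the routing rule itself are invented for illustration.

```python
import ast
import operator

# Hypothetical neuro-symbolic routing sketch; this does NOT describe Q*.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arithmetic(expr: str):
    """Symbolic path: safely and deterministically evaluate + - * / expressions."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def answer(query: str) -> str:
    # Stand-in for a learned router; a real system would classify with a model.
    if query and all(c in "0123456789+-*/(). " for c in query):
        return str(eval_arithmetic(query))  # symbolic: same answer every time
    return "<generative model would answer here>"  # intuitive, probabilistic path

print(answer("12 * (3 + 4)"))  # -> 84
```

The design point is the split itself: queries with exact answers flow through machinery that cannot be wrong, while open-ended queries keep the flexibility of the generative path.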
The significance of Q*’s ability to perform basic math goes beyond the immediate task. It signals a major step toward replicating the underlying capabilities of the human brain. This leap in mimicking human cognition likely prompted concerns within OpenAI’s board, possibly contributing to Altman’s departure.
Contrary to doomsayers’ fears, Q* is unlikely to bring about the end of the world. While achieving brain-like capabilities is a notable advancement, it doesn’t imply superintelligence, consciousness, or AGI-level capabilities. Instead, it positions OpenAI at the forefront of creating AI with broader applications in fields such as natural language processing, drug discovery, and mathematics.
A model like Q*, blending symbolic and intuitive reasoning, holds great potential. In natural language processing, it could grasp not only statistical language patterns but also the symbolic logic inherent in human languages. This capability could pave the way for AI systems that genuinely understand context and generate creative outputs that go beyond statistical prediction.
Similarly, in fields like drug discovery, a system that combines deterministic reasoning with intuitive pattern recognition could dramatically accelerate the invention of new medicines.
Even if Q* proves to be a reality, it is unlikely to be ready for public consumption anytime soon. OpenAI must grapple with ethical considerations, weighing the benefits of such a system against potential risks, including the creation of bioweapons or the generation of convincing yet undetectable propaganda.
In conclusion, if the reports about Q* are accurate, Sam Altman's Reddit post may have been less of a joke than he let on. While AGI might not be here yet, Q* would represent a significant stride toward AI that functions more like the human brain: a step toward a general intelligence capable of reasoning and creating on par with human abilities.