GPT‑5 Pro is brilliant, but it’s still nowhere near real AGI, says the researcher who helped popularize the term

GPT‑5 Pro impresses with its complex, layered response to prompts. The crown jewel of the GPT-5 rollout this month even made OpenAI CEO Sam Altman nervous with some of its responses. But you shouldn't confuse brilliant algorithmic models with true independent thinking, according to Dr. Ben Goertzel, who helped popularize the term Artificial General Intelligence (AGI) in the early 2000s.

Now the CEO of the Artificial Superintelligence Alliance and TrueAGI Inc., and the founder of SingularityNET, Goertzel wrote an essay lauding GPT‑5 Pro as “a remarkable technical achievement” that he finds useful for formatting research papers, parsing mathematical frameworks, and improving his own prose. But he isn't mistaking the model's abilities for actual human-style intelligence.

"These models, impressive as they are, utterly lack the creative and inventive spark that characterises human intelligence at its best," Goertzel wrote. "More fundamentally, they literally 'don't know what they're talking about.' Their knowledge isn't grounded in experience or observation, it's pattern matching at an extraordinarily sophisticated level, but pattern matching nonetheless."

No matter how fast or thorough the model's performance is, it's ultimately shallow. You can be dazzled by the spectacle, but there's nothing going on underneath the statistical inference. It isn't surprising that people see a blurred line between GPT‑5 Pro and AGI, he hastened to add, since the model can imitate logic, extend reasoning, and look as though some thought process is happening, but it's nothing like a human or animal brain. Stringing together associations learned from training is not the same as drawing on memory, experience, or a vision of future goals.

"This distinction isn't semantic nitpicking. True AGI requires grounding knowledge in both external and internal experience," Goertzel wrote. "In terms of these basic aspects of open-ended cognition, today’s LLMs are vastly inferior to a one year old human child, their incredible intellectual facility notwithstanding."

AGI's future

Goertzel argued that GPT‑5 Pro and its siblings are built on an increasingly strained premise: that scaling large language models will inevitably produce AGI. He also suggested that the current LLM approach is fused to a business model that limits innovation. OpenAI, he notes, is simultaneously trying to build AGI and sell scalable chatbot services to billions of users. The AGI label, he warns, is being thrown around too freely. While GPT-5 Pro and other tools are undeniably powerful, calling them minds is, in his view, premature and possibly misleading.

"GPT5-Pro deserves recognition as a remarkable achievement in AI engineering. For researchers and professionals needing sophisticated technical assistance, it's currently unmatched," Goertzel wrote. "But we shouldn't mistake incremental improvements in large-scale natural-language pattern matching for progress toward genuine artificial general intelligence."

Goertzel's description of a true AGI is a model that constantly learns new things, whether or not a user is interacting with it. The continuous evolution of a mind, the human experience, goes well beyond the specific training and deployment of an AI model. GPT‑5 Pro is frozen the moment it's deployed: a sealed jar of intelligence.

Goertzel’s work would smash that jar and spread the intelligence across decentralized systems. Eventually, he hopes to produce an intelligence that doesn't mimic how the brain works but performs like one, with internal models of the world and beliefs it would update over time.

"The path to AGI won't be found by simply scaling current approaches. It requires fundamental innovations in how we ground knowledge, enable continual learning, and integrate different cognitive capabilities," Goertzel concludes. "GPT-5 and its successors will likely play important supporting roles in future AGI systems, but the starring role requires more innovative actors we're still in the process of creating."

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.