Microsoft’s AI boss and Sam Altman disagree on what it takes to get to AGI


Microsoft AI CEO Mustafa Suleyman disagrees with OpenAI CEO Sam Altman’s recent claim in a Reddit AMA that artificial general intelligence, or AGI, is possible on today’s hardware. He tells The Verge’s Nilay Patel on the latest Decoder episode that while AGI is “plausible,” it could take as long as 10 years to achieve.

Asked whether AGI is achievable on current hardware, which Nilay defined as “within one or two generations of what we have now,” Suleyman explained why he thinks that’s unlikely:

I don’t think it can be done on [Nvidia] GB200s. I do think it is going to be plausible at some point in the next two to five generations. I don’t want to say I think it’s a high probability that it’s two years away, but I think within the next five to seven years since each generation takes 18 to 24 months now. So, five generations could be up to 10 years away depending on how things go.

“The uncertainty around this is so high,” Suleyman said, “that any categorical declarations just feel sort of ungrounded to me and over the top.”

He’s also drawing a line between AGI and the “singularity”:

It depends on your definition of AGI, right? AGI isn’t the singularity. The singularity is an exponentially recursive self-improving system that very rapidly accelerates far beyond anything that might look like human intelligence. 

To me, AGI is a general-purpose learning system that can perform well across all human-level training environments. So, knowledge work, by the way, that includes physical labor. A lot of my skepticism has to do with the progress and the complexity of getting things done in robotics. But yes, I can well imagine that we have a system that can learn — without a great deal of handcrafted prior prompting — to perform well in a very wide range of environments. I think that is not necessarily going to be AGI, nor does that lead to the singularity, but it means that most human knowledge work in the next five to 10 years could likely be performed by one of the AI systems that we develop. And I think the reason why I shy away from the language around singularity or artificial superintelligence is because I think they’re very different things.

The challenge with AGI is that it’s become so dramatized that we sort of end up not focusing on the specific capabilities of what the system can do. And that’s what I care about with respect to building AI companions, getting them to be useful to you as a human, work for you as a human, be on your side, in your corner, and on your team. That’s my motivation and that’s what I have control and influence over to try and create systems that are accountable and useful to humans rather than pursuing the theoretical superintelligence quest.

Last week, at The New York Times DealBook Summit, Altman set out lower goalposts for AGI than the superintelligence-style phenomenon he has described in the past.

Now, Altman says AGI will arrive “sooner than most people in the world think and it will matter much less.” And when it comes to superintelligence, “a lot of the safety concerns that we and others expressed actually don’t come at the AGI moment. AGI can get built, the world mostly goes on in mostly the same way, things grow faster, but then there is a long continuation from what we call AGI to what we call superintelligence.”

The relationship appears strained only a year after Microsoft helped reinstate Altman as OpenAI’s CEO. After confirming that Microsoft is working on its own frontier AI model capable of competing at the “GPT-4, GPT-4o scale,” Suleyman also commented on the tension between Microsoft and OpenAI:

Every partnership has tension. It’s healthy and natural. I mean, they’re a completely different business to us. They operate independently and partnerships evolve over time... partnerships evolve and they have to adapt to what works at the time, so we’ll see how that changes over the next few years.
