Mark Zuckerberg has spent the better part of a decade chasing new computing frontiers, from VR to the metaverse. But today, the Meta CEO placed his next big bet on what he calls “personal superintelligence.” Unlike past efforts, this one isn’t just about virtual spaces or avatars: It’s about building AI that feels like an extension of yourself, and it’s going to require some of the most powerful hardware stacks on the planet.
In an open letter published just hours before Meta’s Q2 2025 earnings call, Zuckerberg laid out his AI vision in deliberately human-centric terms.
“This is not about automating all valuable work,” he wrote. “It’s about empowering individuals with intelligence tailored to their lives.”
That’s a subtle jab at OpenAI’s and Google's increasingly centralized AI strategies — ones that push AGI as a force of mass replacement rather than augmentation. But make no mistake: despite the philosophical packaging, Zuckerberg’s latest pivot is as much about compute as it is about ideology.
In the past few months alone, Meta has funneled billions into AI R&D, recruiting top talent from OpenAI, Google DeepMind, and Anthropic. The new Meta Superintelligence Labs — headed by former Scale AI CEO Alexandr Wang — will reportedly be responsible for developing foundation models such as Llama, alongside deeper research into AI architecture and inference. Of course, running those models at scale requires more than talent: it demands infrastructure. Lots of it.
Sources close to Meta’s data center expansion say the company is already deploying custom accelerators for limited workloads alongside traditional Nvidia H100s and A100s. Meanwhile, there’s speculation that Meta may be co-developing AI silicon for future iterations of its Llama-based models, echoing Google’s TPU strategy. At the very least, Meta has ramped up its own MTIA (Meta Training and Inference Accelerator) program, with next-gen silicon rumored to tape out later this year.
Whether these chips will directly power "personal superintelligence" experiences remains unclear. But if Meta plans to deliver real-time, private AI companions at the edge or in VR devices, rather than purely cloud-delivered interactions, the hardware stack will need to be both extremely fast and extremely efficient.
This is far from Zuckerberg’s first grand vision. Back in 2021, he published a similar letter pitching the metaverse as the next evolution of social computing. Since then, Meta’s Reality Labs division has racked up over $60 billion in losses, much of it spent on hardware such as Quest headsets, haptic gloves, and AR displays that are still struggling to gain traction. Those are numbers that make Intel’s gloomy present and future look acceptable by comparison.
That said, unlike the metaverse, AI has immediate product-market fit. LLMs and chatbots are already reshaping productivity, education, and entertainment. The real constraint now is how fast the hardware can scale, and whether Meta can compete with Nvidia, AMD, and custom players such as Tenstorrent and Cerebras in pushing the performance-per-watt frontier.
Zuckerberg’s tone was optimistic but measured. “The rest of this decade seems likely to be the decisive period for determining the path this technology will take,” he wrote. The subtext is that Meta doesn’t want to be just another tenant in the AI data center; it wants to build the foundation itself, both the algorithms and the hardware that runs them.
For now, the personal superintelligence he promises may still be a philosophical idea. But behind the scenes, it’s quickly becoming a material one — with all the heat, silicon, and supply chain pressure that comes with it.