Intel and Google announce multi-year chip deal — Google will deploy Intel Xeon with custom IPUs for next-gen AI, cloud infrastructure
Intel and Google on Thursday announced a multi-year collaboration under which Google will continue deploying Intel Xeon platforms for its next generation of AI and cloud infrastructure. These platforms will rely not only on Intel's upcoming Xeon CPUs, but also on custom infrastructure processing units (IPUs) co-designed by Intel and Google. The announcement comes amid the accelerating adoption of custom Arm-based processors for AI workloads.
"Scaling AI requires more than accelerators; it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand," said Lip-Bu Tan, CEO of Intel.
Google currently employs 5th Gen Intel Xeon and Intel Xeon 6 processors for a variety of workloads, including large-scale AI training coordination, latency-sensitive inference, and general-purpose computing; Intel's latest Xeon CPUs power Google's C4 and N4 instances, for example. Although Google's custom Armv9-based Axion processors give the cloud giant more control and efficiency at lower cost, many workloads running in Google's data centers either must remain backwards compatible with x86 or simply require the maximum single-thread performance that Intel's Xeon CPUs offer. That is expected to remain the case for years to come, which is why the two companies inked the deal.
In a bid to make Intel Xeon platforms more efficient and better suited to its hyperscale data centers, Google will also co-develop custom IPUs with Intel to offload networking, storage, and security functions from host CPUs. Ultimately, Intel Xeon platforms will combine the high single-thread performance of the x86 architecture with custom-built infrastructure processing, making them more competitive in Google's highly customized environments.
"CPUs and infrastructure acceleration remain a cornerstone of AI systems — from training orchestration to inference and deployment," said Amin Vahdat, SVP & Chief Technologist, AI Infrastructure, Google.
The announcement comes at a time when hyperscalers and AI platform developers are accelerating adoption of their own custom CPUs based on the Arm instruction set architecture. Just a week ago, Counterpoint Research released a note claiming that 90% of AI servers running custom-silicon processors will rely on the Arm ISA, leaving x86 and RISC-V with roughly 10% between them. The announcement by Intel and Google makes clear that Xeon CPUs paired with custom IPUs will continue to handle AI and other demanding workloads for years to come, which is hardly a surprise.
Intel's Xeon processors have powered cloud infrastructure since the sector's inception in the 2000s, and powered Google's own servers before that, so x86 in general and Xeon in particular will not leave Google's data centers any time soon. Nonetheless, the announcement clearly reemphasizes the relevance of Intel's Xeon CPUs, and when such a message comes from Google — which has been deploying special-purpose custom accelerators for years across virtually all of its services — it gets amplified significantly.
"Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads," Vahdat added.
Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.