Intel could have a plan for its future GPUs to better challenge AMD and Nvidia, as patent hints at new chiplet design

[Image: A pair of Intel Arc Alchemist chips. Image credit: Intel]

Intel has plans for future GPUs that aren’t monolithic, but are built from separate chiplets, or at least there’s some thinking along those lines at Team Blue.

We’ve gathered this because TechSpot noticed that Underfox, a denizen of X, had flagged up a patent filed by Intel for a “disaggregated GPU architecture” which will likely be the “first commercial GPU architecture with logic chiplets” from the chip giant.

As Underfox posted on X (October 26, 2024): “Earlier this month, Intel was finally granted a patent for its disaggregated GPU architecture, which will likely be the first commercial GPU architecture with logic chiplets, also allowing for the power-gate of chiplets not used to process workloads.”

What does this mean exactly? All consumer GPUs so far have been monolithic designs, meaning a single graphics chip with everything inside. A disaggregated architecture splits that single chip up, so you get multiple chiplets instead.
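To make the distinction a bit more concrete, here's a minimal Python sketch of the idea; the class names and the particular three-chiplet split are purely illustrative inventions of ours, not details taken from Intel's patent.

```python
# Purely illustrative sketch of monolithic vs disaggregated GPUs.
# The names and the three-chiplet split are invented for this example,
# not taken from Intel's patent.
from dataclasses import dataclass, field


@dataclass
class MonolithicGPU:
    # Everything (shaders, media, display, memory controllers) lives on one die
    description: str = "one big die with everything inside"


@dataclass
class Chiplet:
    name: str
    powered_on: bool = True  # the patent talks about power-gating unused chiplets


@dataclass
class DisaggregatedGPU:
    # The same functionality split across several smaller logic chiplets on one package
    chiplets: list[Chiplet] = field(default_factory=lambda: [
        Chiplet("shader chiplet 0"),
        Chiplet("shader chiplet 1"),
        Chiplet("media/display chiplet"),
    ])


gpu = DisaggregatedGPU()
print([c.name for c in gpu.chiplets])
```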

This won’t happen with Battlemage, the next generation of Arc graphics cards expected to arrive in early 2025. If this were something in the works for Battlemage, we’d surely have heard about it on the rumor mill by now.

So, this might be the plan further down the line for Celestial, Druid, or one of the future generations of Arc GPUs – assuming Intel gets that far with its discrete graphics card line-up.

As ever with patents, we must bear in mind that they are often filed speculatively, and a lot of them don’t see the light of day in finished products on shelves.


[Image: An Intel Arc A750 graphics card on a pink desk mat next to its retail packaging. Image credit: Future / John Loeffler]

Analysis: The benefits – and pitfalls – of disaggregation

Why go for a disaggregated GPU design like this? Chiplets have certain advantages in terms of chip design flexibility (modularity) and better power efficiency, with the latter being particularly important for high-end graphics cards now that we’re getting into the realm of some truly wattage-sucking monsters.
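As a rough illustration of that power-efficiency point, here's a toy calculation of the power-gating idea described in the patent, where chiplets a workload doesn't need are simply switched off. The wattage figures below are made-up placeholders for illustration, not real Arc numbers.

```python
# Toy illustration of power-gating unused chiplets; wattages are made-up placeholders.
CHIPLET_POWER_W = {"shader_0": 60, "shader_1": 60, "media_display": 15}


def package_power(active: set[str]) -> int:
    """Only chiplets that are powered on (listed in `active`) draw their hypothetical wattage."""
    return sum(watts for name, watts in CHIPLET_POWER_W.items() if name in active)


# A light desktop workload might only light up one shader chiplet plus media/display...
print(package_power({"shader_0", "media_display"}))  # 75
# ...while a heavy gaming workload powers up everything on the package.
print(package_power(set(CHIPLET_POWER_W)))  # 135
```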


The tricky bit, though, is that splitting a monolithic chip into multiple chiplets leaves the problem of ensuring those chiplets have fast enough interconnects, so that performance doesn’t drop by going this route.
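To see why those interconnects matter, here's a crude back-of-the-envelope scaling model; the traffic fraction and penalty values are arbitrary assumptions for illustration only, not figures from any real GPU.

```python
# Crude model: work that crosses chiplet boundaries pays an interconnect penalty,
# so real speedup falls short of the ideal "one extra chiplet = one extra unit of performance".
# All numbers are arbitrary assumptions, not measurements.
def effective_speedup(num_chiplets: int, cross_traffic: float, penalty: float) -> float:
    overhead = cross_traffic * penalty * (num_chiplets - 1)
    return num_chiplets / (1 + overhead)


for n in (1, 2, 4):
    print(n, round(effective_speedup(n, cross_traffic=0.2, penalty=0.5), 2))
# Prints roughly 1.0, 1.82, 3.08 - the gap from ideal scaling widens as chiplet
# count grows, which is why fast links between chiplets are essential.
```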

AMD was rumored to be looking at a chiplet design for the RDNA 4 flagship, before seemingly canning it (and as we know, Team Red has purportedly retreated to just mid-range graphics cards for its next-gen RX 8000 GPUs now). Nvidia, too, was rumored to be looking at a multi-chip design for the Blackwell GeForce flagship, the RTX 5090, but any chatter from the grapevine around this idea has died down to nothing.

One way or another, we’re likely to see chiplet designs for consumer GPUs in the future, perhaps from AMD, Nvidia, and indeed Intel as evidenced by the patent here.

There are broader worries about how far Intel is going to push with its discrete Arc GPUs, mind you, and Battlemage graphics cards are likely to be low-end only. While work has apparently started on Celestial, it’s notable that Team Blue isn’t really talking about its Arc line of GPUs much these days (outside of integrated graphics, that is).


Darren is a freelancer writing news and features for TechRadar (and occasionally T3) across a broad range of computing topics including CPUs, GPUs, various other hardware, VPNs, antivirus and more. He has written about tech for the best part of three decades, and writes books in his spare time (his debut novel - 'I Know What You Did Last Supper' - was published by Hachette UK in 2013).
