Marvell develops custom HBM memory solutions — interface shrinks and higher performance on the menu


Marvell announced a custom high-bandwidth memory (CHBM) solution for its AI-focused custom XPUs at its Analyst Day 2024. Developed in partnership with leading memory makers, CHBM promises to optimize performance, power, memory capacity, die size, and cost for specific XPU designs. CHBM will be compatible with Marvell's custom XPUs but will not be part of a JEDEC-defined HBM standard, at least initially.

"Breaking news: @Marvell has partnered with the leading HBM vendors to develop a custom HBM interface for faster, smaller, and lower power die2die interconnections. #Marvell2024AIDay" — December 10, 2024

Marvell's custom HBM solution allows tailoring interfaces and stacks for a particular application, though the company has not disclosed any details. One of Marvell's goals is to reduce the real estate that industry-standard HBM interfaces occupy inside processors, freeing up die area for compute and features. The company asserts that with its proprietary die-to-die I/O, it will not only be able to pack up to 25% more logic into its custom XPUs, but also potentially install up to 33% more CHBM memory packages next to compute chiplets, increasing the amount of DRAM available to the processor. In addition, the company expects to cut memory interface power consumption by up to 70%.

Because Marvell's CHBM does not rely on a JEDEC-specified standard, on the hardware side it will require a new controller, a customizable physical interface, new die-to-die interfaces, and overhauled HBM base dies. The new Marvell die-to-die HBM interface will offer a bandwidth density of 20 Tbps/mm (2.5 TB/s per mm), a significant increase over the 5 Tbps/mm (625 GB/s per mm) that HBM offers today, based on a slide from the company's Analyst Day published by ServeTheHome. Over time, Marvell envisions bufferless memory with a 50 Tbps/mm (6.25 TB/s per mm) interface.
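The bandwidth-density figures above are stated in terabits per second per millimeter of die edge, while the parenthetical equivalents are in terabytes per second. A quick sketch (assuming only the straightforward 8 bits-per-byte conversion) shows how the cited numbers relate:

```python
# Convert the cited edge bandwidth densities from Tbps/mm to TB/s per mm.
# Figures are from Marvell's Analyst Day slide as reported in the article.

def tbps_to_tb_per_s(tbps: float) -> float:
    """Convert terabits per second to terabytes per second (8 bits per byte)."""
    return tbps / 8

interfaces = {
    "Standard HBM today": 5.0,      # Tbps/mm -> 0.625 TB/s (625 GB/s) per mm
    "Marvell CHBM": 20.0,           # Tbps/mm -> 2.5 TB/s per mm
    "Bufferless (envisioned)": 50.0 # Tbps/mm -> 6.25 TB/s per mm
}

for name, tbps in interfaces.items():
    print(f"{name}: {tbps} Tbps/mm = {tbps_to_tb_per_s(tbps):.3f} TB/s per mm")

# CHBM is a 4x improvement over today's HBM interface density.
print(f"CHBM vs HBM today: {interfaces['Marvell CHBM'] / interfaces['Standard HBM today']:.0f}x")
```

This confirms the article's parenthetical conversions and makes the generational jump explicit: 4x density with CHBM, and 10x with the envisioned bufferless interface.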

Marvell does not specify how wide its CHBM interface will be, and discloses few other details beyond saying that it 'enhances XPUs by serializing and speeding up the I/O interfaces between its internal AI compute accelerator silicon dies and the HBM base dies,' which suggests a narrower interface width compared to industry-standard HBM3E or HBM4 solutions. Still, it looks like CHBM solutions will be customizable.

"Enhancing XPUs by tailoring HBM for specific performance, power, and total cost of ownership is the latest step in a new paradigm in the way AI accelerators are designed and delivered," said Will Chu, Senior Vice President and General Manager of the Custom, Compute and Storage Group at Marvell. "We are very grateful to work with leading memory designers to accelerate this revolution and help cloud data center operators continue to scale their XPUs and infrastructure for the AI era."

Working with Micron, Samsung, and SK hynix is crucial for the successful implementation of Marvell's CHBM, as it sets the stage for relatively widespread availability of custom high-bandwidth memory.
