A couple of days ago, Intel, TeamGroup, and ASRock came together to unveil the "HUDIMM" spec for DDR5 RAM. HUDIMMs use a single 32-bit subchannel instead of populating a 64-bit-wide bus with two 32-bit subchannels. This effectively cuts bandwidth and capacity in half, but allows for cheaper DDR5 that uses fewer ICs per stick. Today, new testing done by HKEPC, with the help of Asus, confirms exactly that: HUDIMM incurs an almost 50% bandwidth penalty, reducing performance significantly.
This new testing is more substantiated and was done on an Asus ROG Maximus Z890 Extreme motherboard with an Intel Core Ultra 9 285K. The outlet used a matching BIOS that supports HUDIMM modules because, unlike a retail 1x 32-bit stick, a standard 2x 32-bit stick modified to run half-width still has an SPD that tells the memory controller to expect a 64-bit-wide bus. Without that BIOS, the PC fails to initialize (POST) and gets stuck on memory training errors.
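For the curious, here's a rough idea of what that SPD check looks like in code. This is a minimal sketch, not the actual JEDEC decoder: the byte offset and bit layout below are illustrative assumptions, and the authoritative DDR5 SPD layout lives in JESD400-5.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Assumed offset of the channel bus width byte; consult JESD400-5
 * for the real DDR5 SPD layout. */
#define SPD_CH_BUS_WIDTH 235

void print_bus_width(const uint8_t *spd, size_t len)
{
    if (len <= SPD_CH_BUS_WIDTH)
        return;
    uint8_t b = spd[SPD_CH_BUS_WIDTH];
    /* Assumed encoding: bits [2:0] select the per-subchannel width,
     * bits [6:5] the number of subchannels on the module. */
    unsigned width = 8u << (b & 0x07);
    unsigned subch = ((b >> 5) & 0x03) + 1;
    printf("%u x %u-bit = %u-bit module bus\n", subch, width, subch * width);
    /* A taped-off standard stick still reports 2x 32-bit here, so the
     * controller trains against pins that no longer respond and POST fails. */
}
```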
We start with a single-rank 16 GB 7,200 MT/s stick that, with one subchannel taped off, showed up as 8 GB. In AIDA64, it achieved read speeds of 32,447 MB/s, write speeds of 25,195 MB/s, and copy speeds of 26,894 MB/s, with 87.7 ns latency. In contrast, the same stick untaped hit 58,913 MB/s read, 48,800 MB/s write, and 52,648 MB/s copy speeds. That's essentially double the throughput across the board, while latency was effectively unchanged at 85.7 ns versus 87.7 ns.
Metric | Standard (16 GB, 2x 32-bit) | Simulated HUDIMM (8 GB, 1x 32-bit) | Difference |
Read Speeds | 58,913 MB/s | 32,447 MB/s | -44.92% |
Write Speeds | 48,800 MB/s | 25,195 MB/s | -48.37% |
Copy Speeds | 52,648 MB/s | 26,894 MB/s | -48.92% |
Latency | 85.7 ns | 87.7 ns | - |
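For context on where numbers like these come from, below is a minimal sketch of a sequential-read bandwidth test, assuming a Linux box and compilation with something like `cc -O3`. It is not AIDA64's methodology; a single scalar thread will usually undershoot the hand-tuned, multi-threaded SIMD kernels a real benchmark uses, but the principle of streaming a buffer far larger than the cache and dividing bytes by seconds is the same.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <stdint.h>

int main(void)
{
    size_t bytes = (size_t)1 << 30;         /* 1 GiB working set, well past LLC */
    uint64_t *buf = malloc(bytes);
    if (!buf)
        return 1;
    memset(buf, 1, bytes);                  /* fault the pages in first */

    volatile uint64_t sink = 0;             /* keeps the reads from being optimized away */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int pass = 0; pass < 8; pass++)
        for (size_t i = 0; i < bytes / sizeof *buf; i++)
            sink += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("read: %.0f MB/s\n", 8.0 * bytes / secs / 1e6);
    free(buf);
    return (int)(sink & 1);
}
```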
As expected, disabling one of the 32-bit subchannels slashes the numbers roughly in half across the board. You get to build cheaper sticks that need only four ICs, rather than the usual eight of a full 16 GB DIMM, but it clearly comes at a cost: the standard 16 GB stick manages almost 60 GB/s of effective bandwidth, while the simulated 8 GB HUDIMM stick only reaches around 32 GB/s. That's the kind of discrepancy you'll notice.
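The back-of-the-envelope math backs this up. Peak theoretical bandwidth is just the transfer rate times the bus width in bytes, and the measured figures land in the same neighborhood as these ceilings (real benchmarks can sit a little above or below depending on methodology and what the tool counts as a megabyte):

```c
#include <stdio.h>

int main(void)
{
    double transfers = 7200e6;                     /* DDR5-7200: 7.2e9 transfers/s */
    double full = transfers * 8 / 1e9;             /* 64-bit bus = 8 bytes/transfer */
    double half = transfers * 4 / 1e9;             /* 32-bit bus = 4 bytes/transfer */
    printf("standard 64-bit: %.1f GB/s\n", full);  /* 57.6 GB/s */
    printf("HUDIMM   32-bit: %.1f GB/s\n", half);  /* 28.8 GB/s */
    return 0;
}
```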
Switching gears to a dual-channel setup, HKEPC put 2x 16 GB 7,200 MT/s sticks on the motherboard, which showed 32 GB in the standard config but only 16 GB when taped. The same story follows: half the bandwidth is gone when simulating HUDIMM. Read speeds drop from 106 GB/s to just 58 GB/s, write speeds go from 93 GB/s to 48 GB/s, and copy speeds fall from 97 GB/s to 51 GB/s. Latency remained virtually identical.
Metric | Standard (2x 16 GB) | Simulated HUDIMM (2x 8 GB) | Difference |
Read Speeds | 106,200 MB/s | 58,928 MB/s | -44.51% |
Write Speeds | 93,235 MB/s | 48,461 MB/s | -48.02% |
Copy Speeds | 97,552 MB/s | 51,473 MB/s | -47.24% |
Latency | 86.4 ns | 86.5 ns | - |
The HUDIMM numbers here basically match the performance of a regular 16 GB stick running in single channel, which is to be expected. It's simple math: across the board, bandwidth and capacity are halved in exchange for cheaper DDR5. The performance hit is significant, but since HUDIMM is aimed at budget gamers and business users, the tradeoff may be worthwhile for some.
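A quick check with HKEPC's own read numbers bears this out: the taped dual-channel pair lands within a rounding error of the single standard stick.

```c
#include <stdio.h>

int main(void)
{
    double hudimm_pair = 58928.0;  /* MB/s read, 2x 8 GB simulated HUDIMM (table above) */
    double std_single  = 58913.0;  /* MB/s read, one standard 16 GB stick, single channel */
    printf("delta: %+.2f%%\n", 100.0 * (hudimm_pair - std_single) / std_single);
    return 0;                      /* prints +0.03% */
}
```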
One claim from the announcement that HKEPC didn't check is asymmetric dual-channel support: pairing a HUDIMM with a regular 2x 32-bit DDR5 stick is supposed to drastically improve performance. ASRock said that using an 8 GB HUDIMM stick alongside a standard 16 GB stick nets more bandwidth than a single 24 GB UDIMM, despite the same total capacity. The 24 GB stick on its own is apparently more expensive to manufacture, too, so this is a sort of "best-of-both-worlds" pitch.
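How that could work wasn't tested, but the usual mechanism on Intel platforms is flex-style interleaving: the capacity that can be matched across both channels runs dual-channel, and the leftover runs single-channel. The sketch below models that bucket math under the assumption that a HUDIMM pairing behaves like Intel Flex Memory; whether it actually does is exactly the claim HKEPC left unverified, and real DDR5 subchannel interleaving is more involved than this.

```c
#include <stdio.h>

int main(void)
{
    double hudimm = 8.0, standard = 16.0;                   /* GB per stick */
    double small = hudimm < standard ? hudimm : standard;
    double interleaved = 2.0 * small;                       /* runs dual-channel */
    double remainder = (hudimm + standard) - interleaved;   /* runs single-channel */
    printf("%.0f GB dual-channel + %.0f GB single-channel = %.0f GB total\n",
           interleaved, remainder, hudimm + standard);
    return 0;  /* 16 GB fast + 8 GB slow = 24 GB */
}
```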