- Nvidia scrapped SOCAMM 1 after repeated failures to meet expectations
- SOCAMM 2 promises faster transfer speeds reaching 9,600 MT/s performance
- LPDDR6 adoption discussions signal scalability beyond current SOCAMM 2 modules
Nvidia has abandoned its earlier effort to commercialize SOCAMM 1 after repeated technical problems and is now focused entirely on SOCAMM 2, reports have claimed.
The first version was positioned as a low-power, high-capacity memory alternative for AI servers, but delays and design setbacks prevented it from gaining traction.
Now, an industry insider told ET News (originally in Korean), "Nvidia originally planned to introduce SOCAMM 1 within the year, but technical issues halted the project twice, preventing any actual large-scale orders."
A shift in performance and design goals
This reset means that Samsung Electronics, SK Hynix, and Micron are all starting on the same footing with the second-generation design.
SOCAMM 2 maintains the detachable module form factor with 694 I/O ports, but increases transfer speed to 9,600 MT/s compared to the earlier 8,533 MT/s.
In practice, this translates to system bandwidth rising from around 14.3 TB/s to roughly 16 TB/s on the Blackwell Ultra GB300 NVL72, a platform already central to data center GPU discussions.
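Those two figures are consistent with bandwidth scaling linearly with the per-pin transfer rate when the module width is unchanged. A quick back-of-the-envelope check, using only the numbers reported above (not official spec-sheet values):

```python
# Sanity check: aggregate bandwidth should scale roughly linearly with
# transfer rate if the I/O width stays the same between generations.
# All figures come from the report above, not from an official datasheet.

socamm1_rate_mts = 8533   # SOCAMM 1 transfer rate, MT/s
socamm2_rate_mts = 9600   # SOCAMM 2 transfer rate, MT/s
gb300_bw_tbs = 14.3       # reported GB300 NVL72 bandwidth with SOCAMM 1, TB/s

scaled_bw = gb300_bw_tbs * socamm2_rate_mts / socamm1_rate_mts
print(f"Projected SOCAMM 2 bandwidth: {scaled_bw:.1f} TB/s")  # ≈ 16.1 TB/s
```

The result lands close to the "roughly 16TB/s" figure cited, which suggests the quoted numbers describe the same module configuration running at a higher data rate rather than a wider interface.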
Nvidia’s reliance on LPDDR5X continues for now, although discussions about adopting LPDDR6 suggest the format is being designed with long-term scalability in mind.
Despite these upgrades, the module is still said to consume less power than standard DRAM-based RDIMMs, a claim that will need validation under real server workloads.
The first generation of SOCAMM modules was manufactured only by Micron, creating a single point of dependency that raised questions about supply stability.
SOCAMM 2 expands the supplier base, with Samsung and SK Hynix preparing samples alongside Micron.
This broader participation could make production more stable and pricing more competitive, although it remains uncertain how quickly mass production will begin.
Samsung and SK Hynix have indicated that they are "preparing for mass production of SOCAMM in the third quarter."
However, industry estimates suggest that SOCAMM 2 will not be available in volume until early next year.
One of the most notable differences between the two generations lies in standardization.
SOCAMM 1 was developed outside JEDEC, which limited its adoption to Nvidia’s platforms.
SOCAMM 2, however, could attract JEDEC involvement, making it easier for other companies to adopt similar modules in their systems.
If that happens, SOCAMM 2 might evolve into a new industry format, offering compact, high-bandwidth memory options beyond Nvidia’s own ecosystem.
For creative professionals, such developments could ultimately influence GPU choices for video editing, where memory performance directly affects the handling of high-resolution footage.
However, analysts remain cautious, noting that while this path offers promise, the timing of its arrival, just as LPDDR6 development accelerates, may dilute its long-term impact.
As AI semiconductors grow more computationally powerful, demand for memory that can relieve data bottlenecks continues to rise.
Whether SOCAMM 2 becomes the solution to this problem, or merely one more option in a crowded field, will depend on execution, standardization, and the speed of LPDDR6’s rollout.