End of the NAND layers race: innovation across vectors


NAND is a vital component of the future of electronics. It’s everywhere, driving storage capacity, performance and power efficiency in everything from data center servers to phones, drones, cameras and other portable devices.

As these systems and electronic devices add more features and perform more complex tasks like AI, data storage needs will continue to grow – making NAND flash memory a critical component of future innovations.

As a result, the race is on to build higher capacity NAND with better performance and lower power. Many people believe that a higher layer count is the only way forward. But the truth is there are many vectors of NAND innovation and higher layer counts aren’t the only way to increase NAND flash bits and storage capacity.

This new era of NAND is driving a period of change, where the layers-focused race is behind us. The emphasis is shifting toward strategically timing the introduction of new, longer-lasting nodes optimized for specific use cases and applications. Not all applications need the latest node with the highest capacity or performance. Making each layer denser, rather than simply stacking more layers, enhances power efficiency, performance and capacity while managing cost for specific customer needs.

Senior vice president of Development Engineering at Western Digital.

Traditional Vertical Scaling

The “layers race” is the notion that more layers means more bit density and capacity, leading to a cost advantage – therefore the NAND with the highest number of layers must be best. But with 3D NAND, it’s no longer that simple.

Scaling NAND is similar to adding capacity at a hotel. Simply adding more floors may seem like a good idea, but building up increases operational costs and complexity, including the costs of buying and moving equipment, constructing the floors themselves, and so on. At some point, there are diminishing returns to adding floors. Intuitively, the proportional cost reduction provided by adding ten floors to a hundred-floor building is better than adding the same ten floors to a five-hundred-floor building. Yet the capital needed to build those additional ten floors on top of a five-hundred-floor building may well be higher.

Making each floor denser, by shrinking rooms and using space more effectively, can provide the same increase in occupancy in a much more efficient and cost-effective way.


The same logic applies to NAND architecture. Simply stacking NAND layers on top of each other is not the only way to build more bits or capacity. Like the floors of a hotel, usable NAND becomes more expensive and difficult to build as layer counts grow: stacking layers increases processing time and requires additional capital for the advanced tools needed to reliably manufacture high-quality NAND die.

Scaling Smarter by Leveraging Multiple Vectors

While layer count will continue to grow, it is no longer the core innovation driver. Instead, innovation spans across multiple vectors and there are other ways to scale NAND architecture in addition to vertical scaling, including lateral, logical and architecture scaling methods.

Lateral scaling works by packing more memory cells into every single layer while removing some of the redundant support structures. It’s like squeezing more rooms onto the same floor of a hotel, or reducing the number of stairs and elevators in a building. Starting with lateral scaling allows you to optimize the available space before adding another layer. This phased approach is much more efficient, saving costs while reducing risk. It also allows customers to reach a given capacity point at the right time, with consistent supply and quality. And when the decision is made to add more layers, the benefit is multiplied by the increased efficiency of each layer added.

Logical scaling increases the number of logical bits that can be stored on a physical device. In the hotel room scenario, this would be akin to squeezing more guests into the same hotel room without causing disturbances.
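Logical scaling is commonly realized by storing more bits in each physical cell, as in the industry's SLC, MLC, TLC and QLC modes. A minimal sketch (the cell count is purely hypothetical, not a real die parameter) of how bits per cell multiplies capacity without adding any physical cells:

```python
# Illustrative only: logical scaling multiplies die capacity
# without changing the number of physical cells.
CELL_MODES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}  # bits stored per cell

def die_capacity_gb(physical_cells: int, bits_per_cell: int) -> float:
    """Raw capacity in gigabits for a given cell count and cell mode."""
    return physical_cells * bits_per_cell / 1e9

cells = 128_000_000_000  # hypothetical cell count for one die
for mode, bits in CELL_MODES.items():
    print(f"{mode}: {die_capacity_gb(cells, bits):.0f} Gb")
```

The same die quadruples its raw capacity going from SLC to QLC, which is why logical scaling is such a powerful lever, though in practice higher bits per cell trades off against endurance and write performance.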

Finally, architecture scaling optimizes the way circuits support memory arrays, such as positioning circuits next to the array, tucking them underneath, or implementing them on a separate wafer. In a hotel, this would be like deciding where to put the parking lot guests need: beside the building, underneath it, or on top of it (with a cost-effective way to airlift cars, of course).

A combination of all four

An approach that combines all four of these scaling vectors is a much smarter way to add NAND bit growth without sacrificing performance or power efficiency across the widest range of use cases and devices. It also has the additional benefit of optimizing node-to-node cost reduction and minimizing the capital needed for transitions.
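One way to see why combining vectors beats stacking alone is that total bit count is roughly multiplicative across the vectors, so modest gains on each compound. A toy model (all numbers are hypothetical, not actual node parameters):

```python
def total_bits(layers: int, cells_per_layer: int, bits_per_cell: int) -> int:
    """Toy model: bits on a die grow multiplicatively across scaling vectors."""
    return layers * cells_per_layer * bits_per_cell

base = total_bits(layers=100, cells_per_layer=1_000_000, bits_per_cell=3)

# Vertical scaling only: 30% more layers.
vertical = total_bits(130, 1_000_000, 3)

# Combined: 10% more layers, 10% denser layers (lateral), 4 bits/cell (logical).
combined = total_bits(110, 1_100_000, 4)

print(f"{vertical / base:.2f}")  # 1.30
print(f"{combined / base:.2f}")  # 1.61
```

In this sketch, three small gains multiplied together deliver more bit growth than one large layer-count jump, and each individual change is a smaller manufacturing step.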

And while NAND technology is complex, the manufacturing processes that create viable NAND nodes, and eventually products, are even more so. These conditions are exacerbated by supply and demand dynamics in an emerging era where new applications, especially AI, will greatly increase the need for both compute- and storage-intensive flash-based solutions.

For example, the AI Data Cycle framework describes the virtuous cycle where storage feeds AI models, and AI in turn demands more storage. This AI Data Cycle will be a significant incremental growth driver for the storage industry.

Performance, power, and capacity

Performance, power and capacity are major considerations at every phase, as each stage demands something different. The initial stages need massive capacity to hold as much data as possible for model training; as data moves through the cycle, speed and performance may become the more important factors. And power is increasingly a critical factor in any AI application.

In this new era of NAND, nodal migration paths should also be based on the needs of the customer, not the one-size-fits-all approach of the past.

Customer needs are starting to bifurcate, and the role of NAND suppliers in addressing them is becoming much more interesting. Ultimately, what a customer builds will dictate how the flash inside it should operate: how big it should be, how much capacity it holds and how much power it consumes. It’s not about how many layers the product has. Focusing on the features that matter most to customers (performance, capacity and power) is the winning strategy.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

