For years, enterprise IT has followed a familiar pattern. Devices age, performance starts to lag, operating systems evolve, and a hardware refresh follows. The cycle became so routine that many organizations stopped questioning it. Replacing fleets every few years simply came to be seen as the cost of staying current.
That logic is much harder to defend in today’s market.
The rapid expansion of AI infrastructure is reshaping the global memory market in ways that now affect endpoint strategy. As suppliers prioritize memory for high-growth AI and data center demand, traditional DRAM pricing has become more volatile and endpoint costs have become harder to predict.
For IT leaders, that creates a serious budgeting problem. The price of a new PC is increasingly influenced by memory market pressures that have little to do with the day-to-day needs of the average employee.
This is why the current memory squeeze matters. It is not just making refreshes more expensive. It is exposing how much of the conventional PC lifecycle is based on habit rather than necessity.
When the cost curve stops making sense
For many organizations, the economics of refresh are starting to look out of balance. A new device may cost noticeably more than it did a year ago, yet still offer only marginal gains for users whose workloads are centered on browsers, collaboration tools, SaaS platforms, and virtual desktops.
That disconnect forces a different kind of question. Instead of asking whether a newer device is available, IT teams are asking whether a replacement is justified at all. If the user experience is already shaped mostly by cloud services and hosted applications, then buying more local hardware at inflated prices can start to look like a poor trade.
Rising memory costs are making it harder to default to wholesale replacement, and that pressure is encouraging a more grounded conversation about what employees actually need from an endpoint.
The endpoint is no longer where work happens
For many employees, the endpoint is no longer the primary place where work is processed. It is the place where work is accessed. Applications increasingly run in the browser. Files are stored in cloud environments.
Desktops are delivered virtually. In this model, the endpoint acts as a secure and reliable connection point rather than a standalone computing engine.
Once organizations recognize this shift, the logic behind hardware planning changes. The device on the desk does not need to carry the full burden of performance. In many cases, compute happens in the data center or the cloud, while the endpoint simply provides access.
What matters most is stability, security, connectivity, and a consistent user experience.
That realization reframes the refresh conversation. If the endpoint is primarily an access layer, it does not need to be replaced on a rigid schedule tied to traditional assumptions about local compute power.
Giving older hardware a second life
This shift is making repurposing far more relevant than it was even a few years ago. Many older laptops, desktops, and even aging thin clients are still capable of supporting modern work when they are used in a way that aligns with today’s computing model.
Instead of running heavy, resource-intensive operating systems locally, those devices can be paired with lightweight software and redeployed as thin clients. This approach extends the life of existing hardware while still providing users with secure, reliable access to virtual desktops, SaaS applications, and cloud environments.
The result is a more efficient use of resources. Devices that might otherwise be retired can continue to deliver meaningful value, particularly when compute is handled centrally rather than on the endpoint itself.
Thin clients as a buffer against market volatility
Thin and zero clients are often associated with simplicity and centralized management, but their relevance is growing in the current environment.
They reduce reliance on local components such as DRAM, which are subject to price swings and supply constraints. By shifting compute to centralized environments, organizations can insulate themselves from volatility in the memory market and avoid overpaying for incremental hardware gains.
This creates a more predictable cost structure and allows IT teams to align spending with actual workload requirements. Some users will still need full PCs, but many will not. Thin clients make it easier to match endpoint strategy to real usage patterns instead of applying a uniform refresh approach across the organization.
Extending lifecycle without sacrificing experience
A common concern with delaying refresh cycles is that it will negatively impact user experience. That concern was valid when performance depended heavily on local hardware.
Today, cloud-delivered desktops and applications change that dynamic.
With DaaS and virtual desktop platforms such as Windows 365, Azure Virtual Desktop, Citrix, Omnissa, or Parallels, performance is largely determined by the cloud environment rather than the endpoint itself. Users can access the same experience from a range of devices.
This allows organizations to extend lifecycle timelines without sacrificing productivity. It also gives IT leaders flexibility during procurement cycles that are affected by memory pricing and supply constraints.
Sustainability moves into the mainstream
Extending the life of existing hardware also has clear environmental benefits. Short refresh cycles increase e-waste and expand the carbon footprint associated with manufacturing and disposal.
Lifecycle analyses from Interzero and Fraunhofer UMSICHT show that reuse can reduce emissions by up to 37 percent. This makes repurposing a practical way to support sustainability goals while also controlling costs.
For many organizations, sustainability is no longer a secondary consideration. It is becoming part of the core decision-making process around endpoint strategy.
A more flexible approach to endpoint strategy
The pressure on memory supply is unlikely to ease quickly. AI demand continues to grow, and resource allocation will continue to favor high-value data center workloads.
In response, enterprise IT is moving toward a more flexible model. Instead of treating refresh cycles as fixed, organizations are evaluating actual needs, exploring repurposing opportunities, and adopting alternative endpoint approaches where appropriate.
The endpoint is still important, but its role has changed. It is no longer defined by local performance alone, but by how effectively it connects users to modern, cloud-driven work environments.
The RAMpocalypse may be creating short-term challenges, but it is also pushing organizations toward smarter, more efficient, and more sustainable ways of thinking about endpoint computing.
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/pro/perspectives-how-to-submit