Google, Amazon, Microsoft, and Meta plan to spend a combined $725 billion on capital expenditure in 2026, a 77% increase over last year's record $410 billion, according to first-quarter earnings reports compiled by the Financial Times.
Google led with 63% cloud revenue growth and an 81% jump in net income to $62.6 billion, while Meta's stock dropped 6% after hours despite a 33% revenue increase, punished by investors for adding $10 billion to its spending forecast and offering no firm timeline on new AI models.
Memory costs inside the capex
Microsoft’s CFO, Amy Hood, told investors that rising prices for memory chips and other components accounted for $25 billion of the company's record capex budget. Microsoft set its 2026 spending at $190 billion, far above the $152 billion average analyst forecast. Hood warned that even with the additional investment, Microsoft expects to remain capacity-constrained on GPUs, CPUs, and storage through at least 2026.
Meta cited the same pressures, raising its full-year capex range to $125 billion to $145 billion, up from a prior ceiling of $135 billion. In its earnings release, Meta attributed the increase to "higher component pricing this year, particularly memory," alongside rising costs for land, power, and skilled workers needed to build data centers that now consume 70% of the world's memory output.
The timing is hardly coincidental: TrendForce reported DRAM contract prices rising roughly 95% quarter over quarter in Q1 2026, with a further 58% to 63% increase projected for Q2. NAND is following a similar trajectory, with Q2 contract prices expected to climb 70% to 75%. Server DRAM and high-density DDR5 RDIMMs are absorbing the bulk of production capacity, and all NAND output for 2026 is already committed, according to Phison CEO Khein-Seng Pua.
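Those quarter-over-quarter figures compound quickly. A minimal Python sketch of the arithmetic, using only the percentages TrendForce reported:

```python
# Compounding TrendForce's DRAM contract-price moves:
# Q1 2026: ~95% quarter-over-quarter rise; Q2 2026: projected 58%-63% on top.
q1_increase = 0.95
q2_low, q2_high = 0.58, 0.63

# Cumulative change versus end of Q4 2025
cum_low = (1 + q1_increase) * (1 + q2_low) - 1
cum_high = (1 + q1_increase) * (1 + q2_high) - 1

print(f"Cumulative DRAM contract-price rise by mid-2026: "
      f"{cum_low:.0%} to {cum_high:.0%}")
```

In other words, if both figures hold, DRAM contract prices would roughly triple over two quarters.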
Hood's $25 billion, therefore, helps to put a dollar value on what has previously been an abstract concern: If one company's memory cost inflation alone exceeds the entire annual capex of most semiconductor firms, the pressure on consumer DRAM and NAND supply becomes much easier to quantify.
Google Cloud's contract backlog
Google Cloud revenue, meanwhile, hit $20 billion in the same quarter, growing 63% year over year and outpacing the growth of both Amazon Web Services ($37.6 billion, up $8.3 billion) and Microsoft's Azure-driven cloud segment ($34.7 billion, up $7.9 billion).
Google's cloud contract backlog reached $460 billion, roughly double the $240 billion reported at the end of Q4 2025. Amazon reported $364 billion in its own pipeline, which will expand further following a recent computing contract with Anthropic worth $100 billion over the next decade. Microsoft's commercial remaining performance obligations hit $625 billion, up 110% year over year.
Cloud boss Thomas Kurian attributed Google's growth to its strategy of building custom AI chips, foundation models, and products in-house, telling the Financial Times that this gives the company a cost and research advantage over competitors that have struggled to develop their own chips and frontier models. Google's 7th-gen Ironwood TPU, which packs 192 GB of HBM3E per chip with 7.37 TB/s bandwidth in pods of up to 9,216 chips, is central to that strategy, and Anthropic has committed to access up to one million of them. Google recently unveiled its 8th-gen TPUs, which are split into two distinct variants for training and inference.
Alphabet raised its capex guidance to between $180 billion and $190 billion, up from previous guidance of $175 billion. CFO Anat Ashkenazi said she expects capex to "significantly increase" in 2027, and shares rose by some 7% after hours. It's worth noting that $37.7 billion of Alphabet's $62.6 billion net income came from unrealized gains on non-marketable equity securities, primarily the company's Anthropic stake, according to the earnings release filed with the SEC. Strip that out and operating performance was still strong, with a 36.1% operating margin, but the headline net income figure overstates recurring profitability.
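The adjustment is simple arithmetic; a short sketch using only the figures from the earnings release:

```python
# Alphabet Q1 figures from the SEC-filed earnings release, in $bn.
net_income = 62.6        # reported net income
unrealized_gains = 37.7  # unrealized gains on non-marketable equity
                         # securities, primarily the Anthropic stake

# Net income excluding the one-off paper gains
ex_gains = net_income - unrealized_gains
print(f"Net income excluding unrealized gains: ${ex_gains:.1f}bn")
```

Roughly $24.9 billion of the reported $62.6 billion reflects recurring operations; the rest is a paper gain that could reverse in a later quarter.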
Custom silicon and the GPU question
These capex figures reflect more than GPU purchases, because each hyperscaler is now deploying or developing custom accelerators to reduce dependence on Nvidia for inference-based workloads.
Amazon's Trainium3, built on a 3nm process with 144 GB of HBM3E and roughly 4.9 TB/s of bandwidth, is what CEO Andy Jassy described as "nearly fully subscribed" for 2026. Meta has announced four generations of its MTIA inference chip, developed with Broadcom and fabbed at TSMC, even as it signed GPU deals worth roughly $110 billion combined with AMD and Nvidia. Meanwhile, Microsoft's Maia 200 is deploying in U.S. Central data centers.
This pattern is likely to extend beyond accelerators as CPU demand for agentic AI workloads drives a parallel supply crunch, with CPU lead times currently stretching to six months. Intel has reported billions of dollars in unmet Xeon demand, and Arm CEO Rene Haas has stated that agentic workloads require roughly 120 million CPU cores per gigawatt of data center capacity, four times what traditional AI training clusters need. Per Intel CFO David Zinsner, data center CPU-to-GPU ratios have already moved from 1:8 to 1:4, with further convergence expected to reach or exceed parity.
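To give Haas's figure some scale, a back-of-envelope Python sketch; the cores-per-socket count is an illustrative assumption, not a number from the article:

```python
# Arm CEO Rene Haas: agentic workloads need ~120M CPU cores per gigawatt,
# four times what traditional AI training clusters require.
CORES_PER_GW_AGENTIC = 120_000_000
CORES_PER_GW_TRAINING = CORES_PER_GW_AGENTIC // 4  # 30 million

# Hypothetical high-core-count server CPU (assumption for illustration)
cores_per_socket = 128

sockets_per_gw = CORES_PER_GW_AGENTIC // cores_per_socket
print(f"~{sockets_per_gw:,} CPU sockets per GW of agentic capacity")
```

Under that assumption, a single gigawatt of agentic capacity implies on the order of a million server CPU sockets, which makes six-month CPU lead times easier to understand.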
Despite record spending, all four companies have acknowledged supply constraints that additional capital alone can’t resolve. Nvidia has booked an estimated 800,000 to 850,000 wafers of TSMC's CoWoS advanced packaging capacity for 2026, consuming over half of the total output and leaving AMD, Broadcom, and Google's TPU program competing for the remainder. CoWoS remains oversubscribed through at least mid-2026, and TSMC's U.S. packaging fabs aren’t expected to reach volume until 2028.
Power infrastructure is another bottleneck, with large power transformer lead times extending to roughly 128 weeks, and the IEA estimating that approximately 20% of planned global data center projects could be at risk of grid-related delays. TrendForce recently downgraded its full-year server shipment growth forecast from 20% to 13% because lead times for the power management ICs and baseboard management controllers needed to assemble complete servers have stretched to 35 to 40 weeks. Samsung's planned closure of its S7 eight-inch wafer fab in Korea will tighten PMIC supply further.
‘The bear thesis is garbage’
Meta's stock slipped 6% after hours following the earnings, erasing roughly $113 billion in market value. That drop reflected both the $10 billion capex increase and CEO Mark Zuckerberg's lack of a firm schedule for releasing improved AI models to follow the recently launched Muse Spark. Dec Mullarkey, managing director of SLC Management, told the FT that investors are concerned about whether Meta's historically capital-light business is becoming far more capital-intensive.
"The bear thesis is garbage," countered Brent Thill, an analyst at Jefferies, arguing that revenue growth across the sector justifies the spending. Zuckerberg offered little to settle the debate. Asked about Meta's AI agent development, he told investors he cared more about quality than deadlines, adding that most AI agents available today are not good enough for everyday users.
Amazon kept its $200 billion capex plan unchanged, and Microsoft CEO Satya Nadella said ending his company's exclusive contract with OpenAI was beneficial, claiming royalty-free access to OpenAI's frontier models and IP through 2032.