The numbers coming out of Big Tech right now are absurd, and I mean that as a technical term. Microsoft, Alphabet, Amazon, and Meta are on track to spend a combined $355 billion-plus on AI capital expenditures in 2026—effectively doubling what they spent just two years ago and tripling their 2022 investments. That's not incremental growth. That's a coordinated bet on a future where compute is destiny.

The capex isn't scattered across random projects. It's laser-focused on four categories: accelerators (GPUs and custom silicon) consuming roughly 55% of the spend, data center construction at 22%, energy and cooling infrastructure at 13%, and networking with storage taking the remaining 10%. The math is simple—when training frontier models costs hundreds of millions per run and inference scales with every user query, you need hardware density that rivals small nations.

Breaking down the individual players reveals different strategies wrapped in identical urgency. Amazon AWS leads with an estimated $110-130 billion for calendar year 2026, according to Morgan Stanley and Bernstein projections, betting heavily on Trainium chips and maintaining cloud dominance. Microsoft committed $95-100 billion for fiscal year 2026, with approximately 80% earmarked for cloud and AI infrastructure—a direct consequence of its OpenAI partnership requiring massive Azure compute backing. Alphabet plans $80-90 billion focused on TPU development (Trillium and the upcoming Ironwood) while expanding Google Cloud and Waymo operations. Meta rounds out the group at $70-75 billion, with CFO confirmation that essentially all capex is AI-related, pivoting hard from its failed metaverse experiment.

The hardware underneath this spending spree tells a story of dependency and acceleration. Nvidia's H100s ($25K-$40K per unit), H200s, Blackwell B200s, and GB200s dominate the landscape, with Rubin announced for late 2026 promising 5-10x performance improvements over current generations.
Google runs its own TPUs as a competitive moat. Amazon's Trainium chips represent a deliberate diversification strategy away from pure Nvidia dependency. Every hyperscaler knows that whoever gets stuck running obsolete silicon while competitors upgrade faces an existential disadvantage in model capability.
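The four-category breakdown above is simple arithmetic over the combined total; a minimal sketch using the shares and the $355 billion lower bound cited in this piece:

```python
# Rough dollar allocation of projected 2026 AI capex across the four
# categories cited above. Shares are this article's estimates.
TOTAL_CAPEX_B = 355  # combined 2026 spend, $ billions (lower bound)

shares = {
    "accelerators (GPUs, custom silicon)": 0.55,
    "data center construction": 0.22,
    "energy and cooling": 0.13,
    "networking and storage": 0.10,
}

# Sanity check: the four categories account for the full spend.
assert abs(sum(shares.values()) - 1.0) < 1e-9

allocation_b = {k: TOTAL_CAPEX_B * v for k, v in shares.items()}
for category, dollars in allocation_b.items():
    print(f"{category}: ~${dollars:.0f}B")
```

At these shares, accelerators alone absorb roughly $195 billion—more than the entire group's combined capex just a few years ago.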
The Accounting Game Nobody Talks About
Here's where it gets interesting from a financial engineering perspective. Before 2022, these companies depreciated data center hardware over four years. Microsoft extended that to six years in 2022. Google followed in 2023. Meta stretched from four to 5.5 years. For a company deploying $100 billion in GPUs, moving from a four-year to a six-year straight-line schedule cuts the annual charge from $25 billion to roughly $17 billion—boosting reported operating income by about $8 billion a year. (Depreciation sits below the EBITDA line, so EBITDA itself is unaffected; the flattering effect shows up in operating and net income.) The risk: if Nvidia's Rubin architecture delivers the promised performance jumps, H100s purchased in 2023 could become economically obsolete before their six-year accounting life ends, forcing write-downs that reverse this artificial margin boost.
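The earnings effect is straight-line arithmetic; a minimal sketch of the schedule change, using the illustrative $100 billion deployment above:

```python
# Straight-line depreciation: annual charge = cost / useful life.
gpu_fleet_cost_b = 100  # $ billions of deployed hardware (illustrative)

annual_charge_4yr = gpu_fleet_cost_b / 4  # pre-2022 schedule
annual_charge_6yr = gpu_fleet_cost_b / 6  # extended schedule

# The charge is deferred, not avoided. If the hardware is economically
# obsolete before year six, the remaining book value gets written down.
annual_uplift = annual_charge_4yr - annual_charge_6yr
print(f"4-year schedule: ${annual_charge_4yr:.1f}B/yr")
print(f"6-year schedule: ${annual_charge_6yr:.1f}B/yr")
print(f"Annual charge deferred: ${annual_uplift:.1f}B")
```

The ~$8 billion per year is an earnings timing shift, not a cash saving—which is exactly why early obsolescence turns it into a write-down risk.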
Market Reality Check
Cloud revenue is growing—Azure AI services hit $13 billion annually, Google Cloud crossed $42 billion with positive operating margins for the first time, and AWS generates over $108 billion. But investors should note that operating margins are compressing despite these top-line gains: Meta's Q4 2024 margin dropped from 41% to 38%, a decline attributed directly to capex. The question isn't whether Big Tech can afford this spending—each of these companies generates on the order of $100 billion in annual operating cash flow, and Microsoft alone exceeds $110 billion. The question is whether the ROI ever justifies it.
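Mechanically, that margin compression flows through depreciation: capex doesn't hit the income statement when the cash goes out, but the resulting depreciation charge does. A toy sketch with hypothetical round numbers (not Meta's actual financials) showing how an incremental charge produces a 41% → 38% move:

```python
# Hypothetical quarter: incremental depreciation compressing operating margin.
# All figures are illustrative, chosen only to reproduce a 3-point drop.
revenue_b = 50.0             # hypothetical quarterly revenue, $B
op_income_b = 20.5           # operating income before the new charge, $B
extra_depreciation_b = 1.5   # incremental depreciation from new capex, $B

margin_before = op_income_b / revenue_b
margin_after = (op_income_b - extra_depreciation_b) / revenue_b

print(f"margin before: {margin_before:.0%}")  # 41%
print(f"margin after:  {margin_after:.0%}")   # 38%
```

Note the asymmetry: the revenue from AI services arrives gradually, while the depreciation from the hardware that serves it starts ticking immediately.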
What Comes Next
Three factors will define the next 18 months: Nvidia Rubin's actual delivery timeline and pricing; Stargate (the OpenAI + Oracle + SoftBank consortium committing $500 billion over four years) fragmenting the compute market; and algorithmic efficiency gains from mixture-of-experts architectures and distillation techniques that could reduce compute requirements per unit of AI value delivered. Analyst consensus for combined 2027 capex sits at $400-480 billion, assuming frontier-model scaling continues to demand more compute.
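The mixture-of-experts lever deserves a number: per-token compute scales with the parameters actually activated, not the total parameter count. A rough sketch using Mixtral-8x7B-style figures as an assumption (~47B total parameters, 2 of 8 experts routed per token, ~13B active):

```python
# Mixture-of-experts: per-token FLOPs scale with *active* parameters,
# not total parameters. Figures approximate Mixtral 8x7B (assumption).
total_params_b = 46.7    # total parameters, billions
active_params_b = 12.9   # parameters touched per token (2 of 8 experts + shared layers)

active_fraction = active_params_b / total_params_b
print(f"active fraction per token: {active_fraction:.0%}")

# Relative to a dense model of the same total size, per-token compute
# drops roughly in proportion to the active fraction.
compute_reduction = 1 - active_fraction
print(f"approx. per-token compute reduction vs dense: {compute_reduction:.0%}")
```

If gains like this compound across the frontier labs, the "more capability requires proportionally more compute" assumption underpinning the $400-480 billion consensus starts to look fragile.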
The Bottom Line
This isn't a bubble in the 2000 sense—no one's burning cash with zero revenue. These are profitable companies making coordinated, arguably rational bets on infrastructure that could define computing's next decade. But if inference costs collapse due to open-weight models and efficient hardware (Cerebras, Groq, Etched are already pushing this), or if frontier scaling hits diminishing returns before these investments amortize, the valuation compression will be brutal. Watch margins in Q3 2026 when the first Rubin clusters supposedly ship. That's when we learn if $355 billion was visionary or just really expensive hubris.