Nvidia tumbles after hyperscaler earnings, with GPUs no longer the missing ingredient in the AI boom
On the surface, it’s difficult to see why Nvidia is getting clobbered after the Magnificent 7’s four hyperscalers reported earnings after the close on Wednesday.
The 2026 capex guidance for this group — which went up about $15 billion thanks to Meta and Google’s updates — has served as shorthand for Nvidia’s earnings outlook throughout the AI boom. That makes sense, as Nvidia is one of the biggest suppliers to all four firms.
But the AI boom is evolving, and one reason being offered for Nvidia’s sharp sell-off is that its most important product, the GPU, simply isn’t the key missing ingredient in the AI boom right now. Rather, GPUs are something these companies are trying to do without as they build up their own suites of offerings.
$NVDA The big AI spending numbers out of the mega-caps Wed night aren't necessarily bullish for Nvidia for two reasons: 1) 2 of the companies in question are building enormous chip businesses of their own (Google and Amazon) and 2) budgets are being inflated really because of…
— Vital Knowledge Media (@knowledge_vital) April 30, 2026
After a spike during Q4 earnings, hyperscalers aren’t talking as much about the OG brains behind the AI boom...
...but they are talking a lot about the hardware they’re bringing to the table...
Amazon CEO Andy Jassy:
“Nobody has a better set of chips across AI and CPU workloads than AWS with Trainium and Graviton, and we’re unusually well positioned for this AI inflection we’re in the early stages of experiencing. While the largest number of AI chips we’re bringing in are Trainium, we continue to have a deep partnership with NVIDIA. We have immense respect for them, continue to order substantial quantities, we’ll be partners for as long as I can foresee, and we’ll always have customers who want to run NVIDIA on AWS. And we will also have a very large chips business ourselves.
Customers always want choice. It’s always been true, and always will be true.”
Microsoft CEO Satya Nadella:
“Our Maia 200 AI accelerator, which offers over 30% improved tokens per dollar compared to the latest silicon in our fleet, is now live in our Iowa and Arizona data centers.
Our Cobalt server CPU is deployed in nearly half of our DC regions running workloads at scale for customers like Databricks, Siemens, and Snowflake. As our largest customers scale their AI deployments, they’re increasingly leveraging other services across our platform and choosing to run those workloads on Cobalt, and we’re expanding Cobalt supply significantly to meet this demand.”
Google CEO Sundar Pichai:
“We are unique in the market because of our vertically optimized AI stack and the way we co-develop the components from our infrastructure and models to platforms and the tools to applications and agents. And the fact that we own frontier models, own the silicon, you know, really helps us stay ahead of the curve.”
Meta CEO Mark Zuckerberg:
“We are very focused on increasing the efficiency of our investments. And as part of that, we are rolling out more than 1 gigawatt of our own custom silicon that we’re developing with Broadcom, as well as [a] significant amount of AMD chips to complement the new NVIDIA systems that we’re rolling out as well. One of the primary goals of our Meta compute initiative is to lead the industry in efficiency of building compute. And we expect that will be a strategic advantage over time.”
...and nodding to the idea that escalating capex numbers are indeed a function of higher memory chip prices rather than a more aggressive accumulation of GPUs.
Zuckerberg:
“On that note, we are increasing our infrastructure CapEx forecast for this year. Most of that is due to higher component costs, particularly memory pricing.”
Jassy:
“So, on memory and storage and the supply chain, I think everybody knows that the cost of these components, particularly memory, has skyrocketed. And we’re just in a stage where there’s just not enough capacity for the amount of demand.”
Of course, some of this is after-the-fact rationalization: Nvidia — like every chip company — has been on an absolute heater since the market bottomed in late March. And to be clear, the chip designer’s sharply rising sales estimates strongly imply that the hyperscalers’ in-house hardware is meant to augment, rather than replace, demand for the most valuable company’s products.