
Nvidia Just Posted $68 Billion in One Quarter, Announced the Rubin Era, and the Stock Barely Moved


The Numbers Are Absurd

Nvidia reported fiscal 2026 fourth-quarter revenue of $68.1 billion, up 73% from a year ago and 20% from the previous quarter. Net income nearly doubled to $43 billion. Earnings per share came in at $1.62, beating the consensus estimate of $1.53 by a comfortable margin. Full-year revenue for fiscal 2026 hit $215.9 billion, up 65% from the prior year, with operating income of $130.4 billion.

To put that in perspective: Nvidia generated more profit in one quarter than most companies in the S&P 500 generate in revenue in an entire year. The data center business, which houses Nvidia's AI chips, delivered $62.3 billion in quarterly revenue, up 75% year over year. Data center now accounts for over 91% of Nvidia's total sales. This isn't a chip company that also does AI. This is an AI company that also happens to sell gaming GPUs.

And then there's the guidance: Nvidia expects $78 billion in revenue for the first quarter of fiscal 2027, plus or minus 2%. Wall Street was expecting $72.6 billion. That's $5.4 billion above consensus on the outlook alone, a signal that demand isn't just holding up; it's accelerating.

The Rubin Platform: Six Chips, One Vision

Alongside the earnings report, Nvidia formally introduced the Vera Rubin platform, the successor to the Blackwell architecture. Rubin isn't a single chip; it's an entire compute ecosystem comprising six new chips: the Rubin GPU, the Vera CPU, plus four networking and security components (NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet Switch).

The headline number: Rubin delivers up to a 10x reduction in inference token cost and a 4x reduction in the number of GPUs needed to train mixture-of-experts models, compared to Blackwell. The Vera Rubin NVL72 server combines 72 GPUs into a single system, and a full DGX SuperPOD with DGX Vera Rubin NVL72 features 1,008 Rubin GPUs delivering 50.4 exaflops of FP4 performance.
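The headline specs hang together arithmetically. A quick sanity check, using only the figures quoted above (1,008 GPUs per SuperPOD, 72 GPUs per NVL72 rack, 50.4 exaflops of FP4):

```python
# Back-of-envelope check on the DGX SuperPOD figures quoted above.
superpod_fp4_flops = 50.4e18   # 50.4 exaflops of FP4 across the full SuperPOD
gpus_per_superpod = 1008       # Rubin GPUs in a DGX SuperPOD
nvl72_gpus = 72                # GPUs in one Vera Rubin NVL72 system

# Implied per-GPU and per-rack throughput, derived from the quoted totals
per_gpu_pflops = superpod_fp4_flops / gpus_per_superpod / 1e15
per_rack_eflops = per_gpu_pflops * 1e15 * nvl72_gpus / 1e18
racks_per_superpod = gpus_per_superpod // nvl72_gpus

print(f"{per_gpu_pflops:.0f} PFLOPS FP4 per GPU")       # 50 PFLOPS
print(f"{per_rack_eflops:.1f} EFLOPS per NVL72 system") # 3.6 EFLOPS
print(f"{racks_per_superpod} NVL72 systems per SuperPOD") # 14
```

In other words, the quoted totals imply roughly 50 petaflops of FP4 per Rubin GPU and fourteen NVL72 systems per SuperPOD.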

Nvidia's design philosophy with Rubin is what they call "extreme co-design": GPUs, CPUs, networking, security, software, power delivery, and cooling are all architected together as a single system. The data center itself becomes the unit of compute, not the individual chip. AWS, Google Cloud, Microsoft, and Oracle will be among the first to deploy Vera Rubin-based instances later in 2026, along with cloud partners like CoreWeave, Lambda, Nebius, and Nscale.

Jensen Huang's Declaration

CEO Jensen Huang used the earnings call to make a sweeping claim: "The agentic AI inflection point has arrived." He argued that AI is no longer a research curiosity or a chatbot novelty; it's becoming infrastructure that companies deploy to generate revenue directly.

His specific framing was notable: "Compute equals revenues now." In other words, every dollar a company spends on Nvidia's AI chips translates into revenue-generating capabilities, whether that's running inference for AI products, training new models, or deploying autonomous agents that perform tasks traditionally done by humans.

Huang also took a shot at the "AI bubble" narrative that's been circulating on Wall Street. When asked whether AI spending is sustainable, he pointed to the installed base of accelerated computing growing faster than at any point in the company's history. He described demand for Blackwell chips as "off the charts" and cloud providers as "sold out," then pivoted to Rubin as the next step in keeping that momentum going.

The characterization of Grace Blackwell as "the king of inference today" with Vera Rubin positioned to "extend that leadership even further" is Nvidia signaling that it sees itself multiple generations ahead of any competition. AMD's MI400 and custom chips from Google and Amazon are still trying to catch up to Blackwell. Nvidia is already selling the chip after the chip after that.

The Stock Barely Flinched

Here's the most telling detail of the entire report: Nvidia stock rose about 3.5% in initial after-hours trading, then faded to a 1.57% gain by the end of Jensen Huang's conference call. Futures pointed to a flat-to-slightly-down open for the broader market on Thursday.

This is a company that beat revenue estimates by $2 billion, guided $5.4 billion above consensus for the next quarter, posted 75% year-over-year data center growth, and unveiled an entirely new chip platform. A year ago, numbers like these would have sent the stock soaring 10% overnight. The muted reaction tells you something important about where markets are psychologically.

Investors aren't questioning whether Nvidia is performing. They're questioning whether the performance is already priced in. At roughly $190 per share heading into earnings, Nvidia was already valued at over $4 trillion. The market needs increasingly extraordinary results just to justify the current price, let alone push it higher. Nvidia delivered extraordinary results, and the market shrugged.

Margins and the Competition Question

Nvidia reported GAAP gross margins of 75% for the quarter (non-GAAP 75.2%), in line with management's prior guidance. Margin stability matters because it suggests Nvidia isn't cutting prices to maintain volume. Customers are paying full price for Blackwell chips because there is no viable alternative at scale.

The competitive landscape remains tilted overwhelmingly in Nvidia's favor. AMD's MI400 is expected later this year but faces questions about software ecosystem maturity. Google's TPU v6 is deployed internally but isn't available as a commercial product. Amazon's Trainium2 is ramping but serves AWS customers only. None of these alternatives has loosened Nvidia's grip on the broader merchant AI chip market.

The risk to margins comes not from direct competition but from the transition to Rubin. New chip architectures typically come with higher initial production costs and lower yields. Nvidia managed the Blackwell transition without margin erosion, but the Rubin transition will be another test. If margins hold through the Rubin ramp, the bull case for Nvidia becomes difficult to argue against.

What China Means (and Doesn't)

Nvidia explicitly stated it is "not assuming any Data Center compute revenue from China" in its fiscal Q1 2027 outlook. This has been the company's stance for multiple quarters as U.S. export restrictions prevent Nvidia from selling its most advanced chips to Chinese customers.

The China exclusion means Nvidia's $78 billion guidance is based entirely on demand from the U.S., Europe, Asia-Pacific (ex-China), and the Middle East. If export restrictions ease or Nvidia develops compliant chips for the Chinese market, that's pure upside to the forecast. If restrictions tighten further, the impact is already baked in.

The geopolitical calculation here is straightforward: China represents what could be a $10 billion to $20 billion annual revenue opportunity for Nvidia. Every quarter that opportunity remains blocked, Chinese companies develop more of their own AI chip capabilities, making the eventual market re-entry less valuable. Time is not on Nvidia's side with China, even if everything else is going its way.

What This Means for the AI Industry

Nvidia's results are the clearest signal yet that corporate AI spending is not slowing down. The hyperscale cloud providers (Microsoft, Google, Amazon, Meta, Oracle) are collectively spending hundreds of billions of dollars building AI infrastructure, and Nvidia is capturing the vast majority of that spend.

For the broader AI ecosystem, the Rubin announcement is significant because a 10x reduction in inference costs would fundamentally change the economics of deploying AI products. Applications that are marginally profitable at current inference costs become highly profitable at one-tenth the cost. Use cases that are currently too expensive to deploy at scale become viable. The ceiling on what AI products can do is often set by inference economics, and Nvidia is proposing to raise that ceiling dramatically.
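To see why a 10x cost reduction flips the economics, consider a toy model. Every number here (price per thousand requests, tokens per request, serving cost per million tokens) is an illustrative assumption, not a figure from Nvidia or any vendor:

```python
# Hypothetical unit economics for an AI product, to illustrate how a 10x
# drop in inference token cost changes margins. All inputs are made-up
# assumptions for the sake of the example.
revenue_per_1k_requests = 10.00   # assumed price charged per 1,000 requests
tokens_per_request = 2_000        # assumed tokens generated per request
cost_per_million_tokens = 4.00    # assumed serving cost today

def gross_margin(cost_per_m_tokens: float) -> float:
    """Gross margin on 1,000 requests at a given per-token serving cost."""
    serving_cost = 1_000 * tokens_per_request * cost_per_m_tokens / 1_000_000
    return (revenue_per_1k_requests - serving_cost) / revenue_per_1k_requests

print(f"at today's assumed cost: {gross_margin(cost_per_million_tokens):.0%}")      # 20%
print(f"at one-tenth the cost:   {gross_margin(cost_per_million_tokens / 10):.0%}") # 92%
```

Under these assumptions, a product scraping by at a 20% gross margin jumps to a software-like 92% margin at one-tenth the serving cost, and products that were underwater become viable.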

Jensen Huang's "agentic AI inflection" framing is also worth taking seriously. If the next wave of AI value comes from autonomous agents that can take actions, not just answer questions, the compute requirements multiply. Every agent running continuously requires sustained inference capacity. A world of billions of AI agents running on Nvidia hardware is the growth story that justifies a $4 trillion valuation. Whether that world materializes fast enough to satisfy Wall Street is the $78 billion question.
