NVIDIA Confirms H300 Pilot Production As Data-Centre Backlog Extends Through 2027

By Tom Whitmore · Published May 9, 2026 · 2 min read

NVIDIA confirmed during its quarterly investor briefing that its next-generation H300 GPU has now entered pilot production at TSMC's 2-nanometre node, with the first commercial volume shipments scheduled for late Q3 of fiscal 2027 and the disclosed customer-allocation programme for the first eighteen months of volume now sold out across the company's entire planned production envelope.
The H300 platform represents the most significant generational uplift NVIDIA has delivered since the original H100 architecture launched in 2022. The combination of the 2-nanometre process advance, the architectural changes in the new Rubin core, and substantial improvements in on-chip memory and interconnect bandwidth delivers an aggregate performance lift that NVIDIA chief executive Jensen Huang described on the call as 'the largest generational improvement we've ever shipped.' The accompanying GB300 system-level offering, which bundles the H300 with the new high-bandwidth Grace-CPU successor and an updated NVLink fabric, substantially improves system-level economics for the largest hyperscaler customers.
The customer-allocation profile across the H300 programme tells the more strategically interesting half of the supply-and-demand story. The four largest hyperscaler customers (AWS, Microsoft Azure, Google Cloud, and Meta's internal capacity build) collectively account for roughly 60% of the planned eighteen-month allocation. The HUMAIN programme in Saudi Arabia, the G42 programme in the UAE, and a small number of strategic Chinese customers buying export-compliant variants take a further 20%. The remaining 20% is split across the wider enterprise customer base, with the company's established practice of preferential allocation to its most strategic AI-lab customers continuing through the H300 cycle.
The competitive dynamic with AMD's MI400 platform, and to a lesser extent with the in-development custom-silicon programmes at AWS, Microsoft, and Google, has clearly shifted the strategic calculus behind NVIDIA's customer-allocation framework. The company has been more publicly receptive to customers running alternative silicon alongside its own than in any prior cycle, with Huang explicitly framing that diversification as a structural feature of a maturing AI-infrastructure landscape rather than a competitive threat to NVIDIA's position.
For investors, the more important question concerns the durability of the cycle through 2027 and beyond. The sold-out H300 backlog substantially de-risks the visible revenue trajectory across the next eighteen months. The harder forward question, whether the underlying AI-infrastructure capex cycle has further to run beyond 2027 as the existing capacity build-out matures and as model-efficiency improvements potentially reduce per-workload compute requirements, remains the principal open variable for investors across the wider AI-infrastructure landscape.

Written by
Tom Whitmore
Senior correspondent · Technology & Energy
Tom trained as an electrical engineer, which makes him unusually patient with infrastructure stories. He reports on AI, cloud, the energy transition, and the businesses turning frontier engineering into real cash flow. Previously he covered the chip supply chain from Taipei. Skeptical of slide decks; comfortable in a substation. Based in Singapore. Reach out at tom.whitmore@theplatinumcapital.com.
