DeepSeek V4 Released As Chinese Open-Weight Models Continue To Close Gap With Western Frontier Labs


By Tom Whitmore · Published May 9, 2026 · 2 min read


DeepSeek released its V4 family of open-weight foundation models over the weekend. The announcement confirms what benchmark-leaderboard data has shown for two months: Chinese open-weight models have substantially closed the capability gap with Western frontier labs' proprietary releases, and the strategic implications for the global AI-infrastructure investment cycle are now meaningfully different from where they stood at the start of the year.

The V4 release comprises four variants: a 70-billion-parameter dense model, a 240-billion-parameter dense model, a 670-billion-parameter mixture-of-experts (MoE) model, and a multimodal version of the 240B model, all under the same permissive open-weight licence DeepSeek has used for previous releases. On the principal industry-standard evaluation suites, the 670B MoE variant sits essentially at parity with the proprietary GPT-5 base model on reasoning, mathematics, and coding, and meaningfully ahead on the long-context-retrieval set that has dominated the past quarter's research-community attention.

The training-cost disclosure is the more strategically interesting half of the announcement. DeepSeek put total training compute for the 670B model at approximately 6.2 million H800-equivalent GPU-hours, well below the published figures for comparable Western proprietary models. Given how fragmented compute-pricing assumptions remain across the AI infrastructure-investor base, that figure has direct read-through for the unit economics underpinning frontier-lab capex commitments globally.
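To make the unit-economics point concrete, here is a back-of-envelope sketch. The 6.2 million H800-equivalent GPU-hour figure is from DeepSeek's disclosure; the per-hour rental rates are illustrative assumptions of this sketch, not numbers reported by DeepSeek or any cloud provider.

```python
# Back-of-envelope training-cost estimate for the 670B model.
GPU_HOURS = 6.2e6  # H800-equivalent GPU-hours (disclosed by DeepSeek)

# Assumed cloud-rental price band per H800 GPU-hour, in USD.
# These rates are hypothetical placeholders for the calculation.
rate_low, rate_high = 1.50, 2.50

cost_low = GPU_HOURS * rate_low    # lower-bound estimate
cost_high = GPU_HOURS * rate_high  # upper-bound estimate

print(f"Estimated training compute cost: "
      f"${cost_low / 1e6:.1f}M - ${cost_high / 1e6:.1f}M")
```

Under those assumed rates the headline compute bill lands in the low tens of millions of dollars, which is the contrast with Western frontier-model budgets that makes the disclosure strategically interesting.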

The export-control framework is the less visible but important piece of the strategic puzzle. US-led restrictions on shipments of the most advanced GPUs to Chinese customers have been in force for several quarters, and the V4 release largely confirms what several of the more thoughtful technical observers have argued: the supply-restriction approach has not visibly slowed Chinese frontier-model development at the rate the policy was designed to achieve. The engineering response from Chinese AI labs has been more sophisticated than the simpler version of the policy thesis allowed for.

For the wider AI-infrastructure landscape, the implication is that the open-weight tier is now a genuine competitive force in the foundation-model market in a way it was not eighteen months ago. The Western frontier labs (OpenAI, Anthropic, Google DeepMind) face an increasingly capable open alternative that is freely available, well resourced through Chinese state and private capital, and operationally indistinguishable from their offerings on a meaningful share of practical workloads. The implications for proprietary-model pricing, for global AI-talent flows, and for the broader competitive equilibrium are now structurally different from any prior period in the cycle.

Written by Tom Whitmore

Senior correspondent · Technology & Energy

Tom trained as an electrical engineer, which makes him unusually patient with infrastructure stories. He reports on AI, cloud, the energy transition, and the businesses turning frontier engineering into real cash flow. Previously he covered the chip supply chain from Taipei. Skeptical of slide decks; comfortable in a substation. Based in Singapore. Reach out at tom.whitmore@theplatinumcapital.com.