Nvidia is preparing to introduce a new artificial intelligence chip for the Chinese market with a price tag that significantly undercuts its recently restricted H20 model, aiming to navigate strict U.S. export controls. Mass production of the new chip could commence as soon as June, according to sources familiar with the matter.
The upcoming GPU, built on Nvidia’s latest Blackwell architecture, is projected to cost between $6,500 and $8,000—substantially lower than the $10,000 to $12,000 that the now-banned H20 commanded. The reduced price is largely a reflection of its trimmed-down technical capabilities and less advanced manufacturing needs.
Designed around the RTX Pro 6000D—a server-class GPU—the new model will incorporate standard GDDR7 memory rather than the high-end high bandwidth memory (HBM) found in more powerful products.
The device also sidesteps Taiwan Semiconductor Manufacturing Co.'s (TSMC) advanced CoWoS packaging technology, another cost-saving move aimed at complying with U.S. restrictions on advanced chip components.
Until the company finalizes a new product design and receives U.S. government clearance, its access to China's $50 billion data center market remains effectively blocked, an Nvidia representative told Reuters.
China remains a crucial revenue stream for Nvidia, accounting for 13% of its global sales last year. The company has now had to redesign GPUs for the Chinese market three times in response to tightening U.S. export rules intended to stifle technological advances in China.
After April’s ban on the H20, Nvidia reportedly explored a downgraded H20 for continued Chinese sales, but the plan was scrapped as further modifications to its aging Hopper platform proved unworkable under current regulations, according to CEO Jensen Huang.
Nvidia’s Chinese market share has tumbled from 95% before the initial 2022 wave of U.S. restrictions to just 50% today, while domestic rival Huawei has ramped up production of its Ascend 910B AI chip.
The latest round of U.S. export curbs introduced strict limits on memory bandwidth—a critical parameter dictating how fast data moves between a chip's core processor and its memory, and one that is especially important for memory-intensive AI workloads.
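To make the bandwidth gap concrete, peak memory bandwidth is roughly the per-pin data rate times the bus width. The sketch below uses assumed, ballpark figures for a GDDR7 configuration and an HBM3e configuration—these are illustrative numbers, not confirmed specifications for the new chip or the H20:

```python
def peak_bandwidth_gbps(data_rate_gbit_per_pin: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s = per-pin rate (Gbit/s) x bus width (bits) / 8."""
    return data_rate_gbit_per_pin * bus_width_bits / 8

# Assumed GDDR7 setup: 32 Gbit/s per pin on a 384-bit bus
gddr7 = peak_bandwidth_gbps(32, 384)        # 1536 GB/s (~1.5 TB/s)

# Assumed HBM3e setup: 8 Gbit/s per pin, 1024-bit bus per stack, 6 stacks
hbm3e = peak_bandwidth_gbps(8, 1024) * 6    # 6144 GB/s (~6 TB/s)

print(f"GDDR7 (assumed): {gddr7:.0f} GB/s")
print(f"HBM3e (assumed): {hbm3e:.0f} GB/s")
```

Under these assumptions, the HBM-based part has roughly four times the memory bandwidth of the GDDR7 part—which is why a GDDR7 design sits more comfortably under bandwidth-based export limits.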