OpenAI Taps Broadcom for In-House AI Chip to Secure Supply Chain and Control Costs

OpenAI, the artificial intelligence start-up behind ChatGPT, is collaborating with U.S. semiconductor heavyweight Broadcom to launch its first proprietary AI accelerator next year, the Financial Times reported, citing sources familiar with the project.

The new chip, reportedly designated for internal use only rather than broad commercial rollout, marks a significant step for OpenAI as it seeks to lessen its heavy dependence on external suppliers like Nvidia for high-performance silicon.

OpenAI has long faced challenges in sourcing the vast compute resources needed to train and operate large language models. Last year, Reuters reported that OpenAI, supported by Broadcom and Taiwan Semiconductor Manufacturing Co. (TSMC), was finalizing the design of its first in-house chip, while continuing to supplement its infrastructure with AMD and Nvidia hardware to keep pace with surging workloads.

By February, sources said OpenAI was advancing plans to reduce its reliance on Nvidia, finalizing chip designs for fabrication at TSMC facilities. Internally, the move is seen as a way to control costs, secure supply, and optimize performance for its next wave of generative AI services.

Broadcom CEO Hock Tan added market context on Thursday, revealing that the company had secured more than $10 billion in AI infrastructure orders from a new, undisclosed customer, sparking speculation that the client may be OpenAI. Tan noted during the firm's earnings call that several technology leaders are actively developing custom silicon with Broadcom, in addition to three of the company's established hyperscale clients.

OpenAI's foray into bespoke silicon mirrors moves by Alphabet's Google, Amazon, and Meta, all of which have ramped up internal chip development to overcome bottlenecks and rising costs in the global semiconductor market driven by escalating demand for AI compute.