OpenAI announced on Monday that it is partnering with Broadcom to design and develop custom artificial intelligence (AI) chips, expanding its growing network of hardware alliances to meet surging global demand for AI processing power.
The companies did not disclose financial details but said the new AI accelerator racks will begin deployment in late 2026. The chips are expected to complement OpenAI’s existing hardware partnerships while providing greater control over performance, scalability, and supply chain resilience.
“Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity,” said OpenAI CEO Sam Altman in a statement.
Expanding AI Infrastructure Partnerships
The Broadcom collaboration follows a series of major OpenAI deals with Nvidia, AMD, Oracle, and CoreWeave, all aimed at securing the computing capacity and data center infrastructure needed to power large-scale AI models such as ChatGPT.
Many of these partnerships involve circular financing arrangements, in which hardware and cloud providers both invest in OpenAI and supply the technology it depends on — a dynamic that some analysts say could contribute to an AI investment bubble.
Despite not yet being profitable, OpenAI reported that its flagship chatbot now has more than 800 million weekly users, underscoring the company’s rapid growth and escalating compute needs.
Broadcom’s Role and Market Response
Broadcom CEO Hock Tan described the project as a milestone for AI infrastructure. “We are thrilled to co-develop and deploy 10 gigawatts of next generation accelerators and network systems to pave the way for the future of AI,” Tan said.
Following the announcement, Broadcom shares surged more than 9%, reflecting investor optimism about the company’s deepening role in the AI supply chain.
Building an AI Hardware Ecosystem
OpenAI’s move to design its own chips mirrors similar strategies by major tech firms such as Google (TPU) and Amazon (Trainium), which have developed proprietary AI accelerators to reduce dependence on third-party suppliers and optimize performance for specific workloads.
By collaborating with Broadcom — one of the world’s leading semiconductor and networking firms — OpenAI aims to customize chips optimized for its unique large language model architecture, potentially improving speed and energy efficiency across its global data centers.
The partnership marks another step in OpenAI’s broader effort to build an end-to-end AI technology stack, from algorithms and data to hardware and infrastructure, as competition in the generative AI industry continues to intensify.