OpenAI Partners with Broadcom to Build Custom AI Chips, Redefining Global Compute Power

Update: 2025-10-14 11:51 IST

The global race to secure computing power for artificial intelligence has entered a new phase, with OpenAI announcing a major leap into chip design. Teaming up with Broadcom, the ChatGPT maker aims to reduce its reliance on Nvidia and AMD by developing custom-built accelerators that promise to reshape the AI hardware landscape.

On Monday, OpenAI and Broadcom unveiled plans to co-develop and deploy racks of custom AI accelerators designed in-house by OpenAI, with deployment expected to begin late next year. The companies said the systems will collectively deliver around 10 gigawatts of compute capacity, underscoring how sharply AI demand has grown over the past two years. Following the announcement, Broadcom’s stock surged nearly 10 percent, reflecting investor optimism about the partnership’s potential.

OpenAI CEO Sam Altman emphasized that the collaboration’s primary goal is to make computing more efficient and affordable. “We can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models, all of that,” Altman said in a joint podcast with Broadcom executives. He added that the deal would create “a gigantic amount of computing infrastructure” to power the next generation of advanced intelligence.

Beyond chip design, the partnership will also encompass networking and memory solutions built on Broadcom’s Ethernet stack, customized for OpenAI’s demanding workloads. With this move, OpenAI aims to stretch the value of its infrastructure spending at a time when the cost of AI data centers is soaring. Industry analysts estimate that a single 1-gigawatt data center could cost up to $50 billion, with chips accounting for nearly $35 billion — most of which currently flows to Nvidia.

The collaboration builds upon an 18-month-long relationship between OpenAI and Broadcom, during which both companies have been quietly working on early chip development. This public announcement marks the first time the scale of their partnership has been revealed. It also aligns with OpenAI’s recent flurry of infrastructure commitments — totaling around 33 gigawatts of compute capacity — through deals with Nvidia, Oracle, AMD, and now Broadcom. Currently, OpenAI operates on just over 2 gigawatts of capacity, which powers services like ChatGPT and its AI video platform, Sora.

OpenAI’s president, Greg Brockman, disclosed that the company even leveraged its own AI models to optimize the chip design. “We’ve been able to get massive area reductions,” Brockman said, noting that AI-assisted engineering outperformed traditional human design methods.

For Broadcom, the partnership represents another milestone in its growing dominance within the AI ecosystem. Its custom accelerators, known as XPUs, are already being deployed by global tech giants such as Google, Meta, and ByteDance. The company’s valuation has soared past $1.5 trillion, following a 50 percent stock rise in 2025 after doubling in 2024.

Broadcom CEO Hock Tan praised OpenAI’s ambition, saying, “You continue to need compute capacity — the best, latest compute capacity — as you progress in a road map towards a better and better frontier model and towards superintelligence. If you do your own chips, you control your destiny.”

Echoing that sentiment, Altman hinted that this is just the beginning. “Even though it’s vastly more than the world has today, we expect that very high-quality intelligence delivered very fast and at a very low price — the world will absorb it super-fast and just find incredible new things to use it for,” he said.
