
OpenAI Teams with Broadcom to Build Custom AI Chips

Pini Shvartsman

OpenAI just announced it’s partnering with Broadcom to design and manufacture custom AI chips. The deal targets 10 gigawatts of custom AI accelerators, with deployment starting in the second half of 2026 and wrapping up by the end of 2029.

This isn’t OpenAI’s first chip rodeo. They’ve already locked in 6 gigawatts with AMD and 10 gigawatts with Nvidia. But the Broadcom partnership is different. OpenAI isn’t just buying off-the-shelf hardware. They’re designing custom silicon, which means they can embed what they’ve learned from building GPT, ChatGPT, and Sora directly into the chip architecture.

The pitch is compelling: purpose-built hardware optimized for OpenAI's specific workloads. In theory, that means better performance, lower costs, and more control over their infrastructure stack. Broadcom handles the manufacturing while OpenAI handles the design, with the first racks expected to deploy in the second half of 2026.

Chipping Away at Dependency

This partnership is part of a bigger trend. Meta, Google, and Microsoft are all working on custom chips to reduce reliance on Nvidia. So far, none of these projects have seriously threatened Nvidia's dominance in AI hardware. But they have created new opportunities for companies like Broadcom, which benefit from the surge in custom chip demand.

For OpenAI, it’s about diversification and leverage. Relying on a single vendor for something as critical as compute is risky, especially when GPU supply has been tight and unpredictable. Multiple partnerships give OpenAI negotiating power and operational flexibility. If one vendor hits a snag, they have alternatives.

But custom chips take time. If the 2026 timeline holds, OpenAI will be betting on what its infrastructure needs will look like years from now. That's a gamble, especially in an industry moving as fast as AI. Still, if you're Sam Altman and you believe you're building toward superintelligence, locking in compute capacity now probably feels like smart planning.
