Broadcom has unveiled a new networking chip, the Thor Ultra, that enables companies to build artificial intelligence computing systems by connecting hundreds of thousands of data-processing chips, intensifying competition with industry leader Nvidia. The move comes as global demand for AI infrastructure continues to surge.
The Thor Ultra is the industry’s first 800G AI Ethernet network interface card, designed to interconnect massive numbers of processing units for trillion-parameter AI workloads. By enabling computing infrastructure operators to deploy far more chips than previously possible, it allows organizations to build and run the large-scale models that power applications like ChatGPT and other advanced AI systems.
This announcement follows Broadcom’s recent blockbuster partnership with OpenAI, where the company committed to delivering 10 gigawatts of custom AI accelerators starting in the second half of 2026. The deal positions Broadcom as a serious challenger to Nvidia’s stronghold in the AI accelerator market. CEO Hock Tan has projected that the market opportunity for Broadcom’s various AI chips could reach $60 billion to $90 billion by 2027.
The Thor Ultra is fully compliant with Ultra Ethernet Consortium specifications and introduces remote direct memory access (RDMA) innovations: packet-level multipathing for efficient load balancing, selective retransmission so that only lost packets are resent, and out-of-order packet delivery directly into accelerator memory. Together, these features maximize fabric utilization and allow AI clusters to scale without vendor lock-in.
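To make two of these ideas concrete, the sketch below models a receiver that accepts packets in any arrival order, placing each directly at its final buffer offset (out-of-order delivery), and tracks gaps so that only missing sequence numbers are re-requested (selective retransmission). This is a simplified illustration of the general technique, not Broadcom’s actual design; the class and its names are hypothetical.

```python
# Illustrative model of out-of-order delivery and selective retransmission.
# Hypothetical sketch -- not Broadcom's implementation.

class OutOfOrderReceiver:
    def __init__(self, total_packets, packet_size):
        self.packet_size = packet_size
        # Stand-in for accelerator memory: packets land at their final offset.
        self.buffer = bytearray(total_packets * packet_size)
        self.expected = set(range(total_packets))  # sequence numbers not yet seen

    def deliver(self, seq, payload):
        """Write a packet to its final offset regardless of arrival order."""
        if seq in self.expected:
            start = seq * self.packet_size
            self.buffer[start:start + len(payload)] = payload
            self.expected.discard(seq)

    def missing(self):
        """Gap list: only these sequence numbers need retransmission."""
        return sorted(self.expected)

# Four packets; packet 2 is lost on the first pass, others arrive out of order.
rx = OutOfOrderReceiver(total_packets=4, packet_size=4)
for seq, data in [(3, b"dddd"), (0, b"aaaa"), (1, b"bbbb")]:
    rx.deliver(seq, data)
print(rx.missing())      # -> [2]   only the gap is re-requested
rx.deliver(2, b"cccc")
print(bytes(rx.buffer))  # -> b'aaaabbbbccccdddd'
```

The key contrast with strictly in-order transport is that late or reordered packets never stall delivery of the packets behind them, and a single lost packet triggers one targeted retransmission rather than a resend of the whole window.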
Ram Velaga, senior vice president at Broadcom’s Core Switching Group, emphasized the critical role of networking in distributed computing systems. He explained that assembling large AI clusters requires robust networking infrastructure, making it essential for any player in the graphics processing unit sector to participate meaningfully in networking solutions. The Thor Ultra addresses this need comprehensively.
Broadcom’s strategy extends beyond networking chips to lucrative custom AI processors designed for major cloud computing companies such as Google. The company has collaborated on multiple generations of Google’s Tensor Processing Unit (TPU) over the past decade, generating billions in revenue. In fiscal 2024, Broadcom reported AI revenue of $12.2 billion and announced a new $10 billion customer for its specialized data center AI chips.
The competitive landscape in AI infrastructure is heating up rapidly. While Nvidia remains dominant with its integrated GPU and networking solutions, Broadcom is leveraging its deep expertise in networking and custom chip design to carve out significant market share. The Thor Ultra works seamlessly with Broadcom’s existing Tomahawk and Jericho switch series, providing a comprehensive Ethernet AI networking framework.
Industry analysts note that building a 1-gigawatt data center costs approximately $50 billion, with about $35 billion typically allocated for chips at current Nvidia pricing. By offering competitive alternatives through custom designs and advanced networking solutions, Broadcom is helping hyperscalers and AI companies control costs while maintaining cutting-edge performance. This value proposition is driving strong demand across Broadcom’s AI portfolio.
The deployment timeline for OpenAI’s custom accelerators, scheduled to start in late 2026 and complete by 2029, demonstrates the long-term nature of these strategic partnerships. OpenAI benefits from designing chips optimized for its specific workloads, while Broadcom provides manufacturing expertise and a broad enterprise customer base. This collaboration model could become a blueprint for future AI infrastructure partnerships.
Looking ahead, Broadcom’s engineers continue to rigorously test and evaluate networking chips from the earliest production stages. The company invests heavily in silicon design, and ecosystem partners invest $6 to $10 for every dollar Broadcom spends, creating a powerful multiplier effect. This collaborative approach ensures that reference designs and system architectures are production-ready for customers building next-generation AI infrastructure.