- Jericho4 offers 51.2 Tbps of lossless Ethernet connectivity across data centres.
- Designed to connect over one million processors securely and efficiently.
What happened: Jericho4 delivers high‑bandwidth, low‑latency connectivity for distributed AI systems
Broadcom has begun shipping its latest networking chip, the Jericho4 Ethernet fabric router, built to support artificial intelligence workloads across multiple data centres. The chip delivers 51.2 Tbps of deep‑buffered, lossless Ethernet bandwidth, enabling AI tasks to scale across racks and clusters without performance degradation. Jericho4 can interconnect over one million processing units, including GPUs, CPUs, and accelerators, across sites.
A single system can scale to 36,000 HyperPorts operating at 3.2 Tbps each and can carry RDMA over Converged Ethernet (RoCE) traffic across distances exceeding 100 km using its congestion‑free HyperPort architecture. The chip integrates high‑bandwidth memory (HBM) and line‑rate MACsec encryption on every port so that data remains secure in transit. Built on a 3 nm process, Jericho4 improves energy efficiency, lowers latency, and enhances system reliability compared with earlier solutions.
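To put those figures in perspective, here is a quick back‑of‑the‑envelope calculation based only on the numbers quoted above. It is an illustration, not Broadcom's own maths, and it assumes port counts and speeds combine linearly, ignoring fabric topology, oversubscription, and protocol overhead.

```python
# Illustrative arithmetic using the figures quoted in this article.
# Assumption (ours, not Broadcom's): ports and speeds combine linearly,
# ignoring fabric topology, oversubscription, and protocol overhead.

HYPERPORTS = 36_000       # maximum HyperPorts in a single system
TBPS_PER_PORT = 3.2       # bandwidth per HyperPort, in Tbps
PROCESSORS = 1_000_000    # processing units the fabric can interconnect

aggregate_tbps = HYPERPORTS * TBPS_PER_PORT
print(f"Aggregate system bandwidth: {aggregate_tbps:,.0f} Tbps "
      f"(~{aggregate_tbps / 1_000:.1f} Pbps)")

# Naive per-processor share if one million units split that capacity evenly.
per_processor_gbps = aggregate_tbps * 1_000 / PROCESSORS
print(f"Naive per-processor share: {per_processor_gbps:.1f} Gbps")
```

On those assumptions, a maxed‑out system would offer roughly 115 Pbps of aggregate bandwidth, or on the order of 100 Gbps per processor if a million units shared it evenly.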
Why it’s important
AI model growth now demands compute across geographically dispersed data centres. Jericho4 enables seamless and secure data flow between these facilities. Its scale and reliability help cloud providers and hyperscalers overcome power and cooling limitations at single sites. This advances distributed AI architecture beyond traditional single‑site deployments. The energy efficiency gains and encryption features align with enterprise needs for secure, sustainable operations.
The chip complements Broadcom’s existing GenAI networking suite, including Tomahawk Ultra and Tomahawk 6. Jericho4 may influence how companies design future AI compute fabrics and plan infrastructure capacity. It strengthens the role of advanced networking in supporting next‑generation AI workloads.