- The UALink Consortium unveils the 200G 1.0 Specification, enabling up to 1,024 AI accelerators to connect efficiently.
- The new standard reduces latency and improves communication between GPUs for AI workloads.
What happened: UALink unveils 200G 1.0 Specification
The Ultra Accelerator Link (UALink) Consortium has released the UALink 200G 1.0 Specification, an open standard aimed at improving GPU interconnectivity and AI computing performance. The specification enables up to 1,024 AI accelerators to communicate at high speed within a computing cluster, providing a competitive alternative to Nvidia’s proprietary NVLink technology. The Consortium, established last summer, is made up of industry leaders including AMD, Intel, and Microsoft, working to create standards that enhance GPU connectivity.
The UALink 200G 1.0 Specification aims to make connected accelerators work together more efficiently on AI tasks while reducing reliance on NVLink. It defines a switch ecosystem for accelerators, allowing direct communication between accelerators across system nodes to support multi-node AI applications. The specification also targets lower latency within computing clusters and uses a simple load/store protocol that pairs Ethernet-class speed with latency comparable to that of PCIe switches.
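To give a sense of what "load/store semantics" means compared with a copy-based transfer, here is a minimal conceptual sketch in C. UALink defines this behaviour at the protocol and hardware level, not as a software API, so everything below is a hypothetical stand-in: the arrays represent local and "remote" accelerator memory, and the function `message_style_transfer` exists only to contrast the two models.

```c
/*
 * Conceptual sketch only: UALink 1.0 does not define this C API.
 * The arrays below stand in for memory regions on two accelerators
 * in the same pod, to contrast direct load/store access with an
 * explicit copy/message-based transfer.
 */
#include <stdio.h>
#include <string.h>

#define REGION_WORDS 8

static float local_hbm[REGION_WORDS];   /* pretend: this accelerator's memory */
static float remote_hbm[REGION_WORDS];  /* pretend: a peer accelerator's memory */

/* Copy-based model: data is staged and moved as an explicit message. */
static void message_style_transfer(float *dst, const float *src, size_t n)
{
    float staging[REGION_WORDS];              /* explicit staging buffer */
    memcpy(staging, src, n * sizeof(float));  /* pack                    */
    memcpy(dst, staging, n * sizeof(float));  /* send/receive + unpack   */
}

int main(void)
{
    for (int i = 0; i < REGION_WORDS; i++)
        local_hbm[i] = (float)i;

    /* Load/store model: the remote region is directly addressable, so a
     * write is simply a store and a read is simply a load, with no
     * packing, staging, or software copy loop in between. */
    remote_hbm[3] = local_hbm[3] * 2.0f;   /* store to "remote" memory */
    float readback = remote_hbm[3];        /* load from "remote" memory */

    /* The same data movement expressed as an explicit copy. */
    message_style_transfer(remote_hbm, local_hbm, REGION_WORDS);

    printf("load/store readback: %.1f, copied word 3: %.1f\n",
           readback, remote_hbm[3]);
    return 0;
}
```

The point of the contrast is that memory-semantic interconnects let software (or hardware engines) treat a peer accelerator's memory as directly addressable, which is part of how the specification keeps per-transfer latency low.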
Why it’s important
The UALink 200G 1.0 Specification offers a more efficient, vendor-neutral way to connect accelerators for AI workloads. By lowering latency and streamlining communication within AI computing clusters, it eases the development of multi-node systems for AI applications. With demand for AI compute power growing, the new open standard promises to support the next generation of AI/ML applications while reducing dependence on Nvidia’s proprietary interconnect technology.