The collaboration aims to bring UALink’s open interconnect standard, which enables up to 1024 AI accelerators to communicate at speeds of 200 Gbps per lane, into the OCP community’s efforts to build scalable, sustainable data centre infrastructure optimised for AI.
“By collaborating, the UALink Consortium and the OCP Community can shape system specifications to address critical challenges in interconnect bandwidth and scalability posed by advanced AI models,” said George Tchaparian, CEO of the OCP Foundation.
The UALink Consortium, formed in mid-2024, includes major players such as AMD, Intel, and Meta. It aims to provide an open alternative to Nvidia’s proprietary NVLink technology and ratified its first specification, UALink 1.0, earlier this month.
The partnership will align UALink’s interconnect technology with OCP’s Open Systems for AI initiative and its Future Technologies Initiative’s short-reach optical interconnect workstream.
The groups hope the partnership will accelerate the adoption of open interconnects in hyperscale data centres, particularly in systems built to train and deploy cutting-edge AI models.
Peter Onufryk, an Intel Fellow and president of the UALink Consortium, said: “AI and HPC workloads require ultra-low latency and massive bandwidth to handle the scale and complexity of accelerated compute data processing to meet LLM requirements.
“Partnering with the OCP Community will accelerate the adoption of UALink’s innovations into complete systems, delivering transformative performance for AI markets.”