Nvidia’s next-generation data centres to work with cloud providers

  • Nvidia’s GTC 2024 event unveiled a blueprint for next-generation data centres, built as ‘efficient AI infrastructure’ with partner support.
  • Nvidia said the next-generation Blackwell GPU architecture will enable organisations to build and run real-time generative AI on trillion-parameter large language models.
  • A number of companies, including AWS, Microsoft, and Google, have expanded their cooperation with Nvidia on AI projects.

Next-generation data centre

NVIDIA’s 2024 GTC event, which runs through March 21, has brought the usual flurry of big tech conference announcements. One of the highlights was founder and CEO Jen-Hsun Huang’s keynote, which introduced the next-generation Blackwell GPU architecture, enabling organisations to build and run real-time generative AI on trillion-parameter large language models.

NVIDIA unveiled its blueprint for building the next generation of data centres, promising “efficient AI infrastructure” with support from partners such as Schneider Electric, data centre infrastructure company Vertiv and simulation software provider Ansys. The data centre, described as fully operational, was demonstrated on the GTC show floor as a digital twin built in NVIDIA Omniverse, a platform for developing 3D workflows, from tools to applications and services.

Also read: Nvidia unveils flagship AI chip, the B200, at GTC 2024

Also read: Oracle shares surge over 13%, hint at Nvidia collaboration

The entire industry is gearing up for Blackwell

For all the promise of the Blackwell GPU platform, it needs to run somewhere – and the largest cloud providers are actively involved in delivering NVIDIA Grace Blackwell. As Huang put it, “the whole industry is gearing up for Blackwell.” The latest NVIDIA AI supercomputer is based on the liquid-cooled NVIDIA GB200 NVL72 system. It comprises two racks, each containing 18 NVIDIA Grace CPUs and 36 NVIDIA Blackwell GPUs, connected via fourth-generation NVIDIA NVLink switches. According to AWS, NVIDIA Blackwell on its platform will “help customers across industries unleash new generative AI capabilities at a faster pace.”

AWS will also exclusively host Project Ceiba, a collaborative AI supercomputer built on the Blackwell platform for use by NVIDIA’s internal R&D team. Microsoft and NVIDIA have extended their long-standing collaboration to bring GB200 Grace Blackwell processors to Azure as well. Meanwhile, Google Cloud is integrating NVIDIA NIM microservices into Google Kubernetes Engine (GKE) to help accelerate generative AI deployments in the enterprise.

The next-generation theme is not limited to the very largest enterprises, however. Sustainable infrastructure-as-a-service cloud providers such as NexGen Cloud have also announced that computing services powered by NVIDIA’s Blackwell platform will form part of their AI Supercloud.


Tuna Tu

Tuna Tu is an intern reporter at BTW Media dedicated to covering IT infrastructure and media. She graduated from the Communication University of Zhejiang and now works in Hangzhou. Send tips to t.tu@btw.media.
