Nvidia enhances NIM suite with support for 3D worlds and robotics

  • Nvidia has significantly enhanced its Nvidia Inference Microservices (NIM) suite, introducing new capabilities for physical environments, advanced visual modelling, and vertical applications.
  • Key developments include the integration of Hugging Face’s inference-as-a-service on Nvidia’s DGX cloud, new Fast Voxel Database (FVDB) microservices for 3D worlds, and a suite of tools for developing humanoid robotics, including the Osmo orchestration service and an AI-enhanced teleoperation workflow.

OUR TAKE
Nvidia’s enhancements to its NIM suite mark a significant advance in AI technologies, particularly for robotics and 3D world creation. The integration with Hugging Face and the introduction of FVDB and USD microservices are pivotal steps towards more immersive virtual environments and sophisticated humanoid robotics, with the potential to transform industries and everyday applications.
–Vicky Wu, BTW reporter

What happened

At the Siggraph conference in Denver, Nvidia unveiled significant enhancements to its Nvidia Inference Microservices (NIM) suite, introducing new capabilities for physical environments, advanced visual modelling, and vertical applications. Key developments include the integration of Hugging Face’s inference-as-a-service on Nvidia’s DGX Cloud, giving Hugging Face’s 4 million developers faster, streamlined access to serverless inferencing.
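
For readers curious what serverless inferencing looks like in practice, below is a minimal sketch of calling an OpenAI-compatible inference endpoint from Python. The base URL, model ID, and the HF_TOKEN environment variable are illustrative assumptions, not values confirmed in Nvidia’s or Hugging Face’s announcement; the real ones come from the providers’ documentation.

```python
# Minimal sketch: serverless inference via an OpenAI-compatible endpoint.
# The base_url and model name are placeholders (assumptions), not official
# Hugging Face / Nvidia NIM values; HF_TOKEN is assumed to hold an access token.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-inference-provider.com/v1",  # placeholder endpoint
    api_key=os.environ["HF_TOKEN"],                         # provider access token
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # example hosted model ID
    messages=[{"role": "user",
               "content": "Summarise NIM microservices in one sentence."}],
    max_tokens=120,
)
print(response.choices[0].message.content)
```

The appeal of this model is that the developer never provisions GPUs: requests are routed to accelerated infrastructure behind the endpoint, which is what the DGX Cloud integration promises to handle.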

Nvidia also showcased advances in generative physical AI, including its Metropolis reference workflow for building interactive visual AI agents. Three new Fast Voxel Database (FVDB) NIM microservices were introduced, supporting new deep learning frameworks for 3D worlds. These services consolidate functions that previously required multiple deep-learning libraries into a single workflow.
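
fVDB’s actual Python API is not described here, so the sketch below only illustrates the data-structure idea a voxel database is built around: a sparse grid that stores payloads for occupied cells instead of allocating a dense 3D array. All class and method names are invented for the illustration and are not the fVDB API.

```python
# Conceptual sketch of a sparse voxel grid (the kind of structure voxel-database
# frameworks accelerate). This is NOT the fVDB API; names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class SparseVoxelGrid:
    voxel_size: float = 0.05                   # edge length of one voxel, in metres
    cells: dict = field(default_factory=dict)  # (i, j, k) -> per-voxel payload

    def _key(self, x: float, y: float, z: float) -> tuple:
        # Map a continuous 3D point to integer voxel coordinates.
        return (int(x // self.voxel_size),
                int(y // self.voxel_size),
                int(z // self.voxel_size))

    def insert(self, point: tuple, payload: dict) -> None:
        # Only occupied voxels consume memory, unlike a dense volume.
        self.cells[self._key(*point)] = payload

    def query(self, point: tuple):
        return self.cells.get(self._key(*point))


grid = SparseVoxelGrid()
grid.insert((1.23, 0.42, 2.71), {"density": 0.8})
print(grid.query((1.24, 0.43, 2.72)))  # same voxel -> {'density': 0.8}
```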

Nvidia also highlighted its support for physical AI, with microservices designed for speech, translation, vision, and realistic animation, including a new class of generative AI models called vision-language models that improve decision-making, accuracy, and interactivity.
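
Vision-language models of this kind are typically exposed through chat-style endpoints that accept an image alongside a text prompt. The sketch below shows that general pattern using an OpenAI-compatible client; the base URL, model ID, and image URL are placeholders rather than specific Nvidia microservice values.

```python
# Sketch: asking a vision-language model about a camera frame through an
# OpenAI-compatible chat endpoint. base_url, model, and image URL are
# placeholders (assumptions), not specific Nvidia NIM values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",                     # assumed local microservice
    api_key=os.environ.get("API_KEY", "not-needed-locally"),
)

response = client.chat.completions.create(
    model="example/vision-language-model",  # placeholder model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Is the loading dock clear of obstacles?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/dock-camera.jpg"}},
        ],
    }],
    max_tokens=64,
)
print(response.choices[0].message.content)
```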

For robotics, Nvidia introduced a suite of services, models, and computing platforms for developing, training, and building the next generation of humanoid robots. These include new NIM microservices and frameworks for robot simulation and learning, the Osmo orchestration service, and an AI- and simulation-enabled teleoperation workflow that reduces the amount of human demonstration data required for training.

Also read: Nvidia approves Samsung’s HBM3 for China market GPUs

Also read: Nvidia develops new AI chip for China amid US export control

Why it’s important

Nvidia’s enhancements to its NIM suite signify a leap in AI-driven technologies, particularly for robotics and 3D world creation.

The integration of Hugging Face’s services with Nvidia’s cloud infrastructure accelerates AI development and enables more efficient natural language processing. Support for 3D worlds through FVDB and USD microservices marks a pivotal step towards more immersive virtual environments. Nvidia’s focus on physical AI and robotics empowers developers to create advanced humanoid robots, potentially transforming industries from manufacturing to healthcare.

“The next wave of AI is robotics and one of the most exciting developments is humanoid robots,” said Jensen Huang, founder and CEO of Nvidia, in a statement. “We’re advancing the entire Nvidia robotics stack, opening access for worldwide humanoid developers and companies to use the platforms, acceleration libraries and AI models best suited for their needs.”

These developments highlight Nvidia’s role as a leader in AI and robotics, shaping the future of these technologies and their impact on society.

Vicky Wu

Vicky is an intern reporter at Blue Tech Wave specialising in AI and Blockchain. She graduated from Dalian University of Foreign Languages. Send tips to v.wu@btw.media.
