- SoftBank installs the world’s biggest Nvidia DGX SuperPOD to expand Japan’s domestic AI capabilities.
- The system is part of a national initiative to reduce reliance on overseas cloud providers.
What happened: SoftBank deploys Nvidia’s GH200-powered AI system to strengthen Japan’s sovereign cloud and reduce reliance on foreign providers
SoftBank has unveiled what it claims is the world’s largest installation of the Nvidia DGX SuperPOD, a high-performance AI computing system, located in Japan. The buildout marks a significant step in the company’s ambition to create a sovereign cloud and computing stack optimised for artificial intelligence workloads. The deployment features Nvidia’s GH200 Grace Hopper Superchips, each pairing a Grace CPU and a Hopper GPU over a 900 GB/s NVLink-C2C interconnect, and is designed to meet surging demand for generative AI training and inference in Japan.
The installation forms part of a broader SoftBank strategy to support Japan’s domestic digital transformation. According to Telecompaper, the SuperPOD is hosted in a data centre in Tochigi Prefecture and is intended to give local enterprises, startups, and research institutions high-performance computing capabilities without relying on major US hyperscalers such as AWS or Google Cloud.
Also read: SoftBank acquires Ampere for $6.5B
Also read: SoftBank showcases AI RAN innovations at MWC
Why this is important
This move signals Japan’s growing urgency to establish digital sovereignty in AI infrastructure, mirroring Europe’s push for sovereign cloud platforms. SoftBank’s deployment of the Nvidia SuperPOD aligns with the government’s commitment to reduce dependence on foreign tech giants and bolster the national AI ecosystem. The scale of the SuperPOD positions it as a key enabler for the AI research community and local enterprises, accelerating projects ranging from language models to industrial automation.
The timing also reflects a broader strategic concern: Japan has lagged behind the US and China in access to AI compute. With GH200-based systems offering greater memory capacity and bandwidth per node, Japan can now support more intensive AI training and inference locally. While the system still relies on Nvidia’s chips, local control over the compute infrastructure reduces security risks and encourages domestic innovation.
SoftBank’s step reflects a national tech recalibration. The AI arms race is no longer just about who builds the biggest models, but who owns the infrastructure to run them.