•V4 supports over one million tokens of context for long-context workloads but lacks image and video capabilities
•Huawei chips used in parts of training, strengthening China's domestic AI compute ecosystem
The fact
DeepSeek released a preview of its V4 model on Friday, confirming that Huawei chips were used in parts of the training process. The Pro version ranks behind Google's Gemini-Pro-3.1 on world-knowledge benchmarks. V4 targets AI agent workloads with support for over one million tokens of context, but it does not include multimodal image or video processing.
The assessment
Huawei's involvement in V4 training signals closer alignment between China's top AI developers and domestic semiconductor infrastructure, reducing reliance on Nvidia. The combination of competitive benchmark performance and lower compute requirements also intensifies competition in efficient model design, particularly among open-source alternatives challenging proprietary frontier systems.
What to watch
Whether Huawei-based training becomes the standard for Chinese AI developers, and whether V4's benchmark rankings hold under independent enterprise testing.
Also read: Huawei commits $10bn over five years to smart-driving compute
Also read: DeepSeek V4 runs on Huawei chips