- ASUS has introduced two new laptop models, the ASUS Vivobook S series and the Zenbook 14 OLED (UM3406), equipped with the latest AMD and Intel processors featuring AI-capable neural processing units.
- ASUS has adopted NVIDIA’s MGX server reference architecture to develop a new line of AI and high-performance computing servers designed for accelerated computing.
ASUS has released two new laptop models, the ASUS Vivobook S series and the Zenbook 14 OLED (UM3406), equipped with AI-capable chips and dedicated keyboard access to AI tools, as the company moves to bolster its AI services.
Also read: Chinese tech giant Lenovo boosts AI efforts with Nvidia’s new servers
Also read: Super Micro Computer rides AI server boom to join S&P 500
ASUS laptops with AI capability
All the laptops are powered by the latest AMD and Intel processors with neural processing units (NPUs), which can enhance performance and energy efficiency while speeding up gaming, multitasking, and editing.
The Vivobook S series is powered by Intel Core Ultra 9 processors, while a fourth model in the series comes with an AMD Ryzen 8040 Series chip.
Windows 11’s AI tools can be accessed through the dedicated Copilot key on the ASUS ErgoSense keyboard, which features gaming-grade customizable RGB lighting. All the laptops also feature OLED displays and Dolby Atmos support.
The laptops range in screen size from 14 to 16 inches and are about to be put to the test on the market.
ASUS optimizes AI servers based on NVIDIA’s MGX
As an IT industry leader, ASUS not only equips its products with the latest chips but also offers high-performance AI servers based on NVIDIA’s MGX server reference architecture.
ASUS has developed and optimized the ASUS MGX servers, powered by NVIDIA’s chips, for AI services, high-performance computing, and seamless integration with both enterprise and cloud data centers.
The powerful, cost-effective servers include a no-code AI platform alongside a comprehensive in-house AI software stack. This combination lets businesses of any size accelerate AI development for large language model (LLM) pre-training, fine-tuning, and inference, with lower risk and a faster path to deployment.
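To make the jargon concrete: "inference" here simply means running a trained model to produce output. The sketch below is purely illustrative and assumes the open-source Hugging Face transformers library and the public GPT-2 model as stand-ins (neither is part of ASUS's stack, which is not detailed in this article); it shows the kind of hand-written code a no-code platform would abstract away.

```python
# Illustrative sketch only: minimal LLM inference with the open-source
# Hugging Face transformers library. The library, model ("gpt2"), and
# prompt are assumptions for illustration, not ASUS components.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode a prompt and run inference (text generation).
inputs = tokenizer("AI servers accelerate", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On MGX-class hardware, a workflow like this would typically be scaled across NVIDIA GPUs, with the platform handling model selection, distribution, and deployment rather than the user writing code.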