- Apple has chosen Google’s tensor processing units (TPUs) over NVIDIA’s GPUs for its latest AI software infrastructure.
- Apple’s engineers have indicated in the paper that Google’s chips could facilitate the creation of even larger and more sophisticated AI models than those currently discussed.
OUR TAKE
Apple’s latest choice stands out amid intensifying competition in AI hardware. The company’s decision to use Google’s tensor processing units (TPUs) instead of NVIDIA’s graphics processing units (GPUs) for this phase of its artificial intelligence (AI) software development breaks with industry convention and introduces a new topic of discussion in the AI space.
–Elodie Qian, BTW reporter
What happened
In a strategic shift, Apple has chosen Google’s chips for its latest artificial intelligence (AI) software infrastructure, as outlined in a research paper made public on Monday. The software infrastructure will power its forthcoming suite of AI tools.
This move deviates from the norm: Nvidia has traditionally led the AI processor market, holding a substantial 80% share.
Apple’s decision to use Google’s cloud technology is significant. While the research paper did not explicitly rule out Nvidia chips, it made no mention of Nvidia hardware in its description of the infrastructure behind Apple’s AI tools and features.
At the time of writing, Apple has not provided a statement. The paper, however, detailed the deployment of Google’s tensor processing units (TPUs) for AI model training. Specifically, Apple has employed 2,048 TPUv5p chips for models to be implemented on iPhones and other devices, alongside 8,192 TPUv4 processors for server-based AI models.
Also read: Apple’s AI leap: Partnering with Google to boost Siri’s capabilities
Also read: Apple delays AI features rollout until after iOS 18 release
Why it’s important
Nvidia’s high-performance GPUs have long dominated the high-end AI model training market, with a number of tech companies including OpenAI, Microsoft, and Anthropic using its GPUs to accelerate model training.
But over the past few years, supply of Nvidia’s GPUs has consistently fallen short of demand, prompting companies such as Google, Meta, Oracle and Tesla to develop their own chips to meet the needs of their respective AI systems and product development.
Google, despite having its own in-house TPUs, remains a top Nvidia customer, using both Nvidia GPUs and its own TPUs to train its AI systems and providing access to Nvidia technology on its cloud platform.
Apple is currently rolling out Apple Intelligence to a select group of beta testers. This follows its June announcement that it would incorporate OpenAI’s ChatGPT technology into its software offerings.
Although Reuters had previously reported on Apple’s use of TPU chips, the full extent of Apple’s reliance on Google’s hardware was only disclosed in the recent research paper. Neither Google nor Nvidia has commented on the matter.
Apple’s engineers have indicated in the paper that Google’s chips could facilitate the creation of even larger and more sophisticated AI models than those currently discussed. This suggests a promising trajectory for Apple’s AI development, underpinned by Google’s technology.