Google launches Trillium AI chip that’s nearly five times faster

  • Google parent Alphabet introduced a product in its line of AI data centre chips called Trillium that it claims is nearly five times faster than its predecessor.
  • Alphabet’s effort to build custom chips for AI data centres represents one of the few viable alternatives to the top-of-the-line processors with which Nvidia dominates the market.
  • The company said the new chips will be available to its cloud customers “by the end of 2024.”

High-speed AI data centre chip

Google parent Alphabet on Tuesday introduced Trillium, the latest product in its line of artificial intelligence data centre chips, which it claims is nearly five times faster than its predecessor.

“In the last six years, industry demand for (machine learning) compute has grown by a factor of 1 million, about 10 times a year,” Alphabet CEO Sundar Pichai said on a briefing call with reporters. “I think Google was built for this moment; we’ve been pioneering [artificial intelligence chips] for over a decade.”
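That growth figure is internally consistent: tenfold annual growth sustained for six years compounds to a factor of one million, as a quick check shows.

```python
# Sanity check on Pichai's figure: 10x growth per year, sustained for
# six years, compounds to a factor of one million.
growth_per_year = 10
years = 6
print(growth_per_year ** years)  # 1000000
```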


Trillium processor is 67% more energy efficient than v5e

Alphabet’s effort to build custom chips for AI data centres represents one of the few viable alternatives to the top-of-the-line processors with which Nvidia dominates the market. Along with software closely associated with Google’s Tensor Processing Unit (TPU), these chips have given the company a sizable market share.

Nvidia holds about 80 percent of the AI data centre chip market, with the vast majority of the remaining 20 percent being various versions of Google’s TPUs. Google does not sell the chips itself, but rents access to them through its cloud computing platform.

According to Google, the sixth-generation Trillium chip will deliver 4.7 times more computing performance than the TPU v5e, which is designed to power technology that generates text and other media from large models. The Trillium processor is 67% more energy efficient than the v5e. The company said the new chips will be available to its cloud customers “by the end of 2024.”
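Taken together, the two figures suggest how much further each watt goes. A minimal back-of-the-envelope sketch, assuming “67% more energy efficient” means performance per watt improved by a factor of 1.67 (an interpretation, not a Google specification):

```python
# Back-of-the-envelope arithmetic from Google's stated figures.
# Assumption: "67% more energy efficient" = 1.67x performance per watt.
compute_gain = 4.7         # Trillium vs TPU v5e, per Google
perf_per_watt_gain = 1.67  # assumed interpretation of the efficiency claim

# Power needed to deliver the 4.7x compute, relative to the v5e:
relative_power = compute_gain / perf_per_watt_gain
print(f"~{relative_power:.1f}x the power for {compute_gain}x the compute")
# -> ~2.8x the power for 4.7x the compute
```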

Google engineers achieved the additional performance gains in part by increasing high-bandwidth memory (HBM) capacity and total memory bandwidth. AI models require large amounts of advanced memory, which has been a bottleneck to further improving performance.
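Memory bandwidth matters because each step of an AI workload is limited either by how fast the chip can do arithmetic or by how fast it can move data, whichever runs out first. A minimal roofline-style sketch with entirely hypothetical numbers (none of the values below are Trillium specifications):

```python
# Roofline-style estimate: a kernel is compute-bound or memory-bound,
# whichever limit it hits first. All numbers are hypothetical and chosen
# only to illustrate the trade-off; they are not chip specifications.
PEAK_FLOPS = 900e12      # hypothetical peak compute, FLOP/s
MEM_BANDWIDTH = 1.6e12   # hypothetical HBM bandwidth, bytes/s

def step_time(flops: float, bytes_moved: float) -> float:
    """Lower bound on kernel time: gated by compute or by memory traffic."""
    return max(flops / PEAK_FLOPS, bytes_moved / MEM_BANDWIDTH)

# Large matrix multiplies reuse data heavily -> compute-bound.
print(step_time(flops=1e15, bytes_moved=1e9))   # compute term dominates
# Streaming model weights during text generation reuses little -> memory-bound.
print(step_time(flops=1e9, bytes_moved=1e11))   # memory term dominates
```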

The company has designed the chips to be deployed in pods of 256 chips each, which can scale up to hundreds of pods.
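At that configuration the numbers add up quickly, as a short illustrative calculation shows (the pod count below is an assumed example; Google says only “hundreds”):

```python
# Illustrative scale-out arithmetic. Google cites 256 chips per pod and
# "hundreds of pods"; the pod count here is an assumed example value.
chips_per_pod = 256
pods = 400  # hypothetical "hundreds of pods"
print(f"{chips_per_pod * pods:,} chips")  # 102,400 chips
```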

Tuna Tu

Tuna Tu is an intern reporter at BTW Media covering IT infrastructure and media. She graduated from the Communication University of Zhejiang and now works in Hangzhou. Send tips to t.tu@btw.media.
