Trends
MLCommons unveils new AI benchmark tests measuring speed
On Wednesday, AI benchmark group MLCommons unveiled a series of new benchmark tests and released results measuring how quickly top-tier hardware responds to users' queries, in an effort to evaluate speed and efficiency.

Headline
On Wednesday, AI benchmark group MLCommons unveiled new benchmark tests and released results measuring how quickly AI hardware responds to users' queries, in an effort to evaluate speed and efficiency.
Context
On Wednesday, AI benchmark group MLCommons unveiled a series of tests and released multiple results evaluating the speed and efficiency of top-tier hardware in responding to user interactions.
Evidence
Pending intelligence enrichment.
Analysis
Among the new benchmarks introduced by MLCommons, two focus on how quickly AI chips and systems generate outputs, offering insight into how fast AI applications such as ChatGPT can respond to user queries. One of the new benchmarks measures the speed of question-and-answer scenarios for large language models, using Llama 2, the 70-billion-parameter model developed by Meta Platforms. MLCommons also expanded its MLPerf benchmark tools with a second text-to-image generation test, based on Stability AI's Stable Diffusion XL model. In terms of raw performance, servers equipped with Nvidia's H100 chips, including entries from Google, Supermicro, and Nvidia itself, stood out as frontrunners in the latest benchmarks.
Key Points
- MLCommons introduces new AI benchmark tests measuring how quickly AI chips and systems generate responses from large language models.
- Servers built around Nvidia’s H100 chips, including entries from Google, Supermicro, and Nvidia itself, lead both new benchmarks in raw performance.
Actions
Pending intelligence enrichment.
