An artificial intelligence benchmark group called MLCommons unveiled the results on Monday of new tests that determine how quickly top-of-the-line hardware can run AI models.
An Nvidia Corp chip was the top performer in tests on a large language model, with a semiconductor produced by Intel Corp a close second.
The new MLPerf benchmark is based on a large language model with 6 billion parameters that summarizes CNN news articles. The benchmark simulates the "inference" portion of AI data crunching, which powers the software behind generative AI tools.
Nvidia's top submission for the inference benchmark is built around eight of its flagship H100 chips. Nvidia has dominated the market for training AI models, but hasn't captured the inference market yet.
"What you see is that we're delivering leadership performance across the board, and again, delivering that leadership performance on all workloads," Nvidia's accelerated computing marketing director, Dave Salvator, said.
Intel's success rests on its Gaudi2 chips, produced by the Habana unit the company acquired in 2019. The Gaudi2 system was roughly 10% slower than Nvidia's system.
"We're very proud of the results of inferencing, (as) we show the price performance advantage of Gaudi2," Habana's chief operating officer, Eitan Medina, said.
Intel says its system is cheaper than Nvidia's - roughly the price of Nvidia's last-generation A100 systems - but declined to discuss the exact cost of the chip.
Nvidia declined to discuss the cost of its chip. On Friday Nvidia said it planned to soon roll out a software upgrade that would double the performance from its showing in the MLPerf benchmark.
Alphabet's Google unit previewed the performance of the latest version of a custom-built chip it announced at its August cloud computing conference. (Reporting by Max A. Cherney in San Francisco; Editing by Leslie Adler)