Nvidia Blackwell Dominates MLPerf Training Benchmark

The latest MLPerf Training results once again highlight Nvidia's dominance in GPU performance, led by its Blackwell GPUs pretraining the large language model Llama 3.1 405B. AMD's new MI325X GPU matched Nvidia's previous-generation H200 on popular fine-tuning tasks, leaving AMD roughly one generation behind. MLPerf, organized by the MLCommons consortium, comprises six benchmarks spanning a range of machine-learning workloads, with pretraining the most computationally demanding of them.

Raw compute is only part of the story: efficient networking among GPUs also plays a crucial role in training large models, as exemplified by Nvidia's NVL72 architecture, which links 72 Blackwell GPUs with NVLink so they can operate as a single large accelerator. Notably, the largest submission this round used 8,192 GPUs, down from more than 10,000 in previous rounds, a drop attributed to the improved capability of individual GPUs.

Power efficiency remains a blind spot: Lenovo was the only participant to report energy measurements this round. As AI's energy consumption climbs, future submissions should include power data so systems can be compared on energy-to-train as well as speed.
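To make the energy-reporting point concrete, here is a minimal sketch in Python of how reported power figures would let readers compare submissions on energy-to-train rather than speed alone. All numbers below are hypothetical, invented purely for illustration; MLPerf publishes time-to-train, and only Lenovo reported real energy data this round.

```python
# Hypothetical illustration: comparing MLPerf-style submissions by
# energy-to-train instead of time-to-train alone. Every figure here
# is made up for demonstration purposes.

submissions = [
    # (name, num_gpus, avg_system_power_kw, time_to_train_hours)
    ("System A", 8192, 12_000.0, 0.30),  # fast but power-hungry
    ("System B", 4096,  5_500.0, 0.70),  # slower, lower draw
]

for name, gpus, power_kw, hours in submissions:
    # Energy consumed by the run: average power x wall-clock time.
    energy_kwh = power_kw * hours
    print(f"{name}: {gpus} GPUs, "
          f"time-to-train {hours:.2f} h, "
          f"energy {energy_kwh:,.0f} kWh")
```

The point of the comparison: a system that finishes faster can still draw more total energy, so energy-to-train complements time-to-train rather than duplicating it.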