Data centers are increasingly looking for ways to build supercomputers that are not just faster but also more energy efficient. Nvidia is addressing this challenge on several fronts: more efficient processors, tighter CPU and GPU coordination, new networking technologies, and more efficient software libraries.

Dion Harris, Nvidia’s senior product manager for accelerated computing, said that performance remains key in scientific computing, but delivering it as efficiently as possible is becoming increasingly urgent. Nvidia has therefore been exploring ways to deliver that performance within a smaller data center footprint and a smaller carbon footprint.

Here is an overview of what’s new:

  • An Nvidia H100 GPU supercomputer demonstrates nearly twice the power efficiency of A100 implementations.
  • A combination of Grace and Grace Hopper Superchips demonstrates a 1.8x improvement for a 1-megawatt accelerated computing data center.
  • BlueField DPU demonstrates a 30% improvement in power consumption per server.
  • The Nvidia Collective Communications Library demonstrates a 3x improvement for simulations.
  • Updates to the cuFFT library demonstrate a 5x improvement in large-scale FFT execution.

More powerful supercomputers

Nvidia worked with Lenovo on the first submission of a supercomputer built on the Nvidia H100 GPU to the Green500 list, which ranks supercomputers by energy efficiency. That is a milestone in itself, but early results suggest the system could become one of the leading contenders for most efficient supercomputer.

Additionally, this particular setup is air-cooled, so it doesn’t require the special piping or rack configurations that high-performance, energy-efficient systems sometimes do.

Harris said, “It will allow this type of setup to be deployed anywhere in any typical data center.”

Improved data center efficiency

Nvidia has previously discussed how the combination of Grace and Grace Hopper superchips can improve core CPU computing. New research suggests it can also lead to more efficient accelerated computing architectures.

The company found that a standard 1-megawatt data center, with around 20% of the load allocated to CPU partitions and around 80% to accelerated partitions, can achieve a 1.8x performance improvement over traditional x86-based approaches.

Network offloading improvements

Nvidia has also released new research quantifying the benefits of offloading data management and networking tasks to the BlueField DPU. The intelligent network interface controller combines traditional network functions with accelerated networking, security, storage, and control-plane processing. The company found that this offloading can reduce overall power consumption by approximately 30% per server. In a large data center with about 10,000 servers, that could save roughly $5 million in energy costs over a three-year lifespan.
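As a rough illustration of how a per-server saving of that size scales to a fleet, here is a back-of-envelope sketch; the per-server power draw and electricity price are assumed values, not figures from Nvidia’s research:

```cpp
#include <cstdio>

// Back-of-envelope estimate of fleet-wide energy savings from a ~30%
// per-server power reduction. The power draw and electricity price are
// illustrative assumptions, not figures from Nvidia's research.
int main() {
    const double servers          = 10000.0;      // fleet size (from the article)
    const double watts_per_server = 700.0;        // assumed average draw per server
    const double reduction        = 0.30;         // ~30% savings per server (from the article)
    const double price_per_kwh    = 0.10;         // assumed electricity price, USD
    const double hours            = 3 * 365 * 24; // three-year lifespan

    const double saved_kwh = servers * watts_per_server * reduction * hours / 1000.0;
    const double saved_usd = saved_kwh * price_per_kwh;

    std::printf("Energy saved: %.0f MWh, cost saved: $%.1f million\n",
                saved_kwh / 1000.0, saved_usd / 1e6);
    return 0;
}
```

With those assumed inputs, the estimate lands in the same ballpark as the roughly $5 million figure above.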

Faster simulations

“Accelerated computing is a full-stack problem,” Harris explained. So Nvidia has optimized the underlying libraries that help popular scientific computing tools scale across multiple GPUs, systems, and sites.

An update to the Nvidia Collective Communications Library (NCCL) has delivered a 3x performance improvement for VASP (Vienna Ab initio Simulation Package), a widely used application for atomic-scale materials modeling, without any hardware changes.
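NCCL provides the collective operations, such as all-reduce, that let a simulation combine partial results across GPUs. As a rough illustration of the pattern (not VASP’s actual code, and with arbitrary buffer sizes), a minimal single-node all-reduce across all local GPUs might look like this:

```cpp
#include <cuda_runtime.h>
#include <nccl.h>
#include <vector>

int main() {
    int ndev = 0;
    cudaGetDeviceCount(&ndev);

    // One communicator per local GPU, all managed by a single process.
    std::vector<ncclComm_t> comms(ndev);
    std::vector<int> devs(ndev);
    for (int i = 0; i < ndev; ++i) devs[i] = i;
    ncclCommInitAll(comms.data(), ndev, devs.data());

    const size_t count = 1 << 20;  // arbitrary element count per GPU
    std::vector<float*> sendbuf(ndev), recvbuf(ndev);
    std::vector<cudaStream_t> streams(ndev);
    for (int i = 0; i < ndev; ++i) {
        cudaSetDevice(i);
        cudaMalloc(&sendbuf[i], count * sizeof(float));
        cudaMalloc(&recvbuf[i], count * sizeof(float));
        cudaStreamCreate(&streams[i]);
    }

    // Sum the per-GPU buffers so every GPU ends up with the combined result.
    ncclGroupStart();
    for (int i = 0; i < ndev; ++i)
        ncclAllReduce(sendbuf[i], recvbuf[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < ndev; ++i) {
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);
        cudaFree(sendbuf[i]);
        cudaFree(recvbuf[i]);
        cudaStreamDestroy(streams[i]);
        ncclCommDestroy(comms[i]);
    }
    return 0;
}
```

A sketch like this would be compiled with nvcc and linked against the NCCL library; real multi-node runs would instead create communicators across processes, typically bootstrapped via MPI.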

Enhancements to the Nvidia CUDA Fast Fourier Transform library (cuFFT) have boosted the performance of GROMACS, a simulation package for biomolecular systems. The update also makes it easier to perform FFT calculations efficiently on a much larger number of systems in parallel.

“This allows for large FFTs at the scale of the entire data center,” Harris said.
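To make the batched-execution idea concrete, here is a minimal, illustrative single-GPU sketch using cuFFT’s plan-many interface; the transform length and batch count are arbitrary, and distributing FFTs across many GPUs and nodes involves additional machinery not shown here:

```cpp
#include <cuda_runtime.h>
#include <cufft.h>

int main() {
    const int nx = 256;      // arbitrary 1D transform length
    const int batch = 1024;  // arbitrary number of independent transforms

    // Allocate one contiguous buffer holding all batched signals.
    cufftComplex* data = nullptr;
    cudaMalloc(&data, sizeof(cufftComplex) * nx * batch);

    // Plan `batch` independent complex-to-complex 1D FFTs of length nx.
    cufftHandle plan;
    int n[1] = {nx};
    cufftPlanMany(&plan, 1, n,
                  nullptr, 1, nx,   // input layout: tightly packed
                  nullptr, 1, nx,   // output layout: tightly packed
                  CUFFT_C2C, batch);

    // Execute all transforms in place with a single call.
    cufftExecC2C(plan, data, data, CUFFT_FORWARD);
    cudaDeviceSynchronize();

    cufftDestroy(plan);
    cudaFree(data);
    return 0;
}
```

The point of the plan-many interface is that many independent transforms are described once and launched together, which is the kind of batching the update described above targets at much larger scale.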
