As processors and data storage drives grow bigger and faster, they can easily overwhelm networks, creating the need for new networking and system I/O approaches.
It wasn’t that many years ago that 10GbE networks seemed like the be-all and end-all for high-performance computing. Who would ever need more bandwidth than that? Well, fast forward to the present. As many organizations have found, 10GbE, and even 25GbE and 40GbE, can’t deliver the throughput demanded by bandwidth-hungry HPC workloads, including high-performance data analytics, AI, machine learning and deep learning.
Here’s the problem. With data-intensive applications, the network can create bottlenecks that limit the performance gains made possible by Intel® Optane storage, multi-core CPUs and other technology advances. While drives and processors are getting bigger and faster, the speed at which data moves is limited by the bandwidth of the network, along with system I/O, and that puts a damper on what matters most — the responsiveness of the application.
When a fraud-prevention system or a real-time stock-trading application is making split-second decisions, there’s no time for system latency. Milliseconds matter. Network latency also matters to countless other HPC use cases, from training machine learning models to extracting life-saving insights from genomic data. Slower networks mean slower time to insight. And that’s a problem for today’s workloads that are running up against network limitations in HPC systems.
Breaking through bottlenecks
There are many ways to break through bottlenecks, and lessons can be learned from systems already in production. The IT team at the University of Pisa is leveraging a new network architecture to improve the performance of its Storage Spaces Direct environment, which incorporates lightning-fast NVMe drives.
“The network has become again the bottleneck of a system, mostly because of NVMe drives,” Antonio Cisternino, the university’s chief information officer, notes in a Dell EMC case study. “Four NVMe drives, aggregated, are capable of generating around 11 gigabytes per second of bandwidth, which tops a 100-gigabit connection. They may saturate and block I/O with just four drives.”
To get around this bottleneck, the IT pros at the University of Pisa used Dell EMC S5048-ON switches to build what amounts to a bigger highway in their Storage Spaces Direct environment. A spine-leaf network design gives every server access to two lanes of 25Gb RoCE — RDMA over Converged Ethernet — to move data in and out of the NVMe drives. This results in an aggregate bandwidth of 50Gb/sec, which helps ensure that the network won’t be much of a bottleneck in the system.
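The arithmetic behind these figures is worth making concrete. A minimal sketch (the per-drive throughput value is an assumption for illustration; real NVMe drives vary):

```python
# Back-of-the-envelope check of whether aggregated NVMe drives can saturate
# a network link, using figures like those cited for the University of Pisa
# deployment. The per-drive throughput below is an illustrative assumption.

def aggregate_gbps(drive_count: int, gbytes_per_sec_per_drive: float) -> float:
    """Aggregate drive throughput in gigabits per second (1 GB/s = 8 Gb/s)."""
    return drive_count * gbytes_per_sec_per_drive * 8

# Four NVMe drives at roughly 2.75 GB/s each: ~11 GB/s aggregate,
# i.e. 88 Gb/s -- enough to come close to saturating a single 100 Gb link.
drive_bw = aggregate_gbps(4, 2.75)
print(f"4 drives: {drive_bw:.0f} Gb/s vs. a 100 Gb link")

# Two 25 Gb RoCE lanes per server yield 50 Gb/s of aggregate network bandwidth.
server_bw = 2 * 25
print(f"Per-server network: {server_bw} Gb/s")
```

The point of the exercise: storage throughput per server can approach or exceed the capacity of a single network link, which is why the Pisa design gives every server multiple RDMA lanes rather than one fat pipe.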
A high-performance file system
In HPC systems, data transfer rates are only part of the latency story. There is also the closely aligned issue of file system I/O performance, which can impact the speed at which data is transferred across the network. As a researcher from Lawrence Berkeley National Laboratory notes, “if data is being transferred to a busy file system, the transfer rate would be slower than to a file system at regular activity levels.”
In Australia, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) is addressing this issue via a storage upgrade to remove file-system bottlenecks across its HPC clusters. It has contracted with Dell EMC for a new, higher-performance file system to be shared across all of its in-house supercomputers, according to Australia’s iTnews.
The new file system will be based on Dell EMC PowerEdge™ R740 servers with Intel® Xeon® Scalable processors and will include 2 PB of NVMe-based storage from Intel, iTnews reports. This upgrade will help CSIRO avoid I/O bottlenecks and harness the full potential of its HPC systems, including its new Dell EMC-based Bracewell supercomputer.
“As our users became accustomed to the new capability of the Bracewell cluster, we anticipated that the IO performance of the filesystem would become a bottleneck restricting the performance of some of our users’ codes,” a CSIRO spokesperson told iTnews. “This upgrade will remove that bottleneck.”
With today’s data-intensive applications, HPC administrators must look closely at network and system I/O architectures. Data is not slowing down, and HPC systems need to keep all those bits and bytes moving in step with ever-faster processors and ever-faster storage media.
This is the way it is in a world where HPC, data analytics and AI are rapidly converging. And this convergence calls for creative approaches to avoid bottlenecks caused by network and system I/O constraints.
To learn more
For a closer look at the University of Pisa’s Storage Spaces Direct environment, read the Dell EMC case study “Storage Success.” And to explore the technologies for HPC and AI in a converged world, visit dellemc.com/hpc and dellemc.com/ai.