ABSTRACT
High Performance Computing (HPC) encompasses advanced computation through parallel processing. The execution time of a given simulation depends on many factors, such as the number of CPU/GPU cores, their utilisation factor and, of course, the performance, efficiency and scalability of the interconnect. In practice, this last component and its associated topology remain the most significant differentiators between HPC systems and less performant ones. The University of Luxembourg has operated a large academic HPC facility since 2007; it remains one of the reference implementations within the country and offers a cutting-edge research infrastructure to Luxembourg's public research. The main high-bandwidth, low-latency network of the operated facility relies on the dominant interconnect technology in the HPC market, i.e., InfiniBand (IB), over a fat-tree topology. It is complemented by an Ethernet-based network dedicated to management tasks, external access and interactions with users' applications that do not support InfiniBand natively. The recent acquisition of a new cutting-edge supercomputer, Aion, federated with the previous flagship cluster, Iris, provided the occasion to aggregate and consolidate the two types of networks. This article depicts the architecture and the solutions designed to expand and consolidate the existing networks beyond their initial capacity limits while preserving, as far as possible, their bisection bandwidth. At the IB level, and despite moving away from a non-blocking configuration, the proposed approach defines a blocking topology that maintains the previous fat-tree height. The leaf connection capacity is more than tripled (moving from 216 to 672 end-points) while exhibiting only marginal penalties: less than 3% (resp. 0.3%) read (resp. write) bandwidth degradation against reference parallel I/O benchmarks, and a stable, sustained point-to-point bandwidth efficiency among all possible pairs of nodes (measured above 95.45% for bi-directional streams). With regard to the Ethernet network, a novel 2-layer topology aimed at improving the availability, maintainability and scalability of the interconnect is described. It was deployed together with consistent network VLANs and subnets enforcing strict security policies via ACLs defined at layer 3, offering isolated and secure network environments. The implemented approaches are applicable to a broad range of HPC infrastructures and may thus help other HPC centres consolidate their own interconnect stacks when designing or expanding their network infrastructures.
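To make the fat-tree trade-off concrete, the following Python sketch estimates the end-point capacity, blocking factor and bisection bandwidth of a two-level leaf/spine fat-tree. The switch counts and port splits used below (12 leaves at 18/18 for the non-blocking baseline, 24 leaves at 28/12 for the blocking expansion) reproduce the 216 and 672 end-point figures quoted above but are assumptions for illustration, not the actual ULHPC switch inventory.

```python
# Sketch: capacity vs. blocking trade-off in a 2-level fat-tree.
# All switch counts, port splits and link rates below are illustrative
# assumptions, not the ULHPC fabric's real inventory.

def fat_tree_stats(leaf_switches, downlinks_per_leaf, uplinks_per_leaf,
                   link_gbps):
    """Return (end_points, blocking_factor, bisection_gbps) for a
    2-level (leaf/spine) fat-tree."""
    end_points = leaf_switches * downlinks_per_leaf
    # Blocking factor: downstream vs. upstream capacity per leaf switch.
    # 1.0 is non-blocking; larger values mean a blocking topology.
    blocking_factor = downlinks_per_leaf / uplinks_per_leaf
    # Bisection bandwidth: bounded by the aggregate uplink capacity of
    # half of the leaf switches.
    bisection_gbps = (leaf_switches // 2) * uplinks_per_leaf * link_gbps
    return end_points, blocking_factor, bisection_gbps

# Hypothetical non-blocking baseline: 216 end-points, blocking factor 1.0.
print(fat_tree_stats(12, 18, 18, link_gbps=100))
# Hypothetical blocking expansion: 672 end-points at the same tree
# height, trading some bisection bandwidth for connection capacity.
print(fat_tree_stats(24, 28, 12, link_gbps=100))
```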
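A point-to-point survey such as the one summarised above (bandwidth efficiency measured among all possible pairs of nodes) can be scripted along the following lines. This is a minimal sketch assuming an iperf3 server is already running on every node and that passwordless SSH access is available; the node names and the 100 Gb/s reference rate are placeholders, and whereas the paper's figures refer to bi-directional streams, this sketch runs a plain unidirectional test.

```python
# Sketch: all-pairs point-to-point bandwidth efficiency survey.
# Assumes an iperf3 server is running on every node in NODES and that
# passwordless SSH access works; names and rates are placeholders.
import itertools
import json
import subprocess

NODES = [f"node{i:03d}" for i in range(1, 5)]   # hypothetical node names
LINK_GBPS = 100.0                               # theoretical link rate

def measure_gbps(client: str, server: str) -> float:
    """Run a 10 s iperf3 test from `client` to `server` and return the
    achieved receive throughput in Gb/s, parsed from iperf3 JSON output."""
    out = subprocess.check_output(
        ["ssh", client, "iperf3", "-c", server, "-t", "10", "-J"])
    report = json.loads(out)
    return report["end"]["sum_received"]["bits_per_second"] / 1e9

for a, b in itertools.combinations(NODES, 2):
    gbps = measure_gbps(a, b)
    efficiency = 100.0 * gbps / LINK_GBPS
    print(f"{a} -> {b}: {gbps:6.1f} Gb/s ({efficiency:5.2f}% of line rate)")
```

Note that iperf3 exercises TCP (e.g. over Ethernet or IPoIB); native IB point-to-point tests would typically rely on the perftest tools such as ib_write_bw instead, with the same all-pairs survey structure.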
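Similarly, the consistency between VLANs and layer-3 subnets mentioned for the Ethernet network can be captured programmatically. The sketch below derives one /24 per VLAN from a site supernet, so that the VLAN ID is readable in the subnet's third octet, and evaluates a toy allow-list mimicking layer-3 ACLs; the VLAN IDs, names, addressing plan and policy are all hypothetical, not the actual ULHPC configuration.

```python
# Sketch: a consistent VLAN/subnet plan with a toy layer-3 policy.
# VLAN IDs, names, addressing and policy are hypothetical examples.
import ipaddress

SUPERNET = ipaddress.ip_network("10.0.0.0/16")
VLANS = {10: "management", 20: "external-access", 30: "user-apps"}

# One /24 per VLAN: the VLAN ID doubles as the subnet index, keeping the
# mapping consistent and auditable (VLAN 10 -> 10.0.10.0/24, etc.).
subnets = {vid: list(SUPERNET.subnets(new_prefix=24))[vid] for vid in VLANS}

# Toy ACL: (src VLAN, dst VLAN) pairs permitted at layer 3; anything
# else is implicitly denied, isolating the remaining environments.
ALLOWED = {(10, 10), (20, 30)}

def permitted(src_vlan: int, dst_vlan: int) -> bool:
    """Return True if traffic from src_vlan to dst_vlan is allowed."""
    return (src_vlan, dst_vlan) in ALLOWED

for vid, name in VLANS.items():
    print(f"VLAN {vid:>3} ({name}): {subnets[vid]}")
print("external-access -> user-apps permitted:", permitted(20, 30))
```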