ABSTRACT
High Performance Computing (HPC) is increasingly identified as a strategic asset and enabler that accelerates research and business in all areas requiring intensive computing and large-scale Big Data analytics capabilities. The efficient exploitation of heterogeneous computing resources featuring different processor architectures and generations, coupled with the possible presence of GPU accelerators, remains a challenge. The University of Luxembourg has operated a large academic HPC facility since 2007; it remains one of the reference implementations within the country and offers a cutting-edge research infrastructure to Luxembourg public research. The HPC support team invests a significant amount of time (several months of effort per year) in providing a software environment optimised for hundreds of users, yet the complexity of HPC software quickly outpaced the capabilities of classical software management tools. Since 2014, our scientific software stack has been generated and deployed in an automated and consistent way through the RESIF framework, a wrapper on top of EasyBuild and Lmod [5] designed to handle user software generation efficiently. A large code refactoring was performed in 2017 to better handle different software sets and roles across multiple clusters, all piloted through a dedicated control repository. With the advent in 2020 of a new supercomputer featuring a different CPU architecture, and to mitigate the identified limitations of the existing framework, we report in this state-of-practice article on RESIF 3.0, the latest iteration of our scientific software management suite, now relying on a streamlined EasyBuild. It reduced by around 90% the number of custom configurations previously enforced by specific Slurm and MPI settings, while sustaining optimised builds that coexist across different CPU and GPU architectures.
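To illustrate the kind of layout such a framework enforces, the following sketch derives per-architecture installation prefixes so that builds optimised for different CPU generations (and optional GPU accelerators) never collide under a common module tree. All names and paths are hypothetical illustrations in the spirit of RESIF/EasyBuild hierarchical layouts, not the actual RESIF code:

```python
# Hypothetical sketch of per-architecture prefix resolution -- function name,
# root path, and software-set labels are illustrative, not taken from RESIF.
from pathlib import PurePosixPath
from typing import Optional


def software_prefix(root: str, swset: str, cpu_arch: str,
                    gpu_arch: Optional[str] = None) -> str:
    """Build an installation prefix such as /opt/apps/2020b/epyc/gpu-a100,
    keeping optimised builds for each CPU (and optional GPU) architecture
    in separate, coexisting module trees."""
    parts = [swset, cpu_arch]
    if gpu_arch:
        parts.append(f"gpu-{gpu_arch}")
    return str(PurePosixPath(root).joinpath(*parts))


# GPU nodes of a newer cluster:
print(software_prefix("/opt/apps", "2020b", "epyc", "a100"))
# -> /opt/apps/2020b/epyc/gpu-a100
# CPU-only nodes of an older cluster:
print(software_prefix("/opt/apps", "2020b", "broadwell"))
# -> /opt/apps/2020b/broadwell
```

Rooting an Lmod module hierarchy at each such prefix is one way to let users on heterogeneous nodes transparently pick up the build matching their hardware.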
The workflow for contributing back to the EasyBuild community has also been automated, and ongoing work aims at drastically decreasing the build time of a complete software set. Overall, most design choices for our wrapper were motivated by several years of experience in addressing, in a flexible and convenient way, the heterogeneous needs inherent to an academic environment aiming for research excellence. As the code base is publicly available, and as we also wish to report transparently on the pitfalls and difficulties met, this tool may help other HPC centres consolidate their own software management stacks.
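An automated upstream-contribution workflow can lean on EasyBuild's built-in GitHub integration (the `eb --new-pr` option, which opens a pull request against the community easyconfigs repository). A minimal sketch of the invocation such an automation layer might assemble, with a purely illustrative easyconfig file name, is:

```python
# Hypothetical helper assembling the EasyBuild command that submits a locally
# validated easyconfig upstream. `--new-pr` is a genuine EasyBuild option;
# the function name and the easyconfig file name are illustrative only.
from typing import List


def new_pr_command(easyconfig: str) -> List[str]:
    """Return the `eb` command line that opens a GitHub pull request
    contributing the given easyconfig back to the community."""
    return ["eb", "--new-pr", easyconfig]


cmd = new_pr_command("OpenMPI-4.0.5-GCC-10.2.0.eb")
print(" ".join(cmd))
# -> eb --new-pr OpenMPI-4.0.5-GCC-10.2.0.eb
```

In practice the assembled command would be handed to a process runner (e.g. `subprocess.run`) after the easyconfig has built and passed its sanity checks locally.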
REFERENCES
[1] O. Ben-Kiki, C. Evans, and B. Ingerson. 2009. YAML Ain't Markup Language.
[2] R. H. Castain, J. Hursey, A. Bouteiller, and D. Solt. 2018. PMIx: Process Management for Exascale Environments. Parallel Comput. 79 (2018), 9–29.
[3] R. Falke, R. Klein, R. Koschke, and J. Quante. 2005. The Dominance Tree in Visualizing Software Dependencies. In 3rd IEEE Intl. Workshop on Visualizing Software for Understanding and Analysis. IEEE, Budapest, Hungary, 1–6.
[4] T. Gamblin, M. LeGendre, M. R. Collette, G. L. Lee, A. Moody, B. R. de Supinski, and S. Futral. 2015. The Spack Package Manager: Bringing Order to HPC Software Chaos. In SC '15: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, Austin, TX, USA, 1–12.
[5] M. Geimer, K. Hoste, and R. McLay. 2014. Modern Scientific Software Management Using EasyBuild and Lmod. In 2014 First International Workshop on HPC User Support Tools. IEEE, New Orleans, LA, USA, 41–51. https://doi.org/10.1109/HUST.2014.8
[6] S. Khuvis, Z-Q. You, H. Na, S. Brozell, E. Franz, T. Dockendorf, J. Gardiner, and K. Tomko. 2019. A Continuous Integration-Based Framework for Software Management. In Proc. of the Practice and Experience in Advanced Research Computing (PEARC '19). ACM, New York, NY, USA, 1–7.
[7] D. Matthews and W. Limberg. 2018. MkDocs: Documentation with Markdown. mkdocs.org.
[8] R. McLay. 2013. Lmod: A New Environment Module System. https://lmod.rtfd.io.
[9] PuppetLabs. 2015. Puppet Hiera. https://puppet.com/docs/hiera/.
[10] S. Varrette, P. Bouvry, H. Cartiaux, and F. Georgatos. 2014. Management of an Academic HPC Cluster: The UL Experience. In Proc. of the 2014 Intl. Conf. on High Performance Computing & Simulation (HPCS 2014). IEEE, Bologna, Italy, 959–967.
RESIF 3.0: Toward a Flexible & Automated Management of User Software Environment on HPC facility ✱