Max Planck Institute for Multidisciplinary Sciences
Best Bang for Your Buck!
Cost-efficient MD simulations with GROMACS
Atomic-detail simulations of large biomolecular systems can easily occupy a compute cluster for weeks or even months. We therefore make continuous efforts to ensure that our computing power is used as efficiently as possible, including network fine-tuning and code optimizations to reach the best possible parallel scaling.
Would you like to adopt cloud computing for your compute-intensive scientific workloads? In the AWS HPC Blog, we share our experiences from deploying GROMACS for our large-scale DynasomeMD project using AWS HPC services and the Cyclone solution.
This is a recording of the 37th webinar in BioExcel's webinar series on computational methods for biomolecular research. Carsten Kutzner presents benchmarks of the GROMACS 2018 software package (Kutzner et al. 2019).
More bang for your buck: Improved use of GPU nodes for GROMACS 2018
We identify hardware that is optimal for producing molecular dynamics trajectories on Linux compute clusters with the GROMACS 2018 simulation package. To this end, we benchmark the GROMACS performance on a diverse set of compute nodes and relate it to the costs of the nodes, which may include their lifetime costs for energy and cooling. In agreement with our earlier investigation using GROMACS 4.6 on hardware of 2014, the performance-to-price ratio of consumer GPU nodes is considerably higher than that of CPU nodes.
However, with GROMACS 2018, the optimal CPU-to-GPU processing power balance has shifted even more towards the GPU. Hence, nodes optimized for GROMACS 2018 and later versions enable a significantly higher performance-to-price ratio than nodes optimized for older GROMACS versions. Moreover, the shift towards GPU processing makes it possible to cheaply upgrade old nodes with recent GPUs, yielding essentially the same performance as comparable brand-new hardware.
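To make the metric concrete, here is a small worked example of the performance-to-price ratio in the sense used above. All numbers are purely hypothetical placeholders (node price, power draw, electricity rate, and lifetime are not values from the paper):

    performance/price = trajectory output (ns/day) / (node price + lifetime energy and cooling costs)

    e.g.  80 ns/day on a node costing 2,000 EUR, drawing 0.5 kW over a
          5-year lifetime at 0.20 EUR/kWh:
          energy cost = 0.5 kW x 8,760 h/yr x 5 yr x 0.20 EUR/kWh = 4,380 EUR
          ratio       = 80 / (2,000 + 4,380) = 0.0125 ns/day per EUR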
On May 19–20, a group of more than 50 GROMACS developers and users gathered at the Max Planck Institute for Biophysical Chemistry in Göttingen to discuss various aspects of software development and future directions for GROMACS.
This is a recording of the 6th webinar in BioExcel's webinar series on computational methods and applications for biomolecular research. It covers many of the topics from the publication (Kutzner et al. 2015) listed below.
Past contributions that enhance parallel scaling include:
Parallelization of the Essential Dynamics + Flooding module, making use of GROMACS 4's new domain decomposition features
A patch [GPL license] for GROMACS 3.3.1 that optimizes the all-to-all communication for better PME performance on Ethernet clusters
Multiple-Process, Multiple-Data PME: this type of PME treatment is available in GROMACS from version 4 on. PME efficiency is enhanced by dedicating a subset of the processes to the calculation of the reciprocal part of the Ewald sum (see the launch sketch after this list)
For GROMACS 4.0.7 there is a GPL tool that finds the PME settings giving optimal performance on a given number of processors [download g_tune_pme] (unpack with tar -xvzf). From version 4.5 on, g_tune_pme is part of the official GROMACS package (a usage sketch follows after this list). There is also a poster describing g_tune_pme. [PDF]
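As an illustration of the multiple-process, multiple-data PME split mentioned above, here is a minimal launch sketch. The input file name topol.tpr is a placeholder, and mdrun_mpi stands for an MPI-enabled mdrun build; the -npme flag requests dedicated PME ranks:

    # 16 MPI processes in total: 4 are dedicated to the reciprocal (PME)
    # part of the Ewald sum, the remaining 12 compute the direct-space part
    mpirun -np 16 mdrun_mpi -npme 4 -s topol.tpr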
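Likewise, a sketch of a typical g_tune_pme invocation (again with a placeholder input file). The tool benchmarks several PP/PME rank splits for the given system and reports the fastest one:

    # test different numbers of dedicated PME ranks on 64 processors
    # and report the setting with the highest performance
    g_tune_pme -np 64 -s topol.tpr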
Publications
Kutzner, C.; Kniep, C.; Cherian, A.; Nordstrom, L.; Grubmüller, H.; de Groot, B. L.; Gapsys, V.: GROMACS in the cloud: A global supercomputer to speed up alchemical drug design. Journal of Chemical Information and Modeling 62 (7), pp. 1691-1711 (2022)
Kutzner, C.; Páll, S.; Fechner, M.; Esztermann, A.; de Groot, B. L.; Grubmüller, H.: More bang for your buck: Improved use of GPU nodes for GROMACS 2018. Journal of Computational Chemistry 40 (27), pp. 2418-2431 (2019)
Kutzner, C.; Páll, S.; Fechner, M.; Esztermann, A.; de Groot, B. L.; Grubmüller, H.: Best bang for your buck: GPU nodes for GROMACS biomolecular simulations. Journal of Computational Chemistry 36 (26), pp. 1990-2008 (2015)
Hess, B.; Kutzner, C.; van der Spoel, D.; Lindahl, E.: GROMACS 4: algorithms for highly efficient, load-balanced, and scalable molecular simulation. Journal of Chemical Theory and Computation 4 (3), pp. 435-447 (2008)
Kutzner, C.; van der Spoel, D.; Fechner, M.; Lindahl, E.; Schmitt, U. W.; de Groot, B. L.; Grubmüller, H.: Speeding up parallel GROMACS on high-latency networks. Journal of Computational Chemistry 28 (12), pp. 2075-2084 (2007)
Kutzner, C.; van der Spoel, D.; Fechner, M.; Lindahl, E.; Schmitt, U. W.; de Groot, B. L.; Grubmüller, H.: Improved GROMACS scaling on Ethernet switched clusters. In: Recent advances in parallel virtual machine and message passing interface. 13th European PVM/MPI Users' Group Meeting, Bonn, Germany, September 17-20, 2006, pp. 404-405 (Eds. Mohr, B.; Träff, J. L.; Worringen, J.; Dongarra, J.). Springer, Berlin (2006)