Unified Long-Range Electrostatics and Dynamic Protonation for Realistic Biomolecular Simulations on the Exascale
In this DFG-supported project we target a flexible, portable, and scalable solver for potentials and forces, which is a prerequisite for exascale applications in particle-based simulations with long-range interactions in general. As a particularly challenging example to prove and demonstrate the capability of our concepts, we use the popular molecular dynamics (MD) simulation software GROMACS. MD simulation has become a crucial tool for the scientific community, especially as it accesses time and length scales that are difficult or impossible to probe experimentally. Moreover, it is a prototypical example of a general class of complex multiparticle systems with long-range interactions.
MD simulations elucidate the detailed, time-resolved behaviour of biology’s nanomachines. From a computational point of view, they are extremely challenging for two main reasons. First, to properly describe the functional motions of biomolecules, the long-range effects of the electrostatic interactions must be explicitly accounted for. Therefore, techniques like the particle-mesh Ewald (PME) method were adopted, which, however, severely limit scaling to large numbers of cores due to their global communication requirements. The second challenge is to realistically describe the time-dependent location of (partial) charges, as, e.g., the protonation states of the molecules depend on their time-dependent electrostatic environment. Here we address both tightly interlinked challenges by developing, implementing, and optimizing a unified algorithm for long-range interactions that accounts for realistic, dynamic protonation states and at the same time overcomes current scaling limitations.
Download and test our GPU-FMM for GROMACS
If you want to give our GPU-FMM a test drive, please download the tar archive below, unpack it with tar -xvzf, and install it just like a regular GROMACS 2019.
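A typical unpack-and-install sequence is sketched below; the archive name and install prefix are placeholders, and the CMake options assume a standard CUDA-enabled GROMACS 2019 build, so adjust them to your machine:

# unpack the downloaded archive (placeholder file name) and build like a regular GROMACS 2019
tar -xvzf gromacs-2019-gpu-fmm.tar.gz
cd gromacs-2019-gpu-fmm
mkdir build && cd build
cmake .. -DGMX_GPU=ON -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-fmm
make -j $(nproc)
make install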
Our CUDA FMM can be used as a PME replacement by choosing coulombtype = FMM in the .mdp input parameter list. The tree depth d and the multipole order p are set with the fmm-override-tree-depth and fmm-override-multipole-order input parameters, respectively. On request (provide your ssh key), the code can also be checked out from our git repository.
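A minimal sketch of the FMM-related .mdp settings, using the parameters mentioned above, could look as follows; the numeric values are purely illustrative and should be tuned for your system and the desired accuracy:

; FMM settings in the .mdp file (example values only)
coulombtype                   = FMM
fmm-override-tree-depth       = 3    ; octree depth d
fmm-override-multipole-order  = 8    ; multipole order p, controls accuracy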
GROMACS with GPU-FMM including benchmark systems
For running the GPU-FMM benchmarks, you need to set the following environment variable:
For sparse systems such as the aerosol system, you should additionally set
for optimum FMM performance.
Running FMM in standalone mode
You can also compile and run the GPU-FMM without GROMACS integration. The relevant code is in the ./src/gromacs/fmm/fmsolvr-gpu subdirectory of the above tar archive after unpacking. Compile it with a script like this:
# in bash
# select the host compilers for the build
export CC=$( which gcc )
export CXX=$( which g++ )
# configure the standalone GPU-FMM (adjust the source and CUDA paths to your setup)
cmake -H../git-gromacs-gmxbenchmarking/src/gromacs/fmm/fmsolvr-gpu -B. -DFMM_STANDALONE=1 -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.0
The Python script runfmm.py can be used to benchmark the standalone version of the GPU-FMM.
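After configuring with the cmake call above, a build-and-benchmark run could look like the following sketch; the location of runfmm.py within the unpacked sources and its invocation are assumptions, so consult the script itself for the available options:

# build the standalone GPU-FMM in the current build directory
make -j $(nproc)
# run the benchmark script shipped with the FMM sources (path is an assumption)
python3 ../git-gromacs-gmxbenchmarking/src/gromacs/fmm/fmsolvr-gpu/runfmm.py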
On May 19–20, a group of more than 50 GROMACS developers and users gathered at the Max Planck Institute for Biophysical Chemistry (today the Max Planck Institute for Multidisciplinary Sciences) in Göttingen to discuss various aspects of software development and future directions for GROMACS. Please follow this link to the external workshop website for more information.
A GPU-accelerated fast multipole method for GROMACS: Performance and accuracy. Journal of Chemical Theory and Computation 16 (11), pp. 6938 - 6949 (2020)
A CUDA fast multipole method with highly efficient M2L far field evaluation. The International Journal of High Performance Computing Applications 35 (1), pp. 97 - 117 (2021)
GROMEX: A scalable and versatile fast multipole method for biomolecular simulation. In: Software for Exascale Computing - SPPEXA 2016-2019, pp. 517 - 543 (Eds. Bungartz, H.-J.; Reiz, S.; Uekermann, B.; Neumann, P.; Nagel, W. E.). Springer, Cham (2020)
More bang for your buck: Improved use of GPU nodes for GROMACS 2018. Journal of Computational Chemistry 40 (27), pp. 2418 - 2431 (2019)
Charge-neutral constant pH molecular dynamics simulations using a parsimonious proton buffer. Journal of Chemical Theory and Computation 12 (3), pp. 1040 - 1051 (2016)
Accelerating an FMM-Based Coulomb Solver with GPUs. In: Software for Exascale Computing - SPPEXA 2013-2015, Lecture Notes in Computational Science and Engineering 113, pp. 485 - 504 (Eds. Bungartz, H.-J.; Neumann, P.; Nagel, W. E.). Springer (2016)
Tackling exascale software challenges in molecular dynamics simulations with GROMACS. In: Solving Software Challenges for Exascale: International Conference on Exascale Applications and Software, EASC 2014, Stockholm, Sweden, April 2-3, 2014, Revised Selected Papers, pp. 3 - 27 (Eds. Markidis, S.; Laure, E.). Springer, Cham (2015)
Portable Node-Level Performance Optimization for the Fast Multipole Method. Lecture Notes in Computational Science and Engineering 105, pp. 29 - 46 (2015)
Best bang for your buck: GPU nodes for GROMACS biomolecular simulations. Journal of Computational Chemistry 36 (26), pp. 1990 - 2008 (2015)
Scaling of the GROMACS 4.6 molecular dynamics code on SuperMUC. In: Parallel Computing: Accelerating Computational Science and Engineering (CSE), pp. 722 - 730 (Eds. Bader, M.; Bode, A.; Bungartz, H. J.). IOS Press, Amsterdam (2014)
Comparison of scalable fast methods for long-range interactions. Physical Review E 88, 063308 (2013)
GMCT: A Monte Carlo simulation package for macromolecular receptors. Journal of Computational Chemistry 33, pp. 887 - 900 (2012)
Constant pH molecular dynamics in explicit solvent with lambda-dynamics. Journal of Chemical Theory and Computation 7 (6), pp. 1962 - 1978 (2011)