FMM electrostatics for biomolecular simulations in GROMACS
We target a flexible, portable and scalable solver for potentials and forces, a prerequisite for exascale applications in particle-based simulations with long-range interactions. As a particularly challenging example to demonstrate the capability of our concepts, we use the popular molecular dynamics (MD) simulation software GROMACS. MD simulation has become a crucial tool for the scientific community, especially as it resolves time and length scales that are difficult or impossible to access experimentally. Moreover, it is a prototypic example of a general class of complex multiparticle systems with long-range interactions.
MD simulations elucidate the detailed, time-resolved behaviour of biology’s nanomachines. From a computational point of view, they are extremely challenging for two main reasons. First, to properly describe the functional motions of biomolecules, the long-range effects of the electrostatic interactions must be explicitly accounted for. Techniques such as the particle-mesh Ewald (PME) method were adopted for this purpose; however, their global communication requirements severely limit scaling to large numbers of cores. The second challenge is to realistically describe the time-dependent location of (partial) charges, as e.g. the protonation states of the molecules depend on their time-dependent electrostatic environment. Here we address both tightly interlinked challenges through the development, implementation, and optimization of a unified algorithm for long-range interactions that accounts for realistic, dynamic protonation states and at the same time overcomes current scaling limitations.
Download and test our GPU-FMM for GROMACS
If you want to give our GPU-FMM a test drive, please download the tar archive below, unpack it with tar -xvzf, and install it just like a usual GROMACS 2019.
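For orientation, a minimal sketch of such an installation is shown below. The archive and directory names and the install prefix are assumptions; the usual GROMACS 2019 CMake options for a CUDA-enabled build apply.
# Unpack the downloaded archive (file name is an assumption)
tar -xvzf gromacs-2019-fmm-gpu.tar.gz
cd gromacs-2019-fmm-gpu
# Standard out-of-source GROMACS 2019 build with CUDA support
mkdir build && cd build
cmake .. -DGMX_GPU=ON -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-fmm
make -j 8
make install
source $HOME/gromacs-fmm/bin/GMXRC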
Our CUDA FMM can be used as a PME replacement by choosing coulombtype = FMM in the .mdp input parameter list. The tree depth d and the multipole order p are set with the fmm-override-tree-depth and fmm-override-multipole-order input parameters, respectively. On request (provide your ssh key), the code can be checked out from our git repository git@fmsolvr.fz-juelich.de:gromacs.
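For illustration, the sketch below appends these FMM settings to an existing .mdp file. The file name md.mdp and the tree depth and multipole order values are placeholders, not recommendations.
# Append FMM settings to an existing .mdp parameter file (values illustrative)
cat >> md.mdp << 'EOF'
coulombtype                  = FMM
fmm-override-tree-depth      = 3
fmm-override-multipole-order = 8
EOF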
GROMACS with GPU-FMM including benchmark systems
- GROMACS 2019 with CUDA FMM source code v.5 63.26 MB
- GROMACS input files for salt water system 1.09 MB
- GROMACS input files for multi-droplet (aerosol) system 1.34 MB
- Multi-droplet (aerosol) benchmark with FMM electrostatics .tpr (p=18!) 3.05 MB
- Multi-droplet (aerosol) benchmark with PME electrostatics .tpr 1.97 MB
- runfmm.py 1.85 kB
The multipole order stored in a .tpr file can be overridden on the command line with the MULTIPOLEORDER environment variable:
MULTIPOLEORDER=8 gmx mdrun -s in.tpr
For sparse systems such as the aerosol system, you should set the following environment variable for optimum performance:
export FMM_SPARSE=1
OPENBOUNDARY=1 can be set to calculate FMM-based Coulomb interactions with open boundaries. This can make sense for droplet systems, for example, where there is only vacuum at the box edges anyway.
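Putting these pieces together, a run of the downloaded aerosol benchmark with FMM electrostatics could look like the sketch below; the .tpr file name stands in for the benchmark input above and the multipole order value is illustrative.
# Sparse-system optimization and open boundaries for the droplet system
export FMM_SPARSE=1
export OPENBOUNDARY=1
# Override the multipole order stored in the .tpr at run time
MULTIPOLEORDER=8 gmx mdrun -s aerosol_fmm.tpr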
Running FMM in standalone mode
You can also compile and run the GPU-FMM without GROMACS integration. The relevant code is in the ./src/gromacs/fmm/fmsolvr-gpu subdirectory of the above tar archive after unpacking. Compile it with a script like this:
# in bash
export CC=$( which gcc )
export CXX=$( which g++ )
cmake -H../git-gromacs-gmxbenchmarking/src/gromacs/fmm/fmsolvr-gpu -B. -DFMM_STANDALONE=1 -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.0
make
The python script runfmm.py can be used to benchmark the standalone version of the GPU-FMM.
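Its command-line arguments are not documented here; as a minimal sketch, assuming the script is invoked from the directory containing the standalone build:
# Run the benchmark driver with Python 3; check the script itself for the
# arguments it expects (they are not documented on this page).
python3 runfmm.py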