GROMACS¶
GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.
It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (which usually dominate simulations), many groups are also using it for research on non-biological systems, e.g. polymers.
GROMACS Module¶
GROMACS has to be loaded using Lmod prior to running it:
$ module load GROMACS
You can also list all available versions using Lmod's spider command:
$ module spider GROMACS
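For example, to load a specific version from the list below and check that the binary is on your path (MPI-enabled modules provide gmx_mpi, while serial builds provide gmx):

$ module load GROMACS/2022.4-foss-2021a
$ gmx_mpi --version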
Software versions¶
Here you can check the available versions of GROMACS on the different clusters:
GROMACS/4.0.5-mod
GROMACS/4.0.5-mod2-foss-2018b-minimalopt
GROMACS/4.0.5-mod2-foss-2018b-noopt
GROMACS/4.0.5-mod2-foss-2018b
GROMACS/4.0.5-mod2-intel-2016b-noopt
GROMACS/4.0.5-mod2-intel-2016b
GROMACS/4.0.5-mod2-intel-2018b-minimalopt
GROMACS/4.0.5-mod2
GROMACS/5.0.7-intel-2017b-hybrid
GROMACS/5.1.2-intel-2017a-hybrid
GROMACS/2016.4-intel-2017a
GROMACS/2018-foss-2018a
GROMACS/2019-foss-2018b-Gold
GROMACS/2019-foss-2018b
GROMACS/2019.2-intel-2019a
GROMACS/2019.4-foss-2019b
GROMACS/2021.3-foss-2021a
GROMACS/2022.4-foss-2021a
GROMACS/2022.4-intel-2021a
GROMACS/constantph-foss-2020b-CUDA-11.1.1
GROMACS/constantph-intel-2021a-PLUMED-2.8.1
GROMACS/4.6.5-foss-2021b
GROMACS/4.6.5-intel-2021b
GROMACS/2016.4-foss-2019b-PLUMED-2.4.0
GROMACS/2016.4-fosscuda-2019b-PLUMED-2.4.0
GROMACS/2019.4-foss-2019b
GROMACS/2020-foss-2019b
GROMACS/2020-fosscuda-2019b
GROMACS/2020.1-foss-2020a-Python-3.8.2
GROMACS/2020.3-fosscuda-2019b
GROMACS/2020.4-foss-2020a-Python-3.8.2
GROMACS/2021.2-fosscuda-2020b
GROMACS/2021.3-fosscuda-2020b-PLUMED-2.7.2
GROMACS/2022.4-foss-2020b
GROMACS/2022.4-fosscuda-2020b
GROMACS/2022.4-intel-2021a
GROMACS/2022.5-foss-2021b-PLUMED-2.8.1
GROMACS/2022.5-foss-2022a-PLUMED-2.8.1
GROMACS/2023.1-foss-2022a-CUDA-11.7.0
GROMACS/2021.5-foss-2021b-CUDA-11.4.1-PLUMED-2.8.0
GROMACS/2023-foss-2022a-CUDA-11.7.0-PLUMED-2.9.0
GROMACS/2023-foss-2022a-CUDA-11.7.0
GROMACS/2023.1-foss-2022a-CUDA-11.7.0
GROMACS/2023.1-foss-2022a
Running GROMACS jobs on different HPC systems¶
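All of the scripts below read a portable run input file, input.tpr. As a minimal sketch, such a file is typically prepared beforehand with gmx grompp (the file names md.mdp, conf.gro and topol.top are placeholders for your own parameter, structure and topology files):

$ gmx grompp -f md.mdp -c conf.gro -p topol.top -o input.tpr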
Batch script for parallel execution of GROMACS compiled with the foss toolchain:
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --job-name=GROMACS_job
#SBATCH --mem=200gb
#SBATCH --cpus-per-task=6         # OpenMP threads per MPI rank
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8       # MPI ranks per node
#SBATCH --output=%x-%j.out
#SBATCH --error=%x-%j.err
module load GROMACS/2022.4-foss-2021a
# One MPI rank per Slurm task; -ntomp sets the OpenMP threads per rank
mpirun -np ${SLURM_NTASKS} gmx_mpi mdrun -ntomp ${SLURM_CPUS_PER_TASK} -s input.tpr
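Assuming the script is saved as gromacs_foss.sh (a placeholder name), it can be submitted with:

$ sbatch gromacs_foss.sh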
Batch script for parallel execution of GROMACS compiled with the intel toolchain:
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --job-name=GROMACS_job
#SBATCH --mem=200gb
#SBATCH --cpus-per-task=6
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --output=%x-%j.out
#SBATCH --error=%x-%j.err
module load GROMACS/2022.4-intel-2021a
# Intel MPI integrates with Slurm, so the job is launched with srun
srun gmx_mpi mdrun -ntomp ${SLURM_CPUS_PER_TASK} -s input.tpr
GPU jobs (nodes with two P40 GPUs):
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --job-name=GROMACS_job
#SBATCH --mem=200gb
#SBATCH --cpus-per-task=1
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --gres=gpu:p40:2          # request two P40 GPUs per node
#SBATCH --output=%x-%j.out
#SBATCH --error=%x-%j.err
module load GROMACS/2020-fosscuda-2019b
# -nb/-bonded/-pme auto lets mdrun offload work to the GPUs where supported;
# -gpu_id 01 exposes GPUs 0 and 1 to the ranks on each node
srun gmx_mpi mdrun -ntomp $SLURM_CPUS_PER_TASK -nb auto -bonded auto -pme auto -gpu_id 01 -s input.tpr
GPU jobs (nodes with two RTX GPUs):
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --job-name=GROMACS_job
#SBATCH --mem=200gb
#SBATCH --cpus-per-task=1
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --gres=gpu:rtx:2
#SBATCH --output=%x-%j.out
#SBATCH --error=%x-%j.err
module load GROMACS/2023.1-foss-2022a-CUDA-11.7.0
srun gmx_mpi mdrun -ntomp $SLURM_CPUS_PER_TASK -nb auto -bonded auto -pme auto -gpu_id 01 -s input.tpr
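Runs that exceed the wall-time limit can be continued from the checkpoint file that mdrun writes periodically (state.cpt by default). A minimal sketch, reusing the launch line from the scripts above:

srun gmx_mpi mdrun -ntomp ${SLURM_CPUS_PER_TASK} -s input.tpr -cpi state.cpt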