NAMD

Nanoscale Molecular Dynamics (NAMD) is computer software for molecular dynamics simulation, written using the Charm++ parallel programming model. It is noted for its parallel efficiency and is often used to simulate large systems (millions of atoms). It is developed by a collaboration between the Theoretical and Computational Biophysics Group (TCB) and the Parallel Programming Laboratory (PPL) at the University of Illinois at Urbana–Champaign.

NAMD module

NAMD has to be loaded using Lmod prior to running it.

$ module load NAMD/<version>
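
For example, to load the MPI build listed further below:

$ module load NAMD/2.14-foss-2019b-mpi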

You can list all available versions using Lmod's spider command:

$ module spider NAMD
---------------------------------------------------------
  NAMD:
---------------------------------------------------------
      NAMD is a parallel molecular dynamics code designed for 
      high-performance simulation of large biomolecular systems.

     Versions:
        NAMD/2.14-foss-2019b-mpi
        NAMD/2.14-fosscuda-2019b
        NAMD/2.14-verbs-gcccuda-2019b

---------------------------------------------------------
  For detailed information about a specific "NAMD" package 
  (including how to load the modules) use the module's full name.
  Note that names that have a trailing (E) are extensions provided 
  by other modules.
  For example:

     $ module spider NAMD/2.14-verbs-gcccuda-2019b
---------------------------------------------------------

You will usually find the following compilations of NAMD on the system:

| Compilation | Modulefile pattern | Example | Scope |
| --- | --- | --- | --- |
| MPI | NAMD/version-toolchain-mpi | NAMD/2.14-foss-2019b-mpi | single- and multi-node jobs |
| GPU | NAMD/version-toolchain | NAMD/2.14-fosscuda-2019b | single- and multi-node jobs with GPUs |
| verbs-CUDA | NAMD/version-verbs-toolchain | NAMD/2.14-verbs-gcccuda-2019b | single- and multi-node jobs with GPUs + Replica Exchange |
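
To see the detailed load instructions for a specific build, run module spider with the full module name, for example:

$ module spider NAMD/2.14-fosscuda-2019b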

Software version

Here you can check the versions of NAMD available on the different clusters:

NAMD/2.12-intel-2017b-mpi
NAMD/2.13-foss-2018b-mpi
NAMD/2.14-foss-2019b-mpi
NAMD/2.14-fosscuda-2019b
NAMD/2.14-verbs-gcccuda-2019b

How to run NAMD in batch mode on Atlas

Single-node jobs

You can submit single-node jobs to Atlas EDR using a batch script similar to the following:

NAMD on Atlas EDR: single-node job batch script
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --job-name=NAMD_job
#SBATCH --cpus-per-task=1
#SBATCH --mem=100gb
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --output=%x.out
#SBATCH --error=%x.err

module load NAMD/2.14-foss-2019b-mpi

mpirun -np $SLURM_NTASKS namd2 mysim.conf
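
Assuming the script above is saved as, for example, namd_single_node.sl (a hypothetical file name), submit it and check its status with the usual Slurm commands:

$ sbatch namd_single_node.sl
$ squeue -u $USER -n NAMD_job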

Multi-node jobs

The only difference from a single-node job is the number of nodes requested in the batch script:

NAMD on Atlas EDR: multi-node job batch script
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --job-name=NAMD_job
#SBATCH --cpus-per-task=1
#SBATCH --mem=100gb
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24
#SBATCH --output=%x.out
#SBATCH --error=%x.err

module load NAMD/2.14-foss-2019b-mpi

mpirun -np $SLURM_NTASKS namd2 mysim.conf
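
With --nodes=2 and --ntasks-per-node=24, Slurm derives the total task count itself, so mpirun launches 48 NAMD ranks. You can confirm this inside the job with:

echo $SLURM_NTASKS    # 2 nodes x 24 tasks per node = 48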

Single-node GPU jobs

To run a single-node GPU job, create a batch script along the following lines:

NAMD on Atlas EDR: single-node GPU job batch script
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --job-name=NAMD_job
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:p40:2
#SBATCH --mem=100gb
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --output=%x.out
#SBATCH --error=%x.err

module load NAMD/2.14-fosscuda-2019b

namd2 +ppn $SLURM_NTASKS +p $SLURM_NTASKS +devices $CUDA_VISIBLE_DEVICES +idlepoll mysim.conf
  • +p: total number of PEs (worker threads).
  • +ppn: number of PEs per process.

As stated in the NAMD documentation, always add +idlepoll to the command line when running CUDA builds of NAMD. This makes NAMD poll the GPU for results rather than sleep while idle.

Also, you do not need to set the +devices option, as Slurm sets the CUDA_VISIBLE_DEVICES environment variable automatically. Therefore, the following batch script is equivalent:

NAMD on Atlas EDR: simplified single-node GPU job batch script
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --job-name=NAMD_job
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:p40:2
#SBATCH --mem=100gb
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --output=%x.out
#SBATCH --error=%x.err

module load NAMD/2.14-fosscuda-2019b

namd2 +ppn $SLURM_NTASKS +p $SLURM_NTASKS +idlepoll mysim.conf
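
If you want to verify which GPUs Slurm assigned to the job, an optional check (assuming nvidia-smi is available on the GPU node) is to add the following lines before the namd2 command:

echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"   # e.g. 0,1 for --gres=gpu:p40:2
nvidia-smi -L                                       # list the GPUs visible to the job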

Multi-node GPU jobs

Please make sure that your NAMD job scales beyond 2 GPUs before submitting multi-node GPU jobs. Otherwise, it is always a better idea to request a single node with 2 GPUs or to limit the job to a single GPU.

NAMD on Atlas EDR: multi-node GPU job batch script
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --job-name=NAMD_job
#SBATCH --ntasks-per-core=1
#SBATCH --gres=gpu:p40:2
#SBATCH --mem=100gb
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --output=%x.out
#SBATCH --error=%x.err

module load NAMD/2.14-verbs-gcccuda-2019b

# Build the Charm++ nodelist file from the Slurm allocation
# (skip this step if your site already generates it for you)
echo "group main" > .nodelist-$SLURM_JOB_ID
for host in $(scontrol show hostnames $SLURM_JOB_NODELIST); do
  echo "host $host" >> .nodelist-$SLURM_JOB_ID
done

charmrun ++nodelist .nodelist-$SLURM_JOB_ID ++p $SLURM_NTASKS ++ppn <ppn> $(which namd2) +setcpuaffinity +idlepoll mysim.conf

Tip

The value of ++ppn has to be a multiple of the number of GPUs requested per node. For example, if the number of GPUs requested per node is 2 and ++p is 96, then ++ppn has to be either 48 or 24.
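
As a sketch, ++ppn can also be derived inside the batch script instead of being hard-coded. GPUS_PER_NODE below is a variable introduced here for illustration (it is not set by Slurm) and must match the --gres request:

GPUS_PER_NODE=2                                    # must match --gres=gpu:p40:2
PPN=$(( SLURM_NTASKS_PER_NODE / GPUS_PER_NODE ))   # 48 / 2 = 24, a valid ++ppn value
charmrun ++nodelist .nodelist-$SLURM_JOB_ID ++p $SLURM_NTASKS ++ppn $PPN $(which namd2) +setcpuaffinity +idlepoll mysim.conf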

Replica Exchange (REMD)

Replica Exchange using MPI

Replica Exchange NAMD: multi-node job batch script
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --job-name=NAMD_job
#SBATCH --ntasks-per-core=1
#SBATCH --mem=100gb
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --output=%x.out
#SBATCH --error=%x.err

module load NAMD/2.14-foss-2019b-mpi

mkdir output
(cd output; mkdir {0..7})

mpirun -np $SLURM_NTASKS namd2 +replicas 8 mysim.conf +stdout output/%d/mysim.%d.log
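
The total number of MPI ranks has to be a multiple of the number of replicas (here, 2 × 32 = 64 ranks for 8 replicas). A minimal guard you could add to the script, assuming 8 replicas as above, is:

NREPLICAS=8
if (( SLURM_NTASKS % NREPLICAS != 0 )); then
  echo "Error: $SLURM_NTASKS ranks is not a multiple of $NREPLICAS replicas" >&2
  exit 1
fi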

Replica Exchange on GPUs

The only compilation of NAMD that will allow you to perform Replica Exchange on GPU is NAMD/2.14-verbs-gcccuda-2019b.

Replica Exchange NAMD: multi-node GPU job batch script
#!/bin/bash
#SBATCH --qos=regular
#SBATCH --job-name=NAMD_job
#SBATCH --ntasks-per-core=1
#SBATCH --gres=gpu:p40:2
#SBATCH --mem=100gb
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --output=%x.out
#SBATCH --error=%x.err

module load NAMD/2.14-verbs-gcccuda-2019b

# Run NAMD simulation
mkdir output
(cd output; mkdir {0..7})

# Build the Charm++ nodelist file from the Slurm allocation
# (skip this step if your site already generates it for you)
echo "group main" > .nodelist-$SLURM_JOB_ID
for host in $(scontrol show hostnames $SLURM_JOB_NODELIST); do
  echo "host $host" >> .nodelist-$SLURM_JOB_ID
done

charmrun ++nodelist .nodelist-$SLURM_JOB_ID +p8 $(which namd2) +replicas 8 mysim.conf +stdout output/%d/mysim.%d.log
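
Each replica writes its own log according to the +stdout pattern above, so you can follow an individual replica (replica 0 in this example) with:

$ tail -f output/0/mysim.0.log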