LAMMPS – Carbon Nanotubes

To get acquainted with Hippolyta, use LAMMPS to model two carbon nanotubes.1

Load the LAMMPS module

Start by loading the LAMMPS module and its dependencies:

module load lammps-all

In addition to telling your shell where to find the LAMMPS executable, loading the LAMMPS module sets two environment variables: LAMMPS_POTENTIALS and LAMMPS_EXAMPLES. The LAMMPS_POTENTIALS variable tells LAMMPS where to look for the system versions of interatomic potentials to use. The LAMMPS_EXAMPLES variable holds the location of the example simulation scripts distributed with the LAMMPS source code, along with a few example scripts developed by Hippolyta users.

brian@hippolyta > ls ${LAMMPS_EXAMPLES}/
accelerate  COUPLE    ELASTIC      indent   min        peptide  reax     USER
ASPHERE     crack     ELASTIC_T    KAPPA    msst       peri     rigid    vashishta
balance     deposit   ellipse      kim      nanotubes  pour     shear    VISCOSITY
body        DIFFUSE   fccVacancy   MC       nb3b       prd      snap     voronoi
colloid     dipole    flow         meam     neb        python   srd
comb        dreiding  friction     melt     nemd       qeq      streitz
coreshell   eim       hugoniostat  micelle  obstacle   README   tad

Obtain the example files

Copy the nanotubes example from the ${LAMMPS_EXAMPLES} directory into your ~/data directory to proceed with the example:

cp -r ${LAMMPS_EXAMPLES}/nanotubes ~/data/
cd ~/data/nanotubes

The nanotubes example consists of four files: a LAMMPS input script in.nanotubes, a LAMMPS data file dataangle.cnt, a job submission script submit_nanotubes.sh, and a README file. (If you’re following this tutorial on your own machine, you’ll probably also want to grab the AIREBO potential file CH.airebo.)

The data file dataangle.cnt contains the positions of the atoms in the two nanotubes, and the LAMMPS script in.nanotubes contains the LAMMPS commands for the molecular dynamics simulation:

# in.nanotubes -- Ankit Gupta
atom_style atomic
units metal
boundary f f f
read_data dataangle.cnt
group low1 id <= 240
group up1 id <> 241 480
group idwy id 3 220 249 466
newton on
pair_style hybrid airebo 4.0 1 0 lj/cut 10.2
pair_coeff * * airebo CH.airebo C C
pair_coeff 1 2 lj/cut 0.00284 3.40
neighbor 2.0 bin
neigh_modify delay 5
dump        1 all atom 10 dump.nanotubes
minimize 1.0e-6 1.0e-6 1000 1000
fix 1 low1 move linear 0.0  0 0.0
fix 2 up1  move linear 0.0  0 1.0
fix_modify 1 energy yes
fix_modify 2 energy yes
thermo      10
thermo_modify format 5 %10.10g
dump        2 idwy custom 10 dump1.nanotubes id x y z
run     7000

Since LAMMPS_POTENTIALS points to the system potentials directory on Hippolyta, you do not need to keep a copy of the potential file (CH.airebo in this case) in your working directory.
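The lookup can be sketched in plain shell: LAMMPS tries the filename as given (so a copy in the working directory is found first) and then falls back to the LAMMPS_POTENTIALS directory. The directory created below is an illustrative stand-in, not Hippolyta's real system path:

```shell
# Sketch of the potential-file lookup (illustrative paths, not Hippolyta's
# real ones): try the working directory first, then fall back to the
# directory named by LAMMPS_POTENTIALS.
LAMMPS_POTENTIALS=$(mktemp -d)           # stand-in for the system directory
touch "${LAMMPS_POTENTIALS}/CH.airebo"

for dir in . "${LAMMPS_POTENTIALS}"; do
    if [ -e "${dir}/CH.airebo" ]; then
        echo "found: ${dir}/CH.airebo"
        break
    fi
done
```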

brian@hippolyta > ls ${LAMMPS_POTENTIALS}/
Ag_u3.eam            CuH.bop.table         ffield.reax.mattsson  Ni.meam                  SiO.tersoff
AlCu.adp             Cu.meam               ffield.reax.rdx       Ni_smf7.eam              Si.sw
AlCu.bop.table       Cu_mishin1.eam.alloy  ffield.reax.V_O_C_H   Ni_u3.eam                Si.tersoff
AlCu.eam.alloy       CuNi.eam.alloy        ffield.reax.ZnOH      Pd_u3.eam                Si.tersoff.mod
AlFe_mm.eam.fs       Cu_smf7.eam           ffield.smtbq.Al       Pt_u3.eam                Ta06A_pot.snap
Al_jnp.eam           CuTa_eam.poly         ffield.smtbq.Al2O3    README                   Ta06A.snapcoeff
Al_mm.eam.fs         Cu_u3.eam             ffield.smtbq.TiO2     README.reax              Ta06A.snapparam
AlO.eam.alloy        Cu_u6.eam             GaAs.bop.table        Si_1.meam.spline         Ta4.mgpt.parmin
AlO.streitz          Cu_zhou.eam.alloy     GaN.sw                Si_2.meam.spline         Ta4.mgpt.potin
AlSiMgCuFe.meam      CuZr_mm.eam.fs        GaN_sw.poly           Si.b.meam.sw.spline      Ta4.mgpt.README
Al_zhou.eam.alloy    FeCr.cdeam            GaN.tersoff           SiC_1989.tersoff         Ta6.8x.mgpt.parmin
Au_u3.eam            Fe_mm.eam.fs          GaN_tersoff.poly      SiC_1990.tersoff         Ta6.8x.mgpt.potin
BNC.tersoff          FeP_mm.eam.fs         InP.vashishta         SiC_1994.tersoff         Ta6.8x.mgpt.README
CCu_v2.bop.table     ffield.comb           lib.comb3             SiC_Erhart-Albe.tersoff  Ti.meam.spline
CdTe.bop.table       ffield.comb3          library.meam          SiCGe.tersoff            Ti.meam.sw.spline
CdTeSe.bop.table     ffield.eim            Mg_mm.eam.fs          SiC.meam                 V6.1.mgpt.parmin
CdTe.sw              ffield.reax.AB        Mo5.2.mgpt.parmin     SiC.tersoff              V6.1.mgpt.potin
CdTeZnSeHgS0.sw      ffield.reax.AuO       Mo5.2.mgpt.potin      SiC.tersoff.zbl          V6.1.mgpt.README
CdZnTe_v1.bop.table  ffield.reax.budzien   Mo5.2.mgpt.README     SiC.vashishta            VFe_mm.eam.fs
CdZnTe_v2.bop.table  ffield.reax.cho       MOH.nb3b.harmonic     Si.edip                  W_zhou.eam.alloy
CH.airebo            ffield.reax.FC        Ni.adp                SiO.1990.vashishta       Zr_mm.eam.fs
C.lcbop              ffield.reax.Fe_O_C_H  NiAlH_jea.eam.alloy   SiO.1994.vashishta
CoAl.eam.alloy       ffield.reax.lg        NiAlH_jea.eam.fs      SiO.1997.vashishta

Submit a LAMMPS script to slurm

The usual way of running LAMMPS is by entering lammps -in in.lammps_script at the command prompt. Similarly, parallel LAMMPS jobs are invoked by entering mpirun -np 4 lammps -in in.lammps_script, which in this case launches LAMMPS with 4 processes. Invoking LAMMPS this way on Hippolyta, however, would run compute-heavy LAMMPS processes on the head node, which is shared by all users.

Instead, we want to use slurm to run LAMMPS on the compute nodes. This is typically accomplished by handing a job submission script to the sbatch command. A job submission script is just a shell script with special comments (beginning with #SBATCH) that pass options to sbatch:

#!/bin/bash
#SBATCH --job-name=nanotubes-example
#SBATCH --output=nanotubes-example.stdout
#SBATCH --ntasks=4
#SBATCH --time=1:00
#SBATCH --partition=test

# lammps input script as the first argument
LAMMPS_SCRIPT=${1}

mpirun lammps -in ${LAMMPS_SCRIPT}

This example script asks slurm for four processors (for one minute of wall-clock time) on the test partition to run a job named nanotubes-example. Any output that would normally be printed to the screen is saved to the file nanotubes-example.stdout. Finally, the job script runs LAMMPS in the usual way. Note that explicitly specifying the -np option to mpirun is unnecessary, since the number of processors has already been requested in an #SBATCH directive.

The #SBATCH directives included in the example script are just a few of the possible directives. You can read more in the online documentation, or by viewing the sbatch manual page (run man sbatch).
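Because #SBATCH directives are ordinary shell comments, the script also runs unchanged outside slurm, and you can list the directives in any submission script with grep. The snippet below writes a minimal stand-in script first so that it is self-contained:

```shell
# #SBATCH directives are ordinary comments to the shell, so the script
# runs the same with or without slurm; sbatch scans the leading comment
# lines for options. A minimal stand-in script is written here so the
# example does not depend on the tutorial files.
cat > submit_demo.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=nanotubes-example
#SBATCH --ntasks=4
mpirun lammps -in "${1}"
EOF

grep '^#SBATCH' submit_demo.sh
```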

Submit this job script to slurm by entering

sbatch submit_nanotubes.sh in.nanotubes

at the command prompt. sbatch responds with a job id number, and you can check the job's status with the squeue command.

brian@hippolyta > sbatch submit_nanotubes.sh in.nanotubes
Submitted batch job 11223
brian@hippolyta > squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
              1837     batch     iso1   philip PD       0:00      1 (PartitionTimeLimit)
             11223      test nanotube    brian  R       0:01      1 cacophony31
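In scripts it is often handy to capture the job id that sbatch reports, for later squeue -j or scancel calls. A stand-in string replaces the live sbatch invocation below:

```shell
# sbatch prints "Submitted batch job <id>" on stdout; the id is the
# fourth field. A stand-in string is used here instead of a live
# sbatch call.
SBATCH_OUTPUT="Submitted batch job 11223"    # illustrative
JOB_ID=$(echo "${SBATCH_OUTPUT}" | awk '{print $4}')
echo "${JOB_ID}"
```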

After your job has finished running, you can check that all of the expected LAMMPS output files (dump files, log files, etc.) are present:

brian@hippolyta > ls
dataangle.cnt  dump1.nanotubes  dump.nanotubes  in.nanotubes  log.lammps  nanotubes-example.stdout  README  submit_nanotubes.sh
brian@hippolyta > head -n 20 log.lammps
LAMMPS (26 Oct 2015)
# in.nanotubes -- Ankit Gupta
atom_style atomic
units metal
boundary f f f
read_data dataangle.cnt
  orthogonal box = (-1000 -1000 -1000) to (1000 1000 1000)
  1 by 2 by 2 MPI processor grid
  reading atoms ...
  480 atoms
group low1 id <= 240
240 atoms in group low1
group up1 id <> 241 480
240 atoms in group up1
group idwy id 3 220 249 466
4 atoms in group idwy
newton on
pair_style hybrid airebo 4.0 1 0 lj/cut 10.2
pair_coeff * * airebo CH.airebo C C
Reading potential file CH.airebo with DATE: 2011-10-25
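
As a quick sanity check on the dump output, note that each snapshot in a LAMMPS text dump begins with an "ITEM: TIMESTEP" header, so counting those lines counts the saved frames. The snippet below builds a tiny two-frame stand-in file so it runs without a finished simulation:

```shell
# Each snapshot in a LAMMPS text dump starts with "ITEM: TIMESTEP", so
# counting those headers counts the frames. A two-frame stand-in file
# is created here in place of a real dump.nanotubes.
printf 'ITEM: TIMESTEP\n0\nITEM: NUMBER OF ATOMS\n480\n'  >  dump.demo
printf 'ITEM: TIMESTEP\n10\nITEM: NUMBER OF ATOMS\n480\n' >> dump.demo

grep -c '^ITEM: TIMESTEP' dump.demo
```

For the real run (dump every 10 steps over 7000 steps), the count should be correspondingly larger.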


1. Contributed by Ankit Gupta.