Quick guide for MPI jobs running on InfiniBand nodes

There are several MPI packages installed on the Scientific Computing (SciC) Linux Clusters. We recommend using "mvapich2", and this FAQ focuses on it.

1. Which version of MPI are we using? What is it?

We are using MVAPICH2 2.2.1 (the module "mvapich2/2.2.1-intel").

"MVAPICH (pronounced as “em-vah-pich”) is an open-source MPI software to exploit the novel features and mechanisms of these networking technologies and deliver best performance and scalability to MPI applications. This software is developed in the Network-Based Computing Laboratory (NBCL), headed by Prof. Dhabaleswar K. (DK) Panda."

2. How do I load "mvapich2"?

module load intel_parallel_xe/xe

module load mvapich2/2.2.1-intel

To verify the environment, run "which mpirun"; the output should be "/source/mvapich2/2.2.1-intel/bin/mpirun".
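As an additional sanity check after the two "module load" commands above, you can confirm that the compiler wrapper comes from the same installation and query the library version (mpiname is a small utility that ships with MVAPICH2; treat it as optional if it is not on your PATH):

which mpicc        # should also point under /source/mvapich2/2.2.1-intel/bin
mpiname -a         # prints the MVAPICH2 version and build configuration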

3. Which LSF file should I pick?

Use the template "/apps/testsuite/mvapich2.2.1-intel.lsf".
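The exact contents of the site template are not reproduced here; as a rough sketch, an LSF script for an mvapich2 job typically combines #BSUB directives, the module loads from section 2, and an mpirun line. All names, counts, and paths below are placeholders:

#!/bin/bash
#BSUB -J my_mpi_job                  # job name (placeholder)
#BSUB -n 32                          # number of processors requested
#BSUB -o /path/to/output/%J.out      # standard output file
#BSUB -e /path/to/output/%J.err      # standard error file

module load intel_parallel_xe/xe
module load mvapich2/2.2.1-intel

# The site template is expected to build the ${LSB_JOBID}_machines host file and
# set $nproc from the LSF allocation before the mpirun line shown in section 4.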

4. What should I do with the template?

Before running any MPI jobs, be aware that MPI executable binaries need to be re-compiled with the current version of mvapich2. Binaries built against a previous MPI installation will NOT work. The default compiler for mvapich2/2.2.1-intel is the Intel compiler; refer to the MVAPICH2 manual if you want to use other compilers such as gcc.
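For example, assuming a single C or Fortran source file (my_app.c and my_app.f90 are placeholder names), re-compilation with the MVAPICH2 compiler wrappers looks like:

mpicc  -O2 -o my_app  my_app.c       # C code; mpicc wraps the Intel C compiler by default
mpif90 -O2 -o my_app  my_app.f90     # Fortran code; mpif90 wraps the Intel Fortran compiler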

After successfully re-compiling your source code with mvapich2/2.2.1-intel, you can submit jobs with the template. Set the job name, the output file folder, and the number of processors you want to use. Then specify the absolute path of your executable binary, your input parameters, and your output file (if you want to generate one).

For example, a typical command line at the bottom of mvapich2.2.1-intel.lsf would be

mpirun -np $nproc -machinefile ${LSB_JOBID}_machines /source/NAMD_2.9/NAMD_2.9_Linux-x86_64-multicore/namd2 < input.conf > output.log
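Once the template is filled in, submit it to LSF in the usual way (assuming a standard LSF setup):

bsub < mvapich2.2.1-intel.lsf        # submit your edited copy of the template
bjobs                                # check the job status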

For more information, see KB0028514 in the IS Service Desk.
