Message Passing Interface (MPI) Jobs on Scientific Computing Linux Clusters

There are several MPI packages installed on the Scientific Computing (SciC) Linux Clusters. We recommend "mvapich2"; its main advantage is support for GPU-based simulation. That said, if you have used "mpich2" before and are happy with it, you can keep using it. This FAQ focuses on "mvapich2".

1. Which version of MPI are we using? What is it?

We are using "mvapich2-20a". 

"MVAPICH (pronounced as “em-vah-pich”) is an open-source MPI software to exploit the novel features and mechanisms of these networking technologies and deliver best performance and scalability to MPI applications. This software is developed in the Network-Based Computing Laboratory (NBCL), headed by Prof. Dhabaleswar K. (DK) Panda."

2. How do I load "mvapich2"?

module load mvapich2/2.20a

To verify, run "which mpirun"; the output should be "/apps/source/mvapich2-20a/bin/mpirun".
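If "which mpirun" points somewhere else, another MPI module is probably loaded ahead of it. A short sanity check (a sketch; "module list" is a standard Environment Modules command, and the exact version output depends on the build):

module list          # confirm mvapich2/2.20a appears among the loaded modules
which mpirun         # expect /apps/source/mvapich2-20a/bin/mpirun
mpirun --version     # print launcher/build details as a further check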

3. Which LSF file should I use?

Use the template "/apps/testsuite/mvapich2-20a.lsf".

It is only a template: change the number of processes, queue, and job name as needed, and replace "XXX" with your real command and its input parameters. The line

source /apps/testsuite/mpich2/mpich2lsf.sh $LSB_HOSTS

is important because it generates the machine file; do not delete it.
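Putting it together, a filled-in script might look like the sketch below. The job name, queue name, process count, and application name are hypothetical placeholders; take the authoritative structure from the template itself.

#!/bin/bash
#BSUB -J my_mpi_job              # job name (change accordingly)
#BSUB -q normal                  # queue (placeholder; use your site's queue name)
#BSUB -n 8                       # number of MPI processes
#BSUB -o my_mpi_job.%J.out       # standard output; %J expands to the job ID
#BSUB -e my_mpi_job.%J.err       # standard error

module load mvapich2/2.20a

# Required: generates the machine file from the hosts LSF assigns to the job.
source /apps/testsuite/mpich2/mpich2lsf.sh $LSB_HOSTS

# Stands in for the template's "XXX": your real command and input parameters.
mpirun -np 8 ./my_mpi_app input.dat

Submit the script with "bsub < my_mpi_job.lsf".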

For more information, see KB0027935 in the IS Service Desk.
