MPI
Message Passing Interface (MPI) is a communication protocol for programming parallel computers. It is typically used when a program needs to run as multiple processes, possibly spread across multiple compute nodes, and communicate between them.
SCRP nodes have OpenMPI 5.0 installed.
C
Activate the openmpi conda environment to use OpenMPI:
conda activate openmpi
To run MPI code written in C, first compile your code:
mpicc -o mpi_program mpi_codes.c
Replace mpi_program and mpi_codes.c with your desired output and input file names.
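If you do not yet have MPI code of your own, a minimal sketch of a C program that could serve as mpi_codes.c looks like the following (the file name is just the placeholder used above; the program prints each process's rank and the total process count, mirroring the Python example in the next section):

```c
/* mpi_codes.c - minimal MPI example: each process reports its rank. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_size, my_rank;

    MPI_Init(&argc, &argv);                      /* start the MPI runtime */
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);  /* total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);     /* this process's rank */

    printf("World Size: %d Rank: %d\n", world_size, my_rank);

    MPI_Finalize();                              /* shut down the MPI runtime */
    return 0;
}
```

Running it with mpiexec -n 2 should print one line per process, with ranks 0 and 1.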
To run mpi_program on your current node, use mpiexec:
# Runs two processes
mpiexec -n 2 ./mpi_program
Use srun or sbatch to run the program on a compute node. The Slurm option -n controls the number of processes being started:
# Runs two processes on a single compute node
srun -n 2 ./mpi_program
To run the program on multiple nodes, you will need to specify the large partition and use the -N option to specify the number of nodes.
The processes will be spread evenly over the number of nodes you request.
For example, to run six processes on two nodes:
# Runs three processes on each of the two nodes
srun -n 6 -N 2 -p large ./mpi_program
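The same run can be submitted non-interactively with sbatch by moving the Slurm options into a job script. A minimal sketch for the six-process, two-node example (the script name and output file name are illustrative, and the environment-activation line may need adjusting to match your shell setup):

```shell
#!/bin/bash
#SBATCH -n 6                 # number of processes
#SBATCH -N 2                 # number of nodes
#SBATCH -p large             # partition
#SBATCH -o mpi_program.out   # file to capture output (illustrative name)

srun ./mpi_program
```

Submit the script with sbatch, e.g. sbatch mpi_job.sh.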
Python
The openmpi environment supports the use of mpi4py.
Suppose you have a Python file called mpi-python.py with the following content:
from mpi4py import MPI
import socket

if __name__ == "__main__":
    world_comm = MPI.COMM_WORLD
    world_size = world_comm.Get_size()
    my_rank = world_comm.Get_rank()
    hostname = socket.gethostname()
    print(f"World Size: {world_size} Rank: {my_rank} {hostname}")
Activate the openmpi conda environment to use OpenMPI:
conda activate openmpi
Use mpiexec in combination with python to run the code on the current node:
# Runs two processes
mpiexec -n 2 python mpi-python.py
Use srun or sbatch to run the program on a compute node. The Slurm option -n controls the number of processes being started:
# Runs two processes on a single compute node
srun -n 2 python mpi-python.py
To run the program on multiple nodes, you will need to specify the large partition and use the -N option to specify the number of nodes.
The processes will be spread evenly over the number of nodes you request.
For example, to run six processes on two nodes:
# Runs three processes on each of the two nodes
srun -n 6 -N 2 -p large python mpi-python.py
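As with the C program, this run can be submitted with sbatch instead of srun. A minimal job-script sketch (the script name is illustrative; activating conda inside a batch job may require sourcing your shell initialization first, depending on your setup):

```shell
#!/bin/bash
#SBATCH -n 6        # number of processes
#SBATCH -N 2        # number of nodes
#SBATCH -p large    # partition

conda activate openmpi   # adjust if conda is not initialized in batch shells
srun python mpi-python.py
```

Each of the six processes will print its rank and the hostname of the node it runs on.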