Building and Running OpenMPI

This page describes how to build and run Open MPI to test the libfabric GNI provider. It is assumed that you are building Open MPI on a Cray XC system such as tiger or edison/cori, and that you have already built and installed a copy of libfabric.


Building and Installing Open MPI

First, if you don't already have a clone of Open MPI, clone it:

% git clone https://github.com/open-mpi/ompi.git

Next, configure, build, and install Open MPI. Note that you will not want to build Open MPI using the Cray compiler; the steps below load PrgEnv-gnu instead.

% cd ompi
% ./autogen.pl
% module load PrgEnv-gnu
% ./configure --prefix=ompi_install_dir --with-libfabric=your_libfabric_install_dir --disable-dlopen
% make -j 8 install
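
To sanity-check that the build picked up libfabric support, you can list the installed MCA components with ompi_info; if the OFI MTL was built, it should appear in the output (a quick check only, and the exact component listing varies by Open MPI version):

% ompi_install_dir/bin/ompi_info | grep ofi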

If you want to run multi-threaded MPI tests that use MPI_THREAD_MULTIPLE, you will need to configure Open MPI as follows:

% ./configure --prefix=ompi_install_dir --with-libfabric=your_libfabric_install_dir --enable-mpi-thread-multiple --disable-dlopen
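
For reference, this build is only needed by programs that request full thread support at initialization. A minimal sketch of such a program (the file name thread_check.c is hypothetical):

/* thread_check.c - hypothetical example: request and verify
 * MPI_THREAD_MULTIPLE support at initialization. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Ask the MPI library for full multi-threading support. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        printf("MPI_THREAD_MULTIPLE not available (provided=%d)\n", provided);
    } else {
        printf("MPI_THREAD_MULTIPLE supported\n");
    }

    MPI_Finalize();
    return 0;
}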

Running Open MPI with libfabric

First you will need to build an MPI app using Open MPI's compiler wrapper:

% export PATH=ompi_install_dir/bin:${PATH}
% mpicc -o my_app my_app.c
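
For example, my_app.c could be a minimal MPI program such as the following sketch (any MPI program will do):

/* my_app.c - minimal illustrative MPI program */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}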

On Tiger, the application can be launched using srun:

% srun -n 2 -N 2 ./my_app

On systems that use aprun, such as NERSC's edison and cori, the application can be launched with aprun:

% aprun -n 2 -N 1 ./my_app

If you'd like to double-check against the sockets provider, do the following:

% export OMPI_MCA_mtl_ofi_provider_exclude=gni
% srun -n 2 -N 2 ./my_app

This will force the OFI MTL to use the sockets provider.
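
To see which providers your libfabric installation offers in the first place, you can use the fi_info utility that ships with libfabric (assuming your_libfabric_install_dir/bin is in your PATH):

% fi_info -p gni
% fi_info -p sockets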

Building and Testing OSU MPI benchmarks

OSU provides a relatively simple set of MPI benchmarks that are useful for testing the GNI libfabric provider.

% wget http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-5.0.tar.gz
% tar -zxvf osu-micro-benchmarks-5.0.tar.gz
% cd osu-micro-benchmarks-5.0
% ./configure CC=mpicc
% make

The mpi/pt2pt and mpi/collective subdirectories contain a number of tests. For example, to measure Open MPI send/recv message latency, use osu_latency:

% cd mpi/pt2pt
% srun -n 2 -N 2 ./osu_latency
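
On aprun-based systems, the same test can be launched analogously:

% aprun -n 2 -N 1 ./osu_latency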