Installation¶
Wheel packages¶
The mpi4py project builds and publishes binary wheels able to run in a variety of:
operating systems: Linux, macOS, Windows;
processor architectures: AMD64, ARM64;
MPI implementations: MPICH, Open MPI, MVAPICH, Intel MPI, HPE Cray MPICH, Microsoft MPI;
Python implementations: CPython, PyPy.
These mpi4py wheels are distributed via the Python Package Index (PyPI) and can be installed with Python package managers like pip:
python -m pip install mpi4py
The mpi4py wheels can be installed in standard Python virtual environments. The MPI runtime can be provided by other wheels installed in the same virtual environment.
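Once mpi4py and an MPI runtime are installed, a quick smoke test (assuming the mpiexec launcher is on PATH) confirms that the bindings import and communicate:
python -c "import mpi4py; print(mpi4py.__version__)"
mpiexec -n 2 python -m mpi4py.bench helloworld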
Tip
Intel publishes production-grade Intel MPI wheels for Linux (x86_64) and Windows (AMD64). mpi4py and MPI wheels can be installed side by side to get a ready-to-use Python+MPI environment:
python -m pip install mpi4py impi-rt
Tip
The mpi4py project publishes MPICH wheels and Open MPI wheels for Linux (x86_64/aarch64) and macOS (arm64/x86_64). mpi4py and MPI wheels can be installed side by side to get a ready-to-use Python+MPI environment:
python -m pip install mpi4py mpich # for MPICH
python -m pip install mpi4py openmpi # for Open MPI
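After installing one of these combinations, you can check which MPI library the wheels actually picked up at runtime, for example:
python -c "from mpi4py import MPI; print(MPI.Get_library_version())"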
Warning
The MPI wheels are distributed with special focus on ease of use, convenience, compatibility, and interoperability. The Linux wheels are built in somewhat constrained environments with relatively dated Linux distributions (manylinux container images). Therefore, they may lack support for features like GPU awareness (CUDA/ROCm) and C++/Fortran bindings. In production scenarios, it is recommended to use external (either custom-built or system-provided) MPI installations.
The mpi4py wheels can also be installed (with pip) in conda environments and they should work out of the box, without any special tweak to environment variables, for any of the MPI packages provided by conda-forge.
Externally-provided MPI implementations may come from a system package manager, sysadmin-maintained builds accessible via module files, or customized user builds. Such usage is supported and encouraged. However, there are a few platform-specific considerations to take into account.
Linux¶
The Linux (x86_64/aarch64) wheels require one of
MPICH or any other ABI-compatible derivative, like MVAPICH, Intel MPI, HPE Cray MPICH
Open MPI or any other ABI-compatible derivative, like NVIDIA HPC-X
Users may need to set the LD_LIBRARY_PATH environment variable such that the dynamic linker is able to find the MPI shared library file (libmpi.so.*) at runtime.
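For example, for an MPI library installed under a custom prefix (the /opt/mympi path below is purely illustrative):
export LD_LIBRARY_PATH=/opt/mympi/lib:$LD_LIBRARY_PATH
python -c "from mpi4py import MPI; print(MPI.Get_library_version())"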
Fedora/RHEL¶
On Fedora/RHEL systems, both MPICH and Open MPI are available for installation. There is no default or preferred MPI implementation. Instead, users must select their favorite MPI implementation by loading the proper MPI module.
module load mpi/mpich-$(arch) # for MPICH
module load mpi/openmpi-$(arch) # for Open MPI
After loading the requested MPI module, the LD_LIBRARY_PATH environment variable should be properly set up.
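For instance, to load MPICH and confirm that mpi4py resolves its library (assuming mpi4py is already installed):
module load mpi/mpich-$(arch)
python -c "from mpi4py import MPI; print(MPI.Get_library_version())"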
Debian/Ubuntu¶
On Debian/Ubuntu systems, Open MPI is the default MPI implementation and most of the MPI-based applications and libraries provided by the distribution depend on Open MPI. Nonetheless, MPICH is also available to users for installation.
In Ubuntu 22.04 and older, due to legacy reasons, the MPICH ABI is slightly broken: the MPI shared library file is named libmpich.so.12 instead of libmpi.so.12 as required by the MPICH ABI Compatibility Initiative.
Users without sudo access can work around this issue by creating a symbolic link anywhere in their home directory and appending its location to LD_LIBRARY_PATH:
mkdir -p ~/.local/lib
libdir=/usr/lib/$(arch)-linux-gnu
ln -s $libdir/libmpich.so.12 ~/.local/lib/libmpi.so.12
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.local/lib
A system-wide fix for all users requires sudo access:
libdir=/usr/lib/$(arch)-linux-gnu
sudo ln -sr $libdir/libmpi{ch,}.so.12
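Either way, the fix can be verified by checking that the dynamic linker now resolves libmpi.so.12, for example:
python -c "import ctypes; ctypes.CDLL('libmpi.so.12'); print('libmpi.so.12 found')"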
HPE Cray OS¶
On HPE Cray systems, users must load the cray-mpich-abi module.
For further details, refer to man intro_mpi.
macOS¶
The macOS (arm64/x86_64) wheels require one of
MPICH or any other ABI-compatible derivative
Open MPI or any other ABI-compatible derivative
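As on Linux, the dynamic linker must be able to locate the MPI shared library (libmpi.dylib) at runtime; if the library lives under a non-default prefix, setting DYLD_LIBRARY_PATH may help (the Homebrew prefix below is only an example and differs on Intel Macs):
export DYLD_LIBRARY_PATH=/opt/homebrew/lib:$DYLD_LIBRARY_PATH
python -c "from mpi4py import MPI; print(MPI.Get_library_version())"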
Windows¶
The Windows (AMD64) wheels require one of
Intel MPI
Microsoft MPI (MS-MPI)
Users may need to set the I_MPI_ROOT or MSMPI_BIN environment variables such that the MPI dynamic link library (DLL) (impi.dll or msmpi.dll) can be found at runtime.
Intel MPI is under active development and supports recent versions of the MPI standard. Intel MPI can be installed with pip (see the impi-rt package on PyPI), making it straightforward to get up and running within a Python environment. Intel MPI can also be installed system-wide as part of the Intel oneAPI HPC Toolkit for Windows or via standalone online/offline installers.
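For the pip route, installing both packages into the same environment and launching a short command under mpiexec is usually enough to confirm the setup:
python -m pip install mpi4py impi-rt
mpiexec -n 2 python -m mpi4py.bench helloworld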
Conda packages¶
The conda-forge community provides ready-to-use binary packages from an ever growing collection of software libraries built around the multi-platform conda package manager. Four MPI implementations are available on conda-forge: Open MPI (Linux and macOS), MPICH (Linux and macOS), Intel MPI (Linux and Windows), and Microsoft MPI (Windows). You can install mpi4py and your preferred MPI implementation using the conda package manager:
to use MPICH do:
conda install -c conda-forge mpi4py mpich
to use Open MPI do:
conda install -c conda-forge mpi4py openmpi
to use Intel MPI do:
conda install -c conda-forge mpi4py impi_rt
to use Microsoft MPI do:
conda install -c conda-forge mpi4py msmpi
MPICH and many of its derivatives are ABI-compatible. You can provide the package specification mpich=X.Y.*=external_* (where X and Y are the major and minor version numbers) to request the conda package manager to use system-provided MPICH (or derivative) libraries. Similarly, you can provide the package specification openmpi=X.Y.*=external_* to use system-provided Open MPI libraries.
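For example, to pair the conda-forge mpi4py package with a system-provided MPICH (substitute the actual major and minor version numbers for X and Y):
conda install -c conda-forge mpi4py "mpich=X.Y.*=external_*"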
The openmpi package on conda-forge has built-in CUDA support, but it is disabled by default. To enable it, follow the instructions outlined during conda install. Additionally, UCX support is also available once the ucx package is installed.
Warning
The MPI conda-forge packages are built with special focus on compatibility. The MPICH and Open MPI packages are built in a constrained environment with relatively dated OS images. Therefore, they may lack support for high-performance features like cross-memory attach (XPMEM/CMA). In production scenarios, it is recommended to use external (either custom-built or system-provided) MPI installations. See the relevant conda-forge documentation about using external MPI libraries.
System packages¶
mpi4py is readily available through system package managers of most Linux distributions and the most popular community package managers for macOS.
Linux¶
On Fedora Linux systems (as well as RHEL and their derivatives using the EPEL software repository), you can install binary packages with the system package manager:
using dnf and the mpich package:
sudo dnf install python3-mpi4py-mpich
using dnf and the openmpi package:
sudo dnf install python3-mpi4py-openmpi
Please remember to load the correct MPI module for your chosen MPI implementation:
for the mpich package do:
module load mpi/mpich-$(arch)
python -c "from mpi4py import MPI"
for the openmpi package do:
module load mpi/openmpi-$(arch)
python -c "from mpi4py import MPI"
On Ubuntu Linux and Debian Linux systems, binary packages are available for installation using the system package manager:
sudo apt install python3-mpi4py
On Arch Linux systems, binary packages are available for installation using the system package manager:
sudo pacman -S python-mpi4py
macOS¶
macOS users can install mpi4py using the Homebrew package manager:
brew install mpi4py
Note that the Homebrew mpi4py package uses Open MPI. Alternatively, install the mpich package and then install mpi4py from sources using pip.
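That route might look like the following (assuming Homebrew's mpicc ends up on PATH):
brew install mpich
python -m pip install --no-binary=mpi4py mpi4py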
Alternatively, mpi4py can be installed from MacPorts:
sudo port install py-mpi4py
Building from sources¶
Installing mpi4py from pre-built binary wheels, conda packages, or system packages is not always desired or appropriate. For example, the mpi4py wheels published on PyPI may not be interoperable with non-mainstream, vendor-specific MPI implementations; or a system mpi4py package may be built with an alternative, non-default MPI implementation. In such scenarios, mpi4py can still be installed from its source distribution (sdist) using pip:
python -m pip install --no-binary=mpi4py mpi4py
You can also install the in-development version with:
python -m pip install git+https://github.com/mpi4py/mpi4py
or:
python -m pip install https://github.com/mpi4py/mpi4py/tarball/master
Note
Installing mpi4py from its source distribution (available on PyPI) or Git source code repository (available on GitHub) requires a C compiler and a working MPI implementation with development headers and libraries.
Warning
pip keeps previously built wheel files in its cache for future reuse. If you want to reinstall the mpi4py package from its source distribution using a different or updated MPI implementation, you have to either first remove the cached wheel file:
python -m pip cache remove mpi4py
python -m pip install --no-binary=mpi4py mpi4py
or ask pip to disable the cache:
python -m pip install --no-cache-dir --no-binary=mpi4py mpi4py
Build backends¶
mpi4py supports three different build backends: setuptools (default), scikit-build-core (CMake-based), and meson-python (Meson-based). The build backend can be selected by setting the MPI4PY_BUILD_BACKEND environment variable.
- MPI4PY_BUILD_BACKEND¶
Choices: "setuptools", "scikit-build-core", "meson-python"
Default: "setuptools"
Request a build backend for building mpi4py from sources.
Using setuptools¶
Tip
Set the MPI4PY_BUILD_BACKEND environment variable to "setuptools" to use the setuptools build backend.
When using the default setuptools build backend, mpi4py relies on the legacy Python distutils framework to build C extension modules. The following environment variables affect the build configuration.
- MPI4PY_BUILD_MPICC¶
The mpicc compiler wrapper command is searched for in the executable search path (PATH environment variable) and used to compile the mpi4py.MPI C extension module. Alternatively, set the MPI4PY_BUILD_MPICC environment variable to the full path or command corresponding to the MPI-aware C compiler.
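For instance, to point the build at a compiler wrapper installed under a non-default prefix (the path below is purely illustrative):
export MPI4PY_BUILD_MPICC=/opt/mympi/bin/mpicc
python -m pip install --no-binary=mpi4py mpi4py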
- MPI4PY_BUILD_MPILD¶
The mpicc compiler wrapper command is also used for linking the mpi4py.MPI C extension module. Alternatively, set the MPI4PY_BUILD_MPILD environment variable to specify the full path or command corresponding to the MPI-aware C linker.
- MPI4PY_BUILD_MPICFG¶
If the MPI implementation does not provide a compiler wrapper, or it is not installed in a default system location, all relevant build information like include/library locations and library lists can be provided in an ini-style configuration file under an [mpi] section. mpi4py can then be asked to use the custom build information by setting the MPI4PY_BUILD_MPICFG environment variable to the full path of the configuration file. As an example, see the mpi.cfg file located in the top level mpi4py source directory.
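A minimal configuration file might look like the sketch below (paths are illustrative; consult the mpi.cfg file shipped in the mpi4py source tree for the full set of recognized options):
[mpi]
include_dirs = /opt/mympi/include
library_dirs = /opt/mympi/lib
libraries = mpi
The build can then be pointed at it:
export MPI4PY_BUILD_MPICFG=/path/to/mpi.cfg
python -m pip install --no-binary=mpi4py mpi4py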
- MPI4PY_BUILD_CONFIGURE¶
Some vendor MPI implementations may not provide complete coverage of the MPI standard, or may provide partial features of newer MPI standard versions while advertising support for an older version. Setting the MPI4PY_BUILD_CONFIGURE environment variable to a non-empty string will trigger exhaustive checks for the availability of all MPI constants, predefined handles, and routines.
- MPI4PY_BUILD_MPIABI¶
Setting the MPI4PY_BUILD_MPIABI environment variable to "1" enables enhanced support for the MPI 5.0 standard ABI and the MPICH or Open MPI legacy ABIs. The mpi4py.MPI extension module is then able to dynamically link at runtime with older versions (down to the MPI 3.0 standard) of the corresponding MPI implementation used at compile time. This feature is only available on Linux, macOS, and Windows. POSIX-like systems other than Linux and macOS are not currently supported, although they could easily be: all that is needed is for the platform to either support weak symbols in shared modules/libraries, or support the standard POSIX dlopen()/dlsym() APIs.
- MPI4PY_BUILD_PYSABI¶
Build with the CPython 3.10+ Stable ABI. Setting the MPI4PY_BUILD_PYSABI environment variable to a string "{major}.{minor}" defines the Py_LIMITED_API value to use for building extension modules.
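For example, to target the Stable ABI of CPython 3.10 and later while building from sources:
MPI4PY_BUILD_PYSABI=3.10 python -m pip install --no-binary=mpi4py mpi4py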
The following environment variables are aliases for the ones described above. Having shorter names, they are convenient for occasional use on the command line. Their use is not recommended in automation scenarios like packaging recipes, deployment scripts, and container image creation.
- MPICC¶
Convenience alias for MPI4PY_BUILD_MPICC.
- MPILD¶
Convenience alias for MPI4PY_BUILD_MPILD.
- MPICFG¶
Convenience alias for MPI4PY_BUILD_MPICFG.
Using scikit-build-core¶
Tip
Set the MPI4PY_BUILD_BACKEND environment variable to "scikit-build-core" to use the scikit-build-core build backend.
When using the scikit-build-core build backend, mpi4py delegates all of the MPI build configuration to CMake’s FindMPI module. Besides the obvious advantage of cross-platform support, this delegation to CMake may be convenient in build environments exposing vendor software stacks via intricate module systems. Note, however, that mpi4py will not be able to look for MPI routines available beyond the MPI standard version the MPI implementation advertises to support (via the MPI_VERSION and MPI_SUBVERSION macro constants in the mpi.h header file); any missing MPI constant or symbol will prevent a successful build.
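A typical invocation simply selects the backend before building from the source distribution (a sketch; scikit-build-core and CMake must be available to the build frontend):
export MPI4PY_BUILD_BACKEND=scikit-build-core
python -m pip install --no-binary=mpi4py mpi4py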
Using meson-python¶
Tip
Set the MPI4PY_BUILD_BACKEND environment variable to "meson-python" to use the meson-python build backend.
When using the meson-python build backend, mpi4py delegates build tasks to the Meson build system.
Warning
mpi4py support for the meson-python build backend is experimental. For the time being, users must set the CC environment variable to the command or path corresponding to the mpicc C compiler wrapper.
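Putting the two requirements together, a source build with this backend might look like the following (assuming mpicc is on PATH):
export MPI4PY_BUILD_BACKEND=meson-python
export CC=mpicc
python -m pip install --no-binary=mpi4py mpi4py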