mpi4py.util.pkl5¶
Added in version 3.1.0.
pickle protocol 5 (see PEP 574) introduced support for out-of-band buffers,
allowing for more efficient handling of certain object types with large memory
footprints.
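For illustration only, the out-of-band mechanism can be exercised with the standard pickle module alone: a buffer_callback collects the large payloads separately from the (small) pickle stream, and the same buffers are handed back at load time. The sketch below is unrelated to MPI; the NumPy array is just a convenient buffer-exposing object.
import pickle
import numpy as np

obj = np.arange(16, dtype='i4')

# Serialize with protocol 5; large buffers are handed to the callback
# instead of being copied into the pickle stream (out-of-band).
buffers = []
data = pickle.dumps(obj, protocol=5, buffer_callback=buffers.append)

# 'data' is a small in-band stream; 'buffers' holds PickleBuffer views
# of the array payload, which can be transferred without extra copies.
copy = pickle.loads(data, buffers=buffers)
assert (copy == obj).all()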
MPI for Python uses the traditional in-band handling of buffers. This approach is appropriate for communicating non-buffer Python objects, or buffer-like objects with small memory footprints. For point-to-point communication, in-band buffer handling allows for the communication of a pickled stream with a single MPI message, at the expense of additional CPU and memory overhead in the pickling and unpickling steps.
The mpi4py.util.pkl5 module provides communicator wrapper classes
reimplementing pickle-based point-to-point and collective communication methods
using pickle protocol 5. Handling out-of-band buffers necessarily involves
multiple MPI messages, thus increasing latency and hurting performance for
small-size data. However, for large-size data, the zero-copy savings of
out-of-band buffer handling more than offset the extra latency costs.
Additionally, these wrapper methods overcome the infamous 2 GiB message count
limit (MPI-1 to MPI-3).
Note
Support for pickle protocol 5 is available in the pickle module of the Python
standard library since Python 3.8. Previous Python 3 releases can use the
pickle5 backport, which is available on PyPI and can be installed with:
python -m pip install pickle5
- class mpi4py.util.pkl5.Request¶
Request.
Custom request class for nonblocking communications.
Note
Request is not a subclass of mpi4py.MPI.Request.
- get_status(status=None)¶
Non-destructive test for the completion of a request.
- test(status=None)¶
Test for the completion of a request.
- wait(status=None)¶
Wait for a request to complete.
- classmethod get_status_all(requests, statuses=None)¶
Non-destructive test for the completion of all requests.
- classmethod testall(requests, statuses=None)¶
Test for the completion of all requests.
- classmethod waitall(requests, statuses=None)¶
Wait for all requests to complete.
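The classmethod variants operate on sequences of requests. A minimal sketch, assuming a ring exchange like the one in the Examples section below (the payload tuples are made up for illustration):
from mpi4py import MPI
from mpi4py.util import pkl5

comm = pkl5.Intracomm(MPI.COMM_WORLD)
size = comm.Get_size()
rank = comm.Get_rank()
dst = (rank + 1) % size
src = (rank - 1) % size

# Post a couple of nonblocking sends, then receive the matching messages.
sreqs = [comm.isend(('ping', rank, tag), dst, tag=tag) for tag in (1, 2)]
robjs = [comm.recv(None, src, tag=tag) for tag in (1, 2)]

done = pkl5.Request.get_status_all(sreqs)  # non-destructive completion check
pkl5.Request.waitall(sreqs)                # complete all send requests

assert all(obj[1] == src for obj in robjs)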
- class mpi4py.util.pkl5.Message¶
Message.
Custom message class for matching probes.
Note
Message is not a subclass of mpi4py.MPI.Message.
- recv(status=None)¶
Blocking receive of matched message.
- classmethod probe(comm, source=ANY_SOURCE, tag=ANY_TAG, status=None)¶
Blocking test for a matched message.
- classmethod iprobe(comm, source=ANY_SOURCE, tag=ANY_TAG, status=None)¶
Nonblocking test for a matched message.
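A minimal sketch of matched probing with these methods, assuming a run with at least two ranks (the payload and tag are arbitrary): rank 0 sends an object, rank 1 matches it with Message.probe and then receives it from the returned message handle.
from mpi4py import MPI
from mpi4py.util import pkl5

comm = pkl5.Intracomm(MPI.COMM_WORLD)
rank = comm.Get_rank()

if rank == 0:
    comm.send({'answer': 42}, dest=1, tag=7)
elif rank == 1:
    status = MPI.Status()
    # Blocking matched probe: returns a Message handle and fills 'status'.
    msg = pkl5.Message.probe(comm, source=0, tag=7, status=status)
    assert status.Get_source() == 0 and status.Get_tag() == 7
    obj = msg.recv()
    assert obj == {'answer': 42}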
- class mpi4py.util.pkl5.Comm¶
Communicator.
Base communicator wrapper class.
- send(obj, dest, tag=0)¶
Blocking send in standard mode.
- bsend(obj, dest, tag=0)¶
Blocking send in buffered mode.
- ssend(obj, dest, tag=0)¶
Blocking send in synchronous mode.
- isend(obj, dest, tag=0)¶
Nonblocking send in standard mode.
- ibsend(obj, dest, tag=0)¶
Nonblocking send in buffered mode.
- issend(obj, dest, tag=0)¶
Nonblocking send in synchronous mode.
- recv(buf=None, source=ANY_SOURCE, tag=ANY_TAG, status=None)¶
Blocking receive.
- irecv(buf=None, source=ANY_SOURCE, tag=ANY_TAG)¶
Nonblocking receive.
Warning
This method cannot be supported reliably and raises RuntimeError.
- sendrecv(sendobj, dest, sendtag=0, recvbuf=None, source=ANY_SOURCE, recvtag=ANY_TAG, status=None)¶
Send and receive.
- mprobe(source=ANY_SOURCE, tag=ANY_TAG, status=None)¶
Blocking test for a matched message.
- improbe(source=ANY_SOURCE, tag=ANY_TAG, status=None)¶
Nonblocking test for a matched message.
- bcast(obj, root=0)¶
Broadcast.
Added in version 3.1.0.
- gather(sendobj, root=0)¶
Gather.
Added in version 4.0.0.
- scatter(sendobj, root=0)¶
Scatter.
Added in version 4.0.0.
- allgather(sendobj)¶
Gather to All.
Added in version 4.0.0.
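A minimal sketch of the collective wrappers, assuming they follow the usual mpi4py return conventions (gather returns a list only at the root; scatter expects one item per rank at the root); the payloads are arbitrary:
import numpy as np
from mpi4py import MPI
from mpi4py.util import pkl5

comm = pkl5.Intracomm(MPI.COMM_WORLD)
size = comm.Get_size()
rank = comm.Get_rank()

# bcast: every rank gets the object provided at the root.
cfg = comm.bcast({'step': 1} if rank == 0 else None, root=0)
assert cfg == {'step': 1}

# scatter: the root provides one item per rank.
part = comm.scatter([np.full(4, i, dtype='i4') for i in range(size)]
                    if rank == 0 else None, root=0)
assert (part == rank).all()

# gather/allgather: collect one object from every rank.
ranks = comm.gather(rank, root=0)
if rank == 0:
    assert ranks == list(range(size))
assert comm.allgather(rank) == list(range(size))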
- class mpi4py.util.pkl5.Intracomm¶
Intracommunicator.
Intracommunicator wrapper class.
- class mpi4py.util.pkl5.Intercomm¶
Intercommunicator.
Intercommunicator wrapper class.
Examples¶
import numpy as np
from mpi4py import MPI
from mpi4py.util import pkl5

comm = pkl5.Intracomm(MPI.COMM_WORLD)  # comm wrapper
size = comm.Get_size()
rank = comm.Get_rank()
dst = (rank + 1) % size
src = (rank - 1) % size

sobj = np.full(1024**3, rank, dtype='i4')  # > 4 GiB
sreq = comm.isend(sobj, dst, tag=42)
robj = comm.recv(None, src, tag=42)
sreq.Free()

assert np.min(robj) == src
assert np.max(robj) == src

import numpy as np
from mpi4py import MPI
from mpi4py.util import pkl5

comm = pkl5.Intracomm(MPI.COMM_WORLD)  # comm wrapper
size = comm.Get_size()
rank = comm.Get_rank()
dst = (rank + 1) % size
src = (rank - 1) % size

sobj = np.full(1024**3, rank, dtype='i4')  # > 4 GiB
sreq = comm.isend(sobj, dst, tag=42)

status = MPI.Status()
rmsg = comm.mprobe(status=status)
assert status.Get_source() == src
assert status.Get_tag() == 42
rreq = rmsg.irecv()
robj = rreq.wait()

sreq.Free()
assert np.max(robj) == src
assert np.min(robj) == src