dask_mpi.core.initialize

dask_mpi.core.initialize(interface=None, nthreads=1, local_directory='', memory_limit='auto', nanny=False, dashboard=True, dashboard_address=':8787', protocol=None, worker_class='distributed.Worker', worker_options=None, comm=None, exit=True)[source]

Initialize a Dask cluster using mpi4py

Using mpi4py, MPI rank 0 launches the Scheduler, MPI rank 1 passes through to the client script, and all other MPI ranks launch workers. All MPI ranks other than MPI rank 1 block while their event loops run.

In normal operation these ranks exit once rank 1 ends. If exit=False is set, they instead return a bool indicating whether this rank is the client (and should execute more client code) or a scheduler/worker (and should not). In this case the user is responsible for checking the returned value to choose further actions, and for having the client call send_close_signal once its work is complete.
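
For example, a typical batch script calls initialize() and then creates a dask.distributed Client with no arguments, which connects to the scheduler that initialize() started. The launch command and process count below are illustrative.

    # Run with something like: mpirun -np 4 python my_script.py
    # Rank 0 becomes the scheduler, rank 1 runs the code below,
    # and ranks 2 and up become workers.
    from dask_mpi import initialize
    from dask.distributed import Client

    initialize()

    client = Client()  # connects to the scheduler started by initialize()
    futures = client.map(lambda x: x + 1, range(100))
    print(sum(client.gather(futures)))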

Parameters
interface: str

Network interface like ‘eth0’ or ‘ib0’

nthreads: int

Number of threads per worker

local_directory: str

Directory to place worker files

memory_limit: int, float, or ‘auto’

Number of bytes before spilling data to disk. This can be an integer (nbytes), float (fraction of total memory), or ‘auto’.

nanny: bool

Start workers in a nanny process for management (deprecated, use worker_class instead)

dashboard: bool

Enable Bokeh visual diagnostics

dashboard_address: str

Address on which to serve the Bokeh dashboard for visual diagnostics

protocol: str

Protocol like ‘inproc’ or ‘tcp’

worker_class: str

Class to use when creating workers

worker_options: dict

Options to pass to workers

comm: mpi4py.MPI.Intracomm

Optional MPI communicator to use instead of COMM_WORLD

exit: bool

Whether to call sys.exit on the scheduler and worker ranks when their event loops complete.
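
The sketch below illustrates passing several of the parameters above; the interface name, thread count, memory fraction, and scratch directory are arbitrary values chosen for the example, not recommendations.

    from dask_mpi import initialize

    # Illustrative values only; adjust for the cluster at hand.
    initialize(
        interface="ib0",              # use the InfiniBand interface
        nthreads=4,                   # threads per worker
        memory_limit=0.8,             # spill after 80% of total memory
        local_directory="/tmp/dask",  # where workers place their files
        protocol="tcp",
        dashboard=True,
        dashboard_address=":8787",
    )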

Returns
is_client: bool

Only returned if exit=False. Indicates whether this rank should continue to run client code (True), or whether it acts as the scheduler or a worker (False).
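
A minimal sketch of the exit=False pattern described above. It assumes send_close_signal can be imported from the top-level dask_mpi package alongside initialize; the workload is a trivial placeholder.

    from dask_mpi import initialize, send_close_signal
    from dask.distributed import Client

    is_client = initialize(exit=False)

    if is_client:
        # Only the client rank reaches this branch right away.
        with Client() as client:
            total = client.submit(sum, range(100)).result()
            print(total)
        send_close_signal()  # ask the scheduler and workers to shut down
    else:
        # Scheduler/worker ranks return False only after the close signal,
        # so cleanup common to all ranks can go here.
        pass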