Random state within joblib.Parallel

Parallel execution affects random number generation differently depending on the backend.

In particular, when using multiple processes, the random sequence can be the same in all processes. This example illustrates the problem and shows how to work around it.

import numpy as np
from joblib import Parallel, delayed


# The following are hacks to allow sphinx-gallery to run the example.
import os
import sys
sys.path.insert(0, os.getcwd())
main_dir = os.path.basename(sys.modules['__main__'].__file__)
IS_RUN_WITH_SPHINX_GALLERY = main_dir != os.getcwd()

A utility function for the example

def print_vector(vector, backend):
    """Helper function to print the generated vector with a given backend."""
    print('\nThe different generated vectors using the {} backend are:\n {}'
          .format(backend, np.array(vector)))

Sequential behavior

The function stochastic_function generates five random integers. When calling it several times, we expect to obtain different vectors. For instance, if we call the function five times sequentially, we can check that the generated vectors are all different.
def stochastic_function(max_value):
    """Randomly generate integer up to a maximum value."""
    return np.random.randint(max_value, size=5)


n_vectors = 5
random_vector = [stochastic_function(10) for _ in range(n_vectors)]
print('\nThe different generated vectors in a sequential manner are:\n {}'
      .format(np.array(random_vector)))

Out:

The different generated vectors in a sequential manner are:
 [[5 8 9 4 2]
 [3 3 8 0 3]
 [0 9 2 7 7]
 [3 0 8 6 0]
 [4 7 6 6 7]]

Parallel behavior

Joblib provides three different backends: loky (default), threading, and multiprocessing.
backend = 'loky'
random_vector = Parallel(n_jobs=2, backend=backend)(delayed(
    stochastic_function)(10) for _ in range(n_vectors))
print_vector(random_vector, backend)

Out:

The different generated vectors using the loky backend are:
 [[4 3 9 8 7]
 [2 2 4 2 6]
 [8 8 9 1 7]
 [1 2 6 6 6]
 [2 5 5 7 5]]
backend = 'threading'
random_vector = Parallel(n_jobs=2, backend=backend)(delayed(
    stochastic_function)(10) for _ in range(n_vectors))
print_vector(random_vector, backend)

Out:

The different generated vectors using the threading backend are:
 [[3 9 4 8 5]
 [4 8 7 5 7]
 [3 0 7 7 8]
 [8 2 4 2 8]
 [0 0 9 1 1]]

The loky and threading backends behave exactly as in the sequential case and do not require special care. However, this is not the case with the multiprocessing backend.
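The thread-based case can be understood with a short sketch that is not part of the original example: all threads share the interpreter's single global RandomState, which numpy protects with an internal lock, so concurrent calls simply interleave draws from one sequential stream.

```python
import threading

import numpy as np

np.random.seed(0)
results = []


def draw():
    # Each call atomically consumes five values from the shared stream.
    results.append(np.random.randint(10, size=5))


threads = [threading.Thread(target=draw) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The four vectors are a permutation of what four sequential calls
# starting from the same seed would have produced.
np.random.seed(0)
sequential = [np.random.randint(10, size=5) for _ in range(4)]
assert sorted(map(tuple, results)) == sorted(map(tuple, sequential))
```

Thread scheduling may permute the order of the vectors, but never their contents, which is why the threading backend needs no special seeding.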

if IS_RUN_WITH_SPHINX_GALLERY:
    # When this example is run with sphinx gallery, it breaks the pickling
    # capacity for multiprocessing backend so we have to modify the way we
    # define our functions. This has nothing to do with the example.
    from utils import stochastic_function

backend = 'multiprocessing'
random_vector = Parallel(n_jobs=2, backend=backend)(delayed(
    stochastic_function)(10) for _ in range(n_vectors))
print_vector(random_vector, backend)

Out:

The different generated vectors using the multiprocessing backend are:
 [[7 9 3 2 7]
 [7 9 3 2 7]
 [0 6 9 0 1]
 [0 6 9 0 1]
 [4 7 8 1 5]]

Some of the generated vectors are exactly the same, which can be a problem for the application.

Technically, the reason is that all forked Python processes share the same exact random seed. As a result, since we are using n_jobs=2, some randomly generated vectors are obtained twice. A solution is to set the random state within the function passed to joblib.Parallel.
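The shared-state behaviour can be reproduced in isolation with the standard library alone. The following sketch is not part of the original example and assumes a POSIX system where the 'fork' start method is available: both children inherit the parent's global RNG state at fork time, so their first draws are identical.

```python
import multiprocessing as mp

import numpy as np


def draw(queue):
    # The forked child inherits the parent's global RNG state verbatim,
    # so its first draw matches that of any sibling forked at the same time.
    queue.put(np.random.randint(10, size=5).tolist())


# 'fork' copies the parent process (and its RNG state) into each child.
ctx = mp.get_context('fork')
queue = ctx.Queue()
processes = [ctx.Process(target=draw, args=(queue,)) for _ in range(2)]
for p in processes:
    p.start()
vectors = [queue.get() for _ in range(2)]
for p in processes:
    p.join()

# Both children started from the same state, so the vectors are equal.
print(vectors[0] == vectors[1])  # prints True
```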

def stochastic_function_seeded(max_value, random_state):
    """Randomly generate integers up to a maximum value, from a given seed."""
    rng = np.random.RandomState(random_state)
    return rng.randint(max_value, size=5)


if IS_RUN_WITH_SPHINX_GALLERY:
    # When this example is run with sphinx gallery, it breaks the pickling
    # capacity for multiprocessing backend so we have to modify the way we
    # define our functions. This has nothing to do with the example.
    from utils import stochastic_function_seeded  # noqa: F811

stochastic_function_seeded accepts a random seed as an argument. Passing None draws a fresh seed at every function call, even inside forked worker processes. In this case, we see that the generated vectors are all different.

random_vector = Parallel(n_jobs=2, backend=backend)(delayed(
    stochastic_function_seeded)(10, None) for _ in range(n_vectors))
print_vector(random_vector, backend)

Out:

The different generated vectors using the multiprocessing backend are:
 [[6 1 0 8 2]
 [5 8 7 3 6]
 [6 0 5 9 6]
 [5 7 8 5 2]
 [8 5 0 3 9]]
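The same pattern can be written with NumPy's newer Generator API (numpy >= 1.17). The variant below is a sketch that is not part of the original example; np.random.default_rng(None) seeds itself from fresh OS entropy on every call, and the default loky backend is used here.

```python
import numpy as np
from joblib import Parallel, delayed


def stochastic_function_rng(max_value, seed):
    # default_rng(None) pulls fresh entropy from the OS at each call,
    # so every task gets an independent stream.
    rng = np.random.default_rng(seed)
    return rng.integers(max_value, size=5)


random_vector = Parallel(n_jobs=2)(
    delayed(stochastic_function_rng)(10, None) for _ in range(5))
```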

Fixing the random state to obtain deterministic results

The pattern of stochastic_function_seeded has another advantage: it allows controlling the random_state by passing a known seed. For instance, we can replicate the same generation of vectors by passing a fixed state, as follows.
random_state = np.random.randint(np.iinfo(np.int32).max, size=n_vectors)

random_vector = Parallel(n_jobs=2, backend=backend)(delayed(
    stochastic_function_seeded)(10, rng) for rng in random_state)
print_vector(random_vector, backend)

random_vector = Parallel(n_jobs=2, backend=backend)(delayed(
    stochastic_function_seeded)(10, rng) for rng in random_state)
print_vector(random_vector, backend)

Out:

The different generated vectors using the multiprocessing backend are:
 [[7 6 4 7 5]
 [7 9 1 4 9]
 [3 7 2 0 4]
 [5 0 5 4 2]
 [8 4 6 6 2]]

The different generated vectors using the multiprocessing backend are:
 [[7 6 4 7 5]
 [7 9 1 4 9]
 [3 7 2 0 4]
 [5 0 5 4 2]
 [8 4 6 6 2]]
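With the newer Generator API, NumPy also offers SeedSequence.spawn, which derives statistically independent child seeds from a single root seed. The sketch below is not part of the original example; it uses the default loky backend and shows that two runs over the same spawned seeds produce identical vectors.

```python
import numpy as np
from joblib import Parallel, delayed


def stochastic_function_spawned(max_value, seed_seq):
    # A SeedSequence (or a child spawned from one) fully determines the
    # Generator state, so the results are reproducible.
    rng = np.random.default_rng(seed_seq)
    return rng.integers(max_value, size=5)


root = np.random.SeedSequence(42)   # one root seed for the whole run
child_seeds = root.spawn(5)         # independent child seeds, one per task

run_1 = Parallel(n_jobs=2)(
    delayed(stochastic_function_spawned)(10, seed) for seed in child_seeds)
run_2 = Parallel(n_jobs=2)(
    delayed(stochastic_function_spawned)(10, seed) for seed in child_seeds)
# Both runs use the same fixed child seeds, so the vectors are identical.
```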

Total running time of the script: ( 0 minutes 1.082 seconds)
