NumPy memmap in joblib.Parallel

This example illustrates some of the features enabled by using a memory map (numpy.memmap) within joblib.Parallel. First, we show that dumping a huge data array to disk ahead of passing it to joblib.Parallel speeds up the computation. Then, we show how the worker processes can write their results directly into a shared, writable memmap.

Speed up processing of a large data array

We create a large data array and a set of slices over which its average will be computed.

import numpy as np

data = np.random.random((int(1e7),))
window_size = int(5e5)
slices = [slice(start, start + window_size)
          for start in range(0, data.size - window_size, int(1e5))]

The slow_mean function introduces a time.sleep() call to simulate a more expensive computation, for which parallel computing is beneficial. Parallel processing may not be beneficial for very fast operations, because of the extra overhead (worker creation, communication, etc.).

import time


def slow_mean(data, sl):
    """Simulate a time consuming processing."""
    time.sleep(0.01)
    return data[sl].mean()

First, we evaluate the sequential computation on our problem.

tic = time.time()
results = [slow_mean(data, sl) for sl in slices]
toc = time.time()
print('\nElapsed time computing the average of a couple of slices {:.2f} s'
      .format(toc - tic))
Elapsed time computing the average of a couple of slices 0.99 s

joblib.Parallel is used to compute the average of all slices in parallel, using 2 workers.

from joblib import Parallel, delayed


tic = time.time()
results = Parallel(n_jobs=2)(delayed(slow_mean)(data, sl) for sl in slices)
toc = time.time()
print('\nElapsed time computing the average of a couple of slices {:.2f} s'
      .format(toc - tic))
Elapsed time computing the average of a couple of slices 0.72 s

Parallel processing is already faster than the sequential processing. It is also possible to remove a bit of overhead by dumping the data array to a memmap and passing the memmap to joblib.Parallel.

import os
from joblib import dump, load

folder = './joblib_memmap'
try:
    os.mkdir(folder)
except FileExistsError:
    pass

data_filename_memmap = os.path.join(folder, 'data_memmap')
dump(data, data_filename_memmap)
data = load(data_filename_memmap, mmap_mode='r')

tic = time.time()
results = Parallel(n_jobs=2)(delayed(slow_mean)(data, sl) for sl in slices)
toc = time.time()
print('\nElapsed time computing the average of a couple of slices {:.2f} s\n'
      .format(toc - tic))
Elapsed time computing the average of a couple of slices 0.52 s

Therefore, dumping a large data array to a memmap ahead of calling joblib.Parallel can speed up the processing by removing some overhead.
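
Note that, with the default process-based backend, joblib.Parallel can also memmap large input arrays automatically: arrays bigger than the max_nbytes threshold are dumped to a temporary folder and reloaded by the workers with the given mmap_mode. The snippet below is only a minimal sketch of tuning that behaviour; the threshold value is illustrative.

# Arrays larger than max_nbytes are automatically dumped to a temporary
# memmap and opened by the workers with mmap_mode ('1M' is illustrative).
results_auto = Parallel(n_jobs=2, max_nbytes='1M', mmap_mode='r')(
    delayed(slow_mean)(data, sl) for sl in slices)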

Writable memmap for shared memory joblib.Parallel

slow_mean_write_output computes the mean for some given slices, as in the previous example. However, the resulting mean is written directly into the output array.

def slow_mean_write_output(data, sl, output, idx):
    """Simulate a time consuming processing."""
    time.sleep(0.005)
    res_ = data[sl].mean()
    print("[Worker %d] Mean for slice %d is %f" % (os.getpid(), idx, res_))
    output[idx] = res_

Prepare the file, inside the memmap folder created above, where the output memmap will be dumped.

output_filename_memmap = os.path.join(folder, 'output_memmap')

Pre-allocate a writable shared memory map as a container for the results of the parallel computation.

output = np.memmap(output_filename_memmap, dtype=data.dtype,
                   shape=len(slices), mode='w+')

data is replaced by its memory-mapped version. Note that the buffer has already been dumped to disk in the previous section.

data = load(data_filename_memmap, mmap_mode='r')

Fork the worker processes to perform the computation concurrently. Each call returns None because slow_mean_write_output writes its result into the shared output buffer instead of returning it.

Parallel(n_jobs=2)(delayed(slow_mean_write_output)(data, sl, output, idx)
                   for idx, sl in enumerate(slices))
[None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]

Compare the results from the output buffer with the expected results.

print("\nExpected means computed in the parent process:\n {}"
      .format(np.array(results)))
print("\nActual means computed by the worker processes:\n {}"
      .format(output))
Expected means computed in the parent process:
 [0.49965292 0.49974874 0.49943944 0.49931679 0.49946435 0.49967807
 0.49974379 0.50017189 0.49993651 0.50008239 0.49967675 0.49985648
 0.49979733 0.50005115 0.4997408  0.49989219 0.49989937 0.50017182
 0.50023661 0.50031656 0.50015995 0.50009222 0.49978472 0.49989487
 0.49972381 0.49994612 0.50003557 0.50000035 0.49992915 0.50020348
 0.50004181 0.50004251 0.50018289 0.50023478 0.50013722 0.50024537
 0.5001683  0.50023065 0.49968055 0.49965365 0.49946673 0.49950121
 0.49915857 0.49942131 0.49990244 0.49956862 0.50000764 0.50039555
 0.50045694 0.50014263 0.50044234 0.49998283 0.5001928  0.50009705
 0.50044156 0.50051449 0.50031264 0.49989217 0.50014136 0.49955727
 0.4995133  0.50001898 0.50001194 0.49997361 0.50040533 0.50058782
 0.4999523  0.49981122 0.49988069 0.49943995 0.49955917 0.50008888
 0.50038626 0.50023465 0.5004434  0.50023939 0.50007451 0.49980625
 0.49977308 0.49946885 0.49943195 0.4993972  0.49971944 0.49975733
 0.49987063 0.49989171 0.50015681 0.50010566 0.50010865 0.50058211
 0.5009235  0.50069002 0.50076474 0.50068463 0.50041682]

Actual means computed by the worker processes:
 [0.49965292 0.49974874 0.49943944 0.49931679 0.49946435 0.49967807
 0.49974379 0.50017189 0.49993651 0.50008239 0.49967675 0.49985648
 0.49979733 0.50005115 0.4997408  0.49989219 0.49989937 0.50017182
 0.50023661 0.50031656 0.50015995 0.50009222 0.49978472 0.49989487
 0.49972381 0.49994612 0.50003557 0.50000035 0.49992915 0.50020348
 0.50004181 0.50004251 0.50018289 0.50023478 0.50013722 0.50024537
 0.5001683  0.50023065 0.49968055 0.49965365 0.49946673 0.49950121
 0.49915857 0.49942131 0.49990244 0.49956862 0.50000764 0.50039555
 0.50045694 0.50014263 0.50044234 0.49998283 0.5001928  0.50009705
 0.50044156 0.50051449 0.50031264 0.49989217 0.50014136 0.49955727
 0.4995133  0.50001898 0.50001194 0.49997361 0.50040533 0.50058782
 0.4999523  0.49981122 0.49988069 0.49943995 0.49955917 0.50008888
 0.50038626 0.50023465 0.5004434  0.50023939 0.50007451 0.49980625
 0.49977308 0.49946885 0.49943195 0.4993972  0.49971944 0.49975733
 0.49987063 0.49989171 0.50015681 0.50010566 0.50010865 0.50058211
 0.5009235  0.50069002 0.50076474 0.50068463 0.50041682]
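
Rather than only inspecting the two arrays visually, they can also be compared programmatically; this small check is not part of the original example.

# The workers read the same memmapped data, so their means should match the
# ones computed in the parent process.
np.testing.assert_allclose(np.array(results), np.asarray(output))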

Clean-up the memmap

Remove the different memmaps that we created. This step might fail on Windows due to file permissions.

import shutil

try:
    shutil.rmtree(folder)
except OSError:
    print('Could not clean-up automatically.')
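
As an alternative, the memmap files could be placed in a temporary directory managed by the standard tempfile module, so that removal is handled by a context manager. This is only a sketch of that variant, not what the example above does; tmp_filename and data_tmp are illustrative names.

import tempfile

# The context manager removes the directory (and the memmap files inside)
# when the block exits; the memmap is released first to be safe on Windows.
with tempfile.TemporaryDirectory() as tmp_folder:
    tmp_filename = os.path.join(tmp_folder, 'data_memmap')
    dump(np.random.random(int(1e6)), tmp_filename)
    data_tmp = load(tmp_filename, mmap_mode='r')
    print('Mean of the temporary memmap: {:.4f}'.format(data_tmp.mean()))
    del data_tmp  # release the memmap before the directory is removed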

Total running time of the script: (0 minutes 2.697 seconds)
