NumPy memmap in joblib.Parallel¶
This example illustrates some features enabled by using a memory map
(numpy.memmap) within joblib.Parallel. First, we show that dumping a huge
data array ahead of passing it to joblib.Parallel speeds up computation.
Then, we show how to provide write access to the original data.
Speed up processing of a large data array¶
We create a large data array for which the average is computed for several slices.
import numpy as np

data = np.random.random((int(1e7),))
window_size = int(5e5)
slices = [slice(start, start + window_size)
          for start in range(0, data.size - window_size, int(1e5))]
The slow_mean function introduces a time.sleep() call to simulate a more
expensive computation cost, for which parallel computing is beneficial.
Parallel computing may not be beneficial for very fast operations, due to
the extra overhead (worker creation, communication, etc.).
import time


def slow_mean(data, sl):
    """Simulate a time consuming processing."""
    time.sleep(0.01)
    return data[sl].mean()
First, we evaluate the sequential computation on our problem.
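The sequential timing loop is not reproduced on this page; a minimal sketch,
mirroring the tic/toc pattern and print format used for the memmap run below:

tic = time.time()
results = [slow_mean(data, sl) for sl in slices]
toc = time.time()

print('\nElapsed time computing the average of a couple of slices {:.2f} s'
      .format(toc - tic))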
Elapsed time computing the average of a couple of slices 0.98 s
joblib.Parallel is then used to compute the average of all slices in
parallel, using 2 workers.
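This is the same Parallel call reused in the memmap run below; a
self-contained sketch including the required imports:

from joblib import Parallel, delayed

tic = time.time()
results = Parallel(n_jobs=2)(delayed(slow_mean)(data, sl) for sl in slices)
toc = time.time()

print('\nElapsed time computing the average of a couple of slices {:.2f} s'
      .format(toc - tic))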
Elapsed time computing the average of a couple of slices 0.70 s
Parallel processing is already faster than sequential processing. It is also
possible to remove a bit of overhead by dumping the data array to a memmap
and passing the memmap to joblib.Parallel.
import os
from joblib import dump, load

folder = './joblib_memmap'
try:
    os.mkdir(folder)
except FileExistsError:
    pass

data_filename_memmap = os.path.join(folder, 'data_memmap')
dump(data, data_filename_memmap)

data = load(data_filename_memmap, mmap_mode='r')
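Loading with mmap_mode='r' gives back a read-only memory-mapped view of the
file rather than an in-memory copy; a quick sanity check, assuming the array
was dumped as a single plain numpy array as above:

print(type(data))  # expected: <class 'numpy.memmap'>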
tic = time.time()
results = Parallel(n_jobs=2)(delayed(slow_mean)(data, sl) for sl in slices)
toc = time.time()

print('\nElapsed time computing the average of a couple of slices {:.2f} s\n'
      .format(toc - tic))
Elapsed time computing the average of a couple of slices 0.51 s
Therefore, dumping a large data array ahead of calling joblib.Parallel can
speed up the processing by removing some overhead.
Clean-up the memmap¶
Remove the memmap files that we created. This might fail on Windows due to
file permissions.
import shutil

try:
    shutil.rmtree(folder)
except:  # noqa
    print('Could not clean-up automatically.')
Total running time of the script: (0 minutes 2.631 seconds)