.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/compressors_comparison.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_auto_examples_compressors_comparison.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_compressors_comparison.py:

===============================
Improving I/O using compressors
===============================

This example compares the compressors available in Joblib. Only the Zlib,
LZMA and LZ4 compression methods are used here, but Joblib also supports BZ2
and GZip.

For each compression method, the example dumps and reloads a dataset fetched
from an online machine-learning database. This yields three measurements: the
size of the compressed data on disk, the time spent dumping it, and the time
spent reloading it from disk.

.. GENERATED FROM PYTHON SOURCE LINES 14-19

.. code-block:: default


    import os
    import os.path
    import time


.. GENERATED FROM PYTHON SOURCE LINES 20-25

Get some data from real-world use cases
----------------------------------------

First fetch the benchmark dataset from an online machine-learning database
and load it into a pandas dataframe.

.. GENERATED FROM PYTHON SOURCE LINES 25-38

.. code-block:: default


    import pandas as pd

    url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
           "kddcup99-mld/kddcup.data.gz")
    names = ("duration, protocol_type, service, flag, src_bytes, "
             "dst_bytes, land, wrong_fragment, urgent, hot, "
             "num_failed_logins, logged_in, num_compromised, "
             "root_shell, su_attempted, num_root, "
             "num_file_creations, ").split(', ')

    data = pd.read_csv(url, names=names, nrows=1e6)


.. GENERATED FROM PYTHON SOURCE LINES 39-43

Dump and load the dataset without compression
-----------------------------------------------

This gives reference values for later comparison.

.. GENERATED FROM PYTHON SOURCE LINES 43-48

.. code-block:: default


    from joblib import dump, load

    pickle_file = './pickle_data.joblib'


.. GENERATED FROM PYTHON SOURCE LINES 49-50

Start by measuring the time spent dumping the raw data:

.. GENERATED FROM PYTHON SOURCE LINES 50-56

.. code-block:: default


    start = time.time()
    with open(pickle_file, 'wb') as f:
        dump(data, f)
    raw_dump_duration = time.time() - start
    print("Raw dump duration: %0.3fs" % raw_dump_duration)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Raw dump duration: 0.177s


.. GENERATED FROM PYTHON SOURCE LINES 57-58

Then measure the size of the raw dumped data on disk:

.. GENERATED FROM PYTHON SOURCE LINES 58-61

.. code-block:: default


    raw_file_size = os.stat(pickle_file).st_size / 1e6
    print("Raw dump file size: %0.3fMB" % raw_file_size)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Raw dump file size: 167.218MB


.. GENERATED FROM PYTHON SOURCE LINES 62-63

Finally measure the time spent loading the raw data:

.. GENERATED FROM PYTHON SOURCE LINES 63-69

.. code-block:: default


    start = time.time()
    with open(pickle_file, 'rb') as f:
        load(f)
    raw_load_duration = time.time() - start
    print("Raw load duration: %0.3fs" % raw_load_duration)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Raw load duration: 0.113s

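As a side note (not part of the original benchmark), ``dump`` and ``load``
also accept a file path directly, so opening the file by hand as above is
only one way of doing this. A minimal equivalent sketch:

.. code-block:: python

    # Side note (not in the original benchmark): joblib's dump/load also
    # accept a filename or pathlib.Path directly, avoiding manual file
    # handling. This is equivalent to the open()/dump() pattern above.
    from joblib import dump, load

    dump(data, pickle_file)            # writes ./pickle_data.joblib
    data_reloaded = load(pickle_file)  # reads it back
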
.. GENERATED FROM PYTHON SOURCE LINES 70-75

Dump and load the dataset using the Zlib compression method
-------------------------------------------------------------

The compression level uses the default value, 3, which is, in general, a good
compromise between compression and speed.

.. GENERATED FROM PYTHON SOURCE LINES 77-78

Start by measuring the time spent dumping the zlib data:

.. GENERATED FROM PYTHON SOURCE LINES 78-85

.. code-block:: default


    start = time.time()
    with open(pickle_file, 'wb') as f:
        dump(data, f, compress='zlib')
    zlib_dump_duration = time.time() - start
    print("Zlib dump duration: %0.3fs" % zlib_dump_duration)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Zlib dump duration: 0.674s


.. GENERATED FROM PYTHON SOURCE LINES 86-87

Then measure the size of the zlib dump data on disk:

.. GENERATED FROM PYTHON SOURCE LINES 87-91

.. code-block:: default


    zlib_file_size = os.stat(pickle_file).st_size / 1e6
    print("Zlib file size: %0.3fMB" % zlib_file_size)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Zlib file size: 3.944MB


.. GENERATED FROM PYTHON SOURCE LINES 92-93

Finally measure the time spent loading the compressed dataset:

.. GENERATED FROM PYTHON SOURCE LINES 93-100

.. code-block:: default


    start = time.time()
    with open(pickle_file, 'rb') as f:
        load(f)
    zlib_load_duration = time.time() - start
    print("Zlib load duration: %0.3fs" % zlib_load_duration)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Zlib load duration: 0.319s


.. GENERATED FROM PYTHON SOURCE LINES 101-106

.. note:: The compression format is detected automatically by Joblib: it is
   identified by the standard magic number present at the beginning of the
   file, which Joblib reads to determine the compression method used. This is
   the case for all compression methods supported by Joblib (a short sketch
   after the LZMA measurements below illustrates this).

.. GENERATED FROM PYTHON SOURCE LINES 108-115

Dump and load the dataset using the LZMA compression method
-------------------------------------------------------------

The LZMA compression method has a very good compression rate, but at the cost
of being very slow. In this example, a light compression level, e.g. 3, is
used to speed up the dump/load cycle a bit.

.. GENERATED FROM PYTHON SOURCE LINES 117-118

Start by measuring the time spent dumping the lzma data:

.. GENERATED FROM PYTHON SOURCE LINES 118-125

.. code-block:: default


    start = time.time()
    with open(pickle_file, 'wb') as f:
        dump(data, f, compress=('lzma', 3))
    lzma_dump_duration = time.time() - start
    print("LZMA dump duration: %0.3fs" % lzma_dump_duration)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    LZMA dump duration: 2.054s


.. GENERATED FROM PYTHON SOURCE LINES 126-127

Then measure the size of the lzma dump data on disk:

.. GENERATED FROM PYTHON SOURCE LINES 127-131

.. code-block:: default


    lzma_file_size = os.stat(pickle_file).st_size / 1e6
    print("LZMA file size: %0.3fMB" % lzma_file_size)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    LZMA file size: 2.119MB


.. GENERATED FROM PYTHON SOURCE LINES 132-133

Finally measure the time spent loading the lzma data:

.. GENERATED FROM PYTHON SOURCE LINES 133-140

.. code-block:: default


    start = time.time()
    with open(pickle_file, 'rb') as f:
        load(f)
    lzma_load_duration = time.time() - start
    print("LZMA load duration: %0.3fs" % lzma_load_duration)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    LZMA load duration: 0.425s

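As mentioned in the note above, Joblib identifies the compression format from
the magic number at the start of the file. The following is an illustrative
sketch only (not part of the original example): it simply reads the first
bytes of the last dump; the exact byte values depend on the compressor used.

.. code-block:: python

    # Illustrative only (not in the original example): the dump written above
    # starts with the magic number of the compressor that produced it, which
    # is what Joblib inspects when reloading. The exact byte values depend on
    # the compression method last used to write pickle_file.
    with open(pickle_file, 'rb') as f:
        first_bytes = f.read(8)
    print("First bytes of the dumped file: %r" % (first_bytes,))
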
.. GENERATED FROM PYTHON SOURCE LINES 141-147

Dump and load the dataset using the LZ4 compression method
------------------------------------------------------------

The LZ4 compression method is known to be one of the fastest available
compression methods, but with a compression rate a bit lower than Zlib. In
most cases, this method is a good choice.

.. GENERATED FROM PYTHON SOURCE LINES 149-152

.. note:: In order to use LZ4 compression with Joblib, the
   `lz4 <https://pypi.org/project/lz4/>`_ package must be installed on the
   system.

.. GENERATED FROM PYTHON SOURCE LINES 154-155

Start by measuring the time spent dumping the lz4 data:

.. GENERATED FROM PYTHON SOURCE LINES 155-162

.. code-block:: default


    start = time.time()
    with open(pickle_file, 'wb') as f:
        dump(data, f, compress='lz4')
    lz4_dump_duration = time.time() - start
    print("LZ4 dump duration: %0.3fs" % lz4_dump_duration)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    LZ4 dump duration: 0.114s


.. GENERATED FROM PYTHON SOURCE LINES 163-164

Then measure the size of the lz4 dump data on disk:

.. GENERATED FROM PYTHON SOURCE LINES 164-168

.. code-block:: default


    lz4_file_size = os.stat(pickle_file).st_size / 1e6
    print("LZ4 file size: %0.3fMB" % lz4_file_size)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    LZ4 file size: 6.259MB


.. GENERATED FROM PYTHON SOURCE LINES 169-170

Finally measure the time spent loading the lz4 data:

.. GENERATED FROM PYTHON SOURCE LINES 170-177

.. code-block:: default


    start = time.time()
    with open(pickle_file, 'rb') as f:
        load(f)
    lz4_load_duration = time.time() - start
    print("LZ4 load duration: %0.3fs" % lz4_load_duration)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    LZ4 load duration: 0.182s


.. GENERATED FROM PYTHON SOURCE LINES 178-180

Comparing the results
---------------------

.. GENERATED FROM PYTHON SOURCE LINES 180-202

.. code-block:: default


    import numpy as np
    import matplotlib.pyplot as plt

    N = 4
    load_durations = (raw_load_duration, lz4_load_duration,
                      zlib_load_duration, lzma_load_duration)
    dump_durations = (raw_dump_duration, lz4_dump_duration,
                      zlib_dump_duration, lzma_dump_duration)
    file_sizes = (raw_file_size, lz4_file_size, zlib_file_size, lzma_file_size)
    ind = np.arange(N)
    width = 0.5

    plt.figure(1, figsize=(5, 4))
    p1 = plt.bar(ind, dump_durations, width)
    p2 = plt.bar(ind, load_durations, width, bottom=dump_durations)
    plt.ylabel('Time in seconds')
    plt.title('Dump and load durations')
    plt.xticks(ind, ('Raw', 'LZ4', 'Zlib', 'LZMA'))
    plt.yticks(np.arange(0, lzma_load_duration + lzma_dump_duration))
    plt.legend((p1[0], p2[0]), ('Dump duration', 'Load duration'))


.. image-sg:: /auto_examples/images/sphx_glr_compressors_comparison_001.png
   :alt: Dump and load durations
   :srcset: /auto_examples/images/sphx_glr_compressors_comparison_001.png
   :class: sphx-glr-single-img


.. GENERATED FROM PYTHON SOURCE LINES 203-213

Compared with the other compressors, LZ4 is clearly the fastest, especially
for dumping compressed data to disk. In this particular case, it can even be
faster than a raw dump.

Also note that dump and load durations depend on the I/O speed of the
underlying storage: for example, with SSDs the LZ4 compression will be
slightly slower than a raw dump/load, whereas with spinning hard disk drives
(HDD) or remote storage (NFS), LZ4 is faster in general.

LZMA and Zlib, even if always slower for dumping data, are quite fast when
re-loading compressed data from disk.
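For readers who prefer numbers to bar charts, the same measurements can also
be printed as a small text table. This is not part of the original example;
the sketch below simply assumes the duration and size variables measured
above are still in scope.

.. code-block:: python

    # Illustrative summary printout (not in the original example): reuse the
    # durations and file sizes measured above to print a small text table.
    summary = (
        ("Raw", raw_dump_duration, raw_load_duration, raw_file_size),
        ("LZ4", lz4_dump_duration, lz4_load_duration, lz4_file_size),
        ("Zlib", zlib_dump_duration, zlib_load_duration, zlib_file_size),
        ("LZMA", lzma_dump_duration, lzma_load_duration, lzma_file_size),
    )
    print("%-6s %10s %10s %12s" % ("method", "dump (s)", "load (s)", "size (MB)"))
    for name, dump_s, load_s, size_mb in summary:
        print("%-6s %10.3f %10.3f %12.3f" % (name, dump_s, load_s, size_mb))
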
.. GENERATED FROM PYTHON SOURCE LINES 213-219

.. code-block:: default


    plt.figure(2, figsize=(5, 4))
    plt.bar(ind, file_sizes, width, log=True)
    plt.ylabel('File size in MB')
    plt.xticks(ind, ('Raw', 'LZ4', 'Zlib', 'LZMA'))


.. image-sg:: /auto_examples/images/sphx_glr_compressors_comparison_002.png
   :alt: compressors comparison
   :srcset: /auto_examples/images/sphx_glr_compressors_comparison_002.png
   :class: sphx-glr-single-img


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    ([<matplotlib.axis.XTick object at ...>, <matplotlib.axis.XTick object at ...>, <matplotlib.axis.XTick object at ...>, <matplotlib.axis.XTick object at ...>], [Text(0, 0, 'Raw'), Text(1, 0, 'LZ4'), Text(2, 0, 'Zlib'), Text(3, 0, 'LZMA')])


.. GENERATED FROM PYTHON SOURCE LINES 220-223

Compressed data obviously takes a lot less space on disk than raw data. LZMA
is the best compression method in terms of compression rate. Zlib also has a
better compression rate than LZ4.

.. GENERATED FROM PYTHON SOURCE LINES 223-226

.. code-block:: default


    plt.show()


.. GENERATED FROM PYTHON SOURCE LINES 227-229

Clear the pickle file
---------------------

.. GENERATED FROM PYTHON SOURCE LINES 229-232

.. code-block:: default


    import os
    os.remove(pickle_file)


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes  8.794 seconds)


.. _sphx_glr_download_auto_examples_compressors_comparison.py:

.. only:: html

  .. container:: sphx-glr-footer
     :class: sphx-glr-footer-example


    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: compressors_comparison.py <compressors_comparison.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: compressors_comparison.ipynb <compressors_comparison.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_