I wrote a zstd module for the stdlib: https://github.com/animalize/cpython/pull/8/files

And a PyPI version based on it:

PyPI: https://pypi.org/project/pyzstd/
Doc: https://pyzstd.readthedocs.io/en/latest/

If you decide to include it in the stdlib, the work can be done in a short time.

Zstd has some advantages: fast compression and decompression, multi-threaded compression, dictionary compression for small data, etc. IMO it's suitable as a replacement for zlib, but at this time:

1. If it is included in the stdlib, it will benefit from Python's huge influence and become popular.
2. If we wait until zstd becomes popular on its own, and no better alternative appears in the meantime, unnecessary time will be wasted. (I'm +0.5 on this option. Python promoting a technology that is still on the rise is a bit strange.)

I heard that in the data science domain the data is often huge, hundreds of GB or more. If people can make full use of multi-core CPUs to compress it, the experience will be much better than with zlib. Your survey on PyPI mentioned data science; maybe you can talk to those people about this.
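To illustrate the multi-threaded point, here is a minimal sketch using the pyzstd package linked above (`CParameter.nbWorkers` enables zstd's multi-threaded compression; the import guard is only there so the sketch degrades gracefully when pyzstd is not installed):

```python
try:
    import pyzstd
except ImportError:
    pyzstd = None  # third-party package: pip install pyzstd

if pyzstd is not None:
    data = b"sample data " * 100_000

    # Advanced compression parameters: level 3, 4 worker threads.
    option = {
        pyzstd.CParameter.compressionLevel: 3,
        pyzstd.CParameter.nbWorkers: 4,
    }
    compressed = pyzstd.compress(data, option)

    # Round-trip check.
    assert pyzstd.decompress(compressed) == data
```

With `nbWorkers` set, the zstd library splits the input into jobs and compresses them on multiple cores, which is exactly the use case for the large data-science workloads mentioned above.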