
As a very simple example: if you have a traditional (non-container) Linux system and you deploy a Python app into a virtualenv on it, e.g. using Puppet or Ansible, you need to do one of the following:
1. Use no C extensions
2. Hope there's a manylinux1 binary wheel
3. Use the OS package and --system-site-packages
4. Compile the C extensions and make them available to pip (sketched below)
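If I understand #4 correctly, it boils down to something like this; the wheel directory (/srv/wheels) is just a placeholder, and it assumes a build host with gcc and the relevant -dev headers:

    # On a build host with the compilers and headers installed:
    pip wheel --wheel-dir=/srv/wheels -r requirements.txt

    # On the target host (no compiler needed), driven by Puppet/Ansible:
    pip install --no-index --find-links=/srv/wheels -r requirements.txt

The wheel directory can then be shared between projects on the same distro release and architecture, which is presumably what keeps the repeated work down.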
#2 seems useful now that I know about it, but - correct me if I'm wrong - the set of C libraries that manylinux1 wheels are allowed to depend on is tiny, and wouldn't cover e.g. cryptography or psycopg2?
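(I haven't verified this, but one way to check whether a binary wheel exists at all seems to be asking pip to refuse source distributions; this assumes a pip recent enough to have pip download:

    # Fails if no compatible binary wheel (manylinux1 or otherwise) is available:
    pip download --only-binary=:all: --dest /tmp/wheels cryptography psycopg2

)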
#4 is what you're advocating, I believe? But can we agree that for smaller projects, that might seem like a lot of repeated work if the package is already available in the OS repos?
You always say “repeated work” but I hear “small Python script”. :) We have a few Python projects (not written/maintained by me :)) that did rely on system packages “because it’s easier”. Guess what! The other day Ubuntu upgraded PyCrypto (needed by Ubuntu Trusty’s ancient Paramiko) under our butts, introduced a regression, and everything blew up: https://www.ubuntu.com/usn/usn-3199-2/ . That particular project is a virtualenv now too. The Python build chain has gotten so good on Linux by now that I really don’t even think about C extensions anymore. Windows is a different story entirely of course. :|
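Moving such a project into a virtualenv is roughly this much work (paths and version pins below are purely illustrative):

    # Isolated environment with exact pins, so an unattended distro upgrade
    # of python-crypto/python-paramiko can't change what the app imports:
    virtualenv /srv/app/venv
    /srv/app/venv/bin/pip install paramiko==1.16.0 pycrypto==2.6.1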
I'm trying to explain why, given the effort we expend locally, relying on OS packages could seem attractive to smaller sites.
Honestly, in the years I’ve been running Python services of different sizes, I have found that distro-provided system packages – unless you are writing software *for* a distribution – are loaded with so many downsides that they’re almost never worth it. They’re a shortcut and shortcuts usually bite back *eventually*. —h