On Wed, Jun 15, 2016 at 5:10 PM, Ionel Cristian Mărieș email@example.com wrote:
On Thu, Jun 16, 2016 at 2:57 AM, Donald Stufft firstname.lastname@example.org wrote:
Of course it still applies to Docker. You still have an operating system inside that container, and unless you install zero Python-using packages from the system, all of that can still conflict with your own application's dependencies.
You're correct, theoretically. But in reality it's best not to stick a dozen services or apps in a single Docker image. What's the point of using Docker if you have a single container with everything in it? E.g.: something smells if you need to run a supervisor inside the container.
If we're talking about Python packages managed by the OS package manager ... there are very few situations where it makes sense to use them (e.g. pysvn, which is especially hard to build), but other than that? Plus it's far easier to cache some wheels than to cache some apt packages when building images.
The problem is that the bits of the OS that you use inside the container might themselves be written in Python. Probably the most obvious example is that on Fedora, if you want to install a system package, then the apt equivalent (dnf) is written in Python, so sudo pip install could break your package manager. Debian-derived distros also use Python for core system stuff, and might use it more in the future. So sure, you might get away with this, depending on the details of your container and your base image and whether you religiously follow other best practices for containerization and ... -- but why risk it? Using a virtualenv is cheap and means you don't have to know or care about these potential problems, so it's what we should recommend as best practice.
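To make the recommendation concrete, here is a minimal sketch (the /tmp/appenv path is made up for illustration): give the application its own virtualenv inside the container, so pip installs land in the venv's site-packages rather than in the system interpreter that tools like dnf depend on.

```shell
#!/bin/sh
# Sketch: isolate app dependencies in a virtualenv so `pip install`
# can never touch the system Python the OS package manager relies on.
python3 -m venv /tmp/appenv

# Installs would now go into the venv, not the system site-packages, e.g.:
#   /tmp/appenv/bin/pip install yourapp
# The venv's prefix confirms the isolation:
/tmp/appenv/bin/python -c 'import sys; print(sys.prefix)'
# → /tmp/appenv
```

In a Dockerfile this is just a couple of RUN lines, and the container's entrypoint points at the venv's interpreter instead of the system one.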