Have virtual environments led to neglect of the actual environment?

I was reading a discussion thread <https://gist.github.com/tiran/2dec9e03c6f901814f6d1e8dad09528e> about various issues with the Debian-packaged version of Python, and the following statement stood out for me as shocking. Christian Heimes wrote:
Core dev and PyPA has spent a lot of effort in promoting venv because we don't want users to break their operating system with sudo pip install.
I don't think sudo pip install should break the operating system. And I think if it does, that problem should be solved rather than merely advising users against using it.

And why is it, anyway, that distributions whose package managers can't coexist with pip-installed packages don't ever seem to get the same amount of flak for "damaging python's brand" as Debian is getting from some of the people in the discussion thread? Why is it that this community is resigned to recommending a workaround when distributions decide the site-packages directory belongs to their package manager rather than pip, instead of bringing the same amount of fiery condemnation of that practice as we apparently have for *checks notes* splitting parts of the stdlib into optional packages? Why demand that pip be present if we're not going to demand that it works properly?

I think that installing packages into the actual python installation, both via distribution packaging tools and pip [and using both simultaneously - the Debian model of separated dist-packages and site-packages folders seems like a reasonable solution to this problem] can and should be a supported paradigm, and that virtual environments [or more extreme measures such as shipping an entire python installation as part of an application's deployment] should ideally be reserved for the rare corner cases where that doesn't work for some reason.

How is it that virtual environments have become so indispensable that no-one considers installing libraries centrally to be a viable model anymore? Are library maintainers making breaking changes too frequently, reasoning that if someone needs the old version they can just venv it? Is there some other cause?

I think it's a classic case of dependency hell. OS packagers are rebundling Python packages as OS packages and expressing their own OS-package dependency graphs. Then, you sudo pip install something that has a conflicting dependency, it bypasses OS packaging, and *boom*.

I find tools like pipx go a long way to solve this, as they install a Python package and all of its dependencies in its own venv. This is great for Python apps, and (kinda) treats them like apps on platforms like Android, where all app dependencies are bundled and isolated from others. I think it would be great if OS vendors did something similar to pipx for Python-based apps: bundle the app and all of its dependencies into its own venv.

On Tue, 2021-02-23 at 19:45 -0500, Random832 wrote:
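
For readers who haven't used it, a minimal sketch of the pipx model described above (the package name is just an example; the commands are standard pipx usage):

```bash
python3 -m pip install --user pipx   # get pipx itself
pipx install httpie                  # creates a dedicated venv just for httpie
pipx list                            # shows each app and the venv it lives in
pipx upgrade-all                     # each app is upgraded inside its own venv
```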

I love pipx and I'm glad it exists at this point. The main issue is that each virtualenv takes space, lots of space. I have currently 57 apps installed via pipx on my laptop, and the 57 environments take almost 1 GB already.

    ~ cd .local/pipx/venvs/
    ~/.l/p/venvs ls
    abilian-tools/   concentration/   gitlabber/    pygount/     sphinx/
    ansible/         cookiecutter/    httpie/       pyinfra/     tentakel/
    assertize/       cruft/           isort/        pylint/      tlv/
    autoflake/       cython/          jupyterlab/   pyre-check/  towncrier/
    black/           dephell/         lektor/       pytype/      tox/
    borgbackup/      docformatter/    md2pdf/       pyupgrade/   twine/
    borgmatic/       flake8/          medikit/      radon/       virtualenv/
    bpytop/          flit/            mypy/         re-ver/      virtualfish/
    check-manifest/  flynt/           nox/          sailboat/    vulture/
    clone-github/    gh-clone/        pdoc3/        salvo/
    cloneall/        ghtop/           pdocs/        shed/
    com2ann/         gitchangelog/    pybetter/     sixer/
    ~/.l/p/venvs du -sh .
    990M    .
    ~/.l/p/venvs ls | wc
          57      57     475

There is probably a clever way to reuse common packages (probably via clever symlinking) and reduce the footprint of these installations. Still, I'm glad that pipx exists as it is now, and that it has been packaged on Ubuntu 20.04 and later (and probably other distros as well).

Having pipx (or something similar) installed by the distro, and the distro focussed on packaging only the packages that are needed for its own sake, means that we could go past the controversies between the Python community and the Debian (or other distros) packagers community, which are based on different goals and assumptions, such as this one: https://gist.github.com/tiran/2dec9e03c6f901814f6d1e8dad09528e

S.

On Wed, Feb 24, 2021 at 2:28 AM Paul Bryan <pbryan@anode.ca> wrote:
--
Stefane Fermigier - http://fermigier.com/ - http://twitter.com/sfermigier - http://linkedin.com/in/sfermigier
Founder & CEO, Abilian - Enterprise Social Software - http://www.abilian.com/
Chairman, National Council for Free & Open Source Software (CNLL) - http://cnll.fr/
Founder & Organiser, PyParis & PyData Paris - http://pyparis.org/ & http://pydata.fr/

On 24/02/2021 11.52, Stéfane Fermigier wrote:
There are tools like https://rdfind.pauldreik.se/rdfind.1.html that create hard links to deduplicate files. Some file systems have deduplication baked in, too.

Christian
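
A rough sketch of what that could look like against a pipx tree (assuming the default ~/.local/pipx/venvs location; run the dry run first):

```bash
rdfind -dryrun true ~/.local/pipx/venvs        # report duplicates, change nothing
rdfind -makehardlinks true ~/.local/pipx/venvs # replace duplicates with hard links
du -sh ~/.local/pipx/venvs                     # compare the footprint afterwards
```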

On Wed, 24 Feb 2021 at 10:55, Stéfane Fermigier <sf@fermigier.com> wrote:
There is probably a clever way to reuse common packages (probably via clever symlinking) and reduce the footprint of these installations.
Ultimately the problem is that a general tool can't deal with conflicts (except by raising an error). If application A depends on lib==1.0 and application B depends on lib==2.0, you simply can't have a (consistent) environment that supports both A and B. But that's the rare case - 99% of the time, there are no conflicts. One env per app is a safe, but heavy-handed, approach.

Managing environments manually isn't exactly *hard*, but it's annoying manual work that pipx does an excellent job of automating, so it's a disk space vs admin time trade-off. As far as I know, no-one has tried to work on the more complex option of sharing things (pipx shares the copies of pip, setuptools and wheel that are needed to support pipx itself, but doesn't extend that to application dependencies). It would be a reasonable request for pipx to look at, or for a new tool, but I suspect the cost of implementing it simply outweighs the benefit ("disk space is cheap" :-))

Paul
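
To make the conflict case concrete (the package names below are hypothetical, used only to illustrate the point): a single shared environment has to refuse, while per-app venvs simply sidestep the question.

```bash
# One environment: pip's resolver must reject the conflicting pins.
python3 -m pip install "lib==1.0" "lib==2.0"          # fails: incompatible requirements

# One venv per app: each app resolves its own pin independently.
python3 -m venv envA && envA/bin/pip install "appA"   # hypothetically pulls in lib==1.0
python3 -m venv envB && envB/bin/pip install "appB"   # hypothetically pulls in lib==2.0
```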

On Wed, Feb 24, 2021 at 12:42 PM Paul Moore <p.f.moore@gmail.com> wrote:
There are three ways to approach the question:

1) Fully isolated envs. The safest option, but uses the most space.

2) Try to minimise the number of dependencies installed by interpreting the requirements specification in the loosest way possible. This is both algorithmically hard (see https://hal.archives-ouvertes.fr/hal-00149566/document for instance, or the more recent https://hal.archives-ouvertes.fr/hal-03005932/document ) and risky, as you've noted.

3) But the best way IMHO is to compute dependencies for each virtualenv independently from the others, but still share the packages, using some indirection mechanism (hard links, symlinks or some Python-specific constructs) when the versions match exactly.

The 3rd solution is probably the best of the 3, but the sharing mechanism still needs to be specified (and, if needed, implemented) properly.

I've tried Christian's suggestion of using rdfind on my pipx installation, and it claims to reduce the footprint by 30% (nice, but less than I expected; this would however scale better with the number of installed packages). I'm not sure this would be practical in reality, OTOH, because I think there is a serious risk of breakage each time I upgrade one of the packages (via 'pipx upgrade-all' for instance).

So IMHO the best way to implement solution 3 would be by using some variant of the approach popularized by Nix (repository of immutable packages + links to each virtualenv).

S.

--
Stefane Fermigier - http://fermigier.com/ - http://twitter.com/sfermigier - http://linkedin.com/in/sfermigier
Founder & CEO, Abilian - Enterprise Social Software - http://www.abilian.com/
Chairman, National Council for Free & Open Source Software (CNLL) - http://cnll.fr/
Founder & Organiser, PyParis & PyData Paris - http://pyparis.org/ & http://pydata.fr/
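
For illustration only, one possible shape of the "immutable store + links" idea in point 3 above. The store path, package, and version are invented for the example, and a real tool would also have to link the package's .dist-info metadata:

```bash
STORE=~/.local/share/py-store                    # hypothetical shared store
pkg="requests-2.25.1"                            # example package/version key
python3 -m pip install --target "$STORE/$pkg" "requests==2.25.1"

# When an existing app venv resolves to exactly requests==2.25.1,
# link the store copy into its site-packages instead of reinstalling:
SITE=myapp-venv/lib/python3.9/site-packages
ln -s "$STORE/$pkg/requests" "$SITE/requests"
```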

On Wed, Feb 24, 2021 at 1:47 PM Stéfane Fermigier <sf@fermigier.com> wrote:
Another benefit of this kind of approach, besides sparing disk space, could be similar improvements in terms of installation time (and even bigger improvements when reinstalling a package, which happens all the time when developing).

S.

--
Stefane Fermigier - http://fermigier.com/ - http://twitter.com/sfermigier - http://linkedin.com/in/sfermigier
Founder & CEO, Abilian - Enterprise Social Software - http://www.abilian.com/
Chairman, National Council for Free & Open Source Software (CNLL) - http://cnll.fr/
Founder & Organiser, PyParis & PyData Paris - http://pyparis.org/ & http://pydata.fr/

On Wed, 24 Feb 2021 13:47:40 +0100 Stéfane Fermigier <sf@fermigier.com> wrote:
I wouldn't want to repeat myself too often, but conda and conda-based distributions already have sharing through hardlinks (or, on Windows, whatever is available) baked-in, assuming you install your software from conda packages. That also applies to non-Python packages, and to python itself (which is just a package like any other). Regards Antoine.
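
A quick way to see that sharing in action, assuming a default miniconda3 install location (env names and packages below are just examples):

```bash
conda create -y -n appA python=3.9 requests
conda create -y -n appB python=3.9 requests
# Files with a hard-link count above 1 are shared with the package cache
# (and/or the other env) rather than duplicated on disk:
find ~/miniconda3/envs/appA -name '*.py' -links +1 | head
```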

On Wed, 24 Feb 2021 at 13:12, Antoine Pitrou <antoine@python.org> wrote:
I'm not sure conda solves the problem of *application* distribution, though, so I think it's addressing a different problem. Specifically, I don't think conda addresses the use case pipx is designed for. Although to be fair, this conversation has drifted *way* off the original topic.

Going back to that, my view is that Python does not have a good solution to the "write your application in Python, and then distribute it" scenario. Shipping just the app to be run on an independently installed runtime results in the conflicting dependencies issue. Shipping the app with bundled dependencies is clumsy, mostly because no-one has developed tools to make it easier. It also misses opportunities for sharing libraries (reduced maintenance, less disk usage...). Shipping the app with a bundled interpreter and libraries is safest, but hard to do and even more expensive than the "bundled libraries" approach. I'd love to see better tools for this, but the community-preferred approach seems to be "ship your app as a PyPI package with a console entry point", and that's the approach pipx supports.

I don't use Linux much, and I'm definitely not familiar with Linux distribution tools, but from what I can gather Linux distributions have made the choices:

1. Write key operating system utilities in Python.
2. Share the Python interpreter and libraries.
3. Expose that Python interpreter as the *user's* default Python.

IMO, the mistake is (3) - because the user wants to install Python packages, and not all packages are bundled by the distribution (or if they are, they aren't updated quickly enough for the user), users want to be able to install packages using Python tools. That risks introducing unexpected library versions and/or conflicts, which breaks the OS utilities, which expect their requirements to be respected (that's what the OS packaging tools do).

Hindsight is way too easy here, but if distros had a "system Python" package that OS tools depend on, and which is reserved for *only* OS tools, and a "user Python" package that users could write their code against, we'd probably have had far fewer issues (and much less FUD about the "using sudo pip breaks your OS" advice). But it's likely way too late to argue for such a sweeping change.

*Shrug* I'm not the person to ask here. My view is that I avoid using Python on Linux, because it's *way* too hard. I find it so much easier to work on Windows, where I can install Python easily for myself, and I don't have to fight with system package managers, or distribution-patched tools that don't work the way I expect. And honestly, on Windows, there's no "neglect of the system environment" to worry about - if you want to install Python, and use pip to install packages into that environment for shared use, it works fine. People (including me) use virtual environments for *convenience* on Windows, not because it's a requirement.

Paul

On Wed, Feb 24, 2021, at 09:08, Paul Moore wrote:
I think 1 *partially* mischaracterizes the problem, because any "system python" would logically be used by *every application written in python [or that embeds python] distributed by the OS's package management*, not just by "key operating system utilities". To suggest otherwise implies that they should not distribute any python applications at all.

That also makes asking all of their package maintainers to change their #! line to point at a different interpreter [or to pass an option, as I had suggested in another post] a more significant ask than the "just change a few core utilities" that some people seem to be assuming it would be.

It also means that making a "system python" does not remove the need to distribute the largish subset of python *libraries* that they distribute, because even when these libraries aren't used by key OS utilities, they are still used by packaged applications. [this, in turn, means we don't want the user's default python environment to stand entirely separate from the system python, or we'll start getting complaints along the lines of "I apt-get installed numpy, why can't I import it in my python interpreter?" - particularly from users who don't really care if it runs a couple versions behind]
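
To illustrate those two options: the interpreter path in the comment is invented, while `-I` is CPython's real isolated-mode flag (it ignores user site-packages and PYTHON* environment variables). Linux shebangs allow a single argument, so a packaged script could do either:

```bash
# A packaged script could target a distro-reserved interpreter (e.g. the
# invented /usr/libexec/system-python3), or keep /usr/bin/python3 and pass
# an isolating option, as in this small runnable demonstration:
cat > /tmp/packaged-tool <<'EOF'
#!/usr/bin/python3 -I
import sys
print(sys.flags.isolated, sys.executable)
EOF
chmod +x /tmp/packaged-tool
/tmp/packaged-tool    # prints "1 /usr/bin/python3" on a typical Linux system
```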

On 2021-02-24 02:52, Stéfane Fermigier wrote:
I have currently 57 apps installed via pipx on my laptop, and the 57 environments take almost 1 GB already.
I never understood the fear around version conflicts. Perhaps it has to do with the decline of sys-admin skills over the years? So, the strategy above feels like swatting a fly with a sledgehammer to me. Same as with a venv for every app. :-)

Rather than installing every package this way, why not wait until a conflict actually occurs? Personally, I rarely have such conflicts, maybe every few years or so. When it happens, I fix them by uninstalling the offender and putting the more difficult one into the venv or pipx. Right now I only have one, a giant app from work that uses pipenv, and it's fine.

Now what about sudo and all that? Well, I used it in the old days because that's what the instructions said. But, to be honest, it never made any sense. I haven't shared a computer in decades, and when we did we used NFS for common tools, so it never saved any disk space. Pip (and easy_install?) dragged their feet for years to properly support user installs (should have been the default) but once they did I didn't look back. I dump almost all packages to user, which gets cleaned up every other year when the distro moves to the next version. The strategy has been working well for a long time.

So, --user works at the low end, and containers for the high end. Honestly, I don't have much of a use case any longer for venvs.

-Mike
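
For anyone following along, the --user workflow described above is just (the package name is an example):

```bash
python3 -m pip install --user httpie   # lands in the per-user site-packages
python3 -m site --user-site            # where those packages live (~/.local/lib/...)
python3 -m site --user-base            # scripts go under this prefix's bin/; put it on PATH
```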

I have currently 57 apps installed via pipx on my laptop, and the 57 environments take almost 1 GB already.
That is a lot! But give conda a try: conda uses hard links, so no wasted space when packages are the same.

I never understood the fear around version conflicts.

I don’t know that it’s fear. But some of us use a Lot of packages, and version conflicts do get ugly.

Rather than installing every package this way, why not wait until a conflict actually occurs?

I used to do that — for years. But it really did cause problems. The trick is that you have, say, your 57 apps all working. Then you need to update a package for one. As soon as you update, you have to go test your 57 apps, and if one of them is broken, you have to figure out how to deal with it. Now you have 52 apps running in the main environment, and 5 running in their own... and you are on your way to an even harder to manage system.

The nice thing about environments is that once something is working, you’re not going to mess it up when working on something else. The stuff not being actively maintained can just hum along.

-CHB

I agree — keep it all in user land.

-CHB

--
Christopher Barker, PhD (Chris)
Python Language Consulting
- Teaching
- Scientific Software Development
- Desktop GUI and Web Development
- wxPython, numpy, scipy, Cython

On 2021-02-24 19:59, Christopher Barker wrote:
Every few years I revert whatever package upgrade caused the issue, which brings the house back in order. Not a substantial problem.
Now you have 52 apps running in the main environment, and 5 running in their own... and you are on your way to an even harder to manage system.
Almost twenty years of daily python use and this situation has never happened here. Sure, if one wants to spend time and gigs of storage to guard against exceptional situations, that's their decision. My post was simply to push back on the idea that this is required for the average developer. It isn't; as mentioned, I have but a single venv for a big work app. I find it less of a burden to simply fix issues as they come up, which is almost never. Approximately ten minutes per year, sometimes zero.

Mr. Random had an interesting point to start this thread, that over-reliance on venvs may have slowed fixes and improvements on the standard tools and distributions. I suspect there is some truth to the assertion.

-Mike

On Thu, 25 Feb 2021 at 19:22, Mike Miller <python-ideas@mgmiller.net> wrote:
Arguably, your claim that using your main Python interpreter for everything "almost never" causes you problems would imply that there's no real *need* for fixes and improvements to handle that situation, so work on supporting virtual environments helps people who prefer them, and harms no-one. I suspect the truth is somewhere between the two. Paul

On Wed, Feb 24, 2021 at 07:59:55PM -0800, Christopher Barker wrote:
I don't get it. How is it "even harder" to manage? The five apps you have isolated are, well, isolated. And for the rest, you've gone down from 57 apps to 52, so there's less complexity and fewer dependencies, so it should be easier, not harder, to manage.

Now there is clearly one sense in which it is harder to manage: updates. If one of the common dependencies needs updating, then you have to update it six times: once each for the five isolated apps, and once for the non-isolated apps. So that makes it harder to manage; but I guess that's not what you meant.

So if you extrapolate to the point that all 57 apps are isolated, what you save in potential-or-actual conflicts you lose in updating. Whether that makes it worthwhile, I think, depends on how often you expect to be updating versus how often you expect to be introducing new conflicts.

Clearly there are cases where you have, say, a rapidly changing app with lots of dependencies that are consistently conflicting with other apps; or you have a legacy app that needs a frozen, stable environment (possibly even including the OS!). In both of these extreme cases isolating the app makes great sense. But I'm not convinced that isolation makes sense every time I start to write a 300 line script, let alone a 30 line one.

I guess that's what annoys me about venvs -- it isn't that I don't see their usefulness. But it seems to me, rightly or wrongly, that a power feature which is mostly of benefit to quite big and/or complex development tasks is being pushed as the One True Way that everyone must use, always.

--
Steve

On Tue, Feb 23, 2021 at 7:48 PM Random832 <random832@fastmail.com> wrote:
I can't speak for distributors or maintainers [1], but I can speak for myself as a user.

I run Debian testing (currently bullseye, as that is preparing for release) as my daily OS on my personal laptop, used for personal matters and school assignments (I'm a university computer science student in my senior year). I don't use the system Python for anything of my own, whether it's a school assignment or a personal project, precisely because I don't want to risk screwing something up.

Rather, I maintain a clone/fork of the official CPython GitHub repo, and periodically build from source and `make altinstall` into `~/.local/`. The `python3` command continues to refer to the system Python, while `python3.8`, etc. refer to the ones installed in my home folder. To the latter I make symlinks for `py38`, `py39`, etc., and just `py` (and `pip`) for the one I use most often (usually the latest stable release). I typically have multiple versions installed at once since different scripts/projects run on different versions at different times.

Given this setup, I can just do a simple `pip install spam` command and I don't need either `sudo` or `--user`, nor do I need virtual envs. While the average person would probably not clone the GitHub repo and build that way, it's not terribly unreasonable for an inclined person to do the same with a tarball downloaded from python.org, and so I doubt I'm the only one with this type of setup.

Just some food for thought.

[1] Technically I am a library maintainer since I have a couple projects on PyPI, but those are mostly unused and more or less abandoned at this point, and neither ever reached the point where I could consider graduating them from beta status. Most of what I work with these days is private personal code or school assignments.
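
Roughly, the same setup from a python.org tarball looks like this (the version is just an example; `make altinstall` deliberately avoids touching the `python3` name):

```bash
tar xf Python-3.9.2.tar.xz && cd Python-3.9.2
./configure --prefix="$HOME/.local"
make -j"$(nproc)"
make altinstall                          # installs python3.9 / pip3.9, not python3
ln -sf "$HOME/.local/bin/python3.9" "$HOME/.local/bin/py"
ln -sf "$HOME/.local/bin/pip3.9"    "$HOME/.local/bin/pip"
py -m pip install spam                   # no sudo, no --user, no venv
```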

On 2/23/21, Random832 <random832@fastmail.com> wrote:
First, pip+venv is not sufficient for secure software deployment: something must set appropriate permissions so that the application cannot overwrite itself and other core libraries (in order to eliminate W^X violations (which e.g. Android is solving by requiring all installed binaries to come from an APK, otherwise they won't and can't be labeled with the SELinux extended file attributes necessary for a binary to execute; but we don't have binaries, we have an interpreter and arbitrary hopefully-signed-somewhere source code, at least)).

Believe it or not, this is wrong:

```bash
# python -m venv httpbin || virtualenv httpbin
# source httpbin/bin/activate
mkvirtualenv httpbin
pip install httpbin gunicorn
gunicorn -b 0.0.0.0:8001 httpbin:app
# python -m webbrowser http://127.0.0.1:8001
```

It's wrong - it's insecure - because the user executing the Python interpreter (through gunicorn, in this case) can overwrite the app. W^X: has both write and execute permissions.

What would be better? This would be better, because pip isn't running setup.py as root (with non-wheels) and httpbin_exec can't modify the app interpreter or the code it loads at runtime:

```bash
useradd httpbin  # also creates a group named 'httpbin'
sudo -u httpbin sh -c ' \
  python -m venv httpbin; \
  umask 0022; \
  ./httpbin/bin/python -m pip install httpbin gunicorn'
useradd httpbin_exec -G httpbin
sudo -u httpbin_exec './httpbin/bin/gunicorn -b 0.0.0.0:8001 httpbin:app'
```

This would be better if it worked, though there are a few caveats:

```bash
sudo apt-get install python-gunicorn python-httpbin
sudo -u nobody /usr/bin/gunicorn -b 0.0.0.0:8001 httpbin:app
```

1. Development is impossible:
   - You can't edit the code in /usr/lib/python3.n/site-packages/ without root permissions.
   - You should not be running an editor as root.
   - You can edit distro-package files individually with e.g. sudoedit (and then the GPG-signed package file checksums will fail when you run `debsums` or `rpm -Va` because you've edited the file and that's changed the hash).
   - Non-root users cannot install python packages without having someone repack (and sign it) for them.
   - What do I need to do in order to patch the distro's signed repack of the Python package released to PyPI?
     - I like how Fedora pkgs and conda-forge have per-package git repos now.
     - Conda-forge has a bot that watches PyPI for new releases and tries sending an automated PR.
     - If I send a PR to the main branch of the source repo and it gets merged, how long will it be before there's a distro repack built and uploaded to the distro package index?
2. It should be installed in a chroot/jail/zone/container/context/vm so that it cannot read other data on the machine. The httpbin app does not need read access to /etc/shadow, for example. Distro package installs are not - either - sandboxed.

To pick on httpbin a bit more, the httpbin docs specify that httpbin should be run as a docker container:

```bash
docker run -p 80:8001 kennethreitz/httpbin
```

Is that good enough? We don't know, we haven't reviewed:

- the Dockerfile
  - It says `FROM ubuntu:18.04`, which is fortunately an LTS release. But if it hasn't been updated this month, it probably has the sudo bug that enabled escalation to root (which - even in a container - is bad because it could obnoxiously just overwrite libc, for example, and unless the container is rebuilt or something runs `debsums`, nothing will detect that data integrity error)
- the requirements.txt / setup.py:install_requires / Pipfile[.lock] dependencies
  - Does it depend upon outdated pinned exact versions?
  - Is there an SBOM (Software Bill of Materials) that we can review against known vulnerability databases?

How do I know that:

- The packages I have installed are not outdated and unpatched against known vulnerabilities
- The files on disk are exactly what should be in the package
- The app_exec user can't overwrite the binary interpreter or the source files it loads at runtime
- There won't be unreviewed code running as root (including at install time)
- All Python package dependencies are available as wheels (that basically only need to be unzipped)
- The ensemble of dependencies which I've miraculously assembled is available on the target platform(s)
- The integration tests for my app pass with each combination of dependencies which satisfy the specified dependency constraints
- I can edit things and quickly re-test
- Each dependency is signed by a key that's valid for that dependency

So, if pip is insufficient for secure software deployment, what are pro teams using to build signed, deployable artifacts with fresh, upgraded dependencies either bundled in or loosely-referenced?

- Bazel (from Google's internal Blaze) builds from BUILD files.
  - https://github.com/dropbox/dbx_build_tools
- Pantsbuild, Buck
- zipapps
- FPM can apparently package up an entire virtualenv; though IDK how good it is at permissions? https://github.com/jordansissel/fpm/blob/master/lib/fpm/package/virtualenv.r...

As an open source maintainer, there are very many potential environments to release builds for. Manylinux docker images (and auditwheel, delocate, and *cibuildwheel*) are a response to extreme and somewhat-avoidable complexity. https://github.com/joerick/cibuildwheel

Distro packagers can and do build upon e.g. pip, which is great for development but not sufficient for production deployment, due to lack of support for file permissions, extended file attributes, checksums, cryptographic signatures, and due to running setup.py as the install user for non-wheel packages.

There are many deployment stories now: pull/push, configuration management systems, venvs within containers within VMs. For your favorite distro, how do I get from cibuildwheel to a signed release artifact in your package index; and which keys can sign for what?
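
Partial answers exist for a couple of the items on that checklist, at least for the distro-managed and pip-managed layers (these are standard tools, not a complete solution):

```bash
debsums --silent                  # Debian: list installed files whose checksums changed
rpm -Va                           # Fedora/RHEL: verify files of all installed packages
python3 -m pip check              # report broken or conflicting Python requirements
python3 -m pip list --outdated    # which installed distributions have newer releases
```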

FWIW, distro repacks are advantageous in comparison to "statically-bundled" releases that for example bundle in an outdated version of OpenSSL, because when you `apt-get upgrade -y` that should upgrade the OpenSSL that all the other distro packages depend upon.

Here's something that doesn't get called as frequently as `apt-get upgrade -y`:

```bash
pip install -U certifi
```

https://github.com/certifi/python-certifi (Mozilla's CA bundle extracted into a Python package)

```bash
apt-get install -y ca-certificates
dnf install -y ca-certificates
```

On 2/24/21, Wes Turner <wes.turner@gmail.com> wrote:
-- Wes Turner https://westurner.org https://wrdrd.com/docs/consulting/knowledge-engineering

On Wed, 24 Feb 2021 at 10:49, Random832 <random832@fastmail.com> wrote:
The reason venv is promoted as heavily as it is is because it's the only advice that can be given that is consistently correct regardless of the operating system the user is running locally, whereas safely using a system-wide Python installation varies a lot depending on whether you're on Windows, Mac OS X, or Linux (let alone some other platform outside the big 3 desktop clients).

conda is also popular for the same reason: while the instructions for installing conda in the first place are OS-dependent, once it is up and running you can use consistent platform-independent conda commands rather than having to caveat all your documentation with platform-specific instructions.

Apple moved all of their dynamic language interpreter implementations to inaccessible-by-default locations so Mac OS X users would stop using them to run their own code.

Alongside that, we *have* worked with the Linux distro vendors to help make "sudo pip install" safe (e.g. [1]), but that only helps if a user is running a new enough version of a distro that has participated in that work.

However, while the option of running "platform native" environments will never go away, and work will continue to make it less error prone, the level of knowledge of your specific OS's idiosyncrasies that it requires is almost certainly going to remain too high for it to ever again become the default recommendation that it used to be.

Cheers,
Nick.

[1] https://fedoraproject.org/wiki/Changes/Making_sudo_pip_safe (Note: this change mitigated some aspects of the problem in a way similar to what Debian does, but still doesn't solve it completely, as custom Python builds may still make arbitrary changes)

P.S. "But what about user site-packages?" you ask. Until relatively recently, Debian didn't put the user's local bin directory on the system path by default, so commands provided by user-level package installs didn't work without the user adjusting their PATH. The CPython Windows installer also doesn't adjust PATH by default (for good reasons). And unlike a venv, "python -m" doesn't let you ensure that the code executed is the version installed in user site-packages - it could be coming from a directory earlier in sys.path.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
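
On that last point, a quick way to check which copy of a module actually wins on sys.path (using pip itself as the example module):

```bash
python3 -m site                                   # prints sys.path in order, plus USER_SITE
python3 -c 'import pip; print(pip.__file__)'      # the copy that "python3 -m pip" would run
```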

Is there a tool that (1) detects import name collisions; and (2) attempts to read package metadata and package file checksums (maybe from the ZIP 'manifest')?

In order to:

- troubleshoot module shadowing issues:
  - $PATH
  - sys.path
  - `python -m site`
- troubleshoot incomplete and overlapping uninstallations:

    pip install a
    pip install a_modified  # pip uninstall a?
    pip install pdbpp
    pip uninstall a_modified
    ls -altr "${site-packages[*]}"
    strace -e trace=file python -c 'import pdb'

When shouldn't site customizations be added to the site module? https://docs.python.org/3/library/site.html

When should customizations be built into the build instead of a runtime conditional?

On Sat, Feb 27, 2021, 23:12 Nick Coghlan <ncoghlan@gmail.com> wrote:

participants (13):

- Antoine Pitrou
- Christian Heimes
- Christopher Barker
- Jonathan Goble
- Mike Miller
- Nick Coghlan
- Paul Bryan
- Paul Moore
- Random832
- Soni L.
- Steven D'Aprano
- Stéfane Fermigier
- Wes Turner