Lately I have been working on a CentOS 8 machine, and it has "python2"
and "python3", but no "python". Many packages install scripts with a
bare "python" shebang, and those do not work on this OS. Seems like
rather a large missing dependency which goes by without triggering a
fatal error.
In bioinformatics pipelines it is common for one package to invoke a
script from another. So while the package which supplied a particular
script might have avoided this issue by only ever invoking it with an
explicit interpreter, that does not prevent another package from doing
one of these:
A. path/script
B. python path/script
In terms of analysis, it is trivial to find all python scripts
installed by a package and examine the shebang line (if present) to
see if this is an issue. I am adding a "reshebang" function to my
python_devirtualizer specifically to handle the issue for scripts
which are invoked directly. It is, however, not at all trivial to
analyze all a package's code to see which scripts are called by other
scripts, and how they are called. Moreover, they might be called from
perl, or C, or some other language. So dealing with "B" above is not
straightforward.
So, my question is, should the use of "python" (as opposed to
"python2" or "python3") in a shebang be considered an installation
error on a system for which "python" does not exist?
I would argue yes, because we already know that python3 was not fully
backwards compatible with python2, so we have reason to suspect that
python4 (whenever that appears) might also not be fully backwards
compatible with python3. Being picky about the python version now
should prevent a lot of problems later.
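For scripts which are invoked directly (via their shebang), the fix
can be sketched as below. This is a minimal sketch, not the actual
python_devirtualizer code; the regex and the target interpreter path
are my assumptions:

```python
import re
from pathlib import Path

# Matches a bare "python" shebang ("#!/usr/bin/env python" or
# "#!<anything>/python"), but NOT "python2"/"python3" shebangs.
SHEBANG_RE = re.compile(rb"^#!\s*(/usr/bin/env\s+python|\S*/python)(\s|$)")

def reshebang(script: Path, interpreter: bytes = b"/usr/bin/python3") -> bool:
    """Rewrite a bare 'python' shebang to an explicit interpreter.
    Returns True if the file was changed, False if it was left alone."""
    data = script.read_bytes()
    first, sep, rest = data.partition(b"\n")
    if not SHEBANG_RE.match(first):
        return False
    script.write_bytes(b"#!" + interpreter + sep + rest)
    return True
```

Run over every script a package installs, this handles the direct
invocation case; it does nothing for "python path/script" callers.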
I’m pleased to announce the release of Setuptools 48, which adopts the distutils codebase from CPython per pypa/setuptools#417<https://github.com/pypa/setuptools/issues/417> and pypa/packaging-problems#127<https://github.com/pypa/packaging-problems/issues/127>.
Given the amount of change this effort involved, it's likely unstable, hence the major version bump. Please report issues at the Setuptools issue tracker. I'll be around today (IRC, Gitter, Slack) to either disable the functionality or add an escape hatch if needed.
First post here.
I have a cluster where the common software is NFS shared from the file
server to other nodes. All the python packages are kept in a
directory which is referenced by PYTHONPATH. The good part of that is
that there is just one copy of each package-version. The bad part, as
you have all no doubt guessed, is that python by itself is really bad
at specifying and loading a set of particular library versions (see
below), so upgrading one program will break another due to conflicting
installed versions. Hence the common use of virtualenvs. But as far
as I can tell each virtualenv installs a copy of each package-version
it needs, resulting in multiple copies of the same package-version for
common packages on the same disk.
What I am after is some method of keeping exactly one copy of each
package-version in the common area (ie, one might find foo-1.2,
foo-1.7, and foo-2.3 there), while also presenting only the one
version of each (let's say foo-1.7) to a particular installed program.
On linux it might do that by making soft links from a per-application
directory to the common area, and setting PYTHONPATH to that directory
for the application. Finally, this has to be usable by any account
which has read/execute access to the main directory.
Does such a beast exist? If so, please point me to it!
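To make the idea concrete, here is a minimal sketch of the linux
soft-link approach, assuming a hypothetical layout where each
package-version is unpacked in the common area as a directory named
"<pkg>-<version>" containing the importable package:

```python
from pathlib import Path

def build_env(common: Path, envdir: Path, picks: dict) -> None:
    """Populate envdir with symlinks to one chosen version of each
    package from the common area; envdir then goes on PYTHONPATH for
    that one application.
    Assumed layout: common/<pkg>-<version>/<pkg>."""
    envdir.mkdir(parents=True, exist_ok=True)
    for pkg, version in picks.items():
        src = common / f"{pkg}-{version}" / pkg  # the importable package dir
        dst = envdir / pkg
        if dst.is_symlink() or dst.exists():
            dst.unlink()                         # replace any stale link
        dst.symlink_to(src)

# e.g. build_env(Path("/shared/pypkgs"), Path("/shared/envs/myapp"),
#                {"foo": "1.7"})
# then run the application with PYTHONPATH=/shared/envs/myapp
```

Only one copy of foo-1.7 exists on disk, however many applications
link to it; each application sees exactly one version.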
The limitations of python version handling to which I refer above can
be illustrated with the dependencies of "scanpy-scripts". Given all
the needed libraries in one place (plus incompatible versions) the
right set can be loaded (and verified) like this:

__requires__ = ['scipy <1.3.0,>=1.2.0', 'anndata <0.6.20',
                'loompy <3.0.0,>=2.00', 'h5py <2.10']

which emits exactly the versions scanpy-scripts needs. However, adding

, 'scanpy <1.4.4,>=1.4.2'

at the end of __requires__ makes the whole thing fail at the
pkg_resources import with:

(many lines deleted)
  File "...", line 792, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (scipy 1.2.3 ...

even though the scanpy it loaded in the first case was within the
desired range. Moreover, specifying the desired versions after
pkg_resources has already been imported does not work at all, since
pkg_resources only keeps the highest version of each package it finds
when imported. (A limitation that never made the least bit of sense
to me.)
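That ordering constraint (set __requires__ before pkg_resources is
first imported) can be demonstrated with any installed distribution;
setuptools is used below only because it is likely to be present:

```python
# __requires__ is only consulted when pkg_resources is FIRST imported;
# setting it afterwards has no effect, which is why the pins must
# appear at the very top of a script, before anything imports
# pkg_resources.
__requires__ = ['setuptools']   # example pin; version ranges work too
import pkg_resources

dist = pkg_resources.get_distribution('setuptools')
print(dist.project_name, dist.version)
```

Move the import above the __requires__ assignment and the pins are
silently ignored, because the working set has already been resolved.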
The test system is CentOS 8 with python 3.6.8.
I have converted my setup.py to pyproject.toml and use poetry to build and manage dependencies.
I maintain packages which themselves have dependency packages, and I frequently use development installations locally ("pip install -e ../my-dep") so as to develop dependencies at the same time as my main packages.
Q: With a pyproject.toml, can I enable some sort of development installation without also maintaining a setup.py - or is there a plan/PEP for enabling this without a setup.py?
Right now, I have a setup.py which needs to stipulate a number of things which are already in my pyproject.toml. In particular, I find it a little bothersome that I need to specify the entry_points/console_scripts in both the pyproject.toml and in setup.py.
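On the console-scripts duplication specifically: poetry has its own table for this, so the entry points need only live in pyproject.toml. The names below are hypothetical examples:

```toml
# pyproject.toml
[tool.poetry.scripts]
# poetry's equivalent of setup.py's entry_points/console_scripts
my-tool = "my_package.cli:main"
```

Whether that removes the need for a setup.py for editable installs is a separate question, but at least the script definitions themselves need not be duplicated.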