I had a thought for something that might be a simple way to improve
dev experience with custom build backends.
A PEP 517 build backend is a Python object that has some special
methods on it. And the way a project picks which object to use, is via
build-backend = "module1.module2:object"
Currently, this means that the build backend is the Python object
"module1.module2.object".
Here's my idea: what if we changed it, so that the above config is
interpreted as meaning that the build backend is the Python object
"module1.module2.object.__build_backend__"?
(I.e., we tack a "__build_backend__" on the end before looking it up.)
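To make the proposed lookup concrete, here is a minimal sketch of a resolver under the new rule. The helper name is hypothetical, and the fallback to the bare object when no __build_backend__ attribute exists is my own assumption (for backwards compatibility), not part of the proposal:

```python
import importlib

def resolve_backend(spec):
    """Resolve a PEP 517 build-backend string under the proposed rule:
    import "module1.module2", walk the optional ":object" path, then
    tack "__build_backend__" on the end before using it.
    """
    module_path, _, object_path = spec.partition(":")
    obj = importlib.import_module(module_path)
    for attr in filter(None, object_path.split(".")):
        obj = getattr(obj, attr)
    # Falling back to the object itself is an assumption made here for
    # backwards compatibility; the proposal only describes the new lookup.
    return getattr(obj, "__build_backend__", obj)
```

With this, build-backend = "flit" would resolve to flit.__build_backend__.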
Why does this matter? Well, with the current system, if you want to
use flit as your build backend, you have to write:
build-backend = "flit.buildapi"
And if you want to use intreehooks, you have to write:
build-backend = "intreehooks:loader"
These names are slightly awkward, because these projects don't want to
just jam all the PEP 517 methods directly onto the top-level module
object, so they each have to invent some ad hoc sub-object to put the
methods on. And then that's exposed to all their users as a bit of
random cruft you have to copy-paste.
The idea of __build_backend__ is that these projects could rename the
'buildapi' and 'loader' objects to be '__build_backend__' instead, and
then users could write:
build-backend = "flit"
build-backend = "intreehooks"
build-backend = "setuptools"
and it just feels nicer.
Right now PEP 517 is still marked provisional, and pip hasn't shipped
support yet, so I think changing this is still pretty easy. (It would
mean a small amount of work for projects like flit that have already
shipped PEP 517 support.)
What do you think? (Thomas, I'd love your thoughts in particular :-).)
Nathaniel J. Smith -- https://vorpus.org
I recently stumbled into a worrying problem with pip. I found out that
doing "pip install pusher requests" installs urllib3 v1.23 as a
dependency even though requests specifically restricts the version to
lower than 1.23. Then if instead I do "pip install requests pusher" it
installs urllib3 v1.22 as expected. As I recall, pip has long had a
problem with combining version specifiers and extras when the same
target has been required from multiple sources. What I wanted to ask
was, is this a simple bug, or a larger unresolved design problem?
Should pip also take into consideration the requirements from existing
installed packages, so pip won't end up installing upgrades they're
incompatible with?
Are custom installation commands in setup.py no longer respected by setuptools? For example, the pybind11 project has a custom InstallHeaders command class in its setup.py, which is passed to the setup() call.
When setup is imported from setuptools, the custom command class never gets invoked. When setup is imported from distutils.core, the custom command class is invoked.
What's the reason for the disparity - can someone please enlighten me?
I just uploaded python-gnupg 0.4.3 to PyPI using Twine. Search still shows the previous version:
https://pypi.org/search/?q=python-gnupg => 0.4.2
However, clicking on the link brings up the page for the latest version:
https://pypi.org/project/python-gnupg/ => 0.4.3
But pip install is also wrongly picking up 0.4.2. What's the expected delay between uploading a new version and having it be available via pip? I would have expected it to be available more or less immediately. All systems are showing as operational.
Apologies if this is the wrong mailing list, Sumanah suggested I write to you fine folks to get some feedback/direction on my pip PR.
Here's the link: https://github.com/pypa/pip/pull/5404
There's a bit of discussion in that PR thread, but also in a couple of Issue threads, most notably https://github.com/pypa/pip/issues/5355 (where I first proposed adding the dist options to install) and https://github.com/pypa/pip/issues/5453 (where some other folks seem to be requesting similar functionality).
Granted there's a lot to ingest, but the discussion in the PR has seemed to stagnate and I desperately would like some direction on how to proceed... or IF to proceed, if it's ultimately decided that this is not something pip maintainers want to do, I totally respect that.
I summarize the status quo in my latest comment on the PR, but to reiterate here: my PR adds `--platform`, `--abi`, `--python-version` and `--implementation` as valid arguments to `install` (formerly these options were only available on `download`). They are only usable when invoking `install` with `--target`, per pfmoore's suggestion. Some other folks have suggested that this limitation (the `--target` one) is unnecessary and that merely using the dist options alone is plenty-explicit about intent. The same folks also would like to see the dist options on the `wheel` subcommand.
I personally have no qualms with their suggestion, nor do I have qualms with pfmoore's suggestion, so I am looking for guidance. Thanks so much in advance, I appreciate your time.
-- Loren <3
pip is currently not well integrated on Linux: it conflicts with the
system package manager, like apt or rpm. When pip writes files into
/usr, it can replace files written by the system package manager and
so create different kinds of issues. For example, if you check the
system integrity, you will likely see that some Python files have been
modified.
I would like to open a discussion to see how each Linux vendor handles
the issue, and see if a common solution can be designed.
Debian uses /usr for apt-get install and /usr/local for distutils and
pip.
Fedora decided to change pip to install files into /usr/local by
default, instead of /usr, so "sudo pip install" doesn't replace files
installed by dnf (Fedora package manager):
It gives you 3 main places to install Python code: /usr (managed by
dnf), /usr/local (managed by sudo pip), and $HOME/.local (managed by
"pip install --user").
Would it make sense to make the Fedora/Debian change upstream? At
least, give an opt-in option for Linux vendors to use /usr/local?
I propose to make the change upstream because there are still issues,
and I don't want to be alone to have to fix them :-) It should be
easier if we agree on a filesystem layout and an implementation, so
we can collaborate on issues!
Issues with the current Fedora implementation:
(1) When Python is embedded in an application, there is an issue with
the current heuristic to decide if /usr/local should be added to
sys.path.
(2) On Fedora, "sudo pip install -U" currently removes old code from
/usr and installs the new one in /usr/local. We should leave /usr
unchanged, since only dnf should touch /usr.
The implementation is made of a single patch on the Python site module:
There are two issues related to the "sudo pip" change, but they
already exist when pip is installed in $HOME/.local:
(3) Priority issue between PATH and PYTHONPATH directories.
When the user runs "pip", the pip binary may come from /usr,
/usr/local or $HOME/.local/bin, but the Python pip module ("import
pip") may come from a different path. Which binary and which module
should be used?
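A quick way to see such a mismatch on a given machine (illustrative; what it prints depends entirely on the system):

```python
import shutil
import sysconfig

# Which "pip" executable would the shell run?
print("pip on PATH:", shutil.which("pip"))

# Which pip *module* does this interpreter import, and from where?
try:
    import pip
    print("pip module:", pip.__file__, "version", pip.__version__)
except ImportError:
    print("no importable pip module")

# Where does this interpreter install scripts by default?
print("scripts dir:", sysconfig.get_path("scripts"))
```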
Obviously, users can override these two environment variables...
(4) Related to (3). Running "pip" may run pip binary of one pip
version, but pick the "pip" Python module of another pip version.
For example, pip9 binary from /usr/bin/pip, but pip10 module from /usr/local.
Fedora works around issue (4) with a downstream patch on pip:
I don't know in detail how Linux distributions handle the issue with
"sudo pip". So don't hesitate to correct me if I'm wrong :-) My goal is
just to start a discussion about a common "upstream" solution.