I was told:
Pip does not have a public API, and because of that there is no backwards-compatibility contract. It's impossible to fully parse every type of requirements.txt without a session, so either parse_requirements() needs to create one when it isn't given one (which means that if we forget to pass in a session somewhere, it will use the wrong one), or it needs one passed in.
Up to now we used pip's parse_requirements(), but in new versions you need to pass in a session.
If I see changes like this:
- install_requires=[str(req.req) for req in parse_requirements("requirements.txt")],
+ install_requires=[str(req.req) for req in parse_requirements("requirements.txt", session=uuid.uuid1())],
... I think something is wrong.
I am not an expert in python packaging details. I just want it to work.
What is wrong here?
- You should not use parse_requirements() in setup.py
- pip should not change its API.
- you should not use pip at all, you should use ...?
How can I upload an OpenPGP signature (and the signing key) for a
version, after the upload of the distribution is complete?
I have recently been informed of the ‘--sign’ and ‘--identity’ options
to the ‘upload’ command. As described here:
Signing a package is easy and it is done as part of the upload
process to PyPI. […]
Can it be done, not “as part of the upload process”, but subsequent to
the upload of the distribution? How?
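For the mechanical half of the question: as far as I understand, the '--sign' option simply asks GnuPG for a detached, ASCII-armoured signature of the distribution file, and that part can be done at any time after the build. Whether PyPI will accept the resulting .asc after the original upload is the open question here; the filename and key ID below are placeholders.

```shell
# Create a detached, ASCII-armoured signature for an already-built
# sdist (filename and key ID are placeholders). This is the same
# operation "setup.py upload --sign --identity=KEYID" performs
# during upload.
gpg --detach-sign --armor --local-user KEYID dist/foo-1.0.tar.gz
# The signature lands in dist/foo-1.0.tar.gz.asc next to the file.
```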
One of the recurring problems folks mention here is how to deal with
the complexities of handling Linux ABI compatibility issues.
That's a genuinely hard problem, and not one that *anyone* has solved
well - it's one of the reasons being an independent software vendor
for Linux in general (rather than just certifying with the major
commercial distros) is a pain. When folks do it, they tend to take the
"bundle everything you need and drop it somewhere in /opt" approach
which (quite rightly) makes professional system administrators very unhappy.
On the distro side, this is one of the big factors driving the
popularity of the "bundle all the things" container image model: it
does the bundling in such a way that it's amenable to programmatic
introspection, and it still reduces the runtime ABI compatibility
question to just the kernel ABI. This tends to work really well in the
case of dynamic languages like Python, as the language runtime is
likely to deal with most kernel compatibility issues for you. (ABI
incompatibilities can still bite you if you're using system libraries
inside the container and your base image doesn't match your runtime
kernel, but the bug surface is still much smaller than when you use
the end user's system libraries directly)
It seems to me that, at least for web services published via PyPI
(like Kallithea), "use our recommended container" is likely to be the
easiest way to get folks on Linux up and running quickly with the
service. Folks may still want to take the image apart later and roll
their own (e.g. to switch to running on a different web server or a
different base image), but they wouldn't have to do their own
integration work just to get started.
The other advantage of nudging folks in the direction of Linux
containers to address their ABI compatibility woes is that this is
tech that already (mostly) works, and has a broader management
ecosystem growing around it (including both the major open source
platform-as-a-service offerings in OpenShift and Cloud Foundry).
Inventing our own way of abstracting away the Linux ABI compatibility
problem would be an awful lot of work, and likely leave us with an end
result that isn't pre-integrated with anything else.
P.S. Full disclosure: for Fedora's developer experience work for web
service developers, we're heading heavily in the direction of
containers+Vagrant for local dev workstations, to allow common dev
workflows across Linux, Mac OS X and Windows, and then pushing the
containers through Linux based CI and independent QE workflows, into
container based production Linux environments, including the Google &
Red Hat backed Kubernetes container orchestration framework and
OpenStack's Project Solum. In my day job, this is also the direction
we're taking Red Hat's internal infrastructure since it systematically
solves a variety of problems for us (like how to most effectively
allow folks to develop on Fedora while deploying on RHEL).
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
I've pushed changes to PyPI where it is no longer possible to reuse a filename
and attempting to do so will give a 400 error that says:
This filename has previously been used, you should use a different version.
This does NOT prevent authors from being allowed to delete files from PyPI,
however if a file is deleted from PyPI it cannot be re-uploaded again. This
means that if you upload, say, foobar-1.0.tar.gz, and your 1.0 has a mistake
in it, then you *must* issue a new release to correct it.
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
How can I specify to Distutils (Setuptools) that module ‘foo’ needs to
be available for use by ‘setup.py’, but should not be installed with the
distribution?
In the ‘python-daemon’ distribution, I have refactored a bunch of
functionality to a separate top-level module (‘version’). That module is
required to perform Setuptools actions – the ‘egg_info.writers’ entry
point specifically – but is not needed at all by the resulting
installation and should not be installed.
So it's not clear how to specify this dependency. I have a
‘packages=find_packages(exclude=["test"])’ specification; but that
module isn't a package and so should not (?) be collected. I have the
file included in ‘MANIFEST.in’; but that only specifies what to include
in the source distribution, and should not add any files to the binary.
As it stands (‘python-daemon’  version 2.0.3), the ‘version.py’ file
is correctly included in the source distribution, correctly used by the
‘egg_info.writers’ entry point; but then ends up incorrectly installed
to the run-time packages library. This causes problems for subsequent
import of unrelated modules that happen to share the same name.
How can I specify to Setuptools that the file is needed in the source
distribution, is needed by the entry points for Setuptools, but should
not be installed along with the binary distribution?
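One possible approach, sketched under the assumption that the module is being swept up by the build_py step: subclass Setuptools' build_py command and filter the build-time-only module out of the modules it copies. The class name and the EXCLUDED_MODULES set below are my own illustrative names, not an established recipe.

```python
from setuptools.command.build_py import build_py

# Build-time-only top-level modules that should not reach the
# binary distribution (names are illustrative).
EXCLUDED_MODULES = {"version"}

class BuildPyExcluding(build_py):
    """A build_py variant that skips build-time-only top-level modules."""

    def find_modules(self):
        # find_modules() yields (package, module, path) tuples for the
        # distribution's top-level modules; drop the excluded ones so
        # they are never copied into the build (and thus never installed).
        return [
            (package, module, path)
            for (package, module, path) in super().find_modules()
            if module not in EXCLUDED_MODULES
        ]

# In setup.py:
# setup(..., cmdclass={"build_py": BuildPyExcluding})
```

MANIFEST.in would still carry the file into the source distribution, so the ‘egg_info.writers’ entry point keeps working at build time.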
 <URL:https://pypi.python.org/pypi/python-daemon/>; version 2.0.3 at the time of writing.
I could not find documentation about the allowed characters here:
Someone explained it to me here: https://github.com/pypa/pip/issues/2383#issuecomment-72034990
The precise rules on what's a valid egg_name aren't documented anywhere particularly obvious, unfortunately.
Since docs are important, I want this to change.
Where should the docs about the allowed characters in this place live?
Please don't provide details about which characters are allowed or not.
This issue is about where the docs should be. Details later :-)