Currently in the packaging space, we have a number of avenues for communication, which are:
- Other project-specific mailing lists
- Various issue trackers spread across multiple platforms.
- Probably more places I’m not remembering.
The result of this is that all discussion ends up being super fractured amongst the various places. Sometimes that is exactly what you want (for instance, someone who is working on the wheel specs probably doesn’t care about deprecation policy and internal module renaming in pip) and sometimes that ends up being the opposite of what you want (for instance, when you’re describing something that touches PyPI, setuptools, flit, pip, etc. all at once).
Theoretically the idea is that distutils-sig is where cross-project stuff goes, IRC/gitter is where real-time discussion goes, and the various project mailing lists and issue trackers are where the project-specific bits go. The problem is that this often doesn’t actually happen in practice, except for the largest and most obvious of changes.
I think our current “communications stack” kind of sucks, and I’d love to figure out a better way for us to handle this that solves the sort of weird “independent but related” set of projects we have here.
From my POV, a list of our major problems are:
* Discussion gets fractured across a variety of platforms and locations, which makes it difficult not only to keep up with what’s going on, but also to know how to loop in someone whose input would be valuable. You have to more or less simultaneously know someone’s email address, GitHub username, IRC nick, Bitbucket username, etc. to be able to bring threads of discussion to their attention.
* It’s not always clear to users where a discussion should go; often they’ll come to one location and need to get redirected to another. If any discussion did happen in the incorrect location, it tends to need to be restarted in the new location (and depending on the specific platform, it may be impossible to actually redirect everyone over to the proper location, so, again, you end up with the discussion fractured across two places).
* A lot of the technology in this stack is particularly old, and lacks many of the modern affordances that newer things have. An example is being able to edit a discussion post to fix typos that can hinder the ability of others to actually understand what’s being talked about. In your typical mailing list or IRC channel there’s no mechanism by which you can edit an already-sent message, so your only options are to let the problem ride and hope it doesn’t trip up too many people, or to send an additional message to correct the error. However, these corrections show up as additional, later messages which someone might not even see until they’ve already been thoroughly confused by the first message (since people tend to read email/IRC in a linear fashion).
- There are a lot of things in this bucket; another example is being able to control, in a more fine-grained manner, what email you’re going to get.
- Holy crap, formatting and typography to make things actually readable and not a big block of plaintext.
* We don’t have a clear way for users to get help, leaving them to treat random issues, discussion areas, etc. as a support forum, rather than some place that’s actually optimized for that. Some of this ties back into the earlier points, too, where it’s hard to actually redirect discussions.
These aren’t *new* problems, and oftentimes the existing members of a community are the least affected, because they’ve already spent effort learning the ins and outs and curating a (typically custom) workflow that they’ve grown accustomed to. The problem is that this often means new users are left out, and the community gets smaller and smaller as time goes on, as people leave and aren’t replaced with new blood because they’re driven off by the issues with the stack.
A big part of where this is coming from is me sitting back and realizing that I tend to be drawn towards pulling discussions into GitHub issues rather than onto the various mailing lists, not because that’s always the most appropriate place for them, but because it’s the least painful place in terms of features and functionality. I figure if I’m doing that, when I already have a significant investment in setting up tooling and being involved here, then others (and particularly new users) are likely feeling the same way.
PyPI does not allow duplicate file names -- this makes lots of sense,
because you really don't want people to go to PyPI one day and grab a file,
and then go there another day, grab a file with exactly the same name, and
have it be a different file.
We are all too human, and make mistakes when doing a release. All too often
someone pushes a broken file up to PyPI and realizes it pretty quickly
-- before anyone has a chance to even download it (or only the dev team has).
In fact, I was in a sprint last summer, and we decided to push our package
up to PyPI -- granted, we were all careless, amateurish noobs, but we ended
up making, I think, 4(!) minor version bumps because we had done something
stupid in the sdist.
Also -- the latest numpy release got caught in this, too:
* We ran into a problem with PyPI not allowing reuse of filenames and a
resulting proliferation of *.*.*.postN releases. Not only were the names
getting out of hand, some packages were unable to work with the postN suffix.
So -- I propose that PyPI allow projects to replace existing files if they
REALLY REALLY want to.
You should have to jump through all sorts of hoops, and make it really
clear that it is a BAD IDEA in the general case, but it'd be good to have
it be possible.
After all -- PyPI does not take on responsibility for anything else about
what's in those packages, and Python itself is all about "we're all
consenting adults here".
I suppose we could even put in some heuristics about how long the file has
been there, how many times it's been downloaded, etc.
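To make that concrete, a toy sketch of the kind of gate I mean (the thresholds here are invented purely for illustration):

    # Toy sketch of the suggested heuristic; the thresholds are invented
    # and would need real discussion before anything like this shipped.
    def may_replace_file(age_hours, download_count):
        """Allow replacement only for very fresh, barely-downloaded files."""
        return age_hours < 24 and download_count < 10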
Just a thought.....I really hate systems that don't let me roll back
mistakes, even when I discover them almost immediately...
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
I've got two projects: mynamespace.myprojectA and mynamespace.myprojectB
myprojectB depends on myprojectA. I'm using setuptools 0.6c8 to manage both.
Both projects are registered using 'setup.py develop'. Both projects are
accessible from an interactive interpreter:
PS C:\Users\me\projects> python
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)]
Type "help", "copyright", "credits" or "license" for more information.
>>> import mynamespace.myprojectA
>>> import mynamespace.myprojectB
>>> from mynamespace.myprojectA import mymoduleZ
However, when I run 'setup.py test' in myprojectB, the tests fail with:
File ".mymoduleZ.py", line NNN, in [some context]
from mynamespace.myprojectA.mymoduleZ import MyClassC
ImportError: No module named myprojectA.mymoduleZ
In setup.py, the test_suite is nose.collector.
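For reference, a minimal sketch of the kind of configuration involved (illustrative, not my exact files):

    # setup.py for mynamespace.myprojectA (sketch)
    from setuptools import setup, find_packages

    setup(
        name='mynamespace.myprojectA',
        version='1.0',
        packages=find_packages(),
        namespace_packages=['mynamespace'],  # declare the shared root package
        test_suite='nose.collector',
    )

Each project's mynamespace/__init__.py contains only the standard declaration:

    __import__('pkg_resources').declare_namespace(__name__)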
I searched and couldn't find anyone else with this problem. Is this a
supported configuration? Is there something I can do to make tests work
with interdependent projects with the same root namespace?
If there's not something obvious I should be doing differently, I'm happy to
put together a minimal test case that reproduces the problem. Any
suggestions are appreciated.
Jason R. Coombs
I am a research programmer at the NYU School of Engineering. My colleagues
(Trishank Kuppusamy and Justin Cappos) and I are requesting community
feedback on our proposal, "Surviving a Compromise of PyPI." The two-stage
proposal can be reviewed online at:
Summary of the Proposal:
"Surviving a Compromise of PyPI" proposes how the Python Package Index
(PyPI) can be amended to better protect end users from altered or malicious
packages, and to minimize the extent of PyPI compromises against affected
users. The proposed integration allows package managers such as pip to be
more secure against various types of security attacks on PyPI and defend
end users from attackers responding to package requests. Specifically,
these PEPs describe how PyPI processes should be adapted to generate and
incorporate repository metadata, which are signed text files that describe
the packages and metadata available on PyPI. Package managers request the
metadata (along with the packages) from PyPI to verify the authenticity
of packages before they are installed. The changes to PyPI and tools will
be minimal by leveraging a library, The Update Framework
<https://github.com/theupdateframework/tuf>, that generates and
transparently validates the relevant metadata.
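A conceptual sketch of the client-side check being described -- note this is not the TUF library's actual API, and the metadata layout is simplified for illustration:

    import hashlib
    import json

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PublicKey,
    )

    def verify_download(metadata_bytes, signature, repo_key_bytes,
                        filename, package_bytes):
        # 1. Verify the repository metadata was signed by a key the
        #    repository controls (raises InvalidSignature if tampered with).
        key = Ed25519PublicKey.from_public_bytes(repo_key_bytes)
        key.verify(signature, metadata_bytes)

        # 2. Verify the downloaded package matches the hash recorded in the
        #    signed metadata, so a mirror or CDN cannot swap in another file.
        metadata = json.loads(metadata_bytes)
        expected = metadata['targets'][filename]['hashes']['sha256']
        if hashlib.sha256(package_bytes).hexdigest() != expected:
            raise ValueError('%s does not match its signed metadata' % filename)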
The first stage of the proposal (PEP 458
<http://legacy.python.org/dev/peps/pep-0458/>) uses a basic security model
that supports verification of PyPI packages signed with cryptographic keys
stored on PyPI, requires no action from developers and end users, and
protects against malicious CDNs and public mirrors. To support continuous
delivery of uploaded packages, PyPI administrators sign for uploaded
packages with an online key stored on PyPI infrastructure. This level of
security prevents packages from being accidentally or deliberately tampered
with by a mirror or a CDN because the mirror or CDN will not have any of
the keys required to sign for projects.
The second stage of the proposal (PEP 480
<http://legacy.python.org/dev/peps/pep-0480/>) is an extension to the basic
security model (discussed in PEP 458) that supports end-to-end verification
of signed packages. End-to-end signing allows both PyPI and developers to
sign for the packages that are downloaded by end users. If the PyPI
infrastructure were to be compromised, attackers would be unable to serve
malicious versions of these packages without access to the project's
developer key. As in PEP 458, no additional action is required by end
users. However, PyPI administrators will need to periodically (perhaps
every few months) sign metadata with an offline key. PEP 480 also proposes
an easy-to-use key management solution for developers and a way to interface
with a potential build farm on PyPI infrastructure, and discusses the
security benefits of end-to-end signing. The second stage of the proposal
simultaneously supports real-time project registration and developer
signatures, and when configured to maximize security on PyPI, less than 1%
of end users will be at risk even if an attacker controls PyPI and goes
undetected for a month.
We thank Nick Coghlan and Donald Stufft for their valuable contributions,
and Giovanni Bajo and Anatoly Techtonik for their feedback.
PEP 458 & 480 authors.
As a new Twine maintainer I've been running into questions like:
* Now that Warehouse doesn't use "register" anymore, can we deprecate it from distutils, setuptools, and twine? Are any other package indexes or upload tools using it? https://github.com/pypa/twine/issues/311
* It would be nice if Twine could depend on a package index providing an HTTP 201 response in response to a successful upload, and fail on 200 (a response some non-package-index servers will give to an arbitrary POST request).
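To illustrate that second point, the check an upload tool would like to be able to rely on is roughly this (a sketch, not Twine's actual code; the payload details are elided):

    import requests

    def upload_distribution(files, url='https://upload.pypi.org/legacy/'):
        resp = requests.post(url, files=files)
        # An arbitrary web server may answer 200 to any POST; only an
        # explicit 201 Created unambiguously signals a successful upload.
        if resp.status_code != 201:
            raise RuntimeError('unexpected response: %d' % resp.status_code)
        return resp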
I do not see specifications to guide me here, e.g., in the official guidance on hosting one's own package index https://packaging.python.org/guides/hosting-your-own-index/ . PEP 301 was long enough ago that it's due an update, and PEP 503 only concerns browsing and download, not upload.
I suggest that I write a PEP specifying an API for uploading to a Python package index. This PEP would partially supersede PEP 301 and would document the Warehouse reference implementation. I would write it in collaboration with the Warehouse maintainers who will develop the reference implementation per pypa/warehouse/issues/284 and maybe add a header referring to compliance with this new standard. And I would consult with the maintainers of packaging and distribution tools such as zest.releaser, flit, poetry, devpi, pypiserver, etc.
Per Nick Coghlan's formulation, my specific goal here would be close to:
> Documenting what the current upload API between twine & warehouse actually is, similar to the way PEP 503 focused on describing the status quo, without making any changes to it. That way, other servers (like devpi) and other upload clients have the info they need to help ensure interoperability.
Since Warehouse is trying to redo its various APIs in the next several months, I think it might be more useful to document and work with the new upload API, but I'm open to feedback on this.
After a little conversation here on distutils-sig, I believe my steps would be:
1. start a very early PEP draft with lots of To Be Determined blanks, submit as a PR to the python/peps repo, and share it with distutils-sig
2. ping maintainers of related tools
3. discuss with others at the packaging sprints https://wiki.python.org/psf/PackagingSprints next week
4. revise and get consensus, preferably mostly on this list
5. finalize PEP and get PEP accepted by BDFL-Delegate
6. coordinate with PyPA, maintainers of `distutils`, maintainers of packaging and distribution tools, and documentation maintainers to implement PEP compliance
Thoughts are welcome. I originally posted this at https://github.com/pypa/packaging-problems/issues/128 .
I have a pair of ideas about Linux binary wheels, which are currently (I believe) not supported.
It seems like it should be possible to support Linux binary wheels using
one or both of these technologies:
* https://build.opensuse.org/ is a service that builds packages for a
variety of Linuxes
* Docker could be used to automate the building of wheels for a handful of
Linuxes with minimal dependencies. It seems like if you get
Debian/Ubuntu/Mint, Fedora/CentOS, openSUSE and perhaps one or two others,
that would cover almost all Linuxes and Linux users.
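A rough sketch of the Docker idea, driven from Python (the image name and system packages are illustrative assumptions):

    # Build a wheel inside an older-distribution container so the binary
    # links against an old glibc and runs on newer distributions.
    import os
    import subprocess

    BUILD_SCRIPT = (
        "yum install -y gcc python-devel python-pip && "
        "cd /io && pip wheel . -w dist/"
    )

    subprocess.check_call([
        "docker", "run", "--rm",
        "-v", "%s:/io" % os.getcwd(),
        "centos:6", "/bin/bash", "-c", BUILD_SCRIPT,
    ])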
I'm up to my ears in commitments already, but I sincerely hope someone will
grab onto one or both of these possibilities and run with them.
Thanks for reading.
What timeline are we thinking is realistic for rolling out the new pip
resolver? (latest update on resolver work:
https://pradyunsg.me/blog/2019/08/06/pip-update-2/ ) I'm re-upping this
question which I originally asked on a GitHub issue about the rollout:
https://github.com/pypa/pip/issues/6536#issuecomment-521696430 and would
prefer to corral answers there.
This depends a lot on Pradyun's health and free time, and code review
availability from other pip maintainers, and whether we get some grants
we're applying for, but I think the sequence is something like:
1) build logic refactor: in progress, done sometime December-February
2) UX research and design, test infrastructure building, talking to
downstreams and users about config flags and transition schedules: we
need funding for this; earliest start is probably December, will take
3) introduce the abstractions defined in resolvelib/zazo while doing
alpha testing: will take a few months, so, conservatively estimating,
4) adopting better dependency resolution and do beta testing: ?
Is this right? What am I missing?
I ask because some of the info-gathering work is stuff a project manager
and/or UX researcher should do, in my opinion, and because some progress
on the increase in metadata strictness
https://github.com/pypa/packaging-problems/issues/264 and other issues
might help with concerns people have brought up here.
PyPI project manager, PyPA member & coordinator, and person who seems to
write a lot of grant applications
I have a PEP 517-compatible backend which works with pip to install from
an sdist (via an internal wheel). However, there are a couple of problems:
pip swallows all output from the backend. Is there any way for the user
to see the output (builds can take several minutes)?
I would like to pass options from the pip command line to the backend,
but neither --global-option nor --install-option has any effect (the
config_settings argument is always None). How do I achieve this?
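For reference, PEP 517 defines the hook signature where any such settings would arrive; a backend sees them like this (the 'verbose' key is purely illustrative -- the spec leaves the contents of config_settings up to each backend):

    def build_wheel(wheel_directory, config_settings=None,
                    metadata_directory=None):
        settings = config_settings or {}          # pip currently passes None
        verbose = settings.get('verbose', False)  # illustrative option name
        # ... build the wheel into wheel_directory, honouring settings ...
        return 'example-1.0-py3-none-any.whl'     # basename of created wheel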
Goal - using wheels rather than RPM and/or installp formats for
distributing binary modules.
One reason (there are likely more) - using wheels means packages/modules
can be installed in a virtualenv rather than requiring that they first be
installed in the system environment using installp/rpm/yum (with root
authority). This alone has been reason enough for me to do the research.
The use of pip and wheels is commonplace in the worlds of Linux, macOS
and Windows. Not so for AIX -- not because it couldn't be commonplace.
The current situation for AIX is comparable to the issue Linux faced
when PEP 513 was written:
"Currently, distribution of binary Python extensions for Windows and OS
X is straightforward. ...
For Linux, the situation is much more delicate. ...
Build tools using PEP 425 platform tags do not track information
about the particular Linux distribution or installed system libraries,
and instead assign all wheels the too-vague linux_i686 or linux_x86_64
tags. Because of this ambiguity, ..."
The root cause of the *ambiguity* that Linux systems had is not the
ambiguity that AIX faces. AIX has provided a consistent way to "tag" its
runtime environment since at least 2007 (AIX 5.3 TL7). Since that time,
IBM AIX has also *guaranteed* binary compatibility for migration of
applications from old to new OS levels.
I would like to see these tags added - at a minimum, that they can be
retrieved by something such as sysconfig.get_config_var('AIX_BLD_TAG'). It
would be "nice" to see sysconfig.get_platform() updated to include these
values from the running system.
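Hypothetical usage, if such a variable were added (neither the config variable nor the extended platform string exists today; this is what the proposal would enable):

    import sysconfig

    # Proposed: a stable build/compatibility tag exposed by the interpreter.
    build_tag = sysconfig.get_config_var('AIX_BLD_TAG')  # hypothetical name

    # Today this returns something like 'aix-7.2'; the proposal would fold
    # the tag (and possibly bitness) into this string as well.
    print(sysconfig.get_platform())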
Further, while pip-related tools can add the "bitness" to the platform
tags, I would like to see something added to the AIX get_platform() tag -
(b32, ppc, aix32), (b64, ppc64, aix64) for 32- and 64-bit operation,
respectively - as that is a "running" environment attribute. Open to
other ideas on what the bitness tag should be. IMHO - anything is better
than nothing. Maybe this could be considered a bug rather than a new feature.
Thank you for your feedback.
Resending since I messed up the reply.
On Sun, 18 Aug 2019 at 8:29 PM, Michael <aixtools(a)felt.demon.nl> wrote:
> Would be 'nice' if old meant 3.6 and earlier as I would like to target 3.7
> and later.
I understand we're talking about generating the PEP 425 tags here. If not,
please ignore this next paragraph.
The "algorithm" essentially needs to be a check for "is this an AIX python"
and also generate/provide a signal of what the system/binary compatibility
looks like. This algorithm would executed on a target Python interpreter
(which can be anything from Python 2.7, 3.3, 3.7 or a hypothetical 3.10),
to check if that interpreter compatible with whatever tag you plan to
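A rough sketch of that shape of check (the tag format here is invented just to show the idea, not a proposed final format):

    import sys
    import sysconfig

    def is_aix_interpreter():
        # The "is this an AIX python" check.
        return sys.platform.startswith('aix')

    def candidate_tag():
        # Invented format: platform string plus the bitness of *this*
        # interpreter; the real field layout is the open question here.
        if not is_aix_interpreter():
            return None
        bits = 64 if sys.maxsize > 2 ** 32 else 32
        platform = sysconfig.get_platform().replace('-', '_').replace('.', '_')
        return '%s_%d' % (platform, bits)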
Reiterating - continue only here, or expand and include python-dev?
python-dev has, practically, delegated Packaging discussions to this list.
(opinion) If you're okay with changing communication channel, I have a
strong preference for using discuss.python.org/c/packaging instead. Anyway,
most of the packaging-related discussions have been happening there lately.