As a new Twine maintainer I've been running into questions like:
* Now that Warehouse doesn't use "register" anymore, can we deprecate it from distutils, setuptools, and twine? Are any other package indexes or upload tools using it? https://github.com/pypa/twine/issues/311
* It would be nice if Twine could depend on a package index providing an HTTP 201 response in response to a successful upload, and fail on 200 (a response some non-package-index servers will give to an arbitrary POST request).
I do not see specifications to guide me here, e.g., in the official guidance on hosting one's own package index https://packaging.python.org/guides/hosting-your-own-index/ . PEP 301 was long enough ago that it's due an update, and PEP 503 only concerns browsing and download, not upload.
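The strict status-code check proposed above can be sketched as follows. This is a minimal illustration, not the behavior of any current index or client; the function name is mine:

```python
# Sketch of the proposed strict check: treat 201 Created as the only
# definitive success, and flag a bare 200 OK as suspicious, since many
# generic web servers answer 200 to an arbitrary POST.

def check_upload_response(status_code: int) -> bool:
    """Return True for 201 Created; raise for anything else."""
    if status_code == 201:
        return True
    if status_code == 200:
        # A 200 here may mean the POST never reached a real package index.
        raise RuntimeError(
            "got 200 OK instead of 201 Created; "
            "the target may not be a package index"
        )
    raise RuntimeError("upload failed with HTTP %d" % status_code)
```

A client like twine could apply such a check after its upload POST, once a standard actually guarantees the 201.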
I suggest that I write a PEP specifying an API for uploading to a Python package index. This PEP would partially supersede PEP 301 and would document the Warehouse reference implementation. I would write it in collaboration with the Warehouse maintainers who will develop the reference implementation per pypa/warehouse/issues/284 and maybe add a header referring to compliance with this new standard. And I would consult with the maintainers of packaging and distribution tools such as zest.releaser, flit, poetry, devpi, pypiserver, etc.
Per Nick Coghlan's formulation, my specific goal here would be close to:
> Documenting what the current upload API between twine & warehouse actually is, similar to the way PEP 503 focused on describing the status quo, without making any changes to it. That way, other servers (like devpi) and other upload clients have the info they need to help ensure interoperability.
Since Warehouse is trying to redo its various APIs in the next several months, I think it might be more useful to document and work with the new upload API, but I'm open to feedback on this.
After a little conversation here on distutils-sig, I believe my steps would be:
1. start a very early PEP draft with lots of To Be Determined blanks, submit as a PR to the python/peps repo, and share it with distutils-sig
2. ping maintainers of related tools
3. discuss with others at the packaging sprints https://wiki.python.org/psf/PackagingSprints next week
4. revise and get consensus, preferably mostly on this list
5. finalize PEP and get PEP accepted by BDFL-Delegate
6. coordinate with PyPA, maintainers of `distutils`, maintainers of packaging and distribution tools, and documentation maintainers to implement PEP compliance
Thoughts are welcome. I originally posted this at https://github.com/pypa/packaging-problems/issues/128 .
I have a pair of ideas about Linux binary wheels, which are currently (I
believe) unsupported. It seems like it should be possible to support
Linux binary wheels using one or both of these technologies:
* https://build.opensuse.org/ is a service that builds packages for a
variety of Linuxes
* Docker could be used to automate the building of wheels for a handful of
Linuxes with minimal dependencies. It seems like if you get
Debian/Ubuntu/Mint, Fedora/CentOS, openSUSE and perhaps one or two others,
that would cover almost all Linuxes and Linux users.
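The Docker idea above could be orchestrated from Python along these lines. This is only a sketch: the image names, mount path, and build command are illustrative assumptions, not a tested pipeline:

```python
# Hypothetical orchestration of the Docker idea: run the same wheel
# build inside several minimal base images.  Everything here (images,
# paths, build command) is an illustrative assumption.

def docker_build_command(image, src_dir):
    """Compose a `docker run` invocation that builds a wheel from src_dir."""
    return [
        "docker", "run", "--rm",
        "-v", "%s:/src" % src_dir,          # mount the source tree
        image,
        "sh", "-c", "cd /src && python setup.py bdist_wheel",
    ]

# A driver loop might then do something like:
# import subprocess
# for image in ("ubuntu:latest", "fedora:latest", "opensuse/leap"):
#     subprocess.check_call(docker_build_command(image, "/path/to/pkg"))
```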
I'm up to my ears in commitments already, but I sincerely hope someone
will grab onto one or both of these possibilities and run with them.
Thanks for reading.
What timeline are we thinking is realistic for rolling out the new pip
resolver? (latest update on resolver work:
https://pradyunsg.me/blog/2019/08/06/pip-update-2/ ) I'm re-upping this
question which I originally asked on a GitHub issue about the rollout:
https://github.com/pypa/pip/issues/6536#issuecomment-521696430 and would
prefer to corral answers there.
This depends a lot on Pradyun's health and free time, and code review
availability from other pip maintainers, and whether we get some grants
we're applying for, but I think the sequence is something like:
1) build logic refactor: in progress, done sometime December-February
2) UX research and design, test infrastructure building, talking to
downstreams and users about config flags and transition schedules: we
need funding for this; earliest start is probably December, and it will
take some months
3) introduce the abstractions defined in resolvelib/zazo while doing
alpha testing: will take a few months, so, conservatively estimating: ?
4) adopt better dependency resolution and do beta testing: ?
Is this right? What am I missing?
I ask because some of the info-gathering work is stuff a project manager
and/or UX researcher should do, in my opinion, and because some progress
on the increase in metadata strictness
https://github.com/pypa/packaging-problems/issues/264 and other issues
might help with concerns people have brought up here.
PyPI project manager, PyPA member & coordinator, and person who seems to
write a lot of grant applications
I have a PEP 517 compatible backend which works with pip to install from
an sdist (via an internal wheel). However, there are a couple of issues.
pip swallows all output from the backend. Is there any way for the user
to see the output (builds can take several minutes)?
I would like to pass options from the pip command line to the backend,
but neither --global-option nor --install-option has any effect (the
config_settings argument is always None). How do I achieve this?
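For reference, the config_settings mechanism in question looks like this on the backend side. The backend below is a made-up illustration (the "optimize" option is hypothetical); only the hook signature itself comes from PEP 517:

```python
# Minimal sketch of the PEP 517 build_wheel hook.  The config_settings
# dict is the standard mechanism for frontend-to-backend options; a
# backend must tolerate receiving None.  The "optimize" key is a
# hypothetical example option, not a real convention.

def build_wheel(wheel_directory, config_settings=None, metadata_directory=None):
    settings = config_settings or {}
    optimize = settings.get("optimize", "0")   # hypothetical option
    # A real backend would build the wheel into wheel_directory here
    # and return the wheel's filename.
    return "demo-1.0-py3-none-any.whl (optimize=%s)" % optimize
```

The open question in the message is precisely which pip flag, if any, populates that dict.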
Goal - using wheels rather than RPM and/or installp formats for
distributing binary modules.
One reason (there are likely more) - using wheels means packages/modules
can be installed in a virtualenv rather than requiring that they first
be installed in the system environment using installp/rpm/yum (with root
authority). This alone has been reason enough for me to do the research.
The use of pip and wheels is commonplace in the worlds of Linux, macOS,
and Windows. Not so for AIX - not because it couldn't be commonplace.
The current situation for AIX is comparable to the initial issue Linux
faced when PEP 513 was written:
"Currently, distribution of binary Python extensions for Windows and OS
X is straightforward. ...
For Linux, the situation is much more delicate. ...
Build tools using PEP 425 platform tags  do not track information
about the particular Linux distribution or installed system libraries,
and instead assign all wheels the too-vague linux_i686 or linux_x86_64
tags. Because of this ambiguity, ..."
The root cause of the *ambiguity* that Linux systems had is not an
ambiguity that AIX faces. AIX has provided a consistent way to "tag" its
runtime environment since at least 2007 (AIX 5.3 TL7). Since that time
IBM AIX has also *guaranteed* binary compatibility for migration of
applications from old to new OS levels.
I would like to see these tags added - at a minimum so that they can be
retrieved by something such as sysconfig.get_var('AIX_BLD_TAG'). It
would be "nice" to see sysconfig.get_platform() updated to include these
values from the running system.
Further, while pip-related tools can add the "bitness" to the platform
tags, I would like to see something added to the AIX get_platform() tag
- e.g., (b32, ppc, aix32) or (b64, ppc64, aix64) for 32- and 64-bit
operation, respectively - as that is a "running" environment attribute.
I am open to other ideas on what the bitness tag should be. IMHO,
anything is better than nothing. Maybe this could be considered a bug
rather than a new feature.
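One way such a bitness attribute could be derived is from the running interpreter itself rather than the OS. A minimal sketch, using the aix32/aix64 spellings floated above (the tag names are only the candidates from this message, not a standard):

```python
# Derive the "bitness" of the running interpreter.  sys.maxsize is
# 2**31 - 1 on a 32-bit build and 2**63 - 1 on a 64-bit build, so this
# works regardless of how the OS reports itself.  The aix32/aix64
# spellings are just the candidates mentioned in the discussion.
import sys

def bitness_tag():
    """Return 'aix64' for a 64-bit interpreter, else 'aix32'."""
    return "aix64" if sys.maxsize > 2**32 else "aix32"
```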
Thank you for your feedback.
Resending since I messed up the reply.
On Sun, 18 Aug 2019 at 8:29 PM, Michael <aixtools(a)felt.demon.nl> wrote:
> Would be 'nice' if old meant 3.6 and earlier as I would like to target 3.7
> and later.
I understand we're talking about generating the PEP 425 tags here. If
not, please ignore this next paragraph.
The "algorithm" essentially needs to check "is this an AIX Python" and
also generate/provide a signal of what the system/binary compatibility
looks like. This algorithm would be executed on a target Python
interpreter (which can be anything from Python 2.7, 3.3, or 3.7 to a
hypothetical 3.10), to check whether that interpreter is compatible with
whatever tag you plan to use.
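A compatibility check of the kind described might look like the sketch below. The (version, release, TL) tuple format is the proposal from this thread, not an accepted standard, and the rule encoded is the forward-only guarantee discussed earlier (older builds run on newer systems, not vice versa):

```python
# Sketch: given the oslevel tag a wheel was built with and the tag of
# the running AIX system, decide whether the wheel is installable.
# AIX guarantees binary compatibility when migrating from old to new
# OS levels, so an older-or-equal build tag is acceptable.  Tags here
# are (version, release, technology_level) tuples, a format proposed
# in this thread rather than an established one.

def aix_compatible(build_tag, system_tag):
    """True if a wheel built at build_tag should run at system_tag."""
    # Tuple comparison orders by version, then release, then TL, which
    # matches the "low levels accepted by high levels" rule.
    return build_tag <= system_tag

# e.g. a wheel built on AIX 7.1 TL7 runs on 7.2 TL4, but not the reverse:
# aix_compatible((7, 1, 7), (7, 2, 4)) -> True
# aix_compatible((7, 2, 4), (7, 1, 7)) -> False
```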
Reiterating - continue only here, or expand and include python-dev?
python-dev has, practically, delegated Packaging discussions to this list.
(opinion) If you're okay with changing communication channel, I have a
strong preference for using discuss.python.org/c/packaging instead. Anyway,
most of the packaging-related discussions have been happening there lately.
Recently I have become more interested in looking at how to simplify
packaging for AIX.
Currently I more or less ignore anything pip and/or distutils could mean
for a python "user" (e.g., AIX sys admin and/or Python developer working
on AIX) as I just run "pip install XYZ" locally, find what constitutes
the equivalent of a bdist, and repackage those as 'installp' packages.
For someone with root access this is ok - as installp requires root (or
RBAC equivalent) to install.
As I would like to make installing modules more Python-like, I asked
around and was pointed at piwheels (which looks very useful for my
aspiration) but also at PEP 425 and the platform tag.
There are several things I need to learn about distutils, and how
packaging tagging needs to be considered when, e.g., Python is built on
AIX 7.1 and the module is built on AIX 7.2 - there may be AIX ABI
differences that would prevent the module built on AIX 7.2 from working
properly on AIX 7.1 while the same module built on AIX 7.1 will run with
no issue on AIX 7.2. (A simplified explanation is that libc.a on AIX 7.2
knows how to work with applications linked against libc.a from earlier
versions of AIX, but earlier versions of AIX do not know how to deal
with libc.a from AIX 7.2.)
Taken to an extreme: Python(2|3) and modules built on AIX 5.3 TL7 run,
unaltered, on all levels of AIX including the latest AIX 7.2 - while my
expectation is that executable and modules built on AIX 7.2 TL5 (latest
level) might not run on AIX 7.2 TL4.
In short, "version", "release" are in themselves not enough - the TL
(technology level) should also be considered.
Additionally, what I also see "missing" is the
platform.architecture() value. By default, this is still 32bit on AIX
- but it is important - especially for pre-built eggs and/or wheels.
In the AIX world the OS-level name scheme is usually: VR00-TL-SP
(Version, Release, TechnologyLevel, ServicePack). There is also a value
comparable to a build date, but for distutils purposes it has no value
(that I can think of).
So, to my question: currently, for AIX get_host_platform returns
something such as: aix-6.1.
Considering the above: what would you - as someone more experienced with
multi-oslevel packaging, where low-level builds are accepted by higher
levels but not the reverse - suggest? In short:
"What should the AIX get_host_platform() string contain?"
At a minimum I foresee: return "%s-%s.%s-%s" % (osname, version, release, tl)
But this does not address potential issues where the TL level within a
version.release has changed. (X.Y TL5-built packages MIGHT work on
X.Y TL4, but there is no reason to expect them to.)
So, I would look to something that remains recognizable, but uses more
detail; e.g., oslevel -s returns a string such as: 6100-09-10-1731.
Then, using the equivalent of:
version, release, service, builddate = '6100-09-10-1731'.split('-')
return "%s-%s.%s.%s-%s" % (osname, version, release, service, builddate)
Note: no special authority is needed to run "oslevel -s", but it does
take time. So having a way, in the library, to only have to actually
call for the value once would be a great improvement. I can imagine a
way to do it (store a static value somewhere; when it is NULL, aka not
initialized, call the program, otherwise return the stored value) - but
"WHERE" - in distutils, or (ideally?) elsewhere in a more "central"
library?
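The "call oslevel -s only once" idea above can be sketched with a standard-library cache; where such a helper should live (distutils or somewhere more central) remains the open question. The tag format below deliberately drops the build date, per the earlier observation that it carries no distutils value, and is only one candidate spelling:

```python
# Sketch: cache the result of `oslevel -s` so the external program is
# executed at most once per process.  functools.lru_cache on a zero-
# argument function gives exactly that behavior.
import subprocess
from functools import lru_cache

@lru_cache(maxsize=1)
def aix_oslevel():
    """Run `oslevel -s` once and reuse the result thereafter."""
    out = subprocess.check_output(["oslevel", "-s"])   # e.g. b"6100-09-10-1731"
    return out.decode("ascii").strip()

def aix_platform_tag(oslevel_string):
    """Build one candidate platform string from an `oslevel -s` value,
    dropping the build-date field."""
    version, release, service, _builddate = oslevel_string.split("-")
    return "aix-%s.%s-%s" % (version, release, service)
```

On an AIX box, get_host_platform() could then return aix_platform_tag(aix_oslevel()) without repeated subprocess calls.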
Starting a new thread - I think the first one has served its purpose.
I use, successfully, pip build; pip install to build and install modules
for Python. The files "installed" by pip install (to .../site-packages)
I repackage as installp (AIX installp manager) packages.
This works - but - requires root access (to execute installp) and I do
not expect this will support anything like virtualenv.
Further, sometimes a .egg-info file is created, sometimes not. And when
it is created it does not always include an "AIX" tag.
As an example, I show some data from the modules I "pip built and
installed" and then repackaged as installp:
root@x066:[/home/root]lslpp -L | grep python3 | grep -v adt | sort
aixtools.python3.rte 22.214.171.124 C F python python3
aixtools.python3.six.rte 126.96.36.199 C F tools six 26-Jul-2019
root@x066:[/home/root]find /opt/lib/python3.7 -name \*.egg\*
There are no .whl (except for pip and setuptools).
I am starting to guess that getting a wheel built requires more than
what most packages are doing.
Further, I am sure cffi should be including an AIX platform tag - as it
generates a .so file that depends on standard libraries from AIX (libc,
for example).
I'll have to read up on what is actually in the egg-info (using od -c I
can read the file).
It seems to be very close to what I find here:
-rw-r--r-- 1 bin bin 4783 Jul 26 15:08 PKG-INFO
-rw-r--r-- 1 bin bin 11759 Jul 26 15:08 SOURCES.txt
-rw-r--r-- 1 bin bin 1 Jul 26 15:08
-rw-r--r-- 1 bin bin 144 Jul 26 15:08 native_libs.txt
-rw-r--r-- 1 bin bin 1 Jul 26 15:08 not-zip-safe
-rw-r--r-- 1 bin bin 381 Jul 26 15:08 requires.txt
-rw-r--r-- 1 bin bin 46 Jul 26 15:08 top_level.txt
In short, I am just a user of pip and setuptools - but I would like to
utilize them for packaging "binary" modules for AIX that do not require
installp (or the alternate package manager - rpm).
Some pointers into the essential documentation would be appreciated (I
am starting at https://setuptools.readthedocs.io/en/latest/, but it
seems like quite a lot to absorb).
Thanks (for remembering what it was like when you first started!),