Idea: perennial manylinux tag

Hi all,

The manylinux1 -> manylinux2010 transition has turned out to be very difficult. Timeline so far:

March 2017: CentOS 5 went EOL
April 2018: PEP 571 accepted
May 2018: support for manylinux2010 lands in warehouse
November 2018: support lands in auditwheel, and pip master
December 2018: 21 months after CentOS 5 EOL, we still don't have an official build environment, or support in a pip release

We'll get through this, but it's been super painful, and maybe we can change things somehow so it will suck less next time.

We don't have anything like this pain on Windows or macOS. We never have to update pip, warehouse, etc., after those OSes hit EOLs. Why not?

On Windows, we have just two tags: "win32" and "win_amd64". These are defined to mean something like "this wheel will run on any recent-ish Windows system". So the meaning of the tag actually changes over time: it used to be that if a wheel said it ran on win32, that meant it would work on winxp, but since winxp hit EOL people started uploading "win32" wheels that don't work on winxp, and that's worked fine.

On macOS, the tags look like "macosx_10_9_x86_64". So here we have the OS version embedded in the tag. This means that we do occasionally switch which tags we're using, kind of like how manylinux1 -> manylinux2010 is intended to work. But, unlike for the manylinux tags, defining a new macosx tag is totally trivial: every time a new OS version is released, the tag springs into existence without any human intervention. Warehouse already accepts uploads with this tag; pip already knows which systems can install wheels with this tag, etc.

Can we take any inspiration from this for manylinux?

We could do the Windows thing, and have a plain "manylinux" tag that means "any recent-ish glibc-based Linux". Today it would be defined as "any distro newer than CentOS 6". When CentOS 6 goes out of service, we could tweak the definition to "any distro newer than CentOS 7". Most parts of the toolchain wouldn't need to be updated, though, because the tag wouldn't change, and by assumption, enforcement wouldn't really be needed, because the only people who could break would be ones running on unsupported platforms. Just like happens on Windows.

We could do the macOS thing, and have a "manylinux_${glibc version}" tag that means "this package works on any Linux using glibc newer than ${glibc version}". We're already using this as our heuristic to handle the current manylinux profiles, so e.g. manylinux1 is effectively equivalent to manylinux_2_5, and manylinux2010 will be equivalent to manylinux_2_12. That way we'd define the manylinux tags once, get support into pip and warehouse and auditwheel once, and then in the future the only thing that would have to change to support new distro releases or new architectures would be to set up a proper build environment.

What do y'all think?

-n
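
As a concrete illustration of the macOS-style option described above, here is a small Python sketch of what the installer-side rule would amount to: a wheel tagged with a glibc version is acceptable if the system's glibc is at least that version and the architecture matches. The tag spelling and helper name are purely illustrative, not part of the proposal:

    import re

    def perennial_tag_covers(tag, system_glibc, system_arch):
        """Would a wheel tagged e.g. 'manylinux_2_12_x86_64' be installable on a
        system with the given glibc version (a (major, minor) tuple) and arch?"""
        m = re.match(r"^manylinux_(\d+)_(\d+)_(.+)$", tag)
        if m is None:
            return False
        required = (int(m.group(1)), int(m.group(2)))
        return system_glibc >= required and system_arch == m.group(3)

    # manylinux1 is effectively manylinux_2_5, manylinux2010 is manylinux_2_12:
    print(perennial_tag_covers("manylinux_2_12_x86_64", (2, 17), "x86_64"))  # True
    print(perennial_tag_covers("manylinux_2_12_x86_64", (2, 5), "x86_64"))   # False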

Yes On Fri, Nov 30, 2018 at 1:27 AM Paul Moore <p.f.moore@gmail.com> wrote:
-- Nathaniel J. Smith -- https://vorpus.org

I'll betray my lack of understanding of how ABIs work: PEP 571 (manylinux2010) defines a set of libraries besides libc which compatible wheels can safely link against, such as glib and libXrender. Most of these are only versioned by the filename suffix (like .so.6), while glibc and a few presumably related pieces (CXXABI, GLIBCXX, GCC) are defined with specific versions (which are the maximum versions for compatible wheels, and the minimum for compatible platforms).

If we move to manylinux tags based purely on the glibc version, what happens to the versions of all the other symbols and libraries? Do we just continue to build on some old version of CentOS and presume that it will work for any reasonably recent Linux distro? Are the other ABI symbol versions tied to the glibc version somehow? When, if ever, does auditwheel update its list of permissible libraries to link against?

Do we lose the ability for a system to explicitly declare that it is or isn't compatible with a given manylinux variant (via the _manylinux module)?

Presumably it would still require a new PEP, and changes to various tools, to allow manylinux wheels based around an alternative libc implementation? Is it worth naming these tags like manylinux_glibc_2_12, to anticipate that possibility? Or is that unnecessary verbosity?

+1 to the overall idea of making it easier to move to new manylinux tags in the future, assuming we can do that without causing lots of compatibility problems.

Thomas

On Fri, Nov 30, 2018, at 8:09 AM, Nathaniel Smith wrote:

On Fri, Nov 30, 2018 at 7:13 AM Thomas Kluyver <thomas@kluyver.me.uk> wrote:
Do we lose the ability for a system to explicitly declare that it is or isn't compatible with a given manylinux variant (via the _manylinux module)?
Good question. Straw man: if _manylinux is importable, and _manylinux.manylinux_compatible is defined, then it must be a callable, and manylinux_compatible(<tag>) returns whether the given tag should be considered supported. Immediate question: should the possible return values be True/False, or a ternary True/False/use-the-default-detection-logic?
Presumably it would still require a new PEP, and changes to various tools, to allow manylinux wheels based around an alternative libc implementation? Is it worth naming these tags like manylinux_glibc_2_12, to anticipate that possibility? Or is that unnecessary verbosity?
In practice, the "many" in "manylinux" has always been code for "glibc-based", so "manylinux_glibc" is kind of redundant. I guess we could call them "linux_glibc_2_12_x86_64", but at this point python devs seem to understand the manylinux name, so changing names would probably cause more confusion than clarity. I'm not sure what to think about the "2" part of the glibc version. I think the reality is that they will never have a "3"? And if they did we have no idea why or what it would mean? I guess we could ask them. -n -- Nathaniel J. Smith -- https://vorpus.org
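
For what it's worth, a minimal sketch of what the straw man above could look like in practice. The module name _manylinux comes from PEP 513; the manylinux_compatible hook and its ternary True/False/None return are only the hypothetical interface being discussed here, not an implemented API:

    # _manylinux.py -- hypothetical module a distro could ship to override detection.
    # Returning True/False forces the decision; returning None means "no opinion,
    # use the installer's default heuristic" (the ternary option discussed above).

    def manylinux_compatible(tag):
        if tag.startswith("manylinux_2_28"):
            return False   # e.g. "our glibc is new enough, but our ABI differs"
        return None        # fall back to the default glibc-version check


    # Installer side: consult _manylinux if it is importable, otherwise fall back.
    def is_tag_supported(tag, default_check):
        try:
            import _manylinux
        except ImportError:
            return default_check(tag)
        hook = getattr(_manylinux, "manylinux_compatible", None)
        if hook is None:
            return default_check(tag)
        verdict = hook(tag)
        if verdict is None:
            return default_check(tag)
        return bool(verdict)

    # Usage (with some default heuristic, e.g. the glibc check from PEP 513):
    # is_tag_supported("manylinux_2_12_x86_64", default_check=my_glibc_heuristic)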

Den lör 1 dec. 2018 kl 09:41 skrev Nathaniel Smith <njs@pobox.com>:
As a Linux Python dev, even if I've sort of learned what "manylinux" means (not least of all from following these discussions :p), I would much prefer the "linux_glibc_2_12_x86_64" style tag. It's nice and explicit and conveys some info, and sort of follows the same style as the macOS ones. Just my 2 cents. Elvis

On Fri, 30 Nov 2018 at 08:12, Nathaniel Smith <njs@pobox.com> wrote:
As a Windows user who doesn't understand the whole Linux ABI situation[1], I can't answer that. But I do think that the goal should be that we *don't* need changes to pip and Warehouse in order to keep Linux wheels current. Whether that's done by not needing new tags (the way Windows does it) or by having a general "pattern" of tags that needs no maintenance (the way macOS does it) I don't know.
Only Linux users can really answer this. But what I will say is that on Windows, anything other than the core system libraries must be bundled in the wheel (so, for example, Pillow bundles the various image handling DLLs). Manylinux (as I understand it) does a certain amount of this, but expects dynamic linking for a much wider set of libraries. Maybe that reflects the same sort of mindset that results in Linux distros "debundling" tools like pip that vendor their dependencies. I'm not going to try to judge whether the Linux or the Windows approach is "right", but I'd be surprised if manylinux can take much inspiration from the Windows approach without confronting this difference in philosophy.

Paul

[1] I certainly don't want to spark any sort of flamewar here, but I do feel a certain wry amusement that the term "DLL Hell" was invented as a criticism of library management practices on Windows, and yet in this context, library management on Windows is pretty much a non-problem, and it's Linux (that prided itself on avoiding DLL hell at the time) that is now struggling with library versioning complexity ;-)

On 2018-11-30 15:35:10 +0000 (+0000), Paul Moore wrote: [...]
You could look at it this way: "Linux" isn't an operating system, it's just a kernel. GNU/Linux distributions are independent and varied operating systems. If you needed to build packages which could be installed on dozens of different competing Windows-based operating systems all of whom recompiled Windows from source in various ways with different features and random versions of system libraries, the problem might look similar for the Windows ecosystem as well. That Windows is a commercial product legally available strictly in precompiled binary form from only one source is what mostly saves it from this particular bit of fun. -- Jeremy Stanley

On Fri, Nov 30, 2018 at 7:35 AM Paul Moore <p.f.moore@gmail.com> wrote:
The Windows and Linux situations are actually almost identical, except for the folklore around them. Both have a small but sufficient set of libraries that you can rely on being there, and that are carefully designed to maintain ABI backwards compatibility over time, and then you have to vendor everything else.

Windows actually used to be worse than Linux at this, because its version of libc wasn't in the set of base libraries, so it had to be vendored along with every app, and you could have all kinds of "fun" if Python and its extensions weren't built against the same libc. But these days they've switched to a Linux-style libc (complete with a clever implementation of glibc-style symbol versioning), so they really are pretty much identical. The hardest thing with distributing binaries on Linux is just convincing Linux hackers that it's OK to do it the same way Windows/macOS do, instead of inventing something more complicated.

"DLL hell" refers to how in the bad old days, the standard practice for apps on Windows was not just to include vendored libraries, but to *store all those vendored libraries in the global libraries directory*, which unsurprisingly led to all kinds of chaos as different apps overwrote each other's vendored libraries.

-n -- Nathaniel J. Smith -- https://vorpus.org

I think either approach works, but if we do go with a glibc-versioned tag, we should make it explicit in the tag, e.g. `manylinux_glibc_{version}`. That way if we ever choose to support musl (for Alpine) we can.

The one question I do have is how the compatibility tags will work for a tagged platform. E.g. if you say manylinux_glibc_2_12 for manylinux2010, then do we generate from 2.12 down to 1.0 (or whatever the floor is for manylinux1)? This would match how compatibility tags work on macOS, where you go from your macOS version all the way down to the first version supporting your CPU architecture.

And just to double-check, I'm assuming we don't want to just jump straight to distro tags and say if you're centos_6 compatible then you're fine? I assume that would potentially over-reach on compatibility in terms of what might be dynamically linked against, but I thought I would ask, because otherwise the glibc-tagged platform will be a unique hybrid: macOS-style versioning without an actual OS restriction.

On Fri, 30 Nov 2018 at 00:10, Nathaniel Smith <njs@pobox.com> wrote:
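
To illustrate the tag-expansion question above, here is a rough sketch of generating candidate glibc-versioned tags from the running system's glibc down to a floor, the way macOS tags are expanded down to the oldest supported release. The tag spelling follows Brett's `manylinux_glibc_{version}` suggestion, the floor of glibc 2.5 corresponds to manylinux1, and the assumption that the glibc major version stays at 2 is just that, an assumption:

    def candidate_glibc_tags(system_glibc, arch, floor=(2, 5)):
        """Yield hypothetical glibc-versioned tags from the system's glibc version
        (a (major, minor) tuple) down to the floor, newest first."""
        major, minor = system_glibc
        for m in range(minor, floor[1] - 1, -1):
            yield "manylinux_glibc_{}_{}_{}".format(major, m, arch)

    # On a glibc 2.17 x86_64 system this yields manylinux_glibc_2_17_x86_64,
    # manylinux_glibc_2_16_x86_64, ... down to manylinux_glibc_2_5_x86_64:
    print(list(candidate_glibc_tags((2, 17), "x86_64")))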

Also betraying the lack of knowledge of how this works, I read this section in PEP 513 (which defines manylinux1):
To be eligible for the manylinux1 platform tag, a Python wheel must therefore both (a) contain binary executables and compiled code that links only to libraries with SONAMEs included in the following list:
.… libglib-2.0.so.0

Does this mean that only tags down to 2.0 need to be generated?

TP

On 30.11.18 19:10, Brett Cannon wrote:
while a distro tag might be overkill, just encoding glibc might not be enough. At least libstdc++ can be configured two ways (--with-default-libstdcxx-abi=old|new, --disable-libstdcxx-dual-abi), and usually you can't run code built for the dual abi on platforms which only have the old abi. Not sure if 32bit x86 wheels are still covered, but the recent move of Fedora to SSE math on these systems might show interesting results when run on a system using x87 math (although the calling conventions are the same).

On Tue, 4 Dec 2018 at 23:51, Matthias Klose <doko@ubuntu.com> wrote:
Right, the kinds of issues you mention are why I think it's important to keep the "many" qualifier in the name (since there are additional constraints beyond just the glibc version), and why *something* still needs to define what those additional constraints actually are (even if that something becomes "the manylinux build environment project" rather than "distutils-sig via the PEP process"). The only aspect this proposal would change is making it possible to infer the platform compatibility checking *heuristic* from the wheel name, rather than needing a lookup table. Installers that wanted a more robust heuristic could still add extra checks based on the actual linking constraints defined by the reference build environment. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

FYI, I've started a discussion on the next manylinux spec here: https://discuss.python.org/t/the-next-manylinux-specification/ On Thu, Dec 6, 2018 at 4:21 AM Nick Coghlan <ncoghlan@gmail.com> wrote:

It sounds like I should explain better how things currently work :-).

The original manylinux1 spec is PEP 513. But of course it's just text -- it's a useful reference, but it doesn't do much by itself. And when we wrote it we had no idea how this would actually work out. In practice, there are two pieces to manylinux1's implementation that work together to make it successful.

First, there's pip's gatekeeping logic. If you put up a manylinux1 wheel on pypi, then pip will install it on any python that's built against glibc 2.5 or greater, on x86-64 or x86-32. That means not ancient distros like CentOS 4 (its glibc is too old), and not exotic distros like Alpine or Android (they don't use glibc), but it includes all vaguely-modern mainstream desktop or server distros. So in practice the definition of a manylinux1 wheel is "I promise this wheel will work on any system with glibc 2.5 or greater and an Intel processor".

But most maintainers have no idea how to actually fulfill that promise, which is where the docker image and auditwheel come in. There are a lot of ways a wheel can fail to work on a glibc 2.5+ system: it might depend on a newer glibc, or it might depend on a library that the target system doesn't have installed, or a whole bunch of other super arcane traps that we've discovered over time (e.g. the Python used for the build has to be linked using the correct configure options). These are all encoded into the docker image/auditwheel. (So for example, auditwheel has some built-in knowledge of which libraries you can expect to find on every Intel system with glibc 2.5 or greater, which it uses to make decisions about which libraries need to be vendored.) Technically you don't *have* to use these tools to build your wheel, pip doesn't care, but they provide some nice padded guardrails that make it possible for ordinary maintainers to fulfill the manylinux1 promise in practice.

How does this affect spec-writing? Well, we want to allow for non-pip installers, so the part that pip does has to be specified. But pip's part is really straightforward. All the complicated bits are in the docker image/auditwheel. But for these, it turns out the spec doesn't actually matter that much. We can observe that most wheels do work in practice, and whenever someone discovers some new edge case that the PEP never thought of, it's not a disaster; it just means there's one broken wheel on pypi, and we figure out how to fix the tools to catch the new edge case, they upload a new wheel, and life goes on.

So the proposal here is to refactor the spec to match how this actually works: the official definition of a manylinux_${glibc version}_${arch} wheel would be "I promise this wheel will work on any Linux system with glibc >= ${glibc version} and an ${arch} processor". We'll still need to make changes as old distros go out of support, new architectures get supported, etc., but the difference is that those changes won't require complex cross-ecosystem coordination with new formal specs for each one; instead they'll be routine engineering problems for the docker image + auditwheel maintainers to solve.

-n

On Fri, Nov 30, 2018 at 12:09 AM Nathaniel Smith <njs@pobox.com> wrote:
-- Nathaniel J. Smith -- https://vorpus.org
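
The "built against glibc 2.5 or greater" gatekeeping described above ultimately reduces to reading the runtime glibc version. PEP 513 specifies a ctypes-based way to do that; the following is a simplified sketch of that technique (not pip's exact code), with the version comparison written as a generic tuple check:

    import ctypes

    def glibc_version_string():
        """Return the glibc version string (e.g. "2.17"), or None if this
        process isn't linked against glibc."""
        try:
            process_namespace = ctypes.CDLL(None)  # the already-loaded C library
            gnu_get_libc_version = process_namespace.gnu_get_libc_version
        except (OSError, AttributeError):
            return None  # not glibc (e.g. musl), or no shared C library at all
        gnu_get_libc_version.restype = ctypes.c_char_p
        version = gnu_get_libc_version()
        if isinstance(version, bytes):
            version = version.decode("ascii")
        return version

    def have_compatible_glibc(required_major, required_minor):
        version = glibc_version_string()
        if version is None:
            return False
        major, minor = (int(x) for x in version.split(".")[:2])
        return (major, minor) >= (required_major, required_minor)

    # manylinux1 corresponds to glibc 2.5, manylinux2010 to glibc 2.12:
    print(have_compatible_glibc(2, 5), have_compatible_glibc(2, 12))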

On Sat, 1 Dec 2018 at 04:42, Nathaniel Smith <njs@pobox.com> wrote:
So if I follow, what you're saying is that the *spec* (i.e., the PEP) will simply say what installers like pip, and indexes like warehouse, need to do[1] (which is, for pip, generate the right list of supported tags, and for warehouse, add the relevant tags to the "allowed uploads" list). And everything else (all the stuff about libraries you're allowed to link dynamically to) becomes just internal design documentation for the auditwheel project (and any other manylinux building support projects that exist)? That sounds reasonable. Paul [1] Is there not also an element of what the wheel project needs to do? It has to generate wheels with the right tags in the first place. Actually, PEP 425 also needs an update, at a minimum to refer to the manylinux spec(s), which modify the definition of a "platform tag" from PEP 425...

On Fri, Nov 30, 2018 at 10:29 PM Paul Moore <p.f.moore@gmail.com> wrote:
Yep.
We've actually never touched the wheel project in any of the manylinux work. The workflow is:

- set up the special build environment
- run setuptools/wheel to generate a plain "linux" wheel (this is the not-very-useful tag that for historical reasons just means "it works on my machine", and isn't allowed on pypi)
- auditwheel processes the "linux" wheel to check for various possible issues, vendor any necessary libraries, and if that all worked then it rewrites the metadata to convert it to a "manylinux" wheel

This feels a bit weird somehow, but it's worked really well so far. Note that in a PEP 517-based world, regular local installs also create an intermediate wheel, and in that case you don't want the special auditwheel handling, you really just want the "it works on my machine" wheel.

-n -- Nathaniel J. Smith -- https://vorpus.org

Thanks Nathaniel for the explanation. On Sat, Dec 1, 2018, at 4:39 AM, Nathaniel Smith wrote:
I'm still a bit unsure how this works with the other libraries specified in PEP 571 (glib, libXrender, etc.). Would they be entirely dropped from a hypothetical manylinux_2_20, so wheels need to bundle everything apart from glibc itself? Or is it reasonable to assume that any system built with glibc has certain other libraries available? And is there any need to specify versions of these libraries, or is e.g. libX11.so.6 sticking around forever? Thomas

Hi, On Sat, Dec 1, 2018 at 5:18 PM Thomas Kluyver <thomas@kluyver.me.uk> wrote:
I think this is the key point. For Mac, Apple has already done the specification work for us, with its MACOSX_DEPLOYMENT_TARGET specifiers. These versions, such as '10.6', specify compatible versions for all the system libraries. I don't know Windows well, but I suppose that the equivalent APIs to the PEP 571 libraries are stable across many Windows versions, and it's standard Windows practice to compile against old and stable APIs. Cheers, Matthew

Hi,

As the original author of auditwheel and co-author of PEP 513, I figure I should probably chime in. I suspect that *I* am one of the major reasons that the manylinux1 -> manylinux2010 transition has been unreasonably drawn out, rather than any particular design flaw in the versioning scheme (manylinux_{cardinal number} vs. manylinux_{year} vs. manylinux_{glibc version}).

I wrote auditwheel while I was finishing up graduate school. For years, the "test suite" was just a couple of wheel files hosted on my school's per-student cgi-bin/ directory (which recently stopped working, since I'm no longer a student). The logging was just random print statements. It worked (I think), but it wasn't particularly well designed, nor was it well tested. And it was basically a one-person project. After I finished my Ph.D., I got a full-time job and mostly stopped contributing to open source. I think a lot of the reason for the delay in the manylinux2010 transition was that nobody was present and accounted for to develop auditwheel.

Now, the project is in a much better place. Elana Hashman is doing an awesome job leading auditwheel, and it seems like there's new momentum for manylinux2010. With proper maintenance and testing for auditwheel, I don't think it will be as hard a jump to the next iteration of manylinux (e.g. manylinux2014) as it was to jump from manylinux1 to manylinux2010.

-Robert

On Fri, Nov 30, 2018 at 3:12 AM Nathaniel Smith <njs@pobox.com> wrote:
-- -Robert

On Sun, Dec 2, 2018 at 6:10 PM Robert T. McGibbon <rmcgibbo@gmail.com> wrote:
I suspect that *I* am one of the major reasons that the manylinux1 -> manylinux2010 transition has been unreasonably drawn out, rather than any particular design flaw in the versioning scheme (manylinux_{cardinal number} vs. manylinux_{year} vs. manylinux_{glibc version}).
Hey Robert, good to hear from you! And seriously, I don't think you need to blame yourself for this... like, it was 13 months between when CentOS 5 went EOL and when the PEP was accepted, which was a precondition for everything else. 8 months after that, we still don't have a pip release that can install manylinux2010 wheels. As it's turned out, auditwheel wasn't the bottleneck at any point. And this proposal would remove both the need for future PEPs and for future pip updates, so it addresses the actual bottlenecks. -n -- Nathaniel J. Smith -- https://vorpus.org

On Fri, 30 Nov 2018 at 18:12, Nathaniel Smith <njs@pobox.com> wrote:
We could do the Windows thing, and have a plain "manylinux" tag that means "any recent-ish glibc-based Linux". Today it would be defined to be "any distro newer than CentOS 6". When CentOS 6 goes out of service, we could tweak the definition to be "any distro newer than CentOS 7". Most parts of the toolchain wouldn't need to be updated, though, because the tag wouldn't change, and by assumption, enforcement wouldn't really be needed, because the only people who could break would be ones running on unsupported platforms. Just like happens on Windows.
The reason this approach works for Windows is because *CPython* defines the target Windows ABI version - if you don't use the right target ABI, your extension module won't even link to the CPython DLL. So here, we're taking advantage of Microsoft's strict ABI management policy, by way of CPython.
We could do the macOS thing, and have a "manylinux_${glibc version}" tag that means "this package works on any Linux using glibc newer than ${glibc version}". We're already using this as our heuristic to handle the current manylinux profiles, so e.g. manylinux1 is effectively equivalent to manylinux_2_5, and manylinux2010 will be equivalent to manylinux_2_12. That way we'd define the manylinux tags once, get support into pip and warehouse and auditwheel once, and then in the future the only thing that would have to change to support new distro releases or new architectures would be to set up a proper build environment.
This approach works for Mac OS X because Apple have just as strict an approach to ABI management as Microsoft do, and the OS version specifies a lot of details about a broad range of operating system interfaces, which are then given strict compatibility guarantees.
What do y'all think?
I don't think we can get away from *something* specifying exactly what can be assumed to be present in a given manylinux variant, since the distros don't define any useful form of cross-distro ABI compatibility, and CPython doesn't nominate a target Linux ABI either.

However, I like the design concept of making it so that auditwheel is the only project that has to change in order to define a new revision of manylinux, which means encoding the heuristic check that installers should use directly into the platform compatibility tag rather than defining it as a lookup table that needs to be updated in every affected tool whenever a new version is defined.

From my view, the most promising path towards achieving that would be to go with Brett's suggestion of "manylinux_{libc_variant}_{libc_version}", such that we get "manylinux_glibc_2_17" as the next edition rather than "manylinux2014" (assuming the trend of using RHEL/CentOS releases as the baseline continues). If folks prove keen to start using the new cheaper aarch64 cloud systems, there may also be demand for a "manylinux_glibc_2_27" (compatible with Ubuntu 18.04 and RHEL/CentOS 8).

While having both "many" and "glibc" in the name may seem redundant initially, I think it's worth having them both for two reasons:

1. Even if new manylinux tags are being defined as part of the auditwheel documentation rather than in a PEP, I'd still expect them to allow the generated wheel archives to dynamically link against more than just glibc. Accordingly, there will be some platforms that pass the installer's heuristic check, but nevertheless still have compatibility problems with prebuilt wheel archives.

2. PEP 513 defined a ctypes based algorithm for checking the glibc version, and a similar heuristic could be defined for other libc implementations (most notably musl, thanks to Docker/Alpine). If we're going to go through the process of switching to a different naming scheme, we may as well provide a path towards resolving that issue as well, rather than further entrenching manylinux as being glibc-only.

Cheers, Nick.

P.S. Paul asked how we can have manylinux tags without updating PEP 425 to include them, and the answer is that the actual compatibility tag spec is at https://packaging.python.org/specifications/platform-compatibility-tags/ and that references PEP 513 (manylinux1) and PEP 571 (manylinux2010) in addition to PEP 425.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
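
Nick's second point is about keeping the door open for non-glibc libcs. A very rough sketch of how an installer's heuristic might dispatch on a "manylinux_{libc_variant}_{libc_version}" style tag follows; the tag spelling, the musl branch, and the helper names are all hypothetical, and only the glibc case corresponds to the existing PEP 513 technique (the architecture check is omitted for brevity):

    import re

    def tag_supported(tag, detected_libc):
        """detected_libc is a (variant, major, minor) tuple describing the running
        system, e.g. ("glibc", 2, 17) from the ctypes check sketched earlier, or
        ("musl", 1, 1) from some as-yet-unspecified musl detection."""
        m = re.match(r"^manylinux_(glibc|musl)_(\d+)_(\d+)_(?:.+)$", tag)
        if m is None:
            return False
        want_variant, want_major, want_minor = m.group(1), int(m.group(2)), int(m.group(3))
        have_variant, have_major, have_minor = detected_libc
        if have_variant != want_variant:
            return False  # e.g. a glibc-tagged wheel on an Alpine/musl system
        return (have_major, have_minor) >= (want_major, want_minor)

    print(tag_supported("manylinux_glibc_2_12_x86_64", ("glibc", 2, 17)))  # True
    print(tag_supported("manylinux_glibc_2_12_x86_64", ("musl", 1, 1)))    # False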

On Mon, 3 Dec 2018 at 12:16, Nick Coghlan <ncoghlan@gmail.com> wrote:
Bah. I'd forgotten we had moved to putting the specs at packaging.python.org (and that the tag spec had changed). Maybe there should be a prominent note in PEP 425 (and any other affected specs) noting that it is no longer the reference specification and pointing to the new spec? Paul

On Mon, 3 Dec 2018 at 23:11, Paul Moore <p.f.moore@gmail.com> wrote:
Yeah, that's the last todo item on https://github.com/pypa/pypa.io/issues/11#issuecomment-195412332 Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

participants (14)
- Brett Cannon
- Dustin Ingram
- Elvis Stansvik
- Jeremy Stanley
- Marius Gedminas
- Matthew Brett
- Matthias Klose
- Nathaniel Smith
- Nick Coghlan
- Paul Moore
- Pradyun Gedam
- Robert T. McGibbon
- Thomas Kluyver
- Tzu-ping Chung