Working toward Linux wheel support
Hi all,

I've recently been working on adding SOABI support for Python 2.x and other pieces needed to get wheels with C extensions working on Linux. Here's the work for wheel:

https://bitbucket.org/pypa/wheel/pull-request/54/

Based on that, I've added support for those wheels to pip here:

https://github.com/natefoo/pip/tree/linux-wheels

As mentioned in the wheel PR, there are some open questions and decisions I've made that I need guidance on:

- On Linux, the distro name/version (as determined by platform.linux_distribution()) will be appended to the platform string, e.g. linux_x86_64_ubuntu_14_04. This is going to be necessary to make a reasonable attempt at wheel compatibility in PyPI. But this may violate PEP 425. (A sketch of the tag derivation follows below.)

- By default, wheels will be built using the most specific platform information. In practice, I build our wheels[1] using Debian Squeeze in Docker, so they should work on most currently "supported" Linuxes, but allowing such wheels on PyPI could still be dangerous because forward compatibility is not always guaranteed (e.g. if an SO version/name changes, or a C library's API changes in a non-backward-compatible way while the SO version/name does not). That said, I'd be happy to make a much more generalized version of our docker-build[2] system that would allow package authors to easily and rapidly build distro/version-specific wheels for many of the popular Linux distros. We can assume that a wheel built on a vanilla install of e.g. Ubuntu 14.04 will work on any other installation of 14.04 (this is what the distro vendors promise, anyway).

- I attempt to set the SOABI if the SOABI config var is unset. This is for Python 2, but will also be done even on Python 3. Maybe that is the wrong decision (or maybe SOABI is guaranteed to be set on Python 3).

- Do any other implementations define SOABI? PyPy does not; I did not test others. What should we do with these?

Because the project I work for[3] relies heavily on a large number of packages, some of which have complicated build-time dependencies, we have always provided them as eggs and monkeypatched platform support back into pkg_resources. Now that the PyPA has settled on wheels as the preferred binary packaging format, I am strongly motivated to work out all the issues with this implementation.

Thanks,
--nate

[1] https://wheels.galaxyproject.org/
[2] https://github.com/galaxyproject/docker-build/
[3] https://galaxyproject.org/
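For clarity, here's a rough sketch of how such a distro-qualified platform tag could be derived on Python 2 using only the stdlib; this approximates, but is not, the logic in the actual wheel/pip PRs:

```
# Sketch only: derive a distro-qualified platform tag as described above.
import platform
from distutils.util import get_platform

def linux_platform_tag():
    # e.g. 'linux-x86_64' -> 'linux_x86_64'
    tag = get_platform().replace('-', '_').replace('.', '_')
    if tag.startswith('linux'):
        distname, version, _id = platform.linux_distribution()
        if distname:
            # e.g. ('Ubuntu', '14.04') -> 'linux_x86_64_ubuntu_14_04'
            tag += '_%s_%s' % (distname, version)
            tag = tag.lower().replace('-', '_').replace('.', '_').replace(' ', '_')
    return tag

print(linux_platform_tag())
```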
On 17 July 2015 at 03:41, Nate Coraor <nate@bx.psu.edu> wrote:
Hi all,
I've recently been working on adding SOABI support for Python 2.x and other pieces needed to get wheels w/ C extensions for Linux working. Here's the work for wheels:
https://bitbucket.org/pypa/wheel/pull-request/54/
Based on that, I've added support for those wheels to pip here:
https://github.com/natefoo/pip/tree/linux-wheels
As mentioned in the wheel PR, there are some open questions and decisions I've made that I need guidance on:
- On Linux, the distro name/version (as determined by platform.linux_distribution()) will be appended to the platform string, e.g. linux_x86_64_ubuntu_14_04. This is going to be necessary to make a reasonable attempt at wheel compatibility in PyPI. But this may violate PEP 425.
I think it's going beyond it in a useful way, though. At the moment, the "linux_x86_64" platform tag *under*specifies the platform - a binary extension built on Ubuntu 14.04 with default settings may not work on CentOS 7, for example.

Adding in the precise distro name and version number changes that to *over*specification, but I now think we can address that through configuration settings on the installer side that allow the specification of "compatible platforms". That way a derived distribution could add the corresponding upstream distribution's platform tag and their users would be able to install the relevant wheel files by default.

Rather than putting the Linux-specific platform tag derivation logic directly in the tools, though, what if we claimed a file under the "/etc/python" subtree and used it to tell the tools what platform tags to use? For example, we could put the settings relating to package tags into "/etc/python/binary-compatibility.cfg" and allow that to be overridden on a per-virtualenv basis with a binary-compatibility.cfg file within the virtualenv.

For example, we could have a section where, for a given platform, we overrode both the build and install tags appropriately. For RHEL 7.1, that may look like:

    [linux_x86_64]
    build=rhel_7_1
    install=rhel_7_0,rhel_7_1,centos_7_1406,centos_7_1503

Using JSON rather than an ini-style format would also work:

    {
      "linux_x86_64": {
        "build": "rhel_7_1",
        "install": ["rhel_7_0", "rhel_7_1", "centos_7_1406", "centos_7_1503"]
      }
    }

The reason I like this approach is that it leaves the definition of ABI compatibility in the hands of the distros, but also makes it safe to publish Linux wheel files on PyPI (just not with the generic linux_x86_64 platform tag).
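As a concrete illustration of the installer side, a minimal sketch of how a tool might consume the JSON form of that file; the function name and fallback behaviour here are invented for the example, not part of any accepted spec:

```
# Sketch: expand the generic platform tag into the set of distro-specific
# tags an installer should accept, per a binary-compatibility.cfg like the
# JSON example above. Purely illustrative.
import json

def acceptable_platform_tags(base_tag, cfg_path='/etc/python/binary-compatibility.cfg'):
    try:
        with open(cfg_path) as f:
            cfg = json.load(f)
    except IOError:
        return [base_tag]  # no config: accept only the generic tag
    install = cfg.get(base_tag, {}).get('install', [])
    # e.g. 'linux_x86_64' + 'rhel_7_0' -> 'linux_x86_64_rhel_7_0'
    return ['%s_%s' % (base_tag, plat) for plat in install] + [base_tag]

print(acceptable_platform_tags('linux_x86_64'))
```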
- By default, wheels will be built using the most specific platform information. In practice, I build our wheels[1] using Debian Squeeze in Docker, so they should work on most currently "supported" Linuxes, but allowing such wheels on PyPI could still be dangerous because forward compatibility is not always guaranteed (e.g. if an SO version/name changes, or a C library's API changes in a non-backward-compatible way while the SO version/name does not). That said, I'd be happy to make a much more generalized version of our docker-build[2] system that would allow package authors to easily and rapidly build distro/version-specific wheels for many of the popular Linux distros. We can assume that a wheel built on a vanilla install of e.g. Ubuntu 14.04 will work on any other installation of 14.04 (this is what the distro vendors promise, anyway).
Right, if we break ABI within a release, that's our fault (putting on my distro developer hat), and folks will rightly yell at us for it. I was previously wary of this approach due to the "what about derived distributions?" problem, but realised recently that a config file that explicitly lists known binary-compatible platforms should suffice for that. There's only a handful of systems folks are likely to want to prebuild wheels for (Debian, Ubuntu, Fedora, CentOS/RHEL, openSUSE), and a configuration-file-based system allows ABI-compatible derived distros to be handled as if they were their parent.
- I attempt to set the SOABI if the SOABI config var is unset. This is for Python 2, but will also be done even on Python 3. Maybe that is the wrong decision (or maybe SOABI is guaranteed to be set on Python 3).
Python 3 should always set it, but if it's not present for some reason, deriving it makes sense.
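For reference, deriving the tag when SOABI is unset might look roughly like the following on CPython 2.x - a sketch along the lines of pip/wheel's tag-handling code, not the exact patch:

```
# Sketch: build an ABI tag such as 'cp27mu' when sysconfig has no SOABI.
import sys
import sysconfig

def abi_tag():
    soabi = sysconfig.get_config_var('SOABI')
    if soabi and soabi.startswith('cpython-'):
        return 'cp' + soabi.split('-')[1]       # e.g. 'cpython-34m' -> 'cp34m'
    abi = 'cp%d%d' % sys.version_info[:2]
    if sysconfig.get_config_var('Py_DEBUG'):
        abi += 'd'
    if sysconfig.get_config_var('WITH_PYMALLOC'):
        abi += 'm'
    if sysconfig.get_config_var('Py_UNICODE_SIZE') == 4:
        abi += 'u'                              # wide-unicode build
    return abi

print(abi_tag())                                # e.g. 'cp27mu' on typical Linux 2.7
```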
- Do any other implementations define SOABI? PyPy does not; I did not test others. What should we do with these?
The implementation identifier is also included in the compatibility tags, so setting that in addition to the platform ABI tag when a wheel contains binary extensions should suffice.
Because the project I work for[3] relies heavily on a large number of packages, some of which have complicated build-time dependencies, we have always provided them as eggs and monkeypatched platform support back into pkg_resources. Now that the PyPA has settled on wheels as the preferred binary packaging format, I am strongly motivated to work out all the issues with this implementation.
Thank you!

Regards,
Nick.

--
Nick Coghlan   |   ncoghlan@gmail.com   |   Brisbane, Australia
TL;DR -- pip+wheel needs to address the non-python dependency issue before it can be a full solution for Linux (or anything else, really).

The long version: I think Linux wheel support is almost useless unless the pypa stack provides _something_ to handle non-python dependencies.

1) Pure Python packages work fine as source.

2) Python packages with C extensions build really easily out of the box -- so source distribution is fine (OK, I suppose some folks want to run a system without a compiler -- is this the intended use-case?)

So what are the hard cases, the ones we really want binary wheels for?

- Windows, where a system compiler is a rarity: Done.

- OS-X, where a system compiler is a semi-rarity, and way too many "standard" system libs aren't there (or are old, crappy versions...): Almost done.

- Packages with semi-standard dependencies: can we expect ANY Linux distro to have libfreetype, libpng, libz, libjpeg, etc.? Probably, but maybe not installed (would a headless server have libfreetype?). And would those versions all be compatible? (Probably, if you specified a distro version.)

- Packages with non-standard non-python dependencies: libhdf5, lapack, BLAS, fortran(!) -- this is where the nightmare really is. I suspect most folks on this list will say that this is a "SciPy problem", and indeed, that's where the biggest issues are, and where systems like conda have grown up to address this.

But at this point, I think it's really sad that the community has become fractured -- if folks start out with "I want to do scientific computing", then they get pointed to Enthought Canopy or Anaconda, and all is well (until they look for standard web development packages -- though that's getting better). But if someone starts out as a web developer, and is all happy with the PyPA stack (virtualenv, pip, etc...), and then someone suggests they put some Bokeh plotting in their web site, or they need to do some analytics on HDF5 files, or any number of things well supported by Python but NOT by pip/wheel -- they are kind of stuck.

My point is that it may actually be a bad thing to solve the easy problem while keeping our fingers in our ears about the hard ones.... (la la la la, I don't need to use those packages, la la la la)

My thought: what pip+wheel needs to support much of this is the ability to specify a wheel dependency, rather than a package dependency -- i.e. "this particular wheel requires a libfreetype wheel". Then we could have binary wheels for non-python dependencies like libs (which would install the lib into pre-defined locations that could be relative-path linked to).

Sorry for the rant....

-Chris

PS: Personally, after banging my head against this for years, I've committed to conda for the moment -- working to get conda to better support the wide range of python packages. I haven't tried it on Linux, but it does exist and works well for some folks.

On Fri, Jul 17, 2015 at 1:22 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
[...]
--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959  voice
7600 Sand Point Way NE   (206) 526-6329  fax
Seattle, WA  98115       (206) 526-6317  main reception

Chris.Barker@noaa.gov
On Fri, 17 Jul 2015 08:36:39 -0700 Chris Barker <chris.barker@noaa.gov> wrote:
- Packages with non-standard non-python dependencies: libhdf5, lapack, BLAS, fortran(!) -- this is where the nightmare really is. I suspect most folks on this list will say that this is "Scipy Problem", and indeed, that's where the biggest issues are, and where systems like conda have grown up to address this.
But at this point, I think it's really sad that the community has become fractured -- if folks start out with "I want to do scientific computing", then they get pointed to Enthought Canopy or Anaconda, and all is well (until they look for standard web development packages -- though that's getting better). But if someone starts out as a web developer, and is all happy with the PyPA stack (virtualenv, pip, etc...), then someone suggests they put some Bokeh plotting in their web site, or need to do some analytics on HDF5 files, or any number of things well supported by Python, but NOT by pip/wheel -- they are kind of stuck.
Indeed, that's the main issue here. Eventually some people will want to use llvmlite or Numba in an environment where there's also a web application serving stuff, or who knows what other combinations.
PS: Personally, after banging my head against this for years, I've committed to conda for the moment -- working to get conda to better support the wide range of python packages. I haven't tried it on Linux, but it does exist and works well for some folks.
Due to the fact Linux binary wheels don't exist, conda is even more useful on Linux...

Regards,

Antoine.
On Fri, Jul 17, 2015 at 8:46 AM, Antoine Pitrou <solipsis@pitrou.net> wrote:
Due to the fact Linux binary wheels don't exist, conda is even more useful on Linux...
True -- though it's at least possible, and certainly easier than on Mac and Windows, to build it all yourself on Linux.

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959  voice
7600 Sand Point Way NE   (206) 526-6329  fax
Seattle, WA  98115       (206) 526-6317  main reception

Chris.Barker@noaa.gov
On 18 Jul 2015, at 1:53 am, Chris Barker <chris.barker@noaa.gov> wrote:
On Fri, Jul 17, 2015 at 8:46 AM, Antoine Pitrou <solipsis@pitrou.net> wrote: Due to the fact Linux binary wheels don't exist, conda is even more useful on Linux...
True -- though it's at least possible, and certainly easier than on Mac and Windows, to build it all yourself on Linux.
I build(*) everything myself on OS X and I find it easy; hdf5 has never been a problem.

(*) I am lying, homebrew provides binary installs.

my 2c,
Andrea

--
Andrea Bedini
@andreabedini, http://www.andreabedini.com

See the impact of my research at https://impactstory.org/AndreaBedini
use https://keybase.io/andreabedini to send me encrypted messages
Key fingerprint = 17D5 FB49 FA18 A068 CF53 C5C2 9503 64C1 B2D5 9591
On 07/17/2015 11:46 AM, Antoine Pitrou wrote:
Due to the fact Linux binary wheels don't exist, conda is even more useful on Linux...
FWIW, they exist, they just can't be published to PyPI. Private indexes (where binary compatibility is a known quantity) work fine with them.

Because it nails down binary non-Python dependencies, conda (and similar tools) do fit the bill for public distribution of Python projects which have such build-time deps. Even given the "over-specified" platform tags Nick suggests, linux wheels won't fully work, because the build-time deps won't be satisfiable *by pip*: the burden will be on each project to attempt a build and then spit out an error message trying to indicate the missing system package.

Is-that-'-dev'-or-'-devel'-I-need?'ly,

Tres.

--
Tres Seaver          +1 540-429-0999          tseaver@palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com
Hi Tres,

On 21 July 2015 at 00:25, Tres Seaver <tseaver@palladion.com> wrote:
[...]
Even given the "over-specified" platform tags Nick suggests, linux wheels won't fully work, because the build-time depes won't be satisfiable *by pip*: the burden will be on each project to attempt a build and then spit out an error message trying to indicate the missing system package.
Actually, since they're wheels, they're already built, so installing them will work perfectly. Trying to use them is what's going to fail, with a message like:

    ImportError: libxslt.so.1: cannot open shared object file: No such file or directory

I do think Nate's proposal is a step forward[1], since being unable to use the package because a runtime dependency is not installed is no more of a problem than being unable to install a source package because a build dependency is not installed. And the package documentation could always specify which system packages are needed for using the wheel.

If anything, the error message tends to be smaller, whereas a missing .h from a missing development package usually causes a huge stream of error messages on build, only the first of which is actually relevant. Then again, an import error could happen anywhere in the middle of running the software, so in some cases the error might not be obvious at first.

My proposal (that wheels should specify the system file dependencies in terms of abstract locations) would allow pip to provide much more user-friendly information about the missing file, at the earliest possible moment, letting the user hunt down (or compile) the system package at the same moment as he's installing the python package. This information is readily derived during the build process, making its inclusion in the wheel info straightforward. But I don't think my proposal should block acceptance of Nate's.

[1] As long as the acceptance of the over-specified wheels is a strictly opt-in process. Some linux folks don't like running code they haven't compiled.

Regards,
Leo
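To make the idea concrete, a hypothetical sketch: if wheel metadata carried a declared list of needed shared libraries ("system_requires" is an invented field here, not real wheel metadata), the installer could check for them up front instead of deferring the failure to import time:

```
# Hypothetical: fail fast on missing shared libraries at install time.
from ctypes.util import find_library

system_requires = ['xslt', 'z']   # would come from the wheel's metadata

missing = [lib for lib in system_requires if find_library(lib) is None]
if missing:
    raise SystemExit('missing system libraries: %s' % ', '.join(missing))
```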
I think Linux wheel support is almost useless unless the pypa stack
provides _something_ to handle non-python dependencies.
I wouldn't say useless, but I tend to agree with this sentiment.

I'm thinking the only way to really "compete" with the ease of Conda (for non-python dependencies) is to shift away from wheels, and instead focus on making it easier to create native distro packages (i.e. rpm, deb, etc. ... that can easily depend on non-python dependencies) for python applications, and moreover that these packages should be "parallel installable" with the system packages, i.e. they should depend on virtual environments, not the system python.

I've been working on this some personally, admittedly pretty slowly, since it's a pretty tall order to put all the pieces together.

Marcus
[...]
2015-07-17 18:50 GMT+02:00 Marcus Smith <qwcode@gmail.com>:
I think Linux wheel support is almost useless unless the pypa stack provides _something_ to handle non-python dependencies.
I wouldn't say useless, but I tend to agree with this sentiment.
I'm thinking the only way to really "compete" with the ease of Conda (for non-python dependencies) is to shift away from wheels, and instead focus on making it easier to create native distro packages (i.e. rpm, deb etc...that can easily depend on non-python dependencies) for python applications, and moreover that these packages should be "parallel installable" with the system packages, i.e. they should depend on virtual environments, not the system python.
+1 for being able to work in isolation from the system packages (and without admin rights).

This is precisely the killer feature of conda (and of virtualenv, to some extent): users do not need to rely on interaction with sysadmins to get up and running with a development environment. Furthermore, they can get as many cheap environments in parallel as they like, to develop and reproduce bugs with various versions of libraries or Python itself.

However, I don't see why you would not be able to ship your non-Python dependencies as wheels. Surely it should be possible to package stateless libraries like OpenBLAS, libxml/libxslt, llvm runtimes, qt and the like as wheels.

Shipping wheels for services such as database servers like postgresql is out of scope in my opinion. For sysadmin tasks such as managing running stateful services, system packages or docker containers + orchestration are the way to go. Still, wheels should be able to address the "set up parallel dev environments" use case. When I say "developer environment" I also include "data scientist environments" that rely on ipython notebook + scipy stack libraries.

Best,

--
Olivier
I've recently packaged SDL2 for Windows as a wheel, without any Python code. It is a conditional dependency ("if Windows") for an SDL wrapper. Very convenient. It uses a little WAF script instead of bdist_wheel to make the package: https://bitbucket.org/dholth/sdl2_lib/src/tip (one way to spell such a conditional dependency is sketched below).

We were talking on this list about adding more categories to wheel, to make it easier to install into abstract locations "confdir", "libdir" etc., probably per GNU convention, which would map to /etc, /usr/share, and so forth based on the platform. Someone needs to write that specification. I propose we forget about Windows for the first revision, so that it is possible to get it done.

The real trick is when you have to depend on something that lives outside of your packaging system; for example, it's probably easier to ship qt as a wheel than to ship libc as a wheel. Asking for specific SHA-256 hashes of all the 'ldd' shared library dependencies would be limiting. Specifying the full library names of the same, a la RPM, somewhere? And as always, many Linux users will find precompiled code to be a nuisance, even if it does run and even if the dependency in question is difficult to compile.
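For what it's worth, one way such an "if Windows" dependency can be spelled with setuptools environment markers; the package names here are placeholders, not the actual sdl2_lib setup:

```
# Sketch of a conditional binary dependency using an environment marker.
from setuptools import setup

setup(
    name='example-sdl-wrapper',       # placeholder name
    version='1.0',
    extras_require={
        # pull in the bundled SDL2 wheel only on Windows
        ':sys_platform == "win32"': ['example-sdl2-binaries'],
    },
)
```

On Fri, Jul 17, 2015 at 2:34 PM Olivier Grisel <olivier.grisel@ensta.org> wrote: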
[...]
On Jul 17, 2015, at 1:19 PM, Daniel Holth <dholth@gmail.com> wrote:
I've recently packaged SDL2 for Windows as a wheel, without any Python code. It is a conditional dependency "if Windows" for a SDL wrapper.
Cool, though I still think we need wheel-level deps -- the dependency is on the particular binary, not the platform. But a good start.
We were talking on this list about adding more categories to wheel, to make it easier to install in abstract locations "confdir", "libdir" etc. probably per GNU convention which would map to /etc, /usr/share, and so forth based on the platform.
Where would the concrete dirs be? I think inside the Python install, i.e. where everything is managed by Python. I don't think I want pip dumping stuff in /usr/local, never mind /usr. And presumably the goal is to support virtualenv anyway.
Someone needs to write that specification. Propose we forget about Windows for the first revision, so that it is possible to get it done.
If we want Windows support in the long run -- and we do -- we should be thinking about it from the start. But if it's going in the Python-managed dirs, it doesn't have to follow Windows convention ...
The real trick is when you have to depend on something that lives outside of your packaging system, for example, it's probably easier to ship qt as a wheel than to ship libc as a wheel.
Well, we can expect SOME base system! No system can exist without libc.... -CHB
Yes, but how do you know that I compiled against the right version of libc?

On Fri, Jul 17, 2015, 9:13 PM Chris Barker - NOAA Federal <chris.barker@noaa.gov> wrote:
[...]
On 18 July 2015 at 02:13, Chris Barker - NOAA Federal <chris.barker@noaa.gov> wrote:
Someone needs to write that specification. Propose we forget about Windows for the first revision, so that it is possible to get it done.
If we want Windows support in the long run -- and we do -- we should be thinking about it from the start. But if it's going in the Python-managed dirs, it doesn't have to follow Windows convention ...
I agree that excluding Windows is probably a mistake (differing expectations on Windows will come back to bite you if you do that). But Windows shouldn't be a huge issue as long as it's clearly noted that all directories will be within the Python-managed dirs. (Even if the system install on Unix doesn't work like this, virtualenvs on Unix have to, so that's not a Windows-specific point.)

Managing categories that make no sense on particular platforms (e.g. manpages on Windows) is the only other thing I can think of that considering Windows might bring up, but again, it's not actually Windows-specific (HTML Help files on Unix, for instance, would be similar - an obvious resolution is just to document that certain directories simply won't be installed on inappropriate platforms).

Paul
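To illustrate, purely hypothetically, what "all directories within the Python-managed dirs" could mean for such categories (this is not an actual specification):

```
# Hypothetical category -> directory mapping, everything under sys.prefix
# (so it works identically inside a virtualenv).
import os
import sys

CATEGORY_DIRS = {
    'confdir': os.path.join(sys.prefix, 'etc'),
    'libdir':  os.path.join(sys.prefix, 'lib'),
    'datadir': os.path.join(sys.prefix, 'share'),
    'mandir':  os.path.join(sys.prefix, 'share', 'man'),  # simply skipped on Windows
}
```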
On 18 July 2015 at 01:36, Chris Barker <chris.barker@noaa.gov> wrote:
TL;DR -- pip+wheel needs to address the non-python dependency issue before it can be a full solution for Linux (or anything else, really)
The long version:
I think Linux wheel support is almost useless unless the pypa stack provides _something_ to handle non-python dependencies.
1) Pure Python packages work fine as source.
2) Python packages with C extensions build really easily out of the box -- so source distribution is fine (OK, I suppose some folks want to run a system without a compiler -- is this the intended use-case?)
The intended use case is "Build once, deploy many times". This is especially important for use cases like Nate's - Galaxy has complete control over both the build environment and the deployment environment, but they *don't* want to rebuild in every analysis environment. That means all they need is a way to build a binary artifact that adequately declares its build context, and a way to retrieve those artifacts at installation time. I'm interested in the same case - I don't need to build artifacts for arbitrary versions of Linux, I mainly want to build them for the particular ABIs defined by the different Fedora and EPEL versions. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Sun, Jul 19, 2015 at 10:50 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
The intended use case is "Build once, deploy many times".
This is especially important for use cases like Nate's - Galaxy has complete control over both the build environment and the deployment environment, but they *don't* want to rebuild in every analysis environment.
That means all they need is a way to build a binary artifact that adequately declares its build context, and a way to retrieve those artifacts at installation time.
I'm interested in the same case - I don't need to build artifacts for arbitrary versions of Linux, I mainly want to build them for the particular ABIs defined by the different Fedora and EPEL versions.
Sure -- but isn't that use-case already supported by wheel? Define your own wheelhouse that has the ABI you know you need, and point pip to it. Not that it would hurt to add a bit more to the filename, but it seems you either:

- have a specific system definition you are building for -- so you want to give it a name, one step better than defining a wheelhouse; or

- you want to put it up on PyPI and have the folks with compatible systems be able to get it, and know it will work -- THAT is a big 'ol can of worms, and maybe you're better off going with conda....

-Chris
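(For the record, the first case already works today with standard pip options:

```
pip wheel --wheel-dir=/path/to/wheelhouse numpy               # build once
pip install --no-index --find-links=/path/to/wheelhouse numpy   # deploy many times
```

The wheelhouse path here is just an example; any local directory or URL pip can reach will do.)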
--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959  voice
7600 Sand Point Way NE   (206) 526-6329  fax
Seattle, WA  98115       (206) 526-6317  main reception

Chris.Barker@noaa.gov
On 20 July 2015 at 18:37, Chris Barker <chris.barker@noaa.gov> wrote:
sure -- but isn't that use-case already supported by wheel -- define your own wheelhouse that has the ABI you know you need, and point pip to it.
I presume the issue is wanting to have a single shared wheelhouse for a (presumably limited) number of platforms. So being able to specify a (completely arbitrary) local platform name at build and install time sounds like a viable option.

BUT - we have someone offering a solution that solves at least part of the problem, sufficient for their needs and a step forward from where we are. This is great news, as wheel support for Linux has always stalled before (for whatever reason). So thank you to Nate for his work, and let's look at how we can accept it and build on it in the future.

Unfortunately, I don't have any Linux knowledge in this area, so I can't offer any useful advice on the questions Nate asks. But hopefully some people on this list can.

Paul
On 21 July 2015 at 04:37, Paul Moore <p.f.moore@gmail.com> wrote:
On 20 July 2015 at 18:37, Chris Barker <chris.barker@noaa.gov> wrote:
sure -- but isn't that use-case already supported by wheel -- define your own wheelhouse that has the ABI you know you need, and point pip to it.
I presume the issue is wanting to have a single shared wheelhouse for a (presumably limited) number of platforms. So being able to specify a (completely arbitrary) local platform name at build and install time sounds like a viable option.
While supporting multiple distros in a single repo is indeed one use case (and the one that needs to be solved to allow distribution via PyPI), the problem I'm interested in isn't the "success case" where a precompiled Linux wheel stays nicely confined to the specific environment it was built to target, but rather the failure mode where a file "escapes".

Currently, there's nothing in a built Linux wheel file to indicate its *intended* target environment, which makes debugging ABI mismatches incredibly difficult. By contrast, if the wheel filename says "Fedora 22" and you're trying to run it on "Ubuntu 14.04" and getting a segfault, you have a pretty good hint as to the likely cause of your problem.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan@gmail.com   |   Brisbane, Australia
Hi all,

Thanks for the lively debate - I sent the message that started the thread and then had a week's vacation - and I appreciate the discussion that took place in the interim.

I've encountered all of the problems discussed here, especially the dependency issues, both with Python and with other attempts at package management in distributed heterogeneous systems. For Galaxy's controlled ecosystem we deal with this using static linking (e.g. our psycopg2 egg is statically linked against a version of libpq5 built for the egg), but this is not an ideal solution for a variety of reasons that I doubt I need to explain.

As for the Python side of things, I certainly agree with the point raised by Leonardo that an ENOENT is probably easier for most to debug than a missing Python.h. For what it's worth, some libraries like PyYAML have a partial solution for this: if libyaml.so.X is not found at runtime, PyYAML defaults to a pure Python implementation (illustrated below). This is not ideal, for sure, nor will it be possible for all packages, and it depends on the package author to implement a pure Python version, but it does avoid an outright runtime failure.

I hope - and I think Nick is advocating for this - that incremental improvements can be made, rather than what's been the case so far: identifying the myriad problems and the shortcomings of the packaging format(s), only to stall on making progress towards a solution.

As to the comments regarding our needs being met today with a wheelhouse: while this is partially true (e.g. we've got our own PyPI up at https://wheels.galaxyproject.org), we still need to settle on an overspecified tag standard and fix SOABI support in Python 2.x in order to avoid having to ship a modified wheel/pip with Galaxy.

Is there any specific direction the Distutils-SIG would like me to take to continue this work?

Thanks,
--nate
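(The PyYAML fallback pattern, from the consuming side, is to prefer the libyaml-backed C loader and fall back to pure Python; this mirrors PyYAML's documented usage:

```
import yaml

try:
    # C extension loader, present only when PyYAML was built against libyaml
    from yaml import CSafeLoader as SafeLoader
except ImportError:
    from yaml import SafeLoader   # pure-Python fallback

data = yaml.load('{a: 1}', Loader=SafeLoader)
```

)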
On Mon, Jul 27, 2015 at 10:19 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:

[...]
On Fri, 17 Jul 2015 at 16:37 Chris Barker <chris.barker@noaa.gov> wrote:
TL;DR -- pip+wheel needs to address the non-python dependency issue before it can be a full solution for Linux (or anything else, really)
<snip>
- Packages with semi-standard dependencies: can we expect ANY Linux distro to have libfreetype, libpng, libz, libjpeg, etc? probably, but maybe not installed (would a headless server have libfreetype?). And would those version be all compatible (probably if you specified a distro version) - Packages with non-standard non-python dependencies: libhdf5, lapack, BLAS, fortran(!)
I think it would be great to just package these up as wheels and put them on PyPI. I'd really like to be able to (easily) have different BLAS libraries on a per-virtualenv basis. So numpy could depend on "blas" and there could be a few different distributions on PyPI that provide "blas", representing the different underlying libraries. If I want to install numpy with a particular one, I can just do:

    pip install gotoblas   # Installs the BLAS library within Python dirs
    pip install numpy

You could have a BLAS distribution that is just a shim for a system BLAS that was installed some other way:

    pip install --install-option='--blaslib=/usr/lib/libblas' systemblas
    pip install numpy

That would give linux distros a way to provide the BLAS library that python/pip understands, without everything being statically linked and without pip needing to understand the distro package manager. Also, python packages that want BLAS can use the Python import system to locate the BLAS library (sketched below), making it particularly simple for them and allowing distros to move things around as desired.

I would like it if this were possible even without wheels. I'd be happy if the commands to download a BLAS library, compile it, install it non-globally, and configure numpy to use it were that simple. If it worked with wheels then that'd be a massive win.

--
Oscar
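A sketch of that last idea - a consuming package locating the BLAS shared library via the import system. The "blas" shim package and its "library_path" attribute are hypothetical:

```
# Hypothetical: whichever distribution provides the 'blas' shim exposes
# the path of the shared library it installed or wraps.
import ctypes
import importlib

blas_shim = importlib.import_module('blas')
libblas = ctypes.CDLL(blas_shim.library_path)
```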
On Tue, Jul 21, 2015 at 9:38 AM, Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
I think it would be great to just package these up as wheels and put them on PyPI.
That's the point -- there is no way with the current spec to specify a wheel dependency as opposed to a package dependency, i.e. this particular binary numpy wheel depends on this other wheel, whereas the numpy source package does not have that dependency -- and, indeed, a wheel for one platform may have different dependencies than wheels for other platforms.
So numpy could depend on "blas" and there could be a few different distributions on PyPI that provide "blas" representing the different underlying libraries. If I want to install numpy with a particular one I can just do:
pip install gotoblas   # Installs the BLAS library within Python dirs
pip install numpy
Well, different implementations of BLAS are theoretically ABI compatible, but as I understand it, it's not actually that simple, so this is particularly challenging.

But if it were, there would be a particular trick to it: that numpy wheel would depend on _some_ BLAS wheel, but there may be more than one option -- how would you express that????

-Chris
[...]
--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959  voice
7600 Sand Point Way NE   (206) 526-6329  fax
Seattle, WA  98115       (206) 526-6317  main reception

Chris.Barker@noaa.gov
On Fri, 24 Jul 2015 at 19:53 Chris Barker <chris.barker@noaa.gov> wrote:
On Tue, Jul 21, 2015 at 9:38 AM, Oscar Benjamin < oscar.j.benjamin@gmail.com> wrote:
I think it would be great to just package these up as wheels and put them on PyPI.
that's the point -- there is no way with the current spec to specify a wheel dependency as opposed to a package dependency. i.e this particular binary numpy wheel depends on this other wheel, whereas the numpy source pacakge does not have that dependency -- and, indeed, a wheel for one platform may have different dependencies that\n other platforms.
I thought it was possible to do this with wheels - it's already possible to have wheels or sdists whose dependencies vary by platform, I thought. The BLAS dependency is different: in particular, the sdist is compatible with more cases than a wheel would be, so the built wheel would have a more precise requirement than the sdist. Is that not possible with pip/wheels/PyPI, or is that a limitation of using setuptools to build the wheel?
So numpy could depend on "blas" and there could be a few different
distributions on PyPI that provide "blas" representing the different underlying libraries. If I want to install numpy with a particular one I can just do:
pip install gotoblas   # Installs the BLAS library within Python dirs
pip install numpy
well,different implementations of BLAS are theoretically ABI compatible, but as I understand it, it's not actually that simple, so this is particularly challenging.
But if it were, this would be a particular trick, because then that numpy wheel would depend on _some_ BLAS wheel, but there may be more than one option -- how would you express that????
I imagined having numpy Require "blas OR openblas". Then the openblas package Provides "blas", and any other BLAS library also provides "blas". If you do "pip install numpy" and "blas" is already provided, then the numpy wheel installs fine; otherwise it falls back to installing openblas.

Potentially "blas" is not specific enough, so the label could be "blas-gfortran" to express the ABI.

--
Oscar
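In core-metadata terms that could look something like the sketch below. Provides-Dist and Requires-Dist are defined in PEP 345, though pip's support for resolving virtual provides like this is limited today; the names and versions are illustrative only:

```
# Hypothetical METADATA for an OpenBLAS shim wheel:
Metadata-Version: 1.2
Name: openblas
Version: 0.2.14
Provides-Dist: blas

# ...and numpy's wheel would then declare:
Requires-Dist: blas
```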
On Jul 28, 2015 10:02 AM, "Oscar Benjamin" <oscar.j.benjamin@gmail.com> wrote:
[...]
I imagined having numpy Require "blas OR openblas". Then openblas package Provides "blas". Any other BLAS library also provides "blas". If you do "pip install numpy" and "blas" is already provided then the numpy wheel installs fine. Otherwise it falls back to installing openblas.
Potentially "blas" is not specific enough so the label could be "blas-gfortran" to express the ABI.
BLAS may not be the best example, but should we expect such linked interfaces to change over time? (And e.g. be versioned dependencies with shim packages that have check functions)?

... How is an ABI constraint different from a package dependency? IIUC, ABI tags are thus combinatorial with package/wheel dependency strings?

Conda/pycosat solve this with "preprocessing selectors": http://conda.pydata.org/docs/building/meta-yaml.html#preprocessing-selectors

```
linux    True if the platform is Linux
linux32  True if the platform is Linux and the Python architecture is 32-bit
linux64  True if the platform is Linux and the Python architecture is 64-bit
armv6    True if the platform is Linux and the Python architecture is armv6l
osx      True if the platform is OS X
unix     True if the platform is Unix (OS X or Linux)
win      True if the platform is Windows
win32    True if the platform is Windows and the Python architecture is 32-bit
win64    True if the platform is Windows and the Python architecture is 64-bit
py       The Python version as a two digit string (like '27'). See also the
         CONDA_PY environment variable below.
py3k     True if the Python major version is 3
py2k     True if the Python major version is 2
py26     True if the Python version is 2.6
py27     True if the Python version is 2.7
py33     True if the Python version is 3.3
py34     True if the Python version is 3.4
np       The NumPy version as a two digit string (like '17'). See also the
         CONDA_NPY environment variable below.

Because the selector is any valid Python expression, complicated logic is possible.
```
Hello all,

I've implemented the wheel side of Nick's suggestion from very early in this thread to support a vendor-providable binary-compatibility.cfg.

https://bitbucket.org/pypa/wheel/pull-request/54/

If this is acceptable, I'll add support for it to the pip side. What else should be implemented at this stage to get the PR accepted?

Thanks,
--nate

On Tue, Jul 28, 2015 at 12:21 PM, Wes Turner <wes.turner@gmail.com> wrote:
[...]
I'm not sure what will be needed to get the PR accepted; at PyCon AU Tennessee Leuwenberg started drafting a PEP for the expression of dependencies on e.g. BLAS - it's been given number 497, and is in the packaging-peps repo; I'm working on updating it now.

On 13 August 2015 at 08:21, Nate Coraor <nate@bx.psu.edu> wrote:
Hello all,
I've implemented the wheel side of Nick's suggestion from very early in this thread to support a vendor-providable binary-compatibility.cfg.
https://bitbucket.org/pypa/wheel/pull-request/54/
If this is acceptable, I'll add support for it to the pip side. What else should be implemented at this stage to get the PR accepted?
Thanks, --nate
On Tue, Jul 28, 2015 at 12:21 PM, Wes Turner <wes.turner@gmail.com> wrote:
[...]
--
Robert Collins <rbtcollins@hp.com>
Distinguished Technologist
HP Converged Cloud
On Aug 12, 2015 13:57, "Nate Coraor" <nate@bx.psu.edu> wrote:
Hello all,
I've implemented the wheel side of Nick's suggestion from very early in
this thread to support a vendor-providable binary-compatibility.cfg.
https://bitbucket.org/pypa/wheel/pull-request/54/
If this is acceptable, I'll add support for it to the pip side. What else
should be implemented at this stage to get the PR accepted?
From my reading of what the Enthought and Continuum folks were saying about how they are successfully distributing binaries across different distributions, it sounds like the additional piece that would take this from an interesting experiment to basically-immediately-usable would be to teach pip that if no binary-compatibility.cfg is provided, then it should assume by default that the compatible systems whose wheels should be installed are: (1) the current system's exact tag, (2) the special hard-coded tag "centos5". (That's what everyone actually uses in practice, right?)
To make this *really* slick, it would be cool if, say, David C. could make a formal list of exactly which system libraries are important to depend on (xlib, etc.), and we could hard-code two compatibility profiles, "centos5-minimal" (= just glibc and the C++ runtime) and "centos5" (= that plus the core too-hard-to-ship libraries), and possibly teach pip how to check whether that hard-coded core set is available.

Compare with osx, where there are actually a ton of different ABIs, but in practice everyone distributing wheels basically sat down and picked one and wrote some ad hoc tools to make it work, and it does: https://github.com/MacPython/wiki/wiki/Spinning-wheels

-n
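To make the proposed default concrete, a sketch -- nothing here is implemented in pip, and the baseline tag spelling is invented:

```python
def default_compatible_tags(exact_tag):
    # tags pip might accept when no binary-compatibility.cfg is present
    return [
        exact_tag,                # (1) the current system's exact tag
        "linux_x86_64_centos_5",  # (2) the hard-coded baseline tag
    ]

print(default_compatible_tags("linux_x86_64_ubuntu_14_04"))
```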
On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith <njs@pobox.com> wrote:
[...]
From my reading of what the Enthought and Continuum folks were saying about how they are successfully distributing binaries across different distributions, it sounds like the additional piece that would take this from an interesting experiment to basically-immediately-usable would be to teach pip that if no binary-compatibility.cfg is provided, then it should assume by default that the compatible systems whose wheels should be installed are: (1) the current system's exact tag,
This should already be the case - the default tag will no longer be -linux_x86_64, it'd be linux_x86_64_distro_version.
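For concreteness, the derivation is roughly this on Python 2.7 (a sketch, not the exact code from the pull request):

```python
import re
import platform
from distutils.util import get_platform

def distro_platform_tag():
    # base platform tag, e.g. "linux_x86_64"
    base = get_platform().replace("-", "_").replace(".", "_")
    # distro short name and version, e.g. ("ubuntu", "14.04")
    name, version, _ = platform.linux_distribution(full_distribution_name=0)
    suffix = re.sub(r"[^\w]+", "_", ("%s_%s" % (name, version)).lower())
    return "%s_%s" % (base, suffix)  # e.g. "linux_x86_64_ubuntu_14_04"
```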
(2) the special hard-coded tag "centos5". (That's what everyone actually uses in practice, right?)
The idea here is that we should attempt to install centos5 wheels if more specific wheels for the platform aren't available? --nate
On 13 August 2015 at 11:07, Nate Coraor <nate@bx.psu.edu> wrote:
[...]
Just my opinion, but although I'm +1 on Nate's efforts, I'm -1 on both the standard behavior for installation being the exact platform tag, and an automatic fallback to centos5.

IMO, on Linux, the default should always be to opt in to the desired platform tags. We could make it so that the word `default` inside `binary-compatibility.cfg` means an exact match on the distro version, so that we could simplify the documentation.

But I don't want to upgrade pip and suddenly find myself installing binary wheels compiled by whomever for whatever platform I have no control over, even assuming the best of the package builders' intentions. And I certainly don't want centos5 wheels accidentally installed on my ubuntu servers unless I very specifically asked for them.

The tiny pain inflicted by telling users to add a one-line text file in a very well known location (or two lines, for the added centos5), so that they can get the benefit of binary wheels on linux, is very small compared to the pain of repeatable install scripts suddenly behaving differently and installing binary wheels on systems that were prepared to pay the price of source installs, including the setting of build environment variables that correctly tweaked their build process.

Regards,

Leo
On Aug 13, 2015 2:31 PM, "Leonardo Rochael Almeida" <leorochael@gmail.com> wrote:
[...]
Could/should this (repeatable) build configuration be specified in a JSON manifest file?

What's the easiest way to build for all of these platforms? Tox w/ per-platform Dockerfile?
On Thu, Aug 13, 2015 at 12:30 PM, Leonardo Rochael Almeida <leorochael@gmail.com> wrote:
On 13 August 2015 at 11:07, Nate Coraor <nate@bx.psu.edu> wrote:
On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith <njs@pobox.com> wrote:
[...]
(2) the special hard-coded tag "centos5". (That's what everyone actually uses in practice, right?)
The idea here is that we should attempt to install centos5 wheels if more specific wheels for the platform aren't available?
Just my opinion, but although I'm +1 on Nate's efforts, I'm -1 on both the standard behavior for installation being the exact platform tag, and an automatic fallback to centos5.
IMO, on Linux, the default should always be to opt in to the desired platform tags.
We could make it so that the word `default` inside `binary-compatibility.cfg` means an exact match on the distro version, so that we could simplify the documentation.
But I don't want to upgrade to pip and suddenly find myself installing binary wheels compiled by whomever for whatever platform I have no control with, even assuming the best of the package builders intentions.
And I certainly don't want centos5 wheels accidentally installed on my ubuntu servers unless I very specifically asked for them.
The tiny pain inflicted by telling users to add a one-line text file in a very well known location (or two lines, for the added centos5), so that they can get the benefit of binary wheels on linux, is very small compared to the pain of repeatable install scripts suddenly behaving differently and installing binary wheels in systems that were prepared to pay the price of source installs, including the setting of build environment variables that correctly tweaked their build process.
I think there are two issues here:

1) You don't want centos5 wheels "accidentally" installed on an ubuntu server: Fair enough, you're right; we should probably make the "this wheel should work on pretty much any linux out there" tag be something that distributors have to explicitly opt into (similar to how they have to opt into creating universal wheels), rather than having it be something you could get by just typing 'pip wheel foo' on the right (wrong) machine.

2) You want it to be the case that if I type 'pip install foo' on a Linux machine, and pip finds both an sdist and a wheel, where the wheel is definitely compatible with the current system, then it should still always prefer the sdist unless configured otherwise: Here I disagree strongly. This is inconsistent with how things work on every other platform, it's inconsistent with how pip is being used on Linux right now with private wheelhouses, and the "tiny pain" of editing a file in /etc is a huge barrier to new users, many of whom are uncomfortable editing config files and may not have root access.

--
Nathaniel J. Smith -- http://vorpus.org
On Aug 13, 2015 8:47 PM, "Nathaniel Smith" <njs@pobox.com> wrote:
[...]
So, there would be a capability / osnamestr mapping, or just [...]? Because my libc headers are different.
On Thu, Aug 13, 2015 at 6:50 PM, Wes Turner <wes.turner@gmail.com> wrote:
So, there would be a capability / osnamestr mapping, or just [...]?
Because my libc headers are different.
Hi Wes,
From the question mark I infer that this is intended as a question for me, but like most of your posts I have no idea what you're talking about -- they're telegraphic to the point of incomprehensibility. So... if I don't answer something, that's why.
-n

--
Nathaniel J. Smith -- http://vorpus.org
On Aug 13, 2015 9:33 PM, "Nathaniel Smith" <njs@pobox.com> wrote:
[...]
Two approaches:

* specify specific platforms / distributions ("centos5")
* specify required capabilities ("Pkg", [version_constraints], [pkg_ABI_v2, xyz])

Limitations in the status quo:

* setuptools install_requires only accepts (name, [version_constraints])
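To make the status quo concrete -- all a setuptools dependency can carry today is a name plus version constraints (package names illustrative):

```python
from setuptools import setup

setup(
    name="example",
    version="1.0",
    install_requires=[
        # (name, version constraints) is the whole vocabulary;
        # there is no slot for "a BLAS with the gfortran ABI"
        "numpy>=1.9",
    ],
)
```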
Hi,

Going back in time to this old post, but I think it becomes more relevant now that Nate's work is being completed:

On 13 August 2015 at 22:47, Nathaniel Smith <njs@pobox.com> wrote:
[...]
I think there are two issues here:
1) You don't want centos5 wheels "accidentally" installed on an ubuntu server: Fair enough, you're right; we should probably make the "this wheel should work on pretty much any linux out there" tag be something that distributors have to explicitly opt into (similar to how they have to opt into creating universal wheels), rather than having it be something you could get by just typing 'pip wheel foo' on the right (wrong) machine.
I agree that generating something like "this linux binary wheel is generically installable" should be opt-in, yes. But I also feel strongly that installing such a generic wheel should also be opt-in. I guess that if we go in the direction of being able to generate wheels with a libc tag rather than a distro tag, like Nate and Donald are now discussing, then we could get both kinds of opt-in by specifying the libc tag in `binary-compatibility.cfg`.
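For example, a libc tag could plausibly be derived at runtime like this -- gnu_get_libc_version() is a real glibc function, but the tag spelling here is invented:

```python
import ctypes

def glibc_tag():
    # ask the running C library for its own version, e.g. "2.17"
    libc = ctypes.CDLL("libc.so.6")
    libc.gnu_get_libc_version.restype = ctypes.c_char_p
    version = libc.gnu_get_libc_version().decode("ascii")
    return "linux_x86_64_glibc_" + version.replace(".", "_")
```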
2) You want it to be the case that if I type 'pip install foo' on a Linux machine, and pip finds both an sdist and a wheel, where the wheel is definitely compatible with the current system, then it should still always prefer the sdist unless configured otherwise: Here I disagree strongly. This is inconsistent with how things work on every other platform, it's inconsistent with how pip is being used on Linux right now with private wheelhouses, and the "tiny pain" of editing a file in /etc is a huge barrier to new users, many of whom are uncomfortable editing config files and may not have root access.
Not having root access shouldn't be an issue, as there should be user-level and virtualenv-level equivalents to the `binary-compatibility.cfg` in `/etc`, and perhaps it could even be included in `requirements.txt` for a project, so users of a project might not even have to bother setting up `binary-compatibility.cfg`.

However, you make an excellent point: not handling binary wheels on Linux by default (at least with exact platform tag matching) would mean having different behavior between Linux and Mac/Windows.

Still, I wouldn't want a random binary wheel suddenly finding its way onto my servers, and I would like a way to opt out of it, for "reasons" (ex. I might have special build flags, or a special compiler, or maybe I'm still waiting for TUF before trusting other people's binaries on my servers).

So I'd like to propose that the installation tooling (e.g. `pip`, `distlib`) should allow the user to specify which index servers to trust for receiving binary wheels and which to trust only for pure python wheels or sdists. It could trust them all by default, to maintain the current behavior (where `all` means only pypi unless I specified more, obviously), but I'd like a switch to limit this trust to a subset of the specified index servers.

Regards,

Leo
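A sketch of the switch being proposed -- neither the option nor this hook exists in pip, and the URLs are just examples:

```python
def binary_allowed(wheel_url, trusted_binary_indexes):
    # accept a binary wheel only if it was found on a trusted index
    return any(wheel_url.startswith(index) for index in trusted_binary_indexes)

# e.g. trust a private wheelhouse for binaries, but not PyPI:
trusted = ["http://mywheelhouse.example.com/"]
print(binary_allowed("http://mywheelhouse.example.com/numpy-1.9.2.whl", trusted))  # True
print(binary_allowed("https://pypi.python.org/simple/numpy/", trusted))            # False
```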
On September 8, 2015 at 3:21:26 PM, Leonardo Rochael Almeida (leorochael@gmail.com) wrote:
Still, I wouldn't want a random binary wheel suddenly finding its way into my servers, and I would like a way to opt out of it, for "reasons" (ex. I might have special build flags, or a special compiler, or maybe I'm still waiting for TUF before trusting other peoples binaries on my servers).
--no-binary packages,that,have,binaries ?

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
That's nice for singling out some packages (though I only found a , but I had a different use-case in mind, which I guess I didn't fully articulate:

I might want binary wheels for some packages, just not coming from PyPI, where I don't necessarily trust whatever was put there. I'm perfectly fine trusting binary wheels coming from my own wheelhouse, for example.

So, I'd rather have a:

    --accept-binary-from=http://mywheelhouse.example.com

which would accept binaries from all provided indexes if absent. Or perhaps a:

    --no-binary-from=https://pypi.python.org/simple

Regards,

Leo

On 8 September 2015 at 16:22, Donald Stufft <donald@stufft.io> wrote:
On September 8, 2015 at 3:21:26 PM, Leonardo Rochael Almeida ( leorochael@gmail.com) wrote:
Still, I wouldn't want a random binary wheel suddenly finding its way
into
my servers, and I would like a way to opt out of it, for "reasons" (ex. I might have special build flags, or a special compiler, or maybe I'm still waiting for TUF before trusting other peoples binaries on my servers).
--no-binary packages,that,have,binaries ?
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Thu, Aug 13, 2015 at 7:07 AM, Nate Coraor <nate@bx.psu.edu> wrote:
On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith <njs@pobox.com> wrote:
[...]
This should already be the case - the default tag will no longer be -linux_x86_64, it'd be linux_x86_64_distro_version.
(2) the special hard-coded tag "centos5". (That's what everyone actually uses in practice, right?)
The idea here is that we should attempt to install centos5 wheels if more specific wheels for the platform aren't available?
Yes.

Or more generally, we should pick some common baseline build environment such that we're pretty sure wheels built there can run on 99% of end-user systems, and give this environment a name. (Doesn't have to be "centos5", though IIUC CentOS 5 is what people are using for this baseline build environment right now.) That way, when distros catch up and start providing binary-compatibility.cfg files, we can tell them that this is an environment they should try to support because it's what everyone is using, and to kick-start that process we should assume it as a default until the distros do catch up.

This has two benefits: it means that these wheels would actually become useful in some reasonable amount of time, and as a bonus, it would provide a clear incentive for those rare distros that *aren't* compatible to document that by starting to provide a binary-compatibility.cfg.

-n

--
Nathaniel J. Smith -- http://vorpus.org
On 14 August 2015 at 13:25, Nathaniel Smith <njs@pobox.com> wrote:
[...]
Sounds like a reinvention of LSB, which is still a thing I think, but really didn't take the vendor world by storm.

-Rob

--
Robert Collins <rbtcollins@hp.com>
Distinguished Technologist
HP Converged Cloud
On Aug 13, 2015 8:31 PM, "Robert Collins" <robertc@robertcollins.net> wrote:
[...]
LSB == "Linux System Base" It really shouldn't be too difficult to add lsb_release to the major distros and/or sys.plat* http://refspecs.linuxbase.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/bo... http://refspecs.linuxbase.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/ls...
On 14 August 2015 at 13:38, Wes Turner <wes.turner@gmail.com> wrote:
[...]
LSB == "Linux System Base"
It really shouldn't be too difficult to add lsb_release to the major distros and/or sys.plat*
http://refspecs.linuxbase.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/bo...
http://refspecs.linuxbase.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/ls...
So it's already there; the point I was making was the LSB process and guarantees, not lsb_release, which is a tiny thing.

-Rob

--
Robert Collins <rbtcollins@hp.com>
Distinguished Technologist
HP Converged Cloud
On Aug 13, 2015 8:38 PM, "Wes Turner" <wes.turner@gmail.com> wrote:
[...]
Salt grains implement this functionality w/ many OS: https://github.com/saltstack/salt/blob/110cae3cdc1799bad37f81f2/salt/grains/... ("osname", "osrelease") [Apache 2.0]
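On systemd-era distros the same facts are also in /etc/os-release, which needs no external tool -- a small parser sketch:

```python
def os_release(path="/etc/os-release"):
    info = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                info[key] = value.strip('"')
    return info

# e.g. on Ubuntu 14.04: ID == "ubuntu", VERSION_ID == "14.04"
```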
On Thu, Aug 13, 2015 at 6:31 PM, Robert Collins <robertc@robertcollins.net> wrote:
[...]
Sounds like a reinvention of LSB, which is still a thing I think, but really didn't take the vendor world by storm.
Yeah, I've been carefully not mentioning LSB because LSB is a disaster :-). But, I think this is different.

IIUC, the problem with LSB is that it's trying to make it possible for big enterprise software vendors to stop saying "This RDBMS is certified to work on RHEL 6" and start saying "This RDBMS is certified to work on any distribution that meets the LSB criteria". But in practice this creates more risk and work for the vendor, while not actually solving any real problem -- if a customer is spending $$$$ on some enterprise database then they might as well throw in an extra $$ for a RHEL license, so the customers don't care, so the vendor doesn't either. And the folks building free software like Postgres don't care either, because the distros do the support for them. So the LSB continues to limp along through the ISO process because just enough managers have been convinced that it *ought* to be useful that they continue to throw some money at it, and hey, it's probably useful to some people sometimes, just not very many people very often.

We, on the other hand, are trying to solve a real problem that our users feel keenly (lots of people want to be able to distribute some little binary python extension in a way that just works for a wide range of users), and the proposed mechanism for solving this problem is not "let's form an ISO committee and hire contractors to write a Grand Unified Test Suite", it's codifying an existing working solution in the form of a wiki page or PEP or something.

Of course if you have an alternative proposal then I'm all ears :-).

-n

P.S.: since probably not everyone on the mailing list has been following Linux inside baseball for decades, some context...:
https://en.wikipedia.org/wiki/Linux_Standard_Base
http://www.linuxfoundation.org/collaborate/workgroups/lsb/download
https://lwn.net/Articles/152580/
http://udrepper.livejournal.com/8511.html
(Last two links are from 2005; I can't really say how accurate they still are in details, but they do describe some of the structural reasons why the LSB has not been massively popular)

--
Nathaniel J. Smith -- http://vorpus.org
On Aug 13, 2015 9:14 PM, "Nathaniel Smith" <njs@pobox.com> wrote:
[...]
IIUC, the problem with LSB is that it's trying to make it possible for big enterprise software vendors to stop saying "This RDBMS is certified to work on RHEL 6" and start saying "This RDBMS is certified to work on any distribution that meets the LSB criteria".
That's great. Is there a Dockerfile invocation for:

- running the tests
- building a binary in a mapped path
- posting build state and artifacts to a central server
[...]
Of course if you have an alternative proposal then I'm all ears :-).
Required_caps = [('blas1', None), ('blas', '>= 1'), ('np17', None)]

Re-post [TODO: upgrade mailman]:

[...]
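A sketch of how a capability list like that could be checked against what a platform declares -- both data structures are hypothetical, and the version comparison is deliberately naive:

```python
def satisfied(required_caps, provided_caps):
    # required_caps: list of (name, constraint-or-None) pairs
    # provided_caps: dict mapping capability name -> version string
    for name, constraint in required_caps:
        if name not in provided_caps:
            return False
        if constraint:
            op, _, wanted = constraint.partition(" ")
            # naive string comparison, for illustration only
            if op == ">=" and provided_caps[name] < wanted:
                return False
    return True

required = [('blas1', None), ('blas', '>= 1'), ('np17', None)]
print(satisfied(required, {'blas1': '1.0', 'blas': '1.2', 'np17': '17'}))  # True
print(satisfied(required, {'blas1': '1.0'}))                               # False
```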
-n
P.S.: since probably not everyone on the mailing list has been following Linux inside baseball for decades, some context...: https://en.wikipedia.org/wiki/Linux_Standard_Base http://www.linuxfoundation.org/collaborate/workgroups/lsb/download https://lwn.net/Articles/152580/ http://udrepper.livejournal.com/8511.html (Last two links are from 2005, I can't really say how accurate they still are in details but they do describe some of the structural reasons why the LSB has not been massively popular)
-- Nathaniel J. Smith -- http://vorpus.org
On 14 August 2015 at 14:14, Nathaniel Smith <njs@pobox.com> wrote: ...>
Of course if you have an alternative proposal than I'm all ears :-).
Yeah :)

So, I want to dedicate some time to contributing to this discussion meaningfully, but I can't for the next few weeks - jury duty, Kiwi PyCon and polishing up the PEPs I'm already committed to...

I think the approach of being able to ask the *platform* for things needed to build-or-use known artifacts is going to enable a bunch of different answers in this space. I'm much more enthusiastic about that than doing anything that ends up putting PyPI in competition with the distribution space.

My criteria for success are:

- there's *a* migration path from what we have today to what we propose. Doesn't have to be good, just exist.
- authors of scipy, numpy, cryptography etc can upload binary wheels for *linux, Mac OSX and Windows 32/64 in a safe and sane way
- we don't need to do things like uploading wheels containing non-Python shared libraries, nor upload statically linked modules

In fact, I think uploading regular .so files is just a huge heartache waiting to happen, so I'm almost inclined to add:

- we don't support uploading external non-Python libraries [without prejudice for changing our minds in the future]

There was a post that referenced a numpy ABI, dunno if it was in this thread - I need to drill down into that, because I don't understand why that's not a regular version resolution problem, unlike the Python ABI, which pip can't install [and shouldn't be able to!]

-Rob

-- Robert Collins <rbtcollins@hp.com> Distinguished Technologist HP Converged Cloud
On Thu, Aug 13, 2015 at 7:27 PM, Robert Collins <robertc@robertcollins.net> wrote:
On 14 August 2015 at 14:14, Nathaniel Smith <njs@pobox.com> wrote: ...>
Of course if you have an alternative proposal then I'm all ears :-).
Yeah :)
So, I want to dedicate some time to contributing to this discussion meaningfully, but I can't for the next few weeks - Jury duty, Kiwi PyCon and polishing up the PEP's I'm already committed to...
Totally hear that... it's not super urgent anyway. We should make it clear to Nate -- hi Nate! -- that there's no reason that solving this problem should block putting together the basic binary-compatibility.cfg infrastructure.
I think the approach of being able to ask the *platform* for things needed to build-or-use known artifacts is going to enable a bunch of different answers in this space. I'm much more enthusiastic about that than doing anything that ends up putting PyPI in competition with the distribution space.
My criteria for success are:
- there's *a* migration path from what we have today to what we propose. Doesn't have to be good, just exist.
- authors of scipy, numpy, cryptography etc can upload binary wheels for *linux, Mac OSX and Windows 32/64 in a safe and sane way
So the problem is that, IMO, "sane" here means "not building a separate wheel for every version of distro on distrowatch". So I can see two ways to do that:

- my suggestion that we just pick a particular highly-compatible distro like centos 5 to build against, and make a standard list of which libraries can be assumed to be provided
- the PEP-497-or-number-to-be-determined approach, in which we still have to pick a highly-compatible distro like centos 5 to build against, but each wheel has a list of which libraries from that distro it is counting on being provided

I can see the appeal of the latter approach, since if you want to do the former approach right you need to be careful about exactly which libraries you're assuming are present, etc.

They both could work. But in practice, you still have to pick which distro you are going to use to build, and you still have to say "when I say I need libblas.so.1, what I mean is that I need a file that is ABI-compatible with the version of libblas.so.1 that existed in centos 5 exactly, not any other libblas.so.1". And then in practice not every distro will have such a thing, so for a project like numpy that wants to make things easy for a wide variety of users, we'll still only be able to take advantage of external dependencies for libraries that are effectively universally available and compatible anyway, and end up vendoring the rest... so in the end basically we'd be distributing exactly the same wheels under either of these proposals, just the latter requires a much much more complicated scheme for metadata and installation.

And in practice I think the main alternative possibility, if we don't come up with some solid guidance for how packages can build works-everywhere-wheels, is that we'll see wheels for latest-version-of-Ubuntu-only, plus the occasional smattering of other distros, varying randomly on a project-by-project basis. Which would suck.
- we don't need to do things like uploading wheels containing non-Python shared libraries, nor upload statically linked modules
In fact, I think uploading regular .so files is just a huge heartache waiting to happen, so I'm almost inclined to add:
- we don't support uploading external non-Python libraries [without prejudice for changing our minds in the future]
Windows and OS X don't (reliably) have any package manager. So PyPI *is* inevitably going to contain non-Python shared libraries or statically linked modules or something like that. (And in fact it already contains such things today.) I'm not sure what the alternative would even be.

This also means that projects like numpy are already forced to accept that we're on the hook for security updates in our dependencies etc., so doing it on Linux too is not really that scary.

Oh, I just thought of another issue: an extremely important requirement for numpy/scipy/etc. wheels is that they be reliably installable without root access. People *really* care about this: missing your grant deadline b/c you can't upgrade some package to fix some showstopper bug b/c university IT support is not answering calls at midnight on Sunday = rather poor UX.

Given that, the only situation I can see where we would ever distribute wheels that require system blas on Linux is if we were able to do it alongside wheels that do not require system blas, and pip were clever enough to reliably always pick the latter except in cases where the system blas was actually present and working.
There was a post that referenced a numpy ABI, dunno if it was in this thread - I need to drill down into that, because I don't understand why that's not a regular version resolution problem, unlike the Python ABI, which pip can't install [and shouldn't be able to!]
The problem is that numpy is very unusual among Python packages in that it exposes a large and widely-used *C* API/ABI:

http://docs.scipy.org/doc/numpy/reference/c-api.html

This means that when you build, e.g., scipy, then you get a binary that depends on things like the in-memory layout of numpy's internal objects. We'd like it to be the case that when we release a new version of numpy, pip could realize "hey, this new version says it has an incompatible ABI that will break your currently installed version of scipy -- I'd better fetch a new version of scipy as well, or at least rebuild the same version I already have". Notice that at the time scipy is built, it is not known which future version of numpy will require a rebuild.

There are a lot of ways this might work on both the numpy and pip sides -- definitely fodder for a separate thread -- but that's the basic problem.

-n

-- Nathaniel J. Smith -- http://vorpus.org
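To make the shape of the problem concrete, here is a toy sketch of the check an installer would need to be able to perform. The `abi_version` notion is entirely hypothetical - no such hook exists in pip or in numpy's packaging metadata today:

```
# Illustrative sketch only: record the ABI version of the numpy a
# package was built against, then compare at install/import time.
# 'ABI version' here is a hypothetical integer; numpy's real C-API
# versioning is internal and pip has no such hook today.

BUILT_AGAINST_NUMPY_ABI = 9  # captured when the scipy binary was built

def abi_compatible(installed_numpy_abi):
    """A new numpy ABI breaks extensions built against an older one."""
    return installed_numpy_abi == BUILT_AGAINST_NUMPY_ABI

if not abi_compatible(10):
    print("numpy ABI changed: scipy must be rebuilt or upgraded")
```

The hard part, as noted above, is that the right-hand side of this comparison is only known at some future numpy release, so the constraint cannot be written down as an ordinary version pin at scipy build time.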
On 14Aug2015 0038, Nathaniel Smith wrote:
Windows and OS X don't (reliably) have any package manager. So PyPI *is* inevitably going to contain non-Python shared libraries or statically linked modules or something like that. (And in fact it already contains such things today.) I'm not sure what the alternative would even be.
Windows 10 has a package manager (http://blogs.technet.com/b/keithmayer/archive/2014/04/16/what-s-new-in-power...) but I don't think it will be particularly helpful here. The Windows model has always been to only share system libraries and each application should keep its own dependencies local.

I actually like two ideas for Windows (not clear to me how well they apply on other platforms), both of which have been mentioned in the past:

* PyPI packages that are *very* thin wrappers around a shared library

For example, maybe "libpng" shows up on PyPI, and packages can then depend on it. It takes some care on the part of the publisher to maintain version-to-version compatibility (or maybe wheel/setup.py/.cfg grows a way to define vendored dependencies?) but this should be possible today.

* "Packages" that contain shared sources

One big problem on Windows is there's no standard place to put library sources, so build tools can't find them. If a package declared "build requires libpng.x.y source" then there could be tarballs "somewhere" (or even links to public version control) that have that version of the source, and the build tools can add the path references to include it.

I don't have numbers, but I do know that once a C compiler is available, the next easiest problem to solve is getting and referencing sources.
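A rough sketch of the first idea: a hypothetical `libpng` wrapper package whose only job is to publish the location of its bundled shared library so dependents can load it. All names here are invented for illustration:

```
# Hypothetical 'libpng' wrapper package: __init__.py just publishes the
# location of the bundled shared library so dependents can find it.
import ctypes
import os

_HERE = os.path.dirname(__file__)
# The bundled binary would be named per-platform by the build process.
LIBRARY_PATH = os.path.join(_HERE, "libpng16.so")

def load():
    """Load the bundled library; dependents call this instead of
    guessing at system paths."""
    return ctypes.CDLL(LIBRARY_PATH)
```

A dependent would then declare something like `install_requires=["libpng"]` and call `libpng.load()` rather than searching the system.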
Given that, the only situation I can see where we would ever distribute wheels that require system blas on Linux, is if we were able to do it alongside wheels that do not require system blas, and pip were clever enough to reliably always pick the latter except in cases where the system blas was actually present and working.
I think something similar came up back when we were discussing SSE support in Windows wheels. I'd love to see packages be able to run system checks to determine their own platform string (maybe a pip/wheel extension?) before selecting and downloading a wheel. I think that would actually solve a lot of these issues.
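Such a pre-install probe might look something like this on Linux, reading CPU flags from /proc/cpuinfo; this is a sketch only, Linux-specific, and the `_sse2` tag suffix is invented for illustration:

```
# Sketch of a pre-install capability probe (Linux-only): decide between
# a hypothetical 'sse2' wheel variant and a plain one by reading
# /proc/cpuinfo. The '_sse2' suffix is invented for illustration.
def cpu_flags():
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except IOError:
        pass
    return set()

def preferred_platform_suffix():
    return "_sse2" if "sse2" in cpu_flags() else ""

print("linux_x86_64" + preferred_platform_suffix())
```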
This means that when you build, e.g., scipy, then you get a binary that depends on things like the in-memory layout of numpy's internal objects. We'd like it to be the case that when we release a new version of numpy, pip could realize "hey, this new version says it has an incompatible ABI that will break your currently installed version of scipy -- I'd better fetch a new version of scipy as well, or at least rebuild the same version I already have". Notice that at the time scipy is built, it is not known which future version of numpy will require a rebuild. There are a lot of ways this might work on both the numpy and pip sides -- definitely fodder for a separate thread -- but that's the basic problem.
There was discussion about an "incompatible_with" metadata item at one point. Could numpy include {incompatible_with: "scipy<x.y"} in such a release? Or would that not be possible?

Cheers, Steve
On Fri, Aug 14, 2015 at 9:17 AM, Steve Dower <steve.dower@python.org> wrote:
I actually like two ideas for Windows (not clear to me how well they apply on other platforms),
I think this same approach should be used for OS X, not sure about Linux -- on Linux, you normally have "normal" ways to get libs.

both of which have been mentioned in the past:
* PyPI packages that are *very* thin wrappers around a shared library
For example, maybe "libpng" shows up on PyPI, and packages can then depend on it. It takes some care on the part of the publisher to maintain version-to-version compatibility (or maybe wheel/setup.py/.cfg grows a way to define vendored dependencies?) but this should be possible today.
except that AFAICT, we have no way to describe wheel (or platform) dependent dependencies: i.e. "this particular binary wheel, for Windows, depends on the libPNG version x.y wheel". Though you could probably fairly easily patch that dependency into the wheel itself. But ideally, we would have a semi-standard place to put such stuff, and then the source package would depend on libPNG being there at build time, too, but only on Windows (or maybe only on OS X, or both, but not Linux, or...).

Or just go with conda :-) -- conda packages depend on other conda packages -- not on other projects (i.e. not source, etc). And you can do platform dependent configuration, like dependencies.

* "Packages" that contain shared sources
One big problem on Windows is there's no standard place to put library sources, so build tools can't find them. If a package declared "build requires libpng.x.y source" then there could be tarballs "somewhere" (or even links to public version control) that have that version of the source, and the build tools can add the path references to include it.
That would be the source equivalent of the above, and yes, I like that idea -- but again, you need a way to express platform-dependent dependencies. Though given that setup.py is python code, that's not too hard.
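A minimal sketch of what that could look like in a setup.py; the `libpng` wrapper package named here is hypothetical:

```
# Minimal sketch of platform-dependent dependencies in setup.py.
# The 'libpng' wrapper package is hypothetical.
import sys

from setuptools import setup

install_requires = []
if sys.platform in ("win32", "darwin"):
    # No reliable system package manager; depend on a wrapper wheel.
    install_requires.append("libpng")

setup(
    name="example-extension",
    version="0.1",
    install_requires=install_requires,
)
```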
There was discussion about an "incompatible_with" metadata item at one point. Could numpy include {incompatible_with: "scipy<x.y"} in such a release? Or would that not be possible.
circular dependency hell! scipy depends on numpy, not the other way around -- so it needs to be clear which version of numpy a given version of scipy depends on.

-CHB

-- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
On 8/14/2015 16:16, Chris Barker wrote:
On Fri, Aug 14, 2015 at 9:17 AM, Steve Dower <steve.dower@python.org> wrote:
There was discussion about an "incompatible_with" metadata item at one point. Could numpy include {incompatible_with: "scipy<x.y"} in such a release? Or would that not be possible.
circular dependency hell! scipy depends on numpy, not the other way around -- so it needs to be clear which version of numpy a given version of scipy depends on.
-CHB
I think a better spelling of that would be something along the lines of 'abi_version' - listing all the packages your new version of your module breaks... is a long list. - Alex W.
On Fri, Aug 14, 2015 at 3:38 AM, Nathaniel Smith <njs@pobox.com> wrote:
On Thu, Aug 13, 2015 at 7:27 PM, Robert Collins <robertc@robertcollins.net> wrote:
On 14 August 2015 at 14:14, Nathaniel Smith <njs@pobox.com> wrote: ...>
Of course if you have an alternative proposal then I'm all ears :-).
Yeah :)
So, I want to dedicate some time to contributing to this discussion meaningfully, but I can't for the next few weeks - Jury duty, Kiwi PyCon and polishing up the PEP's I'm already committed to...
Totally hear that... it's not super urgent anyway. We should make it clear to Nate -- hi Nate! -- that there's no reason that solving this problem should block putting together the basic binary-compatibility.cfg infrastructure.
Hi!

I've been working on bits of this as I've also been working on, as a test case, building out psycopg2 wheels for lots of different popular distros on i386 and x86_64, UCS2 and UCS4, under Docker. As a result, it's clear that my Linux distro tagging work in wheel's pep425tags has some issues. I've been adding to this list of distributions but it's going to need a lot more work:

https://bitbucket.org/pypa/wheel/pull-requests/54/soabi-2x-platform-os-distr...

So I need a bit of guidance here. I've arbitrarily chosen some tags - `rhel` for example - and wonder if, like PEP 425's mapping of Python implementations to tags, a defined mapping of Linux distributions to shorthand tags is necessary (of course this would be difficult to keep up to date, but binary-compatibility.cfg would make it less relevant in the long run).

Alternatively, I could simply trust and normalize platform.linux_distribution()[0], but this means that the platform tag on RHEL would be something like `linux_x86_64_red_hat_enterprise_linux_server_6_5`.

Finally, by *default*, the built platform tag will include whatever version information is provided in platform.linux_distribution()[1], but the "major-only" version is also included in the list of platforms, so a default debian tag might look like `linux_x86_64_debian_7_8`, but it would be possible to build (and install) `linux_x86_64_debian_7`. However, it may be the case that the default (at least for building, maybe not for installing) ought to be the major-only tag, since it should really be ABI compatible with any minor release of that distro.

--nate
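The normalization described above might look roughly like the following sketch, built on the platform.linux_distribution() API under discussion (deprecated since 3.5); the real logic lives in the wheel branch linked above, and this is only an approximation of it:

```
# Rough sketch of deriving distro-qualified platform tags from
# platform.linux_distribution(); an approximation of the logic in
# Nate's wheel branch, not the branch itself.
import platform
import re

def distro_platform_tags(base="linux_x86_64"):
    name, version, _ = platform.linux_distribution()
    name = re.sub(r"[^a-z0-9]+", "_", name.lower()).strip("_")
    parts = re.sub(r"[^0-9.]", "", version).split(".")
    tags = []
    if parts and parts[0]:
        major = parts[0]
        if len(parts) > 1:
            tags.append("%s_%s_%s_%s" % (base, name, major, parts[1]))
        tags.append("%s_%s_%s" % (base, name, major))  # major-only fallback
    return tags or [base]

print(distro_platform_tags())
```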
We've also considered using truncated SHA-256 hashes. Then the tag would not be readable, but it would always be the same length.
On August 20, 2015 at 2:40:04 PM, Daniel Holth (dholth@gmail.com) wrote:
We've also considered using truncated SHA-256 hashes. Then the tag would not be readable, but it would always be the same length.
I think it’d be a problem that you can’t reverse the SHA-256 operation, so you could no longer inspect a wheel to determine what platforms it supports based on the filename, you would only ever be able to determine if it matches a particular platform.

----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
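For concreteness, the truncated-hash scheme is trivial to compute; the 8-character truncation here is an arbitrary choice for illustration:

```
# Sketch of a fixed-length, truncated-SHA-256 platform tag. The
# 8-character truncation is an arbitrary choice for illustration.
import hashlib

def hashed_platform_tag(platform_string):
    digest = hashlib.sha256(platform_string.encode("utf-8")).hexdigest()
    return "linux_" + digest[:8]

print(hashed_platform_tag("red_hat_enterprise_linux_server_6_5_x86_64"))
```

As noted above, the mapping is one-way, so any tooling that needs to go from hash back to platform would need a maintained side table.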
If you need that for some reason just put the longer information in the metadata, inside the WHEEL file for example. Surely "does it work on my system" dominates, as opposed to "I have a wheel with this mnemonic tag, now let me install debian 5 so I can get it to run". On Thu, Aug 20, 2015 at 3:19 PM Donald Stufft <donald@stufft.io> wrote:
On August 20, 2015 at 2:40:04 PM, Daniel Holth (dholth@gmail.com) wrote:
We've also considered using truncated SHA-256 hashes. Then the tag would not be readable, but it would always be the same length.
I think it’d be a problem where can’t reverse the SHA-256 operation, so you could no longer inspect a wheel to determine what platforms it supports based on the filename, you would only ever be able to determine if it matches a particular platform.
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On August 20, 2015 at 3:23:09 PM, Daniel Holth (dholth@gmail.com) wrote:
If you need that for some reason just put the longer information in the metadata, inside the WHEEL file for example. Surely "does it work on my system" dominates, as opposed to "I have a wheel with this mnemonic tag, now let me install debian 5 so I can get it to run".
It’s less about “now let me install Debian 5” and more like tooling that doesn’t run *on* the platform but which needs to make decisions based on what platform a wheel is built for. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Thu, Aug 20, 2015 at 3:25 PM, Donald Stufft <donald@stufft.io> wrote:
On August 20, 2015 at 3:23:09 PM, Daniel Holth (dholth@gmail.com) wrote:
If you need that for some reason just put the longer information in the metadata, inside the WHEEL file for example. Surely "does it work on my system" dominates, as opposed to "I have a wheel with this mnemonic tag, now let me install debian 5 so I can get it to run".
It’s less about “now let me install Debian 5” and more like tooling that doesn’t run *on* the platform but which needs to make decisions based on what platform a wheel is built for.
This makes binary-compatibility.cfg much more difficult, however. There'd still have to be a maintained list of "platform" to hash. --nate
On 21 August 2015 at 07:25, Donald Stufft <donald@stufft.io> wrote:
On August 20, 2015 at 3:23:09 PM, Daniel Holth (dholth@gmail.com) wrote:
If you need that for some reason just put the longer information in the metadata, inside the WHEEL file for example. Surely "does it work on my system" dominates, as opposed to "I have a wheel with this mnemonic tag, now let me install debian 5 so I can get it to run".
It’s less about “now let me install Debian 5” and more like tooling that doesn’t run *on* the platform but which needs to make decisions based on what platform a wheel is built for.
Cramming that into the file name is a mistake IMO. Make it declarative data, make it indexable, and index it. We can do that locally as much as via the REST API.

That btw is why the draft for referencing external dependencies specifies file names (because file names give an ABI in the context of a platform) - but we do need to identify the platform, and platform.distribution should be good enough for that (or perhaps we start depending on lsb-release for detection).

-Rob

-- Robert Collins <rbtcollins@hp.com> Distinguished Technologist HP Converged Cloud
On 21 August 2015 at 05:58, Robert Collins <robertc@robertcollins.net> wrote:
On 21 August 2015 at 07:25, Donald Stufft <donald@stufft.io> wrote:
On August 20, 2015 at 3:23:09 PM, Daniel Holth (dholth@gmail.com) wrote:
If you need that for some reason just put the longer information in the metadata, inside the WHEEL file for example. Surely "does it work on my system" dominates, as opposed to "I have a wheel with this mnemonic tag, now let me install debian 5 so I can get it to run".
It’s less about “now let me install Debian 5” and more like tooling that doesn’t run *on* the platform but which needs to make decisions based on what platform a wheel is built for.
Cramming that into the file name is a mistake IMO.
Make it declarative data, make it indexable, and index it. We can do that locally as much as via the REST API.
That btw is why the draft for referencing external dependencies specifies file names (because file names give an ABI in the context of a platform) - but we do need to identify the platform, and platform.distribution should be good enough for that (or perhaps we start depending on lsb-release for detection
LSB has too much stuff in it, so most distros aren't LSB compliant out of the box - you have to install extra packages.

/etc/os-release is a better option: http://www.freedesktop.org/software/systemd/man/os-release.html

My original concern with using that was that it *over*specifies the distro (e.g. not only do CentOS and RHEL releases show up as different platforms, but so do X.Y releases within a series), but the binary-compatibility.cfg idea resolves that issue, since a derived distro can explicitly identify itself as binary compatible with its upstream and be able to use the corresponding wheel files.

Regards, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
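Part of the appeal of /etc/os-release is how simple it is to parse. A minimal sketch (the file is plain KEY=value syntax with optionally quoted values; this sketch ignores some quoting corner cases):

```
# Minimal /etc/os-release parser: plain KEY=value lines, values
# optionally double-quoted. A sketch that ignores some quoting
# corner cases in the full spec.
def read_os_release(path="/etc/os-release"):
    info = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            info[key] = value.strip('"')
    return info

release = read_os_release()
print(release.get("ID"), release.get("VERSION_ID"))
```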
On 21.08.2015 08:51, Nick Coghlan wrote:
On 21 August 2015 at 05:58, Robert Collins <robertc@robertcollins.net> wrote:
On 21 August 2015 at 07:25, Donald Stufft <donald@stufft.io> wrote:
On August 20, 2015 at 3:23:09 PM, Daniel Holth (dholth@gmail.com) wrote:
If you need that for some reason just put the longer information in the metadata, inside the WHEEL file for example. Surely "does it work on my system" dominates, as opposed to "I have a wheel with this mnemonic tag, now let me install debian 5 so I can get it to run".
It’s less about “now let me install Debian 5” and more like tooling that doesn’t run *on* the platform but which needs to make decisions based on what platform a wheel is built for.
Cramming that into the file name is a mistake IMO.
Agreed. IMO, the file name should really just be a hint to what's in the box and otherwise just serve the main purpose of being unique for whatever the platform needs are.

You might be interested in the approach we've chosen for our prebuilt packages when used with our Python package web installer: instead of parsing file names, we use a tag file for each package, which maps a set of tags to the URLs of the distribution files. The web installer takes care of determining the right distribution file to download by looking at those tags, not by looking at the file name.

Since tags are very flexible and, most importantly, extensible, this approach has allowed us to add new differentiations to the system without changing the basic architecture.

Here's a talk on the installer architecture I gave at PyCon UK 2014: http://www.egenix.com/library/presentations/PyCon-UK-2014-Python-Web-Install...

This architecture was born out of the need to support more platforms than eggs, wheels, etc. currently support. We had previously tried to use the file name approach and get setuptools to play along, but this failed. The prebuilt distribution files still use a variant of this to make the file names unique, but we've stopped putting more energy into getting those to work with setuptools, since the tags allow for a much more flexible approach than file names.

We currently support Windows, Linux, FreeBSD and Mac OS X.

-- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Aug 21 2015)
On Fri, Aug 21, 2015 at 2:51 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 21 August 2015 at 07:25, Donald Stufft <donald@stufft.io> wrote:
On August 20, 2015 at 3:23:09 PM, Daniel Holth (dholth@gmail.com) wrote:
If you need that for some reason just put the longer information in the metadata, inside the WHEEL file for example. Surely "does it work on my system" dominates, as opposed to "I have a wheel with this mnemonic tag, now let me install debian 5 so I can get it to run".
It’s less about “now let me install Debian 5” and more like tooling that doesn’t run *on* the platform but which needs to make decisions based on what platform a wheel is built for.

On 21 August 2015 at 05:58, Robert Collins <robertc@robertcollins.net> wrote:
Cramming that into the file name is a mistake IMO.
Make it declarative data, make it indexable, and index it. We can do that locally as much as via the REST API.
That btw is why the draft for referencing external dependencies specifies file names (because file names give an ABI in the context of a platform) - but we do need to identify the platform, and platform.distribution should be good enough for that (or perhaps we start depending on lsb-release for detection
LSB has too much stuff in it, so most distros aren't LSB compliant out of the box - you have to install extra packages.
/etc/os-release is a better option: http://www.freedesktop.org/software/systemd/man/os-release.html
As per this discussion, and because I've discovered that the entire platform module is deprecated in 3.5 (and other amusements, like an Ubuntu-modified version of platform that ships on Ubuntu - platform as shipped with CPython detects Ubuntu as debian), I'm switching to os-release, but even that is unreliable - the file does not exist in CentOS/RHEL 6, for example. On Debian testing/sid installs, VERSION and VERSION_ID are unset (which is not wrong - there is no release of testing, but it does make identifying the platform more complicated since even the codename is not provided other than at the end of PRETTY_NAME). Regardless of whether a hash or a human-identifiable string is used to identify the platform, there still needs to be a way to reliably detect it.

Unless someone tells me not to, I'm going to default to using os-release and then fall back to other methods in the event that os-release isn't available, and this will be in some sort of library alongside pep425tags in wheel/pip.

FWIW, os-release's `ID_LIKE` gives us some ability to make assumptions without explicit need for a binary-compatibility.cfg (although not blindly - for example, CentOS sets this to "rhel fedora", but of course RHEL/CentOS and Fedora versions are not congruent).

--nate
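A sketch of how `ID_LIKE` could feed compatibility guesses, with the caveat above built in as a comment; the function and its use are illustrative only:

```
# Sketch: use os-release ID_LIKE to guess compatible distro families
# when no explicit binary-compatibility.cfg entry exists. This cannot
# be trusted blindly: CentOS reports "rhel fedora" even though Fedora
# versions are not congruent with RHEL/CentOS versions.
def compatible_distro_ids(os_release):
    ids = [os_release.get("ID", "")]
    ids.extend(os_release.get("ID_LIKE", "").split())
    return [i for i in ids if i]

print(compatible_distro_ids({"ID": "centos", "ID_LIKE": "rhel fedora"}))
# ['centos', 'rhel', 'fedora']
```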
On Mon, Aug 24, 2015 at 10:03 AM, Nate Coraor <nate@bx.psu.edu> wrote:
As per this discussion, and because I've discovered that the entire platform module is deprecated in 3.5 (and other amusements, like a Ubuntu-modified version of platform that ships on Ubuntu - platform as shipped with CPython detects Ubuntu as debian), I'm switching to os-release, but even that is unreliable - the file does not exist in CentOS/RHEL 6, for example. On Debian testing/sid installs, VERSION and VERSION_ID are unset (which is not wrong - there is no release of testing, but it does make identifying the platform more complicated since even the codename is not provided other than at the end of PRETTY_NAME). Regardless of whether a hash or a human-identifiable string is used to identify the platform, there still needs to be a way to reliably detect it.
Unless someone tells me not to, I'm going to default to using os-release and then fall back to other methods in the event that os-release isn't available, and this will be in some sort of library alongside pep425tags in wheel/pip.
FWIW, os-release's `ID_LIKE` gives us some ability to make assumptions without explicit need for a binary-compatibility.cfg (although not blindly - for example, CentOS sets this to "rhel fedora", but of course RHEL/CentOS and Fedora versions are not congruent).
IIUC, then the value of os-release will be used to generalize the compatible versions of *.so deps of a given distribution at a point in time?

This works for distros that don't change [libc] much during a release, but for rolling release models (e.g. arch, gentoo), IDK how this simplification will work. (This is a graph with nodes and edges (with attributes), and rules.)

* Keying/namespacing is a simplification which may work.
* *conda preprocessing selectors* (and ~LSB-Python-Conda) ~'prune' large parts of the graph
* Someone mentioned LSB[-Python-Base] (again as a simplification)
* [[package, [version<=>verstr]]]

Salt:

* __salt__['grains']['os'] = "Fedora" || "Ubuntu"
* __salt__['grains']['os_family'] = "RedHat" || "Debian"
* __salt__['grains']['osrelease'] = "22" || "14.04"
* __salt__['grains']['oscodename'] = "Twenty Two" || "trusty"
* Docs: http://docs.saltstack.com/en/latest/topics/targeting/grains.html
* Docs: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.grains.html...
* Src: https://github.com/saltstack/salt/blob/develop/salt/grains/core.py#L1018 ("def os_data()")

    $ sudo salt-call --local grains.item os_family os osrelease oscodename
    local:
        ----------
        os:
            Fedora
        os_family:
            RedHat
        oscodename:
            Twenty Two
        osrelease:
            22
On Mon, Aug 24, 2015 at 1:51 PM, Wes Turner <wes.turner@gmail.com> wrote:
IIUC, then the value of os-release will be used to generalize the compatible versions of *.so deps of a given distribution at a point in time?
This works for distros that don't change [libc] much during a release, but for rolling release models (e.g. arch, gentoo), IDK how this simplification will work. (This is a graph with nodes and edges (with attributes), and rules).
Arch, Gentoo, and other rolling release distributions don't have a stable ABI, so by definition I don't think we can support redistributable wheels on them. I'm adding platform detection support for them regardless, but I don't think there's any way to allow wheels built for these platforms in PyPI.
I've started down this road of Linux platform detection, here's the work so far:

https://bitbucket.org/natefoo/wheel/src/tip/wheel/platform/linux.py

I'm collecting distribution details here:

https://gist.github.com/natefoo/814c5bf936922dad97ff

One thing to note, although it's not used, I'm attempting to label a particular ABI as stable or unstable, so for example, Debian testing is unstable, whereas full releases are stable. Arch and Gentoo are always unstable, Ubuntu is always stable, etc. Hopefully this would be useful in making a decision about what wheels to allow into PyPI.

--nate
On Mon, Aug 24, 2015 at 1:51 PM, Wes Turner <wes.turner@gmail.com> wrote:
On Mon, Aug 24, 2015 at 10:03 AM, Nate Coraor <nate@bx.psu.edu> wrote:
On Fri, Aug 21, 2015 at 2:51 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 21 August 2015 at 07:25, Donald Stufft <donald@stufft.io> wrote:
On August 20, 2015 at 3:23:09 PM, Daniel Holth (dholth@gmail.com)
wrote:
> If you need that for some reason just put the longer information in
> metadata, inside the WHEEL file for example. Surely "does it work on my > system" dominates, as opposed to "I have a wheel with this mnemonic tag, > now let me install debian 5 so I can get it to run". > >
It’s less about “now let me install Debian 5” and more like tooling
On 21 August 2015 at 05:58, Robert Collins <robertc@robertcollins.net> wrote: the that doesn’t run *on* the platform but which needs to make decisions based on what platform a wheel is built for.
Cramming that into the file name is a mistake IMO.
Make it declarative data, make it indexable, and index it. We can do that locally as much as via the REST API.
That btw is why the draft for referencing external dependencies specifies file names (because file names give an ABI in the context of a platform) - but we do need to identify the platform, and platform.distribution should be good enough for that (or perhaps we start depending on lsb-release for detection
LSB has too much stuff in it, so most distros aren't LSB compliant out of the box - you have to install extra packages.
/etc/os-release is a better option: http://www.freedesktop.org/software/systemd/man/os-release.html
As per this discussion, and because I've discovered that the entire platform module is deprecated in 3.5 (and other amusements, like a Ubuntu-modified version of platform that ships on Ubuntu - platform as shipped with CPython detects Ubuntu as debian), I'm switching to os-release, but even that is unreliable - the file does not exist in CentOS/RHEL 6, for example. On Debian testing/sid installs, VERSION and VERSION_ID are unset (which is not wrong - there is no release of testing, but it does make identifying the platform more complicated since even the codename is not provided other than at the end of PRETTY_NAME). Regardless of whether a hash or a human-identifiable string is used to identify the platform, there still needs to be a way to reliably detect it.
Unless someone tells me not to, I'm going to default to using os-release and then fall back to other methods in the event that os-release isn't available, and this will be in some sort of library alongside pep425tags in wheel/pip.
FWIW, os-release's `ID_LIKE` gives us some ability to make assumptions without explicit need for a binary-compatibility.cfg (although not blindly - for example, CentOS sets this to "rhel fedora", but of course RHEL/CentOS and Fedora versions are not congruent).
IIUC, then the value of os-release will be used to generalize the compatible versions of *.so deps of a given distribution at a point in time?
This works for distros that don't change [libc] much during a release, but for rolling release models (e.g. arch, gentoo), IDK how this simplification will work. (This is a graph with nodes and edges (with attributes), and rules).
Arch, Gentoo, and other rolling release distributions don't have a stable ABI, so by definition I don't think we can support redistributable wheels on them. I'm adding platform detection support for them regardless, but I don't think there's any way to allow wheels built for these platforms in PyPI.
* Keying/namespacing is a simplification which may work. * *conda preprocessing selectors* (and ~LSB-Python-Conda) ~'prune' large parts of the graph
* Someone mentioned LSB[-Python-Base] (again as a simplification) * [[package, [version<=>verstr]]]
Salt * __salt__['grains']['os'] = "Fedora" || "Ubuntu" * __salt__['grains']['os_family'] = "RedHat" || "Debian" * __salt__['grains']['osrelease'] = "22" || "14.04" * __salt__['grains']['oscodename'] = "Twenty Two" || "trusty" * Docs: http://docs.saltstack.com/en/latest/topics/targeting/grains.html * Docs: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.grains.html... * Src: https://github.com/saltstack/salt/blob/develop/salt/grains/core.py#L1018 ("def os_data()")
$ sudo salt-call --local grains.item os_family os osrelease oscodename
local:
    ----------
    os: Fedora
    os_family: RedHat
    oscodename: Twenty Two
    osrelease: 22
My original concern with using that was that it *over*specifies the distro (e.g. not only do CentOS and RHEL releases show up as different platforms, but so do X.Y releases within a series), but the binary-compatibility.cfg idea resolves that issue, since a derived distro can explicitly identify itself as binary compatible with its upstream and be able to use the corresponding wheel files.
Regards, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Tue, Aug 25, 2015 at 12:54 PM, Nate Coraor <nate@bx.psu.edu> wrote:
I've started down this road of Linux platform detection, here's the work so far:
https://bitbucket.org/natefoo/wheel/src/tip/wheel/platform/linux.py
IDK whether codecs.open(file, 'r', encoding='utf8') is necessary or not - there are probably distros with Unicode characters in, e.g., their lsb-release files.
I'm collecting distribution details here:
https://gist.github.com/natefoo/814c5bf936922dad97ff
Oh wow; thanks!
One thing to note: although it's not used yet, I'm attempting to label a particular ABI as stable or unstable - so, for example, Debian testing is unstable, whereas full releases are stable; Arch and Gentoo are always unstable, Ubuntu is always stable, etc. Hopefully this will be useful in making a decision about what wheels to allow into PyPI.
Is it possible to enumerate the set into a table? e.g. [((distro,ver), {'ABI': 'stable'}), (...)]
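(In the shape Wes suggests, such an enumeration might look like this sketch - the entries are only illustrative, taken from the examples in this thread, not an authoritative survey:)

DISTRO_ABI = [
    (('ubuntu', '14.04'), {'ABI': 'stable'}),
    (('centos', '7'), {'ABI': 'stable'}),
    (('debian', '8'), {'ABI': 'stable'}),
    (('debian', 'testing'), {'ABI': 'unstable'}),
    (('arch', ''), {'ABI': 'unstable'}),    # rolling release
    (('gentoo', ''), {'ABI': 'unstable'}),  # rolling release
]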
Hi all,

Platform detection and binary-compatibility.cfg support is now available in my branch of pip[1]. I've also built a large number of psycopg2 wheels for testing[2]. Here's what happens when you try to install one of them on CentOS 7 using my pip:

# pip install --index https://wheels.galaxyproject.org/ --no-cache-dir psycopg2
Collecting psycopg2
  Could not find a version that satisfies the requirement psycopg2 (from versions: )
No matching distribution found for psycopg2

Then create /etc/python/binary-compatibility.cfg:

# cat /etc/python/binary-compatibility.cfg
{
    "linux_x86_64_centos_7": {
        "install": ["linux_x86_64_rhel_6"]
    }
}

# pip install --index https://wheels.galaxyproject.org/ --no-cache-dir psycopg2
Collecting psycopg2
  Downloading https://wheels.galaxyproject.org/packages/psycopg2-2.6.1-cp27-cp27mu-linux_x... (307kB)
    100% |################################| 307kB 75.7MB/s
Installing collected packages: psycopg2
Successfully installed psycopg2-2.6.1

Of course, I have not attempted to solve the external dependency problem:

# python -c 'import psycopg2; print psycopg2'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/psycopg2/__init__.py", line 50, in <module>
    from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: libpq.so.5: cannot open shared object file: No such file or directory

But after installing postgresql-libs, everything works as expected:

# python -c 'import psycopg2; print psycopg2'
<module 'psycopg2' from '/usr/lib/python2.7/site-packages/psycopg2/__init__.pyc'>

This is an improvement over the current situation of an sdist in PyPI, however, since only one non-default package (postgresql-libs) needs to be installed, as opposed to postgresql-devel and the build tools (gcc, make, etc.). In addition, a user installing psycopg2 is likely to already have postgresql-libs installed.

I'd really appreciate it if this work could be given a look, and some discussion could take place on where to go from here.

Thanks,
--nate

[1]: https://github.com/natefoo/pip/tree/linux-wheels
[2]: https://wheels.galaxyproject.org/simple/psycopg2
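(To make the lookup above concrete: a minimal sketch of what the install-side expansion could look like, assuming the JSON form of binary-compatibility.cfg shown in the transcript - the helper name is hypothetical, not pip's actual API:)

import json

def compatible_platform_tags(detected_tag, cfg_path='/etc/python/binary-compatibility.cfg'):
    # The locally detected tag is always acceptable; the cfg only adds
    # extra tags the platform declares itself binary compatible with.
    tags = [detected_tag]
    try:
        with open(cfg_path) as fh:
            cfg = json.load(fh)
    except (IOError, ValueError):  # no cfg, or unparseable cfg
        return tags
    tags.extend(cfg.get(detected_tag, {}).get('install', []))
    return tags

# On the CentOS 7 host above:
# compatible_platform_tags('linux_x86_64_centos_7')
# -> ['linux_x86_64_centos_7', 'linux_x86_64_rhel_6']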
On September 1, 2015 at 9:57:50 AM, Daniel Holth (dholth@gmail.com) wrote:
Looks amazing, why don't we merge it.
I think we need to update the PEP or write a new PEP before we add new tags to the implementation.

-----------------
Donald Stufft
We could at least merge the implementation of the SOABI tag for Python 2.7 (cp27m, cp27mu, ...), which has been in the PEP from the beginning but was never implemented for Python 2. This lets you distinguish between wheels built for CPython with debug, pymalloc, or unicode builds.

For pypy, which does not have SOABI, the current 'none' should suffice.
Merging the SOABI tag sounds like a win to me.

-----------------
Donald Stufft
On Thu, Sep 3, 2015 at 9:53 AM Nate Coraor <nate@bx.psu.edu> wrote:
The ABI tag code as written will actually set it for PyPy (e.g. 'pp222mu') since the SOABI config var is unset on it (and probably any other non-Python-3 implementation). This was intentional since PyPy does actually build some C Extensions, but I can limit SOABI detection to CPython if it doesn't make sense to do it on PyPy. However, I see now it will also be set for Jython, which it definitely should not do, so I'll fix that regardless.
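(For reference, the derivation being discussed is roughly the following sketch - an approximation of the PR's logic, not a copy of it; what each implementation reports for these config vars varies:)

import platform
import sys
import sysconfig

def guess_abi_tag():
    soabi = sysconfig.get_config_var('SOABI')
    if soabi and soabi.startswith('cpython-'):
        return 'cp' + soabi.split('-')[1]  # CPython 3, e.g. 'cp35m'
    impl = platform.python_implementation()
    if impl == 'CPython':
        base = 'cp%d%d' % sys.version_info[:2]
    elif impl == 'PyPy':
        base = 'pp%d%d%d' % sys.pypy_version_info[:3]  # e.g. 'pp222'
    else:
        return 'none'  # e.g. Jython: PEP 3149 flags make no sense here
    # PEP 3149 build flags: debug, pymalloc, wide unicode
    d = 'd' if hasattr(sys, 'gettotalrefcount') else ''
    m = 'm' if sysconfig.get_config_var('WITH_PYMALLOC') else ''
    u = 'u' if sysconfig.get_config_var('Py_UNICODE_SIZE') == 4 else ''
    return base + d + m + u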
I'll create PRs for this against wheel and pip shortly. I can also work on a PEP for the platform tag - I don't think it's going to need to be a big one. Are there any preferences as to whether this should be a new PEP or an update to 425?
On Thu, Sep 3, 2015 at 9:56 AM, Daniel Holth <dholth@gmail.com> wrote:
IIRC there's also a bug where we use pypy's version "2.6.2" and not the version of Python it implements "2.7" for the first tag.
On Thu, Sep 3, 2015 at 10:04 AM, Nate Coraor <nate@bx.psu.edu> wrote:
It's the other way around: https://github.com/pypa/pip/issues/2882

My changes set the Python tag to the version of PyPy.
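(In sketch form, with a hypothetical helper name:)

import sys

def python_tag():
    # PyPy 2.6.x implementing Python 2.7 yields 'pp26', not 'pp27'
    if hasattr(sys, 'pypy_version_info'):
        return 'pp%d%d' % sys.pypy_version_info[:2]
    return 'cp%d%d' % sys.version_info[:2]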
Here are the PRs for SOABI support and PyPy version tag correction:
https://bitbucket.org/pypa/wheel/pull-requests/55/soabi-support-for-python-2...
https://github.com/pypa/pip/pull/3075

--nate
Coming back to this, I'm wondering if we should include the libc implementation/version in a less generic, but still generic, linux wheel. Right now, if you statically link, I think the only platform ABIs you need to worry about are libc and Python itself. Python itself is handled already, but libc is not. The only thing I've seen so far is "build on an old enough version of glibc that it handles anything sane", but not all versions of Linux even use glibc at all.

-----------------
Donald Stufft
On September 8, 2015 at 1:29:53 PM, Nate Coraor (nate@bx.psu.edu) wrote:
This proposal makes a lot of sense to me. pip will need an update to do the backwards compatibility, and it may be a little ugly to do this all on the platform tag. For example, linux_x86_64_ubuntu_12_04 wheels should not be installed on systems that identify as linux_x86_64_ubuntu_14_04, but linux_x86_64_glibc_2_15 wheels can be installed on systems that identify as linux_x86_64_glibc_2_19. pip would need to maintain a list of which tag prefixes or patterns should be considered backward compatible, and which should not. Granted, new libcs do not pop up overnight, so it's not exactly a nightmare scenario.

Wheel should be updated to generate the "libc-generic" wheels by default when nothing other than libc is dynamically linked. It'll need libc vendor/version detection.

Alternatively, the platform tag could be split in two, in which case you have a "generic" portion (which would probably be what it currently is, distutils.util.get_platform()) and a "specific" portion (the distro or libc), possibly prefixed with something to avoid having to maintain a list of what's version compatible and what's not (e.g. 'd_ubuntu_14_04' vs. 'c_glibc_2_19').

I don't think there is a strong case to include the libc version in the specific portion when a distro version will also be specified, because the distro is supposed to define the ABI (at least in the case of distros with stable ABIs), and that includes the libc compatibility. So for psycopg2 wheels you'd get a "distro" wheel (linux_x86_64-d_ubuntu_14_04), but for SQLAlchemy, you'd get a "libc-generic" wheel (linux_x86_64-c_glibc_2_19).

It's then up to PyPI project owners to build on whatever platforms they wish to support.

--nate
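(As a sketch, the matching rules proposed above could look like this - illustrative only, since the tag grammar and the >= special case are exactly what is being debated:)

import re

_GLIBC_TAG = re.compile(r'^linux_(?P<arch>\w+?)_glibc_(?P<maj>\d+)_(?P<min>\d+)$')

def platform_tag_matches(wheel_tag, system_tag):
    if wheel_tag == system_tag:  # distro tags: exact match only
        return True
    w, s = _GLIBC_TAG.match(wheel_tag), _GLIBC_TAG.match(system_tag)
    if w and s and w.group('arch') == s.group('arch'):
        # glibc tags are backward compatible: an older-glibc wheel
        # is installable on a newer-glibc system
        return (int(w.group('maj')), int(w.group('min'))) <= \
               (int(s.group('maj')), int(s.group('min')))
    return False

# platform_tag_matches('linux_x86_64_glibc_2_15', 'linux_x86_64_glibc_2_19') -> True
# platform_tag_matches('linux_x86_64_ubuntu_12_04', 'linux_x86_64_ubuntu_14_04') -> False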
I think it's reasonable to not include the libc when the wheel is distro specific. I think the barrier to entry on adding new tags is far lower than adding a whole new type of tag. Right now, I think our longest tag is for OSX, which is something like macosx_10_10_x86_64 at 19 chars; I don't think it's much worse to have something like linux_glibc_2_19_x86_64 at 23 chars, or linux_ubuntu_14_04_x86_64 at 25 chars. I don't think we need the special c or d prefix - we can just treat it as ==, and special case glibc as >=, like we're currently special casing the macosx wheels to be >=.

-----------------
Donald Stufft
On Tue, Sep 8, 2015 at 3:14 PM Wes Turner <wes.turner@gmail.com> wrote:
Could there be shim packages here? How is this a different dependency?
https://www.python.org/dev/peps/pep-0425/#platform-tag is currently defined in terms of distutils get_platform(). Instead, it could be defined more abstractly, to read something like "The platform tag expresses which system(s) might be capable of running or linking with binary components of the package." This would express what the tag is for, rather than the list of allowed values. Then a "legal" change in the list of allowed values would not necessarily be effected by changing the distutils get_platform function.

As for whether a binary is allowed from a particular server, the idea of using a different list of compatible/allowed tags per package source has floated around. Distasteful amount of configuration though. Something like the Internet Explorer security zones, where you have categories of remotes...
On Tue, Sep 8, 2015 at 10:10 PM, Nathaniel Smith <njs@pobox.com> wrote:
This feels kinda half-baked to me?

"linux" is a useful tag because it has a clear meaning: "there exists a linux system somewhere that can run this, but no guarantees about which one, good luck". When building a wheel it's easy to tell whether this tag can be correctly applied.

Distro-specific tags are useful because they also have a fairly clear meaning: "here's a specific class of systems that can run this, so long as you install enough packages to fulfill the external dependencies". Again, when building a wheel it's pretty easy to tell whether this tag can be correctly applied. (Of course someone could screw this up, e.g. by building on a system that is technically distro X but has some incompatible hand-compiled libraries installed, but 99% of the time we can guess correctly.)

If we define an LSB-style base system and give it a tag - like, I don't know, the "Python base environment", call it "linux_pybe1_core" or something - that describes what libraries are and aren't available and their ABIs, and provide docs/tooling to help people explicitly create such wheels and check whether they're compatible with their system, then this is also useful. We have proof that this is sufficient to actually distribute arbitrary software usefully, given that multiple distributors have converged on this strategy. (I've been talking to some people off-list about maybe actually putting together a proposal like this...)

To me, "linux_glibc_2.18" falls between the cracks though. If this starts being what you get by default when you build a wheel, then people will use it for wheels that are *not* statically linked, and what that tag will mean is "there exists some system that can run this, and that has glibc 2.18 on it, and also some other unspecified stuff, good luck". Which is pretty useless - we might as well just stick with "linux" in this case. OTOH, if it's something that builders have to opt into, then we could document that it's only to be used for wheels that are statically linked except for glibc, and make it mean "*any* system which has glibc 2.18 or later on it can run this". Which would be useful in some cases. But at this point it's basically a version of the "defined base environment" approach, and once you've gone that far, you might as well take advantage of the various distributors' experience about what should actually be in that environment - glibc isn't enough.

-n

-- Nathaniel J. Smith -- http://vorpus.org
On Wed, Sep 9, 2015 at 8:06 AM, Nate Coraor <nate@bx.psu.edu> wrote:
I'm not sure how it'd be possible to tell. The same meaning for a generic tag would be true of any wheel built, regardless of whether the wheel has dependencies in addition to libc.
This is a tooling issue. If wheel (the package) inspects the built .so files and finds they are not dynamically linked to anything not included with glibc, it can apply the glibc tag. Otherwise, it'd apply the distro tag. There's no possibility for human error here, unless they explicitly override the platform tag.
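(Concretely, that inspection could be as simple as the following sketch - the whitelist is illustrative and incomplete, and readelf comes from binutils:)

import re
import subprocess

# Sonames shipped as part of glibc itself (illustrative, not exhaustive)
_GLIBC_LIBS = {'libc.so.6', 'libm.so.6', 'libpthread.so.0', 'libdl.so.2',
               'librt.so.1', 'libutil.so.1', 'ld-linux-x86-64.so.2'}

def needed_libraries(so_path):
    # DT_NEEDED entries are the .so's declared dynamic dependencies
    out = subprocess.check_output(['readelf', '-d', so_path])
    return set(re.findall(r'\(NEEDED\).*?\[(.+?)\]', out.decode('utf-8', 'replace')))

def is_glibc_only(so_paths):
    return all(needed_libraries(p) <= _GLIBC_LIBS for p in so_paths)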
While I agree that glibc isn't always enough, defining a base environment that may not be met by the "standard" install of popular distributions makes unprivileged wheel installation much more difficult. It's also not going to work out of the box on older distributions that wouldn't provide whatever standardized mechanism is defined for a list of "base environments currently provided by this system" (unless pip does the work itself at runtime to determine whether a base environment is met). Maybe an important question: how many popular packages with C Extensions have dependencies in addition to glibc?
On Wed, Sep 9, 2015 at 7:49 PM, Nathaniel Smith <njs@pobox.com> wrote:
Sure... my point is just that "linux" is unambiguous and fills a niche: it unambiguously says "you're on your own", and sometimes that's the best we can hope to say.
Yeah, which is why my suggestion is that we steal the "base environment" definition from the folks like Continuum and Enthought who have already done the work of determining what is in the "standard" install of popular distributions, and have spent years actually distributing packages to unprivileged users :-).
Right -- which is basically what pip will have to do to figure out the current glibc version too, right? Trying to guess whether the installed versions of several libraries are really ABI compatible with what we expect is harder than trying to guess whether the installed version of glibc alone is really ABI compatible with what we expect, but in both cases it's basically a heuristic (some distros could have local patches to their glibc that break ABI, who knows) and in both cases it's basically safe to just assume it will work (because if we stick to libraries that other distributors are already depending on then we have years of experience that it pretty much always works).
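(For what it's worth, the glibc side of that check can be done without shelling out, assuming glibc's gnu_get_libc_version() is present - it isn't on musl and friends:)

import ctypes

def glibc_version():
    try:
        libc = ctypes.CDLL(None)  # the libc already loaded into this process
        fn = libc.gnu_get_libc_version
    except (OSError, AttributeError):
        return None  # not glibc, or no shared libc at all
    fn.restype = ctypes.c_char_p
    return fn().decode('ascii')  # e.g. '2.19'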
Certainly enough that the major distributors of binary packages on Linux, like Continuum and Enthought, have decided that they need to require more than glibc :-). libstdc++ is an example of one particularly common external dependency.

To be clear: if you're talking specifically about the model where we validate that the extensions are statically linked before we enable the glibc tag, then I don't think it will do any harm to have it as an option. It just seems redundant with the more general solution.

-n

-- Nathaniel J. Smith -- http://vorpus.org
Hi all,

I think Nathaniel raised a lot of important points, and I do see the case for a "base environment" meta tag. The implementation of sniffing out those environments on a wide array of systems may be complicated, but perhaps we can, er, borrow from conda here. I do think the glibc tag is useful as well, although it may be unnecessary if there's a way to deal with the glibc version in a base environment.

However, I don't think I'm qualified to make a decision on what direction to go, and I'd like to work on updating PEP 425 for improved platform tags. So, I'm hoping to kickstart the discussion again and see if we can get a consensus on what to do.

One proposal: if PEP 425 were updated to indicate that the platform tag can be more than simply `distutils.util.get_platform()`, and to include some language as to its intent without specifying exactly what it must be, we could separate out the exact details into the packaging documentation as Nick has suggested.

--nate
On 3 September 2015 at 09:45, Donald Stufft <donald@stufft.io> wrote:
On September 1, 2015 at 9:57:50 AM, Daniel Holth (dholth@gmail.com) wrote:
Looks amazing, why don't we merge it.
I think we need to update the PEP or write a new PEP before we add new tags to the implementation.
Right, we're mainly talking about replacing/updating the compatibility tags in PEP 425. The most expedient way to formalise consensus on that would be to just write a replacement PEP and have it take precedence over 425 even for current generation wheel files.

More generally, this is an area where I don't think the PEP process is actually working well for us - I think we'd be better off separating the "produced artifact" (i.e. versioned interoperability specifications) from the change management process for those specifications (i.e. the PEP process). That's the way CPython works, after all - the released artifacts are the reference interpreter, the language reference, and the standard library reference, while the PEP process is a way of negotiating major changes to those. PyPI is similar - pypi.python.org and its APIs are the released artifact, the PEPs negotiate changes.

It's only the interoperability specs where we currently follow the RFC model of having the same document describe both the end result *and* the rationale for changes from the previous version, and I mostly find it to be a pain.

Regards, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On September 4, 2015 at 10:12:08 PM, Nick Coghlan (ncoghlan@gmail.com) wrote:
On 3 September 2015 at 09:45, Donald Stufft wrote:
On September 1, 2015 at 9:57:50 AM, Daniel Holth (dholth@gmail.com) wrote:
Looks amazing, why don't we merge it.
I think we need to update the PEP or write a new PEP before we add new tags to the implementation.
Right, we're mainly talking about replacing/updating the compatibility tags in PEP 425. The most expedient way to formalise consensus on that would be to just write a replacement PEP and have it take precedence over 425 even for current generation wheel files.
More generally, this is an area where I don't think the PEP process is actually working well for us - I think we'd be better off separating the "produced artifact" (i.e. versioned interoperability specifications) from the change management process for those specifications (i.e. the PEP process). That's the way CPython works, after all - the released artifacts are the reference interpreter, the language reference, and the standard library reference, while the PEP process is a way of negotiating major changes to those. PyPI is similar - pypi.python.org and its APIs are the released artifact, the PEPs negotiate changes.
It's only the interoperability specs where we currently follow the RFC model of having the same document describe both the end result *and* the rationale for changes from the previous version, and I mostly find it to be a pain.
I'm not sure that I follow what you’re saying here; can you describe what your ideal situation would look like? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On 5 September 2015 at 12:14, Donald Stufft <donald@stufft.io> wrote:
On September 4, 2015 at 10:12:08 PM, Nick Coghlan (ncoghlan@gmail.com) wrote:
It's only the interoperability specs where we currently follow the RFC model of having the same document describe both the end result *and* the rationale for changes from the previous version, and I mostly find it to be a pain.
I'm not sure that I follow what you’re saying here; can you describe what your ideal situation would look like?
1. We add a new section to packaging.python.org for "Specifications". The specification sections of approved PEPs (compatibility tags, wheel format, version specifiers, dist-info directories) get added there. API specifications for index servers may also be added there.

2. Interoperability PEPs become proposals for new packaging.python.org specifications or changes to existing specifications, rather than specifications in their own right.

3. Each specification has a "version history" section at the bottom, which links to the PEPs that drove each update.

This way, the PEPs can focus on transition plans, backwards compatibility constraints, and the rationale for particular changes, etc, but folks wanting "just the current spec, thanks" can look at the latest version on packaging.python.org without worrying about the history.

It also means that the specs themselves (whether additions or updates) can be prepared as packaging.python.org PRs.

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On September 4, 2015 at 10:56:38 PM, Nick Coghlan (ncoghlan@gmail.com) wrote:
On 5 September 2015 at 12:14, Donald Stufft wrote:
On September 4, 2015 at 10:12:08 PM, Nick Coghlan (ncoghlan@gmail.com) wrote:
It's only the interoperability specs where we currently follow the RFC model of having the same document describe both the end result *and* the rationale for changes from the previous version, and I mostly find it to be a pain.
I'm not sure that I follow what you’re saying here; can you describe what your ideal situation would look like?
1. We add a new section to packaging.python.org for "Specifications". The specification sections of approved PEPs (compatibility tags, wheel format, version specifiers, dist-info directories) get added there. API specifications for index servers may also be added there.
2. Interoperability PEPs become proposals for new packaging.python.org specifications or changes to existing specifications, rather than specifications in their own right.
3. Each specification has a "version history" section at the bottom, which links to the PEPs that drove each update.
This way, the PEPs can focus on transition plans, backwards compatibility constraints, and the rationale for particular changes, etc, but folks wanting "just the current spec, thanks" can look at the latest version on packaging.python.org without worrying about the history.
It also means that the specs themselves (whether additions or updates) can be prepared as packaging.python.org PRs.
Personally I don't have much of a problem with the specs living as PEPs; I think a bigger problem is that we're producing specs that have end user impact without anything designed for end users to go along with them. PEP 440 is a wonderful example of this: the spec of PEP 440 goes into lots of edge cases and describes the reasons why particular decisions were made and all of that jazz. I think all of that data is useful when you're implementing PEP 440 because it helps inform how someone interprets the spec in situations where it may be ambiguous.

What I don't think is useful is having no answer to someone who asks "What's a valid version for a Python package" except "here go read this massive document which covers tons of edge cases which you don't really need to care about unless you're pip/PyPI/setuptools etc".

I guess for me then, the ideal situation would be to keep using PEPs for the actual specification/RFC-like documentation, but when that has end user impact then a requirement is that it comes with a PR to packaging.python.org that gives us end user documentation for the spec, before the spec can be accepted (or finalized or whatever the right terminology is). I mean, I don't have a specific problem with the specs living somewhere else as well, I just don't think moving a lengthy document full of edge cases from one location to another is going to make things better unless we start producing end user focused documentation, and in many cases it may make it worse, since understanding a spec fully often requires understanding why certain decisions were made.

----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
I don't have a specific problem with the specs living somewhere else as well, I just don't think moving a lengthy document full of edge cases from one location to another is going to make things better
If I may, I don't think that really captures Nick's idea. I think it's about clearly distinguishing the following:

1) Current Specs (for metadata, versioning, pypi etc..)
2) Proposals to adjust or add to the Current Specs

We don't have a clear distinction right now. We just have a series of PEPs, and it's work to figure out where the actual current spec is at, in the noise of rationales and transition plans etc...

- So, for #1, maintain documents in PyPUG
- For #2, keep using PEPs
- As PEPs are accepted, update the Spec docs (the version history can mention what PEP drove the change)

And separate from all of this I think is your idea that regular Usage docs should be modified as well, as PEPs are accepted, which I think is great.

Marcus

On Fri, Sep 4, 2015 at 8:06 PM, Donald Stufft <donald@stufft.io> wrote:
On September 4, 2015 at 10:56:38 PM, Nick Coghlan (ncoghlan@gmail.com) wrote:
On 5 September 2015 at 12:14, Donald Stufft wrote:
On September 4, 2015 at 10:12:08 PM, Nick Coghlan (ncoghlan@gmail.com) wrote:
It's only the interoperability specs where we currently follow the RFC model of having the same document describe both the end result *and* the rationale for changes from the previous version, and I mostly find it to be a pain.
I'm not sure that I follow what you’re saying here; can you describe what your ideal situation would look like?
1. We add a new section to packaging.python.org for "Specifications". The specification sections of approved PEPs (compatibility tags, wheel format, version specifiers, dist-info directories) get added there. API specifications for index servers may also be added there.
2. Interoperability PEPs become proposals for new packaging.python.org specifications or changes to existing specifications, rather than specifications in their own right.
3. Each specification has a "version history" section at the bottom, which links to the PEPs that drove each update.
This way, the PEPs can focus on transition plans, backwards compatibility constraints, and the rationale for particular changes, etc, but folks wanting "just the current spec, thanks" can look at the latest version on packaging.python.org without worrying about the history.
It also means that the specs themselves (whether additions or updates) can be prepared as packaging.python.org PRs.
Personally I don't have much of a problem with the specs living as PEPs, I think a bigger problem is that we're producing specs that have end user impact without anything designed for end users to go along with them. PEP 440 is a wonderful example of this, the spec of PEP 440 goes into lots of edge cases and describes the reasons why particular decisions were made and all of that jazz. I think all of that data is useful when you're implementing PEP 440 because it helps inform how someone interprets the spec in situations where it may be ambiguous.
What I don't think is useful is having no answer to someone who asks "What's a valid version for a Python package" except "here go read this massive document which covers tons of edge cases which you don't really need to care about unless you're pip/PyPI/setuptools etc".
I guess for me then, the ideal situation would be to keep using PEPs for the actual specification/RFC-like documentation, but when that has end user impact then a requirement is that it comes with a PR to packaging.python.org that gives us end user documentation for the spec, before the spec can be accepted (or finalized or whatever the right terminology is). I mean, I don't have a specific problem with the specs living somewhere else as well, I just don't think moving a lengthy document full of edge cases from one location to another is going to make things better unless we start producing end user focused documentation, and in many cases it may make it worse, since understanding a spec fully often requires understanding why certain decisions were made.
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On 5 September 2015 at 14:24, Marcus Smith <qwcode@gmail.com> wrote:
I don't have a specific problem with the specs living somewhere else as well, I just don't think moving a lengthy document full of edge cases from one location to another is going to make things better
If I may, I don't think that really captures Nick's idea.
Right, having more user-friendly introductions on packaging.python.org is a good idea, but it's a separate question. To address that specific problem, we can paraphrase the semantic versioning compatibility section from PEP 440: https://www.python.org/dev/peps/pep-0440/#semantic-versioning

I filed a PR along those lines, inserting it as a new subsection under "Configuring your project".
I think it's about clearly distinguishing the following:
1) Current Specs (for metadata, versioning, pypi etc..) 2) Proposals to adjust or add to the Current Specs
We don't have a clear distinction right now. We just have a series of PEPs, and it's work to figure out where the actual current spec is at, in the noise of rationales and transition plans etc...
- So, for #1, maintain documents in PyPUG - For #2, keep using PEPs - As PEPs are accepted, update the Spec docs (the version history can mention what PEP drove the change)
Right. Another potential benefit of this approach is that it means we can more easily cross-link from the implementor-facing specifications to the end-user-facing parts of the user guide - at the moment, there's no standard discoverability path from PEPs like PEP 440 to packaging.python.org at all.

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 5 September 2015 at 16:43, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 5 September 2015 at 14:24, Marcus Smith <qwcode@gmail.com> wrote:
I don't have a specific problem with the specs living somewhere else as well, I just don't think moving a lengthy document full of edge cases from one location to another is going to make things better
If I may, I don't think that really captures Nick's idea.
Right, having more user friendly introductions on packaging.python.org is a good idea, but it's a separate question. To address that specific problem, we can paraphrase the semantic versioning compatibility section from PEP 440: https://www.python.org/dev/peps/pep-0440/#semantic-versioning
I filed a PR along those lines, inserting it as a new subsection under "Configuring your project"
And the link for that: https://github.com/pypa/python-packaging-user-guide/pull/163 Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Fri, Sep 4, 2015 at 9:24 PM, Marcus Smith <qwcode@gmail.com> wrote:
I don't have a specific problem with the specs living somewhere else as well, I just don't think moving a lengthy document full of edge cases from one location to another is going to make things better
If I may, I don't think that really captures Nick's idea.
I think it's about clearly distinguishing the following:
1) Current Specs (for metadata, versioning, pypi etc..) 2) Proposals to adjust or add to the Current Specs
We don't have a clear distinction right now. We just have a series of PEPs, and it's work to figure out where the actual current spec is at, in the noise of rationales and transition plans etc...
Speaking as someone who has been pretty confused in the past trying to look up what the actual current rules are for something like version numbers or metadata (is this the current PEP? oh wait this one's newer -- oh but wait is the newer one still in development? or maybe abandoned?, etc.): +1 -- Nathaniel J. Smith -- http://vorpus.org
On 5 September 2015 at 16:46, Nathaniel Smith <njs@pobox.com> wrote:
On Fri, Sep 4, 2015 at 9:24 PM, Marcus Smith <qwcode@gmail.com> wrote:
I don't have a specific problem with the specs living somewhere else as well, I just don't think moving a lengthy document full of edge cases from one location to another is going to make things better
If I may, I don't think that really captures Nick's idea.
I think it's about clearly distinguishing the following:
1) Current Specs (for metadata, versioning, pypi etc..) 2) Proposals to adjust or add to the Current Specs
We don't have a clear distinction right now. We just have a series of PEPs, and it's work to figure out where the actual current spec is at, in the noise of rationales and transition plans etc...
Speaking as someone who has been pretty confused in the past trying to look up what the actual current rules are for something like version numbers or metadata (is this the current PEP? oh wait this one's newer -- oh but wait is the newer one still in development? or maybe abandoned?, etc.): +1
We also have specs like Tarek's database of installed distributions (https://www.python.org/dev/peps/pep-0376/), where we kept the "dist-info" parts, but not any of the API proposals. *Existing* formats (like sdist) could also be specified there without requiring a new PEP (modulo people's time to do the work, but at least having a place for such specs to *go* would be a good first step). Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
One thought that comes to mind is how to present specs that deal with formats and artifacts that persist for years. For example, down the road when there's Wheel 2.0, what's the "Current Specs" for wheel? I would describe 2.0 as the "Latest" spec, but "Current Specs" includes all versions we're attempting to support, so we'd want the "Current Specs" page to easily show all the versions, and not have to dig them out from version control or something, right?

--Marcus

On Sat, Sep 5, 2015 at 1:35 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 5 September 2015 at 16:46, Nathaniel Smith <njs@pobox.com> wrote:
On Fri, Sep 4, 2015 at 9:24 PM, Marcus Smith <qwcode@gmail.com> wrote:
I don't have a specific problem with the specs living somewhere else as well, I just don't think moving a lengthy document full of edge cases from one location to another is going to make things better
If I may, I don't think that really captures Nick's idea.
I think it's about clearly distinguishing the following:
1) Current Specs (for metadata, versioning, pypi etc..) 2) Proposals to adjust or add to the Current Specs
We don't have a clear distinction right now. We just have a series of PEPs, and it's work to figure out where the actual current spec is at, in the noise of rationales and transition plans etc...
Speaking as someone who has been pretty confused in the past trying to look up what the actual current rules are for something like version numbers or metadata (is this the current PEP? oh wait this one's newer -- oh but wait is the newer one still in development? or maybe abandoned?, etc.): +1
We also have specs like Tarek's database of installed distributions (https://www.python.org/dev/peps/pep-0376/), where we kept the "dist-info" parts, but not any of the API proposals.
*Existing* formats (like sdist) could also be specified there without requiring a new PEP (modulo people's time to do the work, but at least having a place for such specs to *go* would be a good first step).
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 7 September 2015 at 02:09, Marcus Smith <qwcode@gmail.com> wrote:
One thought that comes to mind is how to present specs that deal with formats and artifacts that persist for years.
For example, down the road when there's Wheel 2.0, what's the "Current Specs" for wheel?
I would describe 2.0 as the "Latest" spec, but "Current Specs" includes all versions we're attempting to support, so we'd want the "Current Specs" page to easily show all the versions, and not have to dig them out from version control or something, right?
Yes, but I think that's easy enough to handle by having a default URL that always goes to the latest version of the spec, and moving previous versions to URLs that include the version number. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Nick Coghlan <ncoghlan@gmail.com> writes:
On 7 September 2015 at 02:09, Marcus Smith <qwcode@gmail.com> wrote:
For example, down the road when there's Wheel 2.0, what's the "Current Specs" for wheel?
Yes, but I think that's easy enough to handle by having a default URL that always goes to the latest version of the spec, and moving previous versions to URLs that include the version number.
<modification severity="bikeshed"> Or consistently publish each spec version to a predictable URL with the version number, and have the default URL *redirect* to whatever is the current-versioned spec. </modification> That way, the URL works as people expect, *and* the resulting destination gives a URL that (when inevitably copy-and-pasted) will retain its meaning over time. -- \ Moriarty: “Forty thousand million billion dollars? That money | `\ must be worth a fortune!” —The Goon Show, _The Sale of | _o__) Manhattan_ | Ben Finney
On 7 September 2015 at 09:42, Ben Finney <ben+python@benfinney.id.au> wrote:
Nick Coghlan <ncoghlan@gmail.com> writes:
On 7 September 2015 at 02:09, Marcus Smith <qwcode@gmail.com> wrote:
For example, down the road when there's Wheel 2.0, what's the "Current Specs" for wheel?
Yes, but I think that's easy enough to handle by having a default URL that always goes to the latest version of the spec, and moving previous versions to URLs that include the version number.
<modification severity="bikeshed"> Or consistently publish each spec version to a predictable URL with the version number, and have the default URL *redirect* to whatever is the current-versioned spec. </modification>
That way, the URL works as people expect, *and* the resulting destination gives a URL that (when inevitably copy-and-pasted) will retain its meaning over time.
Yes, ReadTheDocs does let us do that. However, there's actually a problem with it, and it's this: it perpetuates the myth that it is possible to publish viable packaging software without committing to ongoing maintenance of that software to track changes to distribution formats and user experience expectations.

Software distribution *fundamentally* involves interacting with the outside world, and coping with evolving interoperability expectations. Users should be able to grab the latest version of a packaging tool and be confident that it supports the latest interoperability standards (modulo a rollout window of a few weeks or maybe a few months for tools designed for relatively slow moving environments). This is the problem we always hit with distutils, and the one we still regularly hit with the Linux distributions: their update and rollout cycles are too slow, so they can't keep up with user expectations.

Thus, the mindset we want to cultivate amongst tool developers is "I commit to ensuring my project gains support for the latest versions of the Python packaging interoperability standards in a timely manner, as well as supporting legacy versions of those standards for backwards compatibility purposes", rather than "My project supports version 1.0 of the interoperability standards, and I might upgrade to 2.0 when that happens. If I feel like it, and I have time. Maybe".

Regards, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
That way, the URL works as people expect, *and* the resulting
destination gives a URL that (when inevitably copy-and-pasted) will retain its meaning over time.
Yes, ReadTheDocs does let us do that.
Well, it lets you do it for a whole project. We'd have to have a project per spec for it to work like that, and we've been talking about all specs being in one project (PyPUG). I think we'd either have to:

1) only render the latest version, and construct an index of links to the unrendered old versions in vcs history, or
2) use a custom-tailored tool to publish/render this like we want, or
3) use distinct documents for distinct versions as peers in the src tree.

-Marcus
On 7 September 2015 at 14:11, Marcus Smith <qwcode@gmail.com> wrote:
That way, the URL works as people expect, *and* the resulting destination gives a URL that (when inevitably copy-and-pasted) will retain its meaning over time.
Yes, ReadTheDocs does let us do that.
well, it lets you do it for a whole project.
RTD also has page redirects now: https://read-the-docs.readthedocs.org/en/latest/user-defined-redirects.html#... (I thought the same thing you did, but found that when double checking) So we *could* redirect unqualified links to qualified ones if we wanted to. I just don't want to :) Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
I'm still unclear on whether you'd want A or B:

A) Different major/minor versions of the spec are different documents
B) Different versions of the spec are tags or branches of the same document

If it's B, then you'd either:

1) only build the latest version, and construct an index of links to the unrendered old versions in vcs history
2) use a custom build/publishing workflow that pulls versions out of history so they can be built as peers in the published version

On Sun, Sep 6, 2015 at 9:26 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 7 September 2015 at 14:11, Marcus Smith <qwcode@gmail.com> wrote:
That way, the URL works as people expect, *and* the resulting destination gives a URL that (when inevitably copy-and-pasted) will retain its meaning over time.
Yes, ReadTheDocs does let us do that.
well, it lets you do it for a whole project.
RTD also has page redirects now:
https://read-the-docs.readthedocs.org/en/latest/user-defined-redirects.html#... (I thought the same thing you did, but found that when double checking)
So we *could* redirect unqualified links to qualified ones if we wanted to. I just don't want to :)
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
MAJOR.MINOR.PATCH[-rev] would be helpful for these (and other) PEPs. On Sep 7, 2015 10:36 AM, "Marcus Smith" <qwcode@gmail.com> wrote:
I'm still unclear on whether you'd want A or B:
A) Different major/minor versions of the spec are different documents
From http://semver.org Semantic Versioning 2.0 :
```
Given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes,
- MINOR version when you add functionality in a backwards-compatible manner, and
- PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
```
B) Different versions of the spec are tags or branches of the same document
```
Linux/Python Compatible Semantic Versioning 3.0.0

This is a fork of Semantic Versioning 2.0. The specific changes have to do with the format of pre-release and build labels, specifically to make them not confusing when co-existing with Linux distribution packaging and Python packaging. Inspiration for the format of the pre-release and build labels came from Python’s PEP440.

Changes vs SemVer 2.0
<http://docs.openstack.org/developer/pbr/semver.html#changes-vs-semver-2-0>

dev versions are defined. These are extremely useful when dealing with CI and CD systems when ‘every commit is a release’ is not feasible. All versions have been made PEP-440 compatible, because of our deep roots in Python. Pre-release versions are now separated by . not -, and use a/b/c rather than alpha/beta etc.
```

Something like v1.0.01-eb4df7f[-linux64] would have greater traceability.
If it's B, then you'd either:

1) only build the latest version, and construct an index of links to the unrendered old versions in vcs history
2) use a custom build/publishing workflow that pulls versions out of history so they can be built as peers in the published version
#. TBH I'm more concerned about determining downstream tool support from MAJOR.MINOR.PATCH (The PEP workflow is probably fine; I think there is need for better versioning under one heading).
On Sun, Sep 6, 2015 at 9:26 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 7 September 2015 at 14:11, Marcus Smith <qwcode@gmail.com> wrote:
That way, the URL works as people expect, *and* the resulting destination gives a URL that (when inevitably copy-and-pasted) will retain its meaning over time.
Yes, ReadTheDocs does let us do that.
well, it lets you do it for a whole project.
RTD also has page redirects now:
https://read-the-docs.readthedocs.org/en/latest/user-defined-redirects.html#...
(I thought the same thing you did, but found that when double checking)
So we *could* redirect unqualified links to qualified ones if we wanted to. I just don't want to :)
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Wes, this isn't about the versioning scheme for Specs or PEPs. For *whatever* scheme we have, my discussion was about how to render all the "current" versions we support in a Sphinx project. In short, should the current versions we want to publish be distinct documents or not?
The PEP workflow is probably fine
Well, if you look up in the thread, a few of us are saying it's not. It doesn't distinguish Current Specs vs Proposals very well.

On Mon, Sep 7, 2015 at 9:40 AM, Wes Turner <wes.turner@gmail.com> wrote:
MAJOR.MINOR.PATCH[-rev] would be helpful for these (and other) PEPs.
On Sep 7, 2015 10:36 AM, "Marcus Smith" <qwcode@gmail.com> wrote:
I'm still unclear on whether you'd want A or B:
A) Different major/minor versions of the spec are different documents
From http://semver.org Semantic Versioning 2.0 :
```
Given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes,
- MINOR version when you add functionality in a backwards-compatible manner, and
- PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
```
B) Different versions of the spec are tags or branches of the same document
From http://docs.openstack.org/developer/pbr/semver.html :
```
Linux/Python Compatible Semantic Versioning 3.0.0

This is a fork of Semantic Versioning 2.0. The specific changes have to do with the format of pre-release and build labels, specifically to make them not confusing when co-existing with Linux distribution packaging and Python packaging. Inspiration for the format of the pre-release and build labels came from Python’s PEP440.

Changes vs SemVer 2.0
<http://docs.openstack.org/developer/pbr/semver.html#changes-vs-semver-2-0>

dev versions are defined. These are extremely useful when dealing with CI and CD systems when ‘every commit is a release’ is not feasible. All versions have been made PEP-440 compatible, because of our deep roots in Python. Pre-release versions are now separated by . not -, and use a/b/c rather than alpha/beta etc.
```
Something like v1.0.01-eb4df7f[-linux64] would have greater traceability.
If it's B, then you'd either:

1) only build the latest version, and construct an index of links to the unrendered old versions in vcs history
2) use a custom build/publishing workflow that pulls versions out of history so they can be built as peers in the published version
#. TBH I'm more concerned about determining downstream tool support from MAJOR.MINOR.PATCH (The PEP workflow is probably fine; I think there is need for better versioning under one heading).
On Sun, Sep 6, 2015 at 9:26 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 7 September 2015 at 14:11, Marcus Smith <qwcode@gmail.com> wrote:
That way, the URL works as people expect, *and* the resulting destination gives a URL that (when inevitably copy-and-pasted) will retain its meaning over time.
Yes, ReadTheDocs does let us do that.
well, it lets you do it for a whole project.
RTD also has page redirects now:
https://read-the-docs.readthedocs.org/en/latest/user-defined-redirects.html#...
(I thought the same thing you did, but found that when double checking)
So we *could* redirect unqualified links to qualified ones if we wanted to. I just don't want to :)
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Sep 7, 2015 12:51 PM, "Marcus Smith" <qwcode@gmail.com> wrote:
Wes, this isn't about the versioning scheme for Specs or PEPs. For *whatever* scheme we have, my discussion was about how to render all the "current" versions we support in a Sphinx project.

More or less itertools.product and a sphinx directive for ~CSVW?

Marcus, we could change the subject line. The objective here, IIUC, is to generate and maintain the expanded set of packages and their metadata [[[ with the ability to download all/subset of the package metadata [ without having to execute each and every setup.py [ again ] ] ]]].

Possible subject lines:

* [ ] Add RDFa to pypi and warehouse
* [ ] Add JSONLD to pypi and warehouse
* "PEP ???: Metadata 3.0.1"
* "Re: [Python-ideas] Increasing public package discoverability (was: Adding jsonschema to the standard library)"
  * https://groups.google.com/d/msg/python-ideas/3MRVM6C6bQU/76hWP7bFgiwJ
  * https://groups.google.com/d/msg/python-ideas/3MRVM6C6bQU/VXq3yHcrCxcJ

```
So there is a schema.org/SoftwareApplication (or doap:Project, or seon:) Resource, which has

* a unique URI (e.g. http://python.org/pypi/readme)
* JSON metadata extracted from setup.py into pydist.json (setuptools, wheel)
  - [ ] create JSON-LD @context
  - [ ] create mappings to standard schema
    * [ ] http://schema.org/SoftwareApplication
    * [ ] http://schema.org/SoftwareSourceCode

In terms of schema.org, a Django Packages resource has:

* [ ] a unique URI
* [ ] typed features (predicates with ranges)
  * [ ] http://schema.org/review
  * [ ] http://schema.org/VoteAction
  * [ ] http://schema.org/LikeAction
```

There is a matrix of packages that could, should, are uploaded; which is a subset of a [giant global] graph; which can be most easily represented in an RDF graph representation format like RDFa, JSON-LD, CSVW.

* setup.py
* requirements[-test|-docs][-dev][.peep].txt
* tox.ini -- tox grid (+docker = dox)
* Jenkins grid
* --> Pypi (e.g. with twine)

This does something more sequential than itertools.product w/ a Requirement namedtuple and a RequirementsMap to iterate through (for generating combinations of requirements-{test,dev,{extras}}):

* https://github.com/westurner/pyleset/blob/57140bcef53/setup.py
* https://github.com/westurner/pyleset/tree/57140bcef53/requirements
In short, should the current versions we want to publish be distinct documents or not?
The PEP workflow is probably fine
Well, if you look up in the thread, a few of us are saying it's not. It doesn't distinguish Current Specs vs Proposals very well.
How would you add that metadata to the version string (according to PEP 440)? Semver 3.0 (pbr)
From http://docs.openstack.org/developer/pbr/semver.html : Example: 1.0.0.dev8 < 1.0.0.dev9 < 1.0.0.a1.dev3 < 1.0.0.a1 < 1.0.0.b2 < 1.0.0.c1 < 1.0.0
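That ordering can be verified against PEP 440 semantics with the third-party `packaging` library; a quick sketch:

```python
from packaging.version import Version  # pip install packaging

ordered = ["1.0.0.dev8", "1.0.0.dev9", "1.0.0.a1.dev3",
           "1.0.0.a1", "1.0.0.b2", "1.0.0.c1", "1.0.0"]
parsed = [Version(v) for v in ordered]
assert parsed == sorted(parsed)  # the example ordering holds under PEP 440

# PEP 440 normalization also makes the alternate spellings equivalent:
assert Version("1.0.0.a1") == Version("1.0.0a1")
assert Version("1.0.0.c1") == Version("1.0.0rc1")
```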
On Mon, Sep 7, 2015 at 9:40 AM, Wes Turner <wes.turner@gmail.com> wrote:
MAJOR.MINOR.PATCH[-rev] would be helpful for these (and other) PEPs.
On Sep 7, 2015 10:36 AM, "Marcus Smith" <qwcode@gmail.com> wrote:
I'm still unclear on whether you'd want A or B:
A) Different major/minor versions of the spec are different documents
From http://semver.org Semantic Versioning 2.0 :

```
Given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes,
- MINOR version when you add functionality in a backwards-compatible manner, and
- PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
```

B) Different versions of the spec are tags or branches of the same document

From http://docs.openstack.org/developer/pbr/semver.html :

```
Linux/Python Compatible Semantic Versioning 3.0.0

This is a fork of Semantic Versioning 2.0. The specific changes have to do with the format of pre-release and build labels, specifically to make them not confusing when co-existing with Linux distribution packaging and Python packaging. Inspiration for the format of the pre-release and build labels came from Python’s PEP440.

Changes vs SemVer 2.0

dev versions are defined. These are extremely useful when dealing with CI and CD systems when ‘every commit is a release’ is not feasible. All versions have been made PEP-440 compatible, because of our deep roots in Python. Pre-release versions are now separated by . not -, and use a/b/c rather than alpha/beta etc.
```

Something like v1.0.01-eb4df7f[-linux64] would have greater traceability.

If it's B, then you'd either:

1) only build the latest version, and construct an index of links to the unrendered old versions in vcs history
2) use a custom build/publishing workflow that pulls versions out of history so they can be built as peers in the published version
#. TBH I'm more concerned about determining downstream tool support from MAJOR.MINOR.PATCH (The PEP workflow is probably fine; I think there is need for better versioning under one heading).
On Sun, Sep 6, 2015 at 9:26 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 7 September 2015 at 14:11, Marcus Smith <qwcode@gmail.com> wrote:
> That way, the URL works as people expect, *and* the resulting
> destination gives a URL that (when inevitably copy-and-pasted) will
> retain its meaning over time.
Yes, ReadTheDocs does let us do that.
well, it lets you do it for a whole project.
RTD also has page redirects now:
https://read-the-docs.readthedocs.org/en/latest/user-defined-redirects.html#...
(I thought the same thing you did, but found that when double checking)
So we *could* redirect unqualified links to qualified ones if we wanted to. I just don't want to :)
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 8 September 2015 at 01:36, Marcus Smith <qwcode@gmail.com> wrote:
I'm still unclear on whether you'd want A or B:
A) Different major/minor versions of the spec are different documents B) Different versions of the spec are tags or branches of the same document
I'm mainly thinking A, using versionadded tags for minor updates, and new files for major updates.

The key thing I'd like to avoid is version pinning where we have to uprev a higher level spec (e.g. the wheel format) just because a lower level spec (e.g. compatibility tags) was updated in a backwards compatible way. Using PEP numbers for cross-links between specifications the way we do now doesn't give us that.

So, using that as an example, suppose we used a series focused naming convention like:

https://packaging.python.org/specifications/wheel-1.x.html

This would contain the wheel 1.x specification, with versionadded tags for everything introduced post 1.0. Then, rather than referring to PEP 425 specifically as it does today, the wheel 1.x specification would instead refer to https://packaging.python.org/specifications/compatibility-tags-1.x.html

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Thu, 20 Aug 2015 14:26:44 -0400 Nate Coraor <nate@bx.psu.edu> wrote:
So I need a bit of guidance here. I've arbitrarily chosen some tags - `rhel` for example - and wonder if, like PEP 425's mapping of Python implementations to tags, a defined mapping of Linux distributions to shorthand tags is necessary (of course this would be difficult to keep up to date, but binary-compatibility.cfg would make it less relevant in the long run).
Alternatively, I could simply trust and normalize platform.linux_distribution()[0],
In practice, the `platform` module does not really keep up to date with evolution in the universe of Linux distributions. Regards Antoine.
On Thu, Aug 20, 2015 at 3:14 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
On Thu, 20 Aug 2015 14:26:44 -0400 Nate Coraor <nate@bx.psu.edu> wrote:
So I need a bit of guidance here. I've arbitrarily chosen some tags - `rhel` for example - and wonder if, like PEP 425's mapping of Python implementations to tags, a defined mapping of Linux distributions to shorthand tags is necessary (of course this would be difficult to keep up to date, but binary-compatibility.cfg would make it less relevant in the long run).
Alternatively, I could simply trust and normalize platform.linux_distribution()[0],
In practice, the `platform` module does not really keep up to date with evolution in the universe of Linux distributions.
Understandable, although so far it's doing a pretty good job:

('Red Hat Enterprise Linux Server', '6.5', 'Santiago')
('CentOS', '6.7', 'Final')
('CentOS Linux', '7.1.1503', 'Core')
('Scientific Linux', '6.2', 'Carbon')
('debian', '6.0.10', '')
('debian', '7.8', '')
('debian', '8.1', '')
('debian', 'stretch/sid', '')
('Ubuntu', '12.04', 'precise')
('Ubuntu', '14.04', 'trusty')
('Fedora', '21', 'Twenty One')
('SUSE Linux Enterprise Server ', '11', 'x86_64')
('Gentoo Base System', '2.2', '')

platform.linux_distribution(full_distribution_name=False) might be nice but it made some bad assumptions, e.g. on Scientific Linux it returned the platform as 'redhat'.

--nate
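A minimal sketch of the normalization under discussion, turning those tuples into tag fragments like ubuntu_14_04 (the SHORTHAND mapping is hypothetical; agreeing on and maintaining such a mapping is exactly the open question here):

```python
import platform
import re

SHORTHAND = {  # hypothetical mapping; keeping it current is the hard part
    "red hat enterprise linux server": "rhel",
    "suse linux enterprise server": "sles",
}

def distro_tag():
    # Normalize platform.linux_distribution() output into a tag fragment.
    name, version, _codename = platform.linux_distribution()
    key = name.strip().lower()
    name = SHORTHAND.get(key, re.sub(r"[^a-z0-9]+", "_", key).strip("_"))
    version = re.sub(r"[^A-Za-z0-9]+", "_", version).strip("_")
    return "%s_%s" % (name, version)

# e.g. ('Ubuntu', '14.04', 'trusty') -> 'ubuntu_14_04'
#      ('Red Hat Enterprise Linux Server', '6.5', 'Santiago') -> 'rhel_6_5'
```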
On Thu, 20 Aug 2015 15:40:57 -0400 Nate Coraor <nate@bx.psu.edu> wrote:
In practice, the `platform` module does not really keep up to date with evolution in the universe of Linux distributions.
Understandable, although so far it's doing a pretty good job:
Hmm, perhaps that one just parses /etc/lsb-release, then :-) Regards Antoine.
On Thu, Aug 13, 2015 at 2:05 AM, Nathaniel Smith <njs@pobox.com> wrote:
On Aug 12, 2015 13:57, "Nate Coraor" <nate@bx.psu.edu> wrote:
Hello all,
I've implemented the wheel side of Nick's suggestion from very early in
this thread to support a vendor-providable binary-compatibility.cfg.
https://bitbucket.org/pypa/wheel/pull-request/54/
If this is acceptable, I'll add support for it to the pip side. What
else should be implemented at this stage to get the PR accepted?
From my reading of what the Enthought and Continuum folks were saying about how they are successfully distributing binaries across different distributions, it sounds like the additional piece that would take this from an interesting experiment to basically-immediately-usable would be to teach pip that if no binary-compatibility.cfg is provided, then it should assume by default that the compatible systems whose wheels should be installed are: (1) the current system's exact tag, (2) the special hard-coded tag "centos5". (That's what everyone actually uses in practice, right?)
To make this *really* slick, it would be cool if, say, David C. could make a formal list of exactly which system libraries are important to depend on (xlib, etc.), and we could hard-code two compatibility profiles "centos5-minimal" (= just glibc and the C++ runtime) and "centos5" (= that plus the core too-hard-to-ship libraries), and possibly teach pip how to check whether that hard-coded core set is available.
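A sketch of what those hard-coded profiles and the no-config fallback might look like on the installer side; the library sets are placeholders, not the formal list being requested:

```python
# Hypothetical hard-coded compatibility profiles; contents illustrative only.
CENTOS5_MINIMAL = {"libc.so.6", "libm.so.6", "libstdc++.so.6"}
CENTOS5 = CENTOS5_MINIMAL | {"libX11.so.6", "libXext.so.6", "libz.so.1"}

PROFILES = {
    "centos5-minimal": CENTOS5_MINIMAL,
    "centos5": CENTOS5,
}

def default_compatible_tags(current_platform_tag):
    # With no binary-compatibility.cfg present, assume the current
    # system's exact tag plus the shared "centos5" baseline.
    return [current_platform_tag, "centos5"]
```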
So this is a basic list I got w/ a few minutes of scripting, by installing our 200 most used packages on centos 5, ldd'ing all of the .so files, and filtering out a few things/bugs of some of our own packages:

/usr/lib64/libatk-1.0.so.0
/usr/lib64/libcairo.so.2
/usr/lib64/libdrm.so.2
/usr/lib64/libfontconfig.so.1
/usr/lib64/libGL.so.1
/usr/lib64/libGLU.so.1
/usr/lib64/libstdc++.so.6
/usr/lib64/libX11.so.6
/usr/lib64/libXau.so.6
/usr/lib64/libXcursor.so.1
/usr/lib64/libXdmcp.so.6
/usr/lib64/libXext.so.6
/usr/lib64/libXfixes.so.3
/usr/lib64/libXft.so.2
/usr/lib64/libXinerama.so.1
/usr/lib64/libXi.so.6
/usr/lib64/libXrandr.so.2
/usr/lib64/libXrender.so.1
/usr/lib64/libXt.so.6
/usr/lib64/libXv.so.1
/usr/lib64/libXxf86vm.so.1
/usr/lib64/libz.so.1

This list should only be taken as a first idea, I can work on a more precise list including the versions if that's deemed useful.

One significant issue is SSL: in theory, we (as a downstream distributor) really want to avoid distributing such a key piece of infrastructure, but in practice, there are so many versions which are incompatible across distributions that it is not an option.

David
Compare with osx, where there are actually a ton of different ABIs but in practice everyone distributing wheels basically sat down and picked one and wrote some ad hoc tools to make it work, and it does: https://github.com/MacPython/wiki/wiki/Spinning-wheels
-n
On Thu, Aug 13, 2015 at 10:52 AM, David Cournapeau <cournape@gmail.com> wrote:
On Thu, Aug 13, 2015 at 2:05 AM, Nathaniel Smith <njs@pobox.com> wrote:
On Aug 12, 2015 13:57, "Nate Coraor" <nate@bx.psu.edu> wrote:
Hello all,
I've implemented the wheel side of Nick's suggestion from very early in this thread to support a vendor-providable binary-compatibility.cfg.
https://bitbucket.org/pypa/wheel/pull-request/54/
If this is acceptable, I'll add support for it to the pip side. What else should be implemented at this stage to get the PR accepted?
From my reading of what the Enthought and Continuum folks were saying about how they are successfully distributing binaries across different distributions, it sounds like the additional piece that would take this from an interesting experiment to basically-immediately-usable would be to teach pip that if no binary-compatibility.cfg is provided, then it should assume by default that the compatible systems whose wheels should be installed are: (1) the current system's exact tag, (2) the special hard-coded tag "centos5". (That's what everyone actually uses in practice, right?)
To make this *really* slick, it would be cool if, say, David C. could make a formal list of exactly which system libraries are important to depend on (xlib, etc.), and we could hard-code two compatibility profiles "centos5-minimal" (= just glibc and the C++ runtime) and "centos5" (= that plus the core too-hard-to-ship libraries), and possibly teach pip how to check whether that hard-coded core set is available.
So this is a basic list I got w/ a few minutes of scripting, by installing our 200 most used packages on centos 5, ldd'ing all of the .so, and filtering out a few things/bugs of some of our own packages):
/usr/lib64/libatk-1.0.so.0
/usr/lib64/libcairo.so.2
/usr/lib64/libdrm.so.2
/usr/lib64/libfontconfig.so.1
/usr/lib64/libGL.so.1
/usr/lib64/libGLU.so.1
/usr/lib64/libstdc++.so.6
/usr/lib64/libX11.so.6
/usr/lib64/libXau.so.6
/usr/lib64/libXcursor.so.1
/usr/lib64/libXdmcp.so.6
/usr/lib64/libXext.so.6
/usr/lib64/libXfixes.so.3
/usr/lib64/libXft.so.2
/usr/lib64/libXinerama.so.1
/usr/lib64/libXi.so.6
/usr/lib64/libXrandr.so.2
/usr/lib64/libXrender.so.1
/usr/lib64/libXt.so.6
/usr/lib64/libXv.so.1
/usr/lib64/libXxf86vm.so.1
/usr/lib64/libz.so.1
This list should only be taken as a first idea, I can work on a more precise list including the versions if that's deemed useful.
Cool. Here's a list of the external .so's assumed by the packages currently included in a default Anaconda install: https://gist.github.com/njsmith/6c3d3f2dbaaf526a8585 The lists look fairly similar overall -- glibc, libstdc++, Xlib. They additionally assume the availability of expat, glib, ncurses, pcre, maybe some other stuff I missed, but they ship their own versions of libz and fontconfig, and they don't seem to either ship or use cairo or atk in their default install. For defining a "standard platform", just taking the union seems reasonable -- if either project has gotten away this long with assuming some library is there, then it's probably there. Writing a little script that takes a wheel and checks whether it has any external dependencies outside of these lists, or takes a system and checks whether all these libraries are available, seems like it would be pretty trivial.
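The system-side half of that check could indeed be quite small; a sketch using ctypes.util.find_library, with an illustrative sample of the combined library list:

```python
import ctypes.util

# Sample drawn from the union of the two surveys; not a final set.
BASE_LIBRARIES = ["z", "expat", "glib-2.0", "ncurses", "X11", "Xext", "stdc++"]

def missing_base_libraries():
    # find_library consults ldconfig/gcc under the hood, so this only
    # works on Linux systems that provide one of those.
    return [name for name in BASE_LIBRARIES
            if ctypes.util.find_library(name) is None]

if __name__ == "__main__":
    missing = missing_base_libraries()
    print("missing base libraries: %s" % (", ".join(missing) or "none"))
```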
One significant issue is SSL: in theory, we (as a downstream distributor) really want to avoid distributing such a key piece of infrastructure, but in practice, there are so many versions which are incompatible across distributions that it is not an option.
This is mostly an issue for distributing Python itself, right? ...I hope? -n -- Nathaniel J. Smith -- http://vorpus.org
On Thu, Aug 13, 2015 at 10:52 AM, David Cournapeau <cournape@gmail.com> wrote:
So this is a basic list I got w/ a few minutes of scripting,
could we define this list (or something like it) as "Python-Linux-Standard-Base-version X.Y"?

Then we have a tag to use on binary wheels, and a clearly defined way to know whether you can use them.

My understanding is that Anaconda uses a "kinda old" version of Linux (CentOS?) -- and it seems to work OK, though it's not really all that well defined or documented.

This could be a way to do about the same thing, but better defined and documented.

-CHB

-- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
On Fri, Aug 14, 2015 at 5:04 PM, Chris Barker <chris.barker@noaa.gov> wrote:
On Thu, Aug 13, 2015 at 10:52 AM, David Cournapeau <cournape@gmail.com> wrote:
So this is a basic list I got w/ a few minutes of scripting,
could we define this list (or something like it) as "Python-Linux-Standard-Base-version X.Y"?
Then we have a tag to use on binary wheels, and clearly defined way to know whether you can use them.
My understanding is that Anaconda uses a "kinda old" version of Linux (CentOS?) -- and it seems to work OK, though it's not really all that well defined or documented.
This could be a way to do about the same thing, but better defined and documented.
My suggestion would be to actually document this by simply providing a corresponding docker image (built through say packer). David
-CHB
--
Christopher Barker, Ph.D. Oceanographer
Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception
Chris.Barker@noaa.gov
On Wed, Aug 12, 2015 at 6:05 PM, Nathaniel Smith <njs@pobox.com> wrote:
(2) the special hard-coded tag "centos5". (That's what everyone actually uses in practice, right?)
Is LSB a fantasy that never happened? I haven't followed it for years.... -CHB
Compare with osx, where there are actually a ton of different ABIs
I suppose so -- but monstrously fewer than Linux, and a very small set that are in common use. A really different problem. But yes, the consensus on what to support really helps. -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
Hi, On 17 July 2015 at 05:22, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 17 July 2015 at 03:41, Nate Coraor <nate@bx.psu.edu> wrote:
[...]
As mentioned in the wheels PR, there are some questions and decisions made that I need guidance on:
- On Linux, the distro name/version (as determined by platform.linux_distribution()) will be appended to the platform string, e.g. linux_x86_64_ubuntu_14_04. This is going to be necessary to make a reasonable attempt at wheel compatibility in PyPI. But this may violate PEP 425.
I think it's going beyond it in a useful way, though. At the moment, the "linux_x86_64" platform tag *under*specifies the platform - a binary extension built on Ubuntu 14.04 with default settings may not work on CentOS 7, for example.
Adding in the precise distro name and version number changes that to *over*specification, but I now think we can address that through configuration settings on the installer side that allow the specification of "compatible platforms". That way a derived distribution could add the corresponding upstream distribution's platform tag and their users would be able to install the relevant wheel files by default. [...]
The definition of "acceptable platform tags" should list the platforms in order of preference (for example, some of the backward compatible past releases of a linux distro, in reverse order), so that if multiple acceptable wheels are present the closest one is selected. As some other have mentioned, this doesn't solve the problem of system dependencies. I.e.: a perfectly compiled lxml wheel for linux_x86_64_ubuntu_14_04, installed into Ubuntu 14.04, will still fail to work if libxml2 and libxslt1.1 debian packages are not installed (among others). Worse is that pip will gladly install such package, and the failure will happen as a potentially cryptic error message payload to an ImportError that doesn't really make it clear what needs to be done to make the package actually work. To solve this problem, so far we've only been able to come up with two extremes: - Have the libraries contain enough metadata in their source form that we can generate true system packages from them (this doesn't really help the virtualenv case) - Carry all the dependencies. Either by static linking, or by including all dynamic libraries in the wheel, or by becoming something like Conda where we package even non Python projects. As a further step that could be taken on top of Nate's proposed PR, but avoiding the extremes above, I like Daniel's idea of "specifying the full library names [...] à-lá RPM". Combine it with the specification of abstract locations, and we could have wheels declare something like. - lxml wheel for linux_x86_64_ubuntu_14_04: - extdeps: - <dynlibdir>/libc.so.6 - <dynlibdir>/libm.so.6 - <dynlibdir>/libxml2.so.2 - <dynlibdir>/libexslt.so.0 This also makes it possible to have wheels depend on stuff other than libraries, for example binaries or data files (imagine a lightweight version of pytz that didn't have to carry its own timezones, and depended on the host system to keep them updated). As long as we have a proper abstract location to anchor the files, we can express these dependencies without hardcoding paths as they were on the build machine. It even opens the possibility that some of these external dependencies could be provided on a per-virtualenv basis, instead of globally. Pip could then (optionally?) check the existence of these external dependencies before allowing installation of the wheel, increasing the likelihood that it will work once installed. This same way of expressing external dependencies could be extended to source packages themselves. For example the `setup()` (or whatever successor we end up with) for a PIL source package could express dependency on '<include>/png.h'. Or, what's more likely these days, a dependency on '<bindir>/libpng12-config', which when run prints the correct invocations of gcc flags to add to the build process. The build process would then check the presence of these external build dependencies early on, allowing for much clearer error messages and precise instructions on how to provide the proper build environment. Most distros provide handy ways of querying which packages provide which files, so I believe the specification of external file dependences to be a nice step up from where we are right now, without wading into full-system-integration territory. Leo
On 20 July 2015 at 11:42, Leonardo Rochael Almeida <leorochael@gmail.com> wrote:
To solve this problem, so far we've only been able to come up with two extremes:
- Have the libraries contain enough metadata in their source form that we can generate true system packages from them (this doesn't really help the virtualenv case) - Carry all the dependencies. Either by static linking, or by including all dynamic libraries in the wheel, or by becoming something like Conda where we package even non Python projects.
We keep stalling on making progress with Linux wheel files as our discussions spiral out into all the reasons why solving the general case of binary distribution is so hard.

However, Nate has a specific concrete problem: he needs to get artifacts from Galaxy's build servers and install them into their analysis environments. Let's help him solve that, on the assumption that some *other* mechanism will be used to manage the non-Python components.

This approach is actually applicable to many server based environments, as a configuration management tool like Puppet, Chef, Salt or Ansible will be used to deal with the non-Python aspects. This approach is even applicable to some "centrally managed data analysis workstation" cases.

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Sun, Jul 19, 2015 at 11:00 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
However, Nate has a specific concrete problem in needing to get artifacts from Galaxy's build servers and installing them into their analysis environments - let's help him solve that, on the assumption that some *other* mechanism will be used to manage the non-Python components
What is there to solve here? Galaxy's build servers put all the wheels somewhere. Galaxy's analysis systems point to that place. I thought pip plus a wheelhouse already solved that problem?

-CHB

-- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
participants (22): Alexander Walters, Andrea Bedini, Antoine Pitrou, Ben Finney, Chris Barker, Chris Barker - NOAA Federal, Daniel Holth, David Cournapeau, Donald Stufft, Leonardo Rochael Almeida, M.-A. Lemburg, Marcus Smith, Nate Coraor, Nathaniel Smith, Nick Coghlan, Olivier Grisel, Oscar Benjamin, Paul Moore, Robert Collins, Steve Dower, Tres Seaver, Wes Turner