Hello,

I'm having the following errors when trying to upload a wheel to PyPI (yes, the version number is off - but that's beside the point here, I think):

$ twine upload dist/llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl
/home/antoine/.local/lib/python3.4/site-packages/pkginfo/installed.py:53: UserWarning: No PKG-INFO found for package: pkginfo
  warnings.warn('No PKG-INFO found for package: %s' % self.package_name)
Uploading distributions to https://pypi.python.org/pypi
Uploading llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl
HTTPError: 400 Client Error: Binary wheel for an unsupported platform

$ twine upload dist/llvmlite-0.0.0-py3-none-linux_x86_64.whl
/home/antoine/.local/lib/python3.4/site-packages/pkginfo/installed.py:53: UserWarning: No PKG-INFO found for package: pkginfo
  warnings.warn('No PKG-INFO found for package: %s' % self.package_name)
Uploading distributions to https://pypi.python.org/pypi
Uploading llvmlite-0.0.0-py3-none-linux_x86_64.whl
HTTPError: 400 Client Error: Binary wheel for an unsupported platform

Thanks

Antoine.
On Mon, Jul 6, 2015 at 11:24 AM, Antoine Pitrou wrote:
Hello,
I'm having the following errors when trying to upload a wheel to PyPI. (yes, the version number is off - but that's besides the point here, I think):
Unrelated to your problem here (which I think is due in part to the lack of classifiers for binaries for *nix systems, or something along those lines that Nick or Donald will be more familiar with), I filed a twine bug for those warnings: https://github.com/pypa/twine/issues/114

I've never seen those before, so I'm sorry for the noise in that output.

Cheers,

Ian
On 6 July 2015 at 17:24, Antoine Pitrou wrote:
(yes, the version number is off - but that's besides the point here, I think):
PyPI does not support uploading binary wheels for Linux. This is a deliberate restriction, because the tags supported by the wheel spec are not fine-grained enough to specify compatibility across the myriad of Linux variations. The intention is that once a resolution to that problem is found, this restriction will be lifted.

In the meantime, wheels are fine on PyPI for Windows and OS X, and on Linux for private use. It's only public distribution of Linux wheels where care is needed.

Paul
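The platform tag Paul refers to comes straight from the build machine. A minimal sketch of how it is derived (assuming only the standard library's sysconfig module; bdist_wheel's actual normalization logic lives in the wheel project):

```python
# Minimal sketch: derive the platform part of a wheel's filename from
# the interpreter's platform string, the way bdist_wheel normalizes it.
# On a 64-bit Linux box this yields "linux_x86_64" -- the tag PyPI
# rejects in the transcript above.
import sysconfig

tag = sysconfig.get_platform().replace("-", "_").replace(".", "_")
print(tag)
```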
On Mon, 6 Jul 2015 19:03:19 +0100, Paul Moore wrote:
PyPI does not support uploading binary wheels for Linux. This is a deliberate restriction, because the tags supported by the wheel spec are not fine-grained enough to specify compatibility across the myriad of Linux variations. The intention is that once a resolution to that problem is found, this restriction will be lifted.
What if packagers take care of working around the issue? (for example by building on a suitably old Linux platform, as we already do for Conda packages) Regards Antoine.
On 6 July 2015 at 19:18, Antoine Pitrou wrote:
What if packagers take care of working around the issue? (for example by building on a suitably old Linux platform, as we already do for Conda packages)
At the moment it's just a simple "if the wheel is for Linux, reject it" test. As to whether that's too conservative, one of the Linux guys would need to comment. Maybe the issue is simply that we can't be sure people will take the care that you do, and the risk of people getting broken installs is too high? Paul
On Mon, 6 Jul 2015 22:34:38 +0100, Paul Moore wrote:
At the moment it's just a simple "if the wheel is for Linux, reject it" test.
As to whether that's too conservative, one of the Linux guys would need to comment. Maybe the issue is simply that we can't be sure people will take the care that you do, and the risk of people getting broken installs is too high?
Then how about a warning, or a rejection by default with a well-known way to bypass it? Regards Antoine.
On 7 July 2015 at 07:46, Antoine Pitrou wrote:
Then how about a warning, or a rejection by default with a well-known way to bypass it?
Unfortunately, the compatibility tagging for Linux wheels is currently so thoroughly inadequate that even in tightly controlled environments, having a wheel file escape from its "intended" target platforms can cause hard-to-debug problems.

There was a good proposal not that long ago to add a "platform tag override" capability to both pip (for installation) and bdist_wheel (for publication), but I don't know what became of that. If we had that system, then I think it would be reasonable to allow Linux uploads with a "pypi_linux_x86_64" override tag - they'd never be installed by default, but folks could opt in to allowing them.

Cheers,

Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
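The override tag Nick sketches would occupy the platform field of the wheel filename. A minimal illustration of how those fields split apart (the "pypi_linux_x86_64" tag is Nick's hypothetical, not an implemented scheme; this assumes no build tag and an escaped, hyphen-free name and version):

```python
# Sketch: split a wheel filename into its compatibility tags.  A
# hypothetical override tag such as "pypi_linux_x86_64" would sit in
# the platform slot, never match pip's default supported tags, and so
# would only install when explicitly opted into.
def parse_wheel_tags(filename):
    stem = filename[: -len(".whl")]
    # Assumes the simple 5-field form: name-version-python-abi-platform
    name, version, pytag, abitag, plattag = stem.split("-")
    return pytag, abitag, plattag

print(parse_wheel_tags("llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl"))
```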
On Tue, 7 Jul 2015 23:53:59 +1000, Nick Coghlan wrote:
Unfortunately, the compatibility tagging for Linux wheels is currently so thoroughly inadequate that even in tightly controlled environments having a wheel file escape from its "intended" target platforms can cause hard to debug problems.
I'm not sure what you're pointing to, could you elaborate a bit? For the record, building against a well-known, old glibc + gcc has served the Anaconda platform well. Regards Antoine.
On 8 July 2015 at 00:07, Antoine Pitrou wrote:
On Tue, 7 Jul 2015 23:53:59 +1000, Nick Coghlan wrote:
Unfortunately, the compatibility tagging for Linux wheels is currently so thoroughly inadequate that even in tightly controlled environments having a wheel file escape from its "intended" target platforms can cause hard to debug problems.
I'm not sure what you're pointing to, could you elaborate a bit?
That was a reference to a case of someone building for Debian (I think), and then having one of their wheel files end up installed on a CentOS system and wondering why things weren't working.
For the record, building against a well-known, old glibc + gcc has served the Anaconda platform well.
The key problem is that there's no straightforward way for us to verify that folks are actually building against a suitably limited set of platform APIs that all Linux distros in widespread use provide.

And when it inevitably fails (which it will - Python and PyPI are too popular for it not to), the folks running into the problem are unlikely to be able to diagnose what has happened. Even once we figure out what has gone wrong, we'd be left having to explain how to blacklist wheel files, and the UX would just generally be terrible (and the burden of dealing with that would fall on the pip and PyPI maintainers, not on the individual projects publishing insufficiently conservative Linux wheel files).

That's why the platform override tags are such a nice idea: it becomes possible to start iterating on possible solutions to the problem without affecting the default installation UX in the near term.

Cheers,

Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On July 7, 2015 at 10:22:55 AM, Nick Coghlan (ncoghlan@gmail.com) wrote:
The key problem is that there's no straightforward way for us to verify that folks are actually building against a suitably limited set of platform APIs that all Linux distros in widespread use provide.
And when it inevitably fails (which it will, Python and PyPI are too popular for it not to), the folks running into the problem are unlikely to be able to diagnose what has happened, and even once we figure out what has gone wrong, we'd be left having to explain how to blacklist wheel files, and the UX would just generally be terrible (and the burden of dealing with that would fall on the pip and PyPI maintainers, not on the individual projects publishing insufficiently conservative Linux wheel files).
That's why the platform override tags are such a nice idea, as it becomes possible to start iterating on possible solutions to the problem without affecting the default installation UX in the near term.
pip 7+ actually has the UI for blacklisting binary packages now - primarily to say "no, don't build a wheel for X", but it also functions as a way to ask *not* to accept wheels for a particular project from PyPI.

In my mind, the biggest reason not to just open up the ability to upload even generic Linux wheels right now is the lack of a safe-ish default. I think we'd need to add a few things:

* Default to per-platform tags (e.g. ubuntu_14_04), but allow this to be customized, and also accept "generic" Linux wheels.

* Put the libc into the file name as well, since it's reasonable to build a "generic" Linux wheel that statically links all of its dependencies (afaik), but it does not really work to statically link glibc. This means that even if you build against an old version of glibc, if you're running on a Linux that *doesn't* use glibc (like Alpine, which uses musl) you'll run into problems.

I think that it is entirely possible to build a generic Linux wheel that will work on any Linux built against the same libc of the same or newer version, but you have to be careful if you do it. You have to ensure all your dependencies are statically linked (if you have any), and you have to ensure that you build against an old enough Linux (likely some form of CentOS).

* Side question, since I don't actually know how a computer works: is it even possible to have a CPython extension link against a different libc than CPython itself is linked against? What if static linking is involved, since there are non-glibc libcs which actually do support static linking?

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
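A sketch of what Donald's naming ideas might look like in practice. All tag names here (ubuntu_14_04, linux_glibc_2_5_x86_64) are hypothetical - nothing in the wheel spec at the time defined them; this is only to make the proposal concrete:

```python
# Hypothetical tag builders for the scheme Donald outlines: a
# per-distro tag, plus a "generic" Linux tag qualified by libc name
# and minimum libc version.  Not defined by any spec.
def distro_tag(distro, release):
    return "{}_{}".format(distro, release.replace(".", "_"))

def generic_linux_tag(libc, min_version, arch):
    return "linux_{}_{}_{}".format(libc, min_version.replace(".", "_"), arch)

print(distro_tag("ubuntu", "14.04"))                # ubuntu_14_04
print(generic_linux_tag("glibc", "2.5", "x86_64"))  # linux_glibc_2_5_x86_64
```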
On Tue, 7 Jul 2015 11:02:40 -0400, Donald Stufft wrote:
In my mind, the biggest reason to not just open up the ability to upload even generic linux wheels right now is the lack of a safe-ish default. I think if we added a few things:
* Default to per platform tags (e.g. ubuntu_14_04), but allow this to be customized and also accept "Generic" Linux wheels as well.
That would be cool :)
* Put the libc into the file name as well since it's reasonable to build a "generic" linux wheel that statically links all dependencies (afaik), however it does not really work to statically link glibc.
True. For example, here is the meat of a build of llvmlite on Linux:

$ ldd miniconda3/pkgs/llvmlite-0.6.0-py34_5/lib/python3.4/site-packages/llvmlite/binding/libllvmlite.so
    linux-vdso.so.1 => (0x00007ffeacefd000)
    libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f8c9e2f5000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f8c9e0d7000)
    librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f8c9decf000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f8c9dccb000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f8c9d9c5000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f8c9d7ae000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f8c9d3ea000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f8c9fcee000)

It embeds LLVM and has no dynamic reference to anything besides the most basic runtime libraries (libstdc++ is statically linked in). The .so file doesn't even require Python (but the rest of llvmlite, which loads the .so file using ctypes, does need Python, of course). We use a similar strategy on Windows and OS X.
This means that even if you build against an old version of glibc if you're running on a Linux that *doesnt* use glibc (like Alpine which uses MUSL) you'll run into problems.
glibc vs. non-glibc is yet a different concern IMHO. Mainstream Linux setups use glibc.
You have to ensure all your dependencies are statically linked (if you have any) and you have to ensure that you build against an old enough Linux (likely some form of CentOS).
Yes, we use CentOS 5...
* Side question, since I don't actually know how a computer works: Is it even possible to have a CPython extension link against a different libc than CPython itself is linked against?
Half-incompetent answer here: I think at link time it would be fine. Then at library load time, it depends on whether the actual system glibc is ABI/API-compatible with the one the C extension (and/or CPython itself) was linked with.

Regards

Antoine.
On Tue, Jul 7, 2015 at 7:02 PM, Donald Stufft wrote:
pip 7+ actually has the UI for blacklisting binary packages now, primarily to ask "no don't build a wheel for X", but it also functions to ask *not* to accept wheels for a particular project from PyPI.
In my mind, the biggest reason to not just open up the ability to upload even generic linux wheels right now is the lack of a safe-ish default. I think if we added a few things:
* Default to per platform tags (e.g. ubuntu_14_04), but allow this to be customized and also accept "Generic" Linux wheels as well. * Put the libc into the file name as well since it's reasonable to build a "generic" linux wheel that statically links all dependencies (afaik), however it does not really work to statically link glibc. This means that even if you build against an old version of glibc if you're running on a Linux that *doesnt* use glibc (like Alpine which uses MUSL) you'll run into problems.
I think that it is entirely possible to build a generic linux wheel that will work on any Linux built the same libc* of the same or newer version, however I think that you have to be careful if you do it. You have to ensure all your dependencies are statically linked (if you have any) and you have to ensure that you build against an old enough Linux (likely some form of CentOS).
* Side question, since I don't actually know how a computer works: Is it even possible to have a CPython extension link against a different libc than CPython itself is linked against? What if static linking is involved since there are non glibc libcs which actually do support static linking?
You can use versioned symbols to manage some of those issues (https://www.kernel.org/pub/software/libs/glibc/hjl/compat/). Some software even ships its own libc, but this is getting hairy fast. The common solution is to do what Antoine mentioned: build on the lowest common denominator, and reduce the dependencies on the system as much as possible.

To be honest, glibc is rarely your problem: the kernel is actually more problematic (some common Python packages don't build on 2.6.18 anymore), and C++ even more so. E.g. LLVM 3.6 will not build with gcc 4.1 (the version in CentOS 5), so you need a new g++, which means a new libstdc++.

I am biased, but that's the kind of thing where you may want to work with "professional" providers with people paid to work on those boring but critical issues.

David
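The "build against an old enough glibc" rule David and Antoine describe can be checked mechanically, by inspecting the versioned symbols a binary imports. A minimal sketch of that check, run here against made-up `objdump -T` output (in practice you would feed it the real output for your .so):

```python
# Sketch: find the newest glibc symbol version a binary depends on, by
# parsing "objdump -T" output.  The sample below is fabricated for
# illustration; real output would come from e.g.
# subprocess.check_output(["objdump", "-T", "libllvmlite.so"]).
import re

SAMPLE = """\
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.2.5  memcpy
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.3.4  __printf_chk
0000000000000000  DF *UND*  0000000000000000  GLIBC_2.14   memmove
"""

def max_glibc_required(objdump_output):
    versions = re.findall(r"GLIBC_(\d+(?:\.\d+)+)", objdump_output)
    # Compare versions numerically, component by component.
    return max(versions, key=lambda v: tuple(int(p) for p in v.split(".")))

print(max_glibc_required(SAMPLE))  # -> 2.14
```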
On 07/06/2015 02:18 PM, Antoine Pitrou wrote:
What if packagers take care of working around the issue? (for example by building on a suitably old Linux platform, as we already do for Conda packages)
Compared to Windows, and even somewhat OS X, the win for uploading wheels to PyPI is minuscule: pretty much everybody developing on Linux has or can get the toolchain required to build a wheel from an sdist, and can share those built wheels across the hosts she knows to be compatible.

A-foolish-consistency'ly,

Tres.

--
Tres Seaver          +1 540-429-0999          tseaver@palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com
On Tue, 07 Jul 2015 19:32:37 -0400, Tres Seaver wrote:
Compared to Windows, and even somewhat OS X, the win for uploading wheels to PyPI is minuscule: pretty much everybody developing on Linux has or can get the toolchain required to build a wheel from an sdist, and can share those built wheels across the hosts she knows to be compatible.
That's a dramatically uninformed statement, to put it politely... Some packages have difficult-to-meet build dependencies, and can also take a long time to build.

llvmlite, the package I'm talking about, builds against the development libraries for LLVM 3.6 (a non-trivial download and install, assuming you can find binaries of that LLVM version for your OS version... otherwise, count ~20 minutes to compile it with a modern quad-core CPU). We regularly get bug reports from people failing to compile the package on Linux (and OS X), which is why we are considering the option of pre-built binary wheels (in addition to the conda packages we already provide, and which some people are reluctant to use).

Regards

Antoine.
On 07/07/2015 07:43 PM, Antoine Pitrou wrote:
That's a dramatically uninformed statement, to put it politely... Some packages have difficult-to-meet build dependencies, and can also take a long time to build.
In the general case, it is *exactly* those projects which are going to trip people up when you upload their binary wheels to PyPI: there will be no way to know that the compiled-in stuff will work on "any" Linux, and solving the problem of "which" Linux variants a given wheel can support is intractable.

At least for performance-sensitive codes, building on an old, "least-common-denominator" platform has been unsatisfactory: squeezing out the most performance on a newer machine requires compiling on exactly that platform, using the best compiler available for it (and maybe tweaking options differently than is even possible on the LCD platform).
llvmlite, the package I'm talking about, builds against the development libraries for LLVM 3.6 (a non-trivial download and install, assuming you can find binaries of that LLVM version for your OS version.... otherwise, count ~20 minutes to compile it with a modern quad-core CPU). We regularly have bug reports from people failing to compile the package on Linux (and OS X), which is why we are considering the option of pre-built binary wheels (in addition to the conda packages we already provide, and which some people are reluctant to use).
A conda package solves the problem by pinning all the underlying non-Python dependencies to "known-good" versions, which makes it the right choice for folks who cannot / won't build it themselves.

Tres.
On Wed, 08 Jul 2015 05:39:13 -0400, Tres Seaver wrote:
In the general case, it is *exactly* those projects which are going to trip people up when you upload their binary wheels to PyPI: there will be no way to know that the compiled-in stuff will work on "any" Linux, and solving the problem of "which" Linux variants a given wheel can support is intractable.
Seriously, how is this even supposed to be relevant? The whole point is to produce best-effort packages that work on still-supported mainstream distros, not on any arbitrary "Linux" setup.

Instead of lecturing people about what is, in your opinion, "intractable", how about you just shut up if you don't have anything constructive to contribute?

Regards

Antoine.
On 07/08/2015 07:10 AM, Antoine Pitrou wrote:
Seriously, how this is even supposed to be relevant? The whole point is to produce best-effort packages that work on still-supported mainstream distros, not any arbitrary "Linux" setup.
I'm arguing that allowing PyPI uploads of binary wheels for Linux will be actively harmful. The chance that hundreds of project maintainers can get the dance you are suggesting right is effectively nil: it requires them to compile wheels only on some unspecified LCD platform which most of them will have no access to. Once uploaded, mis-built wheels will cause no end of pain for the hapless users who try to use them.

conda *is* the right solution for distributing cross-platform binaries for the relatively small-in-number (but not in importance) hard-to-build packages.
Instead of lecturing people about what is in your opinion "intractable", how about you just shut up, if you don't have anything constructive to contribute?
Really?

Tres.
On Wed, 08 Jul 2015 13:05:45 -0400, Tres Seaver wrote:
I'm arguing that allowing PyPI uploads of binary wheels for Linux will be actively harmful.
There is no point in restating an argument that has already been made and discussed in the other subthread (of course, you would have to read it first to know that).

Regards

Antoine.
On 9 July 2015 at 05:06, Antoine Pitrou wrote:
There is no point in restating an argument that has already been made and discussed in the other subthread (of course, you would have to read it first to know that).
Steady on, folks - prebuilt binary software distribution is *really*, *really* hard, and we're not going to magically solve problems in a couple of days that have eluded Linux distribution vendors for over a decade. Yes, it's annoying; yes, it's frustrating; but sniping at each other when we point out the many and varied reasons it's hard won't help us improve the experience for Python users.

The key is remembering that no matter how broken you think prebuilt binary software distribution might be, it's actually worse. And channeling Hofstadter's Law: this principle remains true, even when you attempt to take this principle into account :)

If you look at various prebuilt binary ecosystems to date, there's either a central authority defining the ABI to link against:

- CPython on Windows
- CPython on Mac OS X
- Linux distributions with centralised package review and build systems
- conda
- nix
- MS Visual Studio
- XCode
- Google Play
- Apple App Store

Or else a relatively tightly controlled isolation layer between the application code and the host system:

- JVM
- .NET CLR

(Even Linux containers can still hit the kernel ABI compatibility issues mentioned elsewhere in the thread.)

As Donald notes, I think we're now in a good position to start making progress here, but the first step is going to be finding a way to ensure that *by default*, pip on Linux ignores wheel files published on PyPI, and requires that they be *whitelisted* in some fashion (whether individually or categorically). That way, we know we're not going to make the default user experience on Linux *worse* than the status quo while we're still experimenting with how we want the publication side of things to work. Debugging build-time API compatibility errors can be hard enough; debugging runtime A*B*I compatibility errors is a nightmare even for seasoned support engineers.
It seems to me that one possible way to do that might be to change PyPI from whitelisting Windows and Mac OS X (as I believe it does now) to instead blacklisting all the other currently possible results from distutils.util.get_platform(). Regards, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
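Nick's blacklisting idea can be sketched roughly as follows. The tag normalization mirrors how bdist_wheel derives wheel platform tags from `distutils.util.get_platform()`; the `is_uploadable` check and its allowed prefixes are illustrative assumptions, not PyPI's actual upload rules:

```python
def platform_tag(platform: str) -> str:
    """Normalize a distutils platform string into a wheel platform tag
    (dashes and dots become underscores, as bdist_wheel does)."""
    return platform.replace("-", "_").replace(".", "_")


def is_uploadable(platform: str) -> bool:
    """Hypothetical PyPI-side check: accept only known Windows/Mac
    platform tags, effectively blacklisting everything else that
    distutils.util.get_platform() can currently return (e.g. linux_*)."""
    tag = platform_tag(platform)
    allowed_prefixes = ("win32", "win_amd64", "macosx_")
    return tag.startswith(allowed_prefixes)
```

Under this sketch, `is_uploadable("linux-x86_64")` is false while `is_uploadable("macosx-10.6-intel")` is true, matching the behaviour Nick describes for the status quo.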
On Thu, 9 Jul 2015 23:50:30 +1000
Nick Coghlan
As Donald notes, I think we're now in a good position to start making progress here, but the first step is going to be finding a way to ensure that *by default*, pip on Linux ignores wheel files published on PyPI, and requires that they be *whitelisted* in some fashion (whether individually or categorically). That way, we know we're not going to make the default user experience on Linux *worse* than the status quo while we're still experimenting with how we want the publication side of things to work. Debugging build time API compatibility errors can be hard enough, debugging runtime A*B*I compatibility errors is a nightmare even for seasoned support engineers.
By the way, I think there's another possibility if the Python packaging authority doesn't want to tackle this (admittedly delicate) problem: issue a public statement that Anaconda is the preferred way of installing Linux binary packages when they aren't provided by the user's Linux distribution of choice (or the provided version is too old). It would then give more authority to software developers if they want to tell their users "don't use pip to install our code under Linux, use conda". Regards Antoine.
On Thu, Jul 9, 2015 at 3:50 PM, Antoine Pitrou
On Thu, 9 Jul 2015 23:50:30 +1000 Nick Coghlan
wrote: As Donald notes, I think we're now in a good position to start making progress here, but the first step is going to be finding a way to ensure that *by default*, pip on Linux ignores wheel files published on PyPI, and requires that they be *whitelisted* in some fashion (whether individually or categorically). That way, we know we're not going to make the default user experience on Linux *worse* than the status quo while we're still experimenting with how we want the publication side of things to work. Debugging build time API compatibility errors can be hard enough, debugging runtime A*B*I compatibility errors is a nightmare even for seasoned support engineers.
By the way, I think there's another possibility if the Python packaging authority doesn't want to tackle this (admittedly delicate) problem: issue a public statement that Anaconda is the preferred way of installing Linux binary packages if they aren't provided (or the version is too old) by their Linux distribution of choice.
It would then give more authority to software developers if they want to tell their users "don't use pip to install our code under Linux, use conda".
I don't think it is reasonable for the PyPA to recommend one solution when multiple are available (though it is certainly fair to mention them). ActiveState and Enthought (my own employer) also provide Linux binaries. David
On Thu, 9 Jul 2015 17:52:06 +0100
David Cournapeau
I don't think it is reasonable for pypa to recommend one solution when multiple are available (though it is certainly fair to mention them).
ActiveState, Enthought (my own employer) also provide linux binaries,
You are right, I was forgetting about them. Then mentioning them would probably work. Regards Antoine.
On 10 July 2015 at 00:50, Antoine Pitrou
On Thu, 9 Jul 2015 23:50:30 +1000 Nick Coghlan
wrote: As Donald notes, I think we're now in a good position to start making progress here, but the first step is going to be finding a way to ensure that *by default*, pip on Linux ignores wheel files published on PyPI, and requires that they be *whitelisted* in some fashion (whether individually or categorically). That way, we know we're not going to make the default user experience on Linux *worse* than the status quo while we're still experimenting with how we want the publication side of things to work. Debugging build time API compatibility errors can be hard enough, debugging runtime A*B*I compatibility errors is a nightmare even for seasoned support engineers.
By the way, I think there's another possibility if the Python packaging authority doesn't want to tackle this (admittedly delicate) problem: issue a public statement that Anaconda is the preferred way of installing Linux binary packages if they aren't provided (or the version is too old) by their Linux distribution of choice.
We already provide a page specifically aimed at alerting folks to their prebuilt binary options for the scientific Python stack: https://packaging.python.org/en/latest/science.html In addition to referencing the upstream conda components, that also links through to http://www.scipy.org/install.html where Anaconda and Enthought Canopy are both mentioned. (Also Pyzo, which was a new one to me, and further introduced me to a couple of interesting projects: http://www.iep-project.org/index.html & its successor http://zoof.io/, which aims to take advantage of the Project Jupyter architecture to better support multiple language runtimes)
It would then give more authority to software developers if they want to tell their users "don't use pip to install our code under Linux, use conda".
I'd personally phrase such suggestions more along the lines of "For annoying technical reasons that folks are looking to find ways to fix, if you don't want to build from source yourself, then you'll currently need to use a Python redistributor rather than using the upstream Python Package Index directly. We know the conda binaries are kept up to date, but there isn't anyone we're aware of currently ensuring that up to date versions of our packages are readily available through Linux system package managers." Along those lines, while it's my personal recommendation rather than PyPA's collective recommendation, some of the references in http://www.curiousefficiency.org/posts/2015/04/stop-supporting-python26.html for "Third Party Supported" upgrade paths may prove useful. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
It would be quite useful to see some references and details about the vague issues being mentioned in the thread. It would help less versed engineers (like me) a lot in understanding the issues at hand (and hopefully reduce the amount of disagreement overall).
For example, it's not clear to me what's wrong with Antoine's proposal (compile on CentOS 5) - it seemed like quite a sensible approach to producing a reasonably compatible binary.
Some issues with the kernel ABI have been mentioned - can anyone point me to some resources describing the possible problems? Is it correct to assume that it's about using vendor-specific kernel APIs?
Also, what does Conda do to solve the binary compatibility issues that distutils or pip could never do (or implement)?
Thanks,
-- Ionel Cristian Mărieș
On Thu, Jul 9, 2015 at 4:50 PM, Nick Coghlan
On 9 July 2015 at 05:06, Antoine Pitrou
wrote: On Wed, 08 Jul 2015 13:05:45 -0400 Tres Seaver
wrote:
On 07/08/2015 07:10 AM, Antoine Pitrou wrote:
Seriously, how is this even supposed to be relevant? The whole point is to produce best-effort packages that work on still-supported mainstream distros, not any arbitrary "Linux" setup.
I'm arguing that allowing PyPI uploads of binary wheels for Linux will be actively harmful.
There is no point in restating an argument that has already been made and discussed in the other subthread (of course, you would have to read it first to know that).
Steady on folks - prebuilt binary software distribution is *really*, *really* hard, and we're not going to magically solve problems in a couple of days that have eluded Linux distribution vendors for over a decade. Yes, it's annoying, yes, it's frustrating, but sniping at each other when we point out the many and varied reasons it's hard won't help us to improve the experience for Python users.
The key is remembering that no matter how broken you think prebuilt binary software distribution might be, it's actually worse. And channeling Hofstadter's Law: this principle remains true, even when you attempt to take this principle into account :)
If you look at various prebuilt binary ecosystems to date, there's either a central authority defining the ABI to link against:
- CPython on Windows
- CPython on Mac OS X
- Linux distributions with centralised package review and build systems
- conda
- nix
- MS Visual Studio
- XCode
- Google Play
- Apple App Store
Or else a relatively tightly controlled isolation layer between the application code and the host system:
- JVM - .NET CLR
(even Linux containers can still hit the kernel ABI compatibility issues mentioned elsewhere in the thread)
As Donald notes, I think we're now in a good position to start making progress here, but the first step is going to be finding a way to ensure that *by default*, pip on Linux ignores wheel files published on PyPI, and requires that they be *whitelisted* in some fashion (whether individually or categorically). That way, we know we're not going to make the default user experience on Linux *worse* than the status quo while we're still experimenting with how we want the publication side of things to work. Debugging build time API compatibility errors can be hard enough, debugging runtime A*B*I compatibility errors is a nightmare even for seasoned support engineers.
It seems to me that one possible way to do that might be to change PyPI from whitelisting Windows and Mac OS X (as I believe it does now) to instead blacklisting all the other currently possible results from distutils.util.get_platform().
Regards, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia _______________________________________________ Distutils-SIG maillist - Distutils-SIG@python.org https://mail.python.org/mailman/listinfo/distutils-sig
On July 9, 2015 at 7:41:25 PM, Ionel Cristian Mărieș (contact@ionelmc.ro) wrote:
It would be quite useful to see some references and details about the vague issues being mentioned in the thread. It would help less versed engineers (like me) a lot in understanding the issues at hand (and hopefully reduce the amount of disagreement overall).
For example, it's not clear to me what's wrong with Antoine's proposal (compile on CentOS 5) - it seemed like quite a sensible approach to producing a reasonably compatible binary.
Some issues with the kernel ABI have been mentioned - can anyone point me to some resources describing the possible problems? Is it correct to assume that it's about using vendor-specific kernel APIs?
It is about ABI. I'm not an expert, but essentially anything that a Wheel doesn't statically link needs to be ABI compatible. For a plain C extension that doesn't link to anything else, you have primarily:

* The libc that it was linked against.
* The Python that it was linked against.

Currently on Python 3 we have a "stable" ABI (though I don't think it covers the entire thing) which is represented in the Wheel filename; however, we don't have anything to cover the libc version. The most common libc in use is glibc, and that is (as far as I know) basically always ABI compatible when going from an older version to a newer version. However, there is no guarantee that a glibc from Linux X will be ABI compatible with a glibc from Linux Z. There isn't even a guarantee that it will be glibc at all (for instance, Alpine Linux uses musl).

On top of that, you have the fact that a lot of C extensions are not self-contained C extensions but are instead bindings for some other library. The problem starts becoming a lot bigger here: for example, psycopg2 is a pretty popular library that links to libpq. This may or may not be ABI compatible across CentOS and Ubuntu (for instance), and trying to install one that isn't will cause breakage.

Circling back to Antoine's suggestion, the problem isn't that it wouldn't actually work (it would, sometimes); the problem is that you need to be careful about using it, because it only works in some situations. In order for that to work, you need to make sure that the project you're compiling is either a completely self-contained C extension or that you statically link everything that you're linking against. You also need to make sure that the person compiling the Wheel does so on a sufficiently ancient glibc to cover everyone you care about covering.
However, this will still break in "weird" ways on even older versions of glibc, on Linux distributions which do not use glibc, or simply because two Linux distributions decided to make their glibc slightly incompatible. This is the reason it's currently blocked: the edge cases are sufficiently sharp, and you have to be so careful to build your Wheels in just the right way, that I felt it was better to punt on it until someone puts in the effort to make it do something more closely resembling the right thing by default.
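One way a tool could at least detect which libc it is running against (a sketch of mine, not something pip did at the time) is to ask glibc for its own version via `ctypes`; on a non-glibc system such as Alpine, the `gnu_get_libc_version` symbol simply isn't there:

```python
import ctypes


def glibc_version():
    """Return the running glibc's version string (e.g. "2.31"), or None
    when the process is not linked against glibc (e.g. musl on Alpine,
    or a non-Linux platform without that symbol)."""
    try:
        libc = ctypes.CDLL(None)  # handle to the already-loaded libc
        get_version = libc.gnu_get_libc_version  # glibc-only symbol
    except (OSError, AttributeError):
        return None
    get_version.restype = ctypes.c_char_p
    return get_version().decode("ascii")
```

A check like this only answers "what glibc am I on right now"; it says nothing about the other shared libraries a Wheel links against, which is the larger part of the problem Donald describes.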
Also, what does Conda do to solve the binary compatibility issues that distutils or pip could never do (or implement)?
They don't do anything (to my knowledge, and hopefully Antoine or someone can correct me if I'm wrong) to solve the libc compatibility problem besides building on a sufficiently old version of CentOS that they feel comfortable calling that their minimum level of support. I assume that if I tried to run Conda on something like Alpine, the default repositories would break since it doesn't use glibc.

They however _do_ own the ABI of everything else, so everything from the Python you're using to the libpq to the openssl comes from their repositories. This means they don't have to worry about the ABI differences of OpenSSL in CentOS/RHEL 5 and Ubuntu 14.04, since they'll never use them; they'll just use the OpenSSL they personally have packaged.

This isn't something we can adopt, because we're not trying to be another platform (in the vein of Conda); we're trying to provide common tooling that can be used across a wide range of platforms to install Python projects. This will absolutely involve trying to figure out what the right line to walk is for us, between defining a de facto platform and simply being something you "plug" into another platform. I think the best way to go about this is to figure out the best way to publish Wheels for a specific platform, and to focus on identifying the platforms and providing ways for people to override that. A generic "Linux" platform is a reasonable one, but I don't think it's a reasonable default, since actually supporting it requires taking the special precautions from above.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Fri, Jul 10, 2015 at 12:16 AM, Ionel Cristian Mărieș
It would be quite useful to see some references and details about the vague issues being mentioned in the thread. It would help less versed engineers (like me) a lot in understanding the issues at hand (and hopefully reduce the amount of disagreement overall).
For example, it's not clear to me what's wrong with Antoine's proposal (compile on CentOS 5) - it seemed like quite a sensible approach to producing a reasonably compatible binary.
Some issues with the kernel ABI have been mentioned - can anyone point me to some resources describing the possible problems? Is it correct to assume that it's about using vendor-specific kernel APIs?
No, it is about some Python packages depending directly or indirectly on kernel features not available in the kernel on CentOS 5. For example, you can't build subprocess32 (https://pypi.python.org/pypi/subprocess32/) on CentOS 5 kernels.
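The minimum-kernel constraint David mentions could be guarded against with a simple uname parse. This is a hypothetical sketch of mine, and only an approximation: the real issue is which headers and syscalls were available on the *build* machine, not the version string at run time:

```python
def kernel_tuple(release):
    """Parse the (major, minor) pair out of a uname release string,
    e.g. "2.6.18-398.el5" -> (2, 6)."""
    major, minor = release.split(".")[:2]
    return int(major), int(minor.split("-")[0])


def kernel_at_least(release, minimum):
    """Rough guard: is the kernel described by `release` at least
    the (major, minor) tuple `minimum`?"""
    return kernel_tuple(release) >= minimum
```

For instance, a build script could refuse to proceed when `kernel_at_least(os.uname().release, (2, 6))` is false, though that still says nothing about distro-specific backports.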
Also, what does Conda do to solve the binary compatibility issues that distutils or pip could never do (or implement)?
They do what almost everybody distributing large applications on Linux does: they "ship the world". Any large binary Python distribution provider does the same here: except for low-level X11/glibc libraries, everything else is bundled as part of the distribution. So no magic, just lots of maintenance work. David
Thanks, -- Ionel Cristian Mărieș
On Thu, Jul 9, 2015 at 4:50 PM, Nick Coghlan
wrote: On 9 July 2015 at 05:06, Antoine Pitrou
wrote: On Wed, 08 Jul 2015 13:05:45 -0400 Tres Seaver
wrote:
On 07/08/2015 07:10 AM, Antoine Pitrou wrote:
Seriously, how is this even supposed to be relevant? The whole point is to produce best-effort packages that work on still-supported mainstream distros, not any arbitrary "Linux" setup.
I'm arguing that allowing PyPI uploads of binary wheels for Linux will be actively harmful.
There is no point in restating an argument that has already been made and discussed in the other subthread (of course, you would have to read it first to know that).
Steady on folks - prebuilt binary software distribution is *really*, *really* hard, and we're not going to magically solve problems in a couple of days that have eluded Linux distribution vendors for over a decade. Yes, it's annoying, yes, it's frustrating, but sniping at each other when we point out the many and varied reasons it's hard won't help us to improve the experience for Python users.
The key is remembering that no matter how broken you think prebuilt binary software distribution might be, it's actually worse. And channeling Hofstadter's Law: this principle remains true, even when you attempt to take this principle into account :)
If you look at various prebuilt binary ecosystems to date, there's either a central authority defining the ABI to link against:
- CPython on Windows
- CPython on Mac OS X
- Linux distributions with centralised package review and build systems
- conda
- nix
- MS Visual Studio
- XCode
- Google Play
- Apple App Store
Or else a relatively tightly controlled isolation layer between the application code and the host system:
- JVM - .NET CLR
(even Linux containers can still hit the kernel ABI compatibility issues mentioned elsewhere in the thread)
As Donald notes, I think we're now in a good position to start making progress here, but the first step is going to be finding a way to ensure that *by default*, pip on Linux ignores wheel files published on PyPI, and requires that they be *whitelisted* in some fashion (whether individually or categorically). That way, we know we're not going to make the default user experience on Linux *worse* than the status quo while we're still experimenting with how we want the publication side of things to work. Debugging build time API compatibility errors can be hard enough, debugging runtime A*B*I compatibility errors is a nightmare even for seasoned support engineers.
It seems to me that one possible way to do that might be to change PyPI from whitelisting Windows and Mac OS X (as I believe it does now) to instead blacklisting all the other currently possible results from distutils.util.get_platform().
Regards, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
I just checked, and indeed the python executable installed by miniconda does not work on Alpine Linux (launched via docker from the gliderlabs/alpine image):

# ldd /root/miniconda3/pkgs/python-3.4.3-0/bin/python3.4
    /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libpython3.4m.so.1.0 => /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0 (0x7f26bd153000)
    libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libutil.so.1 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libm.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __finite: symbol not found
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __rawmemchr: symbol not found
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __isinff: symbol not found
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __isnan: symbol not found
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __isinf: symbol not found

We could still have a platform or ABI tag for Linux that would include some libc information, to ensure that it points to a compatible glibc, and provide a reference docker image to build such wheels. We could assume that wheel binary packages should not link to any .so file from the system besides the libc.

I think the packages that have specific kernel ABI requirements are rare enough to be explicitly left out of this first effort, letting users build those from source directly. Is there an easy way to introspect a binary to detect such kernel dependencies?

-- Olivier
On Fri, Jul 10, 2015 at 1:53 PM, Olivier Grisel
I just checked and indeed the python exec installed by miniconda does not work on Alpine linux (launch via docker from the gliderlabs/alpine image):
# ldd /root/miniconda3/pkgs/python-3.4.3-0/bin/python3.4
    /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libpython3.4m.so.1.0 => /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0 (0x7f26bd153000)
    libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libutil.so.1 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libm.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __finite: symbol not found
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __rawmemchr: symbol not found
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __isinff: symbol not found
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __isnan: symbol not found
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __isinf: symbol not found
We could still have a platform or ABI tag for linux that would include some libc information to ensure that it points to a compatible glibc and provide a reference docker image to build such wheels.
We could assume that wheel binary packages should not link to any .so file from the system besides the libc.
This is too restrictive if you want plotting-related packages (which I suspect you are interested in ;) ). The libraries we at Enthought depend on for our packages are:

* glibc (IMO if you use a system w/o glibc, you are expected to be on your own to build packages from sources)
* X11/fontconfig
* libstdc++

Those are the ones you really do not want to ship. I don't know what proportion of packages from PyPI would work if you could assume the system has those available, through some kind of ABI/platform specifier following PEP 425. David
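David's "only link against a small system whitelist" policy can be sketched as a check over `ldd` output. The whitelist below is illustrative (my guess at concrete sonames for the libraries he names), not Enthought's actual list, and real tooling would inspect ELF `DT_NEEDED` entries rather than parse text:

```python
# Hypothetical whitelist of system sonames a wheel may depend on.
ALLOWED_SONAMES = {
    "libc.so.6", "libm.so.6", "libdl.so.2", "libpthread.so.0",
    "libutil.so.1", "libstdc++.so.6", "libX11.so.6", "libfontconfig.so.1",
}


def disallowed_libs(ldd_output):
    """Return the dependencies in `ldd` output that fall outside the
    whitelist, i.e. libraries the wheel would have to bundle instead."""
    bad = []
    for line in ldd_output.splitlines():
        line = line.strip()
        if "=>" not in line:  # skip the dynamic loader line, etc.
            continue
        soname = line.split("=>")[0].strip()
        if soname not in ALLOWED_SONAMES:
            bad.append(soname)
    return bad
```

Run over the ldd output of a psycopg2 extension, for example, a check like this would flag `libpq.so.5` as something that must be bundled or statically linked before the wheel could claim the generic platform tag.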
On 07/10/2015 12:00 AM, David Cournapeau wrote:
They do what almost everybody distributing large applications on Linux do : they "ship the world". Any large binary python distribution provider does the same here: except for low level X11/glibc libraries, everything else is bundled as part of the distribution.
Huh, sounds like Windows. <ducks and runs> -- ~Ethan~
participants (10)
- Antoine Pitrou
- David Cournapeau
- Donald Stufft
- Ethan Furman
- Ian Cordasco
- Ionel Cristian Mărieș
- Nick Coghlan
- Olivier Grisel
- Paul Moore
- Tres Seaver