Hey, A recent post to the wheel-builders mailing list pointed out some links to places providing free PowerPC hosting for open source projects, if they approve a submitted request: https://mail.python.org/pipermail/wheel-builders/2017-February/000257.html It would be good to get some testing going on these architectures. Shall we apply for hosting, as the numpy organization? Cheers, Matthew
On Thu, Feb 16, 2017 at 8:02 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Those are bare VMs it seems. Remembering the Buildbot and Mailman horrors, I think we should be very reluctant to take responsibility for maintaining CI on anything that's not hosted and controllable with a simple config file in our repo. Ralf
On Wed, Feb 15, 2017 at 7:37 PM, Ralf Gommers <ralf.gommers@gmail.com> wrote:
Not sure what you mean about mailman - maybe the Enthought servers we didn't have access to? For buildbot, I've been maintaining about 12 crappy old machines for about 7 years now [1] - I'm happy to do the same job for a couple of properly hosted PPC machines. At least we'd have some way of testing on these machines if we get stuck, even if that involved spinning up a VM and installing the stuff we needed from the command line. Cheers, Matthew [1] http://nipy.bic.berkeley.edu/buildslaves
On Thu, Feb 16, 2017 at 8:45 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
We did have access (for most of the time); it's just that no one is interested in putting in lots of hours on sysadmin duties.
That's awesome persistence. The NumPy and SciPy buildbots certainly weren't maintained like that; half of them were usually offline or broken for long periods.
I do see the value of testing on more platforms of course. It's just about logistics/responsibilities. If you're saying that you'll do the maintenance, and want to apply for resources using the NumPy name, that's much better I think than making "the numpy devs" collectively responsible. Ralf
On Wed, Feb 15, 2017 at 7:55 PM, Ralf Gommers <ralf.gommers@gmail.com> wrote:
Right - they do need persistence, and to have someone who takes responsibility for them.
Yes, exactly. I'm happy to take responsibility for them, I just wanted to make sure that numpy devs could get at them if I'm not around for some reason. Matthew
Hi, On Thu, Feb 16, 2017 at 12:50 AM, Ralf Gommers <ralf.gommers@gmail.com> wrote:
OK - IBM have kindly given me access to a testing machine, via my own SSH public key. Would it make sense to have a Numpy key, with several people having access to the private key and passphrase? Cheers, Matthew
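(For illustration only: one way to set up such a shared key would be to generate a dedicated, passphrase-protected key pair and circulate the private half among a small group of maintainers. A minimal sketch, assuming OpenSSH's ssh-keygen is on the PATH; the file name, comment and passphrase below are made-up placeholders, not anything the project actually uses.)

```python
import subprocess

# Hypothetical sketch: generate a dedicated shared key pair for the PPC test
# machines.  Assumes OpenSSH's ssh-keygen is installed; file name, comment
# and passphrase are placeholders.
subprocess.run(
    [
        "ssh-keygen",
        "-t", "ed25519",                 # key type
        "-f", "numpy_ppc_testing_key",   # writes the key and its .pub file
        "-C", "shared numpy testing key",
        "-N", "REPLACE-WITH-SHARED-PASSPHRASE",
    ],
    check=True,
)
# numpy_ppc_testing_key.pub is what would be sent to IBM; the private key
# file plus the passphrase is what a small group of numpy devs would hold.
```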
The Debian project has some powerpc machines (and we still build numpy on those boxes when I upload a new revision to our archives), and they also have hosts dedicated to letting Debian developers log in and debug issues with their packages on that architecture. I can sponsor access to those machines for some of you, but it is not a place where you can host a CI instance.

To keep in mind, more broadly than powerpc: these are all the archs where numpy was built after the last upload: https://buildd.debian.org/status/package.php?p=python-numpy&suite=unstable (the grayed-out archs are the non-release-critical ones, so packages are built on a best-effort basis and a missing build is not a big deal).

--
Sandro "morph" Tosi
My website: http://sandrotosi.me/
Me at Debian: http://wiki.debian.org/SandroTosi
G+: https://plus.google.com/u/0/+SandroTosi
On Thu, Feb 16, 2017 at 3:53 PM, Sandro Tosi <morph@debian.org> wrote:
Thanks Sandro. It looks like even for the release-critical ones, it's just the build that has to succeed and failures are not detected? For example, armel is green but has 9 failures: https://buildd.debian.org/status/fetch.php?pkg=python-numpy&arch=armel&ver=1%3A1.12.0-2&stamp=1484889563&raw=0 Ralf
On Thu, Feb 16, 2017 at 3:55 AM, Ralf Gommers <ralf.gommers@gmail.com> wrote:
More general questions on this: Are there any overviews of which packages in the python-for-science or python-for-data-analysis areas work correctly on different platforms? Are there any platforms/processors, besides the standard x32/x64, where this is important?

For example, for statsmodels: in early releases of statsmodels, maybe 5 to 7 years ago, Yarik and I were still debugging problems on several machines like ppc and s390x during Debian testing. Since then I haven't heard much about specific problems.

The current status for statsmodels on Debian machines is pretty mixed. On several of them some dependencies are not available, and in some cases we have errors that might be caused by errors in dependencies, e.g. cvxopt. The ppc64el test run for statsmodels has a large number of failures, but checking scipy, it looks like it is also not working properly there: https://buildd.debian.org/status/fetch.php?pkg=python-scipy&arch=ppc64el&ver=0.18.1-2&stamp=1477075663&raw=0 In those cases it would be impossible to start debugging if we had to work through the entire dependency chain.

CI testing for Windows, Apple and Linux, mainly on x64, seems to be working pretty well, with some delays while version incompatibilities are fixed. But anything that is not in a CI testing setup looks pretty random to me.

(I'm mainly curious what the status of those machines is. I'm not really eager to create more debugging work, but sometimes failures on a machine point to code that is "fragile".)

Josef
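(As an aside, not from the original message: when dependencies may be missing or broken on a given porter box, a quick availability check before digging into statsmodels failures might look like the sketch below. The package list is just an example.)

```python
import importlib

# Rough sketch: report which parts of the stack are importable on this box,
# and their versions, before attempting to debug anything further up the
# dependency chain.  The list of packages is illustrative.
for name in ["numpy", "scipy", "cvxopt", "pandas", "statsmodels"]:
    try:
        mod = importlib.import_module(name)
        print("{:12s} {}".format(name, getattr(mod, "__version__", "unknown")))
    except Exception as exc:  # not installed, or a broken extension module
        print("{:12s} NOT AVAILABLE ({}: {})".format(
            name, type(exc).__name__, exc))
```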
On Thu, Feb 16, 2017 at 3:55 AM, Ralf Gommers <ralf.gommers@gmail.com> wrote:
I made any error in the test suite non-fatal so that we could collect the errors and then report them back. Sadly I'm currently lacking the time to report all the errors on the archs; I will try to get to that soon.

--
Sandro "morph" Tosi
My website: http://sandrotosi.me/
Me at Debian: http://wiki.debian.org/SandroTosi
G+: https://plus.google.com/u/0/+SandroTosi
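(To illustrate the idea only, not Debian's actual packaging code, which lives in debian/rules: running the suite in a way that records failures but never fails the package build could look roughly like this.)

```python
import sys
import numpy

# Sketch only: run the numpy test suite, record whether it passed, and exit 0
# regardless, so the buildd still shows the package build as successful.
result = numpy.test(verbose=2)

# Older (nose-based) releases return a test-result object, newer ones a bool;
# handle both without assuming which numpy is installed.
passed = result.wasSuccessful() if hasattr(result, "wasSuccessful") else bool(result)

with open("numpy-test-status.txt", "w") as fh:
    fh.write("passed\n" if passed else "FAILED - see build log for details\n")

sys.exit(0)
```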
On Wed, Feb 15, 2017 at 6:53 PM, Sandro Tosi <morph@debian.org> wrote:
Numpy master now passes all tests on PPC64el. Still a couple of remaining failures for PPC64 (big-endian):

* https://github.com/numpy/numpy/pull/8566
* https://github.com/numpy/numpy/issues/8325

Cheers, Matthew
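(For readers unfamiliar with why ppc64 behaves differently: the remaining failures are on the big-endian variant, and byte order is exactly the kind of thing that only gets exercised "the other way around" there. Below is a small, self-contained illustration of explicit byte-order handling in numpy; it is unrelated to the specific failing tests linked above.)

```python
import sys
import numpy as np

# 'little' on x86 and ppc64el, 'big' on ppc64.
print("host byte order:", sys.byteorder)

# An array stored explicitly as big-endian doubles, whatever the host is.
big = np.array([1.0, 2.0, 3.0], dtype='>f8')

# byteswap() flips the bytes in memory; newbyteorder() flips the dtype's
# declared byte order.  Doing both yields the same logical values stored the
# other way around; forgetting one of the two steps is the classic bug that
# only shows up on the byte order the code wasn't developed on.
roundtrip = big.byteswap().newbyteorder()
assert np.allclose(big, roundtrip)
print("dtype byte orders:", big.dtype.byteorder, roundtrip.dtype.byteorder)
```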
participants (4)
- josef.pktd@gmail.com
- Matthew Brett
- Ralf Gommers
- Sandro Tosi