On Wed, May 25, 2016 at 11:22 AM, Thomas Güttler <guettliml@thomas-guettler.de> wrote:
On 25.05.2016 at 15:55, Paul Moore wrote:
On 25 May 2016 at 14:42, Thomas Güttler <guettliml@thomas-guettler.de> wrote:
On 25.05.2016 at 09:57, Alex Grönholm wrote:
Amen to that, but who will pay for it? I imagine a great deal of processing power would be required for this. How do implementors of other languages handle this?
I talked with someone who is a member of the Python Software Foundation, and he said that money for projects like this is available. Of course this was not an official statement.
The other aspect of this is who has sufficient time/expertise to set something like this up? Are you volunteering to do this?
I am volunteering to do coordination work:
- communication
- layout of data structures
- interchange of data structures
- no coding

But we need at least ten people who say "I'm willing to help".
We (scientific Python folks) have been thinking about this too [1]. We're getting to the stage where the informal methods we have been using are difficult to coordinate. As more people provide wheels, it gets more common for packages to release source before wheels, causing a cry of pain from users whose installation suddenly fails or changes. The situation got particularly bad when Python 3.5 was released, because few or none of us had Python 3.5 packages ready, and so nearly all new installs were suddenly not getting wheels, and suffering.

We all really need some system that can, from some simple trigger like a push of a git tag, build wheels for Windows 32 + 64, OSX, and manylinux1_x86_64, for Pythons 2.7, 3.4 and 3.5 (and 3.6?), test these installs, and, if the tests pass, push the wheels either to an accessible spot (we have been using donated Rackspace hosting) or to PyPI directly.

We're fairly close to that at the moment. Many of us are already building and testing binaries on Appveyor and Travis. We have lots of projects set up with repos in the MacPython github org, where the repositories exist only to build and test OSX wheels on travis-ci OSX VMs and push the wheels to Rackspace - e.g. [2]. We have been setting up similar systems for manylinux - e.g. [3, 4].

The problem is that these systems are largely manual, in that a release for e.g. numpy involves:

* trigger build / test on a separate Appveyor repo;
* trigger build / test on MacPython/numpy-wheels;
* trigger build on manylinux-builds / test on manylinux-testing;
* when all these are done and the tests are passing, locate the generated binaries on Rackspace / Appveyor and upload them to PyPI.

This is complicated, and it's relatively hard for a given package to set this up for themselves. When Python 3.6 comes out, we'll all have to do this release procedure at more or less the same time.

So, I would love to have a system that could either collate these different services (Appveyor, Travis, Circle-CI, Rackspace, AWS) into something coherent, or generate something new that is more streamlined. See conda-forge [5] for an example of collating build services. I think it's fine for each package to specify its own build and test recipes, as long as they can do it in a way that is well defined, with examples to work from. The success of travis-ci is a testament to the ingenuity of packagers in getting their packages built and tested.

Maybe this could be a PEP of its own. It would certainly help to have some idea of what kind of support the PSF can give - the spec would look different for a new custom system than for a system collating Appveyor / Travis etc. I'm certainly happy to devote time to this (in the hope of saving a lot of time later). Two very rough sketches of the tag-trigger step and the upload step are appended below the links.

Best,

Matthew

[1] https://github.com/scipy/scipy/issues/6157#issuecomment-219314029
[2] https://github.com/MacPython/numpy-wheels
[3] https://github.com/matthew-brett/manylinux-builds
[4] https://github.com/matthew-brett/manylinux-testing
[5] https://conda-forge.github.io/
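
Sketch 1: to make the "push of a git tag" trigger concrete, here is a very rough, untested Python 3 sketch of the kind of CI step I have in mind: do nothing unless the build was started by a tag, otherwise build a wheel, install it, and run the tests against the installed wheel. TRAVIS_TAG and APPVEYOR_REPO_TAG_NAME are the variables those two services set for tag builds; the package name "examplepkg" and the pytest invocation are made up for illustration.

    import os
    import subprocess
    import sys

    # Only do the wheel build + test when CI was triggered by a pushed git tag.
    tag = os.environ.get("TRAVIS_TAG") or os.environ.get("APPVEYOR_REPO_TAG_NAME")
    if not tag:
        print("Not a tag build - nothing to do")
        sys.exit(0)

    # Build a wheel from the current checkout into ./dist
    subprocess.check_call([sys.executable, "-m", "pip", "wheel", ".", "-w", "dist"])

    # Install the freshly built wheel (not the source tree) and run the tests
    # against it; "examplepkg" is a stand-in for the real project name.
    subprocess.check_call([sys.executable, "-m", "pip", "install",
                           "--no-index", "--find-links=dist", "examplepkg"])
    subprocess.check_call([sys.executable, "-m", "pytest", "--pyargs", "examplepkg"])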
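
Sketch 2: an equally rough sketch of the last, currently manual, step: scrape the CI-built wheels for one release off a plain HTTP index page and hand them to twine for the PyPI upload. The host URL, project name and version below are placeholders, not our real hosting.

    import os
    import re
    import subprocess
    import urllib.request

    WHEEL_INDEX = "https://wheels.example.invalid/"   # placeholder, not our real host
    PROJECT = "examplepkg"                            # placeholder project name
    VERSION = "1.2.3"                                 # release to collect
    DEST = "collected_wheels"

    os.makedirs(DEST, exist_ok=True)
    html = urllib.request.urlopen(WHEEL_INDEX).read().decode("utf-8")

    # Pull out every wheel filename for this project + version from the index page.
    pattern = r'href="(%s-%s-[^"]+\.whl)"' % (re.escape(PROJECT), re.escape(VERSION))
    wheels = sorted(set(re.findall(pattern, html)))

    for fname in wheels:
        print("fetching", fname)
        urllib.request.urlretrieve(WHEEL_INDEX + fname, os.path.join(DEST, fname))

    # Hand everything we collected to twine, which does the actual PyPI upload.
    subprocess.check_call(["twine", "upload"] +
                          [os.path.join(DEST, f) for f in wheels])

A real version would of course need credentials for twine (e.g. via ~/.pypirc) and some check that wheels for all the expected platforms and Python versions are present before anything gets uploaded.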