On 25 January 2014 21:56, Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
> I'll try to summarise your take on this: You would like to take the time to ensure that Python packaging is done properly. That may mean that some functionality isn't available for some time, but you think that it's better to "get it right" than rush something out the door just to "get it working fast".
>
> That's not an unreasonable position to take but I wanted to contrast that with your advice to numpy: Just rush something out of the door even if it has obvious problems. Don't worry about getting it right; we'll do that later...
Just to be clear, that's *not* my position. (It may be the position of some of the other pip developers.) My view is that the sooner "pip install X" works out of the box in as many of the cases where it previously didn't, the faster we'll get adoption and the sooner people will start reporting any remaining issues. It won't be a perfect solution, but it will be better than the current status quo (at least, I'm looking for solutions that *are* better than the status quo :-)).

I'd also like to see some visible impact from wheels - at the moment, even though (for example) pip and setuptools publish wheels, I doubt anyone sees any difference. I sort of wish we hadn't managed to make wheels quite so transparent in use, ironically :-) Wheels are an improvement over the wininst status quo because they support virtualenvs, so I want to see numpy wheels available - that would be a significant step in making people aware of some of the improvements we're making.

I do *not* want to see existing numpy users, or the specialists already well served by the scientific Python community, harmed by the existence of wheels - but my impression was that the advice given to people who want to use numpy/scipy seriously is to use one of the curated stacks like conda or Enthought. So I'm looking at people who don't currently use numpy but are somewhat interested in trying it out (not enough to install a whole new Python stack, though). Those people either use the wininst installers (which I'm *not* advocating that we remove) or they use virtualenvs, and have the impression that they have to jump through at least a few hoops to get numpy to work. It's purely the people who can't use the wininst installers, and don't want a curated stack, who are my focus at the moment.

It seems to me that there are a number of solutions for them:

1. Ignore them. It's a small enough group as to not matter.
   I have a personal dislike of this option, because I'm in this group :-) But it *is* a fair position to take.

2. The pip developers add facilities to pip that allow the numpy folks to generate multi-architecture wheels.

3. The numpy folks put up interim wheels that work for some, but not all, users.

Option 2 has an issue because developing a "proper" solution is still a long way off. A "quick fix" postinstall script solution is possible, but even that requires some development work, and once that has been done, a new pip release is needed, and the numpy folks then need to add the relevant postinstall scripts and update their build process to incorporate the multi-architecture DLLs into their wheels. So it's a non-trivial amount of work and time, even if it's quicker to deliver than a "full" solution.

Option 3 has a problem because there's a support cost from people who try the wheels and get obscure runtime errors. But that's the only cost I can see - the numpy people can build wheels by just adding "setup.py bdist_wheel" to their build process alongside the current "setup.py bdist_wininst", and then upload them - maybe to testpypi, if it's important to make using the wheels an opt-in process. Also, maybe we could distribute a script that lets people check in advance whether the numpy wheels would work on their PC.

I'm not giving conflicting advice here - all I'm doing is looking for the short-term action that gives the most benefit for the least cost. It seems to me that option 2 takes longer and involves more effort than option 3. I'm not even convinced that option 2 is less effort for the numpy developers (if we ignore the pip development work). But I know nothing about the numpy build process, so my assumptions could be way off here - I'd be more than happy for someone to clarify what effort is involved in publishing wheels for numpy (which already exist, is that right?).
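For what it's worth, the "check in advance" script needn't be much more than a few lines. Here's a purely illustrative sketch - it assumes the compatibility question is whether the CPU supports SSE2 (the distinction the numpy "superpack" wininst installers select on at install time); the function name and detection strategy are my own invention, not anything numpy currently ships:

```python
# Hypothetical pre-flight check: would an SSE2-built numpy wheel
# run on this machine? Best effort only; returns True, False, or
# None when we can't tell.
import ctypes
import platform
import sys

def has_sse2():
    machine = platform.machine().lower()
    if machine in ("x86_64", "amd64"):
        # SSE2 is part of the x86-64 baseline ISA, so any 64-bit
        # x86 CPU is guaranteed to have it.
        return True
    if sys.platform == "win32":
        # PF_XMMI64_INSTRUCTIONS_AVAILABLE (10) tests for SSE2.
        return bool(ctypes.windll.kernel32.IsProcessorFeaturePresent(10))
    if sys.platform.startswith("linux"):
        try:
            with open("/proc/cpuinfo") as f:
                return "sse2" in f.read()
        except OSError:
            return None
    return None

if __name__ == "__main__":
    result = has_sse2()
    if result is True:
        print("This machine looks compatible with an SSE2 numpy wheel.")
    elif result is False:
        print("No SSE2 detected - an SSE2 wheel would likely crash here.")
    else:
        print("Could not determine CPU features; proceed with caution.")
```

Something along these lines could be published alongside the interim wheels so that trying them stays an informed, opt-in step rather than a lottery.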
As general information, that would be very valuable, because I'm pretty sure most people round here assume it's little more than adding "bdist_wheel" to an existing binary distribution production process.

My apologies for this email being so long, but hopefully it's explained my position more clearly than I seem to have done previously.

Paul