NumPy 1.0.4 release

Hi all,

After speaking with Travis, I think that we can release NumPy 1.0.4 by the end of the month. 1.0.3 came out almost 5 months ago and there have been a number of bug fixes and other improvements since then.

Please take a look at the 1.0.4 roadmap: http://scipy.org/scipy/numpy/milestone/1.0.4 There are currently 23 active tickets that could use some attention. Hopefully, we can close as many of these as possible before the release (some need fixes; some can just be closed if appropriate). If you see something that must be addressed before the release, mark it with a severity level of 'blocker'.

Here is the proposed release schedule:

Monday (10/22)
* Development Freeze. After this point, only bug fixes should be checked in.

Sunday (10/28)
* Release Tagged. At this point, I will be asking for help from packagers and testers.

Thursday (11/1)
* Release Announced.

Thanks,

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

Jarrod Millman wrote:
Hi all,
After speaking with Travis, I think that we can release NumPy 1.0.4 by the end of the month. 1.0.3 came out almost 5 months ago and there have been a number of bug-fixes and other improvements since then.
Please take a look at the 1.0.4 roadmap: http://scipy.org/scipy/numpy/milestone/1.0.4 There are currently 23 active tickets that could use some attention. Hopefully, we can close as many of these as possible before the release (some need fixes, some can just be closed if appropriate). If you see something that must be addressed before the release, mark it with a severity level of 'blocker'.
Here is the proposed release schedule:
Monday (10/22)
* Development Freeze. After this point, only bug fixes should be checked in.
Sunday (10/28)
* Release Tagged. At this point, I will be asking for help from packagers and testers.
Thursday (11/1)
* Release Announced.
Thanks,
Hi Jarrod,

Would it be possible to merge some of the work I have done recently on cleaning up the configuration and so on (if nobody is against it, of course)? If this is considered too big a change, what is the plan for a 1.1 release, if any?

cheers,

David

David Cournapeau wrote:
Hi Jarrod,
Would it be possible to merge some of the work I have done recently on cleaning up the configuration and so on (if nobody is against it, of course)? If this is considered too big a change, what is the plan for a 1.1 release, if any?
Could you review what the disadvantages would be of including your changes in the trunk? Would there be any code breakage? What is the approximate size increase of the resulting NumPy?

I've tried to follow the discussion, but haven't kept up with all of it.

Thanks,

-Travis

Travis E. Oliphant wrote:
David Cournapeau wrote:
Hi Jarrod,
Would it be possible to merge some of the work I have done recently on cleaning up the configuration and so on (if nobody is against it, of course)? If this is considered too big a change, what is the plan for a 1.1 release, if any?
Could you review what the disadvantages are for including your changes into the trunk?
Would there be any code breakage? What is the approximate size increase of the resulting NumPy?
I've tried to follow the discussion, but haven't kept up with all of it.

There have been several things. I tried to keep the different changes separated for separate merges, to make the review easier.
cleanconfig branch
==================

For the cleanconfig branch, I tried to be detailed enough in the ticket where I posted the patch: the goal is to separate config.h into two headers, one public and one private. The point is to avoid polluting the C namespace when using numpy (the current config.h defines HAVE_* and SIZEOF_* symbols of the kind used by autotools). This also makes it more similar to autoheader, so that if/when my scons work is integrated, replacing the custom config generators with scons will be trivial. Since the patch is generated from the branch, it does not really make sense to apply the patch; just merge the branch (I have also updated the code since I posted the patch).

http://scipy.org/scipy/numpy/ticket/593

This is purely internal: it should not change anything for anyone on any platform; otherwise, it is a bug.

numpy.scons branch
==================

This is a much more massive change. Scons itself adds something like 350 kb to a bzip2 tarball. There are two milestones already in this branch.

- The first one just adds some facilities to numpy.distutils, and it should not break or change anything; it would, though, enable people to build ctypes-based extensions in a portable way.
- The second milestone: I don't think it would be appropriate to merge this at this point for a 1.0.x release. Normally, nothing is used by default (e.g. you have to use a different setup.py to build with the new method), but it is so easy to screw things up that, with such a short window (a few days), I would prefer to avoid potential breakage.

The first milestone, if merged, consists of all revisions up to 4177 (this has been tested on many platforms, including Mac OS X Intel, Windows with GNU compilers, Linux with GNU compilers, and more obscure configurations as well; any breakage should be trivial to fix anyway).

cheers,

David

On 10/19/07, David Cournapeau <david@ar.media.kyoto-u.ac.jp> wrote:
numpy.scons branch
This is a much more massive change. Scons itself adds something like 350 kb to a bzip tarball.
If the numpy build system will not depend on scons (is this right?), then is it strictly necessary to distribute scons with numpy?

--
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594

Lisandro Dalcin wrote:
On 10/19/07, David Cournapeau <david@ar.media.kyoto-u.ac.jp> wrote:
numpy.scons branch
This is a much more massive change. Scons itself adds something like 350 kb to a bzip tarball.
If the numpy build system will not depend on scons (is this right?), then is it strictly necessary to distribute scons with numpy?

Well, the choice is between requiring another dependency, with all the version problems that entails, and including it in the sources. Also, although numpy would not depend on scons in any way with the changes I suggested for 1.0.4, it would have helped detect problems on some platforms. And I hope that in the end, scons will be used for numpy (for 1.1?), once I finish the conversion.
I don't see any situation where adding 350 kb to the tarball could be an issue, so could you tell me what the problem would be?

cheers,

David

On 10/20/07, David Cournapeau <david@ar.media.kyoto-u.ac.jp> wrote:
And I hope that in the end, scons will be used for numpy (for 1.1?), once I finish the conversion.
I don't see any situation where adding 350 kb to the tarball could be an issue, so could you tell me what the problem would be?
Then if numpy is going to use scons to build itself in the near future, there is no problem, and including scons in the numpy tarball is good from the user's side.

Just a final observation: try to make it possible to reuse a scons other than the one included in the numpy tarball. Packagers would be happy with this.

Regards,

Lisandro Dalcin wrote:
On 10/20/07, David Cournapeau <david@ar.media.kyoto-u.ac.jp> wrote:
And I hope that in the end, scons will be used for numpy (for 1.1?), once I finish the conversion.
I don't see any situation where adding 350 kb to the tarball could be an issue, so could you tell me what the problem would be?
Then if numpy is going to use scons to build itself in the near future, there is no problem, and including scons in the numpy tarball is good from the user's side.
Just to be precise: I am the only one so far working on this (using scons instead of distutils), and no decision has been made about whether it will be used at all in the future.
Just a final observation: try to make it possible to reuse a scons other than the one included in the numpy tarball. Packagers would be happy with this.
Yes, I thought about that; it would not be too difficult to handle (and scons has a simple mechanism to ensure a given version), since scons is called externally (it could be an option given to the scons command inside distutils, for example).

cheers,

David
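[Editor's note: the version check David alludes to could live on the distutils side before handing the build to an external scons. A minimal, numpy-free sketch of the parsing half; the function names and the minimum version are illustrative, not actual numpy code:]

```python
import re

# Hypothetical minimum version; the thread does not name one.
MIN_SCONS_VERSION = (0, 97)

def parse_scons_version(version_output):
    # Pull the (major, minor) pair out of `scons --version` output,
    # which contains a token like "v0.97.0".
    m = re.search(r"v(\d+)\.(\d+)", version_output)
    if m is None:
        raise ValueError("could not parse scons version output")
    return (int(m.group(1)), int(m.group(2)))

def scons_is_recent_enough(version_output, minimum=MIN_SCONS_VERSION):
    # Tuple comparison gives the right ordering for (major, minor).
    return parse_scons_version(version_output) >= minimum
```

In practice one would capture the output of running the packager-provided `scons --version` and refuse to proceed (or fall back to the bundled copy) when the check fails.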

I've finally caught up with the discussion on aligned allocators for NumPy. In general I'm favorable to the idea, although it is not as easy to implement in 1.0.X because of the need to possibly change the C-API.

The Python solution is workable and would just require a function call on the Python side (which one could call from the C side as well with little difficulty; I believe Chuck Harris already suggested such a function). So, I think at least the Python functions are an easy addition for 1.0.4 (along with simple tests for alignment --- although a.ctypes.data % 16 is pretty simple and probably doesn't warrant a new function).

I'm a bit more resistant to the more involved C code in the patch provided with #568, because of the requested new additions to the C-API, but I understand the need. I'm currently also thinking heavily about using SIMD intrinsics in ufunc inner loops, but will not likely get those in before 1.0.4. Unfortunately, all ufuncs that take advantage of SIMD instructions will have to handle the unaligned portions which may occur even if the start of the array is aligned, so the problem of thinking about alignment does not go away there with a simplified function call.

A simple addition is NPY_ALIGNED_16 and NPY_ALIGNED_32 flags for PyArray_FromAny that could adjust the data pointer as needed to get at least those kinds of alignment. We can't change the C-API for PyArray_FromAny to accept an alignment argument, and I'm pretty loath to do that even for 1.1.

Is there a consensus? What do others think of the patch in ticket #568? Is there a need to add general-purpose aligned memory allocators to NumPy without a corresponding array allocator? I would think PyArray_FromAny and PyArray_NewFromDescr with aligned memory are more important, which I think we could do with flag bits.

-Travis O.
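[Editor's note: the "Python solution" mentioned here amounts to over-allocating a buffer and offsetting its start. A rough sketch of that idea using only the stdlib; the function name and the ctypes machinery are illustrative, not the proposed numpy API:]

```python
import ctypes

def aligned_buffer(nbytes, alignment=16):
    # Over-allocate by `alignment` extra bytes, then offset the start so
    # the returned address is a multiple of `alignment`.  This is the
    # over-allocate-and-slice trick discussed in the thread.
    raw = ctypes.create_string_buffer(nbytes + alignment)
    base = ctypes.addressof(raw)
    offset = (-base) % alignment
    view = (ctypes.c_char * nbytes).from_buffer(raw, offset)
    # Return `raw` too: the caller must keep it alive alongside the view.
    return view, raw

buf, _keepalive = aligned_buffer(1024, alignment=16)
assert ctypes.addressof(buf) % 16 == 0
```

A numpy array could then be built over such a buffer on the Python side; the C-API question Travis raises is precisely whether the same adjustment should instead happen inside PyArray_FromAny via flag bits.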

Travis E. Oliphant wrote:
I've finally caught up with the discussion on aligned allocators for NumPy. In general I'm favorable to the idea, although it is not as easy to implement in 1.0.X because of the need to possibly change the C-API.
The Python solution is workable and would just require a function call on the Python side (which one could call from the C-side as well with little difficulty, I believe Chuck Harris already suggested such a function). So, I think at least the Python functions are an easy addition for 1.0.4 (along with simple tests for alignment --- although a.ctypes.data % 16 is pretty simple and probably doesn't warrant a new function)
I'm a bit more resistant to the more involved C-code in the patch provided with #568, because of the requested new additions to the C-API, but I understand the need. I'm currently also thinking heavily about using SIMD intrinsics in ufunc inner loops but will not likely get those in before 1.0.4. Unfortunately, all ufuncs that take advantage of SIMD instructions will have to handle the unaligned portions which may occur even if the start of the array is aligned, so the problem of thinking about alignment does not go away there with a simplified function call.
I don't know anything about the ufunc machinery yet, but I guess you need to know the alignment of a given buffer, right? This can be done easily. Actually, one problem I encountered (if I remember correctly) was that there is no pure C library facility in numpy: by that, I mean a simple C library, independent of Python, which could be reused by C code using numpy. For example, if we want to start thinking about using SIMD, I think it would be good to support the basics in a pure C library. I don't see any downside to this approach; is there one?
A simple addition is an NPY_ALIGNED_16 and NPY_ALIGNED_32 flag for the PyArray_From_Any that could adjust the data-pointer as needed to get at least those kinds of alignment.
We can't change the C-API for PyArray_FromAny to accept an alignment flag, and I'm pretty loath to do that even for 1.1.
Is there a consensus? What do others think of the patch in ticket #568? Is there a need to add general-purpose aligned memory allocators to NumPy without a corresponding array_allocator?
Having the NPY_ALIGNED_* flags would already be enough for most cases (for SIMD, you rarely, if ever, need more than 32-byte alignment AFAIK). Those flags plus general-purpose memory allocators (in a C support library) would be enough to do everything we need for FFTs, for example.

cheers,

David
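[Editor's note: the a.ctypes.data % 16 test mentioned earlier generalizes to a tiny predicate over a raw address. A minimal sketch; the power-of-two bitmask form is a standard trick, not numpy API:]

```python
def is_aligned(address, alignment=16):
    # For power-of-two alignments, `address % alignment` reduces to a
    # bitmask, which is what an inner loop would actually use.
    assert alignment > 0 and alignment & (alignment - 1) == 0
    return address & (alignment - 1) == 0

# On a NumPy array, one would pass a.ctypes.data as the address.
```

This is also the check a SIMD ufunc loop would need at runtime to peel off the unaligned head of an array before entering the vectorized body.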

Hello,

I just got back from a camping trip and haven't had time to check the status of the remaining open tickets, but it looks like there are a few tickets that are ready to close. So I am pushing the schedule back a week (with the hope that we can close a few more tickets before the 1.0.4 release):

Sunday (11/4)
* Release Tagged. At this point, I will be asking for help from packagers and testers.

Wednesday (11/7)
* Release Announced.

If you have time this week or weekend, please take a look at the open tickets and see if you can help close one: http://projects.scipy.org/scipy/numpy/milestone/1.0.4

Thanks,
participants (4)
- David Cournapeau
- Jarrod Millman
- Lisandro Dalcin
- Travis E. Oliphant