[Numpy-discussion] could anyone check on a 32bit system?

Matthew Brett matthew.brett at gmail.com
Wed May 1 16:19:39 EDT 2013


On Wed, May 1, 2013 at 1:01 PM, Sebastian Berg
<sebastian at sipsolutions.net> wrote:
> On Wed, 2013-05-01 at 15:29 -0400, Yaroslav Halchenko wrote:
>> just for completeness... I haven't yet double checked if I have done it
>> correctly but here is the bisected commit:
>> aed9925a9d5fe9a407d0ca2c65cb577116c4d0f1 is the first bad commit
>> commit aed9925a9d5fe9a407d0ca2c65cb577116c4d0f1
>> Author: Mark Wiebe <mwiebe at enthought.com>
>> Date:   Tue Aug 2 13:34:13 2011 -0500
>>     ENH: ufunc: Rewrite PyUFunc_Reduce to be more general and easier to adapt to NA masks
>>     This generalizes the 'axis' parameter to accept None or a list of
>>     axes on which to do the reduction.
>> :040000 040000 2bdd71a1ea60c0dbfe370c77f69724fab28038e1 44f54a15f480ccaf519d10e9c42032de86bd0dca M      numpy
>> bisect run success
>> FWIW ( ;-) ):
> There really is no point discussing here, this has to do with numpy
> doing iteration order optimization, and you actually *want* this. Let's
> for a second assume that the old behavior was better; then the next guy
> is going to ask: "Why is np.add.reduce(array, axis=0) so much slower
> than reduce(array, np.add)?". This is a huge speed improvement from Mark's
> new iterator for reductions over the slow axes, so instead of trying to
> track "regressions" down, I think the right thing is to say kudos for
> making this improvement :).
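As a side note on the commit message quoted above: the generalized `axis` parameter it describes is what lets reductions accept `None` or several axes at once in later NumPy releases. A minimal sketch (the array values here are just an illustration, not from the thread):

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# Reduce over several axes at once (axes 0 and 2), leaving axis 1:
print(a.sum(axis=(0, 2)))   # -> [ 60  92 124]

# axis=None reduces over all axes, same as a.sum():
print(a.sum(axis=None))     # -> 276
```

Multi-axis reduction avoids chaining two separate `sum` calls and lets the iterator pick an efficient traversal order, which is exactly the optimization Sebastian refers to.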

I don't believe Yarik meant his bisection as a criticism, but rather
as an aid to full understanding.

Is it an issue that Fortran and C contiguous arrays give different
rounding error for the sums?
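The rounding question above can be sketched as follows. This is an illustration, not code from the thread: C- and Fortran-contiguous copies of the same float32 data hold identical values, but an order-optimized reduction may accumulate them in a different sequence, so the results can differ in the last bits while both remain valid float32 sums.

```python
import numpy as np

rng = np.random.RandomState(0)
a_c = rng.rand(400, 400).astype(np.float32)  # C-contiguous
a_f = np.asfortranarray(a_c)                 # same values, Fortran order

s_c = a_c.sum(axis=0)
s_f = a_f.sum(axis=0)

# The two sums agree to float32 precision...
print(np.allclose(s_c, s_f))
# ...but need not be bit-identical, since the iteration order
# (and hence the rounding of intermediate partial sums) may differ.
print(bool((s_c == s_f).all()))
```

Whether the bit patterns actually differ depends on the NumPy version and array shape; the point is only that neither result is "wrong", each is the correctly rounded outcome of a particular summation order.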
