[Numpy-discussion] odd performance of sum?

eat e.antero.tammi at gmail.com
Thu Feb 10 16:32:30 EST 2011


Hi Robert,

On Thu, Feb 10, 2011 at 10:58 PM, Robert Kern <robert.kern at gmail.com> wrote:

> On Thu, Feb 10, 2011 at 14:29, eat <e.antero.tammi at gmail.com> wrote:
> > Hi Robert,
> >
> > On Thu, Feb 10, 2011 at 8:16 PM, Robert Kern <robert.kern at gmail.com>
> wrote:
> >>
> >> On Thu, Feb 10, 2011 at 11:53, eat <e.antero.tammi at gmail.com> wrote:
> >> > Thanks Chuck,
> >> >
> >> > for replying. But don't you still find it very odd that dot
> >> > outperforms sum on your machine? To put it simply: why can't sum
> >> > outperform dot? Whatever architecture (computer, cache) you have, it
> >> > makes no sense at all that performing significantly fewer
> >> > instructions ends up taking more time ;-).
> >>
> >> These days, the determining factor is less often instruction count
> >> than memory latency, and the optimized BLAS implementations of dot()
> >> heavily optimize the memory access patterns.
> >
> > Can't we have this as well with simple sum?
>
> It's technically feasible to accomplish, but as I mention later, it
> entails quite a large cost. Those optimized BLASes represent many
> man-years of effort

Yes, I acknowledge this. But didn't they then ignore something simpler,
like sum, which could actually benefit from exactly the same kind of
optimizations?
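
Just to be concrete, the kind of comparison I have in mind is roughly the
following (only a sketch; the array shape and the repetition count are
arbitrary choices of mine):

    import numpy as np
    from timeit import timeit

    m, n = 1000, 1000
    A = np.random.rand(m, n)
    v = np.ones(n)

    # plain sum over the last axis, via the ufunc reduce machinery
    t_sum = timeit(lambda: A.sum(axis=1), number=100)
    # the "same" reduction expressed as a BLAS-backed matrix-vector product
    t_dot = timeit(lambda: np.dot(A, v), number=100)

    print('sum: %.4f s   dot: %.4f s' % (t_sum, t_dot))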

> and cause substantial headaches for people
> building and installing numpy.

I appreciate this. No doubt at all.

> However, they are frequently worth it
> because those operations are often bottlenecks in whole applications.
> sum(), even in its stupidest implementation, rarely is. In the places
> where it is a significant bottleneck, an ad hoc implementation in C or
> Cython or even FORTRAN for just that application is pretty easy to
> write.

But here I have to disagree; I think that at least I (if not the majority
of numpy users) would rather not dwell on such details, nor do I have the
skills, time, or resources to do so. I'm sorry, but I have to restate that
it's quite reasonable to expect sum to outperform dot in any case. Let's
start making the kind of changes that would enable sum to outperform dot.

> You can gain speed by specializing to just your use case, e.g.
> contiguous data, summing down to one number, or summing along one axis
> of only 2D data, etc. There's usually no reason to try to generalize
> that implementation to put it back into numpy.

Yes, I would really like to specialize for my own case, but without leaving
the Python realm; something along the lines of the sketch below is what I
have in mind.
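
For instance (a rough sketch only; the helper name row_sum and the
dtype/contiguity checks are just my choices), the BLAS-backed dot can
already be used from pure Python for the common contiguous 2-D case, with a
fallback to the plain sum otherwise:

    import numpy as np

    def row_sum(A):
        """Sum a 2-D array along its last axis.

        Uses a BLAS-backed matrix-vector product for C-contiguous
        float64 arrays, and falls back to ndarray.sum otherwise.
        """
        A = np.asarray(A)
        if A.ndim == 2 and A.flags['C_CONTIGUOUS'] and A.dtype == np.float64:
            return np.dot(A, np.ones(A.shape[1], dtype=A.dtype))
        return A.sum(axis=1)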


Thanks,
eat

>
> >> Additionally, the number
> >> of instructions in your dot() probably isn't that many more than the
> >> sum(). The sum() is pretty dumb
> >
> > But does it need to be?
>
> As I also allude to later in my email, no, but there are still costs
> involved.
>
> >> and just does a linear accumulation
> >> using the ufunc reduce mechanism, so (m*n-1) ADDs plus quite a few
> >> instructions for traversing the array in a generic manner. With fused
> >> multiply-adds, being able to assume contiguous data and ignore the
> >> numpy iterator overhead, and applying divide-and-conquer kernels to
> >> arrange sums, the optimized dot() implementations could have a
> >> comparable instruction count.
> >
> > Couldn't sum benefit with similar logic?
>
> Etc. I'm not going to keep repeating myself.
>
> --
>  Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>   -- Umberto Eco
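
P.S. By "similar logic" above I mean, roughly, a divide-and-conquer
accumulation like the sketch below (pure Python, only to illustrate the
idea, not to be fast; the function name and block size are my own choices):

    import numpy as np

    def pairwise_sum(x, block=128):
        # Recursively split the array and add the halves, so that partial
        # sums stay balanced instead of accumulating strictly left to right.
        n = len(x)
        if n <= block:
            return np.add.reduce(x)
        half = n // 2
        return pairwise_sum(x[:half], block) + pairwise_sum(x[half:], block)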