ufunc for sum of squared difference

I was reading <http://notes-on-cython.readthedocs.io/en/latest/std_dev.html> and got to thinking about whether a ufunc could compute the sum of squared differences in a single pass, without a temporary array. The Python code below demonstrates a possible approach:

    import numpy as np

    x = np.arange(10)
    c = 1.0

    def add_square_diff(x1, x2):
        return x1 + (x2 - c)**2

    ufunc = np.frompyfunc(add_square_diff, 2, 1)

    # reduce() consumes x[0] as-is, so correct for it afterwards
    print(ufunc.reduce(x) - x[0] + (x[0] - c)**2)
    print(np.sum(np.square(x - c)))

I have (at least) 4 questions:

1. Is it possible to pass run-time constants to a ufunc written in C for use in its inner loop, and if so, how?
2. Is it possible to pass an initial value to reduce, to avoid the clean-up required for the first element?
3. Does that ufunc work, or are there special cases which cause it to fall apart?
4. Would a very specialized ufunc such as this be considered for incorporation in numpy, since it would reduce the time and memory use of functions already in numpy?

Thank you,
Matt
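P.S. One possible workaround for question 2, sketched with the names above: prepend the initial value to the data, so reduce never consumes a raw first element.

    # prepend an explicit initial value (here 0.0, the additive identity)
    # so x[0] goes through add_square_diff like every other element
    print(ufunc.reduce(np.concatenate(([0.0], x))))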

On Fri, 2016-11-04 at 13:11 -0400, Matthew Harrigan wrote:
I don't think it's anticipated, since a ufunc could in most cases use a third argument, but a 3-arg ufunc can't be reduced. Not sure if there might be some trickery possible.
2. Is it possible to pass an initial value to reduce, to avoid the clean-up required for the first element?
Normally this is the identity, but the identity can only be 0, 1, or -1 right now, I think. The identity is what the output array gets initialized with (which effectively makes it the first value passed into the inner loop).
I might be mixing things up, but IIRC the single-pass approach has bad numerical accuracy, so I doubt it would be a good default algorithm. - Sebastian
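PS: To make this concrete: assuming the single-pass trick I mean is the textbook rearrangement (sum of squares minus n times the squared mean), rather than a subtract-first loop, the cancellation is easy to demonstrate:

    import numpy as np
    x = np.random.rand(10**6) + 1e6                 # large offset, small spread
    two_pass = np.sum((x - x.mean())**2)            # subtract first, then square
    one_pass = np.sum(x**2) - len(x)*x.mean()**2    # textbook single pass
    print(two_pass, one_pass)                       # one_pass loses most of its digits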

I didn't notice identity before; it seems frompyfunc always sets it to None. If it were zero, maybe it would work as desired here.

In the "writing your own ufunc" doc, I was wondering if the pointer to data could be used to get a constant at run time. If not, what could it be used for?

    static void double_logit(char **args, npy_intp *dimensions,
                             npy_intp *steps, void *data)

Why would the numerical accuracy be any different? The subtraction and square operations look identical, and I thought np.sum just calls np.add.reduce, so the reduction step uses the same code and would therefore have the same accuracy.

Thanks

On Fri, Nov 4, 2016 at 1:56 PM, Sebastian Berg <sebastian@sipsolutions.net> wrote:

I started a ufunc to compute the sum of squared differences here: <https://gist.github.com/mattharrigan/6f678b3d6df5efd236fc23bfb59fd3bd>. It is about 4x faster and uses half the memory compared to np.sum(np.square(x-c)). This could significantly speed up common operations like std and var (where c = np.mean(x)). It is faster because it makes a single pass instead of three, and also because the inner loop is specialized for the common reduce case, which might be an optimization worth considering more generally. I think I have answered my questions #2 and #3 above.

Can someone please point me to an example where "data" was used in a ufunc inner loop? How can that value be set at run time?

Thanks

On Fri, Nov 4, 2016 at 5:33 PM, Sebastian Berg <sebastian@sipsolutions.net> wrote:
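In Python terms, the single-pass computation in the gist looks roughly like this (the gist's actual inner loop is C; this only shows the shape of the reduce specialization):

    # with a stride-0 output, the accumulator can live in a local variable
    # (a register, in C) instead of being re-read from memory each iteration
    def sum_square_diff(x, c):
        acc = 0.0
        for v in x:
            d = v - c
            acc += d*d
        return acc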

Here is an old unfinished PR adding max_abs, which has a similar purpose to yours: https://github.com/numpy/numpy/pull/5509

As far as I know, data is set during ufunc construction and is part of the ufunc object, so it is not really suitable for passing in constants at call time. Its intended purpose is to pass in functions to be called by generic loops like PyUFunc_d_d.

Having an option to mark arguments of a ufunc as special in reductions could be useful; e.g. it would allow a potential (fused-)multiply-and-add ufunc to be used to implement a weighted sum.

On 11.11.2016 17:25, Matthew Harrigan wrote:
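To spell out the weighted-sum idea in plain Python (the reduce semantics here are hypothetical, not current numpy; this is just what marking one argument as the running total would have to mean):

    # fma(acc, a, b) = acc + a*b; reducing it over (x, w) with acc held
    # "special" yields a one-pass weighted sum
    def fma(acc, a, b):
        return acc + a*b

    def weighted_sum(x, w, acc=0.0):
        for xi, wi in zip(x, w):
            acc = fma(acc, xi, wi)
        return acc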

My possibly poorly-thought-out API would be something like np.add_sq_diff.set_c(42.0).reduce(x): basically, add a setter method which returns self. Thanks for explaining what data was intended to do.

All of this is really just a dance around the requirement that reduce only works on binary functions. Would there be any interest in extending reduce to allow arbitrary numbers of input and output arguments?

On Fri, Nov 11, 2016 at 1:38 PM, Julian Taylor <jtaylor.debian@googlemail.com> wrote:
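At the Python level, a closure already gets most of the way to a set_c-style API. A sketch (it rebuilds the ufunc for each constant and does nothing for a C inner loop):

    import numpy as np

    def make_add_square_diff(c):
        # bind the run-time "constant" in a closure instead of ufunc data
        def add_square_diff(x1, x2):
            return x1 + (x2 - c)**2
        return np.frompyfunc(add_square_diff, 2, 1)

    x = np.arange(10.0)
    print(make_add_square_diff(42.0).reduce(x) - x[0] + (x[0] - 42.0)**2)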

On Fri, 11 Nov 2016 11:25:58 -0500 Matthew Harrigan <harrigan.matthew@gmail.com> wrote:
Hi Matt,

Using *blas* you already win a factor of two (maybe more, depending on your blas implementation):

    % python -m timeit -s "import numpy as np;x=np.linspace(0,1,int(1e7))" "np.sum(np.square(x-2.))"
    10 loops, best of 3: 135 msec per loop

    % python -m timeit -s "import numpy as np;x=np.linspace(0,1,int(1e7))" "y=x-2.;np.dot(y,y)"
    10 loops, best of 3: 70.2 msec per loop

Cheers,
-- Jérôme Kieffer

Yeah, but it's not so obvious what's happening "under the hood". Consider this (on an old Win7 machine):

    Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AMD64)]
    np.__version__
    '1.11.1'

On Mon, Nov 14, 2016 at 10:38 AM, Jerome Kieffer <Jerome.Kieffer@esrf.fr> wrote:
    x = np.linspace(0, 1, int(1e6))
    timeit np.sum(np.square(x - 2.))
    10 loops, best of 3: 23 ms per loop

    y = x - 2.
    timeit np.dot(y, y)
    The slowest run took 18.60 times longer than the fastest.
    This could mean that an intermediate result is being cached.
    1000 loops, best of 3: 1.78 ms per loop

    timeit np.dot(y, y)
    1000 loops, best of 3: 1.73 ms per loop

Best,
eat

On Mon, 14 Nov 2016 22:38:25 +0200 eat <e.antero.tammi@gmail.com> wrote:
What matters is the "blas" library used under the hood, hence the options passed to numpy at compile time. I noticed 20x differences depending on the blas version. But more importantly:

* results are the same (to the limit of numerical precision)
* dot() was always faster than sum(square()), by anywhere from a little to a lot

I agree this may change in future versions of numpy.

Cheers,
-- Jérôme Kieffer

Still slower, and worse, it uses 2x the memory for the intermediate temporary array.

I propose allowing implicit reductions with ufuncs: specifically, if out is provided with shape[axis] = 1, then pass it on to the ufunc with a stride of 0. That should allow this to work:

    x = np.arange(10)

    def add_square_diff(x1, x2, x3):
        return x1 + (x2 - x3)**2

    result = np.zeros(1)
    np.frompyfunc(add_square_diff, 3, 1)(result, x, np.mean(x), result)

Essentially it creates a reduce for a function which isn't binary. I think this would be generally useful. For instance, finding the min and max in one pass would be nice:

    def minmax(x1, x2, x3):
        return min(x1, x3), max(x2, x3)

    minn = np.array([np.inf])
    maxx = np.array([-np.inf])
    np.frompyfunc(minmax, 3, 2)(minn, maxx, x, minn, maxx)

Note it also allows arbitrary initial values or identities to be specified, possibly determined at run time. I think this would make ufuncs even more universal.

On Mon, Nov 14, 2016 at 3:38 AM, Jerome Kieffer <Jerome.Kieffer@esrf.fr> wrote:
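A pure-Python model of the proposed semantics (none of this is current numpy behavior; it just pins down what a stride-0 out would compute):

    import numpy as np

    # 'out' doubles as the first input and, with an effective stride of 0,
    # every element of the other inputs folds into the same accumulator
    def implicit_reduce(pyfunc, out, *inputs):
        for vals in zip(*np.broadcast_arrays(*inputs)):
            out[0] = pyfunc(out[0], *vals)
        return out

    x = np.arange(10.0)
    result = implicit_reduce(lambda acc, xi, c: acc + (xi - c)**2,
                             np.zeros(1), x, x.mean())
    print(result[0], np.sum((x - x.mean())**2))   # should agree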

On Mon, Nov 14, 2016 at 5:40 PM, Matthew Harrigan <harrigan.matthew@gmail.com> wrote:
Essentially it creates a reduce for a function which isn't binary. I think this would be generally useful.
NumPy already has a generic enough interface for creating such ufuncs. In fact, it's called a "generalized ufunc": https://docs.scipy.org/doc/numpy/reference/c-api.generalized-ufuncs.html I think you could already write "implicit reductions" using gufuncs?
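For example, a sketch of such an implicit reduction using np.vectorize's gufunc-style signature support (a slow Python stand-in for a compiled gufunc; the signature= argument needs a newer numpy, 1.12+):

    import numpy as np

    # core signature (n),()->(): reduce over the last axis in one pass
    def _ssd(x, c):
        return ((x - c)**2).sum()

    ssd = np.vectorize(_ssd, signature='(n),()->()')

    a = np.arange(12.0).reshape(3, 4)
    print(ssd(a, a.mean(axis=1)))   # one result per row, no axis argument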

Thanks for pointing me to that. I think it's a great match for the 1D case, like the inner product of two arrays, in terms of signature. I don't see how it works with higher-dimensional arrays, especially with something like the axis parameter in ufunc.reduce. With what I proposed, for an array of shape (M, N), for example, result.shape could be (1,), (M, 1), or (1, N), for reducing over the flattened array or either axis. Can you do something like that with a gufunc, or do you need to iterate gufunc calls over slices/views? Thanks again.

On Mon, Nov 14, 2016 at 8:49 PM, Stephan Hoyer <shoyer@gmail.com> wrote:
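On the axis question, one pattern, reusing the ssd stand-in sketched above: a gufunc always reduces over its trailing core axis, so an explicit axis argument becomes a moveaxis before the call.

    # reduce over axis 0 of a (3, 4) array by moving it into core position
    a = np.arange(12.0).reshape(3, 4)
    print(ssd(np.moveaxis(a, 0, -1), a.mean(axis=0)))   # shape (4,)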

participants (6)

- eat
- Jerome Kieffer
- Julian Taylor
- Matthew Harrigan
- Sebastian Berg
- Stephan Hoyer