Add nullifier argument to functools.reduce?
I'd like to add an additional optional argument to functools.reduce. The argument is the "nullifier" of the reducing operation. It is a value such that function(nullifier, anything) returns nullifier. For example, if function(x, y) computes x*y, the nullifier is 0. If function(x, y) is the intersection of the sets x and y, the nullifier is the empty set.

The argument would allow reduce to "short circuit" its calculation. When reduce encounters the nullifier, it can return immediately. This can provide a significant improvement in performance in some cases.

The change is simple. Here, for example, is the "rough equivalent" for functools.reduce from the docs:

    def reduce(function, iterable, initializer=None):
        it = iter(iterable)
        if initializer is None:
            try:
                initializer = next(it)
            except StopIteration:
                raise TypeError('reduce() of empty sequence with no initial value')
        accum_value = initializer
        for x in it:
            accum_value = function(accum_value, x)
        return accum_value

Here's how it looks with the optional nullifier argument; the only changes are the new argument and an 'if' statement in the 'for' loop.

    def reduce(function, iterable, initializer=None, nullifier=None):
        it = iter(iterable)
        if initializer is None:
            try:
                initializer = next(it)
            except StopIteration:
                raise TypeError('reduce() of empty sequence with no initial value')
        accum_value = initializer
        for x in it:
            if nullifier is not None and accum_value == nullifier:
                break
            accum_value = function(accum_value, x)
        return accum_value

(It might be better to use a distinct singleton for the default value of nullifier, to allow None to be a valid nullifier.)

The actual implementation is in the extension module _functoolsmodule.c. It looks like the changes to the C code should be straightforward.

Warren
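P.S. To make the short circuit concrete, here's a toy run against the pure-Python version above (my own illustration, not part of the proposed patch):

    from operator import mul

    data = [1, 2, 3, 0] + [4] * 1000000
    # accum_value hits the nullifier (0) at the fourth element, so the
    # loop breaks immediately instead of scanning the remaining million 4s.
    print(reduce(mul, data, nullifier=0))  # -> 0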
On Sat, Aug 23, 2014 at 8:30 AM, Warren Weckesser <warren.weckesser@gmail.com> wrote:
The argument would allow reduce to "short circuit" its calculation. When reduce encounters the nullifier, it can return immediately. This can provide a significant improvement in performance in some cases.
If you want something like this you should probably use an explicit loop. And I say that as someone who *likes* reduce. Loops are far more flexible. That said, I guess I am +0. Does this feature exist in any other programming languages? -- Devin
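P.S. For example, the explicit-loop version of a short-circuiting product is only a few lines anyway (an untested sketch):

    def product(iterable):
        result = 1
        for x in iterable:
            result *= x
            if result == 0:  # the "nullifier": no later element can change it
                return result
        return result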
On 23 August 2014 16:30, Warren Weckesser <warren.weckesser@gmail.com> wrote:
I'd like to add an additional optional argument to functools.reduce. The argument is the "nullifier" of the reducing operation. It is a value such that function(nullifier, anything) returns nullifier. For example, if function(x, y) computes x*y, the nullifier is 0. If function(x, y) is the intersection of the sets x and y, the nullifier is the empty set.
The argument would allow reduce to "short circuit" its calculation. When reduce encounters the nullifier, it can return immediately. This can provide a significant improvement in performance in some cases.
This hasn't been given a use-case and seems like needless complexity. It also seems far too magical. -1 for those reasons. If its only purpose is to speed up "reduce" it seems like a very bad trade-off, as *everywhere else* has a new now-sanctioned behaviour to worry about.

A better answer in my opinion would be something more like "reduce(..., stop_on=sentinel)". This could even allow more optimisations than your idea:

    reduce(operator.mul, iterable, stop_on=0)
    reduce(operator.add, iterable, stop_on=float("nan"))

etc.

I would be -0 on that, because there hasn't been a mentioned use-case.
On Sat, Aug 23, 2014 at 9:01 AM, Joshua Landau <joshua@landau.ws> wrote:
A better answer in my opinion would be something more like "reduce(..., stop_on=sentinel)". This could even allow more optimisations than your idea:
    reduce(operator.mul, iterable, stop_on=0)
    reduce(operator.add, iterable, stop_on=float("nan"))
etc.
How is this any different? -- Devin
On 23 August 2014 17:15, Devin Jeanpierre <jeanpierreda@gmail.com> wrote:
On Sat, Aug 23, 2014 at 9:01 AM, Joshua Landau <joshua@landau.ws> wrote:
A better answer in my opinion would be something more like "reduce(..., stop_on=sentinel)".
How is this any different?
It's not. I'm a fool who doesn't read. Apologies for the noise.
On Sat, Aug 23, 2014 at 12:01 PM, Joshua Landau <joshua@landau.ws> wrote:
On 23 August 2014 16:30, Warren Weckesser <warren.weckesser@gmail.com> wrote:
I'd like to add an additional optional argument to functools.reduce. The argument is the "nullifier" of the reducing operation. It is a value such that function(nullifier, anything) returns nullifier. For example, if function(x, y) computes x*y, the nullifier is 0. If function(x, y) is the intersection of the sets x and y, the nullifier is the empty set.
The argument would allow reduce to "short circuit" its calculation. When reduce encounters the nullifier, it can return immediately. This can provide a significant improvement in performance in some cases.
This hasn't been given a use-case and seems like needless complexity. It also seems far too magical. -1 for those reasons.
I guess "magical" is a matter of perspective ("Any sufficiently advanced technology..." and all that).
If its only purpose is to speed up "reduce" it seems like a very bad trade-off, as *everywhere else* has a new now-sanctioned behaviour to worry about.
I don't understand what you mean. Could you elaborate?
A better answer in my opinion would be something more like "reduce(..., stop_on=sentinel)". This could even allow more optimisations than your idea:
    reduce(operator.mul, iterable, stop_on=0)
    reduce(operator.add, iterable, stop_on=float("nan"))

etc.
Do you mean it would stop when the sentinel was encountered in iterable? That wouldn't help in an example such as

    reduce(lambda x, y: x & y, [{1,2}, {3, 4}, {5, 6}], nullifier={})

The nullifier is the empty set, but the empty set does not occur in the iterable.

(Joshua, sorry for sending this to you again. Forgot to "Reply all" the first time.)
I would be -0 on that, because there hasn't been a mentioned use-case.
On 23 August 2014 17:27, Warren Weckesser <warren.weckesser@gmail.com> wrote:
On Sat, Aug 23, 2014 at 12:01 PM, Joshua Landau <joshua@landau.ws> wrote:
On 23 August 2014 16:30, Warren Weckesser <warren.weckesser@gmail.com> wrote:
<sensible stuff>
<not so sensible stuff>
<sensible stuff>
I suggest you just ignore what I wrote, as I was obviously too out of it to read properly. Let me try again: "This sounds good, but it lacks a use-case and has a somewhat poor name."
On Sun, Aug 24, 2014 at 2:27 AM, Warren Weckesser <warren.weckesser@gmail.com> wrote:
That wouldn't help in an example such as
reduce(lambda x, y: x & y, [{1,2}, {3, 4}, {5, 6}], nullifier={})
The nullifier is the empty set, but the empty set does not occur in the iterable.
Caution: Your nullifier is an empty dict, not an empty set, if you spell it that way. I think this is a cute concept, but not something that is going to be needed all that often. Most Python programs don't even use reduce() in its default form, much less need a specific optimization. ChrisA
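P.S. To spell the gotcha out: {} is dict literal syntax, and there is no empty-set literal:

    >>> type({})
    <class 'dict'>
    >>> type(set())
    <class 'set'>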
On Sun, Aug 24, 2014 at 8:05 PM, Chris Angelico <rosuav@gmail.com> wrote:
On Sun, Aug 24, 2014 at 2:27 AM, Warren Weckesser <warren.weckesser@gmail.com> wrote:
That wouldn't help in an example such as
reduce(lambda x, y: x & y, [{1,2}, {3, 4}, {5, 6}], nullifier={})
The nullifier is the empty set, but the empty set does not occur in the iterable.
Caution: Your nullifier is an empty dict, not an empty set, if you spell it that way.
Ah, right--thanks! It should be `nullifier=set()`.
I think this is a cute concept, but not something that is going to be needed all that often. Most Python programs don't even use reduce() in its default form, much less need a specific optimization.
ChrisA
This "nullifier" is mathematically called an "absorbing element", but saying an "attractor" might be a little more general. I.e. think of a local optimization problem, where multiple local min/max points might occur. If you reach one, further iteration won't budge from that point, even if it's not the "global absorbing element." However, any such argument--including the much more useful sentinel/'stop_on' idea--significantly changes the semantics of reduce. In particular, as is reduce() always consumes its iterator. Under these changes, it may or may not consume the iterator, depending on what elements occur. Given that one can easily write one's own three line wrapper 'reduce_with_attractor()' for this special semantics which hasn't been given a use case, I can't see a point of including the argument in the stdlib. -1 on proposal. On Sat, Aug 23, 2014 at 8:30 AM, Warren Weckesser < warren.weckesser@gmail.com> wrote:
On Sat, Aug 23, 2014 at 10:25:06AM -0700, David Mertz wrote:
This "nullifier" is mathematically called an "absorbing element", but saying an "attractor" might be a little more general. I.e. think of a local optimization problem, where multiple local min/max points might occur. If you reach one, further iteration won't budge from that point, even if it's not the "global absorbing element."
Yes. Note that arbitrary systems may have more than one attractor, including cycles (a -> b -> c -> a) and even "strange attractors" of infinite complexity. It's probably too much to expect reduce to deal with cycles, but a really general solution should at least deal with multiple attractors.
Given that one can easily write one's own three line wrapper 'reduce_with_attractor()' for this special semantics
I don't think you can. Although it's 3:30am here and it's past my bed time, so perhaps I'm wrong. The problem is that the wrapper cannot see the reduced value until reduce() returns, and we want to short-circuit the call once the reduced value is the attractor.

Still, easy or not, I think the semantics are too specialised to justify in the standard library, especially given that Guido doesn't like reduce and it almost got removed. A better solution, I think, would be to stick this reduce_with_attractor() in some third-party functional tool library.

-- Steven
It's true, Steven, that we'd have to use itertools.accumulate() rather than functools.reduce() to do the heavy lifting here (as Bob suggested). But using that, here are three lines:

    from itertools import *
    from operator import add

    def reduce_with_attractor(func, it, start=ℵ, end_if=ℵ):
        it = iter(it)
        start = start if start≢ℵ else it.__next__()
        return list(takewhile(λ x: x≢end_if, accumulate(chain([start],it), func)))[-1]

Oh, that's cute... my copy-paste was using https://github.com/ehamberg/vim-cute-python (somewhat customized further by me). In regular ASCII:

    def reduce_with_attractor(func, it, start=None, end_if=None):
        it = iter(it)
        start = start if start!=None else it.__next__()
        return list(takewhile(lambda x: x!=end_if, accumulate(chain([start],it), func)))[-1]

This gives you the accumulation up-to-but-not-including the attractor. I guess the OP wanted to return the attractor itself (although that seems slightly less useful to me). I'd have to play around to get that version in three lines... maybe it would take 4 or 5 lines to do it.
Completely off topic, but the symbol substitutions in https://github.com/ehamberg/vim-cute-python/tree/moresymbols used '∅' for both 'set()' and 'None' which seemed confusing to me. Using aleph ('ℵ') both somewhat resembles a Roman 'N(one)' and also makes a certain kind of sense if you think of aleph0 as the first "inaccessible cardinal" in set theory.

It's too bad I didn't have any loops or 'in' tests in my three liner, since I love the look of 'i ∈ collection' on screen.
On Sat, Aug 23, 2014 at 11:43:42AM -0700, David Mertz wrote:
    def reduce_with_attractor(func, it, start=None, end_if=None):
        it = iter(it)
        start = start if start!=None else it.__next__()
        return list(takewhile(lambda x: x!=end_if, accumulate(chain([start],it), func)))[-1]
A couple of points:

- Don't use it.__next__(), use the built-in next(it).

- To match the behaviour of reduce, you need to catch the StopIteration and raise TypeError.

- Your implementation eagerly generates a potentially enormous list of intermediate results, which strikes me as somewhat unfortunate for a functional tool like reduce. In other words, for some inputs, this is going to perform like a dog, generating a HUGE list up front, then throwing it all away except for the final value.
This gives you the accumulation up-to-but-not-including the attractor. I guess the OP wanted to return the attractor itself (although that seems slightly less useful to me).
Not really. His use-case seems to be to short-cut a lot of unnecessary calculations, e.g. suppose you write product() as reduce(operator.mul, iterable). In the event that the product reaches zero, you can[1] short-circuit the rest of the iterable and just return 0:

    product([1, 2, 3, 0] + [4]*1000000)

ought to reduce to 0, not 6, and the intent is for it to do so *quickly*, ignoring the 4s at the end of the list.

[1] Actually you can't. 0 is no longer an attractor in the presence of INF or NAN.

-- Steven
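P.S. The footnote is easy to check interactively (standard IEEE 754 behaviour):

    >>> 0 * float('inf')
    nan
    >>> 0 * float('nan')
    nan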
On Sat, Aug 23, 2014 at 6:53 PM, Steven D'Aprano <steve@pearwood.info> wrote:
On Sat, Aug 23, 2014 at 11:43:42AM -0700, David Mertz wrote:
    def reduce_with_attractor(func, it, start=None, end_if=None):
        it = iter(it)
        start = start if start!=None else it.__next__()
        return list(takewhile(lambda x: x!=end_if, accumulate(chain([start],it), func)))[-1]
A couple of points:
- Don't use it.__next__(), use the built-in next(it).
Yeah, good point.
- To match the behaviour of reduce, you need to catch the StopIteration and raise TypeError.
Oh, OK. Hadn't thought of that.
- Your implementation eagerly generates a potentially enormous list of intermediate results, which strikes me as somewhat unfortunate for a functional tool like reduce. In other words, for some inputs, this is going to perform like a dog, generating a HUGE list up front, then throwing it all away except for the final value.
I know. I realized this flaw right away. I was trying to be cute and fit it in my promised 3 lines. It would be better to put it in a loop to realize the successive values, of course--but would take an extra line or two. Maybe there's a way to squeeze it in one line with itertools rather than a regular loop though.
This gives you the accumulation up-to-but-not-including the attractor. I guess the OP wanted to return the attractor itself (although that seems slightly less useful to me).
Not really. His use-case seems to be to short-cut a lot of unnecessary calculations, e.g. suppose you write product() as reduce(operator.mul, iterable). In the event that the product reaches zero, you can[1] short-circuit the rest of the iterable and just return 0:
product([1, 2, 3, 0] + [4]*1000000)
ought to reduce 0, not 6, and the intent is for it to do so *quickly*, ignoring the 4s at the end of the list.
I guess that's true. Although I can certainly imagine being interested not only in the final attractor, but *that* it reached an attractor and ended early. Not sure how best to signal that. Actually, now that I think of it, it would be kinda nice to make the function 'reduce_with_attractorS()' instead, and allow specification of multiple attractors. I welcome your improved version of the code :-). Feel free to take a whole 10 lines to do it right.
[1] Actually you can't. 0 is no longer an attractor in the presence of INF or NAN.
I was sort of thinking of a "we're all adults here" attitude. That is, the "attractor" might not really be a genuine attractor, but we still trust the caller to say it is. I.e. my function would accept this call:

    reduce_with_attractor(operator.mul, range(1, 10**6), end_if=6)

I'm making the claim that reaching '6' is a stopping point... which, well it is. No, it's not an actual attractor, but maybe a caller really does want to stop iterating if it gets to that value anyway. Hence 'end_if' is actually an accurate name.
So here's a version of my function that addresses Steven's points, and also adds what I think is more useful behavior. A couple points about my short--but more than 3-line--function:

* The notion of "attractor" here still follows the "we're all adults" philosophy. That is, the 'stop_on' and 'end_if' arguments may very well not actually specify genuine attractors, but simply "satisfaction" values. In the examples below these are completely unlike attractors, but this could be useful also for something like numeric approximation algorithms where some non-final reduction value is considered "good enough" anyway.

* To be more like the reduce() builtin, I raise TypeError on an empty iterable with no 'start' value.

* Although it's not as cute in using itertools as much, instead using a simple loop-with-test, I got rid of the "golf" artifact of potentially allocating a large, useless list.

* Most interesting, I think, I separated a 'stop_on' collection of values from an 'end_if' predicate. This lets us generalize attractors, I believe. For a simple cyclic attractor, you could list several values in the 'stop_on' collection. But even for a strange attractor (or simply a complex one, or e.g. an attractor to a continuous set of values--think an attractor to a circular, real-valued orbit), the predicate could, in principle, capture the fact we reached the attractor.

#################################################
% cat reduce_with_attractor.py
#!/usr/bin/env python3
from itertools import *
from operator import add

def reduce_with_attractors(func, it, start=None, stop_on=(), end_if=None):
    it = iter(it)
    try:
        start = start if start!=None else next(it)
    except StopIteration:
        raise TypeError
    for x in accumulate(chain([start], it), func):
        if x in stop_on or (end_if and end_if(x)):
            break
    return x

print(reduce_with_attractors(add, range(100), stop_on=(2, 6, 17)))
print(reduce_with_attractors(add, range(100), stop_on=('foo','bar')))
print(reduce_with_attractors(add, range(100), end_if=lambda x: x>100))
#################################################
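(For the record: if I've traced the running sums correctly, those three calls print 6, 4950, and 105. The accumulation 0, 1, 3, 6, ... first hits a stop_on value at 6; the second call never meets a string and so returns sum(range(100)) == 4950; the third first exceeds 100 at 105.)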
I think it would be both simpler and more useful to define accumulate_with_attractors, and then define reduce_with_attractors as a wrapper around that.

Assuming you're doing a lot of this kind of stuff, you're either going to be using a library like more_itertools or pytoolz or your own custom library, so I'll assume you already have a "takewhileplus" that's like takewhile but also yields the first failing value, and an "ilast" that returns the last value in an iterable and raises a TypeError if empty (maybe with an optional default argument, but I'm not going to use it).

    _sentinel = object()

    def accumulate_with_attractors(iterable, func=operator.add,
                                   stop_on=(), end_if=None):
        yield from takewhileplus(
            lambda x: x not in stop_on and not (end_if and end_if(x)),
            itertools.accumulate(iterable, func))

    def reduce_with_attractors(func, iterable, start=_sentinel,
                               stop_on=(), end_if=None):
        if start is not _sentinel:
            iterable = itertools.chain([start], iterable)
        return ilast(accumulate_with_attractors(iterable, func, stop_on, end_if))

You can easily optimize this in a few ways. You can use a takeuntilplus so the lambda doesn't have to negate its condition, you can use stop_on.__contains__ if end_if isn't passed, etc. With those optimizations, using C implementations of takeuntilplus and ilast, this takes about 40% as long as your version. With no optimizations, using Python takewhileplus and ilast, it takes about 140% as long. But I think the added simplicity, the fewer edge cases to get wrong, and the fact that you get a usable accumulate_with_attractors out of it makes it a worthwhile tradeoff even at 40% slower.
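For reference, minimal pure-Python sketches of the two assumed helpers (takewhileplus and ilast are names made up for this post, not functions from more_itertools or pytoolz) might look like:

    def takewhileplus(pred, iterable):
        # like itertools.takewhile, but also yield the first failing value
        for x in iterable:
            yield x
            if not pred(x):
                return

    def ilast(iterable):
        # return the last value of an iterable; TypeError if it's empty
        sentinel = object()
        last = sentinel
        for last in iterable:
            pass
        if last is sentinel:
            raise TypeError('ilast() of an empty iterable')
        return last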
David Mertz wrote:
    def reduce_with_attractor(func, it, start=None, end_if=None):
        it = iter(it)
        start = start if start!=None else it.__next__()
        return list(takewhile(lambda x: x!=end_if, accumulate(chain([start],it), func)))[-1]
Wouldn't it be better to break this into a function that limits a sequence and to combine that with the original reduce()?
    >>> from functools import reduce
    >>> from itertools import takewhile
    >>> def stop_on(value, items):
    ...     return takewhile(lambda item: item != value, items)
    ...
    >>> list(stop_on(0, [1, 2, 3, 0, 4]))
    [1, 2, 3]
    >>> from operator import mul
    >>> reduce(mul, stop_on(0, [1, 2, 3, 0, 4]))
    6
My suggestion is to add a value-limited version of itertools.takewhile() rather than making reduce() more powerful/complex.
On Sun, Aug 24, 2014 at 02:35:57PM +0200, Peter Otten wrote:
David Mertz wrote:
    def reduce_with_attractor(func, it, start=None, end_if=None):
        it = iter(it)
        start = start if start!=None else it.__next__()
        return list(takewhile(lambda x: x!=end_if, accumulate(chain([start],it), func)))[-1]
Wouldn't it be better to break this into a function that limits a sequence and to combine that with the original reduce()?
In general, if x is an attractor, then we want to stop and return x if either of these two scenarios occurs:

- we come across x in the input;
- or the sequence of intermediate values reaches x.

You can do the first by filtering the input stream, but not the second. Here's a concrete example, sticking to product() where 0 is an attractor:

    # best viewed with a fixed-width font
    product([1, 2, 3, 9, 0, 8, 7, 4])
    .....................^ stop here

But multiplication can underflow to zero too, and once it does, we likewise want to stop:

    product([1e-70, 2e-71, 6e-69, 4e-68, 1e-48, 2e-71, 1e-69, 3e-70])
    .....................................^ stop here

Notice that 1e-48 is not only non-zero, but it's considerably bigger than the other numbers in the sequence. (About 100000000000000000000 times bigger, give or take a factor of 10.) Yet it's enough to cause the product to underflow to zero, after which the product will never shift away from zero. (Ignoring NANs and INFs.)

I'm still not convinced this belongs in the standard library, but it's a nice functional, er, function to add to your private library or as a third-party module.

-- Steven
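P.S. The underflow is easy to verify interactively (assuming the usual IEEE 754 doubles):

    >>> from functools import reduce
    >>> from operator import mul
    >>> reduce(mul, [1e-70, 2e-71, 6e-69, 4e-68, 1e-48])
    0.0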
On Sun, Aug 24, 2014 at 9:09 AM, Steven D'Aprano <steve@pearwood.info> wrote:
On Sun, Aug 24, 2014 at 02:35:57PM +0200, Peter Otten wrote:
David Mertz wrote:
    def reduce_with_attractor(func, it, start=None, end_if=None):
        it = iter(it)
        start = start if start!=None else it.__next__()
        return list(takewhile(lambda x: x!=end_if, accumulate(chain([start],it), func)))[-1]
Wouldn't it be better to break this into a function that limits a sequence and to combine that with the original reduce()?
In general, if x is an attractor, then we want to stop and return x if either of these two scenarios occurs:
- we come across x in the input;
- or the sequence of intermediate values reaches x.
You can do the first by filtering the input stream, but not the second.
Here's a concrete example, sticking to product() where 0 is an attractor:
    # best viewed with a fixed-width font
    product([1, 2, 3, 9, 0, 8, 7, 4])
    .....................^ stop here
But multiplication can underflow to zero too, and once it does, we likewise want to stop:
    product([1e-70, 2e-71, 6e-69, 4e-68, 1e-48, 2e-71, 1e-69, 3e-70])
    .....................................^ stop here
Notice that 1e-48 is not only non-zero, but it's considerably bigger than the other numbers in the sequence. (About 100000000000000000000 times bigger, give or take a factor of 10.) Yet it's enough to cause the product to underflow to zero, after which the product will never shift away from zero. (Ignoring NANs and INFs.)
An example I gave earlier is the reduction of a collection of sets using set intersection. It is possible that (set1 & set2) gives the empty set, so the nullifier can occur during the reduction without being in the given iterable. E.g.:

    reduce(lambda x, y: x & y, [{1,2}, {3}, {4, 5}, {6}], nullifier=set())

That should stop iterating after the first operation.
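(The very first step already yields the nullifier here, since {1, 2} & {3} == set(), so a short-circuiting reduce could skip the remaining two intersections.)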
I'm still not convinced this belongs in the standard library, but it's a nice functional, er, function to add to your private library or as a third-party module.
-- Steven
On 2014-08-24 15:56, Warren Weckesser wrote: [snip]
An example I gave earlier is the reduction of a collection of sets using set intersection. It is possible that (set1 & set2) gives the empty set, so the nullifier can occur during the reduction without being in the given iterable. E.g.:
reduce(lambda x, y: x & y, [{1,2}, {3}, {4, 5}, {6}], nullifier=set())
That should stop iterating after the first operation.
I think it would be more flexible if, instead of a 'nullifier' object, there were an 'until' or 'while_' predicate:

    reduce(lambda x, y: x & y, [{1,2}, {3}, {4, 5}, {6}], while_=bool)
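A rough sketch of what that could look like (the 'while_' keyword is this suggestion's, not anything functools.reduce currently accepts):

    def reduce_while(function, iterable, initializer=None, while_=None):
        it = iter(iterable)
        if initializer is None:
            try:
                initializer = next(it)
            except StopIteration:
                raise TypeError('reduce() of empty sequence with no initial value')
        accum_value = initializer
        for x in it:
            # stop as soon as the predicate rejects the accumulated value
            if while_ is not None and not while_(accum_value):
                break
            accum_value = function(accum_value, x)
        return accum_value

    # reduce_while(lambda x, y: x & y, [{1,2}, {3}, {4, 5}, {6}], while_=bool)
    # -> set(), after evaluating only the first intersection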
Yes, such a 'reduce_with_attractor()' can be easily implemented with itertools.accumulate. Maybe this would be good to add to the "Itertools Recipes" section of the docs?
This "nullifier" is mathematically called an "absorbing element", but saying an "attractor" might be a little more general. I.e. think of a local optimization problem, where multiple local min/max points might occur. If you reach one, further iteration won't budge from that point, even if it's not the "global absorbing element."
However, any such argument--including the much more useful sentinel/'stop_on' idea--significantly changes the semantics of reduce. In particular, as is reduce() always consumes its iterator. Under these changes, it may or may not consume the iterator, depending on what elements occur.
Given that one can easily write one's own three line wrapper 'reduce_with_attractor()' for this special semantics which hasn't been given a use case, I can't see a point of including the argument in the stdlib.
-1 on proposal.
On Sat, Aug 23, 2014 at 8:30 AM, Warren Weckesser < warren.weckesser@gmail.com> wrote:
I'd like to add an additional optional argument to functools.reduce. The argument is the "nullifier" of the reducing operation. It is a value such that function(nullifier, anything) returns nullifier. For example, if function(x, y) computes x*y, the nullifier is 0. If function(x, y) is the intersection of the sets x and y, the nullifier is the empty set.
The argument would allow reduce to "short circuit" its calculation. When reduce encounters the nullifier, it can return immediately. This can provide a significant improvement in performance in some cases.
The change is simple. Here, for example, is the "rough equivalent" for functools.reduce from the docs:
def reduce(function, iterable, initializer=None): it = iter(iterable) if initializer is None: try: initializer = next(it) except StopIteration: raise TypeError('reduce() of empty sequence with no initial value') accum_value = initializer for x in it: accum_value = function(accum_value, x) return accum_value
Here's how it looks with the optional nullifier argument; the only change is the new argument and an 'if' statement in the 'for' loop.
def reduce(function, iterable, initializer=None, nullifier=None): it = iter(iterable) if initializer is None: try: initializer = next(it) except StopIteration: raise TypeError('reduce() of empty sequence with no initial value') accum_value = initializer for x in it: if nullifier is not None and accum_value == nullifier: break accum_value = function(accum_value, x) return accum_value
(It might be better to use a distinct singleton for the default value of nullifier, to allow None to be a valid nullifier.)
The actual implementation is in the extension module _functoolsmodule.c. It looks like the changes to the C code should be straightforward.
Warren
_______________________________________________ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
-- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th.
_______________________________________________ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
On Sat, Aug 23, 2014 at 1:25 PM, David Mertz <mertz@gnosis.cx> wrote:
This "nullifier" is mathematically called an "absorbing element", but saying an "attractor" might be a little more general. I.e. think of a local optimization problem, where multiple local min/max points might occur. If you reach one, further iteration won't budge from that point, even if it's not the "global absorbing element."
I took the name "nullifier" from http://www.mayhematics.com/m/m2_operations.htm but a less "mathy" name would probably be better. For consistency, I'll stick with it, but you can think of it as a placeholder for a better name to be determined later.
However, any such argument--including the much more useful sentinel/'stop_on' idea--significantly changes the semantics of reduce. In particular, as is reduce() always consumes its iterator. Under these changes, it may or may not consume the iterator, depending on what elements occur.
I don't agree that this change "significantly changes the semantics of reduce". The nullifier is optional. The short-circuit can only occur when the caller has specified a nullifier, so anyone using it will be aware that the iterable might not be consumed. Indeed, that's the *point* of using it. Assuming the nullifier given is truly a nullifier of the function, the result of the call to reduce should be the same (other than the amount by which the iterable has been consumed) whether or not the nullifier is given. (That was also Hernan's point.)
Given that one can easily write one's own three line wrapper 'reduce_with_attractor()' for this special semantics which hasn't been given a use case,
My interest in the enhancement is purely performance. Here's an example (call it a use case, if you like) that is obviously cooked up to maximize the benefit of short-circuiting. In the following ipython session, `reduce` is the builtin function (I'm using python 2.7 here), and `myreduce.reduce` is the python implementation with the nullifier argument:

    In [40]: a = range(100)

    In [41]: %timeit reduce(lambda x, y: x*y, a)
    100000 loops, best of 3: 8.89 µs per loop

    In [42]: %timeit myreduce.reduce(lambda x, y: x*y, a, nullifier=0)
    1000000 loops, best of 3: 455 ns per loop
I can't see a point of including the argument in the stdlib.
-1 on proposal.
I think the main objection (here and in other comments) is that the benefit (performance gain in certain cases) does not outweigh the cost (a more complicated API for reduce(), and more code and documentation to maintain in the standard library). That's a compelling argument! I think the enhancement would be useful, but I understand that it might not be useful enough to accept such a change, especially since the cost of *not* having it in the library to a user who wants such a feature is pretty low (i.e. it is easy to "roll your own"). Warren
Warren Weckesser writes:
I don't agree that this change "significantly changes the semantics of reduce". The nullifier is optional.
Many programs deal with domains that seem to have nullifiers but don't. See Steven's example of floating point multiplication where the "tail" might contain Inf or NaN. You can argue that this is a "consenting adults" issue, but I consider this possibility an attractive nuisance, and I'd rather it not be in the stdlib. IMHO YMMV
On Mon, Aug 25, 2014 at 9:43 PM, Stephen J. Turnbull <stephen@xemacs.org> wrote:
Warren Weckesser writes:
I don't agree that this change "significantly changes the semantics of reduce". The nullifier is optional.
Many programs deal with domains that seem to have nullifiers but don't. See Steven's example of floating point multiplication where the "tail" might contain Inf or NaN. You can argue that this is a "consenting adults" issue, but I consider this possibility an attractive nuisance, and I'd rather it not be in the stdlib.
Gotcha. To spell it out: one might naively think 0 is a nullifier for floating point multiplication, but 0*inf is nan, not 0. That means

    reduce(lambda x, y: x*y, [1e-50]*10 + [float('inf')], nullifier=0)

which underflows to 0 during the intermediate steps, would incorrectly return 0. So yeah, consenting adults, know your data, etc. :)

(nan, on the other hand, appears to be a true nullifier for the product in IEEE 754 floating point arithmetic.)

Warren
On 24 August 2014 01:30, Warren Weckesser <warren.weckesser@gmail.com> wrote:
I'd like to add an additional optional argument to functools.reduce. The argument is the "nullifier" of the reducing operation. It is a value such that function(nullifier, anything) returns nullifier. For example, if function(x, y) computes x*y, the nullifier is 0. If function(x, y) is the intersection of the sets x and y, the nullifier is the empty set.
When it comes to judging the usefulness of functional programming features these days, my first question is generally going to be "Does PyToolz offer this?" Grumblings about the name aside, it's still the solution I recommend to folks that wish Python had more functional programming tools in the standard library: http://toolz.readthedocs.org/en/latest/api.html

There's even a Cython accelerated version available (Cytoolz). "pip install toolz" for the pure Python version, "pip install cytoolz" for the accelerated one.

Cheers,
Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 24/08/2014 00:24, Nick Coghlan wrote:
On 24 August 2014 01:30, Warren Weckesser <warren.weckesser@gmail.com> wrote:
I'd like to add an additional optional argument to functools.reduce. The argument is the "nullifier" of the reducing operation. It is a value such that function(nullifier, anything) returns nullifier. For example, if function(x, y) computes x*y, the nullifier is 0. If function(x, y) is the intersection of the sets x and y, the nullifier is the empty set.
When it comes to judging the usefulness of functional programming features these days, my first question is generally going to be "Does PyToolz offer this?"
Grumblings about the name aside,
I suppose it's for functional whizkidz and cowboyz?

Regards,
Antoine.
On Sun, Aug 24, 2014 at 12:24 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 24 August 2014 01:30, Warren Weckesser <warren.weckesser@gmail.com> wrote:
I'd like to add an additional optional argument to functools.reduce. The argument is the "nullifier" of the reducing operation. It is a value such that function(nullifier, anything) returns nullifier. For example, if function(x, y) computes x*y, the nullifier is 0. If function(x, y) is the intersection of the sets x and y, the nullifier is the empty set.
When it comes to judging the usefulness of functional programming features these days, my first question is generally going to be "Does PyToolz offer this?"
I took a look, and I couldn't find it there.
Grumblings about the name aside, it's still the solution I recommend to folks that wish Python had more functional programming tools in the standard library: http://toolz.readthedocs.org/en/latest/api.html
There's even a Cython accelerated version available (Cytoolz).
"pip install toolz" for the pure Python version, "pip install cytoolz" for the accelerated one.
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
participants (13)

- Andrew Barnert
- Antoine Pitrou
- Bob Ippolito
- Chris Angelico
- David Mertz
- Devin Jeanpierre
- Joshua Landau
- MRAB
- Nick Coghlan
- Peter Otten
- Stephen J. Turnbull
- Steven D'Aprano
- Warren Weckesser