On 8.5.2015 8:19, Rustom Mody wrote:
On Wed, May 6, 2015 at 6:45 PM, Ivan Levkivskyi <levkivskyi@gmail.com> wrote:
def sunique(lst):
    return sorted(list(set(lst)))
vs.
sunique = sorted @ list @ set
I would like to suggest that if composition is in fact added to python its order is 'corrected' ie in math there are two alternative definitions of composition
[1] f o g = λ x • g(f(x))
[2] f o g = λ x • f(g(x))
[2] is more common but [1] is also used
And IMHO [1] is much better for left-to-right reading, so your example becomes sunique = set @ list @ sorted, which reads as smoothly as a classic Unix pipeline:
"Unnamed parameter input to set; output inputted to list; output inputted to sort"
While both versions make sense, [2] is the one that resembles the chaining of linear operators or matrices, since column vectors are the convention. For the left-to-right pipeline version, some other operator might be more appropriate. Also, it would then be more clear to also feed x into the pipeline from the left, instead of putting (x) on the right like in a normal function call.
As a random example, (root @ mean @ square)(x) would produce the right order for rms when using [2].
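For concreteness, a minimal sketch of the two definitions as plain Python helpers, applied to the sunique example from the top of the thread (compose_math and compose_pipe are names made up here, not anything proposed in the thread):

def compose_math(f, g):
    # definition [2]: (f o g)(x) == f(g(x))
    return lambda x: f(g(x))

def compose_pipe(f, g):
    # definition [1]: (f o g)(x) == g(f(x)), i.e. left-to-right
    return lambda x: g(f(x))

lst = [3, 1, 2, 1]
sunique_math = compose_math(sorted, compose_math(list, set))   # sorted @ list @ set
sunique_pipe = compose_pipe(set, compose_pipe(list, sorted))   # set @ list @ sorted
assert sunique_math(lst) == sunique_pipe(lst) == [1, 2, 3]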
-- Koos
Koos Zevenhoven writes:
As a random example, (root @ mean @ square)(x) would produce the right order for rms when using [2].
Hardly interesting. :-) The result is an exception, as root and square are conceptually scalar-to-scalar, while mean is sequence-to-scalar.
I suppose you could write (root @ mean @ (map square)) (xs), which seems to support your argument. But will all such issues and solutions give the same support? This kind of thing is a conceptual problem that has to be discussed pretty thoroughly (presumably based on experience with implementations) before discussion of order can be conclusive.
On 9.5.2015 5:58, Stephen J. Turnbull wrote:
Koos Zevenhoven writes:
As a random example, (root @ mean @ square)(x) would produce the right order for rms when using [2].
Hardly interesting. :-) The result is an exception, as root and square are conceptually scalar-to-scalar, while mean is sequence-to-scalar.
I suppose you could write (root @ mean @ (map square)) (xs), which seems to support your argument. But will all such issues and solutions give the same support? This kind of thing is a conceptual problem that has to be discussed pretty thoroughly (presumably based on experience with implementations) before discussion of order can be conclusive.
Well, you're wrong :-)
Working code:
from numpy import sqrt, mean, square
rms = sqrt(mean(square(x)))
The point is that people have previously described sqrt(mean(square(x))) as root-mean-squared x, not squared-mean-root x. But yes, as I said, it's just one example.
-- Koos
On May 8, 2015, at 19:58, Stephen J. Turnbull stephen@xemacs.org wrote:
Koos Zevenhoven writes:
As a random example, (root @ mean @ square)(x) would produce the right order for rms when using [2].
Hardly interesting. :-) The result is an exception, as root and square are conceptually scalar-to-scalar, while mean is sequence-to-scalar.
Unless you're using an elementwise square and an array-to-scalar mean, like the ones in NumPy, in which case it works perfectly well...
I suppose you could write (root @ mean @ (map square)) (xs),
Actually, you can't. You could write (root @ mean @ partial(map, square))(xs), but that's pretty clearly less readable than root(mean(map(square, xs))) or root(mean(x*x for x in xs)). And that's been my main argument: Without a full suite of higher-level operators and related syntax, compose alone doesn't do you any good except for toy examples.
But Koos's example, even if it was possibly inadvertent, shows that I may be wrong about that. Maybe compose together with element-wise operators actually _is_ sufficient for something beyond toy examples.
Of course the fact that we have two groups of people each arguing that obviously the only possible reading of @ is compose/rcompose respectively points out a whole other problem with the idea. If people just were going to have to look up which way it went and learn it through experience, that would be one thing; if everyone already knows intuitively and half of them are wrong, that's a different story...
which seems to support your argument. But will all such issues and solutions give the same support? This kind of thing is a conceptual problem that has to be discussed pretty thoroughly (presumably based on experience with implementations) before discussion of order can be conclusive.
Andrew Barnert writes:
On May 8, 2015, at 19:58, Stephen J. Turnbull stephen@xemacs.org wrote:
Koos Zevenhoven writes:
As a random example, (root @ mean @ square)(x) would produce the right order for rms when using [2].
Hardly interesting. :-) The result is an exception, as root and square are conceptually scalar-to-scalar, while mean is sequence-to-scalar.
Unless you're using an elementwise square and an array-to-scalar mean, like the ones in NumPy,
Erm, why would square be elementwise and root not? I would suppose that everything is element-wise in Numpy (not a user yet).
in which case it works perfectly well...
But that's an aspect of my point (evidently, obscure). Conceptually, as taught in junior high school or so, root and square are scalar-to-scalar. If you are working in a context such as Numpy where it makes sense to assume they are element-wise and thus composable, the context should provide the compose operator(s). Without that context, Koos's example looks like a TypeError.
But Koos's example, even if it was possibly inadvertent, shows that I may be wrong about that. Maybe compose together with element-wise operators actually _is_ sufficient for something beyond toy examples.
Of course it is!<wink /> I didn't really think there was any doubt about that. I thought the question was whether there's enough commonality among such examples to come up with a Pythonic generic definition of compose, or perhaps a sufficiently compelling example to enshrine its definition as the "usual" interpretation in Python (and let other interpretations overload some operator to get that effect in their contexts).
Of course the fact that we have two groups of people each arguing that obviously the only possible reading of @ is compose/rcompose respectively points out a whole other problem with the idea.
I prefer fgh = f(g(h(-))), but I hardly think it's obvious. Unless you're not Dutch. (If it were obvious to a Dutchman, we'd have it already. <wink />)
On May 9, 2015, at 01:36, Stephen J. Turnbull stephen@xemacs.org wrote:
Andrew Barnert writes:
On May 8, 2015, at 19:58, Stephen J. Turnbull stephen@xemacs.org wrote:
Koos Zevenhoven writes:
As a random example, (root @ mean @ square)(x) would produce the right order for rms when using [2].
Hardly interesting. :-) The result is an exception, as root and square are conceptually scalar-to-scalar, while mean is sequence-to-scalar.
Unless you're using an elementwise square and an array-to-scalar mean, like the ones in NumPy,
Erm, why would square be elementwise and root not? I would suppose that everything is element-wise in Numpy (not a user yet).
Most functions in NumPy are elementwise when applied to arrays, but can also be applied to scalars. So, square is elementwise because it's called on an array, root is scalar because it's called on a scalar. (In fact, root could also be elementwise--aggregating functions like mean can be applied across just one axis of a 2D or higher array, reducing it by one dimension, if you want.)
Before you try it, this sounds like a complicated nightmare that can't possibly work in practice. But play with it for just a few minutes and it's completely natural. (Except for a few cases where you want some array-wide but not element-wise operation, most famously matrix multiplication, which is why we now have the @ operator to play with.)
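To make that concrete, a short illustration (a sketch that assumes NumPy is installed; the array values are arbitrary):

import numpy as np

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])

np.square(a)         # elementwise: [[ 1.  4.] [ 9. 16.]]
np.sqrt(16.0)        # also works on a plain scalar: 4.0
np.mean(a)           # array-to-scalar: 2.5
np.mean(a, axis=0)   # reduce along one axis only: [2. 3.]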
in which case it works perfectly well...
But that's an aspect of my point (evidently, obscure). Conceptually, as taught in junior high school or so, root and square are scalar-to-scalar. If you are working in a context such as Numpy where it makes sense to assume they are element-wise and thus composable, the context should provide the compose operator(s).
I was actually thinking on these lines: what if @ didn't work on types.FunctionType, but did work on numpy.ufunc (the name for the "universal function" type that knows how to broadcast across arrays but also work on scalars)? That's something NumPy could implement without any help from the core language. (Methods are a minor problem here, but it's obvious how to solve them, so I won't get into it.) And if it turned out to be useful all over the place in NumPy, that might turn up some great uses for the idiomatic non-NumPy Python, or it might show that, like elementwise addition, it's really more a part of NumPy than of Python.
But of course that's more of a proposal for NumPy than for Python.
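A rough sketch of how that could be prototyped today, outside NumPy proper (the Composable wrapper is hypothetical, not an existing NumPy API; under the actual proposal, real ufuncs would grow __matmul__ themselves):

import numpy as np

class Composable:
    """Hypothetical wrapper giving a ufunc-like callable an @-compose operator."""
    def __init__(self, func):
        self.func = func
    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)
    def __matmul__(self, other):
        # (f @ g)(x) == f(g(x)), i.e. reading [2] from earlier in the thread
        return Composable(lambda *a, **kw: self.func(other(*a, **kw)))

root, mean, square = Composable(np.sqrt), Composable(np.mean), Composable(np.square)
rms = root @ mean @ square
print(rms(np.array([1.0, 2.0, 3.0])))    # ~2.1602, i.e. sqrt(14/3)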
Without that context, Koos's example looks like a TypeError.
But Koos's example, even if it was possibly inadvertent, shows that I may be wrong about that. Maybe compose together with element-wise operators actually _is_ sufficient for something beyond toy examples.
Of course it is!<wink /> I didn't really think there was any doubt about that.
I think there was, and still is. People keep coming up with abstract toy examples, but as soon as someone tries to give a good real example, it only makes sense with NumPy (Koos's) or with some syntax that Python doesn't have (yours), because to write them with actual Python functions would actually be ugly and verbose (my version of yours).
I don't think that's a coincidence. You didn't write "map square" because you don't know how to think in Python, but because using compose profitably inherently implies not thinking in Python. (Except, maybe, in the case of NumPy... which is a different idiom.) Maybe someone has a bunch of obvious good use cases for compose that don't also require other functions, operators, or syntax we don't have, but so far, nobody's mentioned one.
On 5/9/2015 6:19 AM, Andrew Barnert via Python-ideas wrote:
I think there was, and still is. People keep coming up with abstract toy examples, but as soon as someone tries to give a good real example, it only makes sense with NumPy (Koos's) or with some syntax that Python doesn't have (yours), because to write them with actual Python functions would actually be ugly and verbose (my version of yours).
I don't think that's a coincidence. You didn't write "map square" because you don't know how to think in Python, but because using compose profitably inherently implies not thinking in Python. (Except, maybe, in the case of NumPy... which is a different idiom.) Maybe someone has a bunch of obvious good use cases for compose that don't also require other functions, operators, or syntax we don't have, but so far, nobody's mentioned one.
I agree that @ is most likely to be useful in numpy's restricted context.
A composition operator is usually defined by application: (f@g)(x) is defined as f(g(x)). (I'm sure there are also axiomatic treatments.) It is an optional syntactic abbreviation. It is most useful in a context where there is one set of data objects, such as the real numbers, or one set + arrays (vectors) defined on the one set; where all functions are univariate (or possibly multivariate, but that can be transformed to univariate on vectors); and where parameter names are dummies like 'x', 'y', 'z', or '_'.
The last point is important. Abbreviating h(x) = f(g(x)) with h = f @ g does not lose any information as 'x' is basically a placeholder (so get rid of it). But parameter names are important in most practical contexts, both for understanding a composition and for using it.
def npv(transfers, discount):
    '''Return the net present value of discounted transfers.

    transfers: finite iterable of amounts at constant intervals
    discount: fraction per interval
    '''
    divisor = 1 + discount
    return sum(transfer/divisor**time
               for time, transfer in enumerate(transfers))
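For example, with illustrative numbers (not from the original message):

>>> round(npv([100, 100, 100], 0.05), 2)   # 100 + 100/1.05 + 100/1.05**2
285.94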
Even if one could replace the def statement with npv = <some combination of @, sum, map, add, div, power, enumerate, ...> with parameter names omitted, it would be harder to understand. Using it would require the ability to infer argument types and order from the composed expression.
I intentionally added a statement to calculate the common subexpression prior to the return. I believe it would have to be put back into the return expression before converting.
-- Terry Jan Reedy
On 05/09/2015 03:21 AM, Andrew Barnert via Python-ideas wrote:
I suppose you could write (root @ mean @ (map square)) (xs),
Actually, you can't. You could write (root @ mean @ partial(map, square))(xs), but that's pretty clearly less readable than root(mean(map(square, xs))) or root(mean(x*x for x in xs)). And that's been my main argument: Without a full suite of higher-level operators and related syntax, compose alone doesn't do you any good except for toy examples.
How about an operator for partial?
root @ mean @ map $ square(xs)
Actually I'd rather reuse the binary operators. (I'd be happy if they were just methods on bytes objects BTW.)
compose(root, mean, map(square, xs))
root ^ mean ^ map & square (xs)
root ^ mean ^ map & square ^ xs ()
Read this as...
compose root, of mean, of map with square, of xs
Or...
apply(map(square, xs), mean, root)
map & square | mean | root (xs)
xs | map & square | mean | root ()
Read this as...
apply xs, to map with square, to mean, to root
These are kind of cool, but does it make python code easier to read? That seems like it may be subjective depending on the amount of programming experience someone has.
Cheers, Ron
Hi, I had to answer some of these questions when I wrote Lawvere: https://pypi.python.org/pypi/lawvere
First, there are two kinds of composition: pipe and circle, so I think a single operator like @ is a bit restrictive. I like "->" and "<-".
Then, for the function name and function-to-string conversion, I had to introduce a function signature (a tuple). It provides a good tool for decomposition, introspection and comparison in keeping with the mathematical definition.
Finally, for me composition makes sense when you have typed functions; otherwise it can easily become a mess, and this makes composition tied to multiple dispatch.
I really hope composition will be introduced in Python, but I can't see how it could be done without rethinking a good part of function definition.
2015-05-09 17:38 GMT+02:00 Ron Adam ron3200@gmail.com:
On 05/09/2015 03:21 AM, Andrew Barnert via Python-ideas wrote:
I suppose you could write (root @ mean @ (map square)) (xs),
Actually, you can't. You could write (root @ mean @ partial(map, square))(xs), but that's pretty clearly less readable than root(mean(map(square, xs))) or root(mean(x*x for x in xs)). And that's been my main argument: Without a full suite of higher-level operators and related syntax, compose alone doesn't do you any good except for toy examples.
How about an operator for partial?
root @ mean @ map $ square(xs)
Actually I'd rather reuse the binary operators. (I'd be happy if they were just methods on bytes objects BTW.)
compose(root, mean, map(square, xs))
root ^ mean ^ map & square (xs)
root ^ mean ^ map & square ^ xs ()
Read this as...
compose root, of mean, of map with square, of xs
Or...
apply(map(square, xs), mean, root)
map & square | mean | root (xs)
xs | map & square | mean | root ()
Read this as...
apply xs, to map with square, to mean, to root
These are kind of cool, but does it make python code easier to read? That seems like it may be subjective depending on the amount of programming experience someone has.
Cheers, Ron
On Sat, May 09, 2015 at 11:38:38AM -0400, Ron Adam wrote:
How about an operator for partial?
root @ mean @ map $ square(xs)
Apart from the little matter that Guido has said that $ will never be used as an operator in Python, what is the association between $ and partial?
Most other operators have either been used for centuries (e.g. + and -) or at least decades (e.g. * for multiplication, because ASCII doesn't have the × symbol). The barrier to using a completely arbitrary symbol with no association to the role it plays should be considered very high.
I would only support an operator for function composition if it was at least close to the standard operators used for function composition in other areas. @ at least suggests the ∘ used in mathematics, e.g. sin∘cos, but | is used in pipelining languages and shells and could be considered, e.g. ls | wc.
My own preference would be to look at @ as the closest available ASCII symbol to ∘ and use it for left-to-right composition, and | for left-to-right function application. E.g.
(spam @ eggs @ cheese)(arg) is equivalent to spam(eggs(cheese(arg)))
(spam | eggs | cheese)(arg) is equivalent to cheese(eggs(spam(arg)))
also known as compose() and rcompose().
We can read "@" as "of", "spam of eggs of cheese of arg", and | as a pipe, "spam(arg) piped to eggs piped to cheese".
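A rough prototype of those two readings, for anyone who wants to play with them (Fn is a made-up wrapper name; this only works on wrapped callables, not plain functions, and is a sketch rather than a proposal):

class Fn:
    def __init__(self, func):
        self.func = func
    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)
    def __matmul__(self, other):      # (f @ g)(x) == f(g(x))
        return Fn(lambda *a, **kw: self(other(*a, **kw)))
    def __or__(self, other):          # (f | g)(x) == g(f(x))
        return Fn(lambda *a, **kw: other(self(*a, **kw)))

spam = Fn(lambda x: x + 1)
eggs = Fn(lambda x: x * 2)
cheese = Fn(lambda x: x - 3)
assert (spam @ eggs @ cheese)(10) == spam(eggs(cheese(10))) == 15
assert (spam | eggs | cheese)(10) == cheese(eggs(spam(10))) == 19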
It's a pity we can't match the shell syntax and write:
spam(args)|eggs|cheese
but that would have a completely different meaning.
David Beazley has a tutorial on using coroutines in pipelines:
http://www.dabeaz.com/coroutines/
where he ends up writing this:
f = open("access-log")
follow(f,
       grep('python',
            printer()))
Coroutines grep() and printer() make up the pipeline. I cannot help but feel that the | syntax would be especially powerful for this sort of data processing purpose:
# could this work using some form of function composition?
follow(f, grep('python')|printer)
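For readers who haven't seen that tutorial, a condensed sketch of the three coroutine pieces the example assumes (adapted from the pattern in the linked slides; details there may differ):

import time

def coroutine(func):
    """Decorator: prime a coroutine so it is ready for .send()."""
    def start(*args, **kwargs):
        cr = func(*args, **kwargs)
        next(cr)
        return cr
    return start

def follow(thefile, target):
    """Tail an open file, pushing each new line into target."""
    thefile.seek(0, 2)              # go to the end of the file
    while True:
        line = thefile.readline()
        if not line:
            time.sleep(0.1)
            continue
        target.send(line)

@coroutine
def grep(pattern, target):
    while True:
        line = (yield)
        if pattern in line:
            target.send(line)

@coroutine
def printer():
    while True:
        line = (yield)
        print(line, end='')

# follow(open("access-log"), grep('python', printer()))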
-- Steve
On Sat, May 9, 2015 at 1:16 PM, Steven D'Aprano steve@pearwood.info wrote:
On Sat, May 09, 2015 at 11:38:38AM -0400, Ron Adam wrote:
How about an operator for partial?
root @ mean @ map $ square(xs)
I have trouble seeing the advantage of a special function composition operator when it is easy to write a general 'compose()' function that can produce such things easily enough.
E.g. in a white paper I just did for O'Reilly on _Functional Programming in Python_ I propose this little example implementation:
def compose(*funcs):
    "Return a new function s.t. compose(f,g,...)(x) == f(g(...(x)))"
    def inner(data, funcs=funcs):
        result = data
        for f in reversed(funcs):
            result = f(result)
        return result
    return inner
Which we might use as:
RMS = compose(root, mean, square)
result = RMS(my_array)
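A quick check of the ordering, with tiny illustrative functions (not from the white paper):

add1 = lambda x: x + 1
double = lambda x: x * 2
assert compose(add1, double)(10) == add1(double(10)) == 21
assert compose(double, add1)(10) == double(add1(10)) == 22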
-- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th.
On May 9, 2015, at 2:30 PM, David Mertz mertz@gnosis.cx wrote:
On Sat, May 9, 2015 at 1:16 PM, Steven D'Aprano <steve@pearwood.info> wrote:
On Sat, May 09, 2015 at 11:38:38AM -0400, Ron Adam wrote:
How about an operator for partial?
root @ mean @ map $ square(xs)
I have trouble seeing the advantage of a special function composition operator when it is easy to write a general 'compose()' function that can produce such things easily enough.
E.g. in a white paper I just did for O'Reilly on _Functional Programming in Python_ I propose this little example implementation:
def compose(*funcs):
    "Return a new function s.t. compose(f,g,...)(x) == f(g(...(x)))"
    def inner(data, funcs=funcs):
        result = data
        for f in reversed(funcs):
            result = f(result)
        return result
    return inner
Which we might use as:
RMS = compose(root, mean, square)
result = RMS(my_array)
Maybe functools.compose?
Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On May 9, 2015, at 11:33, Donald Stufft donald@stufft.io wrote:
On May 9, 2015, at 2:30 PM, David Mertz mertz@gnosis.cx wrote:
On Sat, May 9, 2015 at 1:16 PM, Steven D'Aprano steve@pearwood.info wrote:
On Sat, May 09, 2015 at 11:38:38AM -0400, Ron Adam wrote:
How about an operator for partial?
root @ mean @ map $ square(xs)
I have trouble seeing the advantage of a special function composition operator when it is easy to write a general 'compose()' function that can produce such things easily enough.
E.g. in a white paper I just did for O'Reilly on _Functional Programming in Python_ I propose this little example implementation:
def compose(*funcs):
    "Return a new function s.t. compose(f,g,...)(x) == f(g(...(x)))"
    def inner(data, funcs=funcs):
        result = data
        for f in reversed(funcs):
            result = f(result)
        return result
    return inner
Which we might use as:
RMS = compose(root, mean, square)
result = RMS(my_array)
Maybe functools.compose?
But why?
This is trivial to write.
The nontrivial part is thinking through whether you want left or right compose, what you want to do about multiple arguments, etc. So, unless we can solve _that_ problem by showing that there is one and only one obvious answer, we don't gain anything by implementing one of the many trivial-to-implement possibilities in the stdlib.
Maybe as a recipe in the docs, it would be worth showing two different compose functions to demonstrate how easy it is to write whichever one you want (and how important it is to figure out which one you want).
On Sat, May 09, 2015 at 01:30:17PM -0500, David Mertz wrote:
On Sat, May 9, 2015 at 1:16 PM, Steven D'Aprano steve@pearwood.info wrote:
On Sat, May 09, 2015 at 11:38:38AM -0400, Ron Adam wrote:
How about an operator for partial?
root @ mean @ map $ square(xs)
I have trouble seeing the advantage of a special function composition operator when it is easy to write a general 'compose()' function that can produce such things easily enough.
Do you have trouble seeing the advantage of a special value addition operator when it is easy enough to write a general "add()" function?
wink
I think that, mentally, operators "feel" lightweight. If I write:
getattr(obj, 'method')(arg)
it puts too much emphasis on the attribute access. But using an operator:
obj.method(arg)
puts the emphasis on calling the method, not looking it up, which is just right. Even though both forms do about the same amount of work, mentally, the dot pseudo-operator feels much more lightweight.
The same with
compose(grep, filter)(data)
versus
(grep @ filter)(data)
The first sends my attention to the wrong place, the composition. The second does not.
I don't expect everyone to agree with me, but I think this explains why people keep suggesting syntax or an operator to do function composition instead of a function. Not everyone thinks this way, but for those who do, a compose() function is like eating a great big bowl of gruel that contains all the nutrients you need for the day and tastes of cardboard and smells of wet dog. It might do everything that you want functionally, but it feels wrong and looks wrong and it is not in the least bit pleasurable to use.
-- Steve
On May 9, 2015, at 08:38, Ron Adam ron3200@gmail.com wrote:
On 05/09/2015 03:21 AM, Andrew Barnert via Python-ideas wrote:
I suppose you could write (root @ mean @ (map square)) (xs),
Actually, you can't. You could write (root @ mean @ partial(map, square))(xs), but that's pretty clearly less readable than root(mean(map(square, xs))) or root(mean(x*x for x in xs)). And that's been my main argument: Without a full suite of higher-level operators and related syntax, compose alone doesn't do you any good except for toy examples.
How about an operator for partial?
root @ mean @ map $ square(xs)
I'm pretty sure that anyone who sees that and doesn't interpret it as meaningless nonsense is going to interpret it as a variation on Haskell and get the wrong intuition.
But, more importantly, this doesn't work. Your square(xs) isn't going to evaluate to a function, but to whatever calling square on xs returns. (Which is presumably a TypeError, or you wouldn't be looking to map in the first place). And, even if that did work, you're not actually composing a function here anyway; your @ is just a call operator, which we already have in Python, spelled with parens.
Actually I'd rather reuse the binary operators. (I'd be happy if they were just methods on bytes objects BTW.)
compose(root, mean, map(square, xs))
Now you're not calling square(xs), but you are calling map(square, xs), which is going to return an iterable of squares, not a function; again, you're not composing a function object at all.
And think about how you'd actually write this correctly. You need to either use lambda (which defeats the entire purpose of compose), or partial (which works, but is clumsy and ugly enough without an operator or syntactic sugar that people rarely use it).
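Spelled out with stand-in functions (sqrt and mean from the stdlib, square as a throwaway lambda; just a sketch of the two workable spellings):

from functools import partial
from math import sqrt
from statistics import mean

square = lambda x: x * x

rms1 = lambda xs: sqrt(mean(map(square, xs)))            # with a lambda / direct call
rms2 = lambda xs: sqrt(mean(partial(map, square)(xs)))   # the shape a composition would need
assert rms1([3, 4]) == rms2([3, 4]) == sqrt(12.5)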
root ^ mean ^ map & square (xs)
root ^ mean ^ map & square ^ xs ()
Read this as...
compose root, of mean, of map with square, of xs
But that's not composing. The whole point of compose is that you can compose root of mean of mapping square over some argument to be passed in later, and the result is itself a function of that argument.
What you're doing doesn't add any new abstraction, it just obfuscates normal function application.
Or...
apply(map(square, xs), mean, root)
map & square | mean | root (xs)
xs | map & square | mean | root ()
Read this as...
apply xs, to map with square, to mean, to root
These are kind of cool, but does it make python code easier to read? That seems like it may be subjective depending on the amount of programming experience someone has.
Cheers, Ron
On 05/09/2015 06:45 PM, Andrew Barnert via Python-ideas wrote:
On May 9, 2015, at 08:38, Ron Adam ron3200@gmail.com wrote:
On 05/09/2015 03:21 AM, Andrew Barnert via Python-ideas wrote:
I suppose you could write (root @ mean @ (map square)) (xs),
Actually, you can't. You could write (root @ mean @ partial(map, square))(xs), but that's pretty clearly less readable than root(mean(map(square, xs))) or root(mean(x*x for x in xs)). And that's been my main argument: Without a full suite of higher-level operators and related syntax, compose alone doesn't do you any good except for toy examples.
How about an operator for partial?
root @ mean @ map $ square(xs)
I'm pretty sure that anyone who sees that and doesn't interpret it as meaningless nonsense is going to interpret it as a variation on Haskell and get the wrong intuition.
Yes, I agree that is the problem with it.
But, more importantly, this doesn't work. Your square(xs) isn't going to evaluate to a function, but to whatever calling square on xs returns. (Which is presumably a TypeError, or you wouldn't be looking to map in the first place). And, even if that did work, you're not actually composing a function here anyway; your @ is just a call operator, which we already have in Python, spelled with parens.
This is following the patterns being discussed in the thread. (or at least an attempt to do so.)
The @ and $ above would bind more tightly than the (). Like the dot "." does for method calls. But the evaluation is from left to right at call time. The calling part does not need to be done at the same time the rest is done. Or at least that is what I got from the conversation.
f = root @ mean @ map & square
result = f(xs)
The other examples would work the same.
Actually I'd rather reuse the binary operators. (I'd be happy if they were just methods on bytes objects BTW.)
compose(root, mean, map(square, xs))
Now you're not calling square(xs), but you are calling map(square, xs), which is going to return an iterable of squares, not a function; again, you're not composing a function object at all.
Yes, this is what directly calling the functions to do the same thing would look like. Except without returning a composed function.
And think about how you'd actually write this correctly. You need to either use lambda (which defeats the entire purpose of compose), or partial (which works, but is clumsy and ugly enough without an operator or syntactic sugar that people rarely use it).
The advantage of the syntax is that it is "potentially" (a matter of opinion) an alternative to using lambda. And apparently there are a few here who think doing it with lambdas or other means is less than ideal.
Personally I'm not convinced yet either.
Cheers, Ron
On May 9, 2015, at 20:08, Ron Adam ron3200@gmail.com wrote:
On 05/09/2015 06:45 PM, Andrew Barnert via Python-ideas wrote:
On May 9, 2015, at 08:38, Ron Adam ron3200@gmail.com wrote:
On 05/09/2015 03:21 AM, Andrew Barnert via Python-ideas wrote:
I suppose you could write (root @ mean @ (map square)) (xs),
Actually, you can't. You could write (root @ mean @ partial(map, square))(xs), but that's pretty clearly less readable than root(mean(map(square, xs))) or root(mean(x*x for x in xs)). And that's been my main argument: Without a full suite of higher-level operators and related syntax, compose alone doesn't do you any good except for toy examples.
How about an operator for partial?
root @ mean @ map $ square(xs)
I'm pretty sure that anyone who sees that and doesn't interpret it as meaningless nonsense is going to interpret it as a variation on Haskell and get the wrong intuition.
Yes, I agree that is the problem with it.
But, more importantly, this doesn't work. Your square(xs) isn't going to evaluate to a function, but to whatever calling square on xs returns. (Which is presumably a TypeError, or you wouldn't be looking to map in the first place). And, even if that did work, you're not actually composing a function here anyway; your @ is just a call operator, which we already have in Python, spelled with parens.
This is following the patterns being discussed in the thread. (or at least an attempt to do so.)
The @ and $ above would bind more tightly than the (). Like the dot "." does for method calls.
@ can't bind more tightly than (). The operator already exists (that's the whole reason people are suggesting it for compose), and it has the same precedence as *.
And even if you could change that, you wouldn't want to. Just as 2 * f(a) calls f on a and then multiplies by 2, b @ f(a) will call f on a and then matrix-multiply it by b; it would be very confusing if it matrix-multiplied b and f and then called the result on a.
I think I know what you're going for here. Half the reason Haskell has an apply operator even though adjacency already means apply is so it can have different precedence from adjacency. And if you don't like that, you can define your own infix operator with a different string of symbols and a different precedence or even associativity but the same body. That allows you to play all kinds of neat tricks like what you're trying to, where you can write almost anything without parentheses and it means exactly what it looks like. Of course you can just as easily write something that means something completely different from what it looks like... But you have to actually work the operators through carefully, not just wave your hands and say "something like this"; when "this" actually doesn't mean what you want it to, you need to define a new operator that does. And, while allowing users to define enough operators to eliminate all the parens and all the lambdas works great for Haskell, I don't think it's a road that Python should follow.
But the evaluation is from left to right at call time. The calling part does not need to be done at the same time the rest is done. Or at least that is what I got from the conversation.
f = root @ mean @ map & square
result = f(xs)
But that means (root @ mean @ map) & square. Assuming you intended function.__and__ to mean partial, you have to write root @ mean @ (map & square), or create a new operator that has the precedence you want.
The other examples would work the same.
Exactly: they don't work, either because you've got the precedence wrong, or because you've got an explicit function call rather than something that defines or references a function, and it doesn't make sense to compose that (well, except when the explicit call is to a higher-order function that returns a function, but that wasn't true of any of the examples).
Actually I'd rather reuse the binary operators. (I'd be happy if they were just methods on bytes objects BTW.)
compose(root, mean, map(square, xs))
Now you're not calling square(xs), but you are calling map(square, xs), which is going to return an iterable of squares, not a function; again, you're not composing a function object at all.
Yes, this is what directly calling the functions to do the same thing would look like. Except without returning a composed function.
I don't understand what you mean. The same thing as what? Neither directly calling the functions, nor your proposed thing, returns a composed function (because, again, the last argument is not a function, it's an iterator returned by a function that you called directly).
And think about how you'd actually write this correctly. You need to either use lambda (which defeats the entire purpose of compose), or partial (which works, but is clumsy and ugly enough without an operator or syntactic sugar that people rarely use it).
The advantage of the syntax is that it is "potentially" (a matter of opinion) an alternative to using lambda.
Not your syntax. All of your examples that do anything just call a function immediately, rather than defining a function to be called later, so they can't replace uses of lambda. For example, your compose(root, mean, map(square, xs)) doesn't define a new function anywhere, so no part of it can replace a lambda.
The earlier examples actually do attempt to replace uses of lambda. Stephen's compose(root, mean, map square) returns a function. The problem with his suggestion is that map square isn't valid Python syntax--and if it were, that new syntax would be the thing that replaces a need for lambda, not the compose function. Which is obvious if you look at how you'd write that in valid Python syntax: compose(root, mean, lambda xs: map(square, xs)).
I've used the compose(...) form instead of the @ operator form, but the result is exactly the same either way.
And apparently there are a few here who think doing it with lambda's or other means is less than ideal.
I agree with them--but I don't think adding compose to Python (either as a stdlib function or as an operator) actually solves that problem. If we had auto-curried functions and adjacency as apply and a suite of HOFs like flip and custom infix operators and operator sectioning and so on, then the lack of compose would be a problem that forced people to write unnecessary lambda expressions (although still not a huge problem, since it's so trivial to write). But with none of those things, adding compose doesn't actually help you avoid lambdas, except in a few contrived cases. (And maybe in NumPy-like array processing.)
Personally I'm not convinced yet either.
Cheers, Ron
On 05/10/2015 01:24 AM, Andrew Barnert via Python-ideas wrote:
On May 9, 2015, at 20:08, Ron Adam ron3200@gmail.com wrote:
On 05/09/2015 06:45 PM, Andrew Barnert via Python-ideas wrote:
On May 9, 2015, at 08:38, Ron Adam ron3200@gmail.com wrote:
But, more importantly, this doesn't work. Your square(xs) isn't going to evaluate to a function, but to whatever calling square on xs returns. (Which is presumably a TypeError, or you wouldn't be looking to map in the first place). And, even if that did work, you're not actually composing a function here anyway; your @ is just a call operator, which we already have in Python, spelled with parens.
This is following the patterns being discussed in the thread. (or at least an attempt to do so.)
The @ and $ above would bind more tightly than the (). Like the doc "." does for method calls.
@ can't bind more tightly than (). The operator already exists (that's the whole reason people are suggesting it for compose), and it has the same precedence as *.
Yes, and so it may need different symbols to work, but there are not many easy-to-type, easy-to-read symbols left. So some two-character symbols of some sort may work.
Picking what those should be is a topic all its own, and it's not even an issue until the concept works.
I should not even have given examples earlier. The point I was trying to make was that an operator indicating the next argument is not complete may be useful. And I think the initial (or another) example implementation does do that, but uses a tuple to package the function with the partial arguments instead.
Cheers, Ron