I was thinking about the recent ideas discussed here, and I also went back to the origins of my initial idea. The point is that it came from Numpy: I use Numpy arrays every day, and typically I do exactly something like root(mean(square(data))).
Now I am thinking: what actually is a matrix? It is something that takes a vector and returns a vector. But on the other hand, elementwise functions do exactly the same. It does not really matter what we do with a vector: transform it by a product of matrices or by a composition of functions. In other words, I agree with Andrew that "elementwise" is a good match with compose, and what we really need is to "pipe" things that take a vector (or just an iterable) and return a vector (iterable).
So probably a good place (in a potential future) for compose would be not functools but itertools. But indeed, a good place to test this would be Numpy.
An additional comment: it is indeed good to have both @ and | for compose and rcompose. As a side note, one can actually overload __rmatmul__ on arrays as well, so that you can write
root @ mean @ square @ data
Moreover, one can overload __or__ on arrays, so that one can write
data | square | mean | root
even with ordinary functions (not Numpy's ufuncs or composables). These examples are actually "flat is better than nested" in the extreme form.
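For concreteness, here is a minimal sketch of the kind of overloading described above, using a hypothetical Composable wrapper and toy scalar functions (a sequence-to-scalar function like mean would need extra care, which comes up later in the thread):

```python
import math

class Composable:
    """Toy wrapper: `f @ g` composes; `f @ data` and `data | f` apply f elementwise."""
    def __init__(self, func):
        self.func = func

    def _apply(self, data):
        # Apply elementwise to a list, or directly to a scalar.
        if isinstance(data, list):
            return [self.func(x) for x in data]
        return self.func(data)

    def __matmul__(self, other):            # f @ g  or  f @ data
        if isinstance(other, Composable):
            return Composable(lambda x: self.func(other.func(x)))
        return self._apply(other)

    def __ror__(self, data):                # data | f
        return self._apply(data)

square = Composable(lambda x: x * x)
root = Composable(math.sqrt)

data = [3.0, 4.0]
print(root @ square @ data)    # composed sqrt(x*x), applied elementwise: [3.0, 4.0]
print(data | square)           # piped elementwise: [9.0, 16.0]
```

Here __ror__ is what makes `data | square` work even though plain lists know nothing about Composable: Python falls back to the right operand's reflected method.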
Anyway, they (Numpy) are going to implement the @ operator for arrays; maybe it would be a good idea to check whether the thing to the left of me (the array) is not an array but a callable, and if so apply it elementwise.
Concerning multi-argument functions: I don't like the $ symbol, I don't know why. It seems really unintuitive that it should mean partial application. One could autocurry composable functions and apply the same rules that Numpy uses for ufuncs. More precisely, if I write
add(data1, data2)
with arrays it applies add pairwise. But if I write
add(data1, 42)
it is also fine, it simply adds 42 to every element. With autocurrying one could write
root @ mean @ add(data) @ square @ data2
or
root @ mean @ square @ add(42) @ data
However, as I see it now, this is not very readable, so maybe the best choice is to reserve @ and | for "piping" iterables through transformers that take one argument. In other words, it should be left to the user to make add(42) of an appropriate type. It is the same logic as for decorators: if I write
@modify(arg)
def func(x): return None
I must make sure that modify(arg) evaluates to something that takes one callable and returns a callable.
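A toy autocurrying wrapper along these lines (a hypothetical Curried class, assuming two-argument functions for simplicity) might look like:

```python
class Curried:
    """Toy autocurry for binary functions: add(42) returns a one-argument callable."""
    def __init__(self, func, *bound):
        self.func = func
        self.bound = bound

    def __call__(self, *args):
        all_args = self.bound + args
        if len(all_args) < 2:               # not enough arguments yet: partially apply
            return Curried(self.func, *all_args)
        return self.func(*all_args)

add = Curried(lambda a, b: a + b)
print(add(1, 2))     # 3  -- normal two-argument call
print(add(42)(8))    # 50 -- add(42) is itself a one-argument transformer
```

So add(42) is exactly the "appropriate type" mentioned above: a one-argument callable that could then be piped with @ or |.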
On May 9, 2015, at 01:36, Stephen J. Turnbull stephen@xemacs.org wrote:
Andrew Barnert writes:
On May 8, 2015, at 19:58, Stephen J. Turnbull stephen@xemacs.org wrote:
Koos Zevenhoven writes:
As a random example, (root @ mean @ square)(x) would produce the right order for rms when using [2].
Hardly interesting. :-) The result is an exception, as root and square are conceptually scalar-to-scalar, while mean is sequence-to-scalar.
Unless you're using an elementwise square and an array-to-scalar mean, like the ones in NumPy,
Erm, why would square be elementwise and root not? I would suppose that everything is element-wise in Numpy (not a user yet).
Most functions in NumPy are elementwise when applied to arrays, but can also be applied to scalars. So, square is elementwise because it's called on an array, root is scalar because it's called on a scalar. (In fact, root could also be elementwise--aggregating functions like mean can be applied across just one axis of a 2D or higher array, reducing it by one dimension, if you want.)
Before you try it, this sounds like a complicated nightmare that can't possibly work in practice. But play with it for just a few minutes and it's completely natural. (Except for a few cases where you want some array-wide but not element-wise operation, most famously matrix multiplication, which is why we now have the @ operator to play with.)
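A quick NumPy session illustrates the behavior described above (assuming numpy is installed; np.sqrt and np.square stand in for root and square):

```python
import numpy as np

data = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.square(data))           # elementwise on an array: [[1, 4], [9, 16]]
print(np.sqrt(9.0))              # the same ufunc works on a scalar: 3.0
print(np.mean(data))             # aggregates the whole array: 2.5
print(np.mean(data, axis=0))     # aggregates along just one axis: [2. 3.]
print(np.sqrt(np.mean(np.square(data))))   # the root(mean(square(...))) from upthread
```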
in which case it works perfectly well...
But that's an aspect of my point (evidently, obscure). Conceptually, as taught in junior high school or so, root and square are scalar-to-scalar. If you are working in a context such as Numpy where it makes sense to assume they are element-wise and thus composable, the context should provide the compose operator(s).
I was actually thinking on these lines: what if @ didn't work on types.FunctionType, but did work on numpy.ufunc (the name for the "universal function" type that knows how to broadcast across arrays but also works on scalars)? That's something NumPy could implement without any help from the core language. (Methods are a minor problem here, but it's obvious how to solve them, so I won't get into it.) And if it turned out to be useful all over the place in NumPy, that might turn up some great uses for idiomatic non-NumPy Python, or it might show that, like elementwise addition, it's really more a part of NumPy than of Python.
But of course that's more of a proposal for NumPy than for Python.
Without that context, Koos's example looks like a TypeError.
But Koos's example, even if it was possibly inadvertent, shows that I may be wrong about that. Maybe compose together with element-wise operators actually _is_ sufficient for something beyond toy examples.
Of course it is!<wink /> I didn't really think there was any doubt about that.
I think there was, and still is. People keep coming up with abstract toy examples, but as soon as someone tries to give a good real example, it only makes sense with NumPy (Koos's) or with some syntax that Python doesn't have (yours), because to write them with actual Python functions would actually be ugly and verbose (my version of yours).
I don't think that's a coincidence. You didn't write "map square" because you don't know how to think in Python, but because using compose profitably inherently implies not thinking in Python. (Except, maybe, in the case of NumPy... which is a different idiom.) Maybe someone has a bunch of obvious good use cases for compose that don't also require other functions, operators, or syntax we don't have, but so far, nobody's mentioned one.
On 5/9/2015 6:19 AM, Andrew Barnert via Python-ideas wrote:
I think there was, and still is. People keep coming up with abstract toy examples, but as soon as someone tries to give a good real example, it only makes sense with NumPy (Koos's) or with some syntax that Python doesn't have (yours), because to write them with actual Python functions would actually be ugly and verbose (my version of yours).
I agree that @ is most likely to be useful in NumPy's restricted context.
A composition operator is usually defined by application: f@g(x) is defined as f(g(x)). (I'm sure there are also axiomatic treatments.) It is an optional syntactic abbreviation. It is most useful in a context where there is one set of data objects, such as the real numbers, or one set plus arrays (vectors) defined on that set; where all functions are univariate (or possibly multivariate, but those can be transformed to univariate on vectors); and where parameter names are dummies like 'x', 'y', 'z', or '_'.
The last point is important. Abbreviating h(x) = f(g(x)) with h = f @ g does not lose any information as 'x' is basically a placeholder (so get rid of it). But parameter names are important in most practical contexts, both for understanding a composition and for using it.
def npv(transfers, discount):
    '''Return the net present value of discounted transfers.

    transfers: finite iterable of amounts at constant intervals
    discount: fraction per interval
    '''
    divisor = 1 + discount
    return sum(transfer/divisor**time
               for time, transfer in enumerate(transfers))
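For concreteness, a quick runnable check of the function above (restating the corrected definition; the sample numbers are illustrative, not from the thread):

```python
def npv(transfers, discount):
    '''Return the net present value of discounted transfers.'''
    divisor = 1 + discount
    return sum(transfer / divisor ** time
               for time, transfer in enumerate(transfers))

# 100 now, 100 after one interval, 100 after two, discounted at 10% per interval:
print(round(npv([100, 100, 100], 0.10), 2))   # 100 + 100/1.1 + 100/1.21 = 273.55
```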
Even if one could replace the def statement with npv = <some combination of @, sum, map, add, div, power, enumerate, ...> with parameter names omitted, it would be harder to understand. Using it would require the ability to infer argument types and order from the composed expression.
I intentionally added a statement to calculate the common subexpression prior to the return. I believe it would have to be put back into the return expression before converting.
-- Terry Jan Reedy
On 05/09/2015 03:21 AM, Andrew Barnert via Python-ideas wrote:
I suppose you could write (root @ mean @ (map square)) (xs),
Actually, you can't. You could write (root @ mean @ partial(map, square))(xs), but that's pretty clearly less readable than root(mean(map(square, xs))) or root(mean(x*x for x in xs)). And that's been my main argument: without a full suite of higher-level operators and related syntax, compose alone doesn't do you any good except for toy examples.
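For reference, the minimal compose helper being assumed in this comparison could be written as follows (a hypothetical sketch; nothing like it exists in functools, and mean here is a local stand-in rather than the statistics module's):

```python
from functools import partial, reduce
import math

def compose(*funcs):
    """compose(f, g, h)(x) == f(g(h(x))): the rightmost function is applied first."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(funcs), x)

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

rms = compose(math.sqrt, mean, partial(map, lambda x: x * x))
print(rms([3, 4]))   # sqrt((9 + 16) / 2) = sqrt(12.5) ≈ 3.5355
```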
How about an operator for partial?
root @ mean @ map $ square(xs)
Actually I'd rather reuse the binary operators. (I'd be happy if they were just methods on bytes objects BTW.)
compose(root, mean, map(square, xs))
root ^ mean ^ map & square (xs)
root ^ mean ^ map & square ^ xs ()
Read this as...
compose root, of mean, of map with square, of xs
Or...
apply(map(square, xs), mean, root)
map & square | mean | root (xs)
xs | map & square | mean | root ()
Read this as...
apply xs, to map with square, to mean, to root
These are kind of cool, but do they make Python code easier to read? That seems like it may be subjective, depending on how much programming experience someone has.
Cheers, Ron
Hi, I had to answer some of these questions when I wrote Lawvere: https://pypi.python.org/pypi/lawvere
First, there are two kinds of composition, pipe and circle, so I think a single operator like @ is a bit restrictive. I like "->" and "<-".
Then, for function naming and function-to-string conversion, I had to introduce a function signature (a tuple). It provides a good tool for decomposition, introspection, and comparison consistent with the mathematical definition.
Finally, for me composition makes sense when you have typed functions; otherwise it can easily become a mess, and this makes composition tied to multiple dispatch.
I really hope composition will be introduced in Python, but I can't see how it could be done without rethinking a good part of function definition.
2015-05-09 17:38 GMT+02:00 Ron Adam ron3200@gmail.com:
On May 9, 2015, at 16:28, Ivan Levkivskyi levkivskyi@gmail.com wrote:
So probably a good place (in a potential future) for compose would be not functools but itertools. But indeed, a good place to test this would be Numpy.
Itertools is an interesting idea.
Anyway, assuming NumPy isn't going to add this in the near future (has anyone even brought it up on the NumPy list, or only here?), it wouldn't be that hard to write a (maybe inefficient but working) @composable wrapper and wrap all the relevant callables from NumPy or from itertools, upload it to PyPI, and let people start coming up with good examples. If it's later worth direct support in NumPy and/or Python (for simplicity or performance), the module will still be useful for backward compatibility.
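Such a wrapper could start as small as this (a hypothetical sketch of the @composable decorator idea, not an existing PyPI package):

```python
class composable:
    """Decorator sketch: wrapped callables still call normally, and @ composes them."""
    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)

    def __matmul__(self, other):        # (f @ g)(x) == f(g(x))
        return composable(lambda *a, **k: self.func(other(*a, **k)))

@composable
def double(x):
    return 2 * x

@composable
def inc(x):
    return x + 1

print((double @ inc)(10))   # double(inc(10)) == 22
```

Wrapping NumPy callables would just mean applying the same decorator to np.sqrt, np.mean, and friends, possibly inefficiently, but enough to collect real examples.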
An additional comment: it is indeed good to have both @ and | for compose and rcompose. As a side note, one can actually overload __rmatmul__ on arrays as well, so that you can write
root @ mean @ square @ data
But this doesn't need to overload it on arrays, only on the ufuncs, right?
Unless you're suggesting that one of these operations could be a matrix as easily as a function, and NumPy users often won't have to care which it is?
Moreover, one can overload __or__ on arrays, so that one can write
data | square | mean | root
even with ordinary functions (not Numpy's ufuncs or composables).
That's an interesting point. But I think this will be a bit confusing, because now it _does_ matter whether square is a matrix or a function--you'll get elementwise bitwise or instead of application. (And really, this is the whole reason for @ in the first place--we needed an operator that never means elementwise.)
Also, this doesn't let you actually compose functions--if you want square | mean | root to be a function, square has to have a __or__ operator.
Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
On 10 May 2015 at 02:05, Andrew Barnert abarnert@yahoo.com wrote:
On May 9, 2015, at 16:28, Ivan Levkivskyi levkivskyi@gmail.com wrote:
So probably a good place (in a potential future) for compose would be not functools but itertools. But indeed, a good place to test this would be Numpy.
Itertools is an interesting idea.
Anyway, assuming NumPy isn't going to add this in the near future (has anyone even brought it up on the NumPy list, or only here?), it wouldn't be that hard to write a (maybe inefficient but working) @composable wrapper and wrap all the relevant callables from NumPy or from itertools, upload it to PyPI, and let people start coming up with good examples. If it's later worth direct support in NumPy and/or Python (for simplicity or performance), the module will still be useful for backward compatibility.
This is a good step-by-step approach. This is what I would try.
An additional comment: it is indeed good to have both @ and | for compose and rcompose. As a side note, one can actually overload __rmatmul__ on arrays as well, so that you can write
root @ mean @ square @ data
But this doesn't need to overload it on arrays, only on the ufuncs, right?
Unless you're suggesting that one of these operations could be a matrix as easily as a function, and NumPy users often won't have to care which it is?
Exactly, this is what I want. Note that with such an approach you have no parentheses at all.
Moreover, one can overload __or__ on arrays, so that one can write
data | square | mean | root
even with ordinary functions (not Numpy's ufuncs or composables).
That's an interesting point. But I think this will be a bit confusing, because now it _does_ matter whether square is a matrix or a function--you'll get elementwise bitwise or instead of application. (And really, this is the whole reason for @ in the first place--we needed an operator that never means elementwise.)
Also, this doesn't let you actually compose functions--if you want square | mean | root to be a function, square has to have a __or__ operator.
This is true. The | is more limited because of its current semantics. The fact that the | operator already has widely used semantics is also why I would choose @ if I had to pick only one of @ and |.
These examples are actually "flat is better than nested" in the extreme form.
Anyway, they (Numpy) are going to implement the @ operator for arrays, may be it would be a good idea to check that if something on the left from me (array) is not an array but a callable then apply it elementwise.
Concerning the multi-argument functions, I don't like $ symbol, don't know why. It seems really unintuitive why it means partial application. One can autocurry composable functions and apply same rules that Numpy uses for ufuncs. More precisely, if I write
add(data1, data2)
with arrays it applies add pairwise. But if I write
add(data1, 42)
it is also fine, it simply adds 42 to every element. With autocurrying one could write
root @ mean @ add(data) @ square @ data2
or
root @ mean @ square @ add(42) @ data
However, as I see it now it is not very readable, so that may be the best choise is to reserve @ and | for "piping" iterables through transformers that take one argument. In other words it should be left to user to make add(42) of an appropriate type. It is the same logic as for decorators, if I write
@modify(arg) def func(x): return None
I must care that modify(arg) evaluates to something that takes one callable and returns a callable.
On May 9, 2015, at 01:36, Stephen J. Turnbull stephen@xemacs.org wrote:
>
Andrew Barnert writes:
On May 8, 2015, at 19:58, Stephen J. Turnbull stephen@xemacs.org wrote:
Koos Zevenhoven writes:
As a random example, (root @ mean @ square)(x) would produce the right order for rms when using [2].
Hardly interesting. :-) The result is an exception, as root and square are conceptually scalar-to-scalar, while mean is sequence-to-scalar.
Unless you're using an elementwise square and an array-to-scalar mean, like the ones in NumPy,
Erm, why would square be elementwise and root not? I would suppose that everything is element-wise in Numpy (not a user yet).
Most functions in NumPy are elementwise when applied to arrays, but can also be applied to scalars. So, square is elementwise because it's called on an array, root is scalar because it's called on a scalar. (In fact, root could also be elementwise--aggregating functions like mean can be applied across just one axis of a 2D or higher array, reducing it by one dimension, if you want.)
Before you try it, this sounds like a complicated nightmare that can't possibly work in practice. But play with it for just a few minutes and it's completely natural. (Except for a few cases where you want some array-wide but not element-wise operation, most famously matrix multiplication, which is why we now have the @ operator to play with.)
in which case it works perfectly well...
But that's an aspect of my point (evidently, obscure). Conceptually, as taught in junior high school or so, root and square are scalar-to- scalar. If you are working in a context such as Numpy where it makes sense to assume they are element-wise and thus composable, the context should provide the compose operator(s).
I was actually thinking on these lines: what if @ didn't work on types.FunctionType, but did work on numpy.ufunc (the name for the "universal function" type that knows how to broadcast across arrays but also work on scalars)? That's something NumPy could implement without any help from the core language. (Methods are a minor problem here, but it's obvious how to solve them, so I won't get into it.) And if it turned out to be useful all over the place in NumPy, that might turn up some great uses for the idiomatic non-NumPy Python, or it might show that, like elementwise addition, it's really more a part of NumPy than of Python.
But of course that's more of a proposal for NumPy than for Python.
Without that context, Koos's example looks like a TypeError.
But Koos's example, even if it was possibly inadvertent, shows that I may be wrong about that. Maybe compose together with element-wise operators actually _is_ sufficient for something beyond toy examples.
Of course it is!<wink /> I didn't really think there was any doubt about that.
I think there was, and still is. People keep coming up with abstract toy examples, but as soon as someone tries to give a good real example, it only makes sense with NumPy (Koos's) or with some syntax that Python doesn't have (yours), because to write them with actual Python functions would actually be ugly and verbose (my version of yours).
I don't think that's a coincidence. You didn't write "map square" because you don't know how to think in Python, but because using compose profitably inherently implies not thinking in Python. (Except, maybe, in the case of NumPy... which is a different idiom.) Maybe someone has a bunch of obvious good use cases for compose that don't also require other functions, operators, or syntax we don't have, but so far, nobody's mentioned one.
On 5/9/2015 6:19 AM, Andrew Barnert via Python-ideas wrote:
I think there was, and still is. People keep coming up with abstract toy examples, but as soon as someone tries to give a good real example, it only makes sense with NumPy (Koos's) or with some syntax that Python doesn't have (yours), because to write them with actual Python functions would actually be ugly and verbose (my version of yours).
I don't think that's a coincidence. You didn't write "map square" because you don't know how to think in Python, but because using compose profitably inherently implies not thinking in Python. (Except, maybe, in the case of NumPy... which is a different idiom.) Maybe someone has a bunch of obvious good use cases for compose that don't also require other functions, operators, or syntax we don't have, but so far, nobody's mentioned one.
I agree that @ is most likely to be useful in numpy's restricted context.
A composition operator is usually defined by application: f@g(x) is defined as f(g(x)). (I'm sure there are also axiomatic treatments.) It is an optional syntactic abbreviation. It is most useful in a context where there is one set of data objects, such as the real numbers, or one set plus arrays (vectors) defined on that set; where all functions are univariate (or possibly multivariate, but those can be transformed to univariate on vectors); and where parameter names are dummies like 'x', 'y', 'z', or '_'.
The last point is important. Abbreviating h(x) = f(g(x)) with h = f @ g does not lose any information as 'x' is basically a placeholder (so get rid of it). But parameter names are important in most practical contexts, both for understanding a composition and for using it.
def npv(transfers, discount):
    '''Return the net present value of discounted transfers.

    transfers: finite iterable of amounts at constant intervals
    discount: fraction per interval
    '''
    divisor = 1 + discount
    return sum(transfer/divisor**time
               for time, transfer in enumerate(transfers))
Even if one could replace the def statement with npv = <some combination of @, sum, map, add, div, power, enumerate, ...> with parameter names omitted, it would be harder to understand. Using it would require the ability to infer argument types and order from the composed expression.
I intentionally added a statement to calculate the common subexpression prior to the return. I believe it would have to be put back into the return expression before converting.
-- Terry Jan Reedy
On 05/09/2015 03:21 AM, Andrew Barnert via Python-ideas wrote:
I suppose you could write (root @ mean @ (map square)) (xs),
Actually, you can't. You could write (root @ mean @ partial(map, square))(xs), but that's pretty clearly less readable than root(mean(map(square, xs))) or root(mean(x*x for x in xs)). And that's been my main argument: Without a full suite of higher-level operators and related syntax, compose alone doesn't do you any good except for toy examples.
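For what it's worth, the partial-based version does work once a compose helper is supplied; a minimal sketch (this compose is hypothetical — nothing like it exists in functools):

```python
import math
from functools import partial, reduce

def compose(*funcs):
    """compose(f, g, h)(x) == f(g(h(x))) -- right-to-left, as @ would read."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(funcs), x)

def square(v):
    return v * v

def mean(xs):
    xs = list(xs)          # accept the iterator that map() returns
    return sum(xs) / len(xs)

rms = compose(math.sqrt, mean, partial(map, square))
```

Here `rms([3, 4])` gives the same result as `root(mean(map(square, [3, 4])))` — though, as argued above, the spelled-out call may well remain the more readable form.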
How about an operator for partial?
root @ mean @ map $ square(xs)
Actually I'd rather reuse the binary operators. (I'd be happy if they were just methods on bytes objects BTW.)
compose(root, mean, map(square, xs))
root ^ mean ^ map & square (xs)
root ^ mean ^ map & square ^ xs ()
Read this as...
compose root, of mean, of map with square, of xs
Or...
apply(map(square, xs), mean, root)
map & square | mean | root (xs)
xs | map & square | mean | root ()
Read this as...
apply xs, to map with square, to mean, to root
These are kind of cool, but does it make python code easier to read? That seems like it may be subjective depending on the amount of programming experience someone has.
Cheers, Ron
Hi, I had to answer some of these questions when I wrote Lawvere: https://pypi.python.org/pypi/lawvere
First, there are two kinds of composition: pipe and circle, so I think a single operator like @ is a bit restrictive. I like "->" and "<-".
Then, for function naming and function-to-string conversion I had to introduce a function signature (a tuple). It provides a good tool for decomposition, introspection and comparison, in keeping with the mathematical definition.
Finally, for me composition makes sense when you have typed functions; otherwise it can easily become a mess, and this makes composition tied to multiple dispatch.
I really hope composition will be introduced in Python, but I can't see how it could be done without rethinking a good part of function definition.
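The two directions mentioned here can coexist on a single wrapper; a toy sketch using @ for circle-style and | for pipe-style composition (the Arrow name is invented for illustration):

```python
class Arrow:
    """Toy composable function supporting both composition directions."""
    def __init__(self, func):
        self.func = func

    def __call__(self, x):
        return self.func(x)

    def __matmul__(self, other):   # circle: (f @ g)(x) == f(g(x))
        return Arrow(lambda x: self.func(other.func(x)))

    def __or__(self, other):       # pipe: (f | g)(x) == g(f(x))
        return Arrow(lambda x: other.func(self.func(x)))

inc = Arrow(lambda x: x + 1)
dbl = Arrow(lambda x: x * 2)
```

`(inc @ dbl)(5)` applies dbl first, giving 11, while `(inc | dbl)(5)` applies inc first, giving 12 — the same pair of functions, read in opposite orders.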
2015-05-09 17:38 GMT+02:00 Ron Adam ron3200@gmail.com:
On May 10, 2015, at 00:13, Ivan Levkivskyi levkivskyi@gmail.com wrote:
On 10 May 2015 at 02:05, Andrew Barnert abarnert@yahoo.com wrote:
On May 9, 2015, at 16:28, Ivan Levkivskyi levkivskyi@gmail.com wrote:
I was thinking about the recent ideas discussed here. I also returned to the origins of my initial idea. The point is that it came from NumPy: I use NumPy arrays every day, and typically I do exactly something like root(mean(square(data))).
Now I am thinking: what actually is a matrix? It is something that takes a vector and returns a vector. But on the other hand, elementwise functions do exactly the same. It does not really matter what we do with a vector: transform it by a product of matrices or by a composition of functions. In other words, I agree with Andrew that "elementwise" is a good match with compose, and what we really need is to "pipe" things that take a vector (or just an iterable) and return a vector (iterable).
So probably a good place (in a potential future) for compose would be not functools but itertools. But indeed a good place to test this would be NumPy.
Itertools is an interesting idea.
Anyway, assuming NumPy isn't going to add this in the near future (has anyone even brought it up on the NumPy list, or only here?), it wouldn't be that hard to write a (maybe inefficient but working) @composable wrapper and wrap all the relevant callables from NumPy or from itertools, upload it to PyPI, and let people start coming up with good examples. If it's later worth direct support in NumPy and/or Python (for simplicity or performance), the module will still be useful for backward compatibility.
This is a good step-by-step approach. This is what I would try.
An additional comment: it is indeed good to have both @ and | for compose and rcompose. Side note, one can actually overload __rmatmul__ on arrays as well so that you can write
root @ mean @ square @ data
But this doesn't need to overload it on arrays, only on the ufuncs, right?
Unless you're suggesting that one of these operations could be a matrix as easily as a function, and NumPy users often won't have to care which it is?
Exactly, this is what I want. Note that in such approach you have no parentheses at all.
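A sketch of how that parenthesis-free chain could work, again with plain lists standing in for arrays (Fn and Vec are invented names, and the elementwise flag is a crude stand-in for NumPy's ufunc broadcasting):

```python
import math

class Fn:
    """Composable callable: f @ g composes; f @ data returns
    NotImplemented so the array's __rmatmul__ applies the pipeline."""
    def __init__(self, func, elementwise=False):
        self.func = func
        self.elementwise = elementwise

    def __call__(self, x):
        if self.elementwise and isinstance(x, list):
            return [self.func(e) for e in x]
        return self.func(x)

    def __matmul__(self, other):
        if not callable(other):
            return NotImplemented   # let the array's __rmatmul__ take over
        return Fn(lambda x: self(other(x)))

class Vec(list):
    """Toy array: callable @ Vec applies the callable to the data."""
    def __rmatmul__(self, func):
        return func(self)

square = Fn(lambda v: v * v, elementwise=True)
root = Fn(math.sqrt, elementwise=True)
mean = Fn(lambda xs: sum(xs) / len(xs))

data = Vec([3, 4])
result = root @ mean @ square @ data   # no parentheses anywhere
```

Because @ associates left, the three functions fold into one composite before the final `@ data` hands control to `Vec.__rmatmul__`, which runs the whole pipeline; `result` is `math.sqrt(12.5)`.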
It's worth working up some practical examples here.
Annoyingly, I actually had a perfect example a few years ago, but I can't find it. I'm sure you can imagine what it was. We had built-in vector transforms implemented as functions, a way for a user to input new transforms as matrices, and a way for the user to chain built-in and user-defined transforms. Under the covers, we had to wrap each user transform in a function just so they'd all be callables, which led to a couple of annoying debugging sessions and probably a performance hit. If we could compose them interchangeably, that might have avoided those problems. But if I can't find the code, it's hard to say for sure, so now I'm offering the same vague, untestable use cases that I was complaining about. :)
Moreover, one can overload __or__ on arrays, so that one can write
data | square | mean | root
even with ordinary functions (not NumPy's ufuncs or composables).
That's an interesting point. But I think this will be a bit confusing, because now it _does_ matter whether square is a matrix or a function--you'll get elementwise bitwise or instead of application. (And really, this is the whole reason for @ in the first place--we needed an operator that never means elementwise.)
Also, this doesn't let you actually compose functions--if you want square | mean | root to be a function, square has to have a __or__ operator.
This is true. The | is more limited because of its current semantics. The fact that the | operator already has widely used semantics is also why I would choose @ if I needed to pick only one of @ or |.
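To see both the appeal and the fragility of the | spelling, here is a toy pipe that accepts ordinary functions, falling back to elementwise application when whole-array application fails (the Box wrapper and the try/except dispatch are crude illustrative assumptions — real NumPy would inspect for ufuncs rather than catch TypeErrors):

```python
import math

class Box:
    """Toy piped value: data | f applies f and re-wraps the result."""
    def __init__(self, value):
        self.value = value

    def __or__(self, func):
        v = self.value
        if isinstance(v, list):
            try:
                return Box(func(v))               # whole-array (e.g. mean)
            except TypeError:
                return Box([func(e) for e in v])  # elementwise fallback
        return Box(func(v))

def square(v):
    return v * v

def mean(xs):
    return sum(xs) / len(xs)

out = Box([3, 4]) | square | mean | math.sqrt
```

`out.value` holds the rms, and the chain reads flat left-to-right; but since | already means elementwise bitwise-or on real arrays, this spelling stays ambiguous in a way @ is not.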
These examples are actually "flat is better than nested" in the extreme form.
Anyway, they (NumPy) are going to implement the @ operator for arrays; maybe it would be a good idea to check whether the operand to the left of the array is not an array but a callable, and in that case apply it elementwise.
Concerning multi-argument functions: I don't like the $ symbol, I don't know why. It seems really unintuitive that it means partial application. One could autocurry composable functions and apply the same rules that NumPy uses for ufuncs. More precisely, if I write
add(data1, data2)
with arrays it applies add pairwise. But if I write
add(data1, 42)
it is also fine, it simply adds 42 to every element. With autocurrying one could write
root @ mean @ add(data) @ square @ data2
or
root @ mean @ square @ add(42) @ data
However, as I see it now, it is not very readable, so maybe the best choice is to reserve @ and | for "piping" iterables through transformers that take one argument. In other words, it should be left to the user to make add(42) of an appropriate type. It is the same logic as for decorators: if I write
@modify(arg)
def func(x):
    return None
I must care that modify(arg) evaluates to something that takes one callable and returns a callable.
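Ivan's ufunc-style rules plus autocurrying can be sketched like this (broadcast_add and the two-vs-one argument convention are illustrative assumptions; real ufuncs handle many more cases):

```python
def broadcast_add(a, b):
    """Pairwise when both are lists, broadcast when one is a scalar."""
    if isinstance(a, list) and isinstance(b, list):
        return [x + y for x, y in zip(a, b)]
    if isinstance(a, list):
        return [x + b for x in a]
    if isinstance(b, list):
        return [a + y for y in b]
    return a + b

def add(*args):
    """Autocurried: add(x, y) applies immediately; add(x) returns a
    one-argument callable ready to drop into a composition chain."""
    if len(args) == 2:
        return broadcast_add(*args)
    (x,) = args
    return lambda y: broadcast_add(x, y)
```

So `add([1, 2], [3, 4])` gives pairwise sums, `add([1, 2], 42)` broadcasts the scalar, and `add(42)` is exactly the kind of one-argument transformer that the add(42) step in `root @ mean @ square @ add(42) @ data` requires.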
On May 9, 2015, at 01:36, Stephen J. Turnbull stephen@xemacs.org wrote:
Andrew Barnert writes:
> On May 8, 2015, at 19:58, Stephen J. Turnbull stephen@xemacs.org wrote: > > Koos Zevenhoven writes: > >> As a random example, (root @ mean @ square)(x) would produce the right >> order for rms when using [2]. > > Hardly interesting. :-) The result is an exception, as root and square > are conceptually scalar-to-scalar, while mean is sequence-to-scalar.
Unless you're using an elementwise square and an array-to-scalar mean, like the ones in NumPy,
Erm, why would square be elementwise and root not? I would suppose that everything is element-wise in Numpy (not a user yet).
Most functions in NumPy are elementwise when applied to arrays, but can also be applied to scalars. So, square is elementwise because it's called on an array, root is scalar because it's called on a scalar. (In fact, root could also be elementwise--aggregating functions like mean can be applied across just one axis of a 2D or higher array, reducing it by one dimension, if you want.)
On Sun, May 10, 2015 at 01:18:02AM -0700, Andrew Barnert via Python-ideas wrote: [...]
Not picking on Andrew specifically, but could folks please trim their replies occasionally to keep the amount of quoted text manageable?
Andrew's post is about 10 pages of mostly-quoted text (depending on how you count pages, mutt claims it's 14 but I think it means screenfuls, not pages), and I'm seeing up to nine levels of quoting:
>>> As a random example, (root @ mean @ square)(x) would produce the right
>>> order for rms when using [2].
Thanks in advance.
-- Steve