Partial operator (and 'third-party methods' and 'piping') [was Re: Function composition (was no subject)]
Reading the recent emails in the function composition thread started by Ivan, I realized that my below sketch for a composition operator would be better if it did not actually do function composition ;). Instead, -> would be quite powerful as 'just' a partial operator -- perhaps even more powerful, as I demonstrate below. However, this is not an argument against @ composition, which might in fact play together with this quite nicely.
This allows some nice things with multi-argument functions too.
I realize that it may be unlikely that a new operator would be added, but here it is anyway, as food for thought. (With an existing operator, I suspect it would be even less likely, because of precedence rules : )
So, -> would be an operator with a precedence similar to .attribute access (but lower than .attribute):
# The simple definition of what it does:
arg->func    # equivalent to functools.partial(func, arg)
This would allow, for instance:
arg -> spam() -> cheese(kind='gouda') -> eggs()
which would be equivalent to:
eggs(cheese(spam(arg), kind='gouda'))
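The proposed -> is not valid Python, but its semantics can be sketched today with functools.partial plus a small hypothetical wrapper that overloads >> to stand in for the arrow; Piped, spam, cheese, and eggs below are illustrative names, not part of the proposal:

```python
from functools import partial

# Toy single-purpose functions that record how they were called.
def spam(a):
    return f"spam({a})"

def cheese(a, kind):
    return f"cheese({a}, kind={kind})"

def eggs(a):
    return f"eggs({a})"

# Hypothetical helper: Piped(x) >> f stands in for x->f() in the proposal,
# i.e. each step immediately applies the function to the piped value.
class Piped:
    def __init__(self, value):
        self.value = value

    def __rshift__(self, func):
        return Piped(func(self.value))

# arg -> spam() -> cheese(kind='gouda') -> eggs()
result = (Piped("arg") >> spam >> partial(cheese, kind="gouda") >> eggs).value
assert result == eggs(cheese(spam("arg"), kind="gouda"))
```

The partial(cheese, kind="gouda") step shows how extra arguments at one pipeline stage map onto functools.partial in this sketch.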
Or even together with the proposed @ composition:
rms = root @ mean @ square->map    # for an iterable non-numpy argument
And here's something I find quite interesting. Together with @singledispatch from 3.4 (or possibly an enhanced version using type annotations in the future?), one could add 'third-party methods' to classes in other libraries without monkey patching. A dummy example:
from numpy import array
my_list = [1, 2, 3]
my_array = array(my_list)
my_mean = my_array.mean()    # This currently works in numpy
from rmslib import rms
my_rms = my_array->rms()        # efficient rms for numpy arrays
my_other_rms = my_list->rms()   # rms that works on any iterable
One would be able to distinguish between calls to methods and 'third-party methods' based on whether . or -> is used for accessing them, which I think is a good thing. Also, third-party methods would be less likely to mutate the object, just like func(obj) is less likely to mutate obj than obj.method().
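The dispatch half of this idea needs no new syntax: functools.singledispatch exists since 3.4. A minimal sketch without numpy, where the tuple registration stands in for the type-specialized path a hypothetical rmslib would register for arrays:

```python
from functools import singledispatch
from math import sqrt

@singledispatch
def rms(data):
    # Generic path: works on any iterable of numbers.
    values = list(data)
    return sqrt(sum(x * x for x in values) / len(values))

@rms.register
def _(data: tuple):
    # Stand-in for a type-specialized implementation (e.g. the efficient
    # numpy-array version the hypothetical rmslib would register).
    return sqrt(sum(x * x for x in data) / len(data))

assert rms([3, 4]) == rms((3, 4))    # both paths agree
```

What the proposal adds on top is only the method-like call spelling my_list->rms(); the dispatch itself works today.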
See more examples below. I converted my examples from last night to this IMO better version, because at least some of them would still be relevant.
On 10.5.2015 2:07, Koos Zevenhoven wrote:
On 10.5.2015 1:03, Gregory Salvan wrote:
Nobody convinced by arrow operator ?
like: arg -> spam -> eggs -> cheese or cheese <- eggs <- spam <- arg
I like | a lot because of the pipe analogy. However, having a new operator for this could solve some issues about operator precedence.
Today, I sketched one possible version that would use a new .. operator. I'll explain what it would do (but with your -> instead of my ..)
Here, the operator (.. or ->) would have a higher precedence than function calls () but a lower precedence than attribute access (obj.attr).
First, with single-argument functions spam, eggs and cheese, and a non-function arg:
arg->eggs->spam->cheese() # equivalent to cheese(spam(eggs(arg)))
With -> as a partial operator, this would instead be:
arg->eggs()->spam()->cheese() # equivalent to cheese(spam(eggs(arg)))
eggs->spam->cheese # equivalent to lambda arg: cheese(spam(eggs(arg)))
With -> as a partial operator this could be:
lambda arg: arg->eggs()->spam()->cheese()
Then, if spam and eggs both took two arguments: eggs(arg1, arg2), spam(arg1, arg2)
arg->eggs              # equivalent to partial(eggs, arg)
eggs->spam(a, b, c)    # equivalent to spam(eggs(a, b), c)
With -> as a partial operator, the first one would work, and the second would become:
eggs(a,b)->spam(c) # equivalent to spam(eggs(a, b), c)
arg->eggs->spam(b,c) # equivalent to spam(eggs(arg, b), c)
This would become:
arg->eggs(b)->spam(c) # equivalent to spam(eggs(arg, b), c)
Note that this would be quite flexible in partial 'piping' of multi-argument functions.
So you could think of -> as an extended partial operator. And this would
naturally generalize to functions with even more arguments. The arguments would always be fed in the same order as in the equivalent function call, which makes for a nice rule of thumb. However, I suppose one would usually avoid combinations that are difficult to understand.
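The argument-order rule can be checked with plain functools.partial; eggs and spam here are toy stand-ins that just record their call order:

```python
from functools import partial

def eggs(a, b):
    return f"eggs({a}, {b})"

def spam(a, b):
    return f"spam({a}, {b})"

# arg->eggs(b)->spam(c) desugars step by step:
step1 = partial(eggs, "arg")("b")   # arg->eggs(b)  ==  eggs(arg, b)
step2 = partial(spam, step1)("c")   # ...->spam(c)  ==  spam(eggs(arg, b), c)

assert step2 == spam(eggs("arg", "b"), "c")
```

Each arrow binds the value so far as the first argument of the next function, and the call parentheses supply the rest, in order.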
Some examples that this would enable:
# Example 1
from numpy import square, mean, sqrt
rms = square->mean->sqrt    # I think this order is fine because it is not @
This would become:
def rms(arr): return arr->square()->mean()->sqrt()
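The desugared chain is writable in plain Python today; numpy's square is elementwise, so map fills that role here, and statistics.mean stands in for numpy's mean (a sketch, not the proposal itself):

```python
from math import sqrt
from statistics import mean

def square(x):
    return x * x

def rms(arr):
    # arr->square()->mean()->sqrt() in proposal terms desugars to
    # sqrt(mean(map(square, arr))), with map supplying elementwise square.
    return sqrt(mean(map(square, arr)))

assert abs(rms([3, 4]) - sqrt(12.5)) < 1e-12
```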
# Example 2 (both are equivalent)
spam(args)->eggs->cheese() # the shell-syntax analogy that Steven mentioned.
This would be:
spam(args)->eggs()->cheese()
Of course the shell piping analogy would be quite far, because it looks so different.
# Example 3
# Last but not least, we would finally have this :) some_sequence->len() some_object->isinstance(MyType)
And:
func->map(seq)
func->reduce(seq)
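These last examples are all expressible with functools.partial already, which makes the claimed equivalences easy to check:

```python
from functools import partial, reduce
import operator

# some_sequence->len() would be partial(len, some_sequence)()
n = partial(len, [1, 2, 3])()

# func->map(seq) would be partial(map, func)(seq), i.e. map(func, seq)
upper = list(partial(map, str.upper)(["a", "b"]))

# func->reduce(seq) would be partial(reduce, func)(seq)
total = partial(reduce, operator.add)([1, 2, 3])

assert n == 3 and upper == ["A", "B"] and total == 6
```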
-- Koos
_______________________________________________ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
Hi Gregory,

Did you look at the new version carefully? If I understand the problem you are describing (mentioned also by Steven), my previous version had that issue, but the new one does not. That is why I added examples with callable arguments :).

-- Koos

On 11.5.2015 0:23, Gregory Salvan wrote:
In my opinion, this syntax makes problems when your arguments are functions/callables. And if you code in a functional paradigm, it is quite common to inject functions as arguments; otherwise, how would you do polymorphism?
The only way I see to distinguish the cases is to have tuples, but the syntax is quite strange.
instead of: arg->eggs(b)->spam(c)
my_partial = (arg, b)->eggs->(c, )->spam
Then how would you call my_partial? For example, if you have:
def eggs(a, b, c): ...
def spam(d, e): ...
my_partial(c, e) or my_partial(c)(e)?
Nope, sorry, I misread your code, but it changes nothing.
For example, with spam(args)->eggs()->cheese(), if instead you have:
args = something
spam = lambda: args
should spam()->eggs()->cheese() be treated as cheese(eggs(spam())), as cheese(eggs(args)), or as partial(cheese) composed with partial(eggs) composed with partial(spam)?
I don't find this syntax convenient, sorry.
On Sun, May 10, 2015 at 11:06:21PM +0300, Koos Zevenhoven wrote:
So, -> would be an operator with a precedence similar to .attribute access (but lower than .attribute):
Dot . is not an operator. If I remember correctly, the docs describe it as a delimiter.
# The simple definition of what it does: arg->func # equivalent to functools.partial(func, arg)
I believe you require that -> is applied before function application, so:
arg->func         # returns partial(func, arg)
arg->func(x)      # returns partial(func, arg)(x)
arg->(func(x))    # returns partial(func(x), arg)
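These three readings can be checked directly with functools.partial; func and maker below are toy stand-ins (the third reading needs an expression, like maker(x), that itself evaluates to a callable):

```python
from functools import partial

def func(a, b):
    return (a, b)

# arg->func      would be partial(func, arg)
# arg->func(x)   would be partial(func, arg)(x), i.e. func(arg, x)
pair1 = partial(func, "arg")("x")
assert pair1 == func("arg", "x") == ("arg", "x")

# arg->(func(x)) would be partial(func(x), arg): func(x) must be callable
def maker(x):
    def inner(a):
        return (x, a)
    return inner

pair2 = partial(maker("x"), "arg")()
assert pair2 == ("x", "arg")
```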
This would allow for instance: arg -> spam() -> cheese(kind = 'gouda') -> eggs()
I am having a lot of difficulty seeing that as anything other than "call spam with no arguments, then apply arg to the result". But, teasing it apart with the precedence I established above:

arg->spam()        # returns partial(spam, arg)() == spam(arg)
""" -> cheese      # returns partial(cheese, spam(arg))
""" (kind='gouda')
                   # returns partial(cheese, spam(arg))(kind='gouda')
                   # == cheese(spam(arg), kind='gouda')
""" -> eggs        # returns partial(eggs, cheese(spam(arg), kind='gouda'))
""" ()             # calls the previous partial, with no arguments, giving:
                   # partial(eggs, cheese(spam(arg), kind='gouda'))()
                   # == eggs(cheese(spam(arg), kind='gouda'))
which would be equivalent to eggs(cheese(spam(arg), kind = 'gouda'))
Amazingly, you are correct! :-)

I think this demonstrates an abuse of partial and the sort of thing that gives functional idioms a bad name. To tease this apart and understand what it does was very difficult for me. And I don't understand the point of creating partial applications that you are then immediately going to call; that just adds an extra layer of indirection to slow the code down. If you write partial(len, 'foo')() instead of just len('foo'), something has gone drastically wrong.

So instead of:
arg->spam()->cheese(kind='gouda')->eggs()
which includes *three* partial objects which are immediately called, wouldn't it be easier to just call the functions in the first place?
eggs(cheese(spam(arg), kind='gouda'))
It will certainly be more efficient!

Let's run through a simple chain with no parens:
a -> b            # partial(b, a)
a -> b -> c       # partial(c, partial(b, a))
a -> b -> c -> d  # partial(d, partial(c, partial(b, a)))
I'm not seeing why I would want to write something like that.

Let's apply multiple arguments:
a -> func                # partial(func, a)
b -> (a -> func)         # partial(partial(func, a), b)
c -> (b -> (a -> func))  # partial(partial(partial(func, a), b), c)

Perhaps a sufficiently clever implementation of partial could optimize partial(partial(func, a), b) to just a single layer of indirection, partial(func, a, b), so it's not *necessarily* as awful as it looks. (I would expect a function composition operator to do the same.)

Note that we have to write the second argument first, and bracket the second arrow clause. Writing it the "obvious" way is wrong:
a -> b -> func    # partial(func, partial(b, a))

I think this is imaginative but hard to read, hard to understand, hard to use correctly, inefficient, and even if used correctly, there are not very many times that you would need it.
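The "sufficiently clever implementation" aside can be tested: CPython's functools.partial does flatten one nested positional layer in its constructor (an observation about current CPython behavior, not part of the proposal):

```python
from functools import partial

# b -> (a -> func) in the proposal's terms: a nested partial.
p = partial(partial(max, 1), 2)

# partial's constructor unwraps a partial passed as func, so only a
# single layer of indirection remains:
assert p.func is max
assert p.args == (1, 2)
assert p(3) == max(1, 2, 3) == 3
```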
Or even together with the proposed @ composition: rms = root @ mean @ square->map # for an iterable non-numpy argument
I think that a single arrow may be reasonable as syntactic sugar for partial, but once you start chaining them, it all falls apart into a mess. That, in my mind, is a sign that the idea doesn't scale.

We can chain dots with no problem:
fe.fi.fo.fum
and function calls in numerous ways:
foo(bar(baz()))
foo(bar)(baz)
and although they can get hard to read just because of the sheer number of components, they are not conceptually difficult. But chaining arrows is conceptually difficult even with as few as two arrows.

I think the problem here is that partial application is an N-ary operation. This is not Haskell, where single-argument currying is enforced everywhere! You're trying to perform something which conceptually takes N arguments, partial(func, 1, 2, 3, ..., N), using only an operator which can take two arguments, a->b. Things are going to get messy.
And here's something I find quite interesting. Together with @singledispatch from 3.4 (or possibly an enhanced version using type annotations in the future?), one could add 'third-party methods' to classes in other libraries without monkey patching. A dummy example:
from numpy import array my_list = [1,2,3] my_array = array(my_list) my_mean = my_array.mean() # This currently works in numpy
from rmslib import rms my_rms = my_array->rms() # efficient rms for numpy arrays my_other_rms = my_list->rms() # rms that works on any iterable
That looks cute, but isn't very interesting. Effectively, you've invented a new (and less efficient) syntax for calling a function:
spam->eggs(cheese)    # eggs(spam, cheese)
It's less efficient because it builds a partial object first, so instead of one call you end up with two, and a temporary object that gets thrown away immediately after it is used. Yes, you could keep the partial object around, but as your example shows, you don't.

And because it is cute, people will write:
a->func(), b->func(), c->func()
and not realise that it creates three partial functions before calling them. Writing:
func(a), func(b), func(c)
will avoid that needless overhead.

-- Steve
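The efficiency point is easy to make concrete: both spellings compute the same value, but the partial spelling constructs and discards an extra object per call. timeit can quantify the difference; no numbers are claimed here since they are machine-dependent:

```python
from functools import partial
from timeit import timeit

# Same result either way...
v = partial(len, "foo")()
assert v == len("foo") == 3

# ...but the partial version allocates a throwaway partial object each
# time. timeit shows the relative cost on a given machine:
direct = timeit("len('foo')", number=10_000)
wrapped = timeit("partial(len, 'foo')()",
                 setup="from functools import partial", number=10_000)
# No assertion on timings: they vary by machine and interpreter.
```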
I agree here--I don't think a special operator for functools.partial is desirable. The proposal seems to suggest something between an ordinary lambda expression and Haskell's (>>=) bind.
I expect the arrow to work as it does in Haskell and Julia for anonymous functions:
x, *xs ->
So, -> would be an operator with a precedence similar to .attribute access (but lower than .attribute):
Dot . is not an operator. If I remember correctly, the docs describe it as a delimiter.
# The simple definition of what it does: arg->func # equivalent to functools.partial(func, arg)
I believe you require that -> is applied before function application, so arg->func # returns partial(func, arg) arg->func(x) # returns partial(func, arg)(x) arg->(func(x)) # returns partial(func(x), arg)
This would allow for instance: arg -> spam() -> cheese(kind = 'gouda') -> eggs()
I am having a lot of difficulty seeing that as anything other than "call spam with no arguments, then apply arg to the result". But, teasing it apart with the precedence I established above: arg->spam() # returns partial(spam, arg)() == spam(arg) """ -> cheese # returns partial(cheese, spam(arg)) """ (kind='gouda') # returns partial(cheese, spam(arg))(kind='gouda') # == cheese(spam(arg), kind='gouda') """ -> eggs # returns partial(eggs, cheese(spam(arg), kind='gouda')) """ () # calls the previous partial, with no arguments, giving: # partial(eggs, cheese(spam(arg), kind='gouda'))() # == eggs(cheese(spam(arg), kind='gouda'))
which would be equivalent to eggs(cheese(spam(arg), kind = 'gouda'))
Amazingly, you are correct! :-) I think this demonstrates an abuse of partial and the sort of thing that gives functional idioms a bad name. To tease this apart and understand what it does was very difficult to me. And I don't understand the point of creating partial applications that you are then immediately going to call, that just adds an extra layer of indirection to slow the code down. If you write partial(len, 'foo')() instead of just len('foo'), something has gone drastically wrong. So instead of arg->spam()->cheese(kind='gouda')->eggs() which includes *three* partial objects which are immediately called, wouldn't it be easier to just call the functions in the first place? eggs(cheese(spam(arg), kind='gouda')) It will certainly be more efficient! Let's run through a simple chain with no parens: a -> b # partial(b, a) a -> b -> c # partial(c, partial(b, a)) a -> b -> c -> d # partial(d, partial(c, partial(b, a))) I'm not seeing why I would want to write something like that. Let's apply multiple arguments: a -> func # partial(func, a) b -> (a -> func) # partial(partial(func, a), b) c -> (b -> (a -> func)) # partial(partial(partial(func, a), b), c) Perhaps a sufficiently clever implementation of partial could optimize partial(partial(func, a), b) to just a single layer of indirection partial(func, a, b), so it's not *necessarily* as awful as it looks. (I would expect a function composition operator to do the same.) Note that we have to write the second argument first, and bracket the second arrow clause. Writing it the "obvious" way is wrong: a -> b -> func # partial(func, partial(b, a)) I think this is imaginative but hard to read, hard to understand, hard to use correctly, inefficient, and even if used correctly, there are not very many times that you would need it.
Or even together together with the proposed @ composition: rms = root @ mean @ square->map # for an iterable non-numpy argument
I think that a single arrow may be reasonable as syntactic sugar for partial, but once you start chaining them, it all falls apart into a mess. That, in my mind, is a sign that the idea doesn't scale. We can chain dots with no problem: fe.fi.fo.fum and function calls in numerous ways: foo(bar(baz())) foo(bar)(baz) and although they can get hard to read just because of the sheer number of components, they are not conceptually difficult. But chaining arrows is conceptually difficult even with as few as two arrows. I think the problem here is that partial application is an N-ary operation. This is not Haskell where single-argument currying is enforced everywhere! You're trying to perform something which conceptually takes N arguments partial(func, 1, 2, 3, ..., N) using only a operator which can only take two arguments a->b. Things are going to get messy.
And here's something I find quite interesting. Together with @singledispatch from 3.4 (or possibly an enhanced version using type annotations in the future?), one could add 'third-party methods' to classes in other libraries without monkey patching. A dummy example:
from numpy import array
my_list = [1, 2, 3]
my_array = array(my_list)
my_mean = my_array.mean()  # This currently works in numpy

from rmslib import rms
my_rms = my_array->rms()        # efficient rms for numpy arrays
my_other_rms = my_list->rms()   # rms that works on any iterable
That looks cute, but isn't very interesting. Effectively, you've invented a new (and less efficient) syntax for calling a function:

spam->eggs(cheese)  # eggs(spam, cheese)

It's less efficient because it builds a partial object first, so instead of one call you end up with two, and a temporary object that gets thrown away immediately after it is used. Yes, you could keep the partial object around, but as your example shows, you don't. And because it is cute, people will write:

a->func(), b->func(), c->func()

and not realise that it creates three partial functions before calling them. Writing:

func(a), func(b), func(c)

will avoid that needless overhead.

-- Steve

_______________________________________________ Python-ideas mailing list Python-ideas@python.org https://mail.python.org/mailman/listinfo/python-ideas Code of Conduct: http://python.org/psf/codeofconduct/
As long as I'm "in charge" the chances of this (or anything like it) being accepted into Python are zero. I get a headache when I try to understand code that uses function composition, and I end up having to laboriously rewrite it using more traditional call notation before I move on to understanding what it actually does. Python is not Haskell, and perhaps more importantly, Python users are not like Haskell users. Either way, what may work out beautifully in Haskell will be like a fish out of water in Python.

I understand that it's fun to try to solve this puzzle, but evolving Python is more than solving puzzles. Enjoy debating the puzzle, but in the end Python will survive without the solution.

-- --Guido van Rossum (python.org/~guido)
I don't want to insist, and I respect your point of view; I just want to give a simplified real-life example to show that function composition can be less painful than another syntax.

When validating a lot of data you may want to reuse parts of already written validators. It can also be a mess to test complex data validation. You can reduce this mess and reuse parts of your code by writing atomic validators and composing them.
# sorry for using my own lib, but if I make no mistakes this code works, so...
import re
from lawvere import curry  # curry is an arrow without type checking; inherits composition, multiple dispatch

user_match = re.compile(r"^[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*$").match
domain_match = re.compile(r"^(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$").match
strict_user_match = re.compile(r"^[a-z0-9][a-z0-9_-]+(?:\.[a-z0-9_-]+)*$").match

@curry
def is_string(value):
    assert isinstance(value, str), '%s is not a string' % value
    return value

@curry
def apply_until_char(func, char, value):
    func(value[:value.index(char)])
    return value

@curry
def apply_from_char(func, char, value):
    func(value[value.index(char) + 1:])
    return value

@curry
def has_char(char, value):
    assert value.count(char) == 1
    return value

@curry
def assert_ends_with(text, value):
    assert value.endswith(text), '%s does not end with %s' % (value, text)
    return value

@curry
def assert_user(user):
    assert user_match(user) is not None, '%s is not a valid user name' % user
    return user

@curry
def assert_strict_user(user):
    assert strict_user_match(user) is not None, '%s is not a valid strict user' % user
    return user

@curry
def assert_domain(domain):
    assert domain_match(domain) is not None, '%s is not a valid domain name' % domain
    return domain

# currying (can be made with partial)
has_user = apply_until_char(assert_user, '@')
has_strict_user = apply_until_char(assert_strict_user, '@')
has_domain = apply_from_char(assert_domain, '@')

# composition:
is_email_address = is_string >> has_char('@') >> has_user >> has_domain
is_strict_email_address = is_string >> has_char('@') >> has_strict_user >> has_domain

# we just want org addresses?
is_org_address = is_email_address >> assert_ends_with('.org')
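[Editor's note: for readers who don't have the lawvere library at hand, the left-to-right `>>` chaining above can be approximated with a small helper class. This is a sketch, not lawvere's actual implementation; the Chain name and the toy functions are made up.]

```python
from functools import reduce

class Chain:
    """Minimal left-to-right composition: (f >> g)(x) == g(f(x))."""
    def __init__(self, *funcs):
        self.funcs = funcs

    def __rshift__(self, other):
        # Accept either another Chain or any plain callable on the right.
        other_funcs = other.funcs if isinstance(other, Chain) else (other,)
        return Chain(*self.funcs, *other_funcs)

    def __call__(self, value):
        # Thread the value through each function in order.
        return reduce(lambda acc, f: f(acc), self.funcs, value)

inc = Chain(lambda x: x + 1)
double = Chain(lambda x: x * 2)
pipeline = inc >> double
assert pipeline(3) == 8   # (3 + 1) * 2
```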
I found a lot of interest in this syntax, mainly for testing purposes, readability and maintainability of code.

No matter if I'm a fish out of Python waters. :)
2015-05-11 16:41 GMT+02:00 Guido van Rossum
Operator overloading (>>) has intuitive readability, but in my experience it's better to have functions remain "ordinary" functions, not class instances, so you know what to expect regarding the type and so on. The other downside is that with (>>) only the functions you wrap can play together.
Leaving aside the readability concern, the really major problem is that your tracebacks are so badly mangled. And if your implementation of the composition function uses recursion it gets even worse.
You also lose the benefits of reflection/inspection--for example, with the code below, what happens if I call help or ?? in IPython on `is_email_address`?
'is_email_address' is a special tuple which contains functions (is_email_address[0] returns is_string).

For help, it's a feature I've not implemented, but it's easy to return the help of each function, plus details, as each function has an object representing its signature.

For traceback mangling, I don't see what the problem is. When you call is_email_address(something) it's pretty much as if you'd called:

def is_email_address(value):
    is_string(value)
    has_char('@', value)
    has_user(value)
    has_domain(value)
    return value
2015-05-11 19:08 GMT+02:00 Douglas La Rocca
On 5/11/2015 12:13 PM, Gregory Salvan wrote:
You could do much the same with standard syntax by writing a str subclass with multiple methods that return self, and then chaining the method calls together.

class VString(str):  # verifiable string
    def has_char_once(self, char):
        assert self.count(char) == 1
        return self
    ...
    def is_email_address(self):  # or make standalone
        return self.has_char_once('@').has_user().has_domain()

data = VString(input())
data.is_email_address()

-- Terry Jan Reedy
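[Editor's note: a runnable version of the str-subclass sketch above, with the elided validators filled in. The regular expressions here are deliberately simplified placeholders, not the thread's patterns.]

```python
import re

# Simplified, illustrative patterns (assumptions, not the thread's regexps):
USER = re.compile(r"^[a-z0-9._%+-]+$")
DOMAIN = re.compile(r"^(?:[a-z0-9-]+\.)+[a-z]{2,}$")

class VString(str):
    """A str subclass whose validators return self, so calls can chain."""

    def has_char_once(self, char):
        assert self.count(char) == 1, "expected exactly one %r" % char
        return self

    def has_user(self):
        user = self.split('@', 1)[0]
        assert USER.match(user), "%r is not a valid user" % user
        return self

    def has_domain(self):
        domain = self.split('@', 1)[1]
        assert DOMAIN.match(domain), "%r is not a valid domain" % domain
        return self

    def is_email_address(self):
        return self.has_char_once('@').has_user().has_domain()

addr = VString("alice@example.org").is_email_address()
assert addr == "alice@example.org"
```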
On Monday, May 11, 2015 9:15 AM, Gregory Salvan
OK, let's compare your example to a Pythonic implementation of the same thing.

import re

ruser = re.compile("^[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*$")
rdomain = re.compile("^(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$")
rstrict_user = re.compile("^[a-z0-9][a-z0-9_-]+(?:\.[a-z0-9_-]+)*$")

def is_email_address(addr):
    user, domain = addr.split('@', 1)
    return ruser.match(user) and rdomain.match(domain)

def is_strict_email_address(addr):
    user, domain = addr.split('@', 1)
    return rstrict_user.match(user) and rdomain.match(domain)

def is_org_address(addr):
    return is_email_address(addr) and addr.endswith('.org')

(An even better solution, given that you're already using regexps, might be to just use a single regexp with named groups for the user or strict-user, full domain, and TLD… but I've left yours alone.)

Far from being more painful, the Pythonic version is easier to write, easier to read, easier to debug, shorter, and understandable to even a novice, without having to rewrite anything in your head. It also handles invalid input by returning failure values and/or raising appropriate exceptions rather than asserting and exiting. And it's almost certainly going to be significantly more efficient. And it works with any string-like type (that is, any type that has a .split method and works with re.match). And if you have to debug something, you will have, e.g., values named user and domain, rather than both being named value at different levels of the call stack.

If you really want to come up with a convincing example for your idea, I'd take an example out of Learn You a Haskell or another book or tutorial and translate that to Python with your library. I suspect it would still have some of the same problems, but this example wouldn't even really be good in Haskell, so it's just making it harder to see why anyone would want anything like it.
And by offering this as the response to Guido's "You're never going to convince me," well, if he _was_ still reading this thread with an open mind, he probably isn't anymore (although, to be honest, he probably wasn't reading it anyway).
Andrew Barnert, we disagree. In your example you have no information about whether the error comes from the domain, the user name or the domain extension...

Writing a big regexp with groups... really? It is easy to maintain, test and reuse? And for a novice? Multiply this by thousands of validators and their respective tests. I call that a mess, and inside a project I lead, I will not accept it.

Even in Haskell people rarely use arrows; I don't criticize this choice, as arrows come from category theory and we are used to thinking inside ZF set theory. Some prefer one syntax over another; there is no single good answer, but that also means there is no irrelevant answer. In fact both exist, and choosing between them case by case is never easy. Thinking the same way for each problem is also wrong, so I will never pretend to solve every problem with a single lib.

Now I understand this idea is not a priority. I've seen more and more threads about functional tools; I regret we can't find a solution, but this absence of a solution can't convince me to stop digging other paths. This is not disrespectful.
2015-05-11 20:25 GMT+02:00 Andrew Barnert
In case you've not seen how it divides the volume of code you'll need to write, here are tests of "is_email_address":

# What's an email address?
def test_it_is_a_string(self):
    assert is_string in is_email_address

def test_it_has_a_user_name(self):
    assert has_user in is_email_address

def test_it_contains_at(self):
    assert has_char('@') in is_email_address

def test_it_has_a_domain_name(self):
    assert has_domain in is_email_address

# answer: an email address is a string with a user name, the char '@' and a domain name.

@Terry Reedy: with a class you'll have to write more tests and abuse inheritance.
2015-05-11 21:44 GMT+02:00 Gregory Salvan
Sorry, the fun part: the more code you write this way, the less you have to write tests.
# what's a strict email address?
def test_it_is_an_email_address_with_a_strict_user_name(self):
    assert is_email_address.replace(has_user, has_strict_user) == is_strict_email_address
2015-05-11 23:59 GMT+02:00 Gregory Salvan
In case you've not seen how it divides the volume of code you'll need to write, here are tests of "is_email_address":
# What's an email address?
def test_it_is_a_string(self):
    assert is_string in is_email_address

def test_it_has_a_user_name(self):
    assert has_user in is_email_address

def test_it_contains_at(self):
    assert has_char('@') in is_email_address

def test_it_has_a_domain_name(self):
    assert has_domain in is_email_address

# answer: an email address is a string with a user name, the char '@' and a domain name.
@Terry Reedy: with a class you'll have to write more tests and abuse inheritance.
2015-05-11 21:44 GMT+02:00 Gregory Salvan
: Andrew Barnert, we disagree. In your example you have no information about whether the error comes from the domain, the user name or the domain extension... Writing a big regexp with groups... really? Is that easy to maintain, test and reuse? And for a novice? Multiply this by thousands of validators and their respective tests. I call that a mess, and in a project I lead I would not accept it.
Even in Haskell people rarely use arrows. I don't criticize this choice, as arrows come from category theory and we are used to thinking inside ZF set theory. Some prefer one syntax over another; there is no single good answer, but that also means there is no irrelevant answer. In fact both exist, and choosing between them case by case is never easy. Thinking the same way for each problem is also wrong, so I will never pretend to solve every problem with a single lib.
Now I understand this idea is not a priority. I've seen more and more threads about functional tools, and I regret we can't find a solution, but this absence of a solution can't convince me to stop digging other paths. This is not disrespectful.
2015-05-11 20:25 GMT+02:00 Andrew Barnert
: On Monday, May 11, 2015 9:15 AM, Gregory Salvan
wrote: I don't want to insist and I respect your point of view, I just want to give a simplified real life example to show that function composition can be less painful than another syntax.
OK, let's compare your example to a Pythonic implementation of the same thing.
import re
ruser = re.compile("^[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*$")
rdomain = re.compile("^(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$")
rstrict_user = re.compile("^[a-z0-9][a-z0-9_-]+(?:\.[a-z0-9_-]+)*$")
def is_email_address(addr):
    user, domain = addr.split('@', 1)
    return ruser.match(user) and rdomain.match(domain)

def is_strict_email_address(addr):
    user, domain = addr.split('@', 1)
    return rstrict_user.match(user) and rdomain.match(domain)

def is_org_address(addr):
    return is_email_address(addr) and addr.endswith('.org')
(An even better solution, given that you're already using regexps, might be to just use a single regexp with named groups for the user or strict-user, full domain, and TLD… but I've left yours alone.)
Far from being more painful, the Pythonic version is easier to write, easier to read, easier to debug, shorter, and understandable to even a novice, without having to rewrite anything in your head. It also handles invalid input by returning failure values and/or raising appropriate exceptions rather than asserting and exiting. And it's almost certainly going to be significantly more efficient. And it works with any string-like type (that is, any type that has a .split method and works with re.match). And if you have to debug something, you will have, e.g., values named user and domain, rather than both being named value at different levels on the call stack.
If you really want to come up with a convincing example for your idea, I'd take an example out of Learn You a Haskell or another book or tutorial and translate that to Python with your library. I suspect it would still have some of the same problems, but this example wouldn't even really be good in Haskell, so it's just making it harder to see why anyone would want anything like it. And by offering this as the response to Guido's "You're never going to convince me," well, if he _was_ still reading this thread with an open mind, he probably isn't anymore (although, to be honest, he probably wasn't reading it anyway).
import re
from lawvere import curry  # curry is an arrow without type checking; it inherits composition and multiple dispatch

user_match = re.compile("^[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*$").match
domain_match = re.compile("^(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$").match
strict_user_match = re.compile("^[a-z0-9][a-z0-9_-]+(?:\.[a-z0-9_-]+)*$").match

@curry
def is_string(value):
    assert isinstance(value, str), '%s is not a string' % value
    return value

@curry
def apply_until_char(func, char, value):
    func(value[:value.index(char)])
    return value

@curry
def apply_from_char(func, char, value):
    func(value[value.index(char) + 1:])
    return value

@curry
def has_char(char, value):
    assert value.count(char) == 1
    return value

@curry
def assert_ends_with(text, value):
    assert value.endswith(text), '%s does not end with %s' % (value, text)
    return value

@curry
def assert_user(user):
    assert user_match(user) is not None, '%s is not a valid user name' % user
    return user

@curry
def assert_strict_user(user):
    assert strict_user_match(user) is not None, '%s is not a valid strict user' % user
    return user

@curry
def assert_domain(domain):
    assert domain_match(domain) is not None, '%s is not a valid domain name' % domain
    return domain

# currying (could also be done with functools.partial)
has_user = apply_until_char(assert_user, '@')
has_strict_user = apply_until_char(assert_strict_user, '@')
has_domain = apply_from_char(assert_domain, '@')

# composition:
is_email_address = is_string >> has_char('@') >> has_user >> has_domain
is_strict_email_address = is_string >> has_char('@') >> has_strict_user >> has_domain

# we just want .org addresses?
is_org_address = is_email_address >> assert_ends_with('.org')
I find a lot of interest in this syntax, mainly for testing purposes, readability and maintainability of code.
No matter if I'm a fish out of Python waters. :)
2015-05-11 16:41 GMT+02:00 Guido van Rossum
: As long as I'm "in charge" the chances of this (or anything like it) being accepted into Python are zero. I get a headache when I try to understand code that uses function composition, and I end up having to laboriously rewrite it using more traditional call notation before I move on to understanding what it actually does. Python is not Haskell, and perhaps more importantly, Python users are not like Haskell users. Either way, what may work out beautifully in Haskell will be like a fish out of water in Python.
I understand that it's fun to try to solve this puzzle, but evolving Python is more than solving puzzles. Enjoy debating the puzzle, but in the end Python will survive without the solution.
--
--Guido van Rossum (python.org/~guido)
_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
On 12 May 2015 at 08:12, Gregory Salvan wrote:
Sorry, the fun part: the more code you write this way, the less you have to write tests.
I think this is the key for the folks hoping to make the case for increased support for function composition in the future (it's definitely too late in the cycle for 3.5): focus on the *pragmatic* benefits in testability, and argue that this makes up for the *loss* of readability. "It's easier to read" is *not* a true statement for anyone that hasn't already learned to think functionally, and "It is worth your while to learn to think functionally, even if it takes you years" is a very *different* statement.

The human brain tends to think procedurally by default (presumably because our stream of consciousness is typically experienced as a linear series of events), while object oriented programming can benefit from analogies with physical objects (especially when taught via robotics or other embodied systems), and message passing based concurrent systems can benefit from analogies with human communications. By contrast, there aren't any easy "interaction with the physical world" analogies to draw on for functional programming, so it takes extensive training and practice to teach people to think in functional terms. Folks with a strong mathematical background (especially in formal mathematical proofs) often already have that training (even if they're only novice programmers), while the vast majority of software developers (even professional ones) don't.

As a result, I think the more useful perspective to take is the one taken for the PEP 484 type hinting PEP: positioning function composition as an advanced tool for providing increased correctness guarantees for critical components by building them up from independently tested composable parts, rather than relying on ad hoc procedural logic that may itself be a source of bugs. Aside from more accurately reflecting the appropriate role of function composition in Pythonic development (i.e. as a high-barrier-to-entry technique that is nevertheless sometimes worth the additional conceptual complexity, akin to deciding to use metaclasses to solve a problem), it's also likely to prove beneficial that Guido's recently been on the other side of this kind of argument when it comes to both type hinting in PEP 484 and async/await in PEP 492. I assume he'll still remain skeptical of the value of the trade-off when it comes to further improvements to Python's functional programming support, but at least he'll be familiar with the form of the argument :)

On the "pragmatic benefits in testability" front, I believe one key tool to focus on is the QuickCheck test case generator (https://wiki.haskell.org/Introduction_to_QuickCheck1), which lets the test generator take care of determining appropriate boundary conditions to check based on a specification of the desired externally visible behaviour of a function, rather than relying on the developer to manually specify those boundary conditions as particular test cases.

I personally learned about that approach earlier this year through a talk that Fraser Tweedale gave at LCA in January: https://speakerdeck.com/frasertweedale/the-best-test-data-is-random-test-dat... & https://www.youtube.com/watch?v=p7oRMB5V2kE

For Python, Fraser pointed out http://xion.io/pyqcy/ and Google tells me there's also https://pypi.python.org/pypi/pytest-quickcheck

Gary Bernhardt's work is also worth exploring, including the "Functional Core, Imperative Shell" model discussed in his "Boundaries" presentation (https://www.youtube.com/watch?v=yTkzNHF6rMs) a few years back (an implementation of this approach is available for Python at https://pypi.python.org/pypi/nonobvious/). His closing keynote presentation at PyCon this year was also relevant (relating to the differences between the assurances that testing can provide vs those offered by powerful type systems like Idris), but unfortunately not available online.

Andrew's recommendation to "approach via NumPy" is also a good one. Scientific programmers tend to be much better mathematicians than other programmers (and hence more likely to appreciate the value of development techniques based on function composition), and the rapid acceptance of the matrix multiplication PEP shows the scientific Python community have also become quite skilled at making the case to python-dev for new language level features of interest to them :)

Regards, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
For Python, Fraser pointed out http://xion.io/pyqcy/ and Google tells me there's also https://pypi.python.org/pypi/pytest-quickcheck
Don't forget about the latest and greatest Python library for property-based testing, Hypothesis: http://hypothesis.readthedocs.org/en/latest/ !
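For readers who haven't seen property-based testing before, here is a deliberately tiny, stdlib-only sketch of the QuickCheck idea (real tools like QuickCheck and Hypothesis add input shrinking, rich generation strategies, and much more; the `check_property` helper below is purely illustrative):

```python
import random

def check_property(prop, gen, trials=200, seed=0):
    # Generate many random inputs and assert the property holds for each.
    # A fixed seed keeps the run deterministic for this sketch.
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen(rng)
        assert prop(x), 'property failed for %r' % (x,)

# a generator of small random integer lists
gen_list = lambda rng: [rng.randint(-100, 100) for _ in range(rng.randrange(20))]

# property: sorting is idempotent
check_property(lambda xs: sorted(sorted(xs)) == sorted(xs), gen_list)
```

The appeal for composed pipelines is that each small composable part gets its own cheap, broadly quantified check, rather than a handful of hand-picked example inputs.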
On 5/11/2015 10:41 AM, Guido van Rossum wrote:
As long as I'm "in charge" the chances of this (or anything like it) being accepted into Python are zero.
I have been waiting for this response (which I agree with). By 'this', I presume you mean either more new syntax other than '@', or official support of '@' other than for matrix or array multiplication.
I get a headache when I try to understand code that uses function composition,
Function composition is the *process* of using the output of one function (broadly speaking) as the input (or one of the inputs) of another function. All Python code does this. The discussion is about adding a composition operator or function or notation (and accoutrements) as a duplicate *syntax* for expressing composition. As I posted before, mathematicians usually define the operator in terms of call syntax, which can also express composition.
and I end up having to laboriously rewrite it using more traditional call notation before I move on to understanding what it actually does.
Mathematicians do rewrites also ;-). The proof of (f @ g) @ h = f @ (g @ h) (associativity) is that ((f @ g) @ h)(x) and (f @ (g @ h))(x) can both be rewritten as f(g(h(x))).
I understand that it's fun to try to solve this puzzle, but evolving Python is more than solving puzzles.
Leaving aside the problem of stack overflow, one can rewrite "for x in iterable: process x" to perform the same computational process with recursive syntax (using iter and next and catching StopIteration). But one would have to be really stuck on the recursive syntax, as opposed to the inductive process, to use it in practice. -- Terry Jan Reedy
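Terry's recursive rewrite can be made concrete. A minimal sketch (stdlib only, hypothetical `for_each` helper) of expressing "for x in iterable: process(x)" with iter/next and StopIteration, as described above:

```python
def for_each(iterable, process):
    # Recursive equivalent of: for x in iterable: process(x)
    # Same computational process, but limited by the recursion
    # depth for long inputs, which is exactly Terry's caveat.
    it = iter(iterable)
    def step():
        try:
            x = next(it)
        except StopIteration:
            return
        process(x)
        step()
    step()

seen = []
for_each(range(5), seen.append)
```

It works, but nobody would prefer it in practice, which is the point: syntactic equivalence does not make a notation the natural one for the job.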
On Mon, May 11, 2015 at 10:45 AM, Terry Reedy
On 5/11/2015 10:41 AM, Guido van Rossum wrote:
As long as I'm "in charge" the chances of this (or anything like it) being accepted into Python are zero.
I have been waiting for this response (which I agree with). By 'this', I presume you mean either more new syntax other than '@', or official support of '@' other than for matrix or array multiplication.
Or even adding a compose() function (or similar) to the stdlib. I'm sorry, I don't have time to argue about this. -- --Guido van Rossum (python.org/~guido)
On Monday, May 11, 2015 10:46 AM, Terry Reedy wrote:
On 5/11/2015 10:41 AM, Guido van Rossum wrote:
As long as I'm "in charge" the chances of this (or anything like it) being accepted into Python are zero.
I have been waiting for this response (which I agree with). By 'this', I presume you mean either more new syntax other than '@', or official support of '@' other than for matrix or array multiplication.
I don't think it's worth trying to push for this directly in Python, even with the @ operator or a functools.compose function, even if someone thinks they've solved all the problems. If anyone really wants this feature, the obvious thing to do at this point is to prepare a NumPy-wrapper library that adds __matmul__ and __rmatmul__ to ufuncs, and some examples, convince the NumPy team to accept it, and then, once it becomes idiomatic in NumPy code, come back to python-ideas. Maybe there is nothing about function composition which inherently requires broadcast-style operations to make it useful, but the only decent examples anyone's come up with in this thread (root-mean-square) all do, which has to mean something. And the NumPy core devs haven't explicitly announced that they don't want to be convinced.
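Andrew's suggestion targets NumPy ufuncs, but the mechanics are easy to prototype in plain Python. Below is a minimal, hypothetical `composable` wrapper (nothing to do with NumPy's actual ufunc machinery) that gives ordinary functions an `@` composition operator via `__matmul__`/`__rmatmul__`:

```python
class composable:
    """Sketch: (f @ g)(x) == f(g(x)) for plain callables.
    __rmatmul__ lets an unwrapped callable appear on the left of @."""
    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)

    def __matmul__(self, other):
        # self @ other: apply other first, then self
        return composable(lambda *a, **k: self.func(other(*a, **k)))

    def __rmatmul__(self, other):
        # other @ self: apply self first, then other
        return composable(lambda *a, **k: other(self.func(*a, **k)))

# the thread's running example, on a plain list instead of an array
square = composable(lambda xs: [x * x for x in xs])
mean = composable(lambda xs: sum(xs) / len(xs))
root = composable(lambda x: x ** 0.5)

rms = root @ mean @ square
```

Terry's associativity point holds here too: `((root @ mean) @ square)(xs)` and `(root @ (mean @ square))(xs)` both reduce to `root(mean(square(xs)))`.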
On Mon, May 11, 2015 at 8:11 PM, Guido van Rossum
As long as I'm "in charge" the chances of this (or anything like it) being accepted into Python are zero. I get a headache when I try to understand code that uses function composition,
I find it piquant to see this comment from the creator of a language that traces its lineage to Lambert Meertens :-) [Was reading one of the classics just yesterday: http://www.kestrel.edu/home/people/meertens/publications/papers/Algorithmics... ]

Personally, yeah, I don't think Python blindly morphing into Haskell is a neat idea. In the specific case of composition my position is...

sqrt(mean(square(x))) is ugly in a lispy way
(sqrt @ mean @ square)(x) is backward in one way
(square @ mean @ sqrt)(x) is backward in another way
sqrt @ mean @ square is neat for being point-free and reads easy like a Unix '|', but the '@' is more strikingly ugly
sqrt o mean o square is a parsing nightmare
square ∘ mean ∘ root: just right! [Assuming the unicode gods favor its transmission!]

...hopefully not too frivolous to say this, but the ugliness of @ overrides the succinctness of the math for me.
ha, i love unicode operators (e.g. in scala), but i think guido said python will stay ASCII.
i hope we one day gain the ability to *optionally* use unicode alternatives, even if that would put an end to our __matmul__ → function combination aspirations:
* → ·
@ → × (not ∘)
/ → ÷
... → …
lambda → λ
– phil
Python already supports unicode operators (kind of). You just have to use a custom codec that translates the unicode characters to proper Python.
It might be neat to be able to use the superscript and subscript number glyphs for exponentiation and indexing, so 'x₀, x₁, x₂' == 'x[0], x[1], x[2]' and 'some_var³⁵˙¹' == 'some_var ** 35.1'. (That probably shouldn't support anything other than numbers and '.' to keep things simple.) There are also the comparison operators (≠, ≤, ≦, ≥, ≧), '∈' for in, and perhaps even additional overloads for sets (∪, ∩, ⊂, ⊆, ⊃, ⊇, ⊖). Maybe the math module could have a math.π alias as well for people who wish to import it. - Spencer
On Tue, May 12, 2015 at 2:06 PM, Philipp A.
ha, i love unicode operators (e.g. in scala), but i think guido said python will stay ASCII.
Or Julia: http://iaindunning.com/blog/julia-unicode.html
Also Fortress, Agda and the classic APL.
Interestingly, Haskell is one step ahead of Python in some areas and behind in others:

---------
GHCi, version 7.6.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> let (x₁, x₂) = (1, 2)
Prelude> (x₁, x₂)
(1,2)
---------

However, wrt getting ligatures right, Python is ahead:

[Haskell]
Prelude> let flag = True
Prelude> flag
<interactive>:5:1: Not in scope: `flag'   [Equivalent of NameError]

[Python3]
>>> flag = True
>>> flag
True
participants (13)

- Andrew Barnert
- Douglas La Rocca
- Gregory Salvan
- Guido van Rossum
- João Santos
- Koos Zevenhoven
- Nicholas Chammas
- Nick Coghlan
- Philipp A.
- Rustom Mody
- Spencer Brown
- Steven D'Aprano
- Terry Reedy