Re: [Python-ideas] How do you think about these language extensions?(Thautwarm)

Hi, all! I want to reply to several people, and it might be annoying if I sent many separate messages, so I have written all of my replies in this one post.

----------------------------------------------------------------------------------
To Christopher Barker, Ph.D.
----------------------------------------------------------------------------------

Hi, Dr. Barker. Just as you said,
parentheses aren't that bad, and as far as I can tell, this is just another way to call a function on the results of a function.
The above is now spelled:
list(map(lambda x: x+2, range(5)))
[x+2 for x in range(5)]
nicely, we have list comps and generator expressions, so we can avoid the list() call.
I'll try to say something about why I think we need this grammar; the reasons are not just about removing parentheses. Please consider this way of defining a variable:
var = expr() -> g1(_) if f(_) else g2(_)
which equals
    test = f(expr())
    var = g1(test) if f(test) else g2(test)
which means that we have to use a temporary variable "test" to define "var". The second version is a bit lengthy, isn't it? The reason I chose this kind of grammar is that it lets me "flatten the programming logic". In other words, I can clearly state what I mean in the order of my thinking. For example,
lambda x: f(g(x)) -> map(_, range(100))
The code above stresses what (an action) I'm going to do to the object "range(100)". However, sometimes the action is not the important part; if we want to stress what we're acting on instead, we write this code:
range(100) -> map( lambda x:f(g(x)), _ )
Additionally, coding with chained expressions makes me feel like writing a poem (although it's a little difficult for me :). What do you think about writing code like the following?

    options = ...
    result1 = lambda: ...
    result2 = lambda: ...
    def dosomething(obj, options) -> Any:
        ...
    def is_meeting_some_conditions( event : Any ) -> bool :
        ...

In my opinion, it's quite readable and "smooth". To be honest, I think we can totally code as if we were chatting, and it can be quite enjoyable. However, I'm not sure whether '->' is a good choice of token. It didn't lead to any conflicts when I compiled the CPython source, and it can easily be changed in Grammar/Grammar, so I don't think the exact token is crucial. Finally,
I think the "pipe" model cannot be the right choice. We don't need to worry about that problem if we use the grammar I've already implemented :)
    (lambda x: (x%5, x) ) -> max( range(99), key = _)    # evaluates to 94
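(For comparison, here is what that last chained expression denotes in today's Python -- my own illustrative translation, not output of the patched interpreter:)

    # The chained expression on the left becomes the `key` argument of max().
    key = lambda x: (x % 5, x)
    print(max(range(99), key=key))   # 94: largest remainder mod 5, then largest x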
Thautwarm

----------------------------------------------------------------------------------
To David Mertz
----------------------------------------------------------------------------------

I think what you describe is partially right, but auto-currying alone is not expressive enough. Certainly you can do this with a "compose" function:
... -> ... -> ... -> map(lambda x:x+1, _)
However, in my proposal the evaluated result can be placed at any argument position of map or of any other callable:
    ... -> ... -> ... -> map(_ , range(100))
    ... -> ... -> ... -> min([1,2,3], key = _ )
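(To make "any argument position" concrete, here is a small helper I sketched in plain Python -- entirely illustrative, not the proposed implementation and not an existing library API:)

    # A sentinel standing in for the piped-in value.
    _ = object()

    def step(f, *args, **kwargs):
        """Return a one-argument function that calls f, substituting the
        piped-in value wherever the sentinel `_` appears."""
        def run(value):
            new_args = [value if a is _ else a for a in args]
            new_kwargs = {k: (value if v is _ else v) for k, v in kwargs.items()}
            return f(*new_args, **new_kwargs)
        return run

    # Roughly corresponds to:  (lambda x: -x) -> min([1, 2, 3], key=_)
    pipeline = step(min, [1, 2, 3], key=_)
    print(pipeline(lambda x: -x))   # 3, because the key reverses the ordering

The point is only that the placeholder can land in a keyword slot as easily as in a positional one, which plain auto-currying does not give you.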
Thautwarm

----------------------------------------------------------------------------------
To Chris Angelico
----------------------------------------------------------------------------------

To be honest, I do like the grammar you prefer, such as "expr1 | expr2 | expr3". However, it could be annoying to have to define many kinds of pipeline operators (Map, Reduce and so on) up front. As for allowing an implicit first/last argument, it seems like a good idea, but two points stand in the way:

1. We would need to change almost all the C functions related to expressions in Python/ast.c, whereas implementing the grammar I'm using now requires nothing more than adding a new C function there.
2. An implicit argument makes it impossible to write expressions like the following:
... -> func(some_var, some_var, _, some_eval(), some_key = _)
In other words, the implicit form weakens the grammar; we would need an extra "where syntax" to do the same thing:
    some -> new_func where:
        new_func = lambda x: func(some_var, some_var, x, some_eval(), some_key = x)
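(For reference, roughly the same thing in today's Python, written with a named helper; func, some_var, some_eval and some are just the placeholder names from the example above:)

    def new_func(x):
        return func(some_var, some_var, x, some_eval(), some_key=x)

    result = new_func(some)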
Hmm... I'm not sure about that; what do you think?

Thautwarm

----------------------------------------------------------------------------------
To Steven D'Aprano
----------------------------------------------------------------------------------

Thank you very much for your reply; it encouraged me a lot. I've just read your most recent post, and it seems you've suffered a lot from parentheses, as have I.
I couldn't agree more with what you've said here! My opinions about "chaining and pipeline" can be found in my reply to Chris Barker; sorry for not repeating them here.
This has been suggested a few times. The first time, I disliked it, but I've come around to seeing its value. I like it.
That's more how "where" is used mathematically.
I'm not quite sure what your opinion is there. The grammar you considered is quite Haskell-like, I think. The reason I want a "where syntax" is to divide the programming logic clearly into different layers.

For example, sometimes we just need to know that the surface area of a cylinder is

    2*S_top + S_side

Someone reading that code may not need to know how S_top and S_side are evaluated; knowing what they mean is enough. Anyone who does want to know how S_top and S_side are computed can just look at the next "where syntax" and find the answer.

Here is another example, about forward propagation in neural networks:

    # input_layer[i] : "np.ndarray[:]"    = np.array( ... )
    # weight[i]      : "np.ndarray[:][:]" = np.array( ... )

    output_layer[i] = activate(input_layer[i]) where:
        """ logic layer 1 """
        def activate( layer ):
            ...
            return activation[i](layer)   # for example, activation[i] = lambda x: x

    input_layer[i] = forward(weight[i-1], output_layer[i-1].T) where:
        """ logic layer 2 """
        def forward(weight, output):
            ...   # if it's a normal multi-layer perceptron
            return np.matmul(weight, output.T)

For some people, their work only requires knowing that forward propagation means the output layer is generated from the input layer by some transformation. People who want to know what that transformation is can go to the next "where syntax" and find the definition of the transformation named "activate". People who want to know how a neural network works across multiple layers can see there that the input layer is defined from the previous output layer and the previous weight matrix, which is how the network propagates forward.

I think "where syntax" is a good way to deconstruct programming logic, and it can strengthen readability a lot!

Next, I'll say something about pattern matching, translating it into regular Python to make it clear.
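(For comparison, a rough sketch of the same layering in today's Python using ordinary local functions; the shapes and the identity activation are made up purely for illustration:)

    import numpy as np

    def forward_step(weight_prev, output_prev, activation_fn):
        """One forward-propagation step: previous layer's output -> next layer's output."""
        def forward(weight, output):
            # a plain multi-layer-perceptron step
            return np.matmul(weight, output.T)

        def activate(layer):
            return activation_fn(layer)

        input_layer = forward(weight_prev, output_prev)
        return activate(input_layer)

    # made-up shapes, identity activation
    weight = np.ones((3, 4))
    prev_output = np.ones((1, 4))
    next_output = forward_step(weight, prev_output, activation_fn=lambda x: x)

Here the reader still gets the layered view (what a step does at the top, how it does it in the inner definitions), just with the definitions before use rather than after.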
I find that almost unreadable. Too many new features all at once, it's like trying to read a completely unfamiliar language.
How would you translate that into regular Python?
This algorithm can be simplified a little because the second case is redundant. Here is regular Python code transformed from the code above:

    from copy import deepcopy

    def permutations(seq):
        try:
            # the first case
            (a, ) = seq
            return [a, ]
        except:
            try:
                # the third case (the second case is redundant)
                def insertAll(x, a):
                    # insertAll([1,2,3], 0) ->
                    #   [[0, 1, 2, 3], [1, 0, 2, 3], [1, 2, 0, 3], [1, 2, 3, 0]]
                    ret = []
                    for i in range(len(x) + 1):
                        tmp = deepcopy(x)
                        tmp.insert(i, a)
                        ret.append(tmp)
                    return ret

                (a, *b) = seq
                tmp = permutations(b)
                tmp = map(lambda x: insertAll(x, a), tmp)
                return sum(tmp, [])   # sum([[1,2,3], [-1,-2,-3]], []) -> [1,2,3,-1,-2,-3]
            except:
                # no otherwise!
                pass

To be continued... (sorry for my lack of time)

Thautwarm

----------------------------------------------------------------------------------

I'm sorry that I have to do some other work now and haven't finished writing down everything I wanted to say. I'd like to continue replying to the posts tomorrow; it's a pleasure to discuss these topics with you all!

On Sat, Aug 19, 2017 at 3:34 AM, ?? ? <twshere@outlook.com> wrote:
OK, I do see this as a nice way to avoid as many "temp" variables, though in this example I am confused: it seems to me that the equivalent wordy version of the above is:

    temp = expr()
    var = g1(temp) if f(temp) else g(temp)

rather than the f(expr()) -- i.e. you seem to have called f() on expr() an extra time? Maybe just a typo. Or, of course:

    var = g1(expr()) if f(expr()) else g(expr())

which I can see would be bad if expr() is expensive (or, even worse, has side effects). So I'm coming around to this, though you could currently write that as:

    _ = expr(); g1(_) if f(_) else g(_)

not that different! Also, there is something to be said for giving a name to expr() -- it may make the code more readable.

In other words, I can clearly state what I mean in the order of my
thinking.
well, in the above case, particularly if you use a meaningful name rather than "temp" or "test", then you are still writing it in the order of meaning. (Though elsewhere in this thread there are better examples of how the current nested function call syntax does reverse the logical order of operations.)
This is still putting the range(100) at the end of the expression, rather than making it clear that you are starting with it. And putting that much logic in a lambda can be confusing -- in fact, I'm still not sure what that does! (I guess I'm still not sure whether the order of operations applies only to the lambda expression (f(g(x))) or to the whole thing.) If not the whole thing, then is it the same as:

    (f(g(x)) for x in range(100))

I'm also seeing a nested function there -- f(g(x)) -- which is what I thought you were trying to avoid. Maybe:

    lambda x: (g(x) -> f(_)) -> map(_, range(100))

??? In general, much of this seems to be trying to make map cleaner or more clear -- but Python has comprehensions, which so far work better, and are more compact and clear for the examples you have provided. Granted, deeply nested comprehensions can be pretty ugly -- maybe this will be clearer for those??

However, sometimes the action is not the important part; if we want to stress what we're acting on instead, we write this code:

    range(100) -> map( lambda x:f(g(x)), _ )
OK, so THAT makes more sense to me -- start with the "source data", then go to the action on it. But again, is that really clearer than the comprehension (generator expression -- why don't we call that a generator comprehension?):

    (f(g(x)) for x in range(100))

Maybe this would be better:

    range(100) -> (f(g(x)) for x in _)

It does put the source data up front -- and could be nicer for nested comprehensions. Hmm, maybe this example of the kind of thing I've needed to do is illustrative:

    [s.upper() for s in (s.replace('"','') for s in (s.strip() for s in line.split()))]

would be better as:

    line.split() -> (s.strip() for s in _) -> (s.replace('"','') for s in _) -> [s.upper() for s in _]

though, actually, really best as:

    [s.strip().replace('"','').upper() for s in line.split()]

(which only works for methods, not general functions) -- but for functions:

    [fun3(fun2(fun1(x))) for x in an_iterable]

So, backwards logic, but that's about it for the benefit. So I'm still having a hard time coming up with an example that's notably better...
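(A quick runnable check of the single-comprehension form above, with a made-up input line, just to show the three steps collapse into one pass:)

    line = 'alpha  "beta"   GAMMA'
    cleaned = [s.strip().replace('"', '').upper() for s in line.split()]
    print(cleaned)   # ['ALPHA', 'BETA', 'GAMMA']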
Again with the lambdas -- this is all making me think that this is about making Python a better functional language, which I'm not sure is a goal of Python... But anyway, the real extra there is the where: clause. That seems to be doing the opposite -- putting the definitions of what you are actually doing AFTER the logic: "I'm going to chain all this logic together, and by the way, this is what that logic is..."

If we really wanted to have a kind of context like that, maybe something more like a context manager on the fly:

    with:
        options = ...
        result1 = lambda: ...
        result2 = lambda: ...
        def dosomething(obj, options) -> Any:
            ...
        def is_meeting_some_conditions( event : Any ) -> bool :
            ...
    do:
        (result1() if is_meeting_some_conditions( dosomething( someone, options=options))
         else result2())
This gets uglier if we have both *args and **kwargs... which maybe is OK -- don't use it with complex structures like that.

For example, sometimes we just need to know that the surface area of a cylinder is 2*S_top + S_side
How is that clearer than:

    S_top = something
    S_side = something_else
    surface_area = 2*S_top + S_side

??? (Or, of course, defining a function.) Sure, we see the

    some expression ... "where" some definitions

structure a lot in technical papers, but frankly: I'd probably rather see the definitions first, and/or the definitions are often only there to support you if you don't already know the nomenclature -- when you go back to read the paper again, you may not need the where. Coding is different: I'd rather see stuff defined BEFORE it is used.
me too.
Too many new features all at once, it's
like trying to read a completely unfamiliar language.
exactly -- this seems to be an effort to make Python a different language!

This algorithm can be simplified a little because the second case is redundant.
And here is the regular Python code transformed from the code above.
Looks like we lost the indenting, so I'm going to try to fix that:

    from copy import deepcopy

    def permutations(seq):
        try:
            # the first case
            (a, ) = seq
            return [a, ]
        except:
            try:
                # the third case (the second case is redundant)
                def insertAll(x, a):
                    # insertAll([1,2,3], 0) ->
                    #   [[0, 1, 2, 3], [1, 0, 2, 3], [1, 2, 0, 3], [1, 2, 3, 0]]
                    ret = []
                    for i in range(len(x) + 1):
                        tmp = deepcopy(x)
                        tmp.insert(i, a)
                        ret.append(tmp)
                    return ret

                (a, *b) = seq
                tmp = permutations(b)
                tmp = map(lambda x: insertAll(x, a), tmp)
                return sum(tmp, [])   # sum([[1,2,3], [-1,-2,-3]], []) -> [1,2,3,-1,-2,-3]
            except:
                # no otherwise!
                pass

Have I got that right? But anyway, there has GOT to be a more pythonic way to write that! And I say that because this feels to me like trying to write functional code in Python in an unnatural-for-Python way, then saying we need to add features to Python to make that natural.

So I think the challenge is:

- find some nice compelling examples
- write them in a nice pythonic way
- show us that these new features would allow a cleaner, more readable solution

Steven did have a nice example of that:

    result = (myfile.readlines()
              -> map(str.strip)
              -> filter( lambda s: not s.startswith('#') )
              -> sorted
              -> collapse          # collapse runs of identical lines
              -> extract_dates
              -> map(date_to_seconds)
              -> min
              )

Though IIUC, the proposal would make that:

    result = (myfile.readlines()
              -> map(str.strip, _)
              -> filter( lambda s: not s.startswith('#'), _ )
              -> sorted( _ )
              -> collapse( _ )     # collapse runs of identical lines
              -> extract_dates( _ )
              -> map(date_to_seconds, _)
              -> min(_)
              )

The current Python for that might be:

    result = min(date_to_seconds(d)
                 for d in extract_dates(
                     collapse(
                         sorted(s for s in (line.strip() for line in myfile.readlines())
                                if not s.startswith('#')))))

which really does make the point that nesting comprehensions gets ugly fast! So "don't do that":

    lines = collapse(sorted(l.strip().split("#")[0] for l in myfile.readlines()))
    dates = min(date_to_seconds(extract_date(l)) for l in lines)

or any number of other ways -- clearer, less clear??

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA 98115        (206) 526-6317   main reception

Chris.Barker@noaa.gov
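(As an aside on the permutations example: for that particular task the standard library already has a ready-made answer, independent of the syntax question -- a quick illustration:)

    from itertools import permutations as std_permutations

    # itertools.permutations yields tuples, in lexicographic order of the input
    print([list(p) for p in std_permutations([1, 2, 3])])
    # [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]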

https://github.com/kachayev/fn.py/blob/master/README.rst#scala-style-lambdas...

participants (3)
- ?? ? (Thautwarm)
- Chris Barker
- Wes Turner