Hello all,

The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is now implemented, based on some early work by Thomas Wouters (in 2008) and Florian Hahn (2013), and recently completed by Joshua Landau and me. The issue tracker http://bugs.python.org/issue2292 has a working patch. Would someone be able to review it?

Thank you very much,
Neil
On Mon, Feb 9, 2015, at 16:06, Neil Girdhar wrote:
> The issue tracker http://bugs.python.org/issue2292 has a working patch.
> Would someone be able to review it?
The PEP is not even accepted.
FWIW, I've encouraged Neil and others to complete this code as a
prerequisite for a code review (but I can't review it myself). I am mildly
in favor of the PEP -- if the code works and looks maintainable I would
accept it. (A few things got changed in the PEP as a result of the work.)
--
--Guido van Rossum (python.org/~guido)

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org
On Mon, Feb 9, 2015, at 16:32, Guido van Rossum wrote:
> I am mildly in favor of the PEP -- if the code works and looks maintainable I would accept it.
In a way, it's a simplification, since functions are now simply called with a sequence of "generalized arguments"; there's no privileged kwarg or vararg. Of course, I wonder how much of f(**w, x, y, *k, *b, **d, c) we would see...
Right,
Just to be clear though: **-args must follow any *-args and positional
arguments. So at worst, your example is:
f(x, y, *k, *b, c, **w, **d)
Best,
Neil
On Mon, Feb 9, 2015, at 17:12, Neil Girdhar wrote:
> Just to be clear though: **-args must follow any *-args and positional arguments. So at worst, your example is:
> f(x, y, *k, *b, c, **w, **d)
Ah, I guess I was confused by this sentence in the PEP: "Function calls currently have the restriction that keyword arguments must follow positional arguments and ** unpackings must additionally follow * unpackings." That suggests that that rule is going to change.
That wording is my fault. I'll update the PEP to remove the word
"currently" after waiting a bit to see if there are any other problems.
Best,
Neil
On 02/09/2015 01:28 PM, Benjamin Peterson wrote:
> The PEP is not even accepted.
I believe somebody (Guido?) commented "Why worry about accepting the PEP when there's no working patch?" -- or something to that effect.

--
~Ethan~
On Mon, Feb 9, 2015, at 16:34, Ethan Furman wrote:
> I believe somebody (Guido?) commented "Why worry about accepting the PEP when there's no working patch?" -- or something to that effect.
On the other hand, I'd rather not do detailed reviews of patches that won't be accepted. :)
On 10 Feb 2015 08:13, Benjamin Peterson wrote:
> On the other hand, I'd rather not do detailed reviews of patches that won't be accepted. :)
It's more a matter of the PEP being acceptable in principle, but a reference implementation being needed to confirm feasibility and to iron out corner cases. For example, the potential for arcane call arguments suggests the need for a PEP 8 addition saying "first standalone args, then iterable expansions, then mapping expansions", even though syntactically any order would now be permitted at call time.

Cheers,
Nick.
On Tue, 10 Feb 2015 08:43:53 +1000, Nick Coghlan wrote:
> For example, the potential for arcane call arguments suggests the need for a PEP 8 addition saying "first standalone args, then iterable expansions, then mapping expansions", even though syntactically any order would now be permitted at call time.
There are other concerns:
- inspect.signature() must be updated to cover the new call possibilities
- function call performance must not be crippled by the new possibilities

Regards,
Antoine
What's an example of a way inspect.signature must change? I thought PEP 448 added new unpacking shortcuts which (for example) change the *caller* side of a function call. I didn't realize it impacted the *callee* side too.

/arry

On 02/09/2015 03:14 PM, Antoine Pitrou wrote:
> - inspect.signature() must be updated to cover the new call possibilities
Yes, that's exactly right. It does not affect the callee.
Regarding function call performance, nothing has changed for the originally
accepted argument lists: the opcodes generated are the same and they are
processed in the same way.
Also, regarding calling argument order, not just any order is allowed: regular arguments must precede other kinds of arguments, keyword arguments must precede **-args, and *-args must precede **-args. However, I agree with Antoine that PEP 8 should be updated to suggest that *-args should precede any keyword arguments. It is currently allowed to write f(x=2, *args), which is equivalent to f(*args, x=2).
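These ordering rules can be exercised directly on an interpreter with the patch applied (or any later Python 3.5+); a small sketch, where f and the argument names are throwaway stand-ins, not from the PEP:

```python
def f(*args, **kwargs):
    """Toy function that just reports what it received."""
    return args, tuple(sorted(kwargs.items()))

# Multiple * and ** unpackings in one call (PEP 448): positional args and
# *-unpackings come first, then keyword args and **-unpackings.
k, b = (1, 2), (3,)
w, d = {"p": 10}, {"q": 20}
args, kwargs = f(0, *k, *b, 99, **w, **d)
assert args == (0, 1, 2, 3, 99)
assert kwargs == (("p", 10), ("q", 20))

# The pre-existing quirk PEP 8 could warn about: f(x=2, *args) == f(*args, x=2)
assert f(x=2, *(1, 2)) == f(*(1, 2), x=2) == ((1, 2), (("x", 2),))
```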
Best,
Neil
On Feb 09, 2015, at 07:46 PM, Neil Girdhar wrote:
> However, I agree with Antoine that PEP 8 should be updated to suggest that *-args should precede any keyword arguments. It is currently allowed to write f(x=2, *args), which is equivalent to f(*args, x=2).
But if we have to add a PEP 8 admonition against some syntax that's being newly added, why is this an improvement?

I had some more snarky/funny comments to make, but I'll just say -1. The Rationale in the PEP doesn't sell me on it being an improvement to Python.

Cheers,
-Barry
The admonition is against syntax that currently exists.
Just an FYI:
http://www.reddit.com/r/Python/comments/2v8g26/python_350_alpha_1_has_been_r...
PEP 448 was mentioned here (by Python laypeople, not developers).
Hi,
2015-02-09 22:06 GMT+01:00, Neil Girdhar wrote:
> The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is implemented now based on some early work by Thomas Wouters (in 2008) and Florian Hahn (2013) and recently completed by Joshua Landau and me.
I don't like this PEP. IMO it makes the Python syntax more complex and more difficult to read. Extract of the PEP:

> Current usage of the * iterable unpacking operator features unnecessary restrictions that can harm readability.

Yes, the current syntax is more verbose, but it's simpler to understand and simpler to debug.

Example:
    >>> ranges = [range(i) for i in range(5)]
    >>> [*item for item in ranges]
    [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
I don't understand this code. It looks like you forgot something before *item; I would expect 2*item, for example. If it's really to unpack something, I still don't understand the syntax. Does "*" apply to item or to the whole "item for item in ranges"? It's not clear to me. If it applies to the whole generator, the syntax is really strange and I would expect parentheses: [*(item for item in ranges)].

--
    function(**kw_arguments, **more_arguments)

If the key "key1" is in both dictionaries, more_arguments wins, right?

I have never suffered from the lack of PEP 448. But I remember that a friend learning Python asked me why * and ** are limited to functions. I had no answer. The answer is maybe to keep the language simple? :-) I should maybe read the PEP one more time and think about it.

Victor
On Mon, 9 Feb 2015 16:06:20 -0500, Neil Girdhar wrote:
> The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is implemented now based on some early work by Thomas Wouters (in 2008) and Florian Hahn (2013) and recently completed by Joshua Landau and me.
To be clear, the PEP will probably be useful for one single line of Python code every 10000. This is a very weak case for such an intrusive syntax addition. I would support the PEP if it only added the simple cases of tuple unpacking, left function call conventions alone, and didn't introduce **-unpacking. Barring that, I really don't want to review the patch, and I'm a rather decided -1 on the current PEP.

Regards,
Antoine
2015-02-10 0:51 GMT+01:00, Antoine Pitrou wrote:
> To be clear, the PEP will probably be useful for one single line of Python code every 10000.
@Neil: Can you maybe show us some examples of usage of PEP 448 in the Python stdlib? I mean, find some functions where using the PEP would be useful and show the code before/after (maybe in a code review). It would help to get a better opinion on the PEP. I'm not sure that the examples in the PEP are the most relevant.

Victor
On Feb 9, 2015, at 4:06 PM, Neil Girdhar wrote:
> The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is implemented now based on some early work by Thomas Wouters (in 2008) and Florian Hahn (2013) and recently completed by Joshua Landau and me.
> The issue tracker http://bugs.python.org/issue2292 has a working patch. Would someone be able to review it?
I just skimmed over the PEP and it seems like it's trying to solve a few different things:

* Making it easy to combine multiple lists and additional positional args into a function call
* Making it easy to combine multiple dicts and additional keyword args into a function call
* Making it easy to do a single level of nested iterable "flatten"

Looking at the syntax in the PEP I had a hard time detangling what exactly it was doing even with reading the PEP itself. I wonder if there isn't a way to combine simpler, more generic things to get the same outcome.

Looking at the "making it easy to combine multiple lists and additional positional args into a function call" aspect of this, why is:

    print(*[1], *[2], 3)

better than

    print(*[1] + [2] + [3])?

That's already doable in Python right now and doesn't require anything new to handle it.

Looking at the "making it easy to do a single level of nested iterable 'flatten'" aspect of this, the example of:
    >>> ranges = [range(i) for i in range(5)]
    >>> [*item for item in ranges]
    [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
Conceptually a list comprehension like [thing for item in iterable] can be mapped to a for loop like this:

    result = []
    for item in iterable:
        result.append(thing)

However [*item for item in ranges] is mapped more to something like this:

    result = []
    for item in iterable:
        result.extend(item)

I feel like switching list comprehensions from append to extend just because of a * is really confusing, and it acts differently than if you just did *item outside of a list comprehension. I feel like the itertools.chain() way of doing this is *much* clearer.

Finally there's the "make it easy to combine multiple dicts into a function call" aspect of this. This I think is the biggest thing that this PEP actually adds, however I think it goes about it the wrong way. Sadly there is nothing like [1] + [2] for dictionaries. The closest thing is:

    kwargs = dict1.copy()
    kwargs.update(dict2)
    func(**kwargs)

So what I wonder is if this PEP wouldn't be better off just using the existing methods for doing the kinds of things that I pointed out above, and instead defining + or | or some other symbol for something similar to [1] + [2] but for dictionaries. This would mean that you could simply do:

    func(**dict1 | dict(y=1) | dict2)

instead of

    dict(**{'x': 1}, y=2, **{'z': 3})

I feel like not only does this genericize way better but it limits the impact and new syntax being added to Python and is a ton more readable.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
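The existing spellings Donald points to do work today; a quick sketch of the itertools.chain() flatten and the copy()/update() merge (the names flat, func, dict1, and dict2 are illustrative, not from the thread):

```python
from itertools import chain

# The single-level "flatten" without any new comprehension syntax:
ranges = [range(i) for i in range(5)]
flat = list(chain.from_iterable(ranges))
assert flat == [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]

# The closest existing spelling of a dict merge for a function call:
def func(**kwargs):
    return kwargs

dict1, dict2 = {"x": 1}, {"z": 3}
kwargs = dict1.copy()
kwargs.update(dict2, y=2)   # update() accepts a mapping plus keyword args
assert func(**kwargs) == {"x": 1, "y": 2, "z": 3}
```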
For some reason I can't seem to reply using Google Groups, which is telling me "this is a read-only mirror" (anyone know why?). Anyway, I'm going to answer the concerns as best I can.

Antoine said:
> To be clear, the PEP will probably be useful for one single line of Python code every 10000. This is a very weak case for such an intrusive syntax addition. I would support the PEP if it only added the simple cases of tuple unpacking, left alone function call conventions, and didn't introduce **-unpacking.
To me this is more of a syntax simplification than a syntax addition. For me the **-unpacking is the most useful part. Regarding utility, it seems that many of the people on r/Python were pretty excited about this PEP: http://www.reddit.com/r/Python/comments/2synry/so_8_peps_are_currently_being...

Victor noticed that there's a mistake with the code:
    >>> ranges = [range(i) for i in range(5)]
    >>> [*item for item in ranges]
    [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
It should be a range(4) in the code. The "*" applies only to item. It is the same as writing:

    [*range(0), *range(1), *range(2), *range(3), *range(4)]

which is the same as unpacking all of those ranges into a list.
> function(**kw_arguments, **more_arguments)
> If the key "key1" is in both dictionaries, more_arguments wins, right?
There was some debate and it was decided that duplicate keyword arguments would remain an error (for now at least). If you want to merge the dictionaries with overriding, then you can still do:

    function(**{**kw_arguments, **more_arguments})

because **-unpacking in dicts overrides as you guessed.
—
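The distinction Neil describes is observable on an interpreter with the patch applied (or any later Python 3.5+); a small sketch with throwaway dicts:

```python
def function(**kwargs):
    return kwargs

kw_arguments = {"key1": 1, "a": 2}
more_arguments = {"key1": 10, "b": 3}

# In a dict display, a later ** unpacking silently overrides earlier keys:
merged = {**kw_arguments, **more_arguments}
assert merged == {"key1": 10, "a": 2, "b": 3}

# In a function call, a duplicate keyword across ** unpackings is an error:
raised = False
try:
    function(**kw_arguments, **more_arguments)
except TypeError:
    raised = True
assert raised
```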
On Mon, Feb 9, 2015 at 7:12 PM, Donald Stufft wrote:
> I just skimmed over the PEP and it seems like it's trying to solve a few different things:
> * Making it easy to combine multiple lists and additional positional args into a function call
> * Making it easy to combine multiple dicts and additional keyword args into a function call
> * Making it easy to do a single level of nested iterable "flatten"
I would say it's:
* making it easy to unpack iterables and mappings in function calls,
* making it easy to unpack iterables into list and set displays and comprehensions, and
* making it easy to unpack mappings into dict displays and comprehensions.
> Looking at the "making it easy to combine multiple lists and additional positional args into a function call" aspect of this, why is:
>
>     print(*[1], *[2], 3)
>
> better than
>
>     print(*[1] + [2] + [3])?
>
> That's already doable in Python right now and doesn't require anything new to handle it.
Admittedly, this wasn't a great example. But if [1] and [2] had been arbitrary iterables, you would have to cast each to a list, e.g.,

    accumulator = []
    accumulator.extend(a)
    accumulator.append(b)
    accumulator.extend(c)
    print(*accumulator)

replaces

    print(*a, b, *c)

where a and c are iterable. The latter version is also more efficient, because it unpacks only a onto the stack, allocating no auxiliary list.
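The two spellings produce the same call on an interpreter with the patch (or any later Python 3.5+); a tiny sketch, where capture and the one-shot iterators are placeholder names:

```python
def capture(*args):
    return args

a, b, c = iter((1, 2)), 3, iter((4, 5))   # a and c are one-shot iterables

# Pre-PEP-448 spelling: build an auxiliary list first.
accumulator = []
accumulator.extend(a)
accumulator.append(b)
accumulator.extend(c)
old_style = capture(*accumulator)

# PEP 448 spelling: unpack directly at the call site.
a, c = iter((1, 2)), iter((4, 5))         # fresh iterators; the first pair was consumed
new_style = capture(*a, b, *c)

assert old_style == new_style == (1, 2, 3, 4, 5)
```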
On Feb 9, 2015, at 7:29 PM, Neil Girdhar wrote:
> Admittedly, this wasn't a great example. But, if [1] and [2] had been iterables, you would have to cast each to list, e.g.,
>
>     accumulator = []
>     accumulator.extend(a)
>     accumulator.append(b)
>     accumulator.extend(c)
>     print(*accumulator)
>
> replaces
>
>     print(*a, b, *c)
>
> where a and c are iterable. The latter version is also more efficient because it unpacks only a onto the stack, allocating no auxiliary list.
Honestly that doesn't seem like the way I'd write it at all; if they might not be lists I'd just cast them to lists:

    print(*list(a) + [b] + list(c))

But if casting to list really is that big a deal, then perhaps a better solution is to simply make it so that something like ``a_list + an_iterable`` is valid and the iterable would just be consumed and +'d onto the list. That still feels like a more general solution and a far less surprising and easier to read one.
Honestly the use of * and ** in functions doesn't bother me a whole lot, though I don't see much use for it over what's already available for lists (and I think doing something similarly generic for mappings is a better idea). What really bothers me is these parts:

* making it easy to unpack iterables into list and set displays and comprehensions, and
* making it easy to unpack mappings into dict displays and comprehensions.

I feel like these are super wrong, and if they were put in I'd probably end up writing a linter to disallow them in my own code bases. I feel like adding a special case for * in list comprehensions breaks the "manually expanded" version of those. Switching from append to extend inside of a list comprehension because of a * doesn't make any sense to me. I can't seem to construct any for loop that mimics what this PEP proposes as [*item for item in iterable] without fundamentally changing the operation that happens in each loop of the list comprehension.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Mon, Feb 9, 2015 at 7:53 PM, Donald Stufft
On Feb 9, 2015, at 7:29 PM, Neil Girdhar
wrote: For some reason I can't seem to reply using Google groups, which is is telling "this is a read-only mirror" (anyone know why?) Anyway, I'm going to answer as best I can the concerns.
Antoine said:
To be clear, the PEP will probably be useful for one single line of
Python code every 10000. This is a very weak case for such an intrusive syntax addition. I would support the PEP if it only added the simple cases of tuple unpacking, left alone function call conventions, and didn't introduce **-unpacking.
To me this is more of a syntax simplification than a syntax addition. For me the **-unpacking is the most useful part. Regarding utility, it seems that a many of the people on r/python were pretty excited about this PEP: http://www.reddit.com/r/Python/comments/2synry/so_8_peps_are_currently_being...
—
Victor noticed that there's a mistake with the code:
>>> ranges = [range(i) for i in range(5)]
>>> [*item for item in ranges]
[0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
It should be a range(4) in the code. The "*" applies to only item. It is the same as writing:
[*range(0), *range(1), *range(2), *range(3), *range(4)]
which is the same as unpacking all of those ranges into a list.
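The expansion Neil describes can be checked directly with the display-unpacking part of the proposal; a minimal sketch:

```python
# Sketch of the expansion Neil describes: unpacking each range
# into a single list display.
ranges = [range(i) for i in range(5)]

expanded = [*range(0), *range(1), *range(2), *range(3), *range(4)]
print(expanded)  # [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]

# The same flattening, spelled with itertools.chain:
from itertools import chain
assert list(chain(*ranges)) == expanded
```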
function(**kw_arguments, **more_arguments)

If the key "key1" is in both dictionaries, more_arguments wins, right?
There was some debate and it was decided that duplicate keyword arguments would remain an error (for now at least). If you want to merge the dictionaries with overriding, then you can still do:
function(**{**kw_arguments, **more_arguments})
because **-unpacking in dicts overrides as you guessed.
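A small sketch of both behaviours, using a hypothetical `function` that just echoes its keyword arguments:

```python
def function(**kwargs):
    # Hypothetical helper: return the keywords it receives.
    return kwargs

kw_arguments = {'key1': 1, 'other': 2}
more_arguments = {'key1': 10}

# Duplicate keywords across two **-unpackings are an error:
try:
    function(**kw_arguments, **more_arguments)
except TypeError:
    print("duplicate keyword argument 'key1'")

# Merging in a dict display first: the later mapping wins.
print(function(**{**kw_arguments, **more_arguments}))
# {'key1': 10, 'other': 2}
```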
—
On Mon, Feb 9, 2015 at 7:12 PM, Donald Stufft
wrote: On Feb 9, 2015, at 4:06 PM, Neil Girdhar
wrote: Hello all,
The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is implemented now based on some early work by Thomas Wouters (in 2008) and Florian Hahn (2013) and recently completed by Joshua Landau and me.
The issue tracker http://bugs.python.org/issue2292 has a working patch. Would someone be able to review it?
I just skimmed over the PEP and it seems like it’s trying to solve a few different things:
* Making it easy to combine multiple lists and additional positional args into a function call
* Making it easy to combine multiple dicts and additional keyword args into a function call
* Making it easy to do a single level of nested iterable "flatten".
I would say it's:

* making it easy to unpack iterables and mappings in function calls,
* making it easy to unpack iterables into list and set displays and comprehensions, and
* making it easy to unpack mappings into dict displays and comprehensions.
Looking at the syntax in the PEP I had a hard time detangling what exactly it was doing even with reading the PEP itself. I wonder if there isn’t a way to combine simpler more generic things to get the same outcome.
Looking at the "Making it easy to combine multiple lists and additional positional args into a function call" aspect of this, why is:
print(*[1], *[2], 3) better than print(*[1] + [2] + [3])?
That's already doable in Python right now and doesn't require anything new to handle it.
Admittedly, this wasn't a great example. But, if [1] and [2] had been iterables, you would have to cast each to list, e.g.,
accumulator = []
accumulator.extend(a)
accumulator.append(b)
accumulator.extend(c)
print(*accumulator)
replaces
print(*a, b, *c)
where a and c are iterable. The latter version is also more efficient because it unpacks straight onto the stack, allocating no auxiliary list.
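To make the comparison concrete, here is a sketch with hypothetical iterables `a` and `c` (generators, so plain `+` concatenation is unavailable):

```python
def show(*args):
    # Stand-in for print: return the positional arguments received.
    return args

# Hypothetical iterables (generators, so there is no + concatenation):
a = (x for x in range(3))
c = (x * x for x in range(2))
b = 99

# PEP 448 spelling: unpack a, pass b positionally, unpack c
result = show(*a, b, *c)
print(result)  # (0, 1, 2, 99, 0, 1)

# Equivalent pre-PEP spelling with an auxiliary list:
a = (x for x in range(3))
c = (x * x for x in range(2))
accumulator = []
accumulator.extend(a)
accumulator.append(b)
accumulator.extend(c)
assert show(*accumulator) == result
```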
Honestly that doesn’t seem like the way I’d write it at all, if they might not be lists I’d just cast them to lists:
print(*list(a) + [b] + list(c))
Sure, that works too as long as you put in the missing parentheses.
But if casting to list really is that big a deal, then perhaps a better solution is to simply make it so that something like ``a_list + an_iterable`` is valid and the iterable would just be consumed and +’d onto the list. That still feels like a more general solution and a far less surprising and easier to read one.
I understand. However I just want to point out that 448 is more general. There is no binary operator for generators. How do you write (*a, *b, *c)? You need to use itertools.chain(a, b, c).
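The tuple-display case and its itertools.chain spelling, side by side (a sketch with throwaway generators):

```python
from itertools import chain

def gen():
    # A throwaway generator: no + operator applies to it.
    yield from (0, 1)

b = [10, 11]
c = (20,)

# PEP 448 tuple display with several unpackings:
tup = (*gen(), *b, *c)
print(tup)  # (0, 1, 10, 11, 20)

# The pre-PEP equivalent chains the iterables, then builds the tuple:
assert tuple(chain(gen(), b, c)) == tup
```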
Looking at the "making it easy to do a single level of nested iterable 'flatten'" aspect of this, the example of:
>>> ranges = [range(i) for i in range(5)]
>>> [*item for item in ranges]
[0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
Conceptually a list comprehension like [thing for item in iterable] can be mapped to a for loop like this:
result = []
for item in iterable:
    result.append(thing)
However [*item for item in ranges] is mapped more to something like this:
result = []
for item in iterable:
    result.extend(*item)
I feel like switching list comprehensions from append to extend just because of a * is really confusing and it acts differently than if you just did *item outside of a list comprehension. I feel like the itertools.chain() way of doing this is *much* clearer.
Finally there's the "make it easy to combine multiple dicts into a function call" aspect of this. This I think is the biggest thing that this PEP actually adds, however I think it goes around it the wrong way. Sadly there is nothing like [1] + [2] for dictionaries. The closest thing is:
kwargs = dict1.copy()
kwargs.update(dict2)
func(**kwargs)
So what I wonder is if this PEP wouldn't be better off just using the existing methods for doing the kinds of things that I pointed out above, and instead defining + or | or some other symbol for something similar to [1] + [2] but for dictionaries. This would mean that you could simply do:
func(**dict1 | dict(y=1) | dict2)
instead of
dict(**{'x': 1}, y=2, **{'z': 3})
I feel like not only does this genericize way better but it limits the impact and new syntax being added to Python and is a ton more readable.
Honestly the use of * and ** in functions doesn’t bother me a whole lot, though I don’t see much use for it over what’s already available for lists (and I think doing something similarly generic for mappings is a better idea). What really bothers me is these parts:

* making it easy to unpack iterables into list and set displays and comprehensions, and
* making it easy to unpack mappings into dict displays and comprehensions.
I feel like these are super wrong and if they were put in I’d probably end up writing a linter to disallow them in my own code bases.
I feel like adding a special case for * in list comprehensions breaks the “manually expanded” version of those. Switching from append to extend inside of a list comprehension because of a * doesn’t make any sense to me. I can’t seem to construct any for loop that mimics what this PEP proposes as [*item for item in iterable] without fundamentally changing the operation that happens in each loop of the list comprehension.
I don't know what you mean by this. You can write [*item for item in iterable] in current Python as [it for item in iterable for it in item]. You can unroll that as:

a = []
for item in iterable:
    for it in item:
        a.append(it)

— or yield for generators or add for sets.
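Neil's rewriting can be checked with today's syntax (the `[*item for item in iterable]` spelling itself is not valid Python, so the nested comprehension stands in for it):

```python
iterable = [range(i) for i in range(5)]

# Nested comprehension: one element per inner item, i.e. a one-level flatten.
flattened = [it for item in iterable for it in item]

# The same comprehension unrolled into explicit loops:
a = []
for item in iterable:
    for it in item:
        a.append(it)

print(flattened)  # [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
assert a == flattened
```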
--- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Feb 9, 2015, at 8:34 PM, Neil Girdhar
wrote: On Mon, Feb 9, 2015 at 7:53 PM, Donald Stufft
wrote: On Feb 9, 2015, at 7:29 PM, Neil Girdhar
wrote: For some reason I can't seem to reply using Google groups, which is telling me "this is a read-only mirror" (anyone know why?). Anyway, I'm going to answer the concerns as best I can.
Antoine said:
To be clear, the PEP will probably be useful for one single line of Python code every 10000. This is a very weak case for such an intrusive syntax addition. I would support the PEP if it only added the simple cases of tuple unpacking, left alone function call conventions, and didn't introduce **-unpacking.
To me this is more of a syntax simplification than a syntax addition. For me the **-unpacking is the most useful part. Regarding utility, it seems that many of the people on r/python were pretty excited about this PEP: http://www.reddit.com/r/Python/comments/2synry/so_8_peps_are_currently_being...
—
Victor noticed that there's a mistake with the code:
>>> ranges = [range(i) for i in range(5)]
>>> [*item for item in ranges]
[0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
It should be a range(4) in the code. The "*" applies to only item. It is the same as writing:
[*range(0), *range(1), *range(2), *range(3), *range(4)]
which is the same as unpacking all of those ranges into a list.
function(**kw_arguments, **more_arguments)

If the key "key1" is in both dictionaries, more_arguments wins, right?
There was some debate and it was decided that duplicate keyword arguments would remain an error (for now at least). If you want to merge the dictionaries with overriding, then you can still do:
function(**{**kw_arguments, **more_arguments})
because **-unpacking in dicts overrides as you guessed.
—
On Mon, Feb 9, 2015 at 7:12 PM, Donald Stufft
wrote: On Feb 9, 2015, at 4:06 PM, Neil Girdhar
wrote: Hello all,
The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is implemented now based on some early work by Thomas Wouters (in 2008) and Florian Hahn (2013) and recently completed by Joshua Landau and me.
The issue tracker http://bugs.python.org/issue2292 has a working patch. Would someone be able to review it?
I just skimmed over the PEP and it seems like it’s trying to solve a few different things:
* Making it easy to combine multiple lists and additional positional args into a function call
* Making it easy to combine multiple dicts and additional keyword args into a function call
* Making it easy to do a single level of nested iterable "flatten".
I would say it's:

* making it easy to unpack iterables and mappings in function calls,
* making it easy to unpack iterables into list and set displays and comprehensions, and
* making it easy to unpack mappings into dict displays and comprehensions.
Looking at the syntax in the PEP I had a hard time detangling what exactly it was doing even with reading the PEP itself. I wonder if there isn’t a way to combine simpler more generic things to get the same outcome.
Looking at the "Making it easy to combine multiple lists and additional positional args into a function call" aspect of this, why is:
print(*[1], *[2], 3) better than print(*[1] + [2] + [3])?
That's already doable in Python right now and doesn't require anything new to handle it.
Admittedly, this wasn't a great example. But, if [1] and [2] had been iterables, you would have to cast each to list, e.g.,
accumulator = []
accumulator.extend(a)
accumulator.append(b)
accumulator.extend(c)
print(*accumulator)
replaces
print(*a, b, *c)
where a and c are iterable. The latter version is also more efficient because it unpacks straight onto the stack, allocating no auxiliary list.
Honestly that doesn’t seem like the way I’d write it at all, if they might not be lists I’d just cast them to lists:
print(*list(a) + [b] + list(c))
Sure, that works too as long as you put in the missing parentheses.
There are no missing parentheses; the * and ** are applied last in the order of operations (though the parens would likely make that more clear).
But if casting to list really is that big a deal, then perhaps a better solution is to simply make it so that something like ``a_list + an_iterable`` is valid and the iterable would just be consumed and +’d onto the list. That still feels like a more general solution and a far less surprising and easier to read one.
I understand. However I just want to point out that 448 is more general. There is no binary operator for generators. How do you write (*a, *b, *c)? You need to use itertools.chain(a, b, c).
I don’t feel like using itertools.chain is a bad thing TBH, it’s extremely clear as to what’s going on: you’re chaining a bunch of iterables together. I would not however be super upset if the ability to do * and ** multiple times in a function call was added, I just don’t think it’s very useful for * (since you can already get that behavior with things I believe are clearer) and I think getting similar constructs for ** would bring that up to parity. I am really really -1 on the comprehension syntax.
Looking at the "making it easy to do a single level of nested iterable 'flatten'" aspect of this, the example of:
>>> ranges = [range(i) for i in range(5)]
>>> [*item for item in ranges]
[0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
Conceptually a list comprehension like [thing for item in iterable] can be mapped to a for loop like this:
result = []
for item in iterable:
    result.append(thing)
However [*item for item in ranges] is mapped more to something like this:
result = []
for item in iterable:
    result.extend(*item)
I feel like switching list comprehensions from append to extend just because of a * is really confusing and it acts differently than if you just did *item outside of a list comprehension. I feel like the itertools.chain() way of doing this is *much* clearer.
Finally there's the "make it easy to combine multiple dicts into a function call" aspect of this. This I think is the biggest thing that this PEP actually adds, however I think it goes around it the wrong way. Sadly there is nothing like [1] + [2] for dictionaries. The closest thing is:
kwargs = dict1.copy()
kwargs.update(dict2)
func(**kwargs)
So what I wonder is if this PEP wouldn't be better off just using the existing methods for doing the kinds of things that I pointed out above, and instead defining + or | or some other symbol for something similar to [1] + [2] but for dictionaries. This would mean that you could simply do:
func(**dict1 | dict(y=1) | dict2)
instead of
dict(**{'x': 1}, y=2, **{'z': 3})
I feel like not only does this genericize way better but it limits the impact and new syntax being added to Python and is a ton more readable.
Honestly the use of * and ** in functions doesn’t bother me a whole lot, though I don’t see much use for it over what’s already available for lists (and I think doing something similarly generic for mappings is a better idea). What really bothers me is these parts:

* making it easy to unpack iterables into list and set displays and comprehensions, and
* making it easy to unpack mappings into dict displays and comprehensions.
I feel like these are super wrong and if they were put in I’d probably end up writing a linter to disallow them in my own code bases.
I feel like adding a special case for * in list comprehensions breaks the “manually expanded” version of those. Switching from append to extend inside of a list comprehension because of a * doesn’t make any sense to me. I can’t seem to construct any for loop that mimics what this PEP proposes as [*item for item in iterable] without fundamentally changing the operation that happens in each loop of the list comprehension.
I don't know what you mean by this. You can write [*item for item in iterable] in current Python as [it for item in iterable for it in item]. You can unroll that as:

a = []
for item in iterable:
    for it in item:
        a.append(it)
— or yield for generators or add for sets.
I don’t think * means “loop” anywhere else in Python and I would never “guess” that [*item for item in iterable] meant that. It’s completely non intuitive. Anywhere else you see *foo it’s unpacking a tuple not making an inner loop. That means that anywhere else in Python *item is the same thing as item[0], item[1], item[2], …, but this PEP makes it so just inside of a comprehension it actually means “make a second, inner loop” instead of what I think anyone who has learned that syntax would expect, which is it should be equivalent to [(item[0], item[1], item[2], …) for item in iterable]. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On 02/09/2015 05:48 PM, Donald Stufft wrote:
I don’t think * means “loop” anywhere else in Python and I would never “guess” that
[*item for item in iterable]
meant that. It’s completely non intuitive. Anywhere else you see *foo it’s unpacking a tuple not making an inner loop. That means that anywhere else in Python *item is the same thing as item[0], item[1], item[2], …, but this PEP makes it so just inside of a comprehension it actually means “make a second, inner loop” instead of what I think anyone who has learned that syntax would expect, which is it should be equivalent to [(item[0], item[1], item[2], …) for item in iterable].
I agree with Donald. I would expect a list of lists from that syntax... or maybe a list of tuples? -- ~Ethan~
To be logical, I expect [(*item) for item in mylist] to simply return mylist. [*(item) for item in mylist] with mylist=[(1, 2), (3,)] could return [1, 2, 3], as just [*mylist], so "unpack" mylist. Victor
On Tue, Feb 10, 2015 at 2:20 AM, Victor Stinner
To be logical, I expect [(*item) for item in mylist] to simply return mylist.
If you want simply mylist as a list, that is [*mylist]
[*(item) for item in mylist] with mylist=[(1, 2), (3,)] could return [1, 2, 3],
right
as just [*mylist], so "unpack" mylist.
[*mylist] remains equivalent to list(mylist), just as it is now. In one case, you're unpacking the elements of the list; in the other, you're unpacking the list itself. Victor
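The distinction Victor and Neil are drawing can be illustrated directly; a minimal sketch:

```python
mylist = [(1, 2), (3,)]

# Unpacking the list itself just copies it, like list(mylist):
assert [*mylist] == list(mylist) == [(1, 2), (3,)]

# Unpacking each element instead flattens one level:
assert [*(1, 2), *(3,)] == [1, 2, 3]
```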
ah, sorry… forget that I said "just as it is now" — I am losing track of
what's allowed in Python now!
On Tue, Feb 10, 2015 at 2:29 AM, Neil Girdhar
On Tue, Feb 10, 2015 at 2:20 AM, Victor Stinner
wrote: To be logical, I expect [(*item) for item in mylist] to simply return mylist.
If you want simply mylist as a list, that is [*mylist]
[*(item) for item in mylist] with mylist=[(1, 2), (3,)] could return [1, 2, 3],
right
as just [*mylist], so "unpack" mylist.
[*mylist] remains equivalent to list(mylist), just as it is now. In one case, you're unpacking the elements of the list; in the other, you're unpacking the list itself.
Victor
On 10 February 2015 at 01:48, Donald Stufft
I am really really -1 on the comprehension syntax.
[... omitting because gmail seems to have messed up the quoting ...]
I don’t think * means “loop” anywhere else in Python and I would never “guess” that [*item for item in iterable] meant that. It’s completely non intuitive. Anywhere else you see *foo it’s unpacking a tuple not making an inner loop. That means that anywhere else in Python *item is the same thing as item[0], item[1], item[2], …, but this PEP makes it so just inside of a comprehension it actually means “make a second, inner loop” instead of what I think anyone who has learned that syntax would expect, which is it should be equivalent to [(item[0], item[1], item[2], …) for item in iterable].
I agree completely with Donald here. The comprehension syntax has consistently been the part of the proposal that has resulted in confused questions from reviewers, and I don't think it's at all intuitive. Is it allowable to vote on parts of the PEP separately? If not, then the comprehension syntax is enough for me to reject the whole proposal. If we can look at parts in isolation, I'm OK with saying -1 to the comprehension syntax and then we can look at whether the other parts of the PEP add enough to be worth it (the comprehension side is enough of a distraction that I haven't really considered the other bits yet). Paul
On 10 Feb 2015 19:41, "Paul Moore"
I agree completely with Donald here. The comprehension syntax has consistently been the part of the proposal that has resulted in confused questions from reviewers, and I don't think it's at all intuitive.
Is it allowable to vote on parts of the PEP separately? If not, then the comprehension syntax is enough for me to reject the whole proposal. If we can look at parts in isolation, I'm OK with saying -1 to the comprehension syntax and then we can look at whether the other parts of the PEP add enough to be worth it (the comprehension side is enough of a distraction that I haven't really considered the other bits yet).
It occurs to me that the PEP effectively changes the core of a generator expression from "yield x" to "yield from x" if the tuple expansion syntax is used. If we rejected the "yield *x" syntax for standalone yield expressions, I don't think it makes sense to now add it for generator expressions. So I guess that adds me to the -1 camp on the comprehension/generator expression part of the story - it doesn't make things all that much easier to write than the relevant nested loop, and it makes them notably harder to read. I haven't formed an opinion on the rest of the PEP yet, as it's been a while since I read the full text. I'll read through the latest version tomorrow. Regards, Nick.
Paul
Donald Stufft wrote:
perhaps a better solution is to simply make it so that something like ``a_list + an_iterable`` is valid and the iterable would just be consumed and +’d onto the list.
I don't think I like the asymmetry that this would introduce into + on lists. Currently [1, 2, 3] + (4, 5, 6) is an error because it's not clear whether the programmer intended the result to be a list or a tuple. I think that's a good thing. Also, it would mean that [1, 2, 3] + foo == [1, 2, 3, "f", "o", "o"] which would be surprising and probably not what was intended. -- Greg
On Tue, 10 Feb 2015 19:04:03 +1300
Greg Ewing
Donald Stufft wrote:
perhaps a better solution is to simply make it so that something like ``a_list + an_iterable`` is valid and the iterable would just be consumed and +’d onto the list.
I don't think I like the asymmetry that this would introduce into + on lists. Currently
[1, 2, 3] + (4, 5, 6)
is an error because it's not clear whether the programmer intended the result to be a list or a tuple.
>>> bytearray(b"a") + b"bc"
bytearray(b'abc')
>>> b"a" + bytearray(b"bc")
b'abc'
It's quite convenient. In many contexts lists and tuples are quite interchangeable (for example when unpacking). Regards Antoine.
Antoine Pitrou wrote:
bytearray(b"a") + b"bc"
bytearray(b'abc')
b"a" + bytearray(b"bc")
b'abc'
It's quite convenient.
It's a bit disconcerting that the left operand wins, rather than one of them being designated as the "wider" type, as occurs with many other operations on mixed types, e.g. int + float. In any case, these seem to be special-case combinations. It's not so promiscuous as to accept any old iterable on the right:
>>> b"a" + [1,2,3]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't concat bytes to list
>>> [1,2,3] + b"a"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can only concatenate list (not "bytes") to list
-- Greg
On Wed, 11 Feb 2015 18:45:40 +1300
Greg Ewing
Antoine Pitrou wrote:
bytearray(b"a") + b"bc"
bytearray(b'abc')
b"a" + bytearray(b"bc")
b'abc'
It's quite convenient.
It's a bit disconcerting that the left operand wins, rather than one of them being designated as the "wider" type, as occurs with many other operations on mixed types, e.g. int + float.
There is no "wider" type here. This behaviour is perfectly logical.
In any case, these seem to be special-case combinations.
No:
>>> b"abc" + array.array("b", b"def")
b'abcdef'
>>> bytearray(b"abc") + array.array("b", b"def")
bytearray(b'abcdef')
Regards Antoine.
2015-02-10 1:29 GMT+01:00 Neil Girdhar
For some reason I can't seem to reply using Google groups, which is telling me "this is a read-only mirror" (anyone know why?). Anyway, I'm going to answer the concerns as best I can.
Antoine said:
To be clear, the PEP will probably be useful for one single line of Python code every 10000. This is a very weak case for such an intrusive syntax addition. I would support the PEP if it only added the simple cases of tuple unpacking, left alone function call conventions, and didn't introduce **-unpacking.
To me this is more of a syntax simplification than a syntax addition. For me the **-unpacking is the most useful part. Regarding utility, it seems that many of the people on r/python were pretty excited about this PEP: http://www.reddit.com/r/Python/comments/2synry/so_8_peps_are_currently_being...
I used grep to find how many times dict.update() is used.

haypo@selma$ wc -l Lib/*.py
(....)
112055 total
haypo@selma$ grep '\.update(.\+)' Lib/*.py|wc -l
63

So there are 63 or fewer (it's a regex, I didn't check each line) calls to dict.update() on a total of 112,055 lines. I found a few pieces of code using the pattern "dict1.update(dict2); func(**dict1)". Examples:

functools.py:

def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        newkeywords = keywords.copy()
        newkeywords.update(fkeywords)
        return func(*(args + fargs), **newkeywords)
    ...
    return newfunc

=>

def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        return func(*(args + fargs), **keywords, **fkeywords)
    ...
    return newfunc

The new code behaves differently since Neil said that an error is raised if fkeywords and keywords have keys in common. By the way, this must be written in the PEP.

pdb.py:

ns = self.curframe.f_globals.copy()
ns.update(self.curframe_locals)
code.interact("*interactive*", local=ns)

Hum no sorry, ns is not used with ** here.

Victor
On 02/09/2015 05:14 PM, Victor Stinner wrote:
def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        return func(*(args + fargs), **keywords, **fkeywords)
    ...
    return newfunc
The new code behaves differently since Neil said that an error is raised if fkeywords and keywords have keys in common. By the way, this must be written in the PEP.
That line should read

return func(*(args + fargs), **{**keywords, **fkeywords})

to avoid the duplicate key error and keep the original functionality. -- ~Ethan~
On 10 Feb 2015 03:07, "Ethan Furman"
That line should read
return func(*(args + fargs), **{**keywords, **fkeywords})
to avoid the duplicate key error and keep the original functionality.
To me, this is just ugly. I prefer the original code, which uses .update(). Maybe the PEP should be changed to behave like .update()? Victor
On Tue, Feb 10, 2015 at 2:08 AM, Victor Stinner
Le 10 févr. 2015 03:07, "Ethan Furman"
wrote: That line should read
return func(*(args + fargs), **{**keywords, **fkeywords})
to avoid the duplicate key error and keep the original functionality.
To me, this is just ugly. I prefer the original code, which uses .update().
Maybe the PEP should be changed to behave as .update()?
Victor
Just for clarity, Ethan is right, but it could also be written:

return func(*args, *fargs, **{**keywords, **fkeywords})

Best, Neil
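Putting Neil's spelling back into the partial example from earlier in the thread, a sketch (with a toy report function) showing that call-time keywords still override the stored ones:

```python
# A sketch of functools.partial using PEP 448 merging; the dict-display
# merge lets call-time keywords override the stored ones, matching the
# original copy()/update() behaviour.
def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        return func(*args, *fargs, **{**keywords, **fkeywords})
    return newfunc

def report(a, b, flag=False):
    # Toy function used only to observe the arguments.
    return (a, b, flag)

f = partial(report, 1, flag=True)
print(f(2))              # (1, 2, True)
print(f(2, flag=False))  # (1, 2, False) -- later keywords win
```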
+1 for adding "+" or "|" operator for merging dicts. To me this operation:
>>> {'x': 1, 'y': 2} + {'z': 3}
{'x': 1, 'y': 2, 'z': 3}
Is very clear. The only potentially non obvious case I can see then is when there are duplicate keys, in which case the syntax could just be defined that last setter wins, e.g.:
>>> {'x': 1, 'y': 2} + {'x': 3}
{'x': 3, 'y': 2}
Which is analogous to the example:
new_dict = dict1.copy()
new_dict.update(dict2)
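The copy()/update() pattern and the PEP's dict-display merge agree on last-setter-wins; a quick sketch:

```python
dict1 = {'x': 1, 'y': 2}
dict2 = {'x': 3}

# copy()/update(): assignments from dict2 overwrite dict1's keys.
new_dict = dict1.copy()
new_dict.update(dict2)
print(new_dict)  # {'x': 3, 'y': 2}

# The dict display with two **-unpackings behaves the same way:
assert {**dict1, **dict2} == new_dict
```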
~ Ian Lee
On Tue, Feb 10, 2015 at 12:11 AM, Serhiy Storchaka
On 10.02.15 04:06, Ethan Furman wrote:
return func(*(args + fargs), **{**keywords, **fkeywords})
We don't use [*args, *fargs] for concatenating lists, but args + fargs. Why not use "+" or "|" operators for merging dicts?
+1 for adding "+" or "|" operator for merging dicts. To me this operation:
>>> {'x': 1, 'y': 2} + {'z': 3}
{'x': 1, 'y': 2, 'z': 3}
Is very clear. The only potentially non obvious case I can see then is when there are duplicate keys, in which case the syntax could just be defined that last setter wins, e.g.:
>>> {'x': 1, 'y': 2} + {'x': 3}
{'x': 3, 'y': 2}
Which is analogous to the example:
new_dict = dict1.copy()
new_dict.update(dict2)
Well looking at just list:

a + b yields new list
a += b yields modified a
On Wed, Feb 11, 2015 at 12:35 AM, Ian Lee
I split off a separate thread on python-ideas [1] specific to the idea of
introducing "+" and "+=" operators on a dict.
[1] https://mail.python.org/pipermail/python-ideas/2015-February/031748.html
~ Ian Lee
On Tue, Feb 10, 2015 at 10:35 PM, John Wong
On Wed, Feb 11, 2015 at 12:35 AM, Ian Lee
wrote: +1 for adding "+" or "|" operator for merging dicts. To me this operation:
>>> {'x': 1, 'y': 2} + {'z': 3}
{'x': 1, 'y': 2, 'z': 3}
Is very clear. The only potentially non obvious case I can see then is when there are duplicate keys, in which case the syntax could just be defined that last setter wins, e.g.:
>>> {'x': 1, 'y': 2} + {'x': 3}
{'x': 3, 'y': 2}
Which is analogous to the example:
new_dict = dict1.copy()
new_dict.update(dict2)
Well looking at just list:

a + b yields new list
a += b yields modified a

then there is also .extend in list, etc.
so do we want to follow list's footsteps? I like + because + is more natural to read. Maybe this needs to be a separate thread. I am actually amazed to remember dict + dict is not possible... there must be a reason (performance??) for this...
On Mon, 09 Feb 2015 18:06:02 -0800
Ethan Furman
On 02/09/2015 05:14 PM, Victor Stinner wrote:
def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        return func(*(args + fargs), **keywords, **fkeywords)
    ...
    return newfunc
The new code behaves differently since Neil said that an error is raised if fkeywords and keywords have keys in common. By the way, this must be written in the PEP.
That line should read
return func(*(args + fargs), **{**keywords, **fkeywords})
to avoid the duplicate key error and keep the original functionality.
While losing readability. What's the point exactly? One line over 112055 (as shown by Victor) can be collapsed away? Wow, that's sure gonna change Python programming in a massively beneficial way... Regards Antoine.
On 10 Feb 2015 19:24, "Terry Reedy"
On 2/9/2015 7:29 PM, Neil Girdhar wrote:
For some reason I can't seem to reply using Google groups, which is telling "this is a read-only mirror" (anyone know why?)
I presume spam prevention. Most spam on python-list comes from the
read-write GG mirror. There were also problems with Google Groups getting the reply-to headers wrong (so if someone flipped the mirror to read-only: thank you!) With any luck, we'll have a native web gateway later this year after Mailman 3 is released, so posting through Google Groups will be less desirable. Cheers, Nick.
-- Terry Jan Reedy
On Tue, 10 Feb 2015 23:16:38 +1000
Nick Coghlan
On 10 Feb 2015 19:24, "Terry Reedy"
wrote: On 2/9/2015 7:29 PM, Neil Girdhar wrote:
For some reason I can't seem to reply using Google groups, which is telling "this is a read-only mirror" (anyone know why?)
I presume spam prevention. Most spam on python-list comes from the
read-write GG mirror.
There were also problems with Google Groups getting the reply-to headers wrong (so if someone flipped the mirror to read-only: thank you!)
With any luck, we'll have a native web gateway later this year after Mailman 3 is released, so posting through Google Groups will be less desirable.
There is already a Web and NNTP gateway with Gmane: http://news.gmane.org/gmane.comp.python.devel No need to rely on Google's mediocre services. Regards Antoine.
On 10/02/2015 13:23, Antoine Pitrou wrote:
On Tue, 10 Feb 2015 23:16:38 +1000 Nick Coghlan
wrote: On 10 Feb 2015 19:24, "Terry Reedy"
wrote: On 2/9/2015 7:29 PM, Neil Girdhar wrote:
For some reason I can't seem to reply using Google groups, which is telling "this is a read-only mirror" (anyone know why?)
I presume spam prevention. Most spam on python-list comes from the
read-write GG mirror.
There were also problems with Google Groups getting the reply-to headers wrong (so if someone flipped the mirror to read-only: thank you!)
With any luck, we'll have a native web gateway later this year after Mailman 3 is released, so posting through Google Groups will be less desirable.
There is already a Web and NNTP gateway with Gmane: http://news.gmane.org/gmane.comp.python.devel
No need to rely on Google's mediocre services.
Regards
Antoine.
Highly recommended, as it gets effectively zero spam. -- My fellow Pythonistas, ask not what our language can do for you, ask what you can do for our language. Mark Lawrence
On 10 February 2015 at 00:29, Neil Girdhar
function(**kw_arguments, **more_arguments) If the key "key1" is in both dictionaries, more_arguments wins, right?
There was some debate and it was decided that duplicate keyword arguments would remain an error (for now at least). If you want to merge the dictionaries with overriding, then you can still do:
function(**{**kw_arguments, **more_arguments})
because **-unpacking in dicts overrides as you guessed.
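The override behaviour Neil describes can be checked on any interpreter with PEP 448 support (Python 3.5+). A minimal sketch, where `report` is a hypothetical function used only for illustration:

```python
def report(**kwargs):
    # Collect the received keyword arguments into a sorted list of pairs.
    return sorted(kwargs.items())

kw_arguments = {"key1": 1, "key2": 2}
more_arguments = {"key1": 10}

# Direct double unpacking raises TypeError on the duplicate "key1"...
try:
    result = report(**kw_arguments, **more_arguments)
except TypeError:
    result = "duplicate keyword argument"

# ...but merging inside a dict display first lets the later value win.
merged = report(**{**kw_arguments, **more_arguments})

print(result)  # duplicate keyword argument
print(merged)  # [('key1', 10), ('key2', 2)]
```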
Eww. Seriously, function(**{**kw_arguments, **more_arguments}) feels more like a Perl "executable line noise" construct than anything I'd ever want to see in Python. And taking something that doesn't work and saying you can make it work by wrapping **{...} round it just seems wrong. Paul
On Tue, Feb 10, 2015 at 1:33 AM, Paul Moore
On 10 February 2015 at 00:29, Neil Girdhar
wrote: function(**kw_arguments, **more_arguments) If the key "key1" is in both dictionaries, more_arguments wins, right?
There was some debate and it was decided that duplicate keyword arguments would remain an error (for now at least). If you want to merge the dictionaries with overriding, then you can still do:
function(**{**kw_arguments, **more_arguments})
because **-unpacking in dicts overrides as you guessed.
Eww. Seriously, function(**{**kw_arguments, **more_arguments}) feels more like a Perl "executable line noise" construct than anything I'd ever want to see in Python. And taking something that doesn't work and saying you can make it work by wrapping **{...} round it just seems wrong.
+1 to this and similar reasoning I find the syntax proposed in PEP 448 incredibly obtuse, and I don't think it's worth it. Python has never placed terseness of expression as its primary goal, but this is mainly what the PEP is aiming at. -1 on the PEP for me, at least in its current form. Eli
On 02/10/2015 10:33 AM, Paul Moore wrote:
On 10 February 2015 at 00:29, Neil Girdhar
wrote: function(**kw_arguments, **more_arguments) If the key "key1" is in both dictionaries, more_arguments wins, right?
There was some debate and it was decided that duplicate keyword arguments would remain an error (for now at least). If you want to merge the dictionaries with overriding, then you can still do:
function(**{**kw_arguments, **more_arguments})
because **-unpacking in dicts overrides as you guessed.
Eww. Seriously, function(**{**kw_arguments, **more_arguments}) feels more like a Perl "executable line noise" construct than anything I'd ever want to see in Python. And taking something that doesn't work and saying you can make it work by wrapping **{...} round it just seems wrong.
I don't think people would want to write the above. I like the "sequence and dict flattening" part of the PEP, mostly because it is consistent and should be easy to understand, but the comprehension syntax enhancements seem to be bad for readability and "comprehending" what the code does. The call syntax part is a mixed bag: on the one hand it is nice to be consistent with the extended possibilities in literals (flattening), but on the other hand there would be small but annoying inconsistencies anyways (e.g. the duplicate kwarg case above). Georg
Georg Brandl wrote:
The call syntax part is a mixed bag: on the one hand it is nice to be consistent with the extended possibilities in literals (flattening), but on the other hand there would be small but annoying inconsistencies anyways (e.g. the duplicate kwarg case above).
That inconsistency already exists -- duplicate keys are allowed in dict literals but not calls:
>>> {'a': 1, 'a': 2}
{'a': 2}
-- Greg
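Greg's point about the existing inconsistency can be verified directly; a minimal sketch, where `f` is a hypothetical function used only for illustration:

```python
# Duplicate keys are silently collapsed in a dict literal (last one wins)...
d = {'a': 1, 'a': 2}

# ...but a duplicate keyword argument in a call is a TypeError.
def f(**kwargs):
    return kwargs

try:
    f(a=1, **{'a': 2})
    call_ok = True
except TypeError:
    call_ok = False

print(d)        # {'a': 2}
print(call_ok)  # False
```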
I am still in favor of this PEP but have run out of time to review it and the feedback. I'm going on vacation for a week or so, maybe I'll find time, if not I'll start reviewing this around Feb 23. -- --Guido van Rossum (python.org/~guido)
Donald Stufft wrote:
why is:
print(*[1], *[2], 3) better than print(*[1] + [2] + [3])?
It could potentially be a little more efficient by eliminating the construction of an intermediate list.
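Both spellings pass the same positional arguments; the difference Greg describes is only whether a concatenated temporary list is built first. A sketch using tuple displays (which accept the same PEP 448 unpacking syntax on Python 3.5+) so the two argument sequences can be compared:

```python
# Several * unpackings plus a plain argument in one display...
args_unpacked = (*[1], *[2], 3)

# ...versus concatenating into a temporary list first.
args_concat = tuple([1] + [2] + [3])

print(args_unpacked)  # (1, 2, 3)
print(args_concat)    # (1, 2, 3)
```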
defining + or | or some other symbol for something similar to [1] + [2] but for dictionaries. This would mean that you could simply do:
func(**dict1 | dict(y=1) | dict2)
Same again, multiple ** avoids construction of an itermediate dict. -- Greg
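For the record, PEP 448 (as shipped in Python 3.5+) allows the multiple-** spelling directly in a call, provided the keys are disjoint. A minimal sketch, where `func` is a hypothetical function used only for illustration:

```python
def func(x=0, y=0, z=0):
    return (x, y, z)

dict1 = {"x": 1}
dict2 = {"z": 3}

# Multiple ** unpackings in one call; the caller never builds a merged
# dict in its own code (keys must not collide, or TypeError is raised).
values = func(**dict1, y=2, **dict2)
print(values)  # (1, 2, 3)
```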
On 10 Feb 2015 06:48, "Greg Ewing"
It could potentially be a little more efficient by eliminating the construction of an intermediate list.
Is it the case in the implementation? If it has to create a temporary list/tuple, I would prefer not to use it. After long years of development, I chose to limit myself to one instruction per line, for different reasons: - I spend more time reading code than writing code; readability matters - it's easier to debug: most debuggers work line by line, or at least that's a convenient way to use them. If you put several instructions on one line, you usually have to read assembler/bytecode to find the current instruction - profilers compute statistics per line, not per instruction (it's also the case for tracemalloc) - tracebacks only give the line number, not the column - etc. So I now prefer more verbose code, even if it is longer to write and may look less efficient.
Same again, multiple ** avoids construction of an itermediate dict.
Again, is it the case in the implementation? It may be possible to modify CPython to really avoid a temporary dict (at least for some kinds of Python functions), but it would be a large refactoring. Usually, if an operation is not efficient, it's not implemented, so users don't try to use it and may even write their code differently (to avoid the performance issue). (But slow operations exist, like list.remove.) Victor
Victor Stinner wrote:
On 10 Feb 2015 06:48, "Greg Ewing"
mailto:greg.ewing@canterbury.ac.nz> wrote: It could potentially be a little more efficient by eliminating the construction of an intermediate list.
Is it the case in the implementation? If it has to create a temporary list/tuple, I will prefer to not use it.
The function call machinery will create a new tuple for the positional args in any case. But if you manually combine your * args into a tuple before calling, there are *two* tuple allocations being done. Passing all the * args directly into the call would allow one of them to be avoided. Similarly for dicts and ** args. -- Greg
Donald Stufft wrote:
However [*item for item in ranges] is mapped more to something like this:
result = []
for item in iterable:
    result.extend(*item)
Actually it would be
result.extend(item)
But if that bothers you, you could consider the expansion to be
result = []
for item in iterable:
    for item1 in item:
        result.append(item1)
In other words, the * is shorthand for an extra level of looping.
and it acts differently than if you just did *item outside of a list comprehension.
Not sure what you mean by that. It seems closely analogous to the use of * in a function call to me. -- Greg
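The "extra level of looping" Greg describes already has a legal spelling in current Python: a nested for clause in the comprehension. A sketch of the equivalence:

```python
items = [[1, 2, 3], [4, 5, 6]]

# Nested comprehension: one extra loop level, flattening the sublists.
flattened = [x for item in items for x in item]

# The same thing written as explicit loops.
result = []
for item in items:
    for x in item:
        result.append(x)

print(flattened)  # [1, 2, 3, 4, 5, 6]
print(result)     # [1, 2, 3, 4, 5, 6]
```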
On Feb 10, 2015, at 12:55 AM, Greg Ewing
wrote: Donald Stufft wrote:
However [*item for item in ranges] is mapped more to something like this:
result = []
for item in iterable:
    result.extend(*item)
Actually it would be
result.extend(item)
But if that bothers you, you could consider the expansion to be
result = []
for item in iterable:
    for item1 in item:
        result.append(item1)
In other words, the * is shorthand for an extra level of looping.
and it acts differently than if you just did *item outside of a list comprehension.
Not sure what you mean by that. It seems closely analogous to the use of * in a function call to me.
Putting aside the proposed syntax, the following two statements are currently true: 1. The expression *item is roughly the same thing as (item[0], item[1], ..., item[n]) 2. The expression [something for thing in iterable] is roughly the same as:
result = []
for thing in iterable:
    result.append(something)
This is a single loop where an expression is run for each iteration of the loop, and the return result of that expression is appended to the result. If you combine these two things, the "something" in #2 becomes *item, and since *item is roughly the same thing as (item[0], item[1], ..., item[n]), what you end up with is something that *should* behave like:
result = []
for thing in iterable:
    result.append((thing[0], thing[1], ..., thing[n]))
Or to put it another way:
[*item for item in [[1, 2, 3], [4, 5, 6]]]
[(1, 2, 3), (4, 5, 6)]
Is a lot more consistent with what *thing and list comprehensions already mean in Python than for the answer to be [1, 2, 3, 4, 5, 6]. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Tue, Feb 10, 2015 at 1:31 AM, Donald Stufft
On Feb 10, 2015, at 12:55 AM, Greg Ewing
wrote: However [*item for item in ranges] is mapped more to something like
Donald Stufft wrote: this:
result = []
for item in iterable:
    result.extend(*item)
Actually it would be
result.extend(item)
But if that bothers you, you could consider the expansion to be
result = []
for item in iterable:
    for item1 in item:
        result.append(item1)
In other words, the * is shorthand for an extra level of looping.
and it acts differently than if you just did *item outside of a list comprehension.
Not sure what you mean by that. It seems closely analogous to the use of * in a function call to me.
Putting aside the proposed syntax the current two statements are currently true:
1. The expression *item is roughly the same thing as (item[0], item[1], ..., item[n]) 2. The expression [something for thing in iterable] is roughly the same as: result = [] for thing in iterable: result.append(something) This is a single loop where an expression is run for each iteration of the loop, and the return result of that expression is appended to the result.
If you combine these two things, the "something" in #2 becomes *item, and since *item is roughly the same thing as (item[0], item[1], ..., item[n]), what you end up with is something that *should* behave like:
result = []
for thing in iterable:
    result.append((thing[0], thing[1], ..., thing[n]))
That is what [list(something) for thing in iterable] does. The iterable unpacking rule might have been better explained as follows: ---- On the left of an assignment, * is a packing, e.g. a, b, *cs = iterable. On the right of an assignment, * is an unpacking, e.g. xs = a, b, *cs. ---- In either case, the elements of "cs" are treated the same as a and b. Do you agree that [*x for x in [as, bs, cs]] === [*as, *bs, *cs]? Then the elements of *as are unpacked into the list, the same way that those elements are currently unpacked in a regular function call: f(*as) === f(as[0], as[1], ...). Similarly, [*as, *bs, *cs] === [as[0], as[1], ..., bs[0], bs[1], ..., cs[0], cs[1], ...]. The rule for function calls is analogous: ---- In a function definition, * is a packing, collecting extra positional arguments in a tuple, e.g. def f(*args). In a function call, * is an unpacking, expanding an iterable to populate positional arguments, e.g. f(*args). ---- PEP 448 proposes allowing arbitrary numbers of unpackings in arbitrary positions. I will be updating the PEP this week if I can find the time.
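Neil's packing/unpacking rules can be demonstrated with syntax that already works today (the last line needs the PEP 448 display unpacking, Python 3.5+); `f` here is a hypothetical function used only for illustration:

```python
# Packing on the left of an assignment:
a, b, *cs = [1, 2, 3, 4]

# Unpacking on the right of an assignment:
xs = (a, b, *cs)

# Function-call analogue: * packs in a definition, unpacks in a call.
def f(*args):
    return args

print((a, b, cs))       # (1, 2, [3, 4])
print(xs)               # (1, 2, 3, 4)
print(f(*cs))           # (3, 4)
print([*[1, 2], *[3]])  # PEP 448: multiple unpackings in one display
```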
Or to put it another way:
[*item for item in [[1, 2, 3], [4, 5, 6]]]
[(1, 2, 3), (4, 5, 6)]
Is a lot more consistent with what *thing and list comprehensions already mean in Python than for the answer to be [1, 2, 3, 4, 5, 6]. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
_______________________________________________ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/mistersheik%40gmail.com
Donald Stufft wrote:
1. The statement *item is roughly the same thing as (item[0], item[1], item[n])
No, it's not -- that would make it equivalent to tuple(item), which is not what it means in any of its existing usages. What it *is* roughly equivalent to is item[0], item[1], item[n] i.e. *without* the parens, whatever that means in the context concerned. In the context of a function call, it has the effect of splicing the sequence in as if you had written each item out as a separate expression. You do have a valid objection insofar as this currently has no meaning at all in a comprehension, i.e. this is a syntax error: [item[0], item[1], item[n] for item in items] So we would be giving a meaning to something that doesn't currently have a meaning, rather than changing an existing meaning, if you see what I mean. -- Greg
participants (19)
- Antoine Pitrou
- Barry Warsaw
- Benjamin Peterson
- Donald Stufft
- Eli Bendersky
- Ethan Furman
- Georg Brandl
- Greg Ewing
- Guido van Rossum
- Ian Lee
- John Wong
- Larry Hastings
- Mark Lawrence
- Neil Girdhar
- Nick Coghlan
- Paul Moore
- Serhiy Storchaka
- Terry Reedy
- Victor Stinner