I made some minor updates to PEP 580 (PEP editors: please merge
https://github.com/python/peps/pull/741) and its reference implementation:
- Added a new introductory section explaining the basic idea.
- The C protocol no longer deals with __name__; a __name__ attribute is
required but the protocol does not deal with its implementation.
- The PEP no longer deals with profiling. This means that profiling only
works for actual instances of builtin_function_or_method and
method_descriptor. Profiling arbitrary callables would be nice, but that
is deferred to a future PEP.
The last two items are meant to simplify the PEP (although this is
debatable since "simple" is very subjective).
On 22 June 2018 at 02:26, Antoine Pitrou <solipsis(a)pitrou.net> wrote:
> Indeed. But, for a syntax addition such as PEP 572, I think it would be
> a good idea to ask the opinion of teaching/education specialists.
> As far as I'm concerned, if teachers and/or education specialists were
> to say PEP 572 is not a problem, my position would shift from negative
> towards neutral.
I asked a handful of folks at the Education Summit the next day about it:
* for the basic notion of allowing expression level name binding using
the "NAME := EXPR" notation, the reactions ranged from mildly negative
(I read it as only a "-0" rather than a "-1") to outright positive.
* for the reactions to my description of the currently proposed parent
local scoping behaviour in comprehensions, I'd use the word
"horrified", and feel I wasn't overstating the response :)
While I try to account for the fact that I implemented the current
comprehension semantics for the 3.x series, and am hence biased
towards considering them the now obvious interpretation, it's also the
case that generator expressions have worked like nested functions
since they were introduced in Python 2.4 (more than 13 years ago now),
and comprehensions have worked the same way as generator expressions
since Python 3.0 (which has its 10th birthday coming up in December).
This means that I take any claims that the legacy Python 2.x
interpretation of comprehension behaviour is intuitively obvious with
an enormous grain of salt - for the better part of a decade now, every
tool at a Python 3 user's disposal (the fact that the iteration
variable is hidden from the current scope, reading the language
reference, printing out locals(), using the dis module, stepping
through code in a debugger, writing their own tracing function, and
even observing the quirky interaction with class scopes) will have
nudged them towards the "it's a hidden nested function" interpretation
of expected comprehension behaviour.
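For instance, the very first of those checks already tells the story
(plain Python 3, nothing hypothetical here):

    x = "outer"
    squares = [x * x for x in range(3)]
    print(squares)  # [0, 1, 4]
    print(x)        # "outer" on Python 3; this printed 2 on Python 2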
Acquiring the old mental model for the way comprehensions work pretty
much requires a developer to have started with Python 2.x themselves
(perhaps even before comprehensions and lexical closures were part of
the language), or else have been taught the Python 2 comprehension
model by someone else - there's nothing in Python 3's behaviour to
encourage that point of view, and plenty of
functional-language-inspired documentation to instead encourage folks
to view comprehensions as tightly encapsulated declarative container
construction syntax.
I'm currently working on a concept proposal at
https://github.com/ncoghlan/peps/pull/2 that's much closer to PEP 572
than any of my previous `given` based suggestions: for already
declared locals, it devolves to being the same as PEP 572 (except that
expressions are allowed as top level statements), but it prohibits
assigning to any name that doesn't already have a defined scope,
relying instead on
a new `given` clause on various constructs that allows new target
declarations to be introduced into the current scope (such that "if
x:= f():" implies "x" is already defined as a target somewhere else in
the current scope, while "if x := f() given x:" potentially introduces
"x" as a new local target the same way a regular assignment statement
One of the nicer features of the draft proposal is that if all you
want to do is export the iteration variable from a comprehension, you
don't need to use an assignment expression at all: you can just append
"... given global x" or "... given nonlocal x" and export the
iteration variable directly to the desired outer scope, the same way
you can in the fully spelled out nested function equivalent.
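For reference, here's a minimal sketch of that fully spelled out
equivalent as it already works today (the helper name "_listcomp" is
purely illustrative):

    def demo(data):
        x = None
        def _listcomp(it):
            nonlocal x        # what "... given nonlocal x" would spell inline
            result = []
            for x in it:
                result.append(x * x)
            return result
        squares = _listcomp(iter(data))
        return squares, x     # x holds the final iteration value

    print(demo(range(3)))     # ([0, 1, 4], 2)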
From https://docs.python.org/3.0/reference/expressions.html#displays-for-lists-s…:
'Note that the comprehension is executed in a separate scope, so names
assigned to in the target list don’t “leak” in the enclosing scope.'
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
As anticipated, after a final round of feedback I am hereby accepting PEP
572, Assignment Expressions: https://www.python.org/dev/peps/pep-0572/
Thanks to everyone who participated in the discussion or sent a PR.
Below is a list of changes since the last post
(https://mail.python.org/pipermail/python-dev/2018-July/154557.html) --
they are mostly cosmetic so I won't post the doc again, but if you want
to go over them in detail, here's the history of the file on GitHub:
https://github.com/python/peps/commits/master/pep-0572.rst, and here's a
diff since the last posting:
https://github.com/python/peps/compare/26e6f61f...master (sadly it's
repo-wide -- you can click on Files changed and then navigate to
pep-0572.rst).
- Tweaked the example at line 95-100 to use result = ... rather than return
... so as to make a different rewrite less feasible
- Replaced the weak "2-arg iter" example with Giampaolo Rodola's while
chunk := file.read(8192): process(chunk) (see the runnable sketch after
this list)
- *Added prohibition of unparenthesized assignment expressions in
annotations and lambdas*
- Clarified that TargetScopeError is a *new* subclass of SyntaxError
- Clarified the text forbidding assignment to comprehension loop control
variables
- Clarified that the prohibition on := with annotations applies to
*inline* annotations (i.e. they cannot be syntactically combined in the
same expression)
- Added conditional expressions to the things := binds less tightly than
- Dropped section "This could be used to create ugly code"
- Clarified the example in Appendix C
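For reference, here's a runnable version of the read-loop example
mentioned above (Python 3.8+; the file name and the body of process()
are placeholders):

    def process(chunk):
        print(len(chunk))  # placeholder processing step

    with open("data.bin", "rb") as file:
        # Bind and test the chunk in one step, avoiding the old
        # "while True: ... if not chunk: break" dance.
        while chunk := file.read(8192):
            process(chunk)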
Now on to the implementation work! (Maybe I'll sprint on this at the
core-dev sprint in September.)
--Guido van Rossum (python.org/~guido)
I'm working on making pyc files stable, via stabilizing marshal.dumps().
Sadly, it makes marshal.dumps() 40% slower.
Luckily, this overhead is small (only 4%) for the dumps(compile(source)) case.
So my question is: may I remove the unstable but faster code?
Or should I make this optional, so that we maintain two complex code paths?
If so, should this option be enabled by default or not?
For example, xmlrpc uses marshal. But xmlrpc has significant overhead
other than marshaling, much like the dumps(compile(source)) case. So I
expect marshal.dumps() performance is not critical for it either.
Is there any real application for which marshal.dumps() performance is critical?
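For anyone who wants to measure their own workload, here is a rough
sketch of the comparison I mean; the exact numbers will of course vary:

    import marshal
    import timeit

    code = compile("def f(x):\n    return x * 2\n", "<demo>", "exec")
    data = [list(range(100)), {"key": "value"}, 3.14] * 100

    # Plain-data case (where a ~40% slowdown would show up) versus the
    # dumps(compile(source)) case (where the overhead was only ~4%).
    print(timeit.timeit(lambda: marshal.dumps(data), number=10000))
    print(timeit.timeit(lambda: marshal.dumps(code), number=10000))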
INADA Naoki <songofacandy(a)gmail.com>
I wish I had more time to make my case, but with the PEP 572 pronouncement
imminent, let me make an attempt to save Python from having two assignment
operators.
I've re-read the PEP, and honestly I am warming up to the idea of allowing
a limited form of assignment in expressions. It looks like in the current
form, the PEP supports only well-motivated cases where the return value of
the assignment expression is non-controversial. It also appears that there
are no cases where = can be substituted for := and not cause a syntax
error. This means that ":" in ":=" is strictly redundant.
Interestingly, Python already has a precedent for using redundant ":" - the
line-ending ":" in various statements is redundant, but it is helpful both
when reading and writing the code.
On the other hand, ':' in ':=' looks like an unnecessary embellishment.
When we use ':=', we already know that we are inside an expression and
being inside an expression is an obvious context for the reader, the writer
and the interpreter.
I also believe that allowing a limited form of assignment in expressions is a
simpler story to tell to the existing users than an introduction of a new
operator that is somewhat like '=', but cannot be used where you currently
use '=' and only in places where '=' is currently prohibited.
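To make that concrete, here are two spots where := would be legal and
substituting = is already a syntax error (using the PEP's proposed
syntax):

    if (n := 42) > 0:      # "if (n = 42) > 0:" is a SyntaxError
        print(n)

    halved = [h for x in [1, 2, 3] if (h := x / 2) > 0.5]
    print(halved)          # [1.0, 1.5]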
On 2018-07-11 10:50, Victor Stinner wrote:
> As you wrote, the
> cost of function calls is unlikely the bottleneck of an application.
With that idea, METH_FASTCALL is not needed either. I still find it very
strange that nobody seems to question all the crazy existing
optimizations for function calls in CPython, yet claims at the same
time that those are just stupid micro-optimizations which are surely
not important for real applications.
Anyway, I'm thinking about real-life benchmarks but that's quite hard.
One issue is that PEP 580 by itself does not make existing code faster, but
allows faster code to be written in the future. A second issue is that
Cython (my main application) already contains optimizations for
Cython-to-Cython calls. So, to see the actual impact of PEP 580, I
should disable those.
On 2018-07-11 10:27, Antoine Pitrou wrote:
> I agree PEP 580 is extremely complicated and it's not obvious what the
> maintenance burden will be in the long term.
But the status quo is also very complicated! If somebody would write a
PEP describing the existing implementation of builtin_function_or_method
and method_descriptor with all its optimizations, probably you would
also find it complicated.
Have you actually looked at the existing implementation in
Python/ceval.c and Objects/call.c for calling objects? One of the things
that PEP 580 offers is replacing 5 (yes, five!) functions like
_PyMethodDef_RawFastCallDict with a single function, PyCCall_FASTCALL.
Anyway, it would help if you could say why you (and others) think that
it's complicated. Sure, there are many details to be considered (for
example, the section about Descriptor behavior), but those are not
essential to understand what the PEP does. I wrote the PEP as a complete
specification, giving full details. Maybe I should add a section just
explaining the core ideas without details?
On 2018-07-11 00:48, Victor Stinner wrote:
> About your benchmark results:
> "FASTCALL unbound method(obj, 1, two=2): Mean +- std dev: 42.6 ns +- 29.6 ns"
> That's a very big standard deviation :-(
Yes, I know. My CPU was overheating and was slowed down. But that seemed
to have happened for a small number of benchmarks only.
But given that you find these benchmarks stupid anyway, I assume that
you don't really care.
I've been using asyncio a lot lately and have encountered this problem
several times. Imagine you want to do a lot of queries against a database,
spawning 10000 tasks in parallel will probably cause a lot of them to fail.
What you need is a task pool of sorts, to limit concurrency and do only 20
requests in parallel.
If we were doing this synchronously, we wouldn't spawn 10000 threads using
10000 connections, we would use a thread pool with a limited number of
threads and submit the jobs into its queue.
To me, tasks are (somewhat) logically analogous to threads. The solution
that first comes to mind is to create an AsyncioTaskExecutor with a
submit(coro, *args, **kwargs) method. Put a reference to the coroutine and
its arguments into an asyncio queue. Spawn n tasks pulling from this queue
and awaiting the coroutines.
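A minimal sketch of what I have in mind (the class and method names are
illustrative only, not an existing API):

    import asyncio

    class AsyncioTaskExecutor:
        def __init__(self, max_workers=20, *, loop=None):
            self._loop = loop or asyncio.get_event_loop()
            self._queue = asyncio.Queue()
            self._workers = [self._loop.create_task(self._worker())
                             for _ in range(max_workers)]

        async def _worker(self):
            while True:
                corofunc, args, kwargs, fut = await self._queue.get()
                try:
                    fut.set_result(await corofunc(*args, **kwargs))
                except Exception as exc:
                    fut.set_exception(exc)
                finally:
                    self._queue.task_done()

        def submit(self, corofunc, *args, **kwargs):
            # Queue the coroutine function and its arguments; a future
            # stands in for the eventual result.
            fut = self._loop.create_future()
            self._queue.put_nowait((corofunc, args, kwargs, fut))
            return fut

        async def join(self):
            await self._queue.join()
            for w in self._workers:
                w.cancel()

    async def main():
        pool = AsyncioTaskExecutor(max_workers=20)
        futs = [pool.submit(asyncio.sleep, 0.01, result=i) for i in range(100)]
        print(await asyncio.gather(*futs))   # at most 20 run at a time
        await pool.join()

    asyncio.get_event_loop().run_until_complete(main())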
It'd probably be useful to have this in the stdlib at some point.
> Date: Wed, 13 Jun 2018 22:45:22 +0200
> From: Michel Desmoulin <desmoulinmichel(a)gmail.com>
> To: python-dev(a)python.org
> Subject: [Python-Dev] A more flexible task creation
> I was working on a concurrency limiting code for asyncio, so the user
> may submit as many tasks as one wants, but only a max number of tasks
> will be submitted to the event loop at the same time.
> However, I wanted that passing an awaitable would always return a task,
> no matter if the task was currently scheduled or not. The goal is that
> you could add done callbacks to it, decide to force-schedule it, etc.
> I dug in the asyncio.Task code, and encountered:
> def __init__(self, coro, *, loop=None):
> I was surprised to see that instantiating a Task class has any side
> effect at all, let alone 2, and one of them being to be immediately
> scheduled for execution.
> I couldn't find a clean way to do what I wanted: either you
> loop.create_task() and you get a task but it runs, or you don't run
> anything, but you don't get a nice task object to hold on to.
> I tried several alternatives, like returning a future, and binding the
> future awaiting to the submission of a task, but that was complicated
> code that duplicated a lot of things.
> I tried creating a custom task, but it was even harder: setting a custom
> event policy to provide a custom event loop with my own create_task()
> accepting parameters. That's a lot to do just to provide a parameter to
> Task, especially if you already use a custom event loop (e.g: uvloop). I
> was expecting to have to create a task factory only, but task factories
> can't get any additional parameters from create_task().
> Additionally I can't use ensure_future(), as it doesn't allow passing
> any parameters to the underlying Task, so if I want to accept any
> awaitable in my signature, I need to provide my own custom ensure_future().
> All those implementations access a lot of _private_api, and do other
> shady things that linters hate; plus they are fragile at best. What's
> more, Task being rewritten in C prevents things like setting self._coro,
> so we can only inherit from the pure Python slow version.
> In the end, I can't even await the lazy task, because it blocks the
> entire program.
> Hence I have 2 independent, albeit related, proposals:
> - Allow Task to be created but not scheduled for execution, and add a
> parameter to ensure_future() and create_task() to control this. Awaiting
> such a task would just behave like asyncio.sleep(0) until it is scheduled
> for execution.
> - Add a parameter to ensure_future() and create_task() named "kwargs"
> that accepts a mapping and will be passed as **kwargs to the underlying
> created Task.
> I insist on the fact that the 2 proposals are independent, so please
> don't reject both if you don't like one or the other. Passing a
> parameter to the underlying custom Task is still of value even without
> the unscheduled instantiation, and vice versa.
> Also, if somebody has any idea on how to make a LazyTask that we can
> await on without blocking everything, I'll take it.