Here, I am proposing a change to Python type annotations.
Python was born to be a simple and elegant language. However, recent changes have again introduced new incompatibilities into Python.
PEP 484 proposes type hints that can annotate the type of each parameter. However, code written in this format cannot be run on older versions of Python.
Being able to read the data types from the code is an exciting new feature, but I am afraid it is not worth such an incompatibility.
Here I want to propose a new way of annotating in Python, as follows:
def reportAge(name, age):
    '''This is a greeting function, and some other comment...
    !str, int -> str
    '''
    return name + ' is ' + str(age)
We can put the annotation in the comment (docstring) block, using '!' or another suitable symbol to lead the annotation line.
The annotations should correspond positionally to the parameters.
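If such comment annotations were adopted, a tool could recover them from `__doc__` at runtime. A rough sketch (the helper name and parsing logic are mine, assuming the '!'-prefixed "params -> return" convention proposed above):

```python
def get_comment_annotation(func):
    """Scan func.__doc__ for a line starting with '!' and split it into
    parameter type names and a return type name (a sketch, not a full parser)."""
    for line in (func.__doc__ or '').splitlines():
        line = line.strip()
        if line.startswith('!'):
            params, _, ret = line[1:].partition('->')
            return [p.strip() for p in params.split(',')], ret.strip()
    return None

def reportAge(name, age):
    '''This is a greeting function.
    !str, int -> str
    '''
    return name + ' is ' + str(age)
```

Here `get_comment_annotation(reportAge)` would return `(['str', 'int'], 'str')`, and the source stays runnable on any Python version.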
I like programming languages in which all are expressions (including
function declarations, branching and loops) and you can use an
assignment at any point, but Python is built on other ways, and I like
Python too. PEP 572 looks like it violates several Python design principles.
Python looks like a simple language, and this is its strong side. I believe
most Python users are not professional programmers -- they are
sysadmins, scientists, hobbyists and kids -- but Python is suitable for
them because of its clear syntax and its encouragement of good programming style.
In particular, mutating and non-mutating operations are separated. The
assignment expression breaks this. There should be very good reasons for
doing this. But it looks to me that all examples for PEP 572 can be
written better without using the walrus operator.
> results = [(x, y, x/y) for x in input_data if (y := f(x)) > 0]
results = [(x, y, x/y) for x in input_data for y in [f(x)] if y > 0]
> stuff = [[y := f(x), x/y] for x in range(5)]
stuff = [[y, x/y] for x in range(5) for y in [f(x)]]
Does this idiom look unusual to you? But it is legal Python syntax, and
it is no more unusual than the new walrus operator. This idiom is not
commonly used because there is very little need for the above examples
in real code. And I'm sure that the walrus operator in comprehensions
will be very rare unless PEP 572 encourages writing complicated
comprehensions. Most users prefer to write an explicit loop.
I want to remind you that PEP 572 started from a discussion on
Python-ideas which proposed a syntax for writing the following code as a
comprehension:

smooth_signal = []
average = initial_value
for xt in signal:
    average = (1-decay)*average + decay*xt
    smooth_signal.append(average)
Using the "for ... in [...]" idiom, this can be written (if you prefer
comprehensions) as:

smooth_signal = [average
                 for average in [initial_value]
                 for x in signal
                 for average in [(1-decay)*average + decay*x]]
Try now to write this using PEP 572. The walrus operator turns out to be
less suitable for solving the original problem because it doesn't help
to initialize the initial value.
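For comparison, here is my attempt at the closest PEP 572 spelling (requires Python 3.8+; the sample values are made up for illustration). The seeding assignment still has to be a separate statement, which is exactly the limitation described above:

```python
decay = 0.5
signal = [1.0, 1.0, 1.0]
initial_value = 0.0

average = initial_value  # the walrus operator cannot express this seeding step
smooth_signal = [average := (1 - decay) * average + decay * x for x in signal]
# smooth_signal == [0.5, 0.75, 0.875]
```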
Examples from PEP 572:
> # Loop-and-a-half
> while (command := input("> ")) != "quit":
>     print("You entered:", command)
The straightforward way:

while True:
    command = input("> ")
    if command == "quit": break
    print("You entered:", command)
The clever way:
for command in iter(lambda: input("> "), "quit"):
    print("You entered:", command)
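The two-argument form of iter() calls its first argument repeatedly until the sentinel is returned. A self-contained illustration, substituting a canned sequence of replies for input() (the names here are mine):

```python
# Stand-in for interactive input: each call to next(replies) is one "line".
replies = iter(["hello", "world", "quit", "never reached"])

entered = []
for command in iter(lambda: next(replies), "quit"):
    entered.append(command)
# entered == ["hello", "world"]; the loop stops at the sentinel "quit"
```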
> # Capturing regular expression match objects
> # See, for instance, Lib/pydoc.py, which uses a multiline spelling
> # of this effect
> if match := re.search(pat, text):
>     print("Found:", match.group(0))
> # The same syntax chains nicely into 'elif' statements, unlike the
> # equivalent using assignment statements.
> elif match := re.search(otherpat, text):
>     print("Alternate found:", match.group(0))
> elif match := re.search(third, text):
>     print("Fallback found:", match.group(0))
It may be more efficient to use a single regular expression which
consists of multiple or-ed patterns marked as different groups. For
example see the cute regex-based tokenizer in gettext.py:
> _token_pattern = re.compile(r"""
>     (?P<WHITESPACES>[ \t]+)                    | # spaces and horizontal tabs
>     (?P<NUMBER>[0-9]+\b)                       | # decimal integer
>     (?P<NAME>n\b)                              | # only n is allowed
>     (?P<PARENTHESIS>[()])                      |
>     (?P<OPERATOR>[-*/%+?:]|[><!]=?|==|&&|\|\|) | # !, *, /, %, +, -, <, >,
>                                                  # <=, >=, ==, !=, &&, ||, ? :
>                                                  # unary and bitwise ops
>                                                  # not allowed
>     (?P<INVALID>\w+|.)                           # invalid token
> """, re.VERBOSE|re.DOTALL)
> def _tokenize(plural):
>     for mo in re.finditer(_token_pattern, plural):
>         kind = mo.lastgroup
>         if kind == 'WHITESPACES':
>             continue
>         value = mo.group(kind)
>         if kind == 'INVALID':
>             raise ValueError('invalid token in plural form: %s' % value)
>         yield value
>     yield ''
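The or-ed-groups technique is easy to try at a smaller scale; here is a toy tokenizer of my own in the same style (not the gettext.py grammar):

```python
import re

# Each alternative is a named group; mo.lastgroup reports which one matched.
_pat = re.compile(r"(?P<WS>\s+)|(?P<NUM>\d+)|(?P<OP>[-+*/()])|(?P<INVALID>.)")

def tokenize(text):
    for mo in _pat.finditer(text):
        kind = mo.lastgroup
        if kind == 'WS':
            continue               # skip whitespace tokens
        if kind == 'INVALID':
            raise ValueError('invalid token: %s' % mo.group())
        yield mo.group()

# list(tokenize("2 + 35*(4)")) == ['2', '+', '35', '*', '(', '4', ')']
```

One pass over the string classifies every token, so no sequence of separate re.search() calls is needed.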
I have not found any code similar to the PEP 572 example in pydoc.py. It
has different code:
> pattern = re.compile(r'\b((http|ftp)://\S+[\w/]|'
> r'RFC[- ]?(\d+)|'
> r'PEP[- ]?(\d+)|'
> start, end = match.span()
> all, scheme, rfc, pep, selfdot, name = match.groups()
> if scheme:
>     url = escape(all).replace('"', '&quot;')
>     results.append('<a href="%s">%s</a>' % (url, url))
> elif rfc:
>     url = 'http://www.rfc-editor.org/rfc/rfc%d.txt' % int(rfc)
>     results.append('<a href="%s">%s</a>' % (url, escape(all)))
> elif pep:
It doesn't look like a sequence of re.search() calls. It is clearer and
more efficient, and using the assignment expression would not make it better.
> # Reading socket data until an empty string is returned
> while data := sock.recv():
>     print("Received data:", data)

for data in iter(sock.recv, b''):
    print("Received data:", data)
> if pid := os.fork():
>     # Parent code
> else:
>     # Child code

pid = os.fork()
if pid:
    # Parent code
else:
    # Child code
It looks to me that there is no use case for PEP 572. It just makes the
language more complex.
On 2018-07-05 13:32, INADA Naoki wrote:
> Core devs interested in this area is limited resource.
I know and unfortunately there is nothing that I can do about that. It
would be a pity that PEP 580 (or a variant like PEP 576) is not accepted
simply because no core developer cares enough.
> As far as I understand, there are some important topics to discuss.
> a. Low level calling convention, including argument parsing API.
> b. New API for calling objects without argument tuple and dict.
> c. How more types can support FASTCALL, LOAD_METHOD and CALL_METHOD.
> d. How to reorganize existing builtin types, without breaking stable ABI.
Right, that's why I wanted PEP 580 to be only about (c) and nothing
else. I made the mistake in PEP 575 of also involving (d).
I still don't understand why we must finish (a) before we can even start
discussing the others.
> Reference implementation helps discussion.
METH_FASTCALL and argument parsing for METH_FASTCALL are already
implemented in CPython -- not as documented public functions, but the
implementation is there. And PEP 580 also has a reference implementation.
On 2018-06-28, 00:58 GMT, Ned Deily wrote:
> On behalf of the Python development community and the Python 3.7 release
> team, we are pleased to announce the availability of Python 3.7.0.
I am working on updating openSUSE packages to Python 3.7, but
I have hit quite a large number of failing tests (the test suite
obviously passed with 3.6), see
(click on the red "failed" label to get logs). I fell into
a bout of depression, only to discover that we are not alone in
this problem ... Debian doesn't seem to do much better
https://is.gd/HKBU4j. Surprisingly, Fedora seems to pass the
testsuite https://is.gd/E0KA53; interesting, I will have to
investigate which of their many patches did the trick.
Does anybody have any idea what's going on, please? Did anybody on
the python.org side run the test suite on Linux?
https://matej.ceplovi.cz/blog/, Jabber: mcepl(a)ceplovi.cz
GPG Finger: 3C76 A027 CA45 AD70 98B5 BC1D 7920 5802 880B C9D8
The difference between death and taxes is death doesn't get worse
every time Congress meets
-- Will Rogers
On 2018-07-05 14:20, INADA Naoki wrote:
> like you ignored my advice about creating realistic benchmark for
> calling 3rd party callable before talking about performance...
I didn't really want to ignore that, I just didn't know what to do.
As far as I can tell, the official Python benchmark suite is
pyperformance. However, that deals only with pure Python code, not with
the C API. So those benchmarks are not relevant to PEP 580.
On 2018-07-06 06:07, INADA Naoki wrote:
> Maybe, one way to improve METH_FASTCALL | METH_KEYWORDS can be this.
> kwds can be either tuple or dict.
But that would be just pushing the complexity down to the callee. I'd
rather have a simpler protocol at the expense of slightly more
complexity elsewhere.
I also don't see the point: the calls where performance truly matters
typically don't use keyword arguments anyway (independently of whether
the called function accepts them).
Moreover, the large majority of functions take normal keyword arguments,
not **kwargs. When parsing those arguments, the dict would need to be
unpacked anyway. So you don't gain much by forcing the callee to handle
that instead of doing it in PyCCall_FASTCALL().
Functions just passing through **kwargs (say, functools.lru_cache) don't
need a dict either: they can implement the C call protocol of PEP 580
with METH_FASTCALL and then call the wrapped function also using FASTCALL.
So really the only remaining case is when the callee wants to do
something with **kwargs as dict. But I find it hard to come up with a
natural use case for that, especially one where performance matters. And
even then, that function could just use METH_VARARGS.
So I don't see any compelling reason to allow a dict in METH_FASTCALL.
Now that it's a done deal, I am closely reviewing the semantics section
of PEP 572. (I had expected one more posting of the final PEP, but it
seems the acceptance came somewhere in a thread that was already muted.)
Since there has been no final posting that I'm aware of, I'm referring
to https://www.python.org/dev/peps/pep-0572/ as of about an hour before
posting this (hopefully it doesn't take me that long).
To be clear, I am *only* looking at the "Syntax and semantics" section.
So if something has been written down elsewhere in the PEP, please take
my questions as a request to have it referenced from this section. I
also gave up on the discussion by the third python-dev thread - if there
were things decided that you think I'm stupid for not knowing, it
probably means they never made it into the PEP.
= Syntax and Semantics
Could we include the changes necessary to
https://docs.python.org/3/reference/grammar.html in order to specify
where these expressions are valid? And ideally check that they work.
This may expose new exceptional cases, but will also clarify some of the
existing ones, especially for those of us who write Python parsers.
== Exceptional cases
Are the cases in the "Exceptional cases" section supposed to raise
SyntaxError on compilation? That seems obvious, but no harm in stating
it. (FWIW, I'd vote to ban the "bad" cases in style guides or by forcing
parentheses, rather than syntactically. And for anyone who wonders why
that's different from my position on slashes in f-strings, it's because
I don't think we can ever resolve these cases but I hope that one day we
can fix f-string slashes :) )
== Scope of the target
The PEP uses the phrase "an assignment expression occurs in a
comprehension" - what does this mean? Does it occur when/where it is
compiled, instantiated, or executed? This is important because where it
occurs determines which scope will be modified. For sanity's sake, I want
to assume that it means compiled, but now what happens when that scope
no longer exists by the time the generator is executed?

>>> def f():
...     return (a := i for i in range(5))
>>> list(f())
[0, 1, 2, 3, 4]  # or a new error because the scope has gone?
I'll push back real hard on doing the assignment in the scope where the
generator is executed:
>>> def do_secure_op(name, numbers):
...     authorised = check_authorised(name)
...     if not all(numbers):
...         raise ValueError()
...     if not authorised:
...         raise SecurityError()
...     print('You made it!')
>>> do_secure_op('whatever', (authorised := i for i in [1, 2, 3]))
You made it!
NameError: name 'authorised' is not defined
From the any()/all() examples, it seems clear that the target scope for
the assignment has to be referenced from the generator scope (but not
for other comprehension types, which can simply do one transfer of the
assigned name after fully evaluating all the contents). Will this
reference keep the frame object alive for as long as the generator
exists? Can it be a weak reference? Are assignments just going to be
silently ignored when the frame they should assign to is gone? I'd like
to see these clarified in the main text.
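For what it's worth, the semantics that eventually shipped in 3.8 bind the target in the scope *containing* the comprehension, which is easy to check for list comprehensions at least (this sketch assumes a 3.8+ interpreter):

```python
def f():
    total = 0
    # The walrus target 'total' binds in f's scope, not the comprehension's,
    # so its final value is visible after the comprehension finishes.
    running = [total := total + x for x in range(5)]
    return running, total
```

Here f() returns ([0, 1, 3, 6, 10], 10). The generator case (where the enclosing frame may already be gone) is exactly the part that needed clarification.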
When an assignment is "expressly invalid" due to avoiding "edge cases",
does this mean we should raise a SyntaxError? Or a runtime error? I'm
not sure how easily these can be detected by our current compiler (or
runtime, for that matter), but in the other tools that I work on it
isn't going to be a trivial check.
Also, I'm not clear at all on why [i := i+1 for i in range(5)] is a
problem? Similarly for the other examples here. There's nothing wrong
with `for i in range(5): i = i+1`, so why forbid this?
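For reference, the PEP as accepted does reject this case at compile time; on a 3.8+ interpreter that can be verified without writing the expression literally:

```python
# Rebinding the comprehension iteration variable with a walrus is a
# SyntaxError on 3.8+ ("assignment expression cannot rebind comprehension
# iteration variable").
try:
    compile("[i := i + 1 for i in range(5)]", "<example>", "eval")
    rejected = False
except SyntaxError:
    rejected = True
```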
== Relative precedence
"may be used directly in a positional function call argument" - why not
use the same syntax as generator expressions? Require parentheses unless
it's the only argument. It seems like that's still got a TODO on it from
one of the examples, so consider this a vote for matching the generator
expression rules.
== Differences between assignment expressions and assignment statements
I'm pretty sure the equivalent of "x = y = z = 0" would be "z := (y :=
(x := 0))". Not that it matters when there are no side-effects of
assignment (unless we decide to raise at runtime for invalid
assignments), but it could become a point of confusion for people in the
future to see it listed like this. Assignment expressions always
evaluate from innermost to outermost.
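For the record, the fully parenthesized chain is legal and does bind all three names, which can be checked directly on 3.8+:

```python
# Innermost assignment happens first: x, then y, then z.
(z := (y := (x := 0)))
assert (x, y, z) == (0, 0, 0)
```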
Grammatically, "Single assignment targets *other than* NAME are not
supported" would be more precise. And for specification's sake, does
"not supported" mean "is a syntax error"?
The "equivalent needs extra parentheses" examples add two sets of extra
parentheses. Are both required? Or just the innermost set?
Apologies for the lack of context. I've gone back and added the section
headings as I read through this section.
Victor Stinner in "Assignment expression and coding style: the while
True case" and others have brought to attention
that the AE (assignment expression) as currently written doesn't support
all the capabilities of the assignment statement, namely:
* tuple unpacking
* augmented assignment
(I titled the letter "all capabilities" 'cuz I may've missed something.)
Personally, I'm for the unpacking but against augmentation 'cuz it has
proven incomprehensible as per the 5 Jul 2018 04:22:36 +0300 letter.
Recently on Python-Dev:
On 2018-07-03 15:24, Chris Barker wrote:
> On Tue, Jul 3, 2018 at 2:51 PM, Chris Angelico <rosuav(a)gmail.com
> On Wed, Jul 4, 2018 at 7:37 AM, Serhiy Storchaka <storchaka(a)gmail.com>
> > I believe most Python users are not
> > professional programmers -- they are sysadmins, scientists, hobbyists
> > and kids --
> 
> fair enough, but I think we all agree that *many*, if not most, Python users
> are "not professional programmers". While on the other hand everyone involved
> in discussion on python-dev and python-ideas is a serious (If not
> "professional") programmer.
Python Audience - wants clarity:
Not sure I'd say that most users are not professionals, but one major strength
of Python is its suitability as a teaching language, which enlarges the
community every year.
Additionally, I have noticed a dichotomy between prolific "C programmers" who've
supported this PEP and many Python programmers who don't want it. While C-devs
use this construct all the time, their stereotypical Python counterpart is often
looking for simplicity and clarity instead. That's why we're here, folks.
Value - good:
Several use cases are handled well by PEP 572. However, it has been noted that
complexity must be capped voluntarily relatively early -- or the cure soon
becomes worse than the disease.
Frequency - not much:
The use cases for assignment-expressions are not exceedingly common, coming up
here and there. Their omission has been a very mild burden and we've done
without for a quarter century.
Believe the authors agreed that it won't be used too often and won't typically
be mis- or overused.
New Syntax - a high burden:
For years I've read on these lists that syntax changes must clear a high
threshold of the (Value*Frequency)/Burden (or VF/B) ratio.
Likewise, a few folks have compared PEP 572 to 498 (f-strings) which some former
detractors have come to appreciate. Don't believe this comparison applies well,
since string interpolation is useful a hundred times a day, more concise, clear,
and runs faster than previous functionality. Threshold was easily cleared there.
An incongruous/partially redundant new syntax to perform existing functionality
more concisely feels too low on the VF/B ratio IMHO. Value is good though
mixed, frequency is low, and burden is higher than we'd like, resulting in "meh"
and binary reactions.
Indeed many modern languages omit this feature specifically in an effort to
reduce complexity, ironically citing the success of Python in support. Less is
more.
Fortunately there is a compromise design that is chosen often these days in new
languages---restricting these assignments to if/while (potentially comp/gen)
statements. We can also reuse the "EXPR as NAME" syntax, which already
exists elsewhere in the language and is widely enjoyed.
This compromise design:
1 Handles the most common cases (of a group of infrequent cases)
0 Doesn't handle more obscure cases.
1 No new syntax (through reuse)
1 Looks Pythonic as hell
1 Difficult to misuse, complexity capped
Whereas the full PEP 572:
1 Handles the most common cases (of a group of infrequent cases)
1 Handles even more obscure cases.
0 New syntax
0 Denser look: more colons, parens, expression last
0 Some potential for misuse, complexity uncapped
Thanks for reading, happy independence,