Attached is a draft PEP on adding + and - operators to dict for
discussion.
This should probably go here:
https://github.com/python/peps
but due to technical difficulties at my end, I'm very limited in what I
can do on GitHub (at least for now). If there's anyone who would like to
co-author and/or help with the process, that would be appreciated.
--
Steven

I'd like to suggest what I think would be a simple addition to `def` and
`class` blocks. I don't know if calling those "Assignment Blocks" is
accurate, but I just mean to refer to block syntaxes that assign to a name.
Anyway, I propose a combined return-def structure, and optionally also
allowing a return-class version. Omitting the name would be allowable, as
well.
This would only apply to a `def` or `class` statement made as the last part
of the function body, of course.
def ignore_exc(exc_type):
    return def (func):
        @wraps(func)
        return def (*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exc_type:
                pass
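For comparison, here is the same decorator written with today's syntax, where each inner function must be named and returned explicitly (a sketch; the names `decorator` and `wrapper` are mine):

```python
from functools import wraps

def ignore_exc(exc_type):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exc_type:
                pass
        return wrapper
    return decorator

@ignore_exc(KeyError)
def lookup(d, key):
    return d[key]
```

The proposal would eliminate the `decorator`/`wrapper` names and the trailing `return` statements.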
Thanks for considering and for any comments, thoughts, or feedback on the
idea!

It came to my attention that in the original PEP, True and False are said to
be singletons (https://www.python.org/dev/peps/pep-0285/), but this is not
stated in the Data Model
(https://docs.python.org/3/reference/datamodel.html).
This came to my attention through code that wants to restrict the valid
values stored under a dict key:
if settings[MY_KEY] is True:
...
If True and False are singletons in the spec (and not only in the CPython
implementation), it should be prominent and well known.
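A quick sketch of why this matters in practice: truthiness and identity with the True singleton are different tests (hypothetical `settings` dict):

```python
settings = {"verbose": 1}  # truthy, but not the bool object True

assert settings["verbose"]              # truthiness test: passes
assert settings["verbose"] is not True  # identity test: 1 is not True

settings["verbose"] = True
assert settings["verbose"] is True      # relies on True being a singleton
```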
Cheers,
--
Juancarlo *Añez*

There's been a lot of discussion about an operator to merge two dicts. I
participated in the beginning but quickly felt overwhelmed by the endless
repetition, so I muted most of the threads.
But I have been thinking about the reason (some) people like operators, and
a discussion I had with my mentor Lambert Meertens over 30 years ago came
to mind.
For mathematicians, operators are essential to how they think. Take a
simple operation like adding two numbers, and try exploring some of its
behavior.
add(x, y) == add(y, x) (1)
Equation (1) expresses the law that addition is commutative. It's usually
written using an operator, which makes it more concise:
x + y == y + x (1a)
That feels like a minor gain.
Now consider the associative law:
add(x, add(y, z)) == add(add(x, y), z) (2)
Equation (2) can be rewritten using operators:
x + (y + z) == (x + y) + z (2a)
This is much less confusing than (2), and leads to the observation that the
parentheses are redundant, so now we can write
x + y + z (3)
without ambiguity (it doesn't matter whether the + operator binds tighter
to the left or to the right).
Many other laws are also written more easily using operators. Here's one
more example, about the identity element of addition:
add(x, 0) == add(0, x) == x (4)
compare to
x + 0 == 0 + x == x (4a)
The general idea here is that once you've learned this simple notation,
equations written using it are easier to *manipulate* than equations
written using functional notation -- it is as if our brains grasp the
operators using different brain machinery, and this is more efficient.
I think that the fact that formulas written using operators are more easily
processed *visually* has something to do with it: they engage the brain's
visual processing machinery, which operates largely subconsciously, and
tells the conscious part what it sees (e.g. "chair" rather than "pieces of
wood joined together"). The functional notation must take a different path
through our brain, which is less subconscious (it's related to reading and
understanding what you read, which is learned/trained at a much later age
than visual processing).
The power of visual processing really becomes apparent when you combine
multiple operators. For example, consider the distributive law:
mul(n, add(x, y)) == add(mul(n, x), mul(n, y)) (5)
That was painful to write, and I believe that at first you won't see the
pattern (or at least you wouldn't have immediately seen it if I hadn't
mentioned this was the distributive law).
Compare to:
n * (x + y) == n * x + n * y (5a)
Notice how this also uses relative operator priorities. Often
mathematicians write this even more compactly:
n(x+y) == nx + ny (5b)
but alas, that currently goes beyond the capacities of Python's parser.
Another very powerful aspect of operator notation is that it is convenient
to apply them to objects of different types. For example, laws (1) through
(5) also work when n, x, y and z are same-size vectors (substituting a
vector of zeros for the literal "0"), and also if x, y and z are matrices
(note that n has to be a scalar).
And you can do this with objects in many different domains. For example,
the above laws (1) through (5) apply to functions too (n being a scalar
again).
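As a concrete illustration (a minimal, hypothetical `Vec` class), operator overloading lets the same laws be checked verbatim for vectors:

```python
class Vec:
    """Minimal fixed-size vector supporting +, scalar *, and ==."""
    def __init__(self, *xs):
        self.xs = tuple(xs)
    def __add__(self, other):
        return Vec(*(a + b for a, b in zip(self.xs, other.xs)))
    def __rmul__(self, n):  # scalar * vector
        return Vec(*(n * a for a in self.xs))
    def __eq__(self, other):
        return self.xs == other.xs

x, y, z = Vec(1, 2), Vec(3, 4), Vec(5, 6)
zero = Vec(0, 0)
n = 3

assert x + y == y + x                # (1a) commutativity
assert x + (y + z) == (x + y) + z    # (2a) associativity
assert x + zero == zero + x == x     # (4a) identity element
assert n * (x + y) == n * x + n * y  # (5a) distributivity
```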
By choosing the operators wisely, mathematicians can employ their visual
brain to help them do math better: they'll discover new interesting laws
sooner because sometimes the symbols on the blackboard just jump at you and
suggest a path to an elusive proof.
Now, programming isn't exactly the same activity as math, but we all know
that Readability Counts, and this is where operator overloading in Python
comes in. Once you've internalized the simple properties which operators
tend to have, using + for string or list concatenation becomes more
readable than a pure OO notation, and (2) and (3) above explain (in part)
why that is.
Of course, it's definitely possible to overdo this -- then you get Perl.
But I think that the folks who point out "there is already a way to do
this" are missing the point that it really is easier to grasp the meaning
of this:
d = d1 + d2
compared to this:
d = d1.copy()
d.update(d2)
and it is not just a matter of fewer lines of code: the first form allows
us to use our visual processing to help us see the meaning quicker -- and
without distracting other parts of our brain (which might already be
occupied by keeping track of the meaning of d1 and d2, for example).
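Spelled out as a runnable sketch (note that dict.update mutates in place and returns None, so its result must not be assigned):

```python
d1 = {'a': 1, 'b': 2}
d2 = {'b': 3, 'c': 4}

# Two-line form: copy, then update the copy in place
d = d1.copy()
d.update(d2)

assert d == {'a': 1, 'b': 3, 'c': 4}
assert d1 == {'a': 1, 'b': 2}  # the original is untouched
```

(Python 3.9 ultimately adopted the `|` operator for this, via PEP 584.)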
Of course, everything comes at a price. You have to learn the operators,
and you have to learn their properties when applied to different object
types. (This is true in math too -- for numbers, x*y == y*x, but this
property does not apply to functions or matrices; OTOH x+y == y+x applies
to all, as does the associative law.)
"But what about performance?" I hear you ask. Good question. IMO,
readability comes first, performance second. And in the basic example (d =
d1 + d2) there is no performance loss compared to the two-line version
using update, and a clear win in readability. I can think of many
situations where performance difference is irrelevant but readability is of
utmost importance, and for me this is the default assumption (even at
Dropbox -- our most performance critical code has already been rewritten in
ugly Python or in Go). For the few cases where performance concerns are
paramount, it's easy to transform the operator version to something else --
*once you've confirmed it's needed* (probably by profiling).
--
--Guido van Rossum (python.org/~guido)

I would like to propose an enhancement to function annotations. Here is
the motivating use case:
When using callbacks I would like to declare the signature once as a type
alias and use it to type hint both the function accepting the callback and
the callbacks themselves.
Currently I can declare the function signature
CallbackType = Callable[[int, str], Any]
and use it for the function/method accepting the callback
def my_func(callback: CallbackType):
    pass
however it does not work for the callback itself, I have to repeat myself
def my_callback(a: int, b: str) -> None:
    pass
and if I change the signature in CallbackType the typechecker has to know
that my_callback will be passed to my_func in order to detect the error.
I propose to add a new syntax that declares the type of the function after
the function name.
def my_callback: CallbackType(a, b):
    pass
any further parameter annotations would be disallowed:
def my_callback: CallbackType(a: int, b: str):  # Syntax error - duplicate annotations
    pass
If the function parameters do not match the type signature, type checkers
would flag this as a type mismatch.
def my_callback: CallbackType(a):  # type mismatch
    pass
My original thought was that if CallbackType was not a Callable this would
also be a type error.
However I have since realized this could be used for declaring the return
type of properties:
For example:
class MyClass(object):
    @property
    def my_prop: int(self):
        return 10

c = MyClass()
Then c.my_prop would be type hinted as an integer.
What do people think?
Cheers
Tim

I am in desperate need of a dict-like structure that allows sets and/or
dicts as keys *and* values. My application is NLP conceptual plagiarism
detection, dealing with infinite grammars communicating illogical
concepts. It would be even better if keys could nest the same data
structure, e.g. set(s) or dict(s) inside the set(s) or dict(s) used as
key(s).
In order to detect conceptual plagiarism, I need to populate a data
structure with if/then equivalents as a decision tree. But my equivalents
have potentially infinite ways of arranging them syntactically* and*
semantically.
A dict treats keys with identical set values as distinct elements. I am
dealing with semantics, or elemental equivalents: many different
statements are treated as equivalent statements involving if/then
(key/value), or a implies b, where a and/or b can be an element or an
if/then as an element. Modeling the syntactic equivalences of such claims
is paramount, and in order to do that, I need the data structure.
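One way to approximate this with built-in types is a recursive `freeze` helper (hypothetical name) that converts sets and dicts into hashable equivalents, so that semantically equal keys collide as desired:

```python
def freeze(obj):
    """Recursively convert dicts/sets/lists into hashable equivalents."""
    if isinstance(obj, dict):
        return frozenset((freeze(k), freeze(v)) for k, v in obj.items())
    if isinstance(obj, (set, frozenset)):
        return frozenset(freeze(x) for x in obj)
    if isinstance(obj, list):
        return tuple(freeze(x) for x in obj)
    return obj

rules = {}
rules[freeze({'if': {'a', 'b'}})] = freeze({'then': {'c'}})

# Element order inside the sets no longer matters for key lookup:
assert rules[freeze({'if': {'b', 'a'}})] == freeze({'then': {'c'}})
```

The trade-off is that frozen values must be converted back by hand if they need to be modified.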
Hello, I am Stephanie. I have never contributed to any open source. I am
about intermediate at python and I am a self-directed learner/hobbyist. I
am trying to prove with my code that a particular very famous high profile
pop debate intellectual is plagiarizing Anders Breivik. I can show it via
observation, but his dishonesty is dispersed among many different
talks/lectures. I am dealing with a large number of speaking hours as
transcripts containing breadcrumbs that are very difficult for a human to
piece together as having come from the manifesto which is 1515 pages and
about half copied from other sources. The concepts stolen are
rearrangements and reorganizations of the same identical claims and themes.
He occasionally uses literal string plagiarism but not very much at once.
He is very good at elaboration which makes it even more difficult.
Thank you, for your time,
Stephanie

Hi,
I would like to discuss the idea of a (minor) code version
evolver/re-writer (or at least a change indicator). Say one wants to add a
feature in the next version and some small grammar change is needed; the
script would first upgrade/evolve the current code, and then the new
version could be installed/used.
Something like:
>>> new_code = python_next('3.7', current_code)
>>> is_python_code('3.8', new_code)
True
How hard is that? Or what is possible and what not?
Where should it happen? In the installer?
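The checking half, at least, is easy to approximate today with the standard library's ast module (a sketch; it can only test code against the running interpreter's own grammar, not an arbitrary version string):

```python
import ast

def is_python_code(source):
    """Return True if source parses under the running interpreter's grammar."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

assert is_python_code("x = 1")
assert not is_python_code("def = 3")
```

The rewriting half is the hard part; the 2to3 tool's fixer framework is the closest existing precedent.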
Thanks in advance!
--francis

Hi,
the idea here is just to add __larrow__ and __rarrow__ dunder methods for
new <- and -> operators.
E.g. use on dicts:
>>> d1 = {'a':1, 'b':1 }
>>> d2 = {'a':2 }
>>> d3 = d1 -> d2
>>> d3
{'a':1, 'b':1 }
>>> d1 = {'a':1, 'b':1 }
>>> d2 = {'a':2 }
>>> d3 = d1 <- d2
>>> d3
{'a':2, 'b':1 }
Or on bools as Modus Ponens [1]
Or your idea/imagination here :-)
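Since <- and -> are not valid operators today, the proposed dict semantics can only be sketched as plain functions (hypothetical names `rarrow`/`larrow`):

```python
def rarrow(d1, d2):
    """d1 -> d2: keys of d1 take precedence."""
    merged = dict(d2)
    merged.update(d1)
    return merged

def larrow(d1, d2):
    """d1 <- d2: keys of d2 take precedence."""
    merged = dict(d1)
    merged.update(d2)
    return merged

d1 = {'a': 1, 'b': 1}
d2 = {'a': 2}

assert rarrow(d1, d2) == {'a': 1, 'b': 1}
assert larrow(d1, d2) == {'a': 2, 'b': 1}
```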
Regards,
--francis
[1] https://en.wikipedia.org/wiki/Modus_ponens