First off, I have admittedly not read all of this thread.  However, as a designer of DSLs in Python, I wanted to jump in on a couple of things I have seen suggested.  Sorry if I am repeating comments already made.  Over the last few years I have thought about every suggestion I have seen in this thread, and they all either don't work or are undesirable for one reason or another.

Regarding what code would become simpler if an assignment operator were available: I currently walk the AST to rewrite assignments into the form I want.  This code is really hard to read if you are not familiar with the Python AST, but the task it performs is not hard to understand at all: replace assignment nodes with calls to dsl_assign(target_names, value, globals(), locals()).  The dsl_assign function basically performs some type checking before doing the assignment.  Once again, the code is much harder to understand than it should be, as it operates on the names of variables and the globals / locals dictionaries instead of on the variables themselves.  Also, to get the hook into the AST I have to use an importer, which further obscures my code and makes use kind of annoying, as one has to do the following:
import dsl # sets up the importer to rewrite the AST
import dsl_code # the code which should morally be the main module but must be imported after dsl
Granted, anything in a function or a class can be rewritten with a decorator, but module-level code must be rewritten by an importer.
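
For concreteness, the rewrite step can be sketched roughly like this (a minimal sketch, not my actual code; dsl_assign is the hook described above):

```python
import ast

class AssignRewriter(ast.NodeTransformer):
    """Rewrite `x = value` into `dsl_assign(['x'], value, globals(), locals())`."""
    def visit_Assign(self, node):
        names = [t.id for t in node.targets if isinstance(t, ast.Name)]
        call = ast.Expr(ast.Call(
            func=ast.Name('dsl_assign', ast.Load()),
            args=[
                ast.List([ast.Constant(n) for n in names], ast.Load()),
                node.value,
                ast.Call(ast.Name('globals', ast.Load()), [], []),
                ast.Call(ast.Name('locals', ast.Load()), [], []),
            ],
            keywords=[],
        ))
        return ast.fix_missing_locations(ast.copy_location(call, node))
```

The importer then applies this transformer to every DSL module before compiling it, which is exactly the machinery an assignment operator would make unnecessary.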

The problem with overloading obj @= value:
As Yanghao has pointed out, @ comes with expectations of behavior.  Granted, I would gamble that most developers are unaware that @ is a Python operator, but it still has meaning, and as such I don't like abusing it.
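
To illustrate what that abuse would look like, here is a minimal sketch (Signal is a hypothetical stand-in) that hijacks __imatmul__, the in-place matrix-multiplication hook, for type-checked assignment:

```python
class Signal:
    """Hypothetical DSL value; `sig @= v` is (ab)used to mean type-checked assignment."""
    def __init__(self, typ):
        self.typ = typ
        self.value = None

    def __imatmul__(self, value):
        # @= normally means in-place matrix multiplication, not assignment
        if not isinstance(value, self.typ):
            raise TypeError(f'{value!r} is not a {self.typ.__name__}')
        self.value = value
        return self  # must return self so `x @= v` keeps x bound to the Signal
```

With this, x = Signal(int); x @= 1 works and x @= 'a' raises TypeError, but any reader who knows @ as matrix multiplication will be misled.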

The problem with using obj[:] = value:
Similar to @, getitem and slices have meaning which I don't necessarily want to override.  Granted, this is the least objectionable solution I have seen, although it creates weird requirements for types that have __getitem__, so that it can just be used by inheriting `TypedAssignment` or something similar.
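
A minimal sketch of that mixin idea (the name TypedAssignment is taken from the mention above; the rest is hypothetical): a full-slice store performs the type-checked assignment, which collides with any meaning the class already gives __setitem__ / __getitem__:

```python
class TypedAssignment:
    """Hypothetical mixin: `sig[:] = v` performs a type-checked assignment."""
    def __init__(self, typ):
        self._typ = typ
        self._value = None

    def __setitem__(self, key, value):
        if key != slice(None):
            raise TypeError('only full-slice assignment (obj[:] = v) is supported')
        if not isinstance(value, self._typ):
            raise TypeError(f'{value!r} is not a {self._typ.__name__}')
        self._value = value

    def __getitem__(self, key):
        if key != slice(None):
            raise TypeError('only obj[:] is supported')
        return self._value
```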

The problem with descriptors:
They are hard to pass to functions; for example, consider trying to fold assignment.  One would like to write:
signals = [Signal('x'), Signal('y'), Signal('z')]
out = functools.reduce(operator.iassign, signals)
but instead must write something like:
signals = SignalNamespace()
signal_names = ['x', 'y', 'z']
out = functools.reduce(lambda v, name: setattr(signals, name, v), signal_names)
In general one has to pass the name of the signal and the namespace to a function instead of the signal itself, which is problematic.
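
To make the awkwardness concrete, here is a runnable sketch (Signal and SignalNamespace are hypothetical stand-ins): with descriptors, the fold has to thread (namespace, name) pairs through, because the descriptor instance itself cannot be assigned to directly:

```python
import functools

class Signal:
    """Hypothetical descriptor: attribute assignment stores the value on the instance."""
    def __set_name__(self, owner, name):
        self.name = name

    def __set__(self, obj, value):
        obj.__dict__['_' + self.name] = value  # shadow slot for the stored value

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get('_' + self.name)

class SignalNamespace:
    x = Signal()
    y = Signal()
    z = Signal()

signals = SignalNamespace()
# The fold operates on names plus a captured namespace, never on the signals:
out = functools.reduce(
    lambda v, name: (setattr(signals, name, v), getattr(signals, name))[1],
    ['x', 'y', 'z'],
    7,
)
```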

The problem with exec:
First off, it's totally unpythonic, but even if I hide the exec with importer magic it still doesn't give me the behavior I want.
Consider the following:
from collections.abc import MutableMapping

_MISSING = object()  # sentinel for "no value bound yet"

class TypeCheckDict(dict, MutableMapping): # dict subclass needed to be usable as globals
  """Dictionary which binds keys to a type on first assignment then type checks on future
  assignment. Will infer type if not already bound. """
  __slots__ = '_d'
  def __init__(self, d=_MISSING):
    if d is _MISSING:
      d = {}
    self._d = d

  def __getitem__(self, name):
    v = self._d[name][1]
    if v is _MISSING:
      raise ValueError(name)
    return v

  def __setitem__(self, name, value):
    if name not in self._d:
      if isinstance(value, type):
        self._d[name] = [value, _MISSING]     # bind the type; no value yet
      else:
        self._d[name] = [type(value), value]  # infer the type from the first value
    elif isinstance(value, self._d[name][0]):
      self._d[name][1] = value
    else:
      raise TypeError(f'{value} is not a {self._d[name][0]}')
  # __len__ __iter__ __delitem__ just dispatch to self._d

S = '''
x = int
x = 1
x = 'a'
'''
exec(S, TypeCheckDict(), TypeCheckDict()) # raises TypeError: 'a' is not an int

S = '''
def foo(): # the type of foo is inferred
  x = int
  x = 'a'
foo()
'''
exec(S, TypeCheckDict(), TypeCheckDict()) # doesn't raise an error, as a normal dict is used inside foo

Caleb Donovick

On Thu, Jun 6, 2019 at 12:54 AM Yanghao Hua <> wrote:
On Wed, Jun 5, 2019 at 11:31 AM Chris Angelico <> wrote:
> Part of the reason you can't just treat the + operator as a method
> call is that there are reflected methods. Consider:
> class Int(int):
>     def __radd__(self, other):
>         print("You're adding %s to me!" % other)
>         return 1234
> x = Int(7)
> print(x + 1)
> print(1 + x)
> If these were implemented as x.__add__(1) and (1).__add__(x), the
> second one would use the default implementation of addition. The left
> operand would be the only one able to decide how something should be
> implemented.

Yep, just did an experiment in Scala, where you can do x + 1, but not
1 + x. So it loses some flexibility in terms of how you write your
expression, but still, it looks OK to only write x + 1, and when you
write 1 + x you get an error immediately.
Python-Ideas mailing list -- python-dev(a)
To unsubscribe send an email to python-ideas-leave(a)