As Python focuses on readability, why not use the % sign for actual percentages?
rate = 0.1058 # float
rate = 10.58% # percent, similar to above
It does not interfere with the modulo operator, as modulo follows a different syntax:
a = x % y
This looks like a small feature but it will surely set Python a level
higher in terms of readability.
Thanks a lot!
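For comparison, here is roughly how one can spell this today without new
syntax (the helper name pct is purely illustrative):

    def pct(value):
        """Convert a percentage to its fractional value: pct(10.58) -> 0.1058."""
        return value / 100

    rate = pct(10.58)  # 0.1058, which is what the proposed 10.58% literal would mean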
I'd like to bounce this proposal off everyone and see if it's worth
formulating as a PEP. I haven't found any prior discussion of it, but as we
all know, searches can easily miss things, so if this is old hat please LMK.
*Summary: *The construction
with expr1 as var1, expr2 as var2, ...:
fails (with an AttributeError) unless each expression returns a value
satisfying the context manager protocol. Instead, we should permit any
expression to be used. If a value does not expose an __enter__ method, it
should behave as though its __enter__ method were 'return self'; if it does not
have an __exit__ method, it should behave as though that method were 'return False'.
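To make the intended semantics concrete, here is a rough sketch of a
wrapper that approximates them in today's Python (the name AnyWith is
purely illustrative and not part of the proposal):

    class AnyWith:
        """Wrap any object so that it can be used in a 'with' statement."""
        def __init__(self, obj):
            self.obj = obj
        def __enter__(self):
            enter = getattr(type(self.obj), "__enter__", None)
            # proposed default: behave as though __enter__ were 'return self'
            return self.obj if enter is None else enter(self.obj)
        def __exit__(self, *exc):
            exit_ = getattr(type(self.obj), "__exit__", None)
            # proposed default: behave as though __exit__ were 'return False'
            return False if exit_ is None else exit_(self.obj, *exc)

With such a wrapper, 'with AnyWith(expr) as var:' works whether or not expr
evaluates to a context manager; the proposal is to make the wrapper
unnecessary.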
*Rationale: *The with statement has proven to be a valued extension to
Python. In addition to providing improved readability for block scoping, it
has strongly encouraged the use of scoped cleanups for objects which
require them, such as files and mutexes, in the process eliminating a lot
of annoying bugs. I would argue that at present, whenever dealing with an
object which requires such cleanup at a known time, 'with' should be the
default way of doing it, and *not* doing so is the sort of thing one should
be explaining in a code comment. However, the current syntax makes a few
common patterns harder to implement than they should be.
For example, this is a good pattern:
    with functionReturningFile(...) as input:
        ... body ...
There are many cases where an Optional[file] makes sense as a parameter, as
well; for example, an optional debug output stream, or an input source
which may either be a file (if provided) or some non-file source (by
default). Likewise, there are many cases where a function may naturally
return an Optional[file], e.g. "open the file if the user has provided the
filename." However, the following is *not* valid Python:
    with functionReturningOptionalFile(...) as input:
        ... body ...
To handle this case, one has a few options. One may only use the 'with' in
the known safe cases:

    inputFile = functionReturningOptionalFile(...)
    if inputFile is not None:
        with inputFile as input:
            processInput(input)
    else:
        processInput(None)

(NB that this requires factoring the with statement body into its own
function, which may separately reduce readability and/or introduce
overhead); one may dispense with the 'with' clause and do the cleanup by
hand:

    input = functionReturningOptionalFile(...)
    ... body ...
    if input is not None: input.close()

(This sacrifices all the benefits of the with statement, and requires the
caller to explicitly call the cleanup methods, increasing error-proneness);
or one may construct an explicit 'dev-null' class and return it instead of
None:

    .... implement the entire File API, including a context manager ...

(This can only be described as god-awful, especially for complex APIs like
files.)
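For completeness: since Python 3.7 one can get part of the way there with
contextlib.nullcontext, though it still forces every call site to spell out
the fallback by hand (a sketch, reusing the hypothetical
functionReturningOptionalFile from above):

    from contextlib import nullcontext  # Python 3.7+

    inputFile = functionReturningOptionalFile(...)
    with (inputFile if inputFile is not None else nullcontext()) as input:
        ...  # body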
One obvious option would be to allow None to act as a context manager as
well. We might contrast this with PEP 336
<https://www.python.org/dev/peps/pep-0336/>, "Make None Callable." This was
rejected (rightly, I think) because "it is considered a feature that None
raises an error if called." For example, it means that if a function
variable has been nulled, attempting to call it later raises an error, as
this usually indicates a code mistake. In the case where that is not
correct, it is easy to assign a noop lambda to the function variable
instead of None, thus allowing the error-checking and the
function-deactivating behaviors to both persist, and in a clear and easily
readable way.
In this case, OTOH, the AttributeError raised if None is passed to a with
statement has significantly lower value. As the example above illustrates,
there are many cases where None is an entirely legitimate value to want to
pass, and unlike in the other situation, there is no equally easy way to
pass it. Furthermore, if the passing of None *is* an error in some case, it
is more useful to see that error at the site where the variable is actually
used in the with statement body -- the thing for which it does not make
sense to use None -- rather than at a structural declaration which
essentially defines a variable scope.
This is also the reason why such a change would impact relatively little
existing code: code already has to be structured to prevent this from
happening. If the assigned expression in the with statement could only
return None as a result of a code bug, and a piece of existing code is
relying on the with statement to catch it, it would instead fall through
and be caught by the body code itself, presumably giving a more coherent
error anyway. This is a nonzero change in behavior, but it's well within
the scope of behavior changes which normally occur from version to version.
One alternative to this proposal would be to have only None allowed to act
as a context manager. However, None is not particularly special in this
regard; the logic above applies to any function which might return a Union
type. Furthermore, allowing it for any type would permit the following
construction as well:
    with expr1 as var1, expr2 as var2, ...:
        .... body ...
where the common factor between the variables is no longer their need for a
guaranteed cleanup operation, but simply that they are semantically all
tied to a single scope of the code. This improves code clarity, as it
allows the syntax to follow the intent more closely, and also eliminates
one other ugliness. In present Python, the required syntax for the above is:
var1 = expr1
var3 = expr3
    with expr2 as var2, expr4 as var4:
        ... body ...
where the variables in the 'with' statement are those which satisfy the
context manager protocol, and the ones above it are those which do not
satisfy the protocol. The split between the two is entirely tied to a
nonlocal fact about the code, namely the implementation of the return
values of each of the expressions, making it nonobvious which is which.
Worse, if the expressions depend on each other in sequence, this may have
to be broken up into
var1 = expr1
    with expr2 as var2:
        var3 = expr3(var1, var2)
        with expr4(var3, ...) as var4:
            .... body ...
This seems to lose on every measure of clarity and maintainability relative
to the single compound 'with' statement above.
Finally, one may ask if an (effective) default implementation of a protocol
is ever a good idea. "Hidden defaults" are a great way to trigger
surprising behavior, after all. However, in this case I would argue that
the proposed default behavior is sufficiently obvious that there is no
risk. Someone seeing a compound 'with' statement of the above form would
naturally assume that its consequence is (a) to set each varN to the
corresponding exprN, and (b) to execute any scope-initializers tied to
exprN. Likewise, someone would naturally assume that nothing at all happens
at scope exit, which is exactly the behavior of __exit__ being 'return
False'. In fact, this *increases* local code clarity, since the
counter-case -- where the implementation of each defaults (effectively) to
raising an AttributeError -- is nonobvious and so requires that "nonlocal
knowledge" of the code in order to assemble with statements.
*Specific implementation proposal: *Actually defining __enter__ and
__exit__ methods for each object would be a lot of overhead for no real
benefit. Instead, we can easily implement this as a change to the specified
behavior of the 'with' statement, simply by changing the error-handling
behavior in the SETUP_WITH
cases in ceval.c. If this does proceed to the PEP stage, I'll put together
a changelist, but it's very straightforward. Null values for enter and exit
are no longer errors; if enter is null, then instead of decrementing the
refcount of mgr and calling enter, we leave the mgr refcount alone and push
it onto the stack in place of the result. If exit is null, we simply push
it onto the stack just like we would normally, and ignore it in the
block-exit cleanup.
I see Python 3.5 accepted PEP 465, adding a new infix operator for matrix
multiplication (@, @=), which made matrix formulas much less painful to
read in Python. There are still more use cases like this in other areas.
While looking at Chisel (a hardware construction language built on top
of Scala), where you can create arbitrary new operators, e.g. := used
for specific purposes, I realized there is no way to overload the
behavior of the assignment operator in Python unless new operators are
introduced. In the Python world, e.g. MyHDL (a hardware description
language in Python) uses something like signal.next = 5 ... which is 5
more characters to type (the .next part) for every single signal
assignment, and assigning a new value to a signal is one of the most
commonly used operations in the hardware design world.
The .next could have been avoided by using Python descriptors, but then
you have to type something like "obj.signal = 5" instead of "signal =
5", and it does not work if you want a local signal, where signal = 5
will always rebind the name signal to 5 instead of feeding 5 into the
signal object.
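To make that concrete, here is a minimal sketch of the descriptor approach
(Signal, SignalSlot and Module are illustrative names, not MyHDL's actual
API):

    class Signal:
        """A toy signal that records the value scheduled for the next cycle."""
        def __init__(self, init=0):
            self.next = init

    class SignalSlot:
        """Descriptor: assigning to the attribute feeds the value into the signal."""
        def __set_name__(self, owner, name):
            self.name = "_" + name
        def __get__(self, obj, objtype=None):
            return getattr(obj, self.name)
        def __set__(self, obj, value):
            getattr(obj, self.name).next = value  # route the assignment into the signal

    class Module:
        signal = SignalSlot()        # must be declared at class definition time
        def __init__(self):
            self._signal = Signal()

    m = Module()
    m.signal = 5           # feeds 5 into the signal ...
    print(m.signal.next)   # ... prints 5
    signal = Signal()
    signal = 5             # ... but a bare local name is simply rebound to 5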
I have experimented with adding two new Python operators, left arrow <-
and right arrow ->, whose behavior users can define themselves. It
still looks somewhat like the non-blocking assignment operator in
hardware description languages (e.g. Verilog's <=). It could also be
used to build data flows like a -> b -> c -> d -> ...
Another side effect is that this "__arrow__" call can fully replace
descriptors and allows you to get descriptor-like behavior at
object initialization time instead of class definition time, e.g. in
__init__(self, ...) you can dynamically create fields like self.abc which
can then be accessed like obj.abc <- 3 etc. With descriptors, this can
currently only be achieved elegantly by using metaclasses.
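For comparison, one way to get per-instance, initialization-time routing
today without metaclasses is to override __setattr__; a rough sketch (all
names illustrative), workable but much clumsier than a dedicated operator
would be:

    class Signal:
        def __init__(self, init=0):
            self.next = init

    class Module:
        def __init__(self):
            # per-instance "signal" fields created at initialization time
            object.__setattr__(self, "_signals", {"abc": Signal()})
        def __getattr__(self, name):
            try:
                return self.__dict__["_signals"][name]
            except KeyError:
                raise AttributeError(name)
        def __setattr__(self, name, value):
            signals = self.__dict__.get("_signals", {})
            if name in signals:
                signals[name].next = value  # route "obj.abc = 3" into the signal
            else:
                object.__setattr__(self, name, value)

    m = Module()
    m.abc = 3            # feeds 3 into the dynamically created signal
    print(m.abc.next)    # prints 3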
This is the first time I am writing to the python-ideas list; I just want
to hear what you think. Should Python actually allow operators to be
first-class citizens? Does the arrow operator make sense at all? Are there
any better ways for users to redefine the "assignment" behavior?
As Cameron Simpson already pointed out, your query is off-topic for the
Python-Dev mailing list and should be taken to the Python-Ideas mailing
list, which is for speculative discussion of new designs.
Like Cameron, I've CCed Python-Ideas. Please send any follow-ups to that
list and not Python-Dev.
You asked this question:
On Tue, May 28, 2019 at 09:35:51PM -0600, Montana Burr wrote:
> Ok, now I'm mildly curious to know:
> What is the justification for causing list == 3 to evaluate to False,
> besides the obvious "a list cannot equal a number"?
I concur with Terry Reedy -- what more justification is needed? A list
cannot equal a number, so the default behaviour ought to be to return
False. What would you have the default behaviour be?
People have already suggested that getting the numpy-style behaviour is
simple with a list comprehension, but the other technique is to subclass
list, override ``__eq__`` and give it the behaviour you want.
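For example, a quick sketch (ElementwiseList is just an illustrative name):

    class ElementwiseList(list):
        def __eq__(self, other):
            # compare every element against the scalar, NumPy-style
            return [item == other for item in self]

    print(ElementwiseList([1, 2, 3, 4, 5]) == 3)
    # [False, False, True, False, False]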
This belongs on python-ideas, not python-dev. I've directed replies to
this message there. Comments below.
On 26May2019 21:52, Montana Burr <montana.burr(a)gmail.com> wrote:
>NumPy arrays have this awesome feature, where array == 3 does an
>element-wise comparison and returns a list. For example:
>It would be cool if Python had similar functionality for lists.
map(lambda item: item == 3, [1, 2, 3, 4, 5])
I'm not sure this rates extra Python features.
Personally I'm -1 on this suggestion because == traditionally returns a
Boolean, NumPy notwithstanding. Your example above doesn't return a
Boolean.
>If that is not possible, perhaps we could consider allowing developers
>to overload operators on built-in types within the context of a project or
>module. For example, an overload in one module would have no effect on the
>same operator in a different module (such as any Python standard modules.)
This is usually done by overloading dunder methods on classes. If your
class subclasses a builtin, e.g. int or list, then the instances get the
overloaded behaviour.
>Additionally, let's then give the developers the option to explicitly
>import an overload from other modules. So, people could develop a module
>with the purpose of providing overloads that make complete sense within a
If you go the subclass route you could do this with a mixin class (a
class providing methods but little else, intended to be part of the MRO
of a subclass).
Cameron Simpson <cs(a)cskk.id.au>
I was working on bpo20443 <https://bugs.python.org/issue20443> but then I
realized it changes the behavior of the compiler and some functions, so I
want to propose the change here first and then write a PEP. I have a draft
PR; it introduces a new future flag, and as far as I can understand from
the future docs, I need a PEP.
Before writing the PEP, is there a core developer who wants to sponsor it?
Reference implementation: https://github.com/python/cpython/pull/13527
GraalPython is a Python implementation on the Truffle VM, by Oracle Labs.
The VM gives Python high-performance interoperability with other
languages. Truffle is described as an "AST Interpreter," which I think
means that it executes an interpreter over AST nodes, then compiles and
inlines those nodes to machine code as necessary. By default, the Truffle
VM runs with the Graal JIT enabled; the Graal JIT compiles the AST down to
machine code.
(Graal's JavaScript implementation reportedly uses significantly more
memory than V8, while having lower performance on pure JavaScript code.)
I don't know if this is already possible in Python through Python's built-in C
extensions. (Since CPython still has to go through its normal dot attribute
lookup machinery, I'd imagine the performance benefits of converting a
class to a C struct would be little.)
TL;DR: GraalPython is a Python implementation with a JIT, an easy way to
work with C extensions, and an easily extensible AST interpreter.
On Wed, May 22, 2019 at 10:03 PM Yanghao Hua <yanghao.py(a)gmail.com> wrote:
> > To be first-class citizens, operators would have to be able to be
> > passed to functions - for instance:
> > def frob(x, y, oper):
> > return x oper y
> > assert frob(10, 20, +) == 30
> > assert frob(10, 20, *) == 200
> > The nearest Python currently has to this is the "operator" module, in
> > which you'll find function versions of the operators:
> > def frob(x, y, oper):
> > return oper(x, y)
> > assert frob(10, 20, operator.add) == 30
> > assert frob(10, 20, operator.mul) == 200
> > Is this what you're talking about?
> Yes this is exactly what I am talking about .. and this is exactly
> what scala does.
> In scala, you can do "a + b", as well as "a.+(b)", where "+" is just a
> function of object a.
> You know people keep building domain specific languages for specific
> problems, and Scala seems currently the only one that can truly
> enable people to build DSL elegantly. I actually do not know at all
> how to do this in CPython as I just started playing with Python
> internals a few days ago, but having an arrow operator would solve my
> immediate problem in terms of using Python to build a DSL for hardware
> design (elegantly!).
> Does this make sense?
Yes, it does make sense. Forgive my lack of Scala knowledge, but is it
possible for 'b' in your example to be the one that handles the
addition? Specific case:
class Seven:
    def __add__(self, other): return other + 7
    def __radd__(self, other): return 7 + other

seven = Seven()
print(seven + 1)  # 8
print(2 + seven)  # 9
This works in Python even though 2.+(seven) wouldn't know how to
handle this object. So it returns NotImplemented, and Python says, oh
okay, maybe the other object knows how to do this. (That's what
__radd__ is for.) This is why Python's operators are all defined by
the language, with multiple levels of protocol.
> Perhaps you might be able to do what you want using an import hook. I have done some experiments with introducing some new operators that way: https://github.com/aroberge/experimental/blob/master/experimental/transform…
Thanks for the link, it is a very interesting module.
I have to confess there is one more level of intention I have not
talked about ... it is about analyzing the Python VM code and
translating it to C++/SystemC/Verilog, or even doing direct synthesis to
basic hardware building blocks (think AND/OR gates, flip-flops,
etc.). So basically we need a way to tell a "<-" arrow assignment apart
from a normal assignment. Source code modification is definitely an
interesting idea, but it makes this second step difficult, as one of
the first steps in checking whether a module/function is synthesizable
is to make sure it does NOT just call arbitrary functions. (Of course I
can try to decorate the function in certain ways ... but then you have
to force this policy on all users for their logic block definitions ...
I really wanted an almost one-to-one mapping between a Python function
and a hardware block ... without even a single decorator ...) I will
give it a deeper try to see how far I can go with this library (it has
a few hundred lines of code, but supporting arrow operators in Python
is really just below 100 lines of code ...)
> If you are proposing to add "<-" and "->" to the fundamental Python
> grammar, then it could work as PEP465 does. But if you are proposing
> that classes can define new lexical operators, it would have to work
> very very differently. You started with "operators as first class
> citizens," but I don't know what you mean by that.
It is my bad to mix these two topics together ... I am not strongly
proposing for now to have Python support arbitrary operator
definitions; I can live with the arrow proposal if it makes sense. I
also see that even the term "first class citizen" causes confusion here
... in that Python does have a certain extent of support for the
existing operators. What I should have emphasized more is the dynamic
nature of creating new operators, which Scala seems to do a better job
of in this area for now.
> When Python compiles a file, it does not read the imported files at
> all. That doesn't happen until run time. So how will Python know what
> "<-" means in circuit.py? It's never seen this token before, so it
> can't know what special method it should map onto. It can't even know
> what the token is. What if it encounters "x ~++<-+ y"? What are the
> tokens in that source?
I see your point now. Scala does this by simply looking up a method
named "<-" on the object, e.g. x <- y is equivalent to "x.<-(y)". If "x
~++<-+ y" is encountered, Scala will most likely interpret it as
"x.~++<-+(y)" (I haven't tried); there is also a precedence rule for
user-defined operators. So if we want to change Python to support
arbitrary operators, the operator itself would probably need to be
loaded by a VM instruction, and then that operator looked up on the
object being operated on. This sounds like a lot of overhead, probably
even requiring a string comparison? I think in the short term this is
going to be difficult to justify, and I have not looked into the
performance implications. But anyway, in this case the importing seems
unnecessary, as the operator would *NOT* be implemented in hdl.py, but
would be part of the object used in circuit.py.