I don't know if this was already debated but I don't know how to search
in the whole archive of the list.
For now, adoption of the pyproject.toml file is more difficult than it
should be because TOML is not in the standard library.
Each tool which wants to use pyproject.toml has to add a toml lib as a
conditional or hard dependency.
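As a concrete illustration, this is roughly the conditional-dependency dance every such tool has to repeat (the third-party package name `toml` and the `read_pyproject()` helper here are just illustrative assumptions):

```python
# Sketch of the conditional dependency each tool must carry today.
# The 'toml' package and read_pyproject() helper are hypothetical examples.
try:
    import toml
    HAVE_TOML = True
except ImportError:
    HAVE_TOML = False

def read_pyproject(path="pyproject.toml"):
    if not HAVE_TOML:
        raise RuntimeError(
            "reading pyproject.toml requires a third-party TOML library")
    with open(path) as f:
        return toml.load(f)
```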
Since TOML is now the standard configuration file format, it's strange
that Python does not support it in the stdlib, just like it would have been
strange to not have the configparser module.
I know it's complicated to add more and more things to the stdlib, but I
really think it is necessary for Python packaging to become more consistent.
Maybe we could consider a read-only lib to limit the amount of added code.
If it's conceivable, I'd be happy to help with it.
Nice Day guys and girls.
As suggested by Toshio Kuratomi at https://bugs.python.org/issue36656, I
am raising this here for inclusion in the shutil module.
Mimicking POSIX, os.symlink() will raise FileExistsError if the link
name to be created already exists.
A common use case is overwriting an existing file (often a symlink) with
a symlink. Naively, one would delete the file named link_name if it
exists, then call symlink(). This "solution" is already 3 lines of code,
and without exception handling it introduces a race condition: a
file named link_name may be created between the unlink and the symlink.
Depending on the functionality required, I suggest:
* os.symlink() - the new link name is expected to NOT exist
* shutil.symlink() - the new symlink replaces an existing file
Handling all possible race conditions (some detailed in issue36656) is
non-trivial, however this is the best that I have come up with so far:
import os, tempfile

def symlink(target, link_name):
    '''Create a symbolic link link_name pointing to target.
    Overwrites link_name if it exists.'''
    # os.replace() may fail if files are on different filesystems
    link_dir = os.path.dirname(link_name)
    # Link to a temporary filename that doesn't exist
    while True:
        temp_link_name = tempfile.mktemp(dir=link_dir)
        # os.* functions mimic as closely as possible system functions
        # The POSIX symlink() returns EEXIST if link_name already exists
        try:
            os.symlink(target, temp_link_name)
            break
        except FileExistsError:
            pass
    try:
        # Pre-empt os.replace on a directory with a nicer message
        if os.path.isdir(link_name) and not os.path.islink(link_name):
            raise IsADirectoryError(
                f"Cannot symlink over existing directory: {link_name!r}")
        # Replace link_name with temp_link_name
        os.replace(temp_link_name, link_name)
    except BaseException:
        if os.path.islink(temp_link_name):
            os.remove(temp_link_name)
        raise
The documentation (https://docs.python.org/3/library/shutil.html) I
suggest for this is:
Create a symbolic link named link_name pointing to target, overwriting
link_name if it exists. If link_name is an existing directory,
IsADirectoryError is raised. To not overwrite link_name, use os.symlink()
It would be tempting to do:

if os.path.exists(link_name):
    os.remove(link_name)
os.symlink(target, link_name)

But this has a race condition when replacing a symlink that should
*always* exist, eg:
/lib/critical.so -> /lib/critical.so.1.2
When upgrading it with the naive remove-then-symlink sequence, there is a
point in time when /lib/critical.so doesn't exist.
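The temp-name-plus-replace approach avoids that window entirely; a minimal sketch of the idea (POSIX rename semantics assumed, file names invented for the demo):

```python
import os
import tempfile

# Build the new symlink under a random temporary name, then atomically
# rename it over the old one: the link name never ceases to exist.
d = tempfile.mkdtemp()
old_target = os.path.join(d, "critical.so.1.2")
new_target = os.path.join(d, "critical.so.1.3")
open(old_target, "w").close()
open(new_target, "w").close()

link = os.path.join(d, "critical.so")
os.symlink(old_target, link)      # the existing, always-present link
tmp = tempfile.mktemp(dir=d)      # random name, very unlikely to collide
os.symlink(new_target, tmp)       # new link, built out of the way
os.replace(tmp, link)             # atomic swap: no moment without 'link'
assert os.readlink(link) == new_target
```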
One issue I see with my suggested code is that the file at
temp_link_name could be changed before link_name is replaced with it. This
is mitigated by the randomness introduced by mktemp().
While it is far less likely that a file is accessed with a random and
unknown name than with an existing known name, I seek input on a
solution if this is an unacceptable risk.
* https://bugs.python.org/issue36656 (already mentioned above)
As Python focuses on readability, why not use % sign for actual percentages?
rate = 0.1058 # float
rate = 10.58% # percent, similar to above
It does not interfere with the modulo operator, as modulo follows a
different syntactic pattern:

a = x % y
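Lacking such a literal, the closest one can get today is an ordinary helper function (the name `percent` is just an illustration), which also shows how little typing the proposed literal would save:

```python
def percent(x):
    """Interpret x as a percentage, returning the plain fraction."""
    return x / 100

rate = percent(10.58)   # stands in for the proposed 10.58% literal
assert abs(rate - 0.1058) < 1e-12
```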
This looks like a small feature but it will surely set Python a level
higher in terms of readability.
Thanks a lot!
I'd like to bounce this proposal off everyone and see if it's worth
formulating as a PEP. I haven't found any prior discussion of it, but as we
all know, searches can easily miss things, so if this is old hat please LMK.
*Summary: *The construction
with expr1 as var1, expr2 as var2, ...:
fails (with an AttributeError) unless each expression returns a value
satisfying the context manager protocol. Instead, we should permit any
expression to be used. If a value does not expose an __enter__ method, it
should behave as though its __enter__ method is 'return self'; if it does
not have an __exit__ method, it should behave as though that method is
'return False'.
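The proposed semantics can be prototyped today as a wrapper context manager (the name `optional_cm` is invented here; this sketches the behavior only, whereas the proposal itself would live in the interpreter):

```python
class optional_cm:
    """Wrap any value so 'with' accepts it, per the proposed defaults."""
    def __init__(self, obj):
        self.obj = obj
    def __enter__(self):
        # A missing __enter__ behaves like 'return self'
        if hasattr(self.obj, "__enter__"):
            return self.obj.__enter__()
        return self.obj
    def __exit__(self, *exc):
        # A missing __exit__ behaves like 'return False'
        if hasattr(self.obj, "__exit__"):
            return self.obj.__exit__(*exc)
        return False

with optional_cm(None) as f:
    assert f is None   # None now passes straight through
```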
*Rationale: *The with statement has proven to be a valued extension to
Python. In addition to providing improved readability for block scoping, it
has strongly encouraged the use of scoped cleanups for objects which
require them, such as files and mutexes, in the process eliminating a lot
of annoying bugs. I would argue that at present, whenever dealing with an
object which requires such cleanup at a known time, with should be the
default way of doing it, and *not* doing so is the sort of thing one should
be explaining in a code comment. However, the current syntax makes a few
common patterns harder to implement than they should be.
For example, this is a good pattern:
with functionReturningFile(...) as input:
    ... body ...
There are many cases where an Optional[file] makes sense as a parameter, as
well; for example, an optional debug output stream, or an input source
which may either be a file (if provided) or some non-file source (by
default). Likewise, there are many cases where a function may naturally
return an Optional[file], e.g. "open the file if the user has provided the
filename." However, the following is *not* valid Python:
with functionReturningOptionalFile(...) as input:
    ... body ...
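Concretely (with a hypothetical stand-in that returns None), the failure today looks like this; the exact exception type has varied across Python versions, hence the broad except:

```python
def functionReturningOptionalFile():
    return None  # hypothetical: the user provided no filename

rejected = False
try:
    with functionReturningOptionalFile() as input_file:
        pass  # never reached
except (AttributeError, TypeError):
    rejected = True  # the 'with' itself blows up on None
assert rejected
```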
To handle this case, one has a few options. One may only use the 'with' in
the known safe cases:
inputFile = functionReturningOptionalFile(...)
if inputFile is not None:
    with inputFile as input:
        body(input)
else:
    body(None)
(NB that this requires factoring the with statement body into its own
function, which may separately reduce readability and/or introduce
overhead); one may dispense with the 'with' clause and do the cleanup by
hand:

input = functionReturningOptionalFile(...)
try:
    ... body ...
finally:
    if input is not None:
        input.close()
(This sacrifices all the benefits of the with statement, and requires the
caller to explicitly call the cleanup methods, increasing error-proneness);
or one may construct an explicit 'dev-null' class and return it instead of
None:

.... implement the entire File API, including a context manager ...

(This can only be described as god-awful, especially for complex APIs like
files.)
One obvious option would be to allow None to act as a context manager as
well. We might contrast this with PEP 336
<https://www.python.org/dev/peps/pep-0336/>, "Make None Callable." This was
rejected (rightly, I think) because "it is considered a feature that None
raises an error if called." For example, it means that if a function
variable has been nulled, attempting to call it later raises an error, as
this usually indicates a code mistake. In the case where that is not
correct, it is easy to assign a noop lambda to the function variable
instead of None, thus allowing the error-checking and the
function-deactivating behaviors to both persist, in a clear and easily
readable way.
In this case, OTOH, the AttributeError raised if None is passed to a with
statement has significantly lower value. As the example above illustrates,
there are many cases where None is an entirely legitimate value to want to
pass, and unlike in the other situation, there is no equally easy way to
pass it. Furthermore, if the passing of None *is* an error in some case, it
is more useful to see that error at the site where the variable is actually
used in the with statement body -- the thing for which it does not make
sense to use None -- rather than at a structural declaration which
essentially defines a variable scope.
This is also the reason why such a change would impact relatively little
existing code: code already has to be structured to prevent this from
happening. If the assigned expression in the with statement could only
return None as a result of a code bug, and a piece of existing code is
relying on the with statement to catch it, it would instead fall through
and be caught by their own body code, presumably giving a more coherent
error anyway. This is a nonzero change in behavior, but it's well within
the scope of behavior changes which normally occur from version to version.
One alternative to this proposal would be to have only None allowed to act
as a context manager. However, None is not particularly special in this
regard; the logic above applies to any function which might return a Union
type. Furthermore, allowing it for any type would permit the following
construction as well:
with expr1 as var1, expr2 as var2, ...:
.... body ...
where the common factor between the variables is no longer their need for a
guaranteed cleanup operation, but simply that they are semantically all
tied to a single scope of the code. This improves code clarity, as it
allows the syntax to follow the intent more closely, and also eliminates
one other ugliness. In present Python, the required syntax for the above
is:

var1 = expr1
var3 = expr3
with expr2 as var2, expr4 as var4:
... body ...
where the variables in the 'with' statement are those which satisfy the
context manager protocol, and the ones above it are those which do not
satisfy the protocol. The split between the two is entirely tied to a
nonlocal fact about the code, namely the implementation of the return
values of each of the expressions, making it nonobvious which is which.
Worse, if the expressions depend on each other in sequence, this may have
to be broken up into
var1 = expr1
with expr2 as var2:
    var3 = expr3(var1, var2)
    with expr4(var3, ...) as var4:
.... body ...
This seems to lose on every measure of clarity and maintainability relative
to the single compound 'with' statement above.
Finally, one may ask if an (effective) default implementation of a protocol
is ever a good idea. "Hidden defaults" are a great way to trigger
surprising behavior, after all. However, in this case I would argue that
the proposed default behavior is sufficiently obvious that there is no
risk. Someone seeing a compound 'with' statement of the above form would
naturally assume that its consequence is (a) to set each varN to the
corresponding exprN, and (b) to execute any scope-initializers tied to
exprN. Likewise, someone would naturally assume that nothing at all happens
at scope exit, which is exactly the behavior of __exit__ being 'return
False'. In fact, this *increases* local code clarity, since the
counter-case -- where the implementation of each defaults (effectively) to
raising an AttributeError -- is nonobvious and so requires that "nonlocal
knowledge" of the code to assemble with statements.
*Specific implementation proposal: *Actually defining __enter__ and
__exit__ methods for each object would be a lot of overhead for no good
value. Instead, we can easily implement this as a change to the specified
behavior of the 'with' statement, simply by changing the error-handling
behavior in the SETUP_WITH
cases in ceval.c. If this does proceed to the PEP stage, I'll put together
a changelist, but it's very straightforward. Null values for enter and exit
are no longer errors; if enter is null, then instead of decrementing the
refcount of mgr and calling enter, we leave the mgr refcount alone and push
it onto the stack in place of the result. If exit is null, we simply push
it onto the stack just like we would normally, and ignore it in the
corresponding cleanup opcodes when the block exits.
I see python3.5 accepted PEP465 adding a new infix operator for matrix
(@, @=), which made matrix formulas much less painful to read in
Python. There are still more use cases like this in other areas.
While looking at Chisel (a hardware construction language built on top
of Scala), where you can create arbitrary new operators, e.g. := used
for specific purposes, I realize there is no way to overload the
behavior of the assignment operator in python unless new operators are
introduced. In the python world, e.g. MyHDL (Hardware description
language in python) uses something like signal.next = 5 ... which is 5
more chars to type (the .next part) for every single signal assignment,
and assigning a new value to a signal is one of the most commonly used
operations in the hardware design world.
The .next could have been saved by using Python descriptors, but then
you have to type something like "obj.signal = 5" instead of "signal =
5", and it does not work if you want a local signal, where signal = 5
will always rebind the name to the integer 5 instead of feeding 5 into
the signal object.
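For comparison, without any new operators one can already bend an existing augmented operator to get close; here is a hypothetical sketch overloading <<= (class and attribute names invented):

```python
class Signal:
    def __init__(self, init=0):
        self.value = init  # current simulated value
        self.next = init   # value scheduled for the next cycle
    def __ilshift__(self, rhs):
        # 's <<= 5' records the new value instead of rebinding 's'
        self.next = rhs
        return self

s = Signal()
s <<= 5          # feeds 5 into the signal; 's' stays a Signal
assert isinstance(s, Signal) and s.next == 5 and s.value == 0
```

Because __ilshift__ returns self, the local name is rebound to the same object, so even a bare local `s <<= 5` keeps working where plain assignment would clobber the name.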
I have experimented by adding two new python operators, left arrow: <-
and right arrow: ->, whose behaviors users can define, and it
still looks kind of like the non-blocking assignment operators in
hardware description languages (e.g. Verilog's <=). It could also be
used to build data flows like a -> b -> c -> d -> ...
Another side effect is that this "__arrow__" call can fully replace
descriptors and lets you generate descriptor-like behavior at
object initialization time instead of class definition time, e.g. in
__init__(self, ...) you could dynamically create self.abc-like fields which
can then be accessed like obj.abc <- 3 etc. With descriptors this can
currently only be achieved elegantly by using metaclasses.
This is the first time I am writing to python-ideas list, just want to
hear what you guys think: should Python actually allow operators to be
first-class citizens? Does the arrow operator make sense at all? Are there
any other, better ways for users to redefine the "assignment" behavior?
As Cameron Simpson already pointed out, your query is off-topic for the
Python-Dev mailing list and should be taken to the Python-Ideas mailing
list, which is for speculative discussion of new designs.
Like Cameron, I've CCed Python-Ideas. Please send any follow-ups to that
list and not Python-Dev.
You asked this question:
On Tue, May 28, 2019 at 09:35:51PM -0600, Montana Burr wrote:
> Ok, now I'm mildly curious to know:
> What is the justification for causing list == 3 to evaluate to False,
> besides the obvious "a list cannot equal a number"?
I concur with Terry Reedy -- what more justification is needed? A list
cannot equal a number, so the default behaviour ought to be to return
False. What would you have the default behaviour be?
People have already suggested that getting the numpy-style behaviour is
simple with a list comprehension, but the other technique is to subclass
list, override ``__eq__`` and give it the behaviour you want.
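A minimal sketch of that subclassing approach (the class name is invented for the example):

```python
class ElementwiseList(list):
    """A list whose == compares each element, NumPy-style."""
    def __eq__(self, other):
        return [item == other for item in self]
    # Defining __eq__ suppresses inherited hashing; lists are
    # unhashable anyway, so make that explicit.
    __hash__ = None

xs = ElementwiseList([1, 2, 3, 2])
assert (xs == 2) == [False, True, False, True]
```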
This belongs on python-ideas, not python-dev. I've directed replies to
this message there. Comments below.
On 26May2019 21:52, Montana Burr <montana.burr(a)gmail.com> wrote:
>NumPy arrays have this awesome feature, where array == 3 does an
>element-wise comparison and returns a list. For example:
>It would be cool if Python had similar functionality for lists.
map(lambda item: item == 3, [1, 2, 3, 4, 5])
I'm not sure this rates extra Python features.
Personally I'm -1 on this suggestion because == traditionally returns a
Boolean, NumPy notwithstanding. Your example above doesn't return a
Boolean.
>If that is not possible, perhaps we could consider allowing developers
>to overload operators on built-in types within the context of a project or
>module. For example, an overload in one module would have no effect on the
>same operator in a different module (such as any Python standard modules.)
This is usually done by overloading dunder methods on classes. If your
class subclasses a builtin, e.g. int or list, then the instances get the
overridden behaviour.
>Additionally, let's then give the developers the option to explicitly
>import an overload from other modules. So, people could develop a module
>with the purpose of providing overloads that make complete sense within a
>given domain.
If you go the subclass route you could do this with a mixin class (a
class providing methods but little else, intended to be part of the MRO
of a subclass).
Cameron Simpson <cs(a)cskk.id.au>
I was working on bpo20443 <https://bugs.python.org/issue20443> but then I
realized it changes the behavior of the compiler and some functions, so I
want to propose this change here and then write a PEP. I have a draft PR;
it introduces a new future flag and, as far as I can understand from the
future docs, I need a PEP.
Before writing the PEP: is there a core developer who wants to sponsor it?
Reference implementation: https://github.com/python/cpython/pull/13527
GraalPython is a Python implementation on the Truffle VM, by Oracle Labs.
The VM gives Python high-performance interoperability with other languages.
Truffle is a self-optimizing AST interpreter, which I think means that it
executes an interpreter over AST nodes, then compiles and inlines those
nodes to machine code as necessary. By default, the Truffle VM runs with
the Graal JIT enabled; the Graal JIT compiles the AST down to machine code.
significantly more memory than V8, while having lower performance on pure
I don't know if this is already possible in Python through Python's
built-in C extensions. (Since CPython still has to go through its normal
dot-attribute lookup machinery, I'd imagine the performance benefits of
converting a class to a C struct would be small.)
TL;DR: GraalPython is a Python implementation with a JIT, an easy way to
interoperate with C extensions, and an easily extensible AST interpreter.
On Wed, May 22, 2019 at 10:03 PM Yanghao Hua <yanghao.py(a)gmail.com> wrote:
> > To be first-class citizens, operators would have to be able to be
> > passed to functions - for instance:
> > def frob(x, y, oper):
> > return x oper y
> > assert frob(10, 20, +) == 30
> > assert frob(10, 20, *) == 200
> > The nearest Python currently has to this is the "operator" module, in
> > which you'll find function versions of the operators:
> > def frob(x, y, oper):
> > return oper(x, y)
> > assert frob(10, 20, operator.add) == 30
> > assert frob(10, 20, operator.mul) == 200
> > Is this what you're talking about?
> Yes this is exactly what I am talking about .. and this is exactly
> what scala does.
> In scala, you can do "a + b", as well as "a.+(b)", where "+" is just a
> function of object a.
> You know people keep building domain specific languages for specific
> problems, and Scala seems currently the only one that can truly
> enable people to build DSL elegantly. I actually do not know at all
> how to do this in CPython as I just started playing with Python
> internals a few days ago, but having an arrow operator would solve my
> immediate problem in terms of using Python to build a DSL for hardware
> design (elegantly!).
> Does this make sense?
Yes, it does make sense. Forgive my lack of Scala knowledge, but is it
possible for 'b' in your example to be the one that handles the
addition? Specific case:
class Seven:
    def __add__(self, other): return other + 7
    def __radd__(self, other): return 7 + other

seven = Seven()
print(seven + 1)  # 8
print(2 + seven)  # 9
This works in Python even though 2.+(seven) wouldn't know how to
handle this object. So it returns NotImplemented, and Python says, oh
okay, maybe the other object knows how to do this. (That's what
__radd__ is for.) This is why Python's operators are all defined by
the language, with multiple levels of protocol.