There's a whole matrix of these and I'm wondering why the matrix is
currently sparse rather than implementing them all. Or rather, why we
can't stack them as:
class foo(object):
    @classmethod
    @property
    def bar(cls, ...):
        ...
Essentially the permutations are, I think:
{'unadorned'|abc.abstract}{'normal'|static|class}{method|property|non-callable attribute}.
concreteness | implicit first arg | type | name | comments
{unadorned} | {unadorned} | method | def foo(): | exists now
{unadorned} | {unadorned} | property | @property | exists now
{unadorned} | {unadorned} | non-callable attribute | x = 2 | exists now
{unadorned} | static | method | @staticmethod | exists now
{unadorned} | static | property | @staticproperty | proposing
{unadorned} | static | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
{unadorned} | class | method | @classmethod | exists now
{unadorned} | class | property | @classproperty or @classmethod;@property | proposing
{unadorned} | class | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
abc.abstract | {unadorned} | method | @abc.abstractmethod | exists now
abc.abstract | {unadorned} | property | @abc.abstractproperty | exists now
abc.abstract | {unadorned} | non-callable attribute | @abc.abstractattribute or @abc.abstract;@attribute | proposing
abc.abstract | static | method | @abc.abstractstaticmethod | exists now
abc.abstract | static | property | @abc.abstractstaticproperty | proposing
abc.abstract | static | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
abc.abstract | class | method | @abc.abstractclassmethod | exists now
abc.abstract | class | property | @abc.abstractclassproperty | proposing
abc.abstract | class | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
I think the meanings of the new ones are pretty straightforward, but in
case they are not...
@staticproperty - like @property only without an implicit first
argument. Allows the property to be called directly from the class
without requiring a throw-away instance.
@classproperty - like @property, only the implicit first argument to the
method is the class. Allows the property to be called directly from the
class without requiring a throw-away instance.
@abc.abstractattribute - a simple, non-callable variable that must be
overridden in subclasses
@abc.abstractstaticproperty - like @abc.abstractproperty only for
@staticproperty
@abc.abstractclassproperty - like @abc.abstractproperty only for
@classproperty
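Stacking @classmethod and @property as sketched at the top doesn't produce a
working property today, but a small descriptor can emulate the intended
@classproperty semantics. This is a rough illustrative sketch of the stated
behavior, not a proposed implementation:

import math

class classproperty:
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, objtype=None):
        # Pass the class, not the instance, as the implicit first argument.
        return self.fget(objtype)

class Circle:
    _unit_radius = 1

    @classproperty
    def unit_area(cls):
        return math.pi * cls._unit_radius ** 2

print(Circle.unit_area)    # accessible directly on the class, no instance
print(Circle().unit_area)  # and the same value via an instance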
--rich
I didn't think of this when we were discussing 448. I ran into this today,
so I agree with you that it would be nice to have this.
Best,
Neil
On Monday, December 4, 2017 at 1:02:09 AM UTC-5, Eric Wieser wrote:
>
> Hi,
>
> I've been thinking about the * unpacking operator while writing some numpy
> code. PEP 448 allows the following:
>
> values = 1, *some_tuple, 2
> object[(1, *some_tuple, 2)]
>
> It seems reasonable to me that it should be extended to allow
>
> item = object[1, *some_tuple, 2]
> item = object[1, *some_tuple, :]
>
> Was this overlooked in the original proposal, or deliberately rejected?
>
> Eric
>
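For context (my illustration, not from the quoted mail): with explicit
parentheses the tuple form already works under PEP 448, so the first request
is only about dropping the parentheses inside the subscript; the slice form
has no parenthesized workaround at all, since a bare : is legal only directly
inside a subscript:

d = {(0, 1, 2): "hello"}
idx = (0, 1)

# Works today: build the tuple explicitly, then subscript with it.
print(d[(*idx, 2)])        # same as d[0, 1, 2]

# The quoted proposal would additionally allow:
#   item = d[*idx, 2]
# and, with no existing equivalent:
#   item = obj[1, *idx, :]  # ':' cannot appear in a tuple display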
At the moment, the array module of the standard library allows one to
create arrays of different numeric types and to initialize them from
an iterable (eg, another array).
What's missing is the possibility to specify the final size of the
array (number of items), especially for large arrays.
I'm thinking of suffix arrays (a text indexing data structure) for
large texts, eg the human genome and its reverse complement (about 6
billion characters from the alphabet ACGT).
The suffix array is a long int array of the same size (8 bytes per
number, so it occupies about 48 GB memory).
At the moment I am extending an array in chunks of several million
items at a time, which is slow and not elegant.
The function below also initializes each item in the array to a given
value (0 by default).
Is there a reason why the array.array constructor does not allow one
to simply specify the number of items that should be allocated? (I do
not really care about the contents.)
Would this be a worthwhile addition to / modification of the array module?
My suggestion is to modify array creation in such a way that you
could pass an iterable (as now) as the second argument, but if you pass
a single integer value, it would be treated as the number of items to
allocate.
Here is my current workaround (which is slow):
import array

def filled_array(typecode, n, value=0, bsize=(1 << 22)):
    """Return a new array with the given typecode
    (eg, "l" for long int, as in the array module)
    with n entries, initialized to the given value (default 0).
    """
    # Build one pre-filled chunk, then extend the result chunk by chunk.
    a = array.array(typecode, [value] * bsize)
    x = array.array(typecode)
    r = n
    while r >= bsize:
        x.extend(a)
        r -= bsize
    x.extend([value] * r)
    return x
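Incidentally, for the common zero-filled case there is a faster workaround
than the chunked loop above (my observation, not from the original mail):
the array constructor accepts a bytes object of machine values, and bytes()
is zero-initialized:

import array

def zero_filled_array(typecode, n):
    # bytes(k) is zero-filled, so this allocates all n items in one step.
    itemsize = array.array(typecode).itemsize
    return array.array(typecode, bytes(n * itemsize))

sa = zero_filled_array('q', 10**6)  # one million 8-byte zeros

This still doesn't cover arbitrary fill values, so a size argument in the
constructor would remain useful.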
The proposed implementation of dataclasses prevents defining fields with
defaults before fields without defaults. This can create limitations on
logical grouping of fields and on inheritance.
Take, for example, the case:
@dataclass
class Foo:
    some_default: dict = field(default_factory=dict)

@dataclass
class Bar(Foo):
    other_field: int
this results in the error:
      5 @dataclass
----> 6 class Bar(Foo):
      7     other_field: int
      8

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in dataclass(_cls, init, repr, eq, order, hash, frozen)
    751
    752     # We're called as @dataclass, with a class.
--> 753     return wrap(_cls)
    754
    755

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in wrap(cls)
    743
    744     def wrap(cls):
--> 745         return _process_class(cls, repr, eq, order, hash, init, frozen)
    746
    747     # See if we're being called as @dataclass or @dataclass().

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in _process_class(cls, repr, eq, order, hash, init, frozen)
    675                          # in __init__.  Use "self" if possible.
    676                          '__dataclass_self__' if 'self' in fields
--> 677                          else 'self',
    678                   ))
    679     if repr:

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in _init_fn(fields, frozen, has_post_init, self_name)
    422             seen_default = True
    423         elif seen_default:
--> 424             raise TypeError(f'non-default argument {f.name!r} '
    425                             f'follows default argument')
    426

TypeError: non-default argument 'other_field' follows default argument
I understand that this is a limitation of positional arguments because the
effective __init__ signature is:
def __init__(self, some_default: dict = <something>, other_field: int):
However, keyword only arguments allow an entirely reasonable solution to
this problem:
def __init__(self, *, some_default: dict = <something>, other_field: int):
And have the added benefit of making the fields in the __init__ call
entirely explicit.
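To make that concrete, here is a hand-written version of such a
keyword-only __init__ (my sketch, not generated code; the real proposal
would of course reuse default_factory):

class Bar:
    # Roughly the __init__ a keyword_only=True dataclass could emit.
    def __init__(self, *, some_default=None, other_field):
        self.some_default = {} if some_default is None else some_default
        self.other_field = other_field

b = Bar(other_field=3)  # fine: defaults may now precede non-defaults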
So, I propose the addition of a keyword_only flag to the @dataclass
decorator that renders the __init__ method using keyword only arguments:
@dataclass(keyword_only=True)
class Bar(Foo):
    other_field: int
--George Leslie-Waksman
Good day all,
as a continuation of thread "OS related file operations (copy, move,
delete, rename...) should be placed into one module"
https://mail.python.org/pipermail/python-ideas/2017-January/044217.html
please consider making pathlib the central file system module by putting
file operations (copy, move, delete, rmtree, etc.) into pathlib.
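To illustrate the shape of the proposal (the pathlib method names below are
hypothetical, my own illustration):

from pathlib import Path
import shutil

src = Path("data/in.txt")
dst = Path("backup/in.txt")

# Today: the paths live in pathlib, but the operations live in shutil/os.
shutil.copy2(src, dst)
shutil.rmtree("old_build")

# With the proposal, something like this could work instead
# (hypothetical method names):
#   src.copy(dst)
#   Path("old_build").rmtree()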
BR,
George
Hi folks,
I normally wouldn't bring something like this up here, except I think
that there is a possibility of something to be done--a language
documentation clarification if nothing else, though possibly an actual
code change as well.
I've been having an argument with a colleague over the last couple of
days about the proper order of statements when setting up a
try/finally to perform cleanup of some action. On some level we're
both being stubborn I think, and I'm not looking for resolution as to
who's right/wrong or I wouldn't bring it to this list in the first
place. The original argument was over setting and later restoring
os.environ, but we ended up arguing over
threading.Lock.acquire/release which I think is a more interesting
example of the problem, and he did raise a good point that I do want
to bring up.
</prologue>
My colleague's contention is that given
lock = threading.Lock()
this is simply *wrong*:
lock.acquire()
try:
    do_something()
finally:
    lock.release()
whereas this is okay:
with lock:
    do_something()
Ignoring other details of how threading.Lock is actually implemented,
assuming that Lock.__enter__ calls acquire() and Lock.__exit__ calls
release() then as far as I've known ever since Python 2.5 first came
out these two examples are semantically *equivalent*, and I can't find
any way of reading PEP 343 or the Python language reference that would
suggest otherwise.
However, there *is* a difference, and it has to do with how signals are
handled, particularly w.r.t. context managers implemented in C (hence
we are talking CPython specifically):
If Lock.__enter__ is a pure Python method (even if it maybe calls some
C methods), and a SIGINT is handled during execution of that method,
then in almost all cases a KeyboardInterrupt exception will be raised
from within Lock.__enter__--this means the suite under the with:
statement is never evaluated, and Lock.__exit__ is never called. You
can be fairly sure the KeyboardInterrupt will be raised from somewhere
within a pure Python Lock.__enter__ because there will usually be at
least one remaining opcode to be evaluated, such as RETURN_VALUE.
Because of how delayed execution of signal handlers is implemented in
the pyeval main loop, this means the signal handler for SIGINT will be
called *before* RETURN_VALUE, resulting in the KeyboardInterrupt
exception being raised. Standard stuff.
However, if Lock.__enter__ is a PyCFunction things are quite
different. If you look at how the SETUP_WITH opcode is implemented,
it first calls the __enter__ method with _PyObject_CallNoArg. If this
returns NULL (i.e. an exception occurred in __enter__) then "goto
error" is executed and the exception is raised. However if it returns
non-NULL the finally block is set up with PyFrame_BlockSetup and
execution proceeds to the next opcode. At this point a potentially
waiting SIGINT is handled, resulting in KeyboardInterrupt being raised
while inside the with statement's suite, so the finally block, and
hence Lock.__exit__, is entered.
Long story short, because Lock.__enter__ is a C function, assuming
that it succeeds normally then
with lock:
    do_something()
always guarantees that Lock.__exit__ will be called if a SIGINT was
handled inside Lock.__enter__, whereas with
lock.acquire()
try:
    ...
finally:
    lock.release()
there is at least a small possibility that the SIGINT handler is called
after the CALL_FUNCTION op but before the try/finally block is entered
(e.g. before executing POP_TOP or SETUP_FINALLY). So the end result
is that the lock is held and never released after the
KeyboardInterrupt (whether or not it's handled somehow).
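To see the window concretely, here is a sketch using dis (my illustration
of the argument above; the exact opcodes vary by CPython version):

import dis
import threading

lock = threading.Lock()

def do_something():
    pass

def critical():
    lock.acquire()
    try:
        do_something()
    finally:
        lock.release()

dis.dis(critical)
# The relevant window looks roughly like:
#   CALL_FUNCTION 0   (the lock.acquire() call returns)
#   POP_TOP           <-- a SIGINT handled here leaves the lock held
#   SETUP_FINALLY     <-- only from here on is lock.release() guaranteed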
Whereas, again, if Lock.__enter__ is a pure Python function there's
less likely to be any difference (though I don't think the possibility
can be ruled out entirely).
At the very least I think this quirk of CPython should be mentioned
somewhere (since in all other cases the semantic meaning of the
"with:" statement is clear). However, I think it might be possible to
gain more consistency between these cases if pending signals are
checked/handled after any direct call to PyCFunction from within the
ceval loop.
Sorry for the tl;dr; any thoughts?
Hi,
For technical reasons, many functions of the Python standard libraries
implemented in C have positional-only parameters. Example:
-------
$ ./python
Python 3.7.0a0 (default, Feb 25 2017, 04:30:32)
>>> help(str.replace)
replace(self, old, new, count=-1, /) # <== notice "/" at the end
...
>>> "a".replace("x", "y") # ok
'a'
>>> "a".replace(old="x", new="y") # ERR!
TypeError: replace() takes at least 2 arguments (0 given)
-------
When converting the methods of the builtin str type to the internal
"Argument Clinic" tool (tool to generate the function signature,
function docstring and the code to parse arguments in C), I asked if
we should add support for keyword arguments in str.replace(). The
answer was quick: no! It's a deliberate design choice.
Quote of Yury Selivanov's message:
"""
I think Guido explicitly stated that he doesn't like the idea to
always allow keyword arguments for all methods. I.e. `str.find('aaa')`
just reads better than `str.find(needle='aaa')`. Essentially, the idea
is that for most of the builtins that accept one or two arguments,
positional-only parameters are better.
"""
http://bugs.python.org/issue29286#msg285578
I just noticed a module on PyPI to implement this behaviour on Python functions:
https://pypi.python.org/pypi/positional
My question is: would it make sense to implement this feature in
Python directly? If yes, what should be the syntax? Use "/" marker?
Use the @positional() decorator?
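As a sketch of the decorator option (my own illustration; the PyPI module
linked above has its own API):

import functools

def positional(max_pos):
    # Reject keyword use of the first max_pos parameters (sketch only).
    def decorator(func):
        names = func.__code__.co_varnames[:max_pos]
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bad = [n for n in names[len(args):] if n in kwargs]
            if bad:
                raise TypeError(f"{func.__name__}() got positional-only "
                                f"arguments passed as keywords: {bad}")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@positional(2)
def replace(old, new, count=-1):
    return (old, new, count)

replace("x", "y")            # ok
# replace(old="x", new="y")  # would raise TypeError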
Do you see concrete cases where it's a deliberate choice to deny
passing arguments as keywords?
Don't you like writing int(x="123") instead of int("123")? :-) (I know
that Serhiy Storchaka hates the name of the "x" parameter of the int
constructor ;-))
By the way, I read that the "/" marker is unknown to almost all Python
developers, and that the [...] syntax should be preferred, but
inspect.signature() doesn't support this syntax. Maybe we should fix
signature() and use the [...] format instead?
Replace "replace(self, old, new, count=-1, /)" with "replace(self,
old, new[, count=-1])" (or maybe even not document the default
value?).
Python 3.5 help (docstring) uses "S.replace(old, new[, count])".
Victor
I just spent a few minutes staring at a bug caused by a missing comma
-- I got a mysterious argument count error because instead of foo('a',
'b') I had written foo('a' 'b').
This is a fairly common mistake, and IIRC at Google we even had a lint
rule against this (there was also a Python dialect used for some
specific purpose where this was explicitly forbidden).
Now, with modern compiler technology, we can (and in fact do) evaluate
compile-time string literal concatenation with the '+' operator, so
there's really no reason to support 'a' 'b' any more. (The reason was
always rather flimsy; I copied it from C but the reason why it's
needed there doesn't really apply to Python, as it is mostly useful
inside macros.)
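Indeed, CPython's peephole optimizer already folds '+' on string literals at
compile time, so the two spellings produce identical bytecode (a quick check;
the exact disassembly varies by version):

import dis

dis.dis(compile("x = 'a' + 'b'", "<s>", "exec"))
# Shows a single LOAD_CONST 'ab' -- the same constant that the
# implicit concatenation x = 'a' 'b' produces.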
Would it be reasonable to start deprecating this and eventually remove
it from the language?
--
--Guido van Rossum (python.org/~guido)
This is a suggestion that comes up periodically here or on python-dev.
This proposal introduces a way to bind a temporary name to the value
of an expression, which can then be used elsewhere in the current
statement.
The nicely-rendered version will be visible here shortly:
https://www.python.org/dev/peps/pep-0572/
ChrisA
PEP: 572
Title: Syntax for Statement-Local Name Bindings
Author: Chris Angelico <rosuav(a)gmail.com>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 28-Feb-2018
Python-Version: 3.8
Post-History: 28-Feb-2018
Abstract
========
Programming is all about reusing code rather than duplicating it. When
an expression needs to be used twice in quick succession but never again,
it is convenient to assign it to a temporary name with very small scope.
By permitting name bindings to exist within a single statement only, we
make this both convenient and safe against collisions.
Rationale
=========
When an expression is used multiple times in a list comprehension, there
are currently several suboptimal ways to spell this, and no truly good
ways. A statement-local name allows any expression to be temporarily
captured and then used multiple times.
Syntax and semantics
====================
In any context where arbitrary Python expressions can be used, a named
expression can appear. This must be parenthesized for clarity, and is of
the form `(expr as NAME)` where `expr` is any valid Python expression,
and `NAME` is a simple name.
The value of such a named expression is the same as the incorporated
expression, with the additional side-effect that NAME is bound to that
value for the remainder of the current statement.
Just as function-local names shadow global names for the scope of the
function, statement-local names shadow other names for that statement.
They can also shadow each other, though actually doing this should be
strongly discouraged in style guides.
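For instance, a statement-local binding shadows an outer name only until
the end of the statement (an illustrative sketch of the proposed syntax)::

    x = 'outer'
    print((42 as x), x)   # both are 42 inside this statement
    print(x)              # back to 'outer'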
Example usage
=============
These list comprehensions are all approximately equivalent::
    # Calling the function twice
    stuff = [[f(x), f(x)] for x in range(5)]

    # Helper function
    def pair(value): return [value, value]
    stuff = [pair(f(x)) for x in range(5)]

    # Inline helper function
    stuff = [(lambda v: [v,v])(f(x)) for x in range(5)]

    # Extra 'for' loop - see also Serhiy's optimization
    stuff = [[y, y] for x in range(5) for y in [f(x)]]

    # Expanding the comprehension into a loop
    stuff = []
    for x in range(5):
        y = f(x)
        stuff.append([y, y])

    # Using a statement-local name
    stuff = [[(f(x) as y), y] for x in range(5)]
If calling `f(x)` is expensive or has side effects, the clean operation of
the list comprehension gets muddled. Using a short-duration name binding
retains the simplicity; while the extra `for` loop does achieve this, it
does so at the cost of dividing the expression visually, putting the named
part at the end of the comprehension instead of the beginning.
Statement-local name bindings can be used in any context, but should be
avoided where regular assignment can be used, just as `lambda` should be
avoided when `def` is an option.
Open questions
==============
1. What happens if the name has already been used? `(x, (1 as x), x)`
Currently, prior usage functions as if the named expression did not
exist (following the usual lookup rules); the new name binding will
shadow the other name from the point where it is evaluated until the
end of the statement. Is this acceptable? Should it raise a syntax
error or warning?
2. The current implementation [1] implements statement-local names using
a special (and mostly-invisible) name mangling. This works perfectly
inside functions (including list comprehensions), but not at top
level. Is this a serious limitation? Is it confusing?
3. The interaction with locals() is currently[1] slightly buggy. Should
statement-local names appear in locals() while they are active (and
shadow any other names from the same function), or should they simply
not appear?
4. Syntactic confusion in `except` statements. While technically
unambiguous, it is potentially confusing to humans. In Python 3.7,
parenthesizing `except (Exception as e):` is illegal, and there is no
reason to capture the exception type (as opposed to the exception
instance, as is done by the regular syntax). Should this be made
outright illegal, to prevent confusion? Can it be left to linters?
5. Similar confusion in `with` statements, with the difference that there
is good reason to capture the result of an expression, and it is also
very common for `__enter__` methods to return `self`. In many cases,
`with expr as name:` will do the same thing as `with (expr as name):`,
adding to the confusion.
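   For instance (an illustrative sketch; `Lock.__enter__` returns the
   result of `acquire()`, i.e. True)::

       import threading
       lock = threading.Lock()
       with lock as a:       # regular form: a is True
           ...
       with (lock as a):     # proposed form: a is the lock object itself
           ...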
References
==========
.. [1] Proof of concept / reference implementation
(https://github.com/Rosuav/cpython/tree/statement-local-variables)
Copyright
=========
This document has been placed in the public domain.
..
Local Variables:
mode: indented-text
indent-tabs-mode: nil
sentence-end-double-space: t
fill-column: 70
coding: utf-8
End:
That should probably be its own thread
From: Python-ideas [mailto:python-ideas-bounces+tritium-list=sdamon.com@python.org] On Behalf Of Robert Vanden Eynde
Sent: Wednesday, February 28, 2018 4:48 PM
Cc: python-ideas <python-ideas(a)python.org>
Subject: Re: [Python-ideas] PEP 572: Statement-Local Name Bindings
We are currently a dozen people talking about multiple sections of a single subject.
Isn't it easier to talk on a forum?
Am I the only one who thinks a mailing list isn't easy when lots of people are talking about multiple subjects?
Of course we would put the link in the mailing list so that everyone can join.
A forum (or just a few "issues" threads on GitHub) is where we could have different threads in parallel; in my messages I end up with like 10 comments that are not all related, whereas on a forum we could talk about everything and it would still be organized by subject.
Also, it's more interactive than email on a global list: people can talk to each other in parallel, and if I want to answer a mail that was 10 mails ago, it gets messy quickly.
We could all discuss on a gist or some "Issues" thread on GitHub.
2018-02-28 22:38 GMT+01:00 Robert Vanden Eynde <robertve92(a)gmail.com>:
On 28 Feb 2018 11:43, "Chris Angelico" <rosuav(a)gmail.com> wrote:
> It's still right-to-left, which is as bad as middle-outward once you
> combine it with normal left-to-right evaluation. Python has very
> little of this [..]
I agree [....]
>> 2) talking about the implementation of thektulu in the "where =" part.
> ?
In the Alternate Syntax, I was talking about adding a link to the thektulu (branch where-expr) <https://github.com/thektulu/cpython/commits/where-expr>
implementation as a basis of proof of concept (as you did with the other syntax).
>> 3) "C problem that an equals sign in an expression can now create a name binding, rather than performing a comparison."
As you agreed, with the "ch with ch = getch()" syntax we won't accidentally switch a "==" for a "=".
I agree that this syntax:
```
while (ch with ch = getch()):
    ...
```
doesn't read very well, but neither does the C or Java while (ch = getch()) {} syntax, or the even worse ((ch = getch()) != null).
Your syntax "while (getch() as ch):" may have fewer words, but it is still not clearer.
As we spoke on Github, having this syntax in a while is only useful if the variable does leak.
>> 5) Any expression vs "post for" only
> I don't know what the benefit is here, but sure. As long as the
> grammar is unambiguous, I don't see any particular reason to reject
> this.
I would like to see a discussion of the pros and cons; some might think like me or disagree, it's an important language question.
>> 6) with your syntax, how does the simple case work (y+2 with y = x+1)?
> What simple case? The case where you only use the variable once? I'd
> write it like this:
> (x + 1) + 2
>> The issue is not only about reusing variables.
> If you aren't using the variable multiple times, there's no point
> giving it a name. Unless I'm missing something here?
Yes, variables are not there just because we reuse them; we also introduce temporary variables to make the code easier to understand.
Same for functions: you could inline functions that are used only once, but you introduce them for clarity, no?
```
a = v ** 2 / R # the acceleration in a circular motion
f = m * a # law of Newton
```
could be written as
```
f = m * (v ** 2 / R) # compute the force, trivial
```
But having temporary variables helps a lot in understanding the code; otherwise why would we create temporary variables at all?
I can give you an example of a multi-step process where each variable is used only once.
>> 8)
>> (lambda y: [y, y])(x+1)
>> Vs
>> (lambda y: [y, y])(y=x+1)
> Ewww. Remind me what the benefit is of writing the variable name that
> many times? "Explicit" doesn't mean "utterly verbose".
Yep, it's verbose; lambdas are verbose, that's why we created this PEP, isn't it :)
>> 10) Chaining, in the case of the "with =", in thektulu, parentheses were
>> mandatory:
>>
>> print((z+3 with z = y+2) with y = x+2)
>>
>> What happens when the parentheses are dropped?
>>
>> print(z+3 with y = x+2 with z = y+2)
>>
>> Vs
>>
>> print(z+3 with y = x+2 with z = y+2)
>>
>> I prefer the first one because it's in the same order as the "post for"
>>
>> [z + 3 for y in [ x+2 ] for z in [ y+2 ]]
> With my proposal, the parens are simply mandatory. Extending this to
> make them optional can come later.
Indeed, but those are still questions that can be asked.
>> 11) Scoping, in the case of the "with =" syntax, I think the parentheses
>> introduce a scope:
>>
>> print(y + (y+1 where y = 2))
>>
>> Would raise a SyntaxError; it's probably better for the variable to be
>> local and not in the current function (that would be a mess).
>>
>> Remember that in list comp, the variable is not leaked :
>>
>> x = 5
>> stuff = [y+2 for y in [x+1]]
>> print(y) # SyntaxError
> Scoping is a fundamental part of both my proposal and the others I've
> seen here. (BTW, that would be a NameError, not a SyntaxError; it's
> perfectly legal to ask for the name 'y', it just hasn't been given any
> value.) By my definition, the variable is locked to the statement that
> created it, even if that's a compound statement. By the definition of
> a "(expr given var = expr)" proposal, it would be locked to that
> single expression.
Confer the discussion on scoping on github (https://github.com/python/peps/commit/2b4ca20963a24cf5faac054226857ea970547…) :
"""
In the current implementation it looks like it is like a regular assignment (function local then).
Therefore in the expression usage, the usefulness would be debatable (you could just assign beforehand).
But in a list comprehension after the for (as I mentioned in my mail), i.e. when used as a replacement for "for y in [ x + 1 ]", this would make sense.
But I think that it would be much better to have a local scope inside the parentheses, so that print(y+2 where y = x + 1) wouldn't leak y. And when there are no parentheses, as in a = y+2 where y = x+1, it would imply one, giving the same effect as a = (y+2 where y = x+1). Moreover, it would naturally shadow variables in the outermost scope.
This would imply that while data where data = sock.read(): does not leak data; by comparison, the C and Java syntax while ((data = sock.read()) != null) is really, really ugly and confusing.
"""