There's a whole matrix of these and I'm wondering why the matrix is
currently sparse rather than implementing them all. Or rather, why we
can't stack them as:
class foo(object):
    @classmethod
    @property
    def bar(cls, ...):
        ...
Essentially the permutations are, I think:
{'unadorned' | abc.abstract} x {'normal' | static | class} x {method | property | non-callable attribute}
concreteness | implicit first arg | type                   | name                                               | comments
-------------|--------------------|------------------------|----------------------------------------------------|------------
{unadorned}  | {unadorned}        | method                 | def foo():                                         | exists now
{unadorned}  | {unadorned}        | property               | @property                                          | exists now
{unadorned}  | {unadorned}        | non-callable attribute | x = 2                                              | exists now
{unadorned}  | static             | method                 | @staticmethod                                      | exists now
{unadorned}  | static             | property               | @staticproperty                                    | proposing
{unadorned}  | static             | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
{unadorned}  | class              | method                 | @classmethod                                       | exists now
{unadorned}  | class              | property               | @classproperty or @classmethod;@property           | proposing
{unadorned}  | class              | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
abc.abstract | {unadorned}        | method                 | @abc.abstractmethod                                | exists now
abc.abstract | {unadorned}        | property               | @abc.abstractproperty                              | exists now
abc.abstract | {unadorned}        | non-callable attribute | @abc.abstractattribute or @abc.abstract;@attribute | proposing
abc.abstract | static             | method                 | @abc.abstractstaticmethod                          | exists now
abc.abstract | static             | property               | @abc.abstractstaticproperty                        | proposing
abc.abstract | static             | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
abc.abstract | class              | method                 | @abc.abstractclassmethod                           | exists now
abc.abstract | class              | property               | @abc.abstractclassproperty                         | proposing
abc.abstract | class              | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
I think the meanings of the new ones are pretty straightforward, but in
case they are not...
@staticproperty - like @property only without an implicit first
argument. Allows the property to be called directly from the class
without requiring a throw-away instance.
@classproperty - like @property, only the implicit first argument to the
method is the class. Allows the property to be called directly from the
class without requiring a throw-away instance.
@abc.abstractattribute - a simple, non-callable variable that must be
overridden in subclasses
@abc.abstractstaticproperty - like @abc.abstractproperty only for
@staticproperty
@abc.abstractclassproperty - like @abc.abstractproperty only for
@classproperty
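For what it's worth, here is a minimal sketch of how @classproperty could be built as a read-only descriptor today (the names are illustrative, not an existing API):

class classproperty:
    """Like @property, but the getter receives the class."""
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, objtype=None):
        if objtype is None:
            objtype = type(obj)
        return self.fget(objtype)

class Foo:
    @classproperty
    def bar(cls):
        return cls.__name__.upper()

print(Foo.bar)     # 'FOO' -- no throw-away instance needed
print(Foo().bar)   # 'FOO' -- works through instances too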
--rich
I didn't think of this when we were discussing 448. I ran into this today,
so I agree with you that it would be nice to have this.
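(For context, the explicit-tuple spelling already works today, so the proposal is essentially about dropping the parentheses. A quick stdlib-only check, using a hypothetical Grid container for illustration:)

class Grid:
    def __getitem__(self, key):
        return key   # echo the key to show what the subscript produced

some_tuple = (1, 2)
g = Grid()
print(g[(0, *some_tuple, 3)])   # works now: tuple display -> (0, 1, 2, 3)
# print(g[0, *some_tuple, 3])   # the proposal: currently a SyntaxError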
Best,
Neil
On Monday, December 4, 2017 at 1:02:09 AM UTC-5, Eric Wieser wrote:
>
> Hi,
>
> I've been thinking about the * unpacking operator while writing some numpy
> code. PEP 448 allows the following:
>
> values = 1, *some_tuple, 2
> object[(1, *some_tuple, 2)]
>
> It seems reasonable to me that it should be extended to allow
>
> item = object[1, *some_tuple, 2]
> item = object[1, *some_tuple, :]
>
> Was this overlooked in the original proposal, or deliberately rejected?
>
> Eric
>
At the moment, the array module of the standard library allows one to
create arrays of different numeric types and to initialize them from
an iterable (eg, another array).
What's missing is the possibility to specify the final size of the
array (number of items), especially for large arrays.
I'm thinking of suffix arrays (a text indexing data structure) for
large texts, eg the human genome and its reverse complement (about 6
billion characters from the alphabet ACGT).
The suffix array is a long int array of the same size (8 bytes per
number, so it occupies about 48 GB of memory).
At the moment I am extending an array in chunks of several million
items at a time, which is slow and not elegant.
The function below also initializes each item in the array to a given
value (0 by default).
Is there a reason why the array.array constructor does not allow one
to simply specify the number of items that should be allocated? (I do
not really care about the contents.)
Would this be a worthwhile addition to / modification of the array module?
My suggestion is to modify array construction in such a way that you
could pass an iterable (as now) as the second argument, but if you
pass a single integer value, it is treated as the number of items to
allocate.
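Under the proposal, the suffix-array allocation would reduce to a single call (hypothetical semantics, not an existing API):

import array

# Proposed: an int second argument would mean "allocate this many items",
# not "initialize from this iterable".
sa = array.array("l", 6_000_000_000)   # about 48 GB for 8-byte long ints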
Here is my current workaround (which is slow):
import array

def filled_array(typecode, n, value=0, bsize=(1 << 22)):
    """Return a new array with the given typecode
    (eg, "l" for long int, as in the array module)
    with n entries, initialized to the given value (default 0).
    """
    a = array.array(typecode, [value] * bsize)   # reusable chunk of bsize items
    x = array.array(typecode)
    r = n
    while r >= bsize:        # append whole chunks while they fit
        x.extend(a)
        r -= bsize
    x.extend([value] * r)    # then the remainder
    return x
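(As an aside, when zero-initialization is enough, a much faster workaround already exists: build the array from a pre-sized zero-filled bytes object. zero_filled_array is a hypothetical helper name:)

import array

def zero_filled_array(typecode, n):
    # bytes(k) allocates k zero bytes in one step; array.array then
    # reinterprets them as n items of the given typecode.
    itemsize = array.array(typecode).itemsize
    return array.array(typecode, bytes(n * itemsize))

a = zero_filled_array("l", 10**6)   # one million zeroed long ints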
The proposed implementation of dataclasses prevents defining fields with
defaults before fields without defaults. This can create limitations on
logical grouping of fields and on inheritance.
Take, for example, the case:
@dataclass
class Foo:
    some_default: dict = field(default_factory=dict)

@dataclass
class Bar(Foo):
    other_field: int
this results in the error:
      5 @dataclass
----> 6 class Bar(Foo):
      7     other_field: int
      8

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in dataclass(_cls, init, repr, eq, order, hash, frozen)
    751
    752     # We're called as @dataclass, with a class.
--> 753     return wrap(_cls)
    754
    755

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in wrap(cls)
    743
    744     def wrap(cls):
--> 745         return _process_class(cls, repr, eq, order, hash, init, frozen)
    746
    747     # See if we're being called as @dataclass or @dataclass().

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in _process_class(cls, repr, eq, order, hash, init, frozen)
    675                                  # in __init__.  Use "self" if possible.
    676                                  '__dataclass_self__' if 'self' in fields
--> 677                                  else 'self',
    678                                  ))
    679     if repr:

~/.pyenv/versions/3.6.2/envs/clover_pipeline/lib/python3.6/site-packages/dataclasses.py in _init_fn(fields, frozen, has_post_init, self_name)
    422             seen_default = True
    423         elif seen_default:
--> 424             raise TypeError(f'non-default argument {f.name!r} '
    425                             'follows default argument')
    426

TypeError: non-default argument 'other_field' follows default argument
I understand that this is a limitation of positional arguments because the
effective __init__ signature is:
def __init__(self, some_default: dict = <something>, other_field: int):
However, keyword only arguments allow an entirely reasonable solution to
this problem:
def __init__(self, *, some_default: dict = <something>, other_field: int):
And have the added benefit of making the fields in the __init__ call
entirely explicit.
So, I propose the addition of a keyword_only flag to the @dataclass
decorator that renders the __init__ method using keyword only arguments:
@dataclass(keyword_only=True)
class Bar(Foo):
    other_field: int
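With such a flag (keyword_only is the name proposed here, not an existing parameter), construction would presumably look like:

bar = Bar(other_field=1)                         # some_default takes its default
bar = Bar(some_default={"a": 1}, other_field=2)  # all fields passed by keyword
bar = Bar(1)                                     # TypeError: positional arguments disallowed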
--George Leslie-Waksman
Good day all,
as a continuation of thread "OS related file operations (copy, move,
delete, rename...) should be placed into one module"
https://mail.python.org/pipermail/python-ideas/2017-January/044217.html
please consider making pathlib the central file system module by moving
file operations (copy, move, delete, rmtree, etc.) into pathlib.
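(To illustrate the current split, a simple copy-then-delete today has to mix two modules; the paths are made up:)

from pathlib import Path
import shutil

src = Path("data/report.txt")
dst = Path("backup/report.txt")
shutil.copy2(src, dst)   # copying lives in shutil...
src.unlink()             # ...while deleting a file is already a Path method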
BR,
George
> On Friday, April 6, 2018 at 8:14:30 AM UTC-7, Guido van Rossum wrote:
> On Fri, Apr 6, 2018 at 7:47 AM, Peter O'Connor <peter.ed...(a)gmail.com> wrote:
>> So some more humble proposals would be:
>>
>> 1) An initializer to itertools.accumulate
>> functools.reduce already has an initializer; I can't see any controversy in adding an initializer to itertools.accumulate
>
> See if that's accepted in the bug tracker.
It did come up once but was closed for a number of reasons, including lack of use cases. However, Peter's signal processing example does sound interesting, so we could re-open the discussion.
For those who want to think through the pluses and minuses, I've put together a Q&A as food for thought (see below). Everybody's design instincts are different -- I'm curious what you all think about the proposal.
Raymond
---------------------------------------------
Q. Can it be done?
A. Yes, it wouldn't be hard.
import operator

_sentinel = object()

def accumulate(iterable, func=operator.add, start=_sentinel):
    it = iter(iterable)
    if start is _sentinel:
        try:
            total = next(it)
        except StopIteration:
            return
    else:
        total = start
    yield total
    for element in it:
        total = func(total, element)
        yield total
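For illustration, with this sketch:

>>> list(accumulate([1, 2, 3], start=10))
[10, 11, 13, 16]
>>> list(accumulate([], start=10))   # a start value is emitted even for empty input
[10]
>>> list(accumulate([1, 2, 3]))      # current behavior, unchanged
[1, 3, 6]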
Q. Do other languages do it?
A. Numpy, no. R, no. APL, no. Mathematica, no. Haskell, yes.
* http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.accumulate.…
* https://stat.ethz.ch/R-manual/R-devel/library/base/html/cumsum.html
* http://microapl.com/apl/apl_concepts_chapter5.html
\+ 1 2 3 4 5
1 3 6 10 15
* https://reference.wolfram.com/language/ref/Accumulate.html
* https://www.haskell.org/hoogle/?hoogle=mapAccumL
Q. How much work for a person to do it currently?
A. Almost zero effort to write a simple helper function:

from itertools import accumulate, chain

myaccum = lambda it, func, start: accumulate(chain([start], it), func)
Q. How common is the need?
A. Rare.
Q. Which would be better, a simple for-loop or a customized itertool?
A. The itertool is shorter but more opaque (especially with respect
to the argument order for the function call):
result = [start]
for x in iterable:
    y = func(result[-1], x)
    result.append(y)
versus:
result = list(accumulate(iterable, func, start=start))
Q. How readable is the proposed code?
A. Look at the following code and ask yourself what it does:
accumulate(range(4, 6), operator.mul, start=6)
Now test your understanding:
How many values are emitted?
What is the first value emitted?
Are the two sixes related?
What is this code trying to accomplish?
Q. Are there potential surprises or oddities?
A. Is it readily apparent which of the following assertions will succeed?
a1 = sum(range(10))
a2 = sum(range(10), 0)
assert a1 == a2
a3 = functools.reduce(operator.add, range(10))
a4 = functools.reduce(operator.add, range(10), 0)
assert a3 == a4
a5 = list(accumulate(range(10), operator.add))
a6 = list(accumulate(range(10), operator.add, start=0))
assert a5 == a6
Q. What did the Python 3.0 Whatsnew document have to say about reduce()?
A. "Removed reduce(). Use functools.reduce() if you really need it; however, 99 percent of the time an explicit for loop is more readable."
Q. What would this look like in real code?
A. We have almost no real-world examples, but here is one from a StackExchange post:
# Assumes: from itertools import cycle, count, plus the accumulate-with-start
# sketch from above (this example relies on the proposed start= parameter).
def wsieve():       # wheel-sieve, by Will Ness.  ideone.com/mqO25A->0hIE89
    wh11 = [ 2,4,2,4,6,2,6,4,2,4,6,6, 2,6,4,2,6,4,6,8,4,2,4,2,
             4,8,6,4,6,2,4,6,2,6,6,4, 2,4,6,2,6,4,2,4,2,10,2,10]
    cs = accumulate(cycle(wh11), start=11)
    yield(next(cs))         # cf. ideone.com/WFv4f
    ps = wsieve()           # codereview.stackexchange.com/q/92365/9064
    p = next(ps)            # 11
    psq = p*p               # 121
    D = dict(zip(accumulate(wh11, start=0), count(0)))   # start from
    sieve = {}
    for c in cs:
        if c in sieve:
            wheel = sieve.pop(c)
            for m in wheel:
                if m not in sieve:
                    break
            sieve[m] = wheel    # sieve[143] = wheel@187
        elif c < psq:
            yield c
        else:               # (c == psq)
            # map (p*) (roll wh from p) = roll (wh*p) from (p*p)
            x = [p*d for d in wh11]
            i = D[(p - 11) % 210]
            wheel = accumulate(cycle(x[i:] + x[:i]), start=psq)
            p = next(ps); psq = p*p
            next(wheel); m = next(wheel)
            sieve[m] = wheel
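(A quick smoke test, assuming the accumulate sketch from earlier in this message is in scope:)

from itertools import cycle, count, islice

ps = wsieve()
print(list(islice(ps, 10)))   # [11, 13, 17, 19, 23, 29, 31, 37, 41, 43]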
Hey List,
this is my first attempt at suggesting a Python improvement that I
think is worth discussing.
At some point, maybe with Dart 2.0 or a little earlier, Dart started
supporting multiline strings with "proper" indentation (I tried, but I
can't find the corresponding docs at the moment, probably due to the
rather large changes around Dart 2.0 and outdated docs).
What I have in mind is probably best described with an example:
print("""
I am a
multiline
String.
""")
The closing quote defines the "margin indentation" - so in this example
all lines would get reduced by their leading four spaces, resulting in
a "clean" and unindented string.
Anyway, Dart or not doesn't matter - I like the idea and I think
Python 3.x could benefit from it, if that's possible at all :)
I could also imagine that this "indentation cleanup" is only applied if
the closing quotes are on their own line? That might be too complicated
though; I can't estimate that...
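(For comparison, roughly the same effect is available at runtime today via textwrap.dedent; the proposal would effectively move this cleanup to compile time:)

import textwrap

print(textwrap.dedent("""\
    I am a
    multiline
    String.
    """))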
thx for reading,
Marius
I am sure this has been discussed before, and this might not even be the
best place for this discussion, but I just wanted to make sure this has
been thought about.
What if pypi.org supported private repos at a cost, similar to npm?
This would help support the cost of PyPI, and hopefully make it
better/more reliable, thus in turn improving the Python community.
If this discussion should happen somewhere else, let me know.
Nick
On 5 April 2018 at 07:58, Jannis Gebauer <ja.geb(a)me.com> wrote:
> What if there was some kind of “blessed” entity that runs these services and puts the majority of the revenue into a fund that funds development on PyPI (maybe through the PSF)?
Having a wholly owned for-profit subsidiary that provides commercial
services as a revenue raising mechanism is certainly one way to
approach something like this without alienating sponsors or tax
authorities (although it may still alienate the vendors of now
competing services). It would require a big time commitment on the PSF
side to get everything set up though, as well as interest from key
folks in joining what would essentially be a single-language-focused
start up in an already crowded cross-language developer tools
marketplace. When the PSF as a whole is still operating with only a
handful of full or part time employees, it's far from clear that
setting something like that up would be the most effective possible
use of their time and energy.
At a more basic level, that kind of arrangement technically doesn't
require anyone's blessing, it could be as straightforward as
downstream tooling vendors signing up as PSF sponsors and saying
"please allocate our sponsorship contribution to the Packaging WG's
budget so that PyPI keeps operating well and the PyPA tooling keeps
improving, increasing the level of demand for our commercial Python
repository management services".
Historically that wouldn't have helped much, since the PSF itself has
struggled with effective project management (for a variety of
reasons), but one of the things I think the success of the MOSS grant
has shown is the significant strides that the PSF has made in budget
management in recent years, such that if funding is made available, it
can and will be spent effectively.
Cheers,
Nick.
P.S. PyPA contributors are also free agents in their own right, so
folks offering Python-centric developer workflow management tools or
features may decide that it's worth their while to invest more
directly in smoothing out some of the rough edges that currently still
exist. It's a mercenary way of looking at things, but in many cases,
it is *absolutely* possible to pay for the time and attention of
existing contributors, and if you can persuade them that your
proposals are reasonable, they'll often have an easier time than most
convincing other community contributors that it's a good way to go :)
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
Hi folks,
I normally wouldn't bring something like this up here, except I think
that there is the possibility of something to be done--a language
documentation clarification if nothing else, though possibly an actual
code change as well.
I've been having an argument with a colleague over the last couple
days over the proper order of statements when setting up a
try/finally to perform cleanup of some action. On some level we're
both being stubborn I think, and I'm not looking for resolution as to
who's right/wrong or I wouldn't bring it to this list in the first
place. The original argument was over setting and later restoring
os.environ, but we ended up arguing over
threading.Lock.acquire/release which I think is a more interesting
example of the problem, and he did raise a good point that I do want
to bring up.
</prologue>
My colleague's contention is that given
lock = threading.Lock()
this is simply *wrong*:
lock.acquire()
try:
    do_something()
finally:
    lock.release()
whereas this is okay:
with lock:
    do_something()
Ignoring other details of how threading.Lock is actually implemented,
assuming that Lock.__enter__ calls acquire() and Lock.__exit__ calls
release() then as far as I've known ever since Python 2.5 first came
out these two examples are semantically *equivalent*, and I can't find
any way of reading PEP 343 or the Python language reference that would
suggest otherwise.
However, there *is* a difference, and it has to do with how signals are
handled, particularly w.r.t. context managers implemented in C (hence
we are talking CPython specifically):
If Lock.__enter__ is a pure Python method (even if it maybe calls some
C methods), and a SIGINT is handled during execution of that method,
then in almost all cases a KeyboardInterrupt exception will be raised
from within Lock.__enter__--this means the suite under the with:
statement is never evaluated, and Lock.__exit__ is never called. You
can be fairly sure the KeyboardInterrupt will be raised from somewhere
within a pure Python Lock.__enter__ because there will usually be at
least one remaining opcode to be evaluated, such as RETURN_VALUE.
Because of how delayed execution of signal handlers is implemented in
the pyeval main loop, this means the signal handler for SIGINT will be
called *before* RETURN_VALUE, resulting in the KeyboardInterrupt
exception being raised. Standard stuff.
However, if Lock.__enter__ is a PyCFunction things are quite
different. If you look at how the SETUP_WITH opcode is implemented,
it first calls the __enter__ method with _PyObject_CallNoArg. If this
returns NULL (i.e. an exception occurred in __enter__) then "goto
error" is executed and the exception is raised. However if it returns
non-NULL the finally block is set up with PyFrame_BlockSetup and
execution proceeds to the next opcode. At this point a potentially
waiting SIGINT is handled, resulting in KeyboardInterrupt being raised
while inside the with statement's suite, so the finally block, and
hence Lock.__exit__, are entered.
Long story short, because Lock.__enter__ is a C function, assuming
that it succeeds normally then
with lock:
    do_something()
always guarantees that Lock.__exit__ will be called if a SIGINT was
handled inside Lock.__enter__, whereas with
lock.acquire()
try:
    ...
finally:
    lock.release()
there is at least a small possibility that the SIGINT handler is called
after the CALL_FUNCTION op but before the try/finally block is entered
(e.g. before executing POP_TOP or SETUP_FINALLY). So the end result
is that the lock is held and never released after the
KeyboardInterrupt (whether or not it's handled somehow).
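(To see the window concretely, one can disassemble the acquire/try pattern; exact opcode names vary across CPython versions:)

import dis

def f(lock):
    lock.acquire()
    try:
        pass
    finally:
        lock.release()

dis.dis(f)
# The CALL_* opcode for lock.acquire() is followed by POP_TOP, and only
# then by SETUP_FINALLY: a signal handled between those opcodes raises
# KeyboardInterrupt after the lock is acquired but before the try block
# protects the release.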
Whereas, again, if Lock.__enter__ is a pure Python function there's
less likely to be any difference (though I don't think the possibility
can be ruled out entirely).
At the very least I think this quirk of CPython should be mentioned
somewhere (since in all other cases the semantic meaning of the
"with:" statement is clear). However, I think it might be possible to
gain more consistency between these cases if pending signals are
checked/handled after any direct call to PyCFunction from within the
ceval loop.
Sorry for the tl;dr; any thoughts?