Hi Ilya,
I'm not sure that this mailing list (Python-Dev) is the right place for
this discussion, I think that Python-Ideas (CCed) is the correct place.
For the benefit of Python-Ideas, I have left your entire post below, to
establish context.
[Ilya]
> I needed reversed(enumerate(x: list)) in my code, and have discovered
> that it wouldn't work. This is disappointing because the operation is
> well defined.
It isn't really well-defined, since enumerate can operate on infinite
iterators, and you cannot reverse an infinite stream. Consider:
    import random

    def values():
        while True:
            yield random.random()

    a, b = next(reversed(enumerate(values())))

What should the first pair (a, b) be?
However, having said that, I think that your idea is not unreasonable.
`enumerate(it)` in the most general case isn't reversible, but if `it`
is reversible and sized, there's no reason why `enumerate(it)` shouldn't
be too.
My personal opinion is that this is a fairly obvious and straightforward
enhancement, one which (hopefully!) shouldn't require much, if any,
debate. I don't think we need a new class for this; I think enhancing
enumerate to be reversible if its underlying iterator is reversible
makes good sense.
But if you can show some concrete use-cases, especially one or two from
the standard library, that would help your case. Or some other languages
which offer this functionality as standard.
On the other hand, I think that there is a fairly lightweight work
around. Define a helper function:
    def countdown(n):
        while True:
            yield n
            n -= 1

then call it like this:

    # reversed(enumerate(seq))
    zip(countdown(len(seq)-1), reversed(seq))
So it isn't terribly hard to work around this. But I agree that it would
be nice if enumerate encapsulated this for the caller.
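To check the workaround concretely (a small self-contained run of the helper above):

```python
def countdown(n):
    # yield n, n-1, n-2, ... indefinitely; zip() stops at the shorter input
    while True:
        yield n
        n -= 1

seq = ['a', 'b', 'c']
pairs = list(zip(countdown(len(seq) - 1), reversed(seq)))
print(pairs)  # [(2, 'c'), (1, 'b'), (0, 'a')]
```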
One potentially serious question: what should `enumerate.__reversed__`
do when given a starting value?
    reversed(enumerate('abc', 1))

Should that yield...?

    # treat the start value as a start value
    (1, 'c'), (0, 'b'), (-1, 'a')

    # treat the start value as an end value
    (3, 'c'), (2, 'b'), (1, 'a')
Something else?
My preference would be to treat the starting value as an ending value.
Steven
On Wed, Apr 01, 2020 at 08:45:34PM +0200, Ilya Kamenshchikov wrote:
> Hi,
>
> I needed reversed(enumerate(x: list)) in my code, and have discovered that
> it wouldn't work. This is disappointing because the operation is well defined.
> It is also well defined for str type, range, and - in principle, but not
> yet in practice - on dictionary iterators - keys(), values(), items() as
> dictionaries are ordered now.
> It would also be well defined on any user type implementing __iter__,
> __len__, __reversed__ - think numpy arrays, some pandas dataframes, tensors.
>
> That's plenty of usecases, therefore I guess it would be quite useful to
> avoid hacky / inefficient solutions like described here:
> https://code.activestate.com/lists/python-list/706205/.
>
> If deemed useful, I would be interested in implementing this, maybe
> together with __reversed__ on dict keys, values, items.
>
> Best Regards,
> --
> Ilya Kamen
>
> -----------
> p.s.
>
> *Sketch* of what I am proposing:
>
> class reversible_enumerate:
>
>     def __init__(self, iterable):
>         self.iterable = iterable
>         self.ctr = 0
>
>     def __iter__(self):
>         for e in self.iterable:
>             yield self.ctr, e
>             self.ctr += 1
>
>     def __reversed__(self):
>         try:
>             ri = reversed(self.iterable)
>         except Exception as e:
>             raise Exception(
>                 "enumerate can only be reversed if iterable to "
>                 "enumerate can be reversed and has defined length."
>             ) from e
>
>         try:
>             l = len(self.iterable)
>         except Exception as e:
>             raise Exception(
>                 "enumerate can only be reversed if iterable to "
>                 "enumerate can be reversed and has defined length."
>             ) from e
>
>         indexes = range(l-1, -1, -1)
>         for i, e in zip(indexes, ri):
>             yield i, e
>
> for i, c in reversed(reversible_enumerate("Hello World")):
>     print(i, c)
>
> for i, c in reversed(reversible_enumerate([11, 22, 33])):
>     print(i, c)
Consider the following example:
```python
import unittest

def foo():
    for x in [1, 2, 'oops', 4]:
        print(x + 100)

class TestFoo(unittest.TestCase):
    def test_foo(self):
        self.assertIs(foo(), None)

if __name__ == '__main__':
    unittest.main()
```
If we were calling `foo` directly we could enter post-mortem debugging via `python -m pdb test.py`.
However since `foo` is wrapped in a test case, `unittest` eats the exception and thus prevents post-mortem debugging. `--failfast` doesn't help, the exception is still swallowed.
Since I am not aware of a solution that enables post-mortem debugging in such a case (without modifying the test scripts; please correct me if one exists), I propose adding a command-line option to `unittest` for [running test cases in debug mode](https://docs.python.org/3/library/unittest.html#unittest.TestCase.deb…) so that post-mortem debugging can be used.
P.S.: There is also [this SO question](https://stackoverflow.com/q/4398967/3767239) on a similar topic.
Hi all,
I do not know, maybe it was already discussed ... but a toolchain like LLVM is very mature, and it could provide relatively simple JIT compilation to machine code and improve the performance of Python a lot!
There's a whole matrix of these and I'm wondering why the matrix is
currently sparse rather than implementing them all. Or rather, why we
can't stack them as:
class foo(object):
@classmethod
@property
def bar(cls, ...):
...
Essentially the permutations are, I think:
{'unadorned'|abc.abstract}{'normal'|static|class}{method|property|non-callable attribute}.
concreteness | implicit first arg | type | name | comments
{unadorned} | {unadorned} | method | def foo(): | exists now
{unadorned} | {unadorned} | property | @property | exists now
{unadorned} | {unadorned} | non-callable attribute | x = 2 | exists now
{unadorned} | static | method | @staticmethod | exists now
{unadorned} | static | property | @staticproperty | proposing
{unadorned} | static | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
{unadorned} | class | method | @classmethod | exists now
{unadorned} | class | property | @classproperty or @classmethod;@property | proposing
{unadorned} | class | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
abc.abstract | {unadorned} | method | @abc.abstractmethod | exists now
abc.abstract | {unadorned} | property | @abc.abstractproperty | exists now
abc.abstract | {unadorned} | non-callable attribute | @abc.abstractattribute or @abc.abstract;@attribute | proposing
abc.abstract | static | method | @abc.abstractstaticmethod | exists now
abc.abstract | static | property | @abc.abstractstaticproperty | proposing
abc.abstract | static | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
abc.abstract | class | method | @abc.abstractclassmethod | exists now
abc.abstract | class | property | @abc.abstractclassproperty | proposing
abc.abstract | class | non-callable attribute | {degenerate case - variables don't have arguments} | unnecessary
I think the meanings of the new ones are pretty straightforward, but in
case they are not...
@staticproperty - like @property only without an implicit first
argument. Allows the property to be called directly from the class
without requiring a throw-away instance.
@classproperty - like @property, only the implicit first argument to the
method is the class. Allows the property to be called directly from the
class without requiring a throw-away instance.
@abc.abstractattribute - a simple, non-callable variable that must be
overridden in subclasses.
@abc.abstractstaticproperty - like @abc.abstractproperty only for
@staticproperty.
@abc.abstractclassproperty - like @abc.abstractproperty only for
@classproperty.
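For what it's worth, a read-only @classproperty can already be approximated today with a small descriptor (this sketch ignores setters and the abstract variants; the class and attribute names are illustrative):

```python
class classproperty:
    # Descriptor sketch: like @property, but the getter receives
    # the class rather than an instance.
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, owner=None):
        if owner is None:
            owner = type(obj)
        return self.fget(owner)

class Config:
    _defaults = {"retries": 3}

    @classproperty
    def defaults(cls):
        return dict(cls._defaults)

# Works on the class directly -- no throw-away instance required.
print(Config.defaults)    # {'retries': 3}
print(Config().defaults)  # {'retries': 3}
```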
--rich
In Python 3.10 we will no longer be burdened by the old parser (though 3rd
party tooling needs to catch up).
One thing that the PEG parser makes possible in about 20 lines of code is
something not entirely different from the old print statement. I have a
prototype:
Python 3.10.0a0 (heads/print-statement-dirty:5ed19fcc1a, Jun 9 2020,
16:31:17)
[Clang 11.0.0 (clang-1100.0.33.8)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Cannot read termcap database;
using dumb terminal settings.
>>> print 2+2
4
>>> print "hello world"
hello world
>>> print "hello", input("Name:")
Name:Guido
hello Guido
>>> print 1, 2, 3, sep=", "
1, 2, 3
>>>
But wait, there's more! The same syntax will make it possible to call *any*
function:
>>> len "abc"
3
>>>
Or any method:
>>> import sys
>>> sys.getrefcount "abc"
24
>>>
Really, *any* method:
>>> class C:
... def foo(self, arg): print arg
...
>>> C().foo 2+2
4
>>>
There are downsides too, though. For example, you can't call a method
without arguments:
>>> print
<built-in function print>
>>>
Worse, the first argument cannot start with a parenthesis or bracket:
>>> print (1, 2, 3)
1 2 3
>>> C().foo (1, 2, 3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: C.foo() takes 2 positional arguments but 4 were given
>>> print (2+2), 42
4
(None, 42)
>>> C().foo [0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'method' object is not subscriptable
>>>
No, it's not April 1st. I am seriously proposing this (but I'll withdraw it
if the response is a resounding "boo, hiss"). After all, we currently have
a bunch of complexity in the parser just to give a helpful error message to
people used to Python 2's print statement:
>>> print 1, 2, 3
File "<stdin>", line 1
print 1, 2, 3
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(1,
2, 3)?
>>>
And IIRC there have been a number of aborted attempts at syntactic hacks to
allow people to call functions (like print) without parentheses, although
(I think) none of them made it into a PEP. The PEG parser makes this much
simpler, because it can simply backtrack -- by placing the grammar rule for
this syntax (tentatively called "call statement") last in the list of
alternatives for "small statement" we ensure that everything that's a valid
expression statement (including print() calls) is still an expression
statement with exactly the same meaning, while still allowing
parameter-less function calls, without lexical hacks. (There is no code in
my prototype that checks for a space after 'print' -- it just checks that
there's a name, number or string following a name, which is never legal
syntax.)
One possible extension I didn't pursue (yet -- dare me!) is to allow
parameter-less calls inside other expressions. For example, my prototype
does not support things like this:
>>> a = (len "abc")
File "<stdin>", line 1
a = (len "abc")
^
SyntaxError: invalid syntax
>>>
I think that strikes a reasonable balance between usability and reduced
detection of common errors.
I could also dial it back a bit, e.g. maybe it's too much to allow 'C().foo
x' and we should only allow dotted names (sufficient to access functions in
imported modules and method calls on variables). Or maybe we should only
allow simple names (allowing 'len x' but disallowing 'sys.getrefcount x').
Or maybe we should really only bring back print statements.
I believe there are some other languages that support a similar grammar
(Ruby? R? Raku?) but I haven't investigated.
Thoughts?
--
--Guido van Rossum (python.org/~guido)
*Pronouns: he/him **(why is my pronoun here?)*
<http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-…>
On Wed, 22 Jul 2020 at 15:35, Antoine Pitrou <solipsis(a)pitrou.net> wrote:
> The deltas also look small for a micro-benchmark. I certainly don't
> think this is a sufficient reason to add a new datatype to Python.
>
I think some of the optimizations could be tried on dict itself.
The deltas seem small because I divide them by the number of timeit
runs. I think that a 30-45% gain in constructor speed is not something
small. The benchmarks in question have names that start with
"constructor".
PS: sorry for double mail, I sent this message to only you by mistake.
On Wed, 22 Jul 2020 at 18:26, Guido van Rossum <guido(a)python.org> wrote:
> Did you study PEP 416 (frozendict) and PEP 603 (frozenmap)?
>
Yes. About frozenmap: at the very start I also added immutables.Map to the
benchmarks, but I removed it when I realized that it was slower than
frozendict in every bench. Maybe this is because immutables.Map is a C
extension and not a builtin type.
About PEP 416, yes, I tried to follow it. Indeed, the hash is calculated
using the strategy described in the PEP. I also took a look at the
frozenset and tuple code.
Frankly I must admit that the rejection of PEP 416 was a good idea. Indeed
I started to implement an immutable dict only for a symmetry reason, and
because I am somewhat fascinated by functional programming and immutables,
without a rational reason.
I suppose I would have given up and admitted that a frozendict is quite
useless in Python if I had not seen that the constructor speed can be
made faster.
I'm not 100% sure of the result. This is why I'm suggesting implementing
the speed optimizations in the constructor of dict first:
1. the optimizations might not be safe. Indeed, I recently got a segfault
that I have to investigate
2. maybe dict really can be optimized further. After that, the difference
between dict and frozendict performance could be minimal
That said, maybe there are four good use cases for a frozendict:
1. they could be used for __annotations__ of function objects, and similar
cases
2. they could be used to implement "immutable" modules or classes
3. frozendict can be easily cached, like tuples.
4. as I said, as an alternative to MappingProxyType, since it's very slow
and it's used in both the C code of CPython and the Python stdlib. This
maybe is not useful because:
a. the speed of MappingProxyType can be improved
b. MappingProxyType can't be replaced because you *really* want and
need a proxy
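For context, the closest stdlib equivalent today is `types.MappingProxyType`, which is a read-only *view* rather than an independent immutable copy:

```python
from types import MappingProxyType

config = {"host": "localhost", "port": 8080}
frozen_view = MappingProxyType(config)

try:
    frozen_view["port"] = 9090  # the proxy rejects writes
except TypeError:
    write_rejected = True

config["port"] = 9090           # ...but mutating the wrapped dict
print(frozen_view["port"])      # shows through: 9090
```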
Hi all,
Seems like this topic was previously raised, but what if we added the possibility to decorate non-function attributes in a class:
```python
class Neuron:
    @linear_activation
    activation
```
Date: Fri, 26 Jun 2020 18:47:44 +0200
From: Hans Ginzel <hans(a)matfyz.cz>
To: Hans Ginzel <hans(a)artax.karlin.mff.cuni.cz>
Subject: Access (ordered) dict by index; insert slice
Hello,
thank you for making dict ordered.
Is it planned to allow accessing key/value pair(s) by index? See
https://stackoverflow.com/a/44687752/2556118 for example. Both for reading
and (re)writing?
Is it planned to allow inserting pair(s) at an exact index? Or slicing in
general? See splice() in Perl, https://perldoc.perl.org/functions/splice.html.
Use case: representing database table metadata (columns). It is useful both
to access columns by name or by index, and to insert a column at a specific
position, https://dev.mysql.com/doc/refman/8.0/en/alter-table.html, “ALTER
TABLE ADD COLUMN [FIRST | AFTER col]” (consider default order or table
storage size optimisation by aligning).
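For reference, a sketch of what this currently requires (the column names are illustrative): indexed access means materializing the items, and insertion means rebuilding the dict:

```python
columns = {"id": "INT", "name": "VARCHAR(40)", "created": "DATETIME"}

# Access the i-th (key, value) pair -- O(n), since dict views are not indexable.
key, value = list(columns.items())[1]
print(key, value)  # name VARCHAR(40)

# "ALTER TABLE ADD COLUMN ... AFTER id": rebuild the dict around the insertion.
items = list(columns.items())
items.insert(1, ("tenant", "INT"))
columns = dict(items)
print(list(columns))  # ['id', 'tenant', 'name', 'created']
```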
Thank you in advance,
Hans
PS1: Named tuples cannot be used; they are immutable.
PS2: See https://metacpan.org/pod/perlref#Pseudo-hashes:-Using-an-array-as-a-hash
Hi,
In Python, there are multiple [compound
statements](https://docs.python.org/3/reference/compound_stmts.html)
with the `else` keyword.
For example:
```
for x in iterable:
    if x == sentinel:
        break
else:
    print("Sentinel not found.")
```
or:
```
try:
    do_something_sensitive()
except MyError:
    print("Oops!")
else:
    print("We're all safe.")
```
In my situation, I would like to mix the `with` statement with `else`.
That is, I would like the `else` part to run only if no exception is
raised within the `with` block.
For example:
```
with my_context():
    do_something_sensitive()
else:
    print("We're all safe.")
```
Now imagine that in my `try .. except` block I have some heavy setup
to do before `do_something_sensitive()` and some heavy cleanup when
the exception occurs.
I'd like my context manager to do the preparation work, execute the
body, and cleanup. Or execute my else block only if there is no
exception.
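One way to approximate this today (a sketch: the context manager records whether the body raised, and a plain `if` afterwards plays the role of the proposed `else`; `my_context` and `do_something_sensitive` are stand-ins):

```python
from contextlib import contextmanager

@contextmanager
def my_context():
    # heavy setup would go here
    state = {"ok": False}
    try:
        yield state
    except Exception:
        # heavy cleanup for the failure case would go here
        raise
    else:
        state["ok"] = True

def do_something_sensitive():
    pass  # stand-in for the real work

with my_context() as state:
    do_something_sensitive()

if state["ok"]:  # stands in for the proposed `else` clause
    print("We're all safe.")
```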
Is there already a way to accomplish this in Python or can this be a
nice to have?
Regards,
Jimmy
The following comment is from the thread about adding kwd arg support to
the square bracket operator (eg, `Vec = Dict[i=float, j=float]`).
On Tue, Aug 4, 2020, 2:57 AM Greg Ewing <greg.ewing(a)canterbury.ac.nz> wrote:
On 4/08/20 1:16 pm, Steven D'Aprano wrote:
> Why would we want to even consider a new approach to handling keyword
> arguments which applies only to three dunder methods, `__getitem__`,
> `__setitem__` and `__delitem__`, instead of handling keyword arguments
> in the same way that every other method handles them?
These methods are already kind of screwy in that they don't
handle *positional* arguments in the usual way -- packing them
into a tuple instead of passing them as individual arguments.
I think this is messing up everyone's intuition on how indexing
should be extended to incorporate keyword args, or even whether
this should be done at all.
--
Greg
So here is the main question of this thread:
Is there really no backwards-compatible, creative way that a transition
from a packed tuple to positional args in the item dunders could be
accomplished?
It's not my intention to champion any specific idea here, just creating a
specific space for ideas to be suggested and mulled over.
There could be several reasons for changing the item dunder signatures. The
immediate reason is making the intuition around how to add kwd arg support
to square brackets more obvious and sane.
A second reason is it might be more intuitive for users who have to learn
and remember that multiple arguments to [ ] get packed into a tuple, but
this doesn't happen anywhere else.
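To make the current packing behavior concrete (a minimal probe class):

```python
class Probe:
    def __getitem__(self, key):
        return key  # just echo what the interpreter passed in

p = Probe()
print(p[1])         # 1 -- a single index is passed bare
print(p[1, 2])      # (1, 2) -- multiple indices arrive packed in a tuple
print(p[1:2, ...])  # (slice(1, 2, None), Ellipsis)
```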
Another reason: it could make writing code for specialized libraries that
tend to abuse (for the good of us all!) item dunders, like pandas, much
easier. Right now such libraries have to rely on their own efforts to break
up a key:
    def __getitem__(self, key):
        try:
            k1, k2 = key
        except TypeError:
            raise TypeError("two tuple key required")
But for regular function calls (as opposed to item getting) we get to write
our signature however we want and rely on the language to handle all of
this for us:
    def f(k1, k2):
        # no worries about parsing out the arguments
-----------
One idea: change the "real" names of the dunders. Give `type` default
versions of the new dunders that direct the call to the old dunder names.
The new get and del dunders would have behavior and signatures like (I am
including **__kwargs since that could be an option in the future) :
    def __getx__(self, /, *__key, **__kwargs):
        return self.__getitem__(__key, **__kwargs)

    def __delx__(self, /, *__key, **__kwargs):
        self.__delitem__(__key, **__kwargs)

However the set dunder signature would be a problem: to mirror the current
behavior we would need `__value` to capture the last positional argument,
but written after `*__key` it can only be keyword-only:

    def __setx__(self, /, *__key, __value, **__kwargs):
        self.__setitem__(__key, __value, **__kwargs)
The intended meaning above would be that the last positional argument gets
assigned to __value. Maybe someone could suggest a way to fix this.
Item getting, setting, deleting would call these new dunders instead of the
old ones. I haven't thought through how to handle inheritance-- I'm sort of
hoping someone smarter than me could come up with a way to solve that...:
    class My:
        def __getx__(self, my_arg, *args, my_kwarg, **kwargs):
            # the way I have written things this super call will cause a
            # recursion error
            v = super().__getitem__(*args, **kwargs)
            return combine(my_arg, my_kwarg, v)
By the way, I'm not bike-shedding on what these new dunder names should be,
though we could probably do worse than getx, setx, and delx (nice and
short!).
-----------
Ok now this is the part where I get to wait for everyone smarter than me to
show the errors of my ways/how naive I am. :)