If you have Python code that would really benefit from iterating in
reverse, there is already the "__reversed__" special method and the
"reversed" builtin. The docstring for the latter says "Return a reverse iterator."
You probably already know about it; it just hadn't been mentioned, so
I thought I'd put it out there.
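A minimal illustration (Countdown is an invented example class): defining __reversed__ lets reversed() walk the object backwards without first materializing it into a list.

```python
class Countdown:
    """Iterates n, n-1, ..., 1; __reversed__ yields 1, 2, ..., n."""
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return iter(range(self.n, 0, -1))

    def __reversed__(self):
        # reversed() consults this instead of requiring a sequence
        return iter(range(1, self.n + 1))

assert list(Countdown(3)) == [3, 2, 1]
assert list(reversed(Countdown(3))) == [1, 2, 3]
```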
Fair warning: this is long.
> I was trying to google if there were some attempts to add pattern
> matching feature (as seen in e.g. haskell) to Python. I couldn't however
> find anything. Were there any proposals for a feature like this?
Many. In no particular order:
These are in addition to the ones Paul gave. There are some more
attempts at related areas (unification and other forms of logic
programming, multiple dispatch) and discussions without implementations
that I didn't list.
Before I say anything else, I should disclose my bias: I think pattern
matching would be a great addition to Python. Python tries hard to be
readable, and pattern matching would make certain kinds of code much
more legible than Python currently allows. I'll give two examples, one
canonical and one from something I've been working on.
The first is red-black tree balancing. Not by accident,
rosettacode.org's pattern matching comparison page
( http://rosettacode.org/wiki/Pattern_matching ) uses that as its
example for pattern matching in different languages. Most of the
implementations fit onto a single page. I won't argue they're marvels
of readability, but that's mostly because of too-succinct variable
naming. I'd paste a Python implementation of the same algorithm, but
it's too long to be instructive: the algorithm gets scattered across
five nontrivial functions, each of which has nested conditionals, not
counting the implicit nesting from the functions calling one another.
The second is performing algebra on the struct module's format strings.
I've written some automatic optimization of them with concatenation and
scalar multiplication, so that for instance 'b' + 'b' = '2b' rather than
'bb' and '10s' + '10s' = '10s10s'. The value of this is in dealing with
format strings that can be combined in various ways to describe the
layouts of complicated binary data formats. With some context stripped,
the code looks like this:
def _check_add(self, other):
    if not isinstance(other, _StructBase):
        ...
    if not self.same_endianness(other):
        ...
    if self.endianness == other.endianness or len(self.endianness) > ...:
        ...
    if isinstance(other, _StructSequence):
        ...

def __add__(self, other):
    struct_sequence = self._check_add(other)
    if isinstance(other, _StructSequence):
        if other._elements == []:
            ...
        begin = other._elements[0]
        rest = other._elements[1:]
    else:
        begin = other
        rest = []
    if self._elements == []:
        ...
    if self._elements[-1].character == begin.character:
        return struct_sequence(self._elements[:-1] +
                               [self._elements[-1] + begin] + rest)
    return struct_sequence(self._elements + [begin] + rest)
The exact details aren't important; my point is that this also leads to
complicated nested conditionals because to know what actions I need to
perform on a given data type, I need checks to make sure the endianness
of the two strings matches, multiple type checks, checks for whether one
or the other object is empty, and so on, then after verifying what I'm
dealing with, decompositions to break a list apart into its first
element and remainder and handling the empty-list case. This
implementation isn't even complete because it doesn't, for instance,
deal with all the possible endianness permutations. There are some
simplifications possible, for instance moving all information described
by finite sets like endianness into types (so there's a StructSequence,
a BigEndianStructSequence, a LittleEndianStructSequence, ...) but this
can't handle all the necessary conditionals and doesn't scale well.
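The identities this algebra relies on can be checked against the stdlib struct module itself; this sketch only demonstrates why 'b' + 'b' can collapse to '2b' while '10s' + '10s' cannot collapse to '20s':

```python
import struct

# For simple codes, a count is just repetition: '2b' and 'bb' both
# describe two signed bytes.
assert struct.calcsize('bb') == struct.calcsize('2b') == 2
assert struct.pack('bb', 1, 2) == struct.pack('2b', 1, 2)

# For 's', the count is the length of a single string field, so the
# concatenation must stay '10s10s': it describes two 10-byte fields,
# not one 20-byte field.
assert struct.calcsize('10s10s') == struct.calcsize('20s') == 20
assert struct.unpack('10s10s', b'x' * 20) != struct.unpack('20s', b'x' * 20)
```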
Those are two small-scale examples. Some broad areas where pattern
matching is useful are:
1) Parsing: it's useful both for performing manipulations on grammars,
like left factoring, and for operating on the trees that parsers produce.
2) Mathematics: manipulations of abstract mathematical objects like
algebraic expressions (not numerical analysis), for instance algebraic
simplification.
3) Recursive data structures: Python doesn't have algebraic data types
or embed recursion into everything the way languages like Haskell do,
but there's plenty of recursively-structured data out in the wild like
HTML and JSON.
4) Compilers/interpreters: this use is closely related to parsing and
recursive data structures because compilers and interpreters are often
acting on abstract syntax trees. Transformations like flattening an AST
into bytecode and constant folding can often be implemented concisely
with pattern matching.
I'll stop there because those are the applications I know best, but I'm
sure there are others. Pattern matching also subsumes features
including multiple dispatch on types that have been proposed, discussed,
and implemented many times for Python. I'd argue that the existence of
more than a dozen versions of pattern matching and their cumulative
downloads from PyPI suggests the desire for the feature isn't zero, but
I'm not sure how indicative they are.
Whether I've convinced you or not that pattern matching is a worthwhile
addition to the language, I'd like it, and if I could, I'd write a
library to get it. Past discussions on pattern matching have started
and ended with debates about the syntax for adding it as a core language
feature. Because the whole point of pattern matching is readability
(you can't do anything with it that you couldn't do with a sufficiently
complicated nested if-elif-else), arguing about syntax isn't
putting the cart before the horse the way it is with some features, but
there's more to implementing pattern matching than just syntax, and I'd
like to expand the discussion to some of those issues. More than that,
though, I think pattern matching with imperfect syntax would still be
useful, and I'd rather see a mypy-like library now than a language
feature that might make it into Python in some distant future. In that
spirit, I'll offer some syntax that doesn't involve changing the
grammar. To emulate a case expression like Haskell, the only way I can
see to do it is abusing 'with' statements because they're the only
block-like statement aside from exceptions that allow assignment:
with match(expr) as case:
    with case(pattern) as matches:
        target_list = matches
    with case(pattern) as matches:
        target_list = matches
Pattern matching fits naturally into for statements and comprehensions:
for target_list in pattern(expr):
(expr for target_list in pattern(expr) if expr)
A syntax similar to functools.singledispatch's works for pattern
dispatch on functions:
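One way that could look, as a hedged sketch (patterndispatch is an invented name, and only exact literal patterns are handled here, where singledispatch proper dispatches on the type of the first argument):

```python
from functools import wraps

def patterndispatch(func):
    """Hypothetical decorator in the spirit of functools.singledispatch,
    dispatching on the literal values of the arguments instead of the
    type of the first one."""
    registry = []

    @wraps(func)
    def wrapper(*args):
        for pattern, impl in registry:
            if pattern == args:   # naive: only exact literal patterns
                return impl(*args)
        return func(*args)        # fall through to the generic version

    def register(*pattern):
        def decorator(impl):
            registry.append((pattern, impl))
            return impl
        return decorator

    wrapper.register = register
    return wrapper

@patterndispatch
def fib(n):                       # generic case
    return fib(n - 1) + fib(n - 2)

@fib.register(0)
def _(n):
    return 0

@fib.register(1)
def _(n):
    return 1

assert fib(6) == 8
```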
For defining list and dictionary patterns themselves, my best thought is
abusing __getitem__, ellipsis, and the slice syntax:
Listpattern[1, ..., 'a': int]
[1, 'a', 'b', 'c', 2] # matches, name a is bound to 2
[3, 4, 5] # doesn't match because the first element isn't 1, no binding
[1, 2, None] # also doesn't match because None is not an int
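Such a Listpattern could be built today with a metaclass whose __getitem__ interprets the slice notation; indexing with `1, ..., 'a': int` produces the tuple `(1, Ellipsis, slice('a', int))`. This is only a sketch of that idea, with invented names:

```python
class _PatternMeta(type):
    def __getitem__(cls, items):
        if not isinstance(items, tuple):
            items = (items,)
        return cls(items)

class Listpattern(metaclass=_PatternMeta):
    """Listpattern[1, ..., 'a': int] builds a pattern from the tuple
    (1, Ellipsis, slice('a', int)) that the subscript produces."""
    def __init__(self, items):
        self.items = list(items)

    def match(self, seq):
        """Return a dict of name bindings, or None if seq doesn't match."""
        items = self.items
        if Ellipsis in items:
            cut = items.index(Ellipsis)
            head, tail = items[:cut], items[cut + 1:]
            if len(seq) < len(head) + len(tail):
                return None
            # Ellipsis absorbs the middle; match the prefix and suffix.
            pairs = list(zip(head, seq)) + \
                list(zip(tail, seq[len(seq) - len(tail):]))
        else:
            if len(seq) != len(items):
                return None
            pairs = list(zip(items, seq))
        bindings = {}
        for pat, val in pairs:
            if isinstance(pat, slice):        # 'name': type binds a name
                if not isinstance(val, pat.stop):
                    return None
                bindings[pat.start] = val
            elif pat != val:
                return None
        return bindings

assert Listpattern[1, ..., 'a': int].match([1, 'a', 'b', 'c', 2]) == {'a': 2}
assert Listpattern[1, ..., 'a': int].match([3, 4, 5]) is None
assert Listpattern[1, ..., 'a': int].match([1, 2, None]) is None
```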
Dictpattern['a': 1, 'b': 2, 'c': 'a': str]
{'a': 1, 'b': 2, 'c': 'foo'} # matches, name a is bound to 'foo'
{'a': 1, 'b': 2, 'c': 'bar', 'd': 3} # doesn't match because no ellipsis
These are far from the best possible syntax available by changing
Python's grammar or syntax, and some of them would benefit from using
Macropy to rewrite ASTs. I'm sure they could be improved even within
those constraints, and if anyone has ideas, I'm all ears.
A lot of work has been done on pattern matching in object-oriented
languages with and without algebraic data types. I think Scala's
implementation is most mature and well worth looking at for ideas.
(Macropy's experimental pattern matching is patterned on Scala's.)
"Patterns as Objects in
Grace" ( http://homepages.ecs.vuw.ac.nz/~djp/files/DLS2012.pdf ) is a
paper on pattern matching in an object-oriented teaching language with
structural typing. "Robust Scripting Via
Patterns" ( http://hirzels.com/martin/papers/dls12-thorn-patterns.pdf )
is about pattern matching in Thorn, a dynamically-typed language
implemented on the JVM. Thorn's pattern matching is integrated into the
language at a deep level, and it has some good ideas about using
patterns in control structures, some of which I shamelessly stole in
writing the above suggestions for pattern matching syntax in Python.
I'm just scratching the surface here; much more can be and has been said.
I haven't talked at all about the nitty-gritty of implementation, though
I can. I think the most productive way to have that discussion on this
list would be to look at what smaller features would be most helpful for
implementing pattern matching as a library, and likewise for syntax,
shades of the recent discussion about macros.
On Wednesday, April 15, 2015 4:04 PM, Steven D'Aprano <steve(a)pearwood.info> wrote:
> > On Wed, Apr 15, 2015 at 10:25:58PM +0000, Andrew Barnert wrote:
>> The paradigm examples of bidirectional iteration are linked-list-like
>> and tree-like iteration. In particular, almost all kinds of sorted
>> containers can be iterated in sorted order bidirectionally, but not
> In Python terms though, if you think you want a linked-list, you
> probably actually want a regular list. I'm not so sure about trees, but
> the lack of any trees in the std lib is probably a hint that a dict
> makes a good replacement.
Well, there _are_ trees in the stdlib, buried inside application-level things like HTML documents. And, if sorted collections were added to the stdlib, they'd probably be trees too. There's also a linked list buried inside OrderedDict. (You could ask why it doesn't just use a list.)
That being said, this is part of the reason I didn't think my related idea needed to be added to Python, or even proposed. Linked lists and trees are very important data structures to recursive-decomposition functional programming, but they're not nearly as important as people think to other paradigms. It's not just Python itself that proves this; std::vector (a resizable array, like Python's list) is by far the most used container in C++, and not including std::unordered_map (a hash table, like Python's dict) was the most widely-touted gap in the stdlib until it was finally added. And the Alexandrescu presentations I mentioned also imply that at least some of the reasons the iterator categories are used in code written for C++ and its descendants are that you're kind of forced into it, and that with a better API many of these uses wouldn't arise.
>> There aren't too many fundamental algorithms that require
>> bidirectional iteration but not random-access iteration, or that can
>> be more efficient with bidirectional than with forward, but there are
> Are any of them unable to be efficiently reimplemented in terms of
> random access lists? What does bidirectional iteration give us that
> sequences don't?
Any time you use an iterator transformation like filter or partition, what you get is (or could be) bidirectionally iterable, but it's not randomly accessible. And filtering lists is a pretty common thing to do (common enough for filter to be a builtin, and even for an equivalent operation to have syntactic support).
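The point about filter can be checked directly; this sketch just demonstrates today's behaviour, where the only way to walk a filtered stream backwards is to materialize it first:

```python
# filter() returns a forward-only iterator, even though a filtered
# sequence is conceptually bidirectional:
evens = filter(lambda x: x % 2 == 0, range(10))
try:
    reversed(evens)            # filter objects implement neither
    not_reversible = False     # __reversed__ nor the sequence protocol
except TypeError:
    not_reversible = True

# Materializing into a list restores backward iteration, at the cost
# of building the whole list up front:
backwards = list(reversed([x for x in range(10) if x % 2 == 0]))

assert not_reversible
assert backwards == [8, 6, 4, 2, 0]
```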
Even when you use iterator transformations like map that are conceptually randomly accessible, it's hard to see how they could be actually randomly accessed in today's Python. We could provide __add__ and __sub__ a la C pointer arithmetic, but a lot of people would probably prefer not to use them when not necessary. Or we could just make views fundamental and largely sideline iterators (which does work pretty well for numpy), but that would be a huge and radical change to Python. Or we could provide views and iterators side by side, but that makes the language twice as big and twice as hard to fit in your head.
> The rest of your post is interesting, but I don't see the relevance to
> *Python*. Bidirectional iteration isn't being proposed in a vacuum, it
> has to compete with existing alternatives such as lists and dicts.
That's kind of my point: bidirectional iteration doesn't fit into Python; forward, bidirectional, and random access iteration _might_ fit into Python, but only by radically changing Python in a way that's almost certainly not acceptable. (In my blog post, at the end, I tried to work out whether there are smaller steps in that direction that might be worthwhile on their own, but none of them seemed worthwhile enough for me to even bring up on python-ideas, much less push for.)
Call me a fool, but in the vein of the finest of half baked ideas, I
present to you an idea that hasn't even decided on the shape of the tin yet.
Using a pseudo-random selection of ideas set out in various forums,
I present to you, in the linked PDF, a possible Python future.
Python 3.9.9 - The 'I Have A Dream' Version:
my ongoing thought experiment on a Python future I would like to see.
Because, is it not written: "If you don't know where you're going, how
will you know when you get there?"
Whether an infinite number of monkeys could shape this into reality may
never be put to the test, but it's a start.
Or it may not be. Or it may be a start and the end. The future's as tricksy
as hobbitses sometimes.
* Python The
* Removing the notion of 'virtual' environments. Everything's just an
* Splitting the standard library in twain. stdlib = 'baselib' + 'extlib'.
- baselib = Enough to start up a Python environment / install pip.
- extlib = The rest of the current stdlib. Changes more frequently than
* A new tool called pippin to manage the extlib
* Formalising what a package index is
* Addressing a Python project's lifecycle.
* Configuration handling
* Removing the use of 'lib' suffixes from almost all packages.
* Claiming any name in PyPI starting with 'py'
I have aimed for
* Separation of concerns
* Consistency in naming of tool commands and options.
Your thoughts are welcome.
Next steps are
* Add all relevant PEPs - both proposed and accepted.
* Add all modules to either baselib, extlib or deletelib
* Firm up stdlib / environment / config file locations (system / user,
Windows / Linux / OSX)
* Create outline of pip / twine code to see what can be put into external
packages for re-use / removal from pip.
* Create a filesystem structure for projects and configuration files /
cookiecutter templates to achieve this.
* Enhance definition of tools and propose code structures.
I hope I'm posting in the correct place.
I would like to propose an idea for iterating over an iterator in reverse.
Built-in function prev(iterator): it should retrieve the previous
element from an iterator by calling its __prev__() method.
Add a new, optional method to the iterator protocol: __prev__().
This method should return the previous element from the iterator, or
raise a StopIteration exception if there is no previous element.
This should be optional, because it would be hard to implement this
behaviour in some iterators (e.g. it would be hard to do for a file
iterator that returns lines). Other iterators could be modified to
include __prev__() (e.g. itertools.count(), the list iterator).
If an iterator doesn't support __prev__(), a TypeError should be raised
when prev() is called.
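The proposal can be sketched in pure Python (prev here stands in for the proposed builtin, and BidirectionalListIterator is an invented example of a list iterator grown a __prev__ method):

```python
def prev(iterator):
    """Sketch of the proposed builtin: delegate to __prev__ if present."""
    prev_method = getattr(type(iterator), '__prev__', None)
    if prev_method is None:
        raise TypeError('%r object does not support prev()'
                        % type(iterator).__name__)
    return prev_method(iterator)

class BidirectionalListIterator:
    """Hypothetical list iterator supporting both directions."""
    def __init__(self, data):
        self._data = data
        self._index = -1          # position of the last element yielded

    def __iter__(self):
        return self

    def __next__(self):
        if self._index + 1 >= len(self._data):
            raise StopIteration
        self._index += 1
        return self._data[self._index]

    def __prev__(self):
        if self._index <= 0:
            raise StopIteration
        self._index -= 1
        return self._data[self._index]

it = BidirectionalListIterator([10, 20, 30])
assert next(it) == 10
assert next(it) == 20
assert prev(it) == 10             # steps back to the previous element
```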
Please let me know what you think about this. If there are no obvious
problems, I will write a PEP for it.
ps. I've tried searching for this idea on python-ideas and couldn't
find anything. If I missed it and my email is a duplicate, sorry.
As a teacher who often ends up teaching Python, sometimes I/my students
encounter little anomalies.
[Bugs?? I hesitate to use the word, given how often I've seen a noob
exclaim "Compiler has a BUG!" without knowing the ABC of the syntax.]
So just a question (or 2):
In Python 3, please check:
Do those help texts look helpful/meaningful?
Is this the right list for such questions?
A bare "except:" clause catches any exception. As of Python 3.0, this
can be spelled more explicitly as "except BaseException:", and AFAIK
the only difference between the two is that someone might rebind the
name BaseException.
String exceptions were abolished during the 2.x line without causing
major upheaval. The deprecation and ultimate abolition of the bare
except syntax would have less consequence than that, as it would
merely require a mechanical transformation to "except BaseException:",
without needing any corresponding changes at raise sites.
* Remove the attractive nuisance of "hmm, my code's throwing an error
that I don't recognize, I'll use try/except" and just catching
everything.
* Eliminates the edge case wherein "except:" and "except ():" are
almost diametrically opposite in meaning. Not often a concern, but
we've just had a lengthy thread on python-list discussing this. It's
generally agreed that an empty tuple has to be interpreted as catching
nothing, but the false parallel with an empty clause is problematic.
* May make it harder to write Py2/Py3-compatible code. In Py2,
"except:" will catch old-style-class exceptions.
* Any removal of any feature can cause examples and published code to break.
* Forcing everyone to look at the code that uses "except:" means extra
work to upgrade Python, but it might mean they notice a problem
somewhere. Maybe it should be "except Exception", or maybe something
even more specific. Is this a pro or a con?
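The empty-tuple edge case in the pros above is easy to demonstrate: the two forms look like parallel spellings but behave in opposite ways.

```python
def caught_by_empty_tuple():
    try:
        try:
            raise ValueError('boom')
        except ():             # an empty tuple of types catches nothing
            return True
    except ValueError:
        return False

def caught_by_bare_except():
    try:
        raise ValueError('boom')
    except:                    # a bare except catches everything,
        return True            # including SystemExit and KeyboardInterrupt

assert caught_by_empty_tuple() is False
assert caught_by_bare_except() is True
```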
I have a specialized use case for exceptions which need to analyze their
__cause__ property before they're fully initialized. The property isn't
set until after __init__, and (forgive my ignorance of Python's source!)
it appears to be set at the C level in a way that can't be observed with
a property setter or __setattr__. I currently create a method to do this
post-init work as a workaround, and push the burden of making sure it
gets called onto the code using the exceptions -- but this makes the
exceptions unidiomatic and awkward to use.
A magic method called once all of the special exception properties are
already set would suffice. The exception could define something like
__raised__(self) to react to the new values of those properties.
A more comprehensive approach might be a __raise__(self, cause, context,
traceback...) method which has full control over setting the properties,
but I assume there are good reasons why the usual channels (setters and
__setattr__) are cut out of the loop.
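The workaround described above looks roughly like this sketch (AnalyzedError and its raised() method are invented names; raised() is the manual step that a hypothetical __raised__ hook would make automatic):

```python
class AnalyzedError(Exception):
    """An exception that needs to inspect its __cause__, which the
    interpreter only sets after __init__ has already run."""
    def __init__(self, message):
        super().__init__(message)
        # self.__cause__ is still None here, even when a
        # `raise ... from ...` is about to set it.
        self.root = None

    def raised(self):
        # What a hypothetical __raised__(self) would do automatically:
        self.root = self.__cause__
        return self

try:
    try:
        raise KeyError('missing setting')
    except KeyError as original:
        raise AnalyzedError('config failed') from original
except AnalyzedError as err:
    err.raised()              # the manual step __raised__ would remove
    captured = err

assert isinstance(captured.root, KeyError)
```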
I was encouraged to bring discussion here (to see if there's any
traction) after opening an enhancement request (
I think it'd be nice to have the ability to subscript/slice instances of
pathlib.PurePath and its subclasses e.g.
path = Path()
assert path[n:m:k] == type(path)(*path.parts[n:m:k])
Maybe it's worth adding a check, with a custom exception raised for an
invalid index/slice. What do you think? I'll open a new issue and submit
a patch if you approve of this...
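The proposed behaviour can be tried today on a subclass (SliceablePath is an invented name; the proposal would put this on PurePath itself):

```python
from pathlib import PurePosixPath

class SliceablePath(PurePosixPath):
    """Sketch: index/slice a path by its parts."""
    def __getitem__(self, item):
        if isinstance(item, slice):
            # A slice rebuilds a path of the same type from the
            # selected parts, matching the proposed invariant.
            return type(self)(*self.parts[item])
        return self.parts[item]

path = SliceablePath('/usr/local/lib/python3')
assert path.parts == ('/', 'usr', 'local', 'lib', 'python3')
assert path[1:3] == SliceablePath('usr', 'local')
assert path[-1] == 'python3'
```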
recently I posted PEP 487, a simpler customization of class creation.
For newcomers: I propose the introduction of an __init_subclass__ classmethod
which initializes subclasses of a class, simplifying much of what metaclasses
can do today.
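A sketch of the kind of usage the PEP enables, here subclass registration, which previously required a metaclass. (PEP 487 was ultimately accepted, so this runs as-is on Python 3.6+; PluginBase and its names are invented for illustration.)

```python
class PluginBase:
    registry = []

    def __init_subclass__(cls, label=None, **kwargs):
        # Runs once for every new subclass, after the class is created.
        super().__init_subclass__(**kwargs)
        cls.label = label or cls.__name__.lower()
        PluginBase.registry.append(cls)

class CsvLoader(PluginBase):
    pass

class JsonLoader(PluginBase, label='json'):  # extra class kwargs are
    pass                                     # forwarded to the hook

assert PluginBase.registry == [CsvLoader, JsonLoader]
assert CsvLoader.label == 'csvloader'
assert JsonLoader.label == 'json'
```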
It took me a while to digest all the ideas from the list here, but well, we're
not in a hurry. So, I updated PEP 487, and pushed the changes to github
I applied the following changes:
PEP 487 contained the possibility to set a namespace for subclasses.
The most important use case for this feature would be to have an
OrderedDict as the class definition namespace. As Eric Snow pointed
out, that will soon be standard anyway, so I took this feature out of
the PEP. The implementation on PyPI now just uses an OrderedDict
as a namespace, anticipating the expected changes to CPython.
I also did some reading on possible use cases for PEP 487, so that it
may actually be used by someone. Some Traits-like thing is a standard
use case, so I looked especially at IPython's traitlets, which are a
simple example of that use case.
Currently traitlets use both __new__ and __init__ of a metaclass. So I
tried to also introduce a __new_subclass__ the same way I introduced
__init_subclass__. This turned out much harder than I thought, actually
impossible, because it is type.__new__ that sets the method resolution
order, so making super() work in a __new_subclass__ hook is a
chicken-and-egg problem: we need the MRO to find the next base class to
call, but the most basic base class is the one creating the MRO. Nick,
how did you solve that problem in PEP 422?
Anyhow, I think that traitlets can also be written just using
__init_subclass__. There is just this weird hint in the docs that you should
use __new__ for metaclasses, not __init__, a hint I never understood,
as the reasons for choosing between __new__ and __init__ are precisely the
same for normal classes and metaclasses. So I think we don't miss out on much
when not having __new_subclass__.
I also updated the implementation of PEP 487, it's still at