(Context: Continuing to prepare for the core dev sprint next week. Since
the sprint is near, I'd greatly appreciate any quick comments or feedback.)
Following up on my collection of past beginning-contributor experiences:
I've gathered these stories in a dedicated GitHub repo and written a
(subjective!) summary of the main themes that I recognize in them, which
I've also included in the repo.
A "TL;DR" bullet list of those main themes:
* Slow/no responsiveness
* Long, slow process
* Hard to find where to contribute
* Mentorship helps a lot, but is scarce
* A lot to learn to get started
* It's intimidating
More specifically, something that has come up often is that maintaining
momentum for new contributors is crucial for them to become long-term
contributors. Most often, this comes up in relation to the first two
points: Suggestions or PRs receive no attention at all
("ignored") or stop receiving attention at some point ("lost to the void").
Unfortunately, the probability of this is pretty high for any issue/PR, so
for a new contributor this is almost guaranteed to happen while working on
one of their first few contributions. I've seen this happen many times, and
have found that I have to personally follow promising contributors' work to
ensure that this doesn't happen to them. I've also seen contributors learn
to actively seek out core devs when these situations arise, which is often
a successful tactic, but shouldn't be necessary so often.
Now, this is in large part a result of the fact that we core devs are not a
very large group, made up almost entirely of volunteers working on this in
their spare time. Last I checked, the total amount of paid development time
dedicated to developing Python is less than 3 full-time positions (i.e.
~100 hours a week).
The situation is clearly problematic enough that the PSF had concrete
plans to hire paid developers to review issues and PRs. However, those
plans have been put on hold indefinitely, since the PSF's funding has
shrunk dramatically since the COVID-19 outbreak (no PyCon!).
So, what can be done? Besides raising more funds (see a note on this
below), I think we can find ways to reduce how often issues/PRs become
"stalled". Here are some ideas:
1. *Generate reminders for reviewers when an issue or PR becomes "stalled"
due to them.* Personally, I've found that both b.p.o. and GitHub make it
relatively hard to remember to follow up on all of the many issues/PRs
you've taken part in reviewing. It takes considerable attention and
discipline to do so consistently, and reminders like these would have
helped me. Many (many!) times, all it took to get an issue/PR moving
forward (or closed) was a simple "ping?" comment.
2. *Generate reminders for contributors when an issue or PR becomes
"stalled" due to them.* Similar to the above, but I consider these separate.
3. *Advertise something like a "2-for-1" standing offer for reviews.* This
would give contributors an "official", acceptable way to get attention for
their issue/PR, other than "begging" for attention on a mailing list. There
are good ways for new contributors to be of significant help despite being
new to the project, such as checking whether old bugs are still relevant,
searching for duplicate issues, or applying old patches to the current code
and creating a PR. (This would be similar to Martin v. Löwis's 5-for-1
offer in 2012, which had little success but led to some interesting
discussions.)
4. *Encourage core devs to dedicate some of their time to working through
issues/PRs which are "ignored" or "stalled".* This would require first
generating reliable lists of issues/PRs in such states. This could be in
various forms, such as predefined GitHub/b.p.o. queries, a dedicated
web-page, a periodic message similar to b.p.o.'s "weekly summary" email, or
dedicated tags/labels for issues/PRs. (Perhaps prioritize "stalled" over
"ignored".)
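As a small sketch of what idea 4's "reliable lists" could look like, here is a minimal builder for a predefined GitHub search-API query listing long-inactive open PRs. The repo name and the 30-day threshold are illustrative choices, not part of the proposal:

```python
import datetime
import urllib.parse

def stalled_pr_query(repo: str, days: int = 30) -> str:
    """Build a GitHub search-API URL for open PRs with no activity in
    `days` or more -- a rough proxy for "stalled"."""
    cutoff = (datetime.date.today() - datetime.timedelta(days=days)).isoformat()
    q = f"repo:{repo} is:pr is:open updated:<{cutoff}"
    return "https://api.github.com/search/issues?q=" + urllib.parse.quote(q)

print(stalled_pr_query("python/cpython"))
```

The same query string also works directly in GitHub's web search box, so it could just as well be shared as a bookmarkable link.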
- Tal Einat
This is a mailing list repost of the Discourse thread at
The rendered version of the PEP can be found here:
The full text is also quoted in the Discourse thread.
The remainder of this email is the same introduction that I posted on Discourse.
I’m largely a fan of the Structural Pattern Matching proposal in PEP
634, but there’s one specific piece of the syntax proposal that I
strongly dislike: the idea of basing the distinction between capture
patterns and value patterns purely on whether they use a simple name
or a dotted name.
Thus PEP 642, which retains most of PEP 634 unchanged, but adjusts
value checks to use an explicit prefix syntax (either `?EXPR` for
equality constraints, or `?is EXPR` for identity constraints), rather
than relying on users learning that literals and attribute lookups in
a capture pattern mean a value lookup check, while simple names mean a
capture pattern (unlike both normal expressions, where all three mean
a value lookup, and assignment targets, where both simple and dotted
names bind a new reference).
The PEP itself has a lot of words explaining why I’ve made the design
decisions I have, as well as the immediate and potential future
benefits offered by using an explicit prefix syntax for value
constraints, but the super short form goes like this:
* if you don’t like match statements at all, or wish we were working
on designing a C-style switch statement instead, then PEP 642 isn’t
going to appeal to you any more than PEP 634 does
* if, like me, you don’t like the idea of breaking the existing
property of Python that caching the result of a value lookup
subexpression in a local variable and then using that variable in
place of the original subexpression should “just work”, then PEP 642’s
explicit constraint prefix syntax may be more to your liking
* however, if the idea of the `?` symbol becoming part of Python’s
syntax doesn’t appeal to you, then you may consider any improved
clarity of intent that PEP 642 might offer to not be worth that cost
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
The timeline for this year's election will be the same as last year.
* The nomination period will begin Nov 1, 2020 (do not post nominations
before then)
* Nomination period will end Nov 15, 2020
* Voting will begin Dec 1, 2020
* Voting will end Dec 15, 2020
Nominations will be collected via https://discuss.python.org/ (more details
to follow on Nov 1).
New for this year: Ernest W. Durbin III will be running the vote along with
the assistance of Joe Carey, a PSF employee. They will be co-admins going
forward. I have cc'ed them in on this thread as well in case there are any
questions.
To avoid BytesWarning, the compiler needs to use a hack when it
needs to store bytes and str constants in one dict or set.
BytesWarning has maintenance costs. It is not huge, but significant.
When can we remove it? My idea is:
3.10: Deprecate the -b option.
3.11: Make the -b option a no-op. BytesWarning is never emitted.
3.12: Remove the -b option.
BytesWarning will be deprecated in the documentation, but not removed.
Users who want to use the -b option during 2-to-3 conversion will need to
use Python 3.10 or earlier for a while.
Inada Naoki <songofacandy(a)gmail.com>
I’m not on this list. But I have offered to help - if there are tasks that need to be done to help this I can help put the weight of a commercial entity behind it whether that involves assigning our developers to work on this, helping pay for external developers to do so, or assisting with access to machine resources.
For the record there are multiple illumos distributions and most are both free and run reasonably well in virtual machines. Claiming that developers don't have access as a reason to discontinue the port is a bit disingenuous. Anyone can get access if they want, and if they can figure out how to log in and use Linux then this should be pretty close to trivial for them.
What’s more likely is that some group of developers aren’t interested in supporting stuff they don’t actively use. I get it. It’s easier to work in a monoculture. But in this case there are many many more users of this that would be impacted than a naive examination of downloads will show.
Of course this all presumes that the core Python team still places value on being a cross platform portable tool. I can help solve most of the other concerns - except for this one.
I propose to drop the Solaris support in Python to reduce the Python
maintenance burden.
I wrote a draft PR to show how much code could be removed (around 700
lines in 65 files):
In 2016, I asked if we still wanted to maintain the Solaris support in
Python, because Solaris buildbots were failing for longer than 6
months and nobody was able to fix them. It was requested to find a
core developer volunteer to fix Solaris issues and to set up a Solaris
buildbot.
Four years later, nothing has happened. Moreover, in 2018, Oracle laid
off the Solaris development engineering staff. There are around 25
open Python bugs specific to Solaris.
I see 3 options:
* Current best effort support (no change): changes only happen if a
core dev volunteers to review and merge a change written by a
contributor.
* Schedule the removal in 2 Python releases (Python 3.12) and start to
announce that Solaris support is going to be removed
* Remove the Solaris code right now (my proposition): Solaris code
will have to be maintained outside the official Python code base, as
Solaris has a few specific features visible at the Python level:
select.devpoll, os.stat().st_fstype and stat.S_ISDOOR().
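The Solaris-specific surface listed above is easy to probe for at the Python level; on non-Solaris platforms the checks simply come back negative (a small sketch):

```python
import os
import select
import stat

# select.devpoll exists only where the /dev/poll interface does
# (Solaris and derivatives); elsewhere the attribute is absent.
print(hasattr(select, "devpoll"))

# stat.S_ISDOOR reports whether a mode describes a Solaris "door";
# on other platforms it is defined but always returns False.
print(stat.S_ISDOOR(0))

# os.stat() results carry st_fstype only on Solaris.
print(hasattr(os.stat("."), "st_fstype"))
```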
While it's unclear to me if Oracle still actively maintains Solaris
(latest release in 2018, no major update since 2018), Illumos and
OpenSolaris (variants or "forks") still seem to be active.
In 2019, a Solaris blog post explains that Solaris 11.4 still uses
Python 2.7 but plans to migrate to Python 3, and Python 3.4 is also
available. These two Python versions are no longer supported.
The question is if the Python project has to maintain the Solaris
specific code or if this code should now be maintained outside Python.
What do you think? Should we wait 5 more years? Should we expect a
company will offer to maintain the Solaris support? Is there a
motivated core developer to fix Solaris issues? As I wrote, nothing has
happened in the last 4 years...
Night gathers, and now my watch begins. It shall not end until my death.
CPython is slow. We all know that, yet little is done to fix it.
I'd like to change that.
I have a plan to speed up CPython by a factor of five over the next few
years. But it needs funding.
I am aware that there have been several promised speed ups in the past
that have failed. You might wonder why this is different.
Here are three reasons:
1. I already have working code for the first stage.
2. I'm not promising a silver bullet. I recognize that this is a
substantial amount of work and needs funding.
3. I have extensive experience in VM implementation, not to mention a
PhD in the subject.
My ideas for possible funding, as well as the actual plan of
development, can be found here:
I'd love to hear your thoughts on this.
First of all, I'm a big fan of the changes being proposed here since in my
code I prefer the 'union' style of logic over the OO style.
I was curious, though, if there are any plans for the match operator to
support async stuff. I'm interested in the problem of waiting on multiple
asyncio tasks concurrently, and having a branch of code execute depending
on the task.
Currently this can be done by using asyncio.wait, looping over the done set
and executing an if-else chain there, but this is quite tiresome. Go has a
select statement (https://tour.golang.org/concurrency/5) that looks like
this:

    select {
    case <-ch1:
        fmt.Println("Received from ch1")
    case <-ch2:
        fmt.Println("Received from ch2")
    }
Speaking personally, this is a Go feature I miss a lot when writing asyncio
code. The syntax is similar to what's being proposed here. Although it
could be a separate thing added later, async match, I guess.
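For reference, the asyncio.wait pattern described above looks roughly like this (task payloads and timings are illustrative):

```python
import asyncio

async def main():
    t1 = asyncio.create_task(asyncio.sleep(0.01, result="from ch1"))
    t2 = asyncio.create_task(asyncio.sleep(0.5, result="from ch2"))
    done, pending = await asyncio.wait(
        {t1, t2}, return_when=asyncio.FIRST_COMPLETED)
    received = []
    for task in done:
        # The tiresome if-else chain: dispatch on which task completed.
        if task is t1:
            received.append("Received " + task.result())
        elif task is t2:
            received.append("Received " + task.result())
    for task in pending:
        task.cancel()
    return received

print(asyncio.run(main()))  # ['Received from ch1']
```

A hypothetical "async match" would essentially fold the wait, the loop, and the if-else chain into one construct, much as Go's select does.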
> Message: 2
> Date: Thu, 22 Oct 2020 09:48:54 -0700
> From: Guido van Rossum <guido(a)python.org>
> Subject: [Python-Dev] Pattern matching reborn: PEP 622 is dead, long
> live PEP 634, 635, 636
> To: Python-Dev <python-dev(a)python.org>
> After the pattern matching discussion died out, we discussed it with the
> Steering Council. Our discussion ended fairly positive, but there were a
> lot of problems with the text. We decided to abandon the old PEP 622 and
> break it up into three parts:
> - PEP 634: Specification
> - PEP 635: Motivation and Rationale
> - PEP 636: Tutorial
> This turned out to be more work than I had expected (basically we wrote all
> new material) but we've finally made it to a point where we can request
> feedback and submit the new version to the SC for approval.
> While the text of the proposal is completely different, there aren't that
> many substantial changes:
> - We changed walrus patterns ('v := p') to AS patterns ('p as v').
> - We changed the method of comparison for literals None, False, True to use
> 'is' instead of '=='.
> - SyntaxError if an irrefutable case is followed by another case block.
> - SyntaxError if an irrefutable pattern occurs on the left of '|', e.g.
> 'x | [x]'.
> - We dropped the `@sealed` decorator and everything aimed at static type
> checkers.
>  An irrefutable pattern is one that never fails, notably a wildcard or a
> capture. An irrefutable case has an irrefutable pattern at the top and no
> guard. Irrefutability is defined recursively, since an '|' with an
> irrefutable pattern on either side is itself irrefutable, and so is an AS
> pattern with an irrefutable pattern before 'as'.
> The following issues were specifically discussed with the SC:
> - Concerns about side effects and undefined behavior. There's now some
> specific language about this in a few places (giving the compiler freedom
> to optimize), and a section "Side Effects and Undefined Behavior".
> - Footgun if `case NAME:` is followed by another case. This is now a
> SyntaxError.
> - Adding an 'else' clause. We decided not to add this; motivation in PEP
> 635.
> - Alternative 'OR' symbol. Not changed; see PEP 635.
> - Alternative wildcard symbol. Not changed, but Thomas wrote PEP 640 which
> proposes '?' as a general assignment target. PEP 635 has some language
> against that idea.
> - Alternative indentation schemes. We decided to stick with the original
> proposal; see PEP 635.
> - Marking all capture variables with a sigil. We all agreed this was a bad
> idea; see PEP 635.
> --Guido van Rossum (python.org/~guido)
> *Pronouns: he/him **(why is my pronoun here?)*
Hi Nick and Everyone,
We had actually considered a similar idea (i.e. load sigils) during
the design phase of pattern matching. In the interest of having a
rule that is as simple as possible, we had proposed to use a leading
dot as a universal marker. Tin's example would thus have been written
case (src, None): ...
case (.c, msg): ...
case (.s, msg): ...
However, this idea was met with some resistance. After briefly
looking at various alternatives again, we eventually decided to defer
this discussion entirely, allowing for the community to perhaps gain
some experience with the basic pattern matching infrastructure and
have a more in-depth discussion later on.
Paul also wrote:
> Nice to hear that there're (high-hierarchy) people who want to do
> 2nd round on intent-explicitizing sigils, thanks.
While we from the PEP-622/634/635/636 team are quite adamant that
stores should *not* be marked, having a second round of discussion
about load sigils is exactly what we aimed for! However, we
should consider this to be a discussion about an *extension* of the
existing PEPs (634-636), rather than about modifying them:
*The introduction of a load sigil (be it the dot or a question mark or
anything else) can actually be discussed quite independently of the
rest of pattern matching.*
You might have noticed that the original PEP 622 contained a lot more
than the current PEPs 634-636. This is intentional: with the current
pattern matching PEPs, we boiled down the entire concept to the basic
infrastructure that we need in order to get it going; a basic "starter
kit" if you will. There are a lot of ideas around for extending this
basic pattern matching and making it much more powerful and versatile,
including load sigils as proposed by PEP 642. But let us perhaps just
start with pattern matching---hopefully in 3.10 :)---and then
gradually build on that. Otherwise, I am afraid we will just keep
running in circles and never get it to lift off.
PEP 634/5/6 presents a possible implementation of pattern matching for
Python.
Much of the discussion around PEP 634, and PEP 622 before it, seems to
imply that PEP 634 is synonymous with pattern matching; that if you
reject PEP 634 then you are rejecting pattern matching.
That simply isn't true.
Can we discuss whether we want pattern matching in Python and
the broader semantics first, before dealing with low level details?
Do we want pattern matching in Python at all?
Pattern matching works really well in statically-typed, functional
languages.
The lack of mutability, constrained scope and the ability of the
compiler to distinguish let variables from constants means that pattern
matching code has fewer errors, and can be compiled efficiently.
Pattern matching works less well in dynamically-typed, functional
languages and statically-typed, procedural languages.
Nevertheless, it works well enough for it to be a popular feature in
both Erlang and Rust.
In dynamically-typed, procedural languages, however, it is not clear (at
least not to me) that it works well enough to be worthwhile.
That is not to say that pattern matching could never be of value in Python,
but PEP 635 fails to demonstrate that it can (although it does a better
job than PEP 622).
Should match be an expression, or a statement?
Do we want a fancy switch statement, or a powerful expression?
Expressions have the advantage of not leaking (like comprehensions in
Python 3), but statements are easier to work with.
Can pattern matching make it clear what is assigned?
Embedding the variables to be assigned into a pattern, makes the pattern
concise, but requires discarding normal Python syntax and inventing a
new sub-language. Could we make patterns fit Python better?
Is it possible to make assignment to variables clear, and unambiguous,
and allow the use of symbolic constants at the same time?
I think it is, but PEP 634 fails to do this.
How should pattern matching be integrated with the object model?
What special method(s) should be added? How and when should they be called?
PEP 634 largely disregards the object model, meaning it has many special
cases, and is inefficient.
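To make the object-model question concrete, here is a purely hypothetical sketch of the kind of hook being asked about; the `__match__` name and signature are invented for illustration and are not part of PEP 634:

```python
class Point:
    """Illustration only: a class that controls its own deconstruction
    via a hypothetical __match__ protocol method."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __match__(self):
        # Return the values a pattern would be allowed to deconstruct.
        return (self.x, self.y)

p = Point(1, 2)
x, y = p.__match__()
print(x, y)  # 1 2
```

A protocol like this would let the interpreter dispatch deconstruction through a single well-defined method, rather than special-casing classes as PEP 634's `__match_args__` mechanism does.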
The semantics must be well defined.
Language extension PEPs should define the semantics of those
extensions. For example, PEP 343 and PEP 380 both did.
PEP 634 just waves its hands and talks about undefined behavior.
I would ask anyone who wants pattern matching added to Python not to
support PEP 634.
PEP 634 just isn't a good fit for Python, and we deserve something better.