It has been a while since I posted a copy of PEP 1 to the mailing
lists and newsgroups. I've recently done some updating of a few
sections, so in the interest of gaining wider community participation
in the Python development process, I'm posting the latest revision of
PEP 1 here. A version of the PEP is always available on-line at
-------------------- snip snip --------------------
Title: PEP Purpose and Guidelines
Version: $Revision: 1.36 $
Last-Modified: $Date: 2002/07/29 18:34:59 $
Author: Barry A. Warsaw, Jeremy Hylton
Post-History: 21-Mar-2001, 29-Jul-2002
What is a PEP?
PEP stands for Python Enhancement Proposal. A PEP is a design
document providing information to the Python community, or
describing a new feature for Python. The PEP should provide a
concise technical specification of the feature and a rationale for
the feature.
We intend PEPs to be the primary mechanisms for proposing new
features, for collecting community input on an issue, and for
documenting the design decisions that have gone into Python. The
PEP author is responsible for building consensus within the
community and documenting dissenting opinions.
Because the PEPs are maintained as plain text files under CVS
control, their revision history is the historical record of the
feature proposal.
Kinds of PEPs
There are two kinds of PEPs. A standards track PEP describes a
new feature or implementation for Python. An informational PEP
describes a Python design issue, or provides general guidelines or
information to the Python community, but does not propose a new
feature. Informational PEPs do not necessarily represent a Python
community consensus or recommendation, so users and implementors
are free to ignore informational PEPs or follow their advice.
PEP Work Flow
The PEP editor, Barry Warsaw <peps(a)python.org>, assigns a number
to each PEP and changes its status.
The PEP process begins with a new idea for Python. It is highly
recommended that a single PEP contain a single key proposal or new
idea. The more focussed the PEP, the more successful it tends
to be. The PEP editor reserves the right to reject PEP proposals
if they appear too unfocussed or too broad. If in doubt, split
your PEP into several well-focussed ones.
Each PEP must have a champion -- someone who writes the PEP using
the style and format described below, shepherds the discussions in
the appropriate forums, and attempts to build community consensus
around the idea. The PEP champion (a.k.a. Author) should first
attempt to ascertain whether the idea is PEP-able. Small
enhancements or patches often don't need a PEP and can be injected
into the Python development work flow with a patch submission to
the SourceForge patch manager or feature request tracker.
The PEP champion then emails the PEP editor <peps(a)python.org> with
a proposed title and a rough, but fleshed out, draft of the PEP.
This draft must be written in PEP style as described below.
If the PEP editor approves, he will assign the PEP a number, label
it as standards track or informational, give it status 'draft',
and create and check-in the initial draft of the PEP. The PEP
editor will not unreasonably deny a PEP. Reasons for denying PEP
status include duplication of effort, being technically unsound,
not providing proper motivation or addressing backwards
compatibility, or not being in keeping with the Python philosophy. The
BDFL (Benevolent Dictator for Life, Guido van Rossum) can be
consulted during the approval phase, and is the final arbiter
of the draft's PEP-ability.
If a pre-PEP is rejected, the author may elect to take the pre-PEP
to the comp.lang.python newsgroup (a.k.a. python-list(a)python.org
mailing list) to help flesh it out, gain feedback and consensus
from the community at large, and improve the PEP for
re-submission.
The author of the PEP is then responsible for posting the PEP to
the community forums, and marshaling community support for it. As
updates are necessary, the PEP author can check in new versions if
they have CVS commit permissions, or can email new PEP versions to
the PEP editor for committing.
Standards track PEPs consist of two parts, a design document and
a reference implementation. The PEP should be reviewed and
accepted before a reference implementation is begun, unless a
reference implementation will aid people in studying the PEP.
Standards Track PEPs must include an implementation - in the form
of code, patch, or URL to same - before they can be considered
Final.
PEP authors are responsible for collecting community feedback on a
PEP before submitting it for review. A PEP that has not been
discussed on python-list(a)python.org and/or python-dev(a)python.org
will not be accepted. However, wherever possible, long open-ended
discussions on public mailing lists should be avoided. Strategies
to keep the discussions efficient include setting up a separate
SIG mailing list for the topic, having the PEP author accept
private comments in the early design phases, etc. PEP authors
should use their discretion here.
Once the authors have completed a PEP, they must inform the PEP
editor that it is ready for review. PEPs are reviewed by the BDFL
and his chosen consultants, who may accept or reject a PEP or send
it back to the author(s) for revision.
Once a PEP has been accepted, the reference implementation must be
completed. When the reference implementation is complete and
accepted by the BDFL, the status will be changed to `Final.'
A PEP can also be assigned status `Deferred.' The PEP author or
editor can assign the PEP this status when no progress is being
made on the PEP. Once a PEP is deferred, the PEP editor can
re-assign it to draft status.
A PEP can also be `Rejected'. Perhaps after all is said and done
it was not a good idea. It is still important to have a record of
this fact.
PEPs can also be replaced by a different PEP, rendering the
original obsolete. This is intended for Informational PEPs, where
version 2 of an API can replace version 1.
PEP work flow is as follows:
Draft -> Accepted -> Final -> Replaced
Some informational PEPs may also have a status of `Active' if they
are never meant to be completed. E.g. PEP 1.
What belongs in a successful PEP?
Each PEP should have the following parts:
1. Preamble -- RFC822 style headers containing meta-data about the
PEP, including the PEP number, a short descriptive title
(limited to a maximum of 44 characters), the names, and
optionally the contact info for each author, etc.
2. Abstract -- a short (~200 word) description of the technical
issue being addressed.
3. Copyright/public domain -- Each PEP must either be explicitly
labelled as placed in the public domain (see this PEP as an
example) or licensed under the Open Publication License.
4. Specification -- The technical specification should describe
the syntax and semantics of any new language feature. The
specification should be detailed enough to allow competing,
interoperable implementations for any of the current Python
platforms (CPython, JPython, Python .NET).
5. Motivation -- The motivation is critical for PEPs that want to
change the Python language. It should clearly explain why the
existing language specification is inadequate to address the
problem that the PEP solves. PEP submissions without
sufficient motivation may be rejected outright.
6. Rationale -- The rationale fleshes out the specification by
describing what motivated the design and why particular design
decisions were made. It should describe alternate designs that
were considered and related work, e.g. how the feature is
supported in other languages.
The rationale should provide evidence of consensus within the
community and discuss important objections or concerns raised
during discussion.
7. Backwards Compatibility -- All PEPs that introduce backwards
incompatibilities must include a section describing these
incompatibilities and their severity. The PEP must explain how
the author proposes to deal with these incompatibilities. PEP
submissions without a sufficient backwards compatibility
treatise may be rejected outright.
8. Reference Implementation -- The reference implementation must
be completed before any PEP is given status 'Final,' but it
need not be completed before the PEP is accepted. It is better
to finish the specification and rationale first and reach
consensus on it before writing code.
The final implementation must include test code and
documentation appropriate for either the Python language
reference or the standard library reference.
PEPs are written in plain ASCII text, and should adhere to a
rigid style. There is a Python script that parses this style and
converts the plain text PEP to HTML for viewing on the web.
PEP 9 contains a boilerplate template you can use to get
started writing your PEP.
Each PEP must begin with an RFC822 style header preamble. The
headers must appear in the following order. Headers marked with
`*' are optional and are described below. All other headers are
required.
PEP: <pep number>
Title: <pep title>
Version: <cvs version string>
Last-Modified: <cvs date string>
Author: <list of authors' real names and optionally, email addrs>
* Discussions-To: <email address>
Status: <Draft | Active | Accepted | Deferred | Final | Replaced>
Type: <Informational | Standards Track>
* Requires: <pep numbers>
Created: <date created on, in dd-mmm-yyyy format>
* Python-Version: <version number>
Post-History: <dates of postings to python-list and python-dev>
* Replaces: <pep number>
* Replaced-By: <pep number>
The Author: header lists the names and optionally, the email
addresses of all the authors/owners of the PEP. The format of the
author entry should be
address(a)dom.ain (Random J. User)
if the email address is included, and just
Random J. User
if the address is not given. If there are multiple authors, each
should be on a separate line following RFC 822 continuation line
conventions. Note that personal email addresses in PEPs will be
obscured as a defense against spam harvesters.
Standards track PEPs must have a Python-Version: header which
indicates the version of Python that the feature will be released
with. Informational PEPs do not need a Python-Version: header.
While a PEP is in private discussions (usually during the initial
Draft phase), a Discussions-To: header will indicate the mailing
list or URL where the PEP is being discussed. No Discussions-To:
header is necessary if the PEP is being discussed privately with
the author, or on the python-list or python-dev email mailing
lists. Note that email addresses in the Discussions-To: header
will not be obscured.
Created: records the date that the PEP was assigned a number,
while Post-History: is used to record the dates of when new
versions of the PEP are posted to python-list and/or python-dev.
Both headers should be in dd-mmm-yyyy format, e.g. 14-Aug-2001.
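The dd-mmm-yyyy header date format maps directly onto the standard strptime/strftime codes; a minimal sketch (note that %b assumes an English month-name locale):

```python
# Sketch: parsing and re-emitting the dd-mmm-yyyy date format used in
# the Created: and Post-History: headers.  %b is locale-dependent and
# assumes English month abbreviations here.
from datetime import datetime

created = datetime.strptime("14-Aug-2001", "%d-%b-%Y")
print(created.year, created.month, created.day)   # 2001 8 14
print(created.strftime("%d-%b-%Y"))
```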
PEPs may have a Requires: header, indicating the PEP numbers that
this PEP depends on.
PEPs may also have a Replaced-By: header indicating that a PEP has
been rendered obsolete by a later document; the value is the
number of the PEP that replaces the current document. The newer
PEP must have a Replaces: header containing the number of the PEP
that it rendered obsolete.
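Because the preamble is RFC822-style, it can be read with the standard library's email parser; a minimal sketch (the header values below are illustrative, not a complete preamble):

```python
# Sketch: parsing an RFC822-style PEP preamble with the stdlib email
# parser.  The values are illustrative.
from email.parser import HeaderParser

preamble = (
    "PEP: 1\n"
    "Title: PEP Purpose and Guidelines\n"
    "Author: Barry A. Warsaw, Jeremy Hylton\n"
    "Status: Active\n"
    "Type: Informational\n"
)
headers = HeaderParser().parsestr(preamble)
print(headers["Title"])    # PEP Purpose and Guidelines
print(headers["Status"])   # Active
```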
PEP Formatting Requirements
PEP headings must begin in column zero and the initial letter of
each word must be capitalized as in book titles. Acronyms should
be in all capitals. The body of each section must be indented 4
spaces. Code samples inside body sections should be indented a
further 4 spaces, and other indentation can be used as required to
make the text readable. You must use two blank lines between the
last line of a section's body and the next section heading.
You must adhere to the Emacs convention of adding two spaces at
the end of every sentence. You should fill your paragraphs to
column 70, but under no circumstances should your lines extend
past column 79. If your code samples spill over column 79, you
should rewrite them.
Tab characters must never appear in the document at all. A PEP
should include the standard Emacs stanza included by example at
the bottom of this PEP.
A PEP must contain a Copyright section, and it is strongly
recommended to put the PEP in the public domain.
When referencing an external web page in the body of a PEP, you
should include the title of the page in the text, with a
footnote reference to the URL. Do not include the URL in the body
text of the PEP. E.g.
Refer to the Python Language web site for more details.
When referring to another PEP, include the PEP number in the body
text, such as "PEP 1". The title may optionally appear. Add a
footnote reference that includes the PEP's title and author. It
may optionally include the explicit URL on a separate line, but
only in the References section. Note that the pep2html.py script
will calculate URLs automatically, e.g.:
Refer to PEP 1  for more information about PEP style
 PEP 1, PEP Purpose and Guidelines, Warsaw, Hylton
If you decide to provide an explicit URL for a PEP, please use
this as the URL template:
PEP numbers in URLs must be padded with zeros from the left, so as
to be exactly 4 characters wide; however, PEP numbers in text are
never padded.
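The padding rule is just fixed-width zero-fill; a quick sketch:

```python
# Sketch: a PEP number zero-padded to four digits for a URL, versus the
# unpadded form used in body text.
pep = 8
print("pep-%04d" % pep)   # pep-0008  (URL form)
print("PEP %d" % pep)     # PEP 8     (body-text form)
```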
Reporting PEP Bugs, or Submitting PEP Updates
How you report a bug or submit a PEP update depends on several
factors, such as the maturity of the PEP, the preferences of the
PEP author, and the nature of your comments. For the early draft
stages of the PEP, it's probably best to send your comments and
changes directly to the PEP author. For more mature, or finished
PEPs you may want to submit corrections to the SourceForge bug
manager or better yet, the SourceForge patch manager so that
your changes don't get lost. If the PEP author is a SF developer,
assign the bug/patch to him, otherwise assign it to the PEP
editor.
When in doubt about where to send your changes, please check first
with the PEP author and/or PEP editor.
PEP authors who are also SF committers can update the PEPs
themselves by using "cvs commit" to commit their changes.
Remember to also push the formatted PEP text out to the web by
doing the following:
% python pep2html.py -i NUM
where NUM is the number of the PEP you want to push out. See also
% python pep2html.py --help
Transferring PEP Ownership
It occasionally becomes necessary to transfer ownership of PEPs to
a new champion. In general, we'd like to retain the original
author as a co-author of the transferred PEP, but that's really up
to the original author. A good reason to transfer ownership is
because the original author no longer has the time or interest in
updating it or following through with the PEP process, or has
fallen off the face of the 'net (i.e. is unreachable or not
responding to email). A bad reason to transfer ownership is
because you don't agree with the direction of the PEP. We try to
build consensus around a PEP, but if that's not possible, you can
always submit a competing PEP.
If you are interested in assuming ownership of a PEP, send a message
asking to take over, addressed to both the original author and the
PEP editor <peps(a)python.org>. If the original author doesn't
respond to email in a timely manner, the PEP editor will make a
unilateral decision (it's not like such decisions can't be
reversed :).
References and Footnotes
 This historical record is available by the normal CVS commands
for retrieving older revisions. For those without direct access
to the CVS tree, you can browse the current and past PEP revisions
via the SourceForge web site at
 The script referred to here is pep2html.py, which lives in
the same directory in the CVS tree as the PEPs themselves.
Try "pep2html.py --help" for details.
The URL for viewing PEPs on the web is
 PEP 9, Sample PEP Template
This document has been placed in the public domain.
In Python 2.5 `0or` was accepted by the Python parser. It became an
error in 2.6 because "0o" started being recognized as an incomplete
octal number. `1or` is still accepted.
On the other hand, `1if 2else 3` is accepted despite the fact that "2e"
can be recognized as an incomplete floating point number. In this case
the tokenizer pushes "e" back and returns "2".
Shouldn't it do the same with "0o"? It is possible to make `0or`
parseable again. The Python implementation of the tokenizer is able
to tokenize this example:
$ echo '0or[]' | ./python -m tokenize
1,0-1,1: NUMBER '0'
1,1-1,3: NAME 'or'
1,3-1,4: OP '['
1,4-1,5: OP ']'
1,5-1,6: NEWLINE '\n'
2,0-2,0: ENDMARKER ''
On the other hand, all these examples look weird. There is an asymmetry:
`1or 2` is valid syntax, but `1 or2` is not. It is hard to visually
recognize the boundary between a number and the following identifier or
keyword, especially since numbers can contain letters ("b", "e", "j",
"o", "x") and underscores, and identifiers can contain digits. Letters,
digits, and underscores can appear on both sides of the boundary.
I propose to change the Python syntax by adding a requirement that there
be whitespace or a delimiter between a numeric literal and the
following keyword or identifier.
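The asymmetry described above can be checked directly with compile(); a minimal sketch (note that recent CPython versions emit a SyntaxWarning when a keyword immediately follows a number, so the sketch silences it):

```python
# Sketch of the asymmetry: `1or 2` parses because the tokenizer splits
# it into NUMBER '1' and NAME 'or', while `1 or2` fails because `or2`
# is read as a single identifier.
import warnings

def parses(src):
    """Return True if `src` compiles as an expression."""
    try:
        with warnings.catch_warnings():
            # newer CPythons warn about a keyword right after a number
            warnings.simplefilter("ignore", SyntaxWarning)
            compile(src, "<example>", "eval")
        return True
    except SyntaxError:
        return False

print(parses("1or 2"))   # True
print(parses("1 or2"))   # False
```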
It's finally time to schedule the last releases in Python 2's life. There will be two more releases of Python 2.7: Python 2.7.17 and Python 2.7.18.
Python 2.7.17 release candidate 1 will happen on October 5th followed by the final release on October 19th.
I'm going to time Python 2.7.18 to coincide with PyCon 2020 in April, so attendees can enjoy some collective catharsis. We'll still say January 1st is the official EOL date.
Thanks to Sumana Harihareswara, there's now a FAQ about the Python 2 sunset on the website: https://www.python.org/doc/sunset-python-2/
Read in the browser: https://www.python.org/dev/peps/pep-0585/
Read the source: https://raw.githubusercontent.com/python/peps/master/pep-0585.rst
The following PEP has been discussed on typing-sig already and a prototype implementation exists for it. I'm extending it now for wider feedback on python-dev, with the intent to present the final version for the Steering Council's consideration by mid-March.
Title: Type Hinting Generics In Standard Collections
Author: Łukasz Langa <lukasz(a)python.org>
Discussions-To: Typing-Sig <typing-sig(a)python.org>
Type: Standards Track
Static typing as defined by PEPs 484, 526, 544, 560, and 563 was built incrementally on top of the existing Python runtime and constrained by existing syntax and runtime behavior. This led to the existence of a duplicated collection hierarchy in the typing module due to generics (for example typing.List and the built-in list).
This PEP proposes to enable support for the generics syntax in all standard collections currently available in the typing module.
Rationale and Goals
This change removes the necessity for a parallel type hierarchy in the typing module, making it easier for users to annotate their programs and easier for teachers to teach Python.
Generic (n.) -- a type that can be parametrized, typically a container. Also known as a parametric type or a generic type. For example: dict.
Parametrized generic -- a specific instance of a generic with the expected types for container elements provided. Also known as a parametrized type. For example: dict[str, int].
Tooling, including type checkers and linters, will have to be adapted to recognize standard collections as generics.
On the source level, the newly described functionality requires Python 3.9. For use cases restricted to type annotations, Python files with the "annotations" future-import (available since Python 3.7) can parametrize standard collections, including builtins. To reiterate, that depends on the external tools understanding that this is valid.
Starting with Python 3.7, when from __future__ import annotations is used, function and variable annotations can parametrize standard collections directly. Example:
from __future__ import annotations
def find(haystack: dict[str, list[int]]) -> int:
    ...
Usefulness of this syntax before PEP 585 <https://www.python.org/dev/peps/pep-0585> is limited as external tooling like Mypy does not recognize standard collections as generic. Moreover, certain features of typing like type aliases or casting require putting types outside of annotations, in runtime context. While these are relatively less common than type annotations, it's important to allow using the same type syntax in all contexts. This is why starting with Python 3.9, the following collections become generic using __class_getitem__() to parametrize contained types:
tuple # typing.Tuple
list # typing.List
dict # typing.Dict
set # typing.Set
frozenset # typing.FrozenSet
type # typing.Type
collections.abc.Set # typing.AbstractSet
contextlib.AbstractContextManager # typing.ContextManager
contextlib.AbstractAsyncContextManager # typing.AsyncContextManager
re.Pattern # typing.Pattern, typing.re.Pattern
re.Match # typing.Match, typing.re.Match
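On Python 3.9 and newer, the collections listed above then work in runtime contexts such as type aliases, not only inside annotations; a minimal sketch:

```python
# Sketch (Python 3.9+): a builtin collection used as a generic in a
# runtime context (a type alias), not just inside an annotation.
Vector = list[float]

def scale(v: Vector, k: float) -> Vector:
    return [k * x for x in v]

print(scale([1.0, 2.0, 3.0], 2.0))   # [2.0, 4.0, 6.0]
```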
Importing those from typing is deprecated. Due to PEP 563 <https://www.python.org/dev/peps/pep-0563> and the intention to minimize the runtime impact of typing, this deprecation will not generate DeprecationWarnings. Instead, type checkers may warn about such deprecated usage when the target version of the checked program is signalled to be Python 3.9 or newer. It's recommended to allow for those warnings to be silenced on a project-wide basis.
The deprecated functionality will be removed from the typing module in the first Python version released 5 years after the release of Python 3.9.0.
Parameters to generics are available at runtime
Preserving the generic type at runtime enables introspection of the type which can be used for API generation or runtime type checking. Such usage is already present in the wild.
Just like with the typing module today, the parametrized generic types listed in the previous section all preserve their type parameters at runtime:
>>> tuple[int, ...]
tuple[int, ...]
>>> ChainMap[str, list[str]]
collections.ChainMap[str, list[str]]
This is implemented using a thin proxy type that forwards all method calls and attribute accesses to the bare origin type with the following exceptions:
- the __repr__ shows the parametrized type;
- the __origin__ attribute points at the non-parametrized generic class;
- the __args__ attribute is a tuple (possibly of length 1) of generic
  types passed to the original __class_getitem__;
- the __parameters__ attribute is a lazily computed tuple (possibly
  empty) of unique type variables found in __args__;
- the __getitem__ raises an exception to disallow mistakes like
  dict[str][str]. However, it allows e.g. dict[str, T][int] and in that
  case returns dict[str, int].
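These attributes can be observed directly on Python 3.9+; a minimal sketch:

```python
# Sketch (Python 3.9+): introspecting a parametrized builtin generic.
from typing import TypeVar

T = TypeVar("T")
alias = dict[str, T]
print(alias.__origin__)       # <class 'dict'>
print(alias.__args__)         # (<class 'str'>, ~T)
print(alias.__parameters__)   # (~T,)
print(alias[int])             # dict[str, int]

try:
    dict[str, int][int]       # no type variables left to substitute
except TypeError as exc:
    print("TypeError:", exc)
```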
This design means that it is possible to create instances of parametrized collections, like:
>>> l = list[str]()
>>> list is list[str]
False
>>> list == list[str]
False
>>> list[str] == list[str]
True
>>> list[str] == list[int]
False
>>> isinstance([1, 2, 3], list[str])
TypeError: isinstance() arg 2 cannot be a parametrized generic
>>> issubclass(list, list[str])
TypeError: issubclass() arg 2 cannot be a parametrized generic
>>> isinstance(list[str], types.GenericAlias)
True
Objects created with bare types and parametrized types are exactly the same. The generic parameters are not preserved in instances created with parametrized types, in other words generic types erase type parameters during object creation.
One important consequence of this is that the interpreter does not attempt to type check operations on the collection created with a parametrized type. This provides symmetry between:
l: list[str] = []
l = list[str]()
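The erasure behavior is easy to observe on Python 3.9+; a minimal sketch:

```python
# Sketch (Python 3.9+): generic parameters are erased at instantiation,
# and the interpreter performs no element type checking.
l = list[str]()
assert type(l) is list   # the instance is a plain list
l.append(42)             # an int in a list[str]() -- accepted silently
print(l)                 # [42]
```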
For accessing the proxy type from Python code, it will be exported from the types module as GenericAlias.
Pickling or (shallow- or deep-) copying a GenericAlias instance will preserve the type, origin, attributes and parameters.
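The round-trip behavior can be sketched as follows (Python 3.9+):

```python
# Sketch (Python 3.9+): pickling and copying a GenericAlias preserves
# the origin type and the type arguments.
import copy
import pickle

alias = dict[str, int]
assert pickle.loads(pickle.dumps(alias)) == alias
assert copy.deepcopy(alias) == alias
assert copy.copy(alias).__origin__ is dict
print("round-trips preserve the alias")
```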
Future standard collections must implement the same behavior.
A proof-of-concept or prototype implementation <https://bugs.python.org/issue39481> exists.
Keeping the status quo forces Python programmers to perform book-keeping of imports from the typing module for standard collections, making all but the simplest annotations cumbersome to maintain. The existence of parallel types is confusing to newcomers (why is there both list and List?).
The above problems don't exist in user-built generic classes, which share runtime functionality with the ability to be used as generic type annotations. Making standard collections harder to use in type hinting than user classes hindered typing adoption and usability.
It would be easier to implement __class_getitem__ on the listed standard collections in a way that doesn't preserve the generic type, in other words:
>>> tuple[int, ...]
tuple
>>> collections.ChainMap[str, list[str]]
collections.ChainMap
This is problematic as it breaks backwards compatibility: current equivalents of those types in the typing module do preserve the generic type:
>>> from typing import List, Tuple, ChainMap
>>> Tuple[int, ...]
typing.Tuple[int, ...]
>>> ChainMap[str, List[str]]
typing.ChainMap[str, typing.List[str]]
As mentioned in the "Implementation" section, preserving the generic type at runtime enables runtime introspection of the type which can be used for API generation or runtime type checking. Such usage is already present in the wild.
Additionally, implementing subscripts as identity functions would make Python less friendly to beginners. Say, if a user is mistakenly passing a list type instead of a list object to a function, and that function is indexing the received object, the code would no longer raise an error.
>>> l = list
>>> l[-1]
TypeError: 'type' object is not subscriptable
With __class_getitem__ as an identity function:
>>> l = list
>>> l[-1]
<class 'list'>
The indexing being successful here would likely end up raising an exception at a distance, confusing the user.
Disallowing instantiation of parametrized types
Given that the proxy type which preserves __origin__ and __args__ is mostly useful for runtime introspection purposes, we might have disallowed instantiation of parametrized types.
In fact, forbidding instantiation of parametrized types is what the typing module does today for types which parallel builtin collections (instantiation of other parametrized types is allowed).
The original reason for this decision was to discourage spurious parametrization which made object creation up to two orders of magnitude slower compared to the special syntax available for those builtin collections.
This rationale is not strong enough to allow the exceptional treatment of builtins. All other parametrized types can be instantiated, including parallels of collections in the standard library. Moreover, Python allows for instantiation of lists using list() and some builtin collections don't provide special syntax for instantiation.
Making isinstance(obj, list[str]) perform a check ignoring generics
An earlier version of this PEP suggested treating parametrized generics like list[str] as equivalent to their non-parametrized variants like list for purposes of isinstance() and issubclass(). This would be symmetrical to how list[str]() creates a regular list.
This design was rejected because isinstance() and issubclass() checks with parametrized generics would read like element-by-element runtime type checks. The result of those checks would be surprising, for example:
>>> isinstance([1, 2, 3], list[str])
True
Note the object doesn't match the provided generic type but isinstance() still returns True because it only checks whether the object is a list.
If a library is faced with a parametrized generic and would like to perform an isinstance() check using the base type, that type can be retrieved using the __origin__ attribute on the parametrized generic.
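That retrieval is a one-liner; a minimal sketch:

```python
# Sketch (Python 3.9+): a library handed a parametrized generic can
# check against its bare origin type via __origin__.
param = list[str]
value = [1, 2, 3]
print(isinstance(value, param.__origin__))   # True: it is a list
```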
Making isinstance(obj, list[str]) perform a runtime type check
This functionality requires iterating over the collection, which is a destructive operation for some of them. It would have been useful; however, implementing a type checker within Python that deals with complex types, nested type checking, type variables, string forward references, and so on is out of scope for this PEP.
Note on the initial draft
An early version of this PEP discussed matters beyond generics in standard collections. Those unrelated topics were removed for clarity.
This document is placed in the public domain or under the CC0-1.0-Universal license, whichever is more permissive.
I think that PEP 573 is ready to be accepted, to greatly improve the state
of extension modules in CPython 3.9.
It has come a long way since the original proposal and went through several
iterations and discussions by various interested people, effectively
reducing its scope quite a bit. So this is the last call for comments on
the latest version of the PEP, before I pronounce on it. Please keep
the discussion in this thread.
An unfortunate side-effect of making PyInterpreterState opaque in Python 3.8 is that it removed [PEP 523](https://www.python.org/dev/peps/pep-0523/) support. An issue was opened to try and fix this, but there seems to be a stalemate in the issue.
A key question is at what API level setting the frame evaluation function should live. No one is suggesting the stable ABI, but there isn't agreement between the public CPython API and the internal API (there also seems to be a suggestion to potentially remove PEP 523 support entirely).
And regardless of either, there also seems to be a disagreement about providing getters and setters to continue to try and hide PyInterpreterState regardless of which API level support is provided at (if any).
If you have an opinion, please weigh in on the issue.
Python 3.9 introduces many small incompatible changes which broke tons
of Python projects, including popular projects, some of them being
unmaintained but still widely used (like nose, last release in 2015).
Miro and I consider that Python 3.9 is putting too much pressure on
project maintainers to either abandon Python 2.7 right now (they need
to update the CI, the documentation, warn users, etc.), or to introduce
a *new* compatibility layer to support Python 3.9: a layer which would
be dropped as soon as Python 2.7 support is dropped (soon-ish).
It is too early for Python 3.9 to accumulate so many incompatible
changes on purpose; some incompatible changes, like the removal of the
collections aliases to ABCs, should be reverted and reapplied in Python
3.10. Python 3.9 would then be the last release which still contains
compatibility layers for Python 2.7.
Said differently, we request to maintain the small compatibility layer
in Python for one more cycle, instead of requesting every single
project maintainer to maintain it on their side. We consider that the
general maintenance burden is lower if it's kept in Python for now.
== Fedora COPR notify packages broken by Python 3.9 ==
In Python 3.9, Victor introduced tons of incompatible changes at the
beginning of the devcycle. His plan was to push as many as possible,
and later decide what to do... This time has come :-) He wrote PEP 606
"Python Compatibility Version" and we wrote PEP 608 "Coordinated
Python release", but both have been rejected. At least, it seems like
most people agreed that having a CI to get notified of broken projects
would help.
We are updating the future Fedora 33 to replace Python 3.8 with Python
3.9. We are using a tool called "COPR" which is like a sandbox and can
be seen as the CI discussed previously. It rebuilds Fedora using
Python 3.9 as /usr/bin/python3 (and /usr/bin/python !). We now have a
good overview of broken packages and which incompatible changes broke
them:
- Describes the Fedora change.
- Has package failures. Some packages fail because of broken dependencies.
- Has open Python 3.9 bug reports for Fedora packages. Some problems
have been fixed upstream already before reaching Fedora, most are only
fixed when the Fedora maintainers report the problems back to upstream.
Right now, there are 150+ packages broken by Python 3.9 incompatible changes.
== Maintenance burden ==
Many Python projects have not yet removed Python 2 support and Python
2 CI. It's not that they are in the "we will never drop Python 2
support" camp; it's just that they have not done it yet. Removing
Python 2 compatibility code from the codebase, the documentation, the
metadata and the CI is a boring task that brings nothing to users, and
it might require a new major release of the library. It is still too
early in 2020 to expect most projects to have already done this.
At the same time, we would like them to support Python 3.9 as soon in
the release cycle as possible. By removing Python 2 compatibility
layers from the 3.9 standard library, we are forcing project
maintainers to re-invent their own compatibility layers and copy-paste
code like this around. Example:
    try:
        from collections.abc import Sequence
    except ImportError:
        # Python 2.7 doesn't have collections.abc
        from collections import Sequence
If instead we remove collections.Sequence in 3.10, they will face this
decision early in 2021 and will simply fix their code by adding
".abc" in the proper places, without needing any more compatibility
layers. Of course, some projects will still declare Python 2 support
in 2021, but there will arguably not be that many.
While it's certainly tempting to have "more pure" code in the standard
library, maintaining the compatibility shims for one more release
isn't really that big of a maintenance burden, especially when
compared with dozens (hundreds?) of third-party libraries essentially
maintaining their own.
A good example of a broken package is the nose project, which is no
longer maintained (according to their website): the last release was
in 2015. It remains a very popular test runner. According to
libraries.io, it has 3 million downloads per month, 41.7K
dependent repositories and 325 dependent packages. We patched nose in
Fedora to fix Python 3.5, 3.6, 3.8 and now 3.9 compatibility issues.
People installing nose from PyPI with "pip install" get the
unmaintained flavor, which is currently completely broken on Python 3.9.
Someone should take over the nose project and maintain it again, or
every single project using nose should pick another tool (unittest,
nose2, pytest, whatever else). Both options will take a lot of time.
== Request to revert some incompatible changes ==
Incompatible changes which require "if <python3>: (...) else: (...)"
or "try: <python3 code> except (...): <python2 code>":
* Removed tostring/fromstring methods in array.array and base64 modules
* Removed collections aliases to ABC classes
* Removed fractions.gcd() function (which is similar to math.gcd())
* Removed the "U" mode of open(): having to use io.open() just for
Python 2 makes the code uglier
* Removed old plistlib API: 2.7 doesn't have the new API
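Each item on this list forces exactly the kind of shim the proposal wants to avoid. As one hedged sketch, a project supporting both 2.7 and 3.9 with fractions.gcd() removed has to carry something like this (math.gcd() only exists since Python 3.5):

```python
try:
    from math import gcd  # Python 3.5+
except ImportError:
    # Python 2.7: math.gcd() doesn't exist yet
    from fractions import gcd

print(gcd(12, 18))  # 6
```

Multiply this pattern by every removed name on the list above and by every affected project, and the copy-paste burden becomes visible.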
== Kept incompatible changes ==
Ok-ish incompatible changes (mostly only affect compatibility
libraries like six):
* _dummy_thread and dummy_threading modules removed: broke six, nine
and future projects. six and nine are already fixed for 3.9.
OK incompatible changes (can be replaced by the same code on 2.7 and 3.9):
* isAlive() method of threading.Thread has been removed:
Thread.is_alive() method is available in 2.7 and 3.9
* xml.etree.ElementTree.getchildren() and
xml.etree.ElementTree.getiterator() methods are removed from 3.9, but
list()/iter() works in 2.7 and 3.9
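These "OK" removals need no version check at all, since the replacement spelling already works on 2.7. A small sketch of both replacements:

```python
import threading
import xml.etree.ElementTree as ET

# Thread.is_alive() exists on both 2.7 and 3.9; isAlive() is gone in 3.9.
t = threading.Thread(target=lambda: None)
t.start()
t.join()
print(t.is_alive())  # False: the thread has already finished

# list() and .iter() replace getchildren() and getiterator()
# on both 2.7 and 3.9.
root = ET.fromstring("<root><a/><b/></root>")
print([child.tag for child in list(root)])  # ['a', 'b']
print([elem.tag for elem in root.iter()])   # ['root', 'a', 'b']
```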
== Call to check for DeprecationWarning in your own projects ==
You must pay attention to DeprecationWarning in Python 3.9: it will be
the last "compatibility layer" release; the incompatible changes will
be reapplied in Python 3.10.
For example, you can use the development mode to see
DeprecationWarning and ResourceWarning: use the "-X dev" command line
option or set the PYTHONDEVMODE=1 environment variable. Or you can use
the PYTHONWARNINGS=default environment variable to see all warnings.
You might even want to treat all warnings as errors to ensure that you
don't miss any when you run your test suite in your CI. You can use
PYTHONWARNINGS=error, and combine it with PYTHONDEVMODE=1.
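To illustrate the PYTHONWARNINGS=error suggestion, here is a small sketch that runs a child interpreter with the variable set; any DeprecationWarning is then raised as an error and the child exits with a non-zero status, which is exactly what makes a CI run fail:

```python
import os
import subprocess
import sys

# Run a child interpreter with PYTHONWARNINGS=error: the
# DeprecationWarning emitted by warnings.warn() becomes an exception.
proc = subprocess.run(
    [sys.executable, "-c",
     "import warnings; warnings.warn('old API', DeprecationWarning)"],
    env={**os.environ, "PYTHONWARNINGS": "error"},
    capture_output=True,
    text=True,
)
print(proc.returncode)  # non-zero: the warning aborted the child
```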
Warnings filters can be used to ignore warnings in third party code,
see the documentation:
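A minimal sketch of such a filter combination ("thirdparty" is a placeholder module name, not a real package): deprecations raised from our own code become errors, while those issued from the third-party module are silenced. Note that filterwarnings() prepends, so the last filter added is consulted first.

```python
import warnings

# Escalate DeprecationWarning to an error everywhere...
warnings.filterwarnings("error", category=DeprecationWarning)
# ...except when it is issued from the (hypothetical) "thirdparty" module.
warnings.filterwarnings("ignore", category=DeprecationWarning,
                        module=r"thirdparty(\..*)?")

try:
    warnings.warn("our own deprecated API", DeprecationWarning)
except DeprecationWarning as exc:
    print("caught:", exc)  # our own warning was escalated to an error
```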
-- Victor Stinner and Miro Hrončok for Fedora
I have successfully built Python 3.8.1 on QNX, but ran into a problem when using 'make -j4'. The build process invariably hangs with multiple single-threaded child processes stuck indefinitely waiting on semaphores. These semaphores will clearly never be posted to, as the processes are all single threaded and the semaphores are not shared with any other process.
A backtrace shows that the offending calls come from run_at_forkers(), which is not surprising. I consider a multi-threaded fork() to be an ill-defined operation and my arch-nemesis... The problem here is that the value of the semaphore inherited from the parent shows the semaphore to be unavailable, even though the semaphore *object* itself is not the same as the one used by the parent (they share the same virtual address, but in different address spaces with different backing memory).
A few (noob) questions:
1. Is there a way to correlate the C backtrace from the core dump to the Python code that resulted in this hang?
2. It is well-known that a multi-threaded fork() is inherently unsafe, and POSIX says that no non-async-signal-safe operations are allowed between the fork() and exec() calls. I even saw comments to this effect in the Python source code ;-) So why is it done?
3. Any reason not to use posix_spawn() instead of fork()/exec()? While some systems implement posix_spawn() with fork()/exec(), others (at least QNX) implement it without first creating a duplicate of the parent, making it both more efficient and safer in a multi-threaded parent.
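For what it's worth, CPython has exposed posix_spawn() at the Python level since 3.8 (os.posix_spawn(), POSIX platforms only). A small sketch, assuming a POSIX system, of spawning a child interpreter without going through fork()/exec() in Python code:

```python
import os
import sys

# os.posix_spawn(path, argv, env) starts a child directly; on platforms
# whose libc implements posix_spawn() natively (e.g. QNX), no duplicate
# of the parent process is created first.
pid = os.posix_spawn(
    sys.executable,
    [sys.executable, "-c", "print('spawned child')"],
    dict(os.environ),
)
_, status = os.waitpid(pid, 0)
print(os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0)  # True on success
```

Whether CPython's own subprocess machinery can use it on a given platform is a separate question, but the call itself is available.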
4. thread_pthread.h seems to go to great lengths to implement locks without using native mutexes. I found one reference in the code dating to 1994 as to why that is done. Is it still applicable? Contrary to the claim in that comment the semantics for trying to lock an already-locked mutex and for unlocking an unowned mutex are well-defined.