webmaster has already heard from 4 people who cannot install it.
I sent them to the bug tracker or to python-list but they seem
not to have gone either place. Is there some guide I should be
sending them to, 'how to debug installation problems'?
Laura
Hi,
On Twitter, Raymond Hettinger wrote:
"The decision making process on Python-dev is an anti-pattern,
governed by anecdotal data and ambiguity over what problem is solved."
https://twitter.com/raymondh/status/887069454693158912
About "anecdotal data", I would like to discuss the Python startup time.
== Python 3.7 compared to 2.7 ==
First of all, on speed.python.org, we have:
* Python 2.7: 6.4 ms with site, 3.0 ms without site (-S)
* master (3.7): 14.5 ms with site, 8.4 ms without site (-S)
Python 3.7 startup time is 2.3x slower with site (default mode), or
2.8x slower without site (-S command line option).
(I will skip Python 3.4, 3.5 and 3.6 which are much worse than Python 3.7...)
So if a user complained about Python 2.7 startup time: be prepared for
that user to be 2x - 3x angrier when "forced" to upgrade to Python 3!
== Mercurial vs Git, Python vs C, startup time ==
Startup time matters a lot for Mercurial since Mercurial is compared
to Git. Git and Mercurial have similar features, but Git is written in
C whereas Mercurial is written in Python. Quick benchmark on the
speed.python.org server:
* hg version: 44.6 ms +- 0.2 ms
* git --version: 974 us +- 7 us
Mercurial startup time is already 45.8x slower than Git, even though
the tested Mercurial runs on Python 2.7.12. Now try to sell Python 3 to
Mercurial developers, with a startup time 2x - 3x slower...
I tested Mercurial 3.7.3 and Git 2.7.4 on Ubuntu 16.04.1 using "python3
-m perf command -- ...".
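For anyone who wants to reproduce such numbers without installing perf,
here is a crude sketch of the measurement ("python3 -m perf command"
does this properly, with warmup, calibration and statistics):

```python
import subprocess
import sys
import time

def startup_ms(extra_args=(), repeat=5):
    # Run the interpreter with an empty program several times and
    # return the best wall-clock time in milliseconds.
    best = float('inf')
    for _ in range(repeat):
        t0 = time.perf_counter()
        subprocess.run([sys.executable, *extra_args, '-c', ''], check=True)
        best = min(best, time.perf_counter() - t0)
    return best * 1000

with_site = startup_ms()
without_site = startup_ms(['-S'])  # -S skips the site module
print(f'with site: {with_site:.1f} ms, without site (-S): {without_site:.1f} ms')
```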
== CPython core developers don't care? no, they do care ==
Christian Heimes, Naoki INADA, Serhiy Storchaka, Yury Selivanov, me
(Victor Stinner) and other core developers have made multiple changes
over the last few years to reduce the number of imports at startup,
optimize importlib, etc.
IMHO all these core developers are well aware of the competition
between programming languages, and honestly Python startup time isn't
"good".
So let's compare it to other programming languages similar to Python.
== PHP, Ruby, Perl ==
I measured the startup time of other programming languages which are
similar to Python, still on the speed.python.org server using "python3
-m perf command -- ...":
* perl -e ' ': 1.18 ms +- 0.01 ms
* php -r ' ': 8.57 ms +- 0.05 ms
* ruby -e ' ': 32.8 ms +- 0.1 ms
Wow, Perl is quite good! PHP seems as good as Python 2 (but Python 3
is worse). Ruby startup time seems less optimized than other
languages.
Tested versions:
* perl 5, version 22, subversion 1 (v5.22.1)
* PHP 7.0.18-0ubuntu0.16.04.1 (cli) ( NTS )
* ruby 2.3.1p112 (2016-04-26) [x86_64-linux-gnu]
== Quick Google search ==
I also searched for "python startup time" and "python slow startup
time" on Google and found many articles. Some examples:
"Reducing the Python startup time"
http://www.draketo.de/book/export/html/498
=> "The python startup time always nagged me (17-30ms) and I just
searched again for a way to reduce it, when I found this: The
Python-Launcher caches GTK imports and forks new processes to reduce
the startup time of python GUI programs."
https://nelsonslog.wordpress.com/2013/04/08/python-startup-time/
=> "Wow, Python startup time is worse than I thought."
"How to speed up python starting up and/or reduce file search while
loading libraries?"
https://stackoverflow.com/questions/15474160/how-to-speed-up-python-startin…
=> "The first time I log to the system and start one command it takes
6 seconds just to show a few line of help. If I immediately issue the
same command again it takes 0.1s. After a couple of minutes it gets
back to 6s. (proof of short-lived cache)"
"How does one optimise the startup of a Python script/program?"
https://www.quora.com/How-does-one-optimise-the-startup-of-a-Python-script-…
=> "I wrote a Python program that would be used very often (imagine
'cd' or 'ls') for very short runtimes, how would I make it start up as
fast as possible?"
"Python Interpreter Startup time"
https://bytes.com/topic/python/answers/34469-pyhton-interpreter-startup-time
"Python is very slow to start on Windows 7"
https://stackoverflow.com/questions/29997274/python-is-very-slow-to-start-o…
=> "Python takes 17 times longer to load on my Windows 7 machine than
Ubuntu 14.04 running on a VM"
=> "returns in 0.614s on Windows and 0.036s on Linux"
"How to make a fast command line tool in Python" (old article Python 2.5.2)
https://files.bemusement.org/talks/OSDC2008-FastPython/
=> "(...) some techniques Bazaar uses to start quickly, such as lazy imports."
--
So please continue the efforts to make Python startup even faster, to
beat all other programming languages, and finally convince Mercurial to
upgrade ;-)
Victor
Hi folks,
As some people here know I've been working off and on for a while to
improve CPython's support of Cygwin. I'm motivated in part by a need
to have software working on Python 3.x on Cygwin for the foreseeable
future, preferably with minimal graft. (As an incidental side-effect
Python's test suite--especially of system-level functionality--serves
as an interesting test suite for Cygwin itself too.)
This is partly what motivated PEP 539 [1], although that PEP had the
advantage of benefiting other POSIX-compatible platforms as well (and
in fact was fixing an aspect of CPython that made it unfriendly to
supporting other platforms).
As far as I can tell, the first commit to Python to add any kind of
support for Cygwin was made by Guido (committing a contributed patch)
back in 1999 [2]. Since then, bits and pieces have been added for
Cygwin's benefit over time, with varying degrees of impact in terms of
#ifdefs and the like (for the most part Cygwin does not require *much*
in the way of special support, but it does have some differences from
a "normal" POSIX-compliant platform, such as the possibility for
case-insensitive filesystems and executables that end in .exe). I
don't know whether it's ever been "officially supported", but perhaps
someone with a longer memory of the project can comment on that. I'm
not sure whether it was discussed in the context of PEP 11.
I have personally put in a fair amount of effort already in either
fixing issues on Cygwin (many of these issues also impact MinGW), or
more often than not fixing issues in the CPython test suite on
Cygwin--these are mostly tests that are broken due to invalid
assumptions about the platform (for example, that there is always a
"root" user with uid=0; this is not the case on Cygwin). In other
cases some tests need to be skipped or worked around due to
platform-specific bugs, and Cygwin is hardly the only case of this in
the test suite.
I also have an experimental AppVeyor configuration for running the
tests on Cygwin [3], as well as an experimental buildbot (not
available on the internet, but working). These currently rely on a
custom branch that includes fixes needed for the test suite to run to
completion without crashing or hanging (e.g.
https://bugs.python.org/issue31885). It would be nice to add this as
an official buildbot, but I'm not sure if it makes sense to do that
until it's "green", or at least not crashing. I have several other
patches to the tests toward this goal, and am currently down to ~22
tests failing.
Before I do any more work on this, however, it would be best to once
and for all clarify the support for Cygwin in CPython, as it has never
been "officially supported" nor unsupported--this way we can avoid
having this discussion every time a patch related to Cygwin comes up.
I could provide some arguments for why I believe Cygwin should be
supported, but before this gets too long I'd just like to float the
idea of having the discussion in the first place. It's also not
exactly clear to me how to meet the standards in PEP 11 for supporting
a platform--in particular it's not clear when a buildbot is considered
"stable", or how to achieve that without getting necessary fixes
merged into the main branch in the first place.
Thanks,
Erik
[1] https://www.python.org/dev/peps/pep-0539/
[2] https://github.com/python/cpython/commit/717d1fdf2acbef5e6b47d9b4dcf48ef182…
[3] https://ci.appveyor.com/project/embray/cpython
Hello,
would it be possible to guarantee that dict literals are ordered in v3.7?
The issue is well-known and the workarounds are tedious, example:
https://mail.python.org/pipermail/python-ideas/2015-December/037423.html
If the feature is guaranteed now, people can rely on it around v3.9.
Stefan Krah
I've written a short(ish) PEP for the proposal to change the default
warnings filters to show DeprecationWarning in __main__:
https://www.python.org/dev/peps/pep-0565/
The core proposal itself is just the idea in
https://bugs.python.org/issue31975 (i.e. adding
"default::DeprecationWarning:__main__" to the default filter set), but
the PEP fills in some details on the motivation for the original
change to the defaults, and why the current proposal is to add a new
filter for __main__, rather than dropping the default
DeprecationWarning filter entirely.
The PEP also proposes repurposing the existing FutureWarning category
to explicitly mean "backwards compatibility warnings that should be
shown to users of Python applications" since:
- we don't tend to use FutureWarning for its original nominal purpose
(changes that will continue to run but will do something different)
- FutureWarning was added in 2.3, so it's available in all still
supported versions of Python, and is shown by default in all of them
- it's at least arguably a less-jargony spelling of
DeprecationWarning, and hence more appropriate for displaying to end
users that may not have encountered the specific notion of "API
deprecation"
Cheers,
Nick.
==============
PEP: 565
Title: Show DeprecationWarning in __main__
Author: Nick Coghlan <ncoghlan(a)gmail.com>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 12-Nov-2017
Python-Version: 3.7
Post-History: 12-Nov-2017
Abstract
========
In Python 2.7 and Python 3.2, the default warning filters were updated to hide
DeprecationWarning by default, such that deprecation warnings in development
tools that were themselves written in Python (e.g. linters, static analysers,
test runners, code generators) wouldn't be visible to their users unless they
explicitly opted in to seeing them.
However, this change has had the unfortunate side effect of making
DeprecationWarning markedly less effective at its primary intended purpose:
providing advance notice of breaking changes in APIs (whether in CPython, the
standard library, or in third party libraries) to users of those APIs.
To improve this situation, this PEP proposes a single adjustment to the
default warnings filter: displaying deprecation warnings attributed to the main
module by default.
This change will mean that code entered at the interactive prompt and code in
single file scripts will revert to reporting these warnings by default, while
they will continue to be silenced by default for packaged code distributed as
part of an importable module.
The PEP also proposes a number of small adjustments to the reference
interpreter and standard library documentation to help make the warnings
subsystem more approachable for new Python developers.
Specification
=============
The current set of default warnings filters consists of::
    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning
    ignore::ImportWarning
    ignore::BytesWarning
    ignore::ResourceWarning
The default ``unittest`` test runner then uses ``warnings.catch_warnings()``
and ``warnings.simplefilter('default')`` to override the default filters while
running test cases.
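The effect of that override can be sketched as follows (a minimal
approximation, not the runner's actual code):

```python
import warnings

# Inside the override, DeprecationWarning is reported again even though
# the process-wide defaults would silence it; outside, the defaults apply.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('default')
    warnings.warn('old API', DeprecationWarning)

assert caught[0].category is DeprecationWarning
```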
The change proposed in this PEP is to update the default warning filter list
to be::
    default::DeprecationWarning:__main__
    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning
    ignore::ImportWarning
    ignore::BytesWarning
    ignore::ResourceWarning
This means that in cases where the nominal location of the warning (as
determined by the ``stacklevel`` parameter to ``warnings.warn``) is in the
``__main__`` module, the first occurrence of each DeprecationWarning will once
again be reported.
This change will lead to DeprecationWarning being displayed by default for:
* code executed directly at the interactive prompt
* code executed directly as part of a single-file script
While continuing to be hidden by default for:
* code imported from another module in a ``zipapp`` archive's ``__main__.py``
file
* code imported from another module in an executable package's ``__main__``
submodule
* code imported from an executable script wrapper generated at installation time
based on a ``console_scripts`` or ``gui_scripts`` entry point definition
As a result, API deprecation warnings encountered by development tools written
in Python should continue to be hidden by default for users of those tools.
While this was not its originally intended purpose, the standard library
documentation will also be updated to explicitly recommend the use of
``FutureWarning`` (rather than ``DeprecationWarning``) for backwards
compatibility warnings that are intended to be seen by *users* of an
application.
This will give the following three distinct categories of backwards
compatibility warning, with three different intended audiences:
* ``PendingDeprecationWarning``: reported by default only in test runners that
override the default set of warning filters. The intended audience is Python
developers that take an active interest in ensuring the future compatibility
of their software (e.g. professional Python application developers with
specific support obligations).
* ``DeprecationWarning``: reported by default for code that runs directly in
the ``__main__`` module (as such code is considered relatively unlikely to
have a dedicated test suite), but relies on test suite based reporting for
code in other modules. The intended audience is Python developers that are at
risk of upgrades to their dependencies (including upgrades to Python itself)
breaking their software (e.g. developers using Python to script environments
where someone else is in control of the timing of dependency upgrades).
* ``FutureWarning``: always reported by default. The intended audience is users
of applications written in Python, rather than other Python developers
(e.g. warning about use of a deprecated setting in a configuration file
format).
Given its presence in the standard library since Python 2.3, ``FutureWarning``
would then also have a secondary use case for libraries and frameworks that
support multiple Python versions: as a more reliably visible alternative to
``DeprecationWarning`` in Python 2.7 and versions of Python 3.x prior to 3.7.
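Taken together, the three audiences might be served like this
(illustrative function names and messages only):

```python
import warnings

def old_api():
    # Audience: proactive developers tracking future compatibility.
    warnings.warn('old_api() may change in a future release',
                  PendingDeprecationWarning, stacklevel=2)

def deprecated_api():
    # Audience: developers whose code calls a deprecated API.
    warnings.warn('deprecated_api() is deprecated; use new_api()',
                  DeprecationWarning, stacklevel=2)

def load_config(cfg):
    # Audience: end users of the application; shown by default.
    if 'legacy_key' in cfg:
        warnings.warn("config setting 'legacy_key' is deprecated",
                      FutureWarning, stacklevel=2)
```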
Motivation
==========
As discussed in [1_] and mentioned in [2_], Python 2.7 and Python 3.2 changed
the default handling of ``DeprecationWarning`` such that:
* the warning was hidden by default during normal code execution
* the ``unittest`` test runner was updated to re-enable it when running tests
The intent was to avoid cases of tooling output like the following::
    $ devtool mycode/
    /usr/lib/python3.6/site-packages/devtool/cli.py:1: DeprecationWarning:
      'async' and 'await' will become reserved keywords in Python 3.7
      async = True
    ... actual tool output ...
Even when ``devtool`` is a tool specifically for Python programmers, this is not
a particularly useful warning, as it will be shown on every invocation, even
though the main helpful step an end user can take is to report a bug to the
developers of ``devtool``. The warning is even less helpful for general purpose
developer tools that are used across more languages than just Python.
However, this change proved to have unintended consequences for the following
audiences:
* anyone using a test runner other than the default one built into ``unittest``
(since the request for third party test runners to change their default
warnings filters was never made explicitly)
* anyone using the default ``unittest`` test runner to test their Python code
in a subprocess (since even ``unittest`` only adjusts the warnings settings
in the current process)
* anyone writing Python code at the interactive prompt or as part of a directly
executed script that didn't have a Python level test suite at all
In these cases, ``DeprecationWarning`` ended up becoming almost entirely
equivalent to ``PendingDeprecationWarning``: it was simply never seen at all.
Limitations on PEP Scope
========================
This PEP exists specifically to explain both the proposed addition to the
default warnings filter for 3.7, *and* to more clearly articulate the rationale
for the original change to the handling of DeprecationWarning back in Python 2.7
and 3.2.
This PEP does not solve all known problems with the current approach to handling
deprecation warnings. Most notably:
* the default ``unittest`` test runner does not currently report deprecation
warnings emitted at module import time, as the warnings filter
override is only
put in place during test execution, not during test discovery and loading.
* the default ``unittest`` test runner does not currently report deprecation
warnings in subprocesses, as the warnings filter override is applied directly
to the loaded ``warnings`` module, not to the ``PYTHONWARNINGS`` environment
variable.
* the standard library doesn't provide a straightforward way to opt in to seeing
all warnings emitted *by* a particular dependency prior to upgrading it
(the third-party ``warn`` module [3_] does provide this, but enabling it
involves monkeypatching the standard library's ``warnings`` module).
* re-enabling deprecation warnings by default in __main__ doesn't help in
handling cases where software has been factored out into support modules, but
those modules still have little or no automated test coverage. Near term, the
best currently available answer is to run such applications with
``PYTHONWARNINGS=default::DeprecationWarning`` or
``python -W default::DeprecationWarning`` and pay attention to their
``stderr`` output. Longer term, this is really a question for researchers
working on static analysis of Python code: how to reliably find usage of
deprecated APIs, and how to infer that an API or parameter is deprecated
based on ``warnings.warn`` calls, without actually running either the code
providing the API or the code accessing it.
While these are real problems with the status quo, they're excluded from
consideration in this PEP because they're going to require more complex
solutions than a single additional entry in the default warnings filter,
and resolving them at least potentially won't require going through the PEP
process.
For anyone interested in pursuing them further, the first two would be
``unittest`` module enhancement requests, the third would be a ``warnings``
module enhancement request, while the last would only require a PEP if
inferring API deprecations from their contents was deemed to be an intractable
code analysis problem, and an explicit function and parameter marker syntax in
annotations was proposed instead.
References
==========
.. [1] stdlib-sig thread proposing the original default filter change
(https://mail.python.org/pipermail/stdlib-sig/2009-November/000789.html)
.. [2] Python 2.7 notification of the default warnings filter change
(https://docs.python.org/3/whatsnew/2.7.html#changes-to-the-handling-of-depr…)
.. [3] Emitting warnings based on the location of the warning itself
(https://pypi.org/project/warn/)
Copyright
=========
This document has been placed in the public domain.
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
Hi Mark,
it looks like the PEP is dormant for over two years now. I had multiple people ask me over the past few days about it though, so I wanted to ask if this is moving forward.
I also cc python-dev to see if anybody here is strongly in favor or against this inclusion. If the idea itself is uncontroversial, I could likely find somebody interested in implementing it. If not for 3.7 then at least for 3.8.
- Ł
Hi,
Could anyone put this five year-old bug about parsing iso8601 format date-times
on the front burner?
http://bugs.python.org/issue15873
In the comments there's a lot of hand-wringing about different variations that
bogged it down, but right now I only need it to handle the output of
datetime.isoformat():
>>> dt.isoformat()
'2017-10-20T08:20:08.986166+00:00'
Perhaps if we could get that minimum first step in, it could be iterated on and
made more lenient in the future.
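For reference, one of the tedious workarounds in question, limited to
exactly the ``datetime.isoformat()`` output shown above (the offset
munging is the part a stdlib parser would make unnecessary):

```python
import re
from datetime import datetime

def parse_isoformat(s):
    # strptime's %z directive expects "+HHMM", but isoformat() emits
    # "+HH:MM", so strip the colon from the offset before parsing.
    s = re.sub(r'([+-]\d{2}):(\d{2})$', r'\1\2', s)
    return datetime.strptime(s, '%Y-%m-%dT%H:%M:%S.%f%z')

dt = parse_isoformat('2017-10-20T08:20:08.986166+00:00')
assert dt.isoformat() == '2017-10-20T08:20:08.986166+00:00'
```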
Thank you,
-Mike
Based on the feedback I gathered in early November,
I'm publishing the third draft for consideration on python-dev.
I hope you like it!
A nicely formatted rendering is available here:
https://www.python.org/dev/peps/pep-0563/
The full list of changes between this version and the previous draft
can be found here:
https://github.com/ambv/static-annotations/compare/python-dev1...python-dev2
- Ł
PEP: 563
Title: Postponed Evaluation of Annotations
Version: $Revision$
Last-Modified: $Date$
Author: Łukasz Langa <lukasz(a)langa.pl>
Discussions-To: Python-Dev <python-dev(a)python.org>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 8-Sep-2017
Python-Version: 3.7
Post-History: 1-Nov-2017, 21-Nov-2017
Resolution:
Abstract
========
PEP 3107 introduced syntax for function annotations, but the semantics
were deliberately left undefined. PEP 484 introduced a standard meaning
to annotations: type hints. PEP 526 defined variable annotations,
explicitly tying them with the type hinting use case.
This PEP proposes changing function annotations and variable annotations
so that they are no longer evaluated at function definition time.
Instead, they are preserved in ``__annotations__`` in string form.
This change is going to be introduced gradually, starting with a new
``__future__`` import in Python 3.7.
Rationale and Goals
===================
PEP 3107 added support for arbitrary annotations on parts of a function
definition. Just like default values, annotations are evaluated at
function definition time. This creates a number of issues for the type
hinting use case:
* forward references: when a type hint contains names that have not been
defined yet, that definition needs to be expressed as a string
literal;
* type hints are executed at module import time, which is not
computationally free.
Postponing the evaluation of annotations solves both problems.
Non-goals
---------
Just like in PEP 484 and PEP 526, it should be emphasized that **Python
will remain a dynamically typed language, and the authors have no desire
to ever make type hints mandatory, even by convention.**
This PEP is meant to solve the problem of forward references in type
annotations. There are still cases outside of annotations where
forward references will require usage of string literals. Those are
listed in a later section of this document.
Annotations without forced evaluation enable opportunities to improve
the syntax of type hints. This idea will require its own separate PEP
and is not discussed further in this document.
Non-typing usage of annotations
-------------------------------
While annotations are still available for arbitrary use besides type
checking, it is worth mentioning that the design of this PEP, as well
as its precursors (PEP 484 and PEP 526), is predominantly motivated by
the type hinting use case.
In Python 3.8 PEP 484 will graduate from provisional status. Other
enhancements to the Python programming language like PEP 544, PEP 557,
or PEP 560, are already being built on this basis as they depend on
type annotations and the ``typing`` module as defined by PEP 484.
In fact, the reason PEP 484 is staying provisional in Python 3.7 is to
enable rapid evolution for another release cycle that some of the
aforementioned enhancements require.
With this in mind, uses for annotations incompatible with the
aforementioned PEPs should be considered deprecated.
Implementation
==============
In Python 4.0, function and variable annotations will no longer be
evaluated at definition time. Instead, a string form will be preserved
in the respective ``__annotations__`` dictionary. Static type checkers
will see no difference in behavior, whereas tools using annotations at
runtime will have to perform postponed evaluation.
The string form is obtained from the AST during the compilation step,
which means that the string form might not preserve the exact formatting
of the source. Note: if an annotation was a string literal already, it
will still be wrapped in a string.
Annotations need to be syntactically valid Python expressions, also when
passed as literal strings (i.e. ``compile(literal, '', 'eval')``).
Annotations can only use names present in the module scope as postponed
evaluation using local names is not reliable (with the sole exception of
class-level names resolved by ``typing.get_type_hints()``).
Note that as per PEP 526, local variable annotations are not evaluated
at all since they are not accessible outside of the function's closure.
Enabling the future behavior in Python 3.7
------------------------------------------
The functionality described above can be enabled starting from Python
3.7 using the following special import::
    from __future__ import annotations
A reference implementation of this functionality is available
`on GitHub <https://github.com/ambv/cpython/tree/string_annotations>`_.
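A minimal demonstration of the postponed behavior (``AccountRecord`` is
a hypothetical name that is never defined, which is exactly the point):

```python
from __future__ import annotations

def greet(user: AccountRecord) -> str:  # AccountRecord does not exist
    return 'hello'

# The annotations are preserved as plain strings, not evaluated:
assert greet.__annotations__ == {'user': 'AccountRecord', 'return': 'str'}
```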
Resolving Type Hints at Runtime
===============================
To resolve an annotation at runtime from its string form to the result
of the enclosed expression, user code needs to evaluate the string.
For code that uses type hints, the
``typing.get_type_hints(obj, globalns=None, localns=None)`` function
correctly evaluates expressions back from their string form. Note that
all valid code currently using ``__annotations__`` should already be
doing that since a type annotation can be expressed as a string literal.
For code which uses annotations for other purposes, a regular
``eval(ann, globals, locals)`` call is enough to resolve the
annotation.
In both cases it's important to consider how globals and locals affect
the postponed evaluation. An annotation is no longer evaluated at the
time of definition and, more importantly, *in the same scope* where it
was defined. Consequently, using local state in annotations is no
longer possible in general. As for globals, the module where the
annotation was defined is the correct context for postponed evaluation.
The ``get_type_hints()`` function automatically resolves the correct
value of ``globalns`` for functions and classes. It also automatically
provides the correct ``localns`` for classes.
When running ``eval()``,
the value of globals can be gathered in the following way:
* function objects hold a reference to their respective globals in an
attribute called ``__globals__``;
* classes hold the name of the module they were defined in, this can be
used to retrieve the respective globals::
      cls_globals = vars(sys.modules[SomeClass.__module__])
Note that this needs to be repeated for base classes to evaluate all
``__annotations__``.
* modules should use their own ``__dict__``.
The value of ``localns`` cannot be reliably retrieved for functions
because in all likelihood the stack frame at the time of the call no
longer exists.
For classes, ``localns`` can be composed by chaining vars of the given
class and its base classes (in the method resolution order). Since slots
can only be filled after the class was defined, we don't need to consult
them for this purpose.
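These rules can be exercised directly; the sketch below uses a
stringified annotation on a hypothetical class to stand in for the
postponed form:

```python
import sys
from typing import get_type_hints

class Config:
    retries: 'int'  # string form, as produced under this PEP

# Manual resolution, following the rules above for classes:
cls_globals = vars(sys.modules[Config.__module__])
assert eval(Config.__annotations__['retries'], cls_globals) is int

# get_type_hints() performs the same bookkeeping automatically:
assert get_type_hints(Config) == {'retries': int}
```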
Runtime annotation resolution and class decorators
--------------------------------------------------
Metaclasses and class decorators that need to resolve annotations for
the current class will fail for annotations that use the name of the
current class. Example::
    def class_decorator(cls):
        annotations = get_type_hints(cls)  # raises NameError on 'C'
        print(f'Annotations for {cls}: {annotations}')
        return cls

    @class_decorator
    class C:
        singleton: 'C' = None
This was already true before this PEP. The class decorator acts on
the class before it's assigned a name in the current definition scope.
Runtime annotation resolution and ``TYPE_CHECKING``
---------------------------------------------------
Sometimes there's code that must be seen by a type checker but should
not be executed. For such situations the ``typing`` module defines a
constant, ``TYPE_CHECKING``, that is considered ``True`` during type
checking but ``False`` at runtime. Example::
    import typing

    if typing.TYPE_CHECKING:
        import expensive_mod

    def a_func(arg: expensive_mod.SomeClass) -> None:
        a_var: expensive_mod.SomeClass = arg
        ...
This approach is also useful when handling import cycles.
Trying to resolve annotations of ``a_func`` at runtime using
``typing.get_type_hints()`` will fail since the name ``expensive_mod``
is not defined (``TYPE_CHECKING`` variable being ``False`` at runtime).
This was already true before this PEP.
Backwards Compatibility
=======================
This is a backwards incompatible change. Applications depending on
arbitrary objects to be directly present in annotations will break
if they are not using ``typing.get_type_hints()`` or ``eval()``.
Annotations that depend on locals at the time of the function
definition will not be resolvable later. Example::
    def generate():
        A = Optional[int]
        class C:
            field: A = 1
            def method(self, arg: A) -> None: ...
        return C

    X = generate()
Trying to resolve annotations of ``X`` later by using
``get_type_hints(X)`` will fail because ``A`` and its enclosing scope no
longer exist. Python will make no attempt to disallow such annotations
since they can often still be successfully statically analyzed, which is
the predominant use case for annotations.
Annotations using nested classes and their respective state are still
valid. They can use local names or the fully qualified name. Example::
    class C:
        field = 'c_field'
        def method(self) -> C.field:  # this is OK
            ...
        def method(self) -> field:  # this is OK
            ...
        def method(self) -> C.D:  # this is OK
            ...
        def method(self) -> D:  # this is OK
            ...

        class D:
            field2 = 'd_field'
            def method(self) -> C.D.field2:  # this is OK
                ...
            def method(self) -> D.field2:  # this is OK
                ...
            def method(self) -> field2:  # this is OK
                ...
            def method(self) -> field:  # this FAILS, class D doesn't
                ...                     # see C's attributes, this was
                                        # already true before this PEP
In the presence of an annotation that isn't a syntactically valid
expression, SyntaxError is raised at compile time. However, since names
aren't resolved at that time, no attempt is made to validate whether
used names are correct or not.
Deprecation policy
------------------
Starting with Python 3.7, a ``__future__`` import is required to use the
described functionality. No warnings are raised.
In Python 3.8 a ``PendingDeprecationWarning`` is raised by the
compiler in the presence of type annotations in modules without the
``__future__`` import.
Starting with Python 3.9 the warning becomes a ``DeprecationWarning``.
In Python 4.0 this will become the default behavior. Use of annotations
incompatible with this PEP is no longer supported.
Forward References
==================
Deliberately using a name before it was defined in the module is called
a forward reference. For the purpose of this section, we'll call
any name imported or defined within a ``if TYPE_CHECKING:`` block
a forward reference, too.
This PEP addresses the issue of forward references in *type annotations*.
The use of string literals will no longer be required in this case.
However, there are APIs in the ``typing`` module that use other syntactic
constructs of the language, and those will still require working around
forward references with string literals. The list includes:
* type definitions::

      T = TypeVar('T', bound='<type>')
      UserId = NewType('UserId', '<type>')
      Employee = NamedTuple('Employee', [('name', '<type>'), ('id', '<type>')])

* aliases::

      Alias = Optional['<type>']
      AnotherAlias = Union['<type>', '<type>']
      YetAnotherAlias = '<type>'

* casting::

      cast('<type>', value)

* base classes::

      class C(Tuple['<type>', '<type>']): ...
Depending on the specific situation, some of the cases listed above might be
worked around by placing the usage in an ``if TYPE_CHECKING:`` block.
This will not work for any code that needs to be available at runtime,
notably for base classes and casting. For named tuples, using the new
class definition syntax introduced in Python 3.6 solves the issue.
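The named tuple case can be sketched as follows (class and field names are invented for the example): the functional form evaluates its type arguments eagerly, so a not-yet-defined type there must stay a string literal, while the class-based form goes through annotations, which the ``__future__`` import leaves unevaluated:

```python
from __future__ import annotations
from typing import NamedTuple

# Functional form: type arguments are evaluated eagerly, so a
# not-yet-defined type must be written as a string literal.
EmployeeF = NamedTuple('EmployeeF', [('name', str), ('boss', 'Employee')])

# Class-based form (Python 3.6+): the field types are annotations, which
# postponed evaluation keeps as strings, so no quoting is needed even
# for a self-reference.
class Employee(NamedTuple):
    name: str
    boss: Employee  # forward/self-reference is fine here

e = Employee('ann', None)
```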
In general, fixing the issue for *all* forward references requires
changing how module instantiation is performed in Python, from the
current single-pass top-down model. This would be a major change in the
language and is out of scope for this PEP.
Rejected Ideas
==============
Keeping the ability to use function local state when defining annotations
-------------------------------------------------------------------------
With postponed evaluation, this would require keeping a reference to
the frame in which an annotation got created. This could be achieved
for example by storing all annotations as lambdas instead of strings.
This would be prohibitively expensive for highly annotated code as the
frames would keep all their objects alive. That includes predominantly
objects that won't ever be accessed again.
To be able to address class-level scope, the lambda approach would
require a new kind of cell in the interpreter. This would proliferate
the number of types that can appear in ``__annotations__``, and the
result would not be as introspectable as strings.
Note that in the case of nested classes, the functionality to get the
effective "globals" and "locals" at definition time is provided by
``typing.get_type_hints()``.
If a function generates a class or a function with annotations that
have to use local variables, it can populate the given generated
object's ``__annotations__`` dictionary directly, without relying on
the compiler.
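A minimal sketch of that workaround (``make_point_class`` and its parameter are invented for the example):

```python
def make_point_class(coord_type):
    # 'coord_type' is local to this call; a postponed (string) annotation
    # in the class body could never be resolved against this frame later.
    class Point:
        pass

    # So instead of relying on the compiler, populate the generated
    # class's __annotations__ with already-evaluated objects directly:
    Point.__annotations__ = {'x': coord_type, 'y': coord_type}
    return Point

FloatPoint = make_point_class(float)
print(FloatPoint.__annotations__)  # {'x': <class 'float'>, 'y': <class 'float'>}
```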
Disallowing local state usage for classes, too
----------------------------------------------
This PEP originally proposed limiting names within annotations to only
allow names from the module-level scope, including for classes. The
author argued this makes name resolution unambiguous, including in cases
of conflicts between local names and module-level names.

This idea was ultimately rejected in the case of classes. Instead,
``typing.get_type_hints()`` got modified to populate the local namespace
correctly if class-level annotations are needed.
The reasons for rejecting the idea were that it goes against the
intuition of how scoping works in Python, and would break enough
existing type annotations to make the transition cumbersome. Finally,
local scope access is required for class decorators to be able to
evaluate type annotations. This is because class decorators are applied
before the class receives its name in the outer scope.
Introducing a new dictionary for the string literal form instead
----------------------------------------------------------------
Yury Selivanov shared the following idea:
1. Add a new special attribute to functions: ``__annotations_text__``.
2. Make ``__annotations__`` a lazy dynamic mapping, evaluating
expressions from the corresponding key in ``__annotations_text__``
just-in-time.
This idea is supposed to solve the backwards compatibility issue,
removing the need for a new ``__future__`` import. Sadly, this is not
enough. Postponed evaluation changes which state the annotation has
access to. While postponed evaluation fixes the forward reference
problem, it also makes it impossible to access function-level locals
anymore. This alone is a source of backwards incompatibility which
justifies a deprecation period.
A ``__future__`` import is an obvious and explicit indicator of opting
in for the new functionality. It also makes it trivial for external
tools to recognize the difference between Python files using the old
or the new approach. In the former case, that tool would recognize that
local state access is allowed, whereas in the latter case it would
recognize that forward references are allowed.
Finally, just-in-time evaluation in ``__annotations__`` is an
unnecessary step if ``get_type_hints()`` is used later.
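For reference, the rejected design could be sketched roughly like this (all names, including ``LazyAnnotations``, are illustrative only):

```python
from collections.abc import Mapping

class LazyAnnotations(Mapping):
    """Sketch of the rejected idea: evaluate annotation text on access."""

    def __init__(self, text_map, globalns):
        self._text = text_map      # would play the role of __annotations_text__
        self._globalns = globalns

    def __getitem__(self, key):
        # Just-in-time evaluation; note this still cannot see the
        # function-level locals that existed at definition time.
        return eval(self._text[key], self._globalns)

    def __iter__(self):
        return iter(self._text)

    def __len__(self):
        return len(self._text)

ann = LazyAnnotations({'x': 'int', 'return': 'None'}, {'int': int})
print(ann['x'])  # <class 'int'>
```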
Dropping annotations with -O
----------------------------
There are two reasons this is not satisfying for the purpose of this
PEP.
First, this only addresses the runtime cost, not forward references, which
still cannot be safely used in source code. A library maintainer would
never be able to use forward references since that would force the
library users to use this new hypothetical -O switch.
Second, this throws the baby out with the bath water. Now *no* runtime
annotation use can be performed. PEP 557 is one example of a recent
development where evaluating type annotations at runtime is useful.
All that being said, a granular -O option to drop annotations is
a possibility in the future, as it's conceptually compatible with
existing -O behavior (dropping docstrings and assert statements). This
PEP does not invalidate the idea.
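The existing ``-O`` behavior the PEP refers to can be observed directly (a small demonstration using a subprocess):

```python
import subprocess
import sys

# Under -O, __debug__ is False and assert statements are compiled away,
# so 'assert False' does not raise in the child interpreter:
result = subprocess.run(
    [sys.executable, "-O", "-c", "assert False; print(__debug__)"],
    capture_output=True, text=True,
)
print(result.stdout)  # "False"
```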
Pass string literals in annotations verbatim to ``__annotations__``
-------------------------------------------------------------------
This PEP originally suggested directly storing the contents of a string
literal under its respective key in ``__annotations__``. This was
meant to simplify support for runtime type checkers.
Mark Shannon pointed out this idea was flawed since it wasn't handling
situations where strings are only part of a type annotation.
The inconsistency was always apparent, and since the approach doesn't
fully prevent cases of double-wrapping strings anyway, it is not worth
pursuing.
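The situation Mark Shannon pointed out is easy to reproduce (a sketch; ``Forward`` is an invented name):

```python
from __future__ import annotations
from typing import List

# The string literal here is only *part* of the annotation; storing the
# whole expression means the outer part is stringified while the inner
# string keeps its quotes, i.e. it ends up double-wrapped.
def f(x: List["Forward"]) -> None:
    pass

# The stored value is the annotation's source text, inner quotes included.
print(f.__annotations__['x'])
```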
Make the name of the future import more verbose
-----------------------------------------------
Instead of requiring the following import::
from __future__ import annotations
the PEP could call the feature more explicitly, for example
``string_annotations``, ``stringify_annotations``,
``annotation_strings``, ``annotations_as_strings``, ``lazy_annotations``,
``static_annotations``, etc.
The problem with those names is that they are very verbose. Each of
them besides ``lazy_annotations`` would constitute the longest future
feature name in Python. They are long to type and harder to remember
than the single-word form.
There is precedent for a future import name that sounds overly generic
but in practice was obvious to users as to what it does::
from __future__ import division
Prior discussion
================
In PEP 484
----------
The forward reference problem was discussed when PEP 484 was originally
drafted, leading to the following statement in the document:
A compromise is possible where a ``__future__`` import could enable
turning *all* annotations in a given module into string literals, as
follows::
    from __future__ import annotations

    class ImSet:
        def add(self, a: ImSet) -> List[ImSet]: ...

    assert ImSet.add.__annotations__ == {
        'a': 'ImSet', 'return': 'List[ImSet]'
    }
Such a ``__future__`` import statement may be proposed in a separate
PEP.
python/typing#400
-----------------
The problem was discussed at length on the typing module's GitHub
project, under `Issue 400 <https://github.com/python/typing/issues/400>`_.
The problem statement there includes critique of generic types requiring
imports from ``typing``. This tends to be confusing to
beginners:
Why this::

    from typing import List, Set
    def dir(o: object = ...) -> List[str]: ...
    def add_friends(friends: Set[Friend]) -> None: ...

But not this::

    def dir(o: object = ...) -> list[str]: ...
    def add_friends(friends: set[Friend]) -> None: ...

Why this::

    up_to_ten = list(range(10))
    friends = set()

But not this::

    from typing import List, Set
    up_to_ten = List[int](range(10))
    friends = Set[Friend]()
While typing usability is an interesting problem, it is out of scope
of this PEP. Specifically, any extensions of the typing syntax
standardized in PEP 484 will require their own respective PEPs and
approval.
Issue 400 ultimately suggests postponing evaluation of annotations and
keeping them as strings in ``__annotations__``, just like this PEP
specifies. This idea was received well. Ivan Levkivskyi supported
using the ``__future__`` import and suggested unparsing the AST in
``compile.c``. Jukka Lehtosalo pointed out that there are some cases
of forward references where types are used outside of annotations and
postponed evaluation will not help those. For those cases using the
string literal notation would still be required. Those cases are
discussed briefly in the "Forward References" section of this PEP.
The biggest controversy on the issue was Guido van Rossum's concern
that untokenizing annotation expressions back to their string form has
no precedent in the Python programming language and feels like a hacky
workaround. He said:
One thing that comes to mind is that it's a very random change to
the language. It might be useful to have a more compact way to
indicate deferred execution of expressions (using less syntax than
``lambda:``). But why would the use case of type annotations be so
all-important to change the language to do it there first (rather
than proposing a more general solution), given that there's already
a solution for this particular use case that requires very minimal
syntax?
Eventually, Ethan Smith and schollii voiced that feedback gathered
during PyCon US suggests that the state of forward references needs
fixing. Guido van Rossum suggested coming back to the ``__future__``
idea, pointing out that to prevent abuse, it's important for the
annotations to be kept both syntactically valid and evaluating correctly
at runtime.
First draft discussion on python-ideas
--------------------------------------
Discussion happened largely in two threads, `the original announcement
<https://mail.python.org/pipermail/python-ideas/2017-September/thread.html#4…>`_
and a follow-up called `PEP 563 and expensive backwards compatibility
<https://mail.python.org/pipermail/python-ideas/2017-September/thread.html#4…>`_.
The PEP received rather warm feedback (4 strongly in favor,
2 in favor with concerns, 2 against). The biggest voice of concern in
the former thread was Steven D'Aprano's review, which stated that the
problem definition of the PEP doesn't justify breaking backwards
compatibility. In his review, Steven seemed mostly concerned about
Python no longer supporting evaluation of annotations that depend on
local function/class state.
A few people voiced concerns that there are libraries using annotations
for non-typing purposes. However, none of the named libraries would be
invalidated by this PEP. They do require adapting to the new
requirement to call ``eval()`` on the annotation with the correct
``globals`` and ``locals`` set.
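Adapting looks roughly like this (``Widget`` is an invented example; a plain string annotation stands in for the postponed form):

```python
import sys

class Widget:
    size: 'int'  # stored as the string 'int', as all annotations are under this PEP

# A library consuming annotations at runtime must now eval() the string
# against the right namespaces: the defining module's globals, plus the
# class namespace for class-level names.
raw = Widget.__annotations__['size']
module = sys.modules[Widget.__module__]
resolved = eval(raw, vars(module), dict(vars(Widget)))
print(resolved)  # <class 'int'>
```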
This detail about ``globals`` and ``locals`` having to be correct was
picked up by a number of commenters. Nick Coghlan benchmarked turning
annotations into lambdas instead of strings; sadly, this proved to be
much slower at runtime than the current situation.
The latter thread was started by Jim J. Jewett who stressed that
the ability to properly evaluate annotations is an important requirement
and backwards compatibility in that regard is valuable. After some
discussion he admitted that side effects in annotations are a code smell
and modal support to either perform or not perform evaluation is
a messy solution. His biggest concern remained loss of functionality
stemming from the evaluation restrictions on global and local scope.
Nick Coghlan pointed out that some of those evaluation restrictions from
the PEP could be lifted by a clever implementation of an evaluation
helper, which could solve self-referencing classes even in the form of a
class decorator. He suggested the PEP should provide this helper
function in the standard library.
Second draft discussion on python-dev
-------------------------------------
Discussion happened mainly in the `announcement thread <https://mail.python.org/pipermail/python-dev/2017-November/150062.html>`_,
followed by a brief discussion under `Mark Shannon's post
<https://mail.python.org/pipermail/python-dev/2017-November/150637.html>`_.
Steven D'Aprano was concerned whether it's acceptable for typos to be
allowed in annotations after the change proposed by the PEP. Brett
Cannon responded that type checkers and other static analyzers (like
linters or programming text editors) will catch this type of error.
Jukka Lehtosalo added that this situation is analogous to how names in
function bodies are not resolved until the function is called.
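The analogy is easy to demonstrate:

```python
def f():
    # 'undefined_name' doesn't exist anywhere; defining f still succeeds,
    # because names in a function body are resolved only when f is called.
    return undefined_name

try:
    f()
    outcome = "no error"
except NameError:
    outcome = "NameError at call time"

print(outcome)  # NameError at call time
```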
A major topic of discussion was Nick Coghlan's suggestion to store
annotations in "thunk form", in other words as a specialized lambda
which would be able to access class-level scope (and allow for scope
customization at call time). He presented a possible design for it
(`indirect attribute cells
<https://mail.python.org/pipermail/python-dev/2017-November/150141.html>`_).
This was later seen as equivalent to "special forms" in Lisp. Guido van
Rossum expressed worry that this sort of feature cannot be safely
implemented in twelve weeks (i.e. in time before the Python 3.7 beta
freeze).
After a while it became clear that the point of division between
supporters of the string form vs. supporters of the thunk form is
actually about whether annotations should be perceived as a general
syntactic element vs. something tied to the type checking use case.
Finally, Guido van Rossum declared he's rejecting the thunk idea
based on the fact that it would require a new building block in the
interpreter. This block would be exposed in annotations, multiplying
possible types of values stored in ``__annotations__`` (arbitrary
objects, strings, and now thunks). Moreover, thunks aren't as
introspectable as strings. Most importantly, Guido van Rossum
explicitly stated interest in gradually restricting the use of
annotations to static typing (with an optional runtime component).
Nick Coghlan was convinced by PEP 563, too, promptly beginning
the mandatory bikeshedding session on the name of the ``__future__``
import. Many debaters agreed that ``annotations`` seems like
an overly broad name for the feature. Guido van Rossum briefly
decided to call it ``string_annotations`` but then changed his mind,
arguing that ``division`` is a precedent for a broad name with a clear
meaning.
The final improvement to the PEP suggested in the discussion by Mark
Shannon was the rejection of the temptation to pass string literals
through to ``__annotations__`` verbatim.
A side-thread of discussion started around the runtime penalty of
static typing, with topics like the import time of the ``typing``
module (which is comparable to ``re`` without dependencies, and
three times as heavy as ``re`` when counting dependencies).
Acknowledgements
================
This document could not be completed without valuable input,
encouragement and advice from Guido van Rossum, Jukka Lehtosalo, and
Ivan Levkivskyi.
Copyright
=========
This document has been placed in the public domain.
I've posted a new version of PEP 557, it should soon be available at
https://www.python.org/dev/peps/pep-0557/.
The only significant changes since the last version are:
- changing the "compare" parameter to be "order", since that more
accurately reflects what it does.
- Having the combination of "eq=False" and "order=True" raise an
exception instead of silently changing eq to True.
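The second change can be checked against today's ``dataclasses`` module, where this combination raises a ``ValueError``:

```python
from dataclasses import dataclass

# order=True requires eq=True; the combination below now raises instead
# of silently overriding eq:
try:
    @dataclass(eq=False, order=True)
    class Point:
        x: int
    error = None
except ValueError as exc:
    error = exc

print(error)  # eq must be true if order is true
```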
There were no other issues raised with the previous version of the PEP.
So with that, I think it's ready for a pronouncement.
Eric.