Note: I'm reposting this here after posting on the /r/Python reddit as
I've realised this is a better venue:
I have an idea for a new syntax element that combines any number of
statements and expressions into a single expression. I'm partial to
calling it a 'return' expression because that should be compatible with
existing code--there's guaranteed to be no existing code of the form
'return:'. But we could call it 'do' or 'val' or 'expr' or whatever.
    expression  ::=  ... | return_expr
    return_expr ::=  "return" ":" statement* expression+
We can define all expression-oriented syntax in terms of return
expressions, i.e., wrap up all statement-oriented syntax inside an
expression. For example:

    x = (return:
        if y == 1: z = 'a'
        elif y == 2: z = 'b'
        elif y == 3: z = 'c'
        else: z = 'd'
        z
    )
Or a single-line example:

    x = return: print("Blah"); 5
I know, it's the Python programmer's dreaded multi-line expression. Note
here that I'm not proposing any change to indentation rules. I'm relying
on parens to relax the rules. There's precedent for using parens in new
kinds of expressions--e.g. generator expressions. So the usage shouldn't
look alien in Python code.
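For comparison, the closest runnable equivalent today is a named helper function defined out-of-line (a sketch; the names are made up):

```python
# The 'return:' expression above is hypothetical; today the same
# effect requires hoisting the statements into a named function.
def compute_x(y):
    if y == 1:
        return 'a'
    elif y == 2:
        return 'b'
    elif y == 3:
        return 'c'
    else:
        return 'd'

x = compute_x(2)
print(x)  # b
```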
Now the controversy. We can use a return expression to get a multi-line,
multi-statement lambda, as long as lambda only cares that its body is
a single expression. Which I believe is the case; e.g., this is valid:
    f = lambda x: (
        x + 1,
        x + 2
    )
Anyway, an imaginary lambda example:

    # Need to wrap the return expr in parens to separate the lambdas.
    lambda s: (return:
        l = len(s)
        l >= 5
    ),
    # The next argument, a lambda without a return expr, happily lives
    # on a single line.
    lambda err: notifications.send(err)

    # Parens not needed here, only a single argument.
    lambda flag: return:
        pnlMain.back_color = "green" if flag else "red"
        # Must end with an expr, unlike a normal function. Think of it
        # this way: we're 'inside' a return, we _have to_ return
        # some value.
        pnlMain.back_color
I believe 'return: ...' is an unobtrusive and versatile solution. It's
_not_ meant to just forcefully shoehorn full functionality into lambdas
as I believe I show above; it doesn't break compatibility; it doesn't
require any indent/whitespace rule changes; and it's guaranteed to not
affect _any_ existing code.
'A much better idea would be to find a way to make all compound
statements into expressions, future-proofing the decision and avoiding
the redundancy between "compound statement statement" and "compound
'Something that has started to annoy me in the last couple of years is
the fact that most Python control statements cannot be used as
expressions. I feel this is a pretty deep limitation and personally I
don't feel it's well-justified.'
After a long editing process we've got PEP 484
<https://www.python.org/dev/peps/pep-0484/> (Type Hints) ready for your
review. This is by no means final, and several areas are either open (to be
resolved in a later draft) or postponed (to a different PEP altogether).
But there's enough meat that I think we can start having the discussion.
Please also see PEP 483 <https://www.python.org/dev/peps/pep-0483/> (The
Theory of Type Hints; copied and reformatted from the original Quip document
that I posted just before last Christmas) and PEP 482
<https://www.python.org/dev/peps/pep-0482/> (Literature Overview for Type
Hints, by Łukasz). Those are informational PEPs though; the actual spec is
focused in PEP 484 (the only one on the Standards Track).
As I said earlier, I hope to have a rough consensus before PyCon
<https://us.pycon.org/2015/> and working code (just the typing.py module)
committed to CPython before the last 3.5 alpha.
Here is the raw text of PEP 484. Fire away!!
Title: Type Hints
Author: Guido van Rossum <guido(a)python.org>, Jukka Lehtosalo <
jukka.lehtosalo(a)iki.fi>, Łukasz Langa <lukasz(a)langa.pl>
Discussions-To: Python-Dev <python-dev(a)python.org>
Type: Standards Track
This PEP introduces a standard syntax for type hints using annotations
on function definitions.
The proposal is strongly inspired by mypy [mypy]_.
The theory behind type hints and gradual typing is explained in PEP 483.
Rationale and Goals
===================
PEP 3107 added support for arbitrary annotations on parts of a function
definition. Although no meaning was assigned to annotations then, there
has always been an implicit goal to use them for type hinting, which is
listed as the first possible use case in said PEP.
This PEP aims to provide a standard syntax for type annotations, opening
up Python code to easier static analysis and refactoring, potential
runtime type checking, and performance optimizations utilizing type
information.
Type Definition Syntax
======================
The syntax leverages PEP 3107-style annotations with a number of
extensions described in sections below. In its basic form, type hinting
is used by filling function annotations with classes::
def greeting(name: str) -> str:
    return 'Hello ' + name
This denotes that the expected type of the ``name`` argument is ``str``.
Similarly, the expected return type is ``str``. Subclasses of
a specified argument type are also accepted as valid types for that
argument.
Abstract base classes, types available in the ``types`` module, and
user-defined classes may be used as type hints as well. Annotations
must be valid expressions that evaluate without raising exceptions at
the time the function is defined. In addition, the needs of static
analysis require that annotations must be simple enough to be
interpreted by static analysis tools. (This is an intentionally
somewhat vague requirement.)
.. FIXME: Define rigorously what is/isn't supported.
When used as an annotation, the expression ``None`` is considered
equivalent to ``NoneType`` (i.e., ``type(None)``) for type hinting
purposes.
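Since annotations are evaluated at definition time and stored on the function object (per PEP 3107), they can be inspected at runtime; a small sketch:

```python
def greeting(name: str) -> str:
    return 'Hello ' + name

# PEP 3107 stores the evaluated annotation expressions in
# the function's __annotations__ attribute.
print(greeting.__annotations__)
# {'name': <class 'str'>, 'return': <class 'str'>}
```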
Type aliases are also valid type hints::
integer = int
def retry(url: str, retry_count: integer): ...
New names that are added to support features described in following
sections are available in the ``typing`` package.
Frameworks expecting callback functions of specific signatures might be
type hinted using ``Callable[[Arg1Type, Arg2Type], ReturnType]``.
from typing import Any, AnyArgs, Callable

def feeder(get_next_item: Callable[[], Item]): ...

def async_query(on_success: Callable[[int], None],
                on_error: Callable[[int, Exception], None]): ...

def partial(func: Callable[AnyArgs, Any], *args): ...
Since using callbacks with keyword arguments is not perceived as
a common use case, there is currently no support for specifying keyword
arguments with ``Callable``.
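A runnable sketch of the callback annotations above, using only the list form of ``Callable`` (note: the draft's ``AnyArgs`` never shipped; the released ``typing`` module spells it ``Callable[..., ReturnType]``):

```python
from typing import Callable

def async_query(on_success: Callable[[int], None],
                on_error: Callable[[int, Exception], None]) -> None:
    # A stand-in "query" that always succeeds with status 200.
    on_success(200)

results = []
async_query(lambda status: results.append(status),
            lambda status, exc: results.append((status, exc)))
print(results)  # [200]
```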
Since type information about objects kept in containers cannot be
statically inferred in a generic way, abstract base classes have been
extended to support subscription to denote expected types for container
elements. Example::

from typing import Mapping, Set

def notify_by_email(employees: Set[Employee],
                    overrides: Mapping[str, str]): ...
Generics can be parametrized by using a new factory available in
``typing`` called ``TypeVar``. Example::
from typing import Sequence, TypeVar

T = TypeVar('T')      # Declare type variable

def first(l: Sequence[T]) -> T:   # Generic function
    return l[0]
In this case the contract is that the returned value is consistent with
the elements held by the collection.
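To make the contract concrete, a usage sketch of the generic ``first`` function (with the body the excerpt elides):

```python
from typing import Sequence, TypeVar

T = TypeVar('T')  # Declare type variable

def first(seq: Sequence[T]) -> T:  # Generic function
    # The return type matches the element type of the sequence.
    return seq[0]

print(first([10, 20, 30]))  # 10
print(first('abc'))         # a
```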
``TypeVar`` supports constraining parametric types to classes with any of
the specified bases. Example::
from typing import Iterable

X = TypeVar('X')
Y = TypeVar('Y', Iterable[X])

def filter(rule: Callable[[X], bool], input: Y) -> Y:
    ...
.. FIXME: Add an example with multiple bases defined.
In the example above we specify that ``Y`` can be any subclass of
Iterable with elements of type ``X``, as long as the return type of
``filter()`` will be the same as the type of the ``input`` argument.
.. FIXME: Explain more about how this works.
When a type hint contains names that have not been defined yet, that
definition may be expressed as a string, to be resolved later. For
example, instead of writing::
def notify_by_email(employees: Set[Employee]): ...
one might write::
def notify_by_email(employees: 'Set[Employee]'): ...
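Forward references are most useful for self-referential classes, where the class name is not yet bound while its own body is being evaluated; a sketch:

```python
class Tree:
    def __init__(self) -> None:
        self.children = []  # type: list

    # 'Tree' is not yet defined while the class body is evaluated,
    # so the hint is written as a string to be resolved later.
    def add_child(self, child: 'Tree') -> None:
        self.children.append(child)

root = Tree()
root.add_child(Tree())
print(len(root.children))  # 1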
.. FIXME: Rigorously define this. Defend it, or find an alternative.
Since accepting a small, limited set of expected types for a single
argument is common, there is a new special factory called ``Union``.
from typing import Union

def handle_employees(e: Union[Employee, Sequence[Employee]]):
    if isinstance(e, Employee):
        e = [e]
    ...
A type factored by ``Union[T1, T2, ...]`` responds ``True`` to
``issubclass`` checks for ``T1`` and any of its subclasses, ``T2`` and
any of its subclasses, and so on.
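A runnable sketch of the ``Union`` example, with ``Employee`` as a stand-in class and a body filled in for illustration:

```python
from typing import Sequence, Union

class Employee:
    def __init__(self, name: str) -> None:
        self.name = name

def handle_employees(e: Union[Employee, Sequence[Employee]]) -> list:
    # Normalize a lone Employee into a one-element list, then proceed
    # uniformly over the sequence.
    if isinstance(e, Employee):
        e = [e]
    return [emp.name for emp in e]

print(handle_employees(Employee('ann')))                   # ['ann']
print(handle_employees([Employee('bo'), Employee('cy')]))  # ['bo', 'cy']
```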
One common case of union types is *optional* types. By default,
``None`` is an invalid value for any type, unless a default value of
``None`` has been provided in the function definition. Examples::
def handle_employee(e: Union[Employee, None]): ...
As a shorthand for ``Union[T1, None]`` you can write ``Optional[T1]``;
for example, the above is equivalent to::
from typing import Optional
def handle_employee(e: Optional[Employee]): ...
An optional type is also automatically assumed when the default value is
``None``, for example::
def handle_employee(e: Employee = None): ...
This is equivalent to::
def handle_employee(e: Optional[Employee] = None): ...
.. FIXME: Is this really a good idea?
A special kind of union type is ``Any``, a class that responds
``True`` to ``issubclass`` of any class. This lets the user
explicitly state that there are no constraints on the type of a
specific argument or return value.
Platform-specific type checking
-------------------------------
In some cases the typing information will depend on the platform that
the program is being executed on. To enable specifying those
differences, simple conditionals can be used::
from typing import PY2, WINDOWS

if PY2:
    text = unicode
else:
    text = str

def f() -> text: ...

if WINDOWS:
    loop = ProactorEventLoop
else:
    loop = UnixSelectorEventLoop
Arbitrary literals defined in the form of ``NAME = True`` will also be
accepted by the type checker to differentiate type resolution::
DEBUG = False
For the purposes of type hinting, the type checker assumes ``__debug__``
is set to ``True``, in other words the ``-O`` command-line option is not
used while type checking.
Compatibility with other uses of function annotations
=====================================================
A number of existing or potential use cases for function annotations
exist, which are incompatible with type hinting. These may confuse a
static type checker. However, since type hinting annotations have no
run time behavior (other than evaluation of the annotation expression
and storing annotations in the ``__annotations__`` attribute of the
function object), this does not make the program incorrect -- it just
makes it issue warnings when a static analyzer is used.
To mark portions of the program that should not be covered by type
hinting, use the following:
* a ``@no_type_checks`` decorator on classes and functions
* a ``# type: ignore`` comment on arbitrary lines
.. FIXME: should we have a module-wide comment as well?
Type Hints on Local and Global Variables
========================================
No first-class syntax support for explicitly marking variables as being
of a specific type is added by this PEP. To help with type inference in
complex cases, a comment of the following format may be used::
x = []   # type: List[Employee]
In the case where type information for a local variable is needed before
it was declared, an ``Undefined`` placeholder might be used::
from typing import Undefined
x = Undefined # type: List[Employee]
y = Undefined(int)
If type hinting proves useful in general, a syntax for typing variables
may be provided in a future Python version.
Explicit raised exceptions
==========================
No support for listing explicitly raised exceptions is being defined by
this PEP. Currently the only known use case for this feature is
documentational, in which case the recommendation is to put this
information in a docstring.
The ``typing`` package
======================
To open the usage of static type checking to Python 3.5 as well as older
versions, a uniform namespace is required. For this purpose, a new
package in the standard library is introduced called ``typing``. It
holds a set of classes representing builtin types with generics, namely:
* Dict, used as ``Dict[key_type, value_type]``
* List, used as ``List[element_type]``
* Set, used as ``Set[element_type]``. See remark for ``AbstractSet``
* FrozenSet, used as ``FrozenSet[element_type]``
* Tuple, used as ``Tuple[index0_type, index1_type, ...]``.
Arbitrary-length tuples might be expressed using ellipsis, in which
case the following arguments are considered the same type as the last
defined type on the tuple.
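The container generics above can be sketched in use (assuming the ``typing`` module as it eventually shipped):

```python
from typing import Dict, List, Tuple

def bounds(points: Tuple[int, ...]) -> Tuple[int, int]:
    # Arbitrary-length homogeneous tuple, expressed via ellipsis.
    return min(points), max(points)

def tally(words: List[str]) -> Dict[str, int]:
    counts = {}  # type: Dict[str, int]
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

print(bounds((3, 1, 4, 1, 5)))  # (1, 5)
print(tally(['a', 'b', 'a']))   # {'a': 2, 'b': 1}
```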
It also introduces factories and helper members needed to express
generics and union types:
* Any, used as ``def get(key: str) -> Any: ...``
* Union, used as ``Union[Type1, Type2, Type3]``
* TypeVar, used as ``X = TypeVar('X', Type1, Type2, Type3)`` or simply
``Y = TypeVar('Y')``
* Undefined, used as ``local_variable = Undefined # type: List[int]`` or
  ``local_variable = Undefined(List[int])`` (the latter being slower
  during runtime)
* Callable, used as ``Callable[[Arg1Type, Arg2Type], ReturnType]``
* AnyArgs, used as ``Callable[AnyArgs, ReturnType]``
* AnyStr, equivalent to ``TypeVar('AnyStr', str, bytes)``
All abstract base classes available in ``collections.abc`` are
importable from the ``typing`` package, with added generics support:
* Set as ``AbstractSet``. This name change was required because ``Set``
in the ``typing`` module means ``set()`` with generics.
The library includes literals for platform-specific type hinting:
* PY3, equivalent to ``not PY2``
* UNIXOID, equivalent to ``not WINDOWS``
The following types are available in the ``typing.io`` module:
The following types are provided by the ``typing.re`` module:
* Match and Pattern, the types of ``re.match()`` and ``re.compile()``
  results
As a convenience measure, types from ``typing.io`` and ``typing.re`` are
also available in ``typing`` (quoting Guido, "There's a reason those
modules have two-letter names.").
The place of the ``typing`` module in the standard library
==========================================================
.. FIXME: complete this section
The main use case of type hinting is static analysis using an external
tool without executing the analyzed program. Existing tools used for
that purpose like ``pyflakes`` [pyflakes]_ or ``pylint`` [pylint]_
might be extended to support type checking. New tools, like mypy's
``mypy -S`` mode, can be adopted specifically for this purpose.
Type checking based on type hints is understood as a best-effort
mechanism. In other words, whenever types are not annotated and cannot
be inferred, the type checker considers such code valid. Type errors
are only reported in case of explicit or inferred conflict. Moreover,
as a mechanism that is not tied to execution of the code, it does not
affect runtime behaviour. In other words, even in the case of a typing
error, the program will continue running.
The implementation of a type checker, whether linting source files or
enforcing type information during runtime, is out of scope for this PEP.
.. FIXME: Describe stub modules.
.. FIXME: Describe run-time behavior of generic types.
PEP 482 lists existing approaches in Python and other languages.
Is type hinting Pythonic?
=========================
Type annotations provide important documentation for how a unit of code
should be used. Programmers should therefore provide type hints on
public APIs, namely argument and return types on functions and methods
considered public. However, because types of local and global variables
can often be inferred, they are rarely necessary.
The kind of information that type hints hold has always been possible to
achieve by means of docstrings. In fact, a number of formalized
mini-languages for describing accepted arguments have evolved. Moving
this information to the function declaration makes it more visible and
easier to access both at runtime and by static analysis. Adding to that
the notion that “explicit is better than implicit”, type hints are
arguably quite Pythonic.
This document could not be completed without valuable input,
encouragement and advice from Jim Baker, Jeremy Siek, Michael Matson
Vitousek, Andrey Vlasovskikh, and Radomir Dopieralski.
Influences include existing languages, libraries and frameworks
mentioned in PEP 482. Many thanks to their creators, in alphabetical
order: Stefan Behnel, William Edwards, Greg Ewing, Larry Hastings,
Anders Hejlsberg, Alok Menghrajani, Travis E. Oliphant, Joe Pamer,
Raoul-Gabriel Urma, and Julien Verlaguet.
This document has been placed in the public domain.
--Guido van Rossum (python.org/~guido)
Some of this has come up multiple times in the discussion of the is_close_to PEP and the math.nan idea, but I think it's actually a separate issue.
I think it might be worth adding some of the following to math:
* isnormal (like C)
* nextafter/nexttoward (like C) and/or nextplus/nextminus (like decimal)
* bitdifference (inverse of nexttoward)
* ulp (like Java)
* signbit (like C)
* isqnan, issnan (like decimal)
* makenan(payload, quiet=True, sign=False) and nanpayload(f)
* as_tuple and from_tuple (like decimal--although of course the tuple isn't quite the same)
In some cases, it would be perfectly reasonable for these functions to only be available on, e.g., Windows (maybe even only NT, not CE) plus any platform that conforms to either C99 or POSIX-2001 (which includes almost all non-Windows platforms you're likely to run into nowadays), or only on platforms that use IEEE 754-2008 or Intel-style IEEE 754-1985 (again, almost everything), etc., as appropriate; no need to try to implement them all fully generally from scratch for any C89 platform just in case someone decides to build Python 3.5 for 68k A/UX or something.
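Several of these need no C support at all. For instance, ulp can be prototyped in pure Python with struct (a sketch assuming IEEE 754 binary64; a real stdlib version would handle platform quirks):

```python
import math
import struct

def ulp(x: float) -> float:
    # Java-style ulp(): the gap between |x| and the next larger
    # representable double, found by bumping the bit pattern by one.
    x = abs(x)
    if math.isnan(x):
        return x
    if math.isinf(x):
        return math.inf
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    nxt = struct.unpack('<d', struct.pack('<Q', bits + 1))[0]
    return nxt - x

print(ulp(1.0) == 2.0 ** -52)  # True
```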
Some of you here probably saw/heard David Beazley at PyCon 2012, but
I'm only just now finding out about this thanks to the magic of the
internet. The talk is long, but I just want to highlight this one
little sequence (about 20 seconds)... and how python-ideas is
PEP 285 <http://legacy.python.org/dev/peps/pep-0285> provides some
justification for why arithmetic operations should be valid on bools and
why bool should inherit from int (see points (4) and (6) respectively).
Since it's been 12 years (it's possible this has been brought up again
between then and now), I thought it could be worthwhile to take another
look.
I am mostly interested in a resurveying of the questions:
1) Would it still be very inconvenient to implement bools separately from
ints?
2) Do most Python users still agree that arithmetic operations should be
supported on booleans?
Something I noticed is that with PEP 484
<https://www.python.org/dev/peps/pep-0484/> (Type Hints) specified as is,
there would be no way to statically verify that a function will only
operate on ints and not bools. For example, a function that operates only
on the integer values in a JSON dict created by the builtin `json`
module (using the default decoder) cannot be expressed, as the dict's
boolean values would also be accepted.
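The runtime relationship that forces this is easy to demonstrate:

```python
import json

data = json.loads('{"count": 3, "flag": true}')

# bool is a subclass of int, so the decoded boolean also passes an
# isinstance check against int -- which is exactly what a PEP 484
# checker would mirror statically.
print(isinstance(data["count"], int))  # True
print(isinstance(data["flag"], int))   # True (!)
print(issubclass(bool, int))           # True
```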
It would be nice to have literal keywords/symbols. By this I mean terms
that look like words but are not previously defined variables.
Some other languages prefix them with a colon, :foo, or use a backtick,
`foo. In Clojure, for example:

    user=> (type :foo)
    clojure.lang.Keyword
    user=> (type `foo)
    clojure.lang.Symbol
*Why is this useful?*
One use case in NumPy/Pandas is to specify columns or fields of data
without resorting to strings. E.g.
df = pandas.load(...)
*What do people do now?*
Currently people use auto-generated attributes
auto-generated attributes work great until you want to use chained
strings work but feel unpleasant
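For what it's worth, a rough emulation is possible today with interned marker objects (a sketch; ``Symbol`` is a hypothetical name, and of course it still can't be written as a bare ``:foo`` literal):

```python
class Symbol:
    # Interned marker objects emulating Lisp-style symbols: equal
    # names always yield the same object, so identity checks work
    # like Clojure keywords.
    _interned = {}

    def __new__(cls, name):
        sym = cls._interned.get(name)
        if sym is None:
            sym = super().__new__(cls)
            sym.name = name
            cls._interned[name] = sym
        return sym

    def __repr__(self):
        return ':' + self.name

print(Symbol('income') is Symbol('income'))  # True
print(Symbol('income'))                      # :income
```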
This is a common language construct so my guess is that it has come up
before. Sadly, Google searching the terms *keywords* and *symbols* results
in a lot of unrelated material. Can anyone point me to previous discussion?
There are clearly issues with using :foo in that it overlaps with slice
syntax, presumably some other character could be pressed into service if
this was found worthwhile.
I can come up with more motivating use cases if desired.
There has been a lot of chatter about this, which I think has served to
provide some clarity, at least to me. However, I'm concerned that the
upshot, at least for folks not deep into the discussion, will be: clearly
there are too many use-case specific details to put any one thing in the
std lib. But I still think we can provide something that is useful for most
use-cases, and would like to propose what that is, and what the decision
should be:
A function for the math module, called something like "is_close" or
"approx_equal". It will compute a relative tolerance, with a default of
maybe around 1e-12, with the user able to specify the tolerance they want.
Optionally, the user can specify a "minimum absolute tolerance"; it will
default to zero, but can be set so that comparisons to zero can be handled.
The relative tolerance will be computed from the smaller of the two input
values, so as to get symmetry: is_close(a,b) == is_close(b,a). (this is
the Boost "strong" definition, and what is used by Steven D'Aprano's code
in the statistics test module)
Alternatively, the relative error could be computed against a particular
one of the input values (the second one?). This would be asymmetric, but be
more clear exactly how "relative" is defined, and be closer to what people
may expect when using it as an "actual vs. expected" test -- "expected"
would be the scaling value. If the tolerance is small, it makes very little
difference anyway, so I'm happy with whatever consensus moves us to. Note
that if we go this way, then the parameter names should make it at least a
little more clear -- maybe "actual" and "expected", rather than x and y or
a and b or... and the function name should be something like is_close_to,
rather than just is_close.
It will be designed for floating point numbers, and handle inf, -inf, and
NaN "properly". But it will also work with other numeric types, to the
extent that duck typing "just works" (i.e. division and comparisons all
behave as expected). Complex numbers will be handled by:
is_close(x.real, y.real) and is_close(x.imag, y.imag)
(but I haven't written any code for that yet)
It will not do a simple absolute comparison -- that is the job of a
different function, or, better yet, folks just write it themselves:
abs(x - y) <= delta
really isn't much harder to write than a function call.
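A minimal sketch of the symmetric ("strong") relative test described above, not the gist's actual code, and with special values like inf/NaN left to the real implementation:

```python
def is_close(a, b, rel_tol=1e-12, abs_tol=0.0):
    # "Strong" test: the difference must be small relative to BOTH
    # inputs (i.e. scaled by the smaller magnitude), which makes
    # is_close(a, b) == is_close(b, a).
    diff = abs(a - b)
    return diff <= rel_tol * min(abs(a), abs(b)) or diff <= abs_tol

print(is_close(1.0, 1.0 + 1e-13))           # True
print(is_close(0.0, 1e-15))                 # False: nothing is
                                            # *relatively* close to zero
print(is_close(0.0, 1e-15, abs_tol=1e-12))  # True
```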
Here is a gist with a sample implementation:
I need to add more tests, and make the tests proper unit tests, but it's a
start. I also need to see how it does with data types other than float --
hopefully, it will "just work" with the core set.
I hope we can come to some consensus that something like this is the way to
go.
On Sun, Jan 18, 2015 at 11:27 AM, Ron Adam <ron3200(a)gmail.com> wrote:
> On 01/17/2015 11:37 PM, Chris Barker wrote:
>> (Someone claimed that 'nothing is close to zero'. This is
>> nonsensical both in applied math and everyday life.)
>> I'm pretty sure someone (more than one of us) asserted that "nothing is
>> *relatively* close to zero -- very different.
> Yes, that is the case.
> And I really wanted a way to have a default behavior that would do a
>> reasonable transition to an absolute tolerance near zero, but I no longer
>> think that's possible. (numpy's implementation kind of does that, but it is
>> really wrong for small numbers, and if you made the default min_tolerance
>> the smallest possible representable number, it really wouldn't be useful.)
> I'm going to try to summarise what I got out of this discussion. Maybe it
> will help bring some focus to the topic.
> I think there are two cases to consider.
> # The most common case.
> rel_is_good(actual, expected, delta) # value +- %delta.
> # Testing for possible equivalence?
> rel_is_close(value1, value2, delta) # %delta close to each other.
> I don't think they are quite the same thing.
> rel_is_good(9, 10, .1) --> True
> rel_is_good(10, 9, .1) --> False
> rel_is_close(9, 10, .1) --> True
> rel_is_close(10, 9, .1) --> True
> In the "is close" case, it shouldn't matter what order the arguments are
> given. The delta is the distance of the smaller number from the larger
> number (of the same sign).
> So when calculating the relative error from two values, you want it to be
> consistent with the rel_is_close function.
> rel_is_close(a, b, delta) <---> rel_err(a, b) <= delta
> And you should not use the rel_err function in the rel_is_good function.
> The next issue is, where does the numeric accuracy of the data,
> significant digits, and the language's accuracy (ULPs), come into the
> picture?
> My intuition.. I need to test the idea to make a firmer claim.. is that in
> the case of is_good, you want to exclude the uncertain parts, but with
> is_close, you want to include the uncertain parts.
> Two values "are close" if you can't tell one from the other with
> certainty. The is_close range includes any uncertainty.
> A value is good if it's within a range with certainty. And this excludes
> any uncertainty.
> This is where taking in consideration of an absolute delta comes in. The
> minimum range for both is the uncertainty of the data. But is_close and
> is_good do different things with it.
> Of course all of this only applies if you agree with these definitions of
> is_close, and is_good. ;)
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
Fixed it on my system! (OS X Yosemite)
Okay, I'll get working on this patch soon.
On Mon, Jan 19, 2015 at 5:39 PM, Chris Angelico <rosuav(a)gmail.com> wrote:
> On Tue, Jan 20, 2015 at 9:22 AM, Chris Angelico <rosuav(a)gmail.com> wrote:
> > Skipped tests are normal; if you're building on a system that doesn't
> > have GDB, it'll skip test_gdb, because there's nothing to test. The
> > failing tests, though, are likely to be a problem. What environment
> > are you building on? Are you experienced with building Python from
> > source?
> FWIW, here's my 'make test' summary. Debian Wheezy, Python 3.5 (tip),
> amd64 architecture.
> 379 tests OK.
> 1 test altered the execution environment:
> 11 tests skipped:
> test_devpoll test_gdb test_kqueue test_msilib test_ossaudiodev
> test_startfile test_tk test_ttk_guionly test_winreg test_winsound
It says "standards track", but it doesn't seem to work in Python 3.4. I am
really looking forward to this! Any chance we can get a from __future__
import for inclusion in 3.4? Or maybe it can make it into 3.5?
I'm writing something like remote procedure calls, so type hinting would
be very interesting to me - I could send out the function parameter types
to the remote side, which could already check the type of the parameters
before even sending out a request.
I would however be interested in even more detailed information. Like
the range in which a number should lie - so giving a maximum and a minimum
value for an int. Would it be possible with a PEP 484 system to add information
like that? For example by inheriting from int?
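PEP 484 as drafted has no notion of range-constrained types; the closest you can get today is a runtime check via an int subclass (a sketch; ``Port`` and its bounds are made up):

```python
class Port(int):
    # A hypothetical refinement of int restricted to [0, 65535].
    # Note: this is a *runtime* check only; a PEP 484 static checker
    # sees merely "a subclass of int" and cannot verify the range.
    MIN, MAX = 0, 65535

    def __new__(cls, value: int) -> 'Port':
        if not cls.MIN <= value <= cls.MAX:
            raise ValueError('Port out of range: %r' % value)
        return super().__new__(cls, value)

print(Port(8080))  # 8080
try:
    Port(70000)
except ValueError as e:
    print('rejected:', e)
```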