I was excited to learn about SupportsInt, SupportsFloat, and
SupportsComplex protocols decorated with @runtime_checkable because I
thought they bridged the gap between ABCs and typing protocol classes,
allowing consistent type checks by static tools and at runtime.
However, I am unable to see how to use those classes in practice.
For example, SupportsComplex: I understand that at runtime the
@runtime_checkable decorator merely checks whether __complex__ exists
in the type, but that is not a good predictor of how a numeric type
can actually be used. Types that actually can be converted to complex
fail the isinstance check with SupportsComplex. That's the case for
built-in int and float, as well as NumPy integer and float types like
float16 and uint8.
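To make the mismatch concrete, here is a small sketch using only the
stdlib (no NumPy needed): int converts cleanly with complex(), yet
fails the SupportsComplex runtime check because it does not implement
__complex__:

```python
from typing import SupportsComplex

# int can be converted to complex without trouble...
print(complex(1))                      # (1+0j)

# ...but it fails the runtime protocol check, because the
# @runtime_checkable machinery only looks for __complex__, which
# int does not define:
print(isinstance(1, SupportsComplex))  # False
print(hasattr(int, "__complex__"))     # False
```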
On the other hand, SupportsInt suggests that I can convert a complex
to int, but in fact the complex type implements __int__ and __float__
only to provide a better error message, according to Guido van Rossum
in an earlier thread here.
>>> isinstance(1+0j, SupportsInt)
True
>>> int(1+0j)
Traceback (most recent call last):
  ...
TypeError: can't convert complex to int
The issue with SupportsInt also happens with SupportsFloat.
In addition, the way Mypy interprets these protocol classes is not the
same as what we can see at runtime.
Given the above, what are recommended use cases for the runtime
checkable SupportsInt, SupportsFloat, and SupportsComplex?
| Author of Fluent Python (O'Reilly, 2015)
| Technical Principal at ThoughtWorks
| Twitter: @ramalhoorg
(Moving this thread here from the other mailing list.)
Thanks for the update, Alfonso. Cool to know there's active work on this in
Pyre - and that proper support for variadics is close to becoming a
reality, if Mark is planning to submit a PEP.
In that sense, I believe that there are many reasons to think that it is not a
> good idea to create yet another type checker for Python. In the particular
> case of DeepMind, my humble suggestion would be to contribute to Mypy or
> Pyre, at least for the part related to the type system.
Right. Initially I was thinking it might be better to have a separate tool
for shape checking so that it might be easier to plug into existing
infrastructure for folks who have already committed to a particular type
checker (e.g. we use pytype, so if we implemented support in Mypy we'd have
to run both, and that seemed like it might have some complications) - but
on reflection I agree it would be better to contribute to an existing
checker a) to make use of the existing framework that checker would provide
and b) yeah, to avoid proliferation of tools.
(Also, to be clear: we don't have any official work on this at DeepMind;
just a few of us interested and playing around in our free time.)
for example arithmetic on types, which actually is something that we are
> currently working on, in case anyone is interested.
So it sounds like Pyre already has a pretty developed roadmap for this -
super cool :) In that case, maybe the question is: what are you working on
already, and what's left to be done - that is, what external contributions
would be most useful?
On Mon, 15 Jun 2020 at 11:39, <proyect.hd(a)gmail.com> wrote:
> Hi everyone,
> Thanks once again for bringing attention to this important topic.
> I think that we all agree that the main step in this direction is the
> introduction of variadics, which has been mentioned several times 1
> , 2 <https://github.com/python/typing/issues/513>, 3
> Variadic support is more mature than it seems. In the case of Pyre, we
> have had support for variadics for about a year already. The first official
> proposal that I recall was at last year's Python Typing Summit (here
> The syntax is aligned with the proposal that Guido has shared. However,
> iirc the initial syntax relied too much on Concatenate/Expand, making it
> verbose and ambiguous when there are 2 variadics. For that purpose, the
> current syntax relies on capture groups "" for manually specifying the
> part of the type that corresponds to the variadic, and only requires
> Concatenate for concatenating types and variadics. More about the final
> syntax can be seen here (here
> Although I don't want this to become a PEP, the way the syntax works with
> the proposed example is:
> tf.Tensor[Batch, Time, [64, 64, 3]]
> Special cases could be considered to make it more ergonomic when there is
> only one variadic at the end, but in general having an unambiguous syntax is a
> Regarding maturity, afaik Mark Mendoza had informal conversations with the
> community regarding the proposed syntax and got positive feedback, and he
> currently plans to submit a PEP, once the Parameter Specification PEP gets
> merged (here <https://www.python.org/dev/peps/pep-0612/>).
> Hence, if we assume that we are not so far from agreeing on a final
> syntax, then the next question is about having actual support for it. As I
> mentioned, Pyre already supports it so it could be used for testing ideas
> and giving developers the opportunity of writing code stubs before other
> type checkers get support. In that sense, I believe that there are many
> reasons to think that it is not a good idea to create yet another type checker
> for Python. In the particular case of DeepMind, my humble suggestion would
> be to contribute to Mypy or Pyre, at least for the part related to the
> type system. Regarding Teddy's work, afaik Teddy was trying to contribute
> to Mypy itself so perhaps his checker is more a proof of concept. I hope
> that he will tell us more about it.
> About static shape checkers that rely on abstract interpretation, for sure
> there is room for them, but it will be better if they rely on the more
> advanced type specifications that variadics will introduce. After all,
> variadics at this point will only be useful for some use cases, since for
> many tensor operations other functionalities are needed, for example
> arithmetic on types, which actually is something that we are currently
> working on, in case that anyone is interested.
> Finally, if there are many teams working in this direction, I would suggest
> organising some online meetings, inspired by what they do at MLIR (here
> PS: I would propose to keep future discussion in Python Typing mailing
> list. Many people that would be interested in this sort of discussions
> follow that list.
On Mon, 15 Jun 2020 at 11:36, Matthew Rahtz <mrahtz(a)google.com> wrote:
> Thanks for the links, Guido! That first one in particular has some
> interesting ideas I hadn't seen before. I'll incorporate them as options
> into the doc we're preparing.
> Adam -
> IMO the biggest bottleneck to porting my tool, or implementing any other
>> one, is a good abstract interpreter for Python that can handle programs
>> that are incomplete, have syntax errors, etc
> Interesting. I'm guessing you're saying this with a view to having good
> support in e.g. IDEs? Sergei, am I right in thinking you have some
> experience with this? I guess PyCharm must also have some internal solution
> to this; I'm not sure how hard it would be to use it as a standalone tool...
> I also wanted to add that in my experience implementing an interpreter in
>> Python will likely make it more difficult to write something good in the
>> long term, because we'll miss out on all the nice things like ADTs with
>> pattern matching, exhaustiveness checks, etc.
> Also interesting. Yes, let's keep this in mind.
> On Fri, 12 Jun 2020 at 17:24, Guido van Rossum <guido(a)python.org> wrote:
>> Here are some links to docs with well thought-out ideas for additions to
>> the standard Python (static) type system as described by PEP 484 to support
>> numpy(-ish) array types and shapes:
>> Here's a doc I've held on to (mostly written by Ivan Levkivskiy) with a
>> list of proposals that made the rounds at PyCon 2019 (and even before).
>> And check out the typing summit schedule:
>> On Fri, Jun 12, 2020 at 9:19 AM 'Adam Paszke' via Python shape checkers <
>> python-shape-checkers(a)googlegroups.com> wrote:
>>> Hi everyone,
>>> Thanks a lot for starting the list Matthew! It'll be great for all of us
>>> to get together.
>>> Since you've asked about the stage of my project, the Swift prototype is
>>> working pretty nicely, but I didn't get to porting it to Python (except for
>>> a *very early* prototype in Haskell). However, if there is sufficient
>>> interest in such an effort, then I'm happy to reprioritize and I'm pretty
>>> sure that I could spend even up to 50% of my time on that.
>>> IMO the biggest bottleneck to porting my tool, or implementing any other
>>> one, is a good abstract interpreter for Python that can handle programs
>>> that are incomplete, have syntax errors, etc. That's an effort we could
>>> definitely share, and in the future we can even prototype multiple
>>> different checkers on top of this common infrastructure. In Swift this was
>>> easy, because I could write a very simple interpreter for SIL (Swift
>>> Intermediate Representation), which was already quite low-level. Mirroring
>>> Python's semantics faithfully will be much more difficult as even "trivial"
>>> operations like attribute lookups can get extremely complicated.
>>> I also wanted to add that in my experience implementing an interpreter
>>> in Python will likely make it more difficult to write something good in the
>>> long term, because we'll miss out on all the nice things like ADTs with
>>> pattern matching, exhaustiveness checks, etc. Haskell/OCaml make those
>>> things a breeze. I do understand that they might make onboarding a little
>>> slower, but I wouldn't want to rule them out as an option just yet, as it
>>> makes a huge difference.
>>> Also, it's super cool to see that other people are getting to
>>> implementing more static checkers too. I'll have to read Teddy's thesis
>>> soon to better understand how it relates to what I did.
>>> On Fri, Jun 12, 2020 at 1:47 PM 'mrahtz' via Python shape checkers <
>>> python-shape-checkers(a)googlegroups.com> wrote:
>>>> Hey everyone!
>>>> First off - welcome to the group! There's been scattered interest in
>>>> shape checking for some time, so by coming all together in one place here
>>>> rather than in scattered email threads and GitHub issues and Slack channels
>>>> I'm hoping we can push this through to something suitable for widespread adoption.
>>>> To summarise the current state of shape annotation and checking, there
>>>> are three categories of things to care about:
>>>> - Defining the syntax for how code should be annotated with shapes
>>>> - Runtime shape checkers
>>>> - Static shape checkers
>>>> Stephan made a great start with Ideas for array shape typing in Python.
>>>> A group of us at DeepMind have been working on a follow-up which goes into
>>>> more detail, which we should be able to share soon once it's been cleaned up.
>>>> There's also the syntax that tsalib <https://github.com/ofnote/tsalib> uses,
>>>> though it only allows annotation of shapes (without specification of e.g.
>>>> whether something is a tf.Tensor or an np.ndarray, and without any info
>>>> on data type).
>>>> *Runtime shape checkers*
>>>> Some existing options here include ShapeGuard
>>>> <https://github.com/Qwlouse/shapeguard> (which is what most folks at
>>>> DeepMind are using at the moment) and tsanley
>>>> <https://github.com/ofnote/tsanley>. We also have an internal
>>>> prototype at DeepMind that can check annotations like x:
>>>> tf.Tensor[Batch, Time, 64, 64, 3] which we're still working on.
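[Editor's sketch: for anyone curious what the runtime side of such a
checker can look like, here is a minimal illustration with entirely
hypothetical names (this is not the DeepMind prototype): symbolic
dimensions are classes that match any concrete size, while literal
ints must match exactly.]

```python
# Hypothetical sketch of the core of a runtime shape checker for
# annotations like Tensor[Batch, Time, 64, 64, 3].

class Dim:
    """Base class for symbolic dimensions (matches any concrete size)."""

class Batch(Dim): ...
class Time(Dim): ...

def shape_matches(shape, spec):
    """True if a concrete shape satisfies a spec of Dim classes / ints."""
    if len(shape) != len(spec):
        return False
    for size, s in zip(shape, spec):
        if isinstance(s, type) and issubclass(s, Dim):
            continue          # symbolic dimension: any size is fine
        if size != s:         # literal dimension: must match exactly
            return False
    return True

print(shape_matches((32, 10, 64, 64, 3), (Batch, Time, 64, 64, 3)))  # True
print(shape_matches((32, 10, 64, 32, 3), (Batch, Time, 64, 64, 3)))  # False
```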
>>>> *Static shape checkers*
>>>> This is where things get interesting. A static shape checker is
>>>> probably what it's going to take for us to be able to say the problem of
>>>> shape checking is 'solved'.
>>>> One bottleneck here is support for variadic generics in existing static
>>>> type checkers. Pyre apparently has prototype support for this in its
>>>> <https://github.com/facebook/pyre-check/blob/master/examples/pytorch/stubs/_…> type
>>>> (see also this PyTorch example
>>>> <https://github.com/facebook/pyre-check/blob/e16cfea51df9466bc84841065fb3149…>) but
>>>> as far as I know neither pytype nor mypy supports this yet.
>>>> In the meantime, Teddy Liu has developed a toy static checker
>>>> <https://github.com/theodoretliu/thesis> for his bachelor's thesis.
>>>> It's written in OCaml, but Teddy reckons it should be portable to Python if needed.
>>>> Outside of the Python world, Adam Paszke has made a static checker for
>>>> Swift for TensorFlow <https://github.com/google-research/swift-tfp>.
>>>> He's also interested in developing something similar for Python.
>>>> *Next steps*
>>>> I think the general direction of next steps should be to continue
>>>> developing a static checker while simultaneously trying to work out the
>>>> details of what a full syntax would look like (based on what we find to
>>>> work well in practice with the static checker).
>>>> In particular, I'm wondering whether it would be worth porting Teddy's
>>>> checker to Python (which I'm assuming more people will be comfortable with
>>>> than OCaml) or whether we should join Adam in developing something from
>>>> scratch. Adam, how are things going on your end?
>>>> I'll be able to work on this full-time for two weeks from the 22nd as
>>>> part of a DeepMind hackathon, during which my plan is to finish the draft
>>>> syntax proposal doc and polish an internal prototype we have for a runtime
>>>> checker based on that syntax - but I'm also open to other ideas, if there
>>>> are other higher-leverage things to do.
>> --Guido van Rossum (python.org/~guido)
>> *Pronouns: he/him **(why is my pronoun here?)
Now that we (currently me and Jukka) are working on modular typeshed, we
think there are two things underspecified in PEP 561 about stub-only
distributions:
* What to do with distributions that have only modules, but not packages?
Since it is not easy to add a data file (like a .pyi) that is not inside a
package, we use "dummy" packages named module_name-stubs with a single
__init__.pyi.
* What to do with packages that have different stub files for Python 2 and
3 (for example six)? Currently we just add two packages to the
distribution, one six-stubs (as specified by PEP 561) and another
Are there any opinions/comments on these two points?
I have some functions that from a typing perspective look like:
def call_in_special_context(f, *args, **kwargs):
# In reality, runs 'f' in a thread or something
return f(*args, **kwargs)
An example in the stdlib would be https://docs.python.org/3/library/contextvars.html#contextvars.Context.run
Of course, right now, there's no good way to add types to this. But with PEP 612, it seems like we could write:
Ps = ParameterSpecification("Ps")
R = TypeVar("R")
def call_in_special_context(f: Callable[Ps, R], *args: Ps.args, **kwargs: Ps.kwargs) -> R:
None of the examples in the PEP actually look like this, though – they all involve referencing Ps.args and Ps.kwargs inside the function/class scope that uses Ps, while here we need to reference them right in the same args list.
So, first question: is the example above something that would be supported under PEP 612? (And if so, a suggestion would be to add some examples like that to the text :-).)
Second question: there's also a convention that shows up in e.g. asyncio, of passing through *args, while reserving kwargs for the wrapper. For example:
def call_soon(self, callback, *args, context=None):
In PEP 612 notation, this method has type:
def call_soon(self, callback: Callable[Ps, Any], *args: Ps.args, context=None):
But I think that this is *not* allowed by PEP 612, because "These operators can only be used together, as the annotated types for *args and **kwargs", and the 'def baz' example. So even with PEP 612, we still can't type loop.call_soon. Is that correct?
Third question: suppose I have a function that captures some kwargs, but passes through unrecognized ones:
def call_in_special_context(f, *args, special_kwarg, **kwargs):
return f(*args, **kwargs)
In this case, you might expect to write:
def call_in_special_context(f: Callable[Ps, R], *args: Ps.args, special_kwarg: SomeType, **kwargs: Ps.kwargs) -> R:
The semantics would be that at call sites for call_in_special_context, the type checker would check that 'special_kwarg' matches the given type, and that all other kwargs are valid for 'f'.
Is this legal/supported?
P.S.: I'm not subscribed to the list; trying out posting through the mailman 3 web interface thing. So if you could CC me on replies that would be helpful :-)
Try to make sense of this table:
PEP 484 rejects the numeric tower (numbers.Number, numbers.Complex, etc.).
The typing module now offers number-related SupportsX protocols which
are runtime checkable, so I assumed some of these protocols could
replace the numeric tower in practice.
This is now more important than before, given the widespread use of
NumPy with its dozens of numeric types.
What is the current best practice for testing numeric types at
runtime, if the numeric tower is problematic?
What use cases prompted the inclusion of the number-related SupportsX
protocols as runtime checked ABCs?
ISSUES WITH PROTOCOLS
I wish I could forget about the numeric tower and use the SupportsX
protocols, but I don't understand some of the results I'm getting with them:
1) issubclass(complex, typing.SupportsFloat) returns True
but float(1+2j) raises TypeError: can't convert complex to float
(the complex class does implement __float__, but I get that TypeError)
2) Same issue as above: issubclass(complex, typing.SupportsInt) is True but
int(1+2j) raises TypeError.
In addition, issubclass(fractions.Fraction, typing.SupportsInt)
returns False, but int(Fraction(7, 2)) works and returns 3.
3) SupportsComplex: issubclass(NT, typing.SupportsInt) is true ONLY for NT in
[numpy.complex64, Decimal, and Fraction], but in fact all the numeric
types from the stdlib and NumPy that I tried can be passed to
complex() with no errors (as the first argument).
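A self-contained illustration of that mismatch, using only the stdlib
(NumPy left out so the snippet runs anywhere): the conversions on
Fraction work fine regardless of what the protocol checks report.

```python
from fractions import Fraction
from typing import SupportsComplex, SupportsInt

f = Fraction(7, 2)

# The conversions themselves work...
print(int(f))       # 3
print(complex(f))   # (3.5+0j)

# ...while the runtime protocol checks only report whether the dunder
# methods happen to exist, which varies by type (and by Python version):
print(isinstance(f, SupportsInt), isinstance(f, SupportsComplex))
```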
THE TABLE AND SCRIPT TO BUILD IT
I wrote a little script to create a table that shows these issues. See
the table and script here if you are interested:
The columns are concrete numeric types from the Python stdlib and NumPy.
The rows represent three kinds of tests:
1) issubclass results against numbers ABCs
Example: issubclass(numpy.float16, numbers.Real)
2) issubclass results against typing protocols
Example: issubclass(numpy.float16, typing.SupportsFloat)
3) application of a built-in to a value built from a concrete type,
using 1 as the argument
Example 1: complex(float(1)) # result: (1+0j) with ComplexWarning
Example 2: float(complex(1)) # no result, TypeError: can't
convert complex to float
Example 3: round(numpy.complex64(1)) # result: (1+0j)
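[Editor's sketch - not the original script, but a minimal version of
how such a table can be generated, with the NumPy columns omitted so
that it runs anywhere:]

```python
import numbers
from decimal import Decimal
from fractions import Fraction
from typing import SupportsComplex, SupportsFloat, SupportsInt

types = [int, float, complex, Fraction, Decimal]
checks = [numbers.Real, numbers.Complex, SupportsInt, SupportsFloat, SupportsComplex]

# Header row: one column per ABC/protocol being tested.
print(" " * 10 + "".join(f"{c.__name__:>17}" for c in checks))
for t in types:
    # Each cell: does the concrete type pass issubclass against the check?
    row = "".join(f"{str(issubclass(t, c)):>17}" for c in checks)
    print(f"{t.__name__:<10}{row}")
```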
| Author of Fluent Python (O'Reilly, 2015)
| Technical Principal at ThoughtWorks
| Twitter: @ramalhoorg
This came up internally - pytype supports `ellipsis` as a global definition
in type stubs, and the poster was wondering how portable it was (seeing as
how it's not directly accessible in python itself). Are other type checkers
okay with `ellipsis` used as a type?
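For context on what is at issue: `ellipsis` is the type of the `...`
singleton, which CPython does not expose under a builtin name, so stubs
that spell it rely on the type checker special-casing the name. A quick
runtime sketch:

```python
import builtins

# The type of `...` exists at runtime, but only under an unexported name:
ellipsis = type(...)
print(ellipsis)                        # <class 'ellipsis'>
print(isinstance(..., ellipsis))       # True

# There is no builtins.ellipsis, which is why stubs need special support:
print(hasattr(builtins, "ellipsis"))   # False
```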
In this passage I quote:
3) SupportsComplex Is issubclass(NT, typing.SupportsInt) is true ONLY f
I think you mean, in the expression, typing.SupportsComplex rather than
typing.SupportsInt.