The Python Steering Council reviewed PEP 647 -- User-Defined Type Guards, and is happy to accept the PEP for Python 3.10. Congratulations Eric!

We have one concern about the semantics of the PEP, however. In a sense, the PEP subverts the meaning of the return type defined in the signature of the type guard to express an attribute of the type guard function. Meaning, type guard functions actually *do* return bools, but this is not reflected in the return type:

"Using this new mechanism, the is_str_list function in the above example would be modified slightly. Its return type would be changed from bool to TypeGuard[List[str]]. This promises not merely that the return value is boolean, but that a true indicates the input to the function was of the specified type."

In fact, the promise that it returns a bool is de facto knowledge you must have when you see "TypeGuard" in the return type. It is an implicit assumption. Generally this might not be a problem; however, when a type guard function is used for multiple purposes (e.g. as both a type guard and a "regular" function), the return type is misleading, since a TypeGuard object is *not* returned. It's unclear what type checkers would do in this case.

The SC debated alternatives, including the decorator syntax specifically mentioned in the Rejected Ideas. We also discussed making TypeGuard a "wrapping" type defining a __bool__(), so that e.g. is_str_list() would be defined as such:

```python
def is_str_list(val: List[object]) -> TypeGuard[List[str]]:
    """Determines whether all objects in the list are strings"""
    return TypeGuard(all(isinstance(x, str) for x in val))
```

but this also isn't quite accurate, and we were concerned that this might be highly inconvenient in practice. In a sense, the type guard-ness of the function is an attribute of the function, not of the parameters or return type, but there is currently no way to express that using Python or type checking syntax.
I am not sure whether you considered and rejected this option, but if so, perhaps you could add some language to the Rejected Ideas about it. Ultimately we couldn’t come up with anything better, so we decided that the PEP as it stands solves the problem in a practical manner, and that this is for the most part a wart that users will just have to learn and internalize. Cheers, -Barry (on behalf of the Python Steering Council)
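For readers following along, the accepted PEP 647 behavior can be sketched as follows. This is a minimal illustration based on the PEP's own is_str_list example; the join_words caller is a hypothetical addition to show the narrowing:

```python
# A minimal sketch of PEP 647's TypeGuard: the declared return type is
# TypeGuard[List[str]], but the runtime value is a plain bool.
from typing import List

try:
    from typing import TypeGuard  # Python 3.10+
except ImportError:
    from typing_extensions import TypeGuard  # backport for older versions

def is_str_list(val: List[object]) -> TypeGuard[List[str]]:
    """Determines whether all objects in the list are strings."""
    return all(isinstance(x, str) for x in val)

def join_words(val: List[object]) -> str:
    if is_str_list(val):
        # A static checker narrows val to List[str] in this branch.
        return " ".join(val)
    return ""

# At runtime nothing wraps the result; it is just a bool:
print(type(is_str_list(["a", "b"])))  # <class 'bool'>
```

This is exactly the "wart" the SC describes: the annotation promises TypeGuard[List[str]], yet what comes back is True or False.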
I don't have any decent proposal at the moment, but I think coming up with a way to annotate side effects of functions (including typeguard-ness) could come in handy. If we anticipate needing that, perhaps it would be beneficial to come up with that feature before implementing this PEP, lest we end up with something that could have benefitted from it but was released just before it.

Though personally I like the PEP and have no qualms about having to learn that TypeGuard is "a bool with a side effect"; I don't think it's a problem in the first place. There are less obvious, more complicated things in Python that I couldn't just intuit at a glance.

On 06/04/2021 22:31, Barry Warsaw wrote:
[...] but this also isn’t quite accurate, and we were concerned that this might be highly inconvenient in practice. In a sense, the type guard-ness of the function is an attribute about the function, not about the parameters or return type, but there is no way to currently express that using Python or type checking syntax.
[...] Cheers, -Barry (on behalf of the Python Steering Council)
On Wed, Apr 7, 2021 at 12:21 AM Federico Salerno <salernof11@gmail.com> wrote:
I don't have any decent proposal at the moment but I think coming up with a way to annotate side-effects of functions (including typeguard-ness) could come in handy. If we anticipate needing that, perhaps it would be beneficial to come up with that feature before implementing this PEP, lest we end up with something that could have benefitted from it but was released just before it.
Though personally I like the PEP and have no qualms about having to learn that TypeGuard is "a bool with a side-effect"; I don't think it's a problem in the first place, there are less obvious, more complicated things in Python that I couldn't just intuit at a glance.
But it isn't a "side effect". It is a distinct concept that is important to the type checker. Note that in TypeScript this also doesn't look like a boolean -- it uses a unique syntax that has to be learned:

```
function isCustomer(partner: any): partner is Customer { . . . }
```

Arguably the TS syntax is more easily intuited without looking it up, but TS has a certain freedom in its syntactic design that we don't have for Python: new *syntax* has to be added to the Python parser and can't be backported, whereas new *types* (like `TypeGuard[T]`) can easily be backported via typing_extensions.py. We have really tried, but we did not come up with anything better than the current PEP.

FWIW you might be interested in Annotated (PEP 593), which can be used to indicate various attributes of a type annotation. Before you suggest that we adopt that instead of PEP 647: we considered that, and the consensus is that that's not what Annotated is for (it's intended for conveying information to tools *other* than the type checker, for example schema checkers etc.).

--
--Guido van Rossum (python.org/~guido)
On Apr 7, 2021, at 12:59, Guido van Rossum <guido@python.org> wrote:
Note that in TypeScript this also doesn't look like a boolean -- it uses a unique syntax that has to be learned:
function isCustomer(partner: any): partner is Customer { . . . }
Arguably the TS syntax is more easily intuited without looking it up, but TS has a certain freedom in its syntactic design that we don't have for Python: new *syntax* has to be added to the Python parser and can't be backported, whereas new *types* (like `TypeGuard[T]`) can easily be backported via typing_extensions.py.
Thanks Guido. Yes, we totally understand. I agree that this TS example is easier to reason about (at least for me), and that Python is limited in what syntax it can allow there. This is something the SC has been musing about, but as it’s not a fully formed idea, I’m a little hesitant to bring it up. That said, it’s somewhat relevant: We wonder if it may be time to in a sense separate the typing syntax from Python’s regular syntax. TypeGuards are a case where if typing had more flexibility to adopt syntax that wasn’t strictly legal “normal” Python, maybe something more intuitive could have been proposed. I wonder if the typing-sig has discussed this possibility (in the future, of course)?
We have really tried, but we did not come up with anything better than the current PEP.
Neither did the SC, thus the acceptance! :D I have no doubt typing-sig really tried hard!
FWIW you might be interested in Annotated (PEP 593), which can be used to indicate various attributes of a type annotation. Before you suggest that we adopt that instead of PEP 647, we considered that, and the consensus is that that's not what Annotated is for (it's intended for conveying information to tools *other* than the type checker, for example schema checkers etc.).
Agreed. It’s interesting that PEP 593 proposes a different approach to enriching the typing system. Typing itself is becoming a little ecosystem of its own, and given that many Python users are still not fully embracing typing, maybe continuing to tie the typing syntax to Python syntax is starting to strain. Cheers, -Barry
On Sun, Apr 11, 2021 at 1:31 PM Barry Warsaw <barry@python.org> wrote: [snip]
This is something the SC has been musing about, but as it’s not a fully formed idea, I’m a little hesitant to bring it up. That said, it’s somewhat relevant: We wonder if it may be time to in a sense separate the typing syntax from Python’s regular syntax. TypeGuards are a case where if typing had more flexibility to adopt syntax that wasn’t strictly legal “normal” Python, maybe something more intuitive could have been proposed. I wonder if the typing-sig has discussed this possibility (in the future, of course)?
We haven't discussed this in typing-sig, but it so happens that a similar idea for JavaScript was mentioned to me recently, and at the time I spent about 5 seconds thinking about how this could be useful for Python, too.

Basically, where the original PEP 3107 proposed annotations to have the syntax of expressions and evaluate them as such, now that we've got PEP 563, which makes annotations available as strings and no longer attempts to evaluate them, we could relax this further and do something like just skipping tokens until a suitable delimiter is found (',' or ')' inside the parameter list, ':' for the return type). Of course, matching parentheses, brackets and braces should always be paired, and the target delimiter should not terminate the scan inside such matched pairs.

It occurs to me that right now is actually a very good time to think about this a little more, because we're at a crossroads of sorts: we could adopt Larry Hastings' PEP 649, which reverses PEP 563 and makes annotations available at runtime as objects (e.g., `def f(x: int)` would have the `int` type object in the annotation instead of the string `"int"`). Or we could reject PEP 649, which leaves the door open for a more relaxed annotation syntax in the future (earliest in 3.11).

At the very least I recommend that the SC take this into account when they consider PEP 649. Accepting it has some nice benefits when it comes to the scoping rules for annotations -- but it would forever close the door for the "relaxed annotation syntax" idea you brought up. (Isn't it fun to be on the SC. :-)

[snip]
Agreed. It’s interesting that PEP 593 proposes a different approach to enriching the typing system. Typing itself is becoming a little ecosystem of its own, and given that many Python users are still not fully embracing typing, maybe continuing to tie the typing syntax to Python syntax is starting to strain.
It definitely is. Type checkers are still young compared to Python itself, and their development speed is much faster than that of Python. So whenever new syntax is required the strain becomes obvious. Thanks for making that observation!
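For concreteness, the token-skipping scan Guido describes could look roughly like the following. This is a hypothetical character-level sketch, not anything from CPython's parser, and `scan_annotation` is an invented name:

```python
# Sketch of the "relaxed annotation syntax" idea: collect annotation text
# up to a top-level delimiter, keeping (), [] and {} pairs matched so the
# delimiter cannot terminate the scan inside nested brackets.
def scan_annotation(src: str, stop: str = ",)") -> str:
    pairs = {"(": ")", "[": "]", "{": "}"}
    stack = []  # expected closing delimiters, innermost last
    for i, ch in enumerate(src):
        if stack:
            if ch == stack[-1]:
                stack.pop()
            elif ch in pairs:
                stack.append(pairs[ch])
        else:
            if ch in stop:
                return src[:i].strip()
            if ch in pairs:
                stack.append(pairs[ch])
    return src.strip()

# A top-level comma ends the scan, but commas inside brackets do not:
print(scan_annotation("Dict[str, int], y: str"))  # Dict[str, int]
# A non-Python spelling would simply be captured as text:
print(scan_annotation("(int) => str, y"))         # (int) => str
```

Under this scheme the annotation would reach tools as an uninterpreted string, which is what makes non-Python spellings like `(int) => str` possible at all.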
I'm in favour of the approach proposed in PEP 649. Movie trailer: "In a world where annotations are arbitrary non-Python syntax..."

It seems to me we could always have annotations evaluate to Python expressions *and* support any arbitrary syntax (e.g. through Annotated[...] or similar mechanism). What would a relaxed inline syntax provide that a well-placed Annotated[type, ArbitraryNonPythonSyntax("...")] annotation wouldn't?

Paul

On Sun, 2021-04-11 at 20:43 -0700, Guido van Rossum wrote:
_______________________________________________ Python-Dev mailing list -- python-dev@python.org To unsubscribe send an email to python-dev-leave@python.org https://mail.python.org/mailman3/lists/python-dev.python.org/ Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/2F5PVC5M... Code of Conduct: http://python.org/psf/codeofconduct/
On Sun, Apr 11, 2021 at 9:41 PM Paul Bryan <pbryan@anode.ca> wrote:
I'm in favour of the approach proposed in PEP 649.
Movie trailer: "In a world where annotations are arbitrary non-Python syntax..."
It seems to me we could always have annotations evaluate to Python expressions *and* support any arbitrary syntax (e.g. through Annotated[...] or similar mechanism). What would a relaxed inline syntax provide that a well-placed Annotated[type, ArbitraryNonPythonSyntax("...")] annotation wouldn't?
I'm not a fan of Annotated -- it's an escape hatch of last resort, not the way to add new syntax in the future. New syntax should enhance usability and readability, and Annotated does neither.
On Sun, Apr 11, 2021 at 1:31 PM Barry Warsaw <barry@python.org> wrote:
[snip]
This is something the SC has been musing about, but as it’s not a fully formed idea, I’m a little hesitant to bring it up. That said, it’s somewhat relevant: We wonder if it may be time to in a sense separate the typing syntax from Python’s regular syntax. TypeGuards are a case where if typing had more flexibility to adopt syntax that wasn’t strictly legal “normal” Python, maybe something more intuitive could have been proposed. I wonder if the typing-sig has discussed this possibility (in the future, of course)?
I am strongly in favor of diverging type annotation syntax from Python syntax. Currently, type annotations are a very useful tool, but often clunky to use. Enhancements have been made, but design space is limited when working within existing Python syntax. Type annotations have a different set of rules, needs, and constraints than general-purpose Python code. This is similar to other domain-specific languages like regular expressions. Ideally, Python itself would not check the syntax of annotations, except as needed for determining the end of an annotation. PEP 563 is a step in that direction.

As far as I understand, the arguments against PEP 563 and in favor of PEP 649 mostly boil down to "annotations are used outside of typing; these uses would need to use eval() in the future, and eval() is slow". (At least from a user's perspective; there are more arguments from a Python maintainer's perspective that I can't comment on.) Are there benchmarks to verify that using eval() has a non-negligible effect for this use case? Overall, I don't find this a compelling argument when compared to the problem that PEP 649 would close all design space for type annotation syntax enhancements.

- Sebastian
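One way to start answering Sebastian's benchmark question is a micro-benchmark comparing a type expression used directly against eval()-ing its string form. This is only a measurement sketch; the absolute numbers are machine-dependent and say nothing about whether the cost matters in a real application:

```python
# Compare evaluating a type expression directly vs. via eval() on its
# string form (roughly the extra work a PEP 563 consumer must do).
import timeit

n = 20_000
direct = timeit.timeit("dict[str, list[int]]", number=n)
evaled = timeit.timeit(
    "eval(ann)", setup="ann = 'dict[str, list[int]]'", number=n
)
print(f"direct: {direct:.3f}s, eval: {evaled:.3f}s for {n} iterations")
```

The eval() path pays for compiling the string on every call, so it is expected to be noticeably slower per annotation; whether that is "non-negligible" depends on how many annotations a program resolves and how often.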
If you look deeper, the real complaints are all about the backwards incompatibility when it comes to locally-scoped types in annotations. I.e.

```python
def test():
    class C: ...
    def func(arg: C): ...
    return func

typing.get_type_hints(test())  # raises NameError: name 'C' is not defined
```

And that is a considerable concern (we've always let backwards compatibility count more strongly than convenience of new features). While it was known this would change, there was no real deprecation of the old way. Alas.

On Fri, Apr 16, 2021 at 1:51 AM Sebastian Rittau <srittau@rittau.biz> wrote:
On 16 Apr 2021, at 16:59, Guido van Rossum wrote:
If you look deeper, the real complaints are all about the backwards incompatibility when it comes to locally-scoped types in annotations. I.e.
```python
def test():
    class C: ...
    def func(arg: C): ...
    return func

typing.get_type_hints(test())  # raises NameError: name 'C' is not defined
```
Can't this be solved by wrapping the annotation in a lambda, i.e.

```python
>>> def test():
...     class C: ...
...     def func(arg: lambda: C): ...
...     return func
...
>>> test().__annotations__['arg']()
<class '__main__.test.<locals>.C'>
```

So `typing.get_type_hints()` would simply call an annotation if the annotation was callable and replace it with the result of the call.
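A toy version of Walter's suggestion might look as follows. `resolve_hints` is an invented helper, not `typing.get_type_hints` itself; note that a naive "call it if it's callable" rule would also call plain class annotations, so the sketch has to exclude types explicitly:

```python
# Sketch of "call callable annotations" resolution, as suggested above.
def resolve_hints(func):
    return {
        # Classes are callable too, so only call non-type callables
        # (e.g. the lambda wrapper) to get the real annotation.
        name: ann() if callable(ann) and not isinstance(ann, type) else ann
        for name, ann in func.__annotations__.items()
    }

def test():
    class C: ...
    def func(arg: lambda: C): ...
    return func

hints = resolve_hints(test())
print(hints["arg"])  # the locally-scoped class C, recovered via the lambda
```

The lambda's closure keeps the local class alive and resolvable even after test() has returned, which is exactly what makes this approach (and PEP 649's "implicit lambda" variant of it) work where string annotations fail.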
Servus,
Walter
On Fri, Apr 16, 2021 at 10:01, Walter Dörwald (<walter@livinglogic.de>) wrote:
Can't this be solved by wrapping the annotation in a lambda, i.e.
```python
>>> def test():
...     class C: ...
...     def func(arg: lambda: C): ...
...     return func
...
>>> test().__annotations__['arg']()
<class '__main__.test.<locals>.C'>
```
So typing.get_type_hints() would simply call an annotation if the annotation was callable and replace it with the result of the call.
That sort of thing can work, but just like string annotations it's not good for usability. Users using annotations will have to remember that in some contexts they need to wrap their annotation in a lambda, and unless they have a good understanding of how type annotations work under the hood, it will feel like a set of arbitrary rules. That's what I like about PEP 649: code like this would (hopefully!) just work without needing users to remember to use any special syntax.
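The breakage being discussed here can be reproduced directly under PEP 563 semantics. A minimal repro (without the future import, the same code resolves fine, since the annotation is evaluated eagerly while C is in scope):

```python
# Under PEP 563, annotations are stored as strings; get_type_hints()
# later evaluates them against the function's globals, where the
# locally-scoped class C no longer exists.
from __future__ import annotations

import typing

def test():
    class C: ...
    def func(arg: C): ...
    return func

try:
    typing.get_type_hints(test())
except NameError as exc:
    print(exc)  # name 'C' is not defined
```

PEP 649's implicit-lambda approach would capture C in a closure instead, so the lookup would succeed without the user writing anything special.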
On 16 Apr 2021, at 19:38, Jelle Zijlstra wrote:
Can't this be solved by wrapping the annotation in a lambda, i.e.
```python
>>> def test():
...     class C: ...
...     def func(arg: lambda: C): ...
...     return func
...
>>> test().__annotations__['arg']()
<class '__main__.test.<locals>.C'>
```
So typing.get_type_hints() would simply call an annotation if the annotation was callable and replace it with the result of the call.
That sort of thing can work, but just like string annotations it's not good for usability.
Yes, but it's close to what PEP 649 does. The PEP even calls it "implicit lambda expressions".
Users using annotations will have to remember that in some contexts they need to wrap their annotation in a lambda, and unless they have a good understanding of how type annotations work under the hood, it will feel like a set of arbitrary rules. That's what I like about PEP 649: code like this would (hopefully!) just work without needing users to remember to use any special syntax.
Yes, that's what I like about PEP 649 too. It just works (in most cases), and for scoping it works like an explicit lambda expression, which is nothing new to learn.

If Python had taken the decision to evaluate default values for arguments not once at definition time, but on every call, I don't think that would have been implemented via re-stringifying the AST for the default value. But then again, the difference between default values and type annotations is that Python *does* use the default values. In most cases, however, Python does not use the type annotations; only the type checker does. The problem is where Python code *does* want to use the type annotation. For this case PEP 649 is the more transparent approach.

Servus,
Walter
On Fri, Apr 16, 2021 at 1:51 AM Sebastian Rittau <srittau@rittau.biz> wrote:
I am strongly in favor of diverging type annotation syntax from Python syntax. Currently, type annotations are a very useful tool, but often clunky to use. Enhancements have been made, but design space is limited when working within existing Python syntax. Type annotations have a different set of rules, needs, and constraints than general-purpose Python code. This is similar to other domain specific languages like regular expressions. Ideally, Python itself would not check the syntax of annotations, except as needed for determining the end of an annotation.
Another example is a discussion a little while back on python-ideas about extending what's allowed inside square brackets. It started with a use case for type specification. It turned out that there were other use cases, more tightly tied to the original meaning of __getitem__. Nevertheless, it struck me at the time that it would be nice if the typing use case could be addressed without the complication of making something that made sense in two very different domains.

- Chris

--
Christopher Barker, PhD (Chris)
Python Language Consulting - Teaching - Scientific Software Development - Desktop GUI and Web Development - wxPython, numpy, scipy, Cython
On Mon, 12 Apr 2021, 1:48 pm Guido van Rossum, <guido@python.org> wrote:
At the very least I recommend that the SC take this into account when they consider PEP 649. Accepting it has some nice benefits when it comes to the scoping rules for annotations -- but it would forever close the door for the "relaxed annotation syntax" idea you brought up. (Isn't it fun to be on the SC. :-)
I may have missed someone else mentioning this, but I don't think this concern is necessarily true, as even if PEP 649 were accepted, the only pre-PEP-563 constraints it would reintroduce would be that all future type annotation syntax:

* have a defined runtime effect;
* that runtime effect be consistent with normal expressions when reusing existing syntax; and
* be explicitly quoted when using type hinting syntax from later Python versions in code that needs to run on earlier versions.

Any PEPs adding new type-hinting-specific syntax would be free to define the runtime effect of the new syntax as "produces a string containing the text of the part of the annotation using the new syntax, as if the new syntax were explicitly quoted", even if we decided not to go ahead with the idea of applying those "produces a string" semantics to *all* annotations.

Cheers,
Nick.
Hm, I was specifically thinking of things that introduce new keywords. For example, TypeScript adds unary operators 'infer' and 'keyof'. It would be rather difficult to have to define those as soft keywords throughout the language. (We couldn't just make them unary keywords, since 'infer (x)' should continue to call the function 'infer', for example. In an annotation context that might not be a problem, since function calls in general aren't valid types.)

IIRC Jukka also already brought up the possibility of using something like '(int) => str' instead of 'Callable[[int], str]' -- but it would be unpleasant if that syntax had a meaning like you propose outside annotations.

On Sat, Apr 17, 2021 at 7:12 PM Nick Coghlan <ncoghlan@gmail.com> wrote:
What about creating a new syntax for annotating metadata? For example, `type_hint :: metadata` could be equivalent to `Annotated[type_hint, "metadata"]`, and if we wanted type guards to look like TypeScript they could look like this:

```
def is_str_list(val: List[object]) -> bool :: is List[str]:
```

This might not be the best syntax, it's just one idea, but my point is that achieving all of these goals simultaneously seems quite doable:

- Showing the actual return type
- Showing metadata
- Putting arbitrary non-Python syntax in metadata
- Retrieving the type part of the annotation at runtime as an actual type value, not a string
- Retrieving the metadata at runtime as a string
- Avoiding the cumbersome `Annotated[]` syntax

In addition, if someone wants annotations only for other metadata and not for types at all [1], then it's easy to just omit the type part, e.g.

```
def foo(bar :: bar metadata) :: foo metadata:
```

Again, the `::` may be a bad way to do this, but the same principle of non-type, metadata-only annotations could probably be applied to other similar syntax proposals. I'm sure I'm not the first to suggest something like this, but I couldn't see anything in PEP 593. I was particularly expecting something like "New syntax" as a section under "Rejected ideas". The closest thing is "Using (Type, Ann1, Ann2, ...) instead of Annotated[Type, Ann1, Ann2, ...]", which felt a bit weak.

[1] e.g. as I have done in https://github.com/alexmojaki/friendly_states which looks lovely but completely breaks mypy
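For comparison, here is what the proposed `bool :: is List[str]` would have to be spelled as today, using PEP 593's Annotated. This is the very verbosity the proposal wants to avoid, but it already delivers several of the listed goals (real return type at runtime, string metadata at runtime):

```python
# Current-Python spelling of "bool with metadata" via PEP 593's Annotated.
from typing import Annotated, List, get_type_hints

def is_str_list(val: List[object]) -> Annotated[bool, "is List[str]"]:
    return all(isinstance(x, str) for x in val)

# include_extras=True keeps the Annotated wrapper instead of stripping
# it down to the bare bool.
hints = get_type_hints(is_str_list, include_extras=True)
ret = hints["return"]
print(ret.__metadata__)  # ('is List[str]',)
```

What it cannot do, of course, is carry arbitrary non-Python syntax inline; the metadata has to be a valid Python expression (here, a string literal), which is the gap the `::` idea is trying to fill.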
Hi Barry,

Thanks for the note. Apologies for the slow reply — your email got trapped in Microsoft’s spam filters, and I just noticed it.

The idea of using a wrapper type was my first thought too. In fact, I implemented that solution in prototype form. It was disliked by almost everyone who tried to use the feature. The wrapper approach also got a negative reaction on the typing-sig when I posted the initial proto-spec. A wrapper prevents some common use cases (e.g. filter functions) and was found to be cumbersome and confusing.

I understand your concern about the fact that type guards return bools but this is not reflected in the return type. This was debated at length in the typing-sig, and we considered various alternatives. In the end, we weren’t able to come up with anything better. I’m somewhat comforted by the fact that TypeScript’s formulation of this feature (which was the inspiration for the idea and is generally a well-liked feature in that language) also does not directly mention “boolean” in its return type annotation. Here’s an example of the syntax in TypeScript:

```
function isNone(type: Type): type is NoneType {
    return type.category === TypeCategory.None;
}
```

-Eric
I propose that we just clarify this in the docs we'll write for TypeGuard.
On 4/10/2021 1:02 PM, Guido van Rossum wrote:
I propose that we just clarify this in the docs we'll write for TypeGuard.
I agree. When I reviewed the PEP, my concern was not with 'TypeGuard' itself, once I understood more or less what it means, but with the explanation in the PEP.

-- Terry Jan Reedy
participants (12)

- Alex Hall
- Barry Warsaw
- Christopher Barker
- Eric Traut
- Federico Salerno
- Guido van Rossum
- Jelle Zijlstra
- Nick Coghlan
- Paul Bryan
- Sebastian Rittau
- Terry Reedy
- Walter Dörwald