PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2
Attached is my second draft of PEP 649. The PEP and the prototype have both seen a marked improvement since round 1 in January; PEP 649 now allows annotations to refer to any variable they could see under stock semantics:

* Local variables in the current function scope or in enclosing function scopes become closures and use LOAD_DEFER.
* Class variables in the current class scope are made available using a new mechanism, in which the class dict is attached to the bound annotation function, then loaded into f_locals when the annotation function is run, thus permitting LOAD_NAME opcodes to function normally.

I look forward to your comments,

//arry/
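(For readers who want to see the two bullets above in source form, here is a small sketch of the kind of code that now works; it is illustrative only, the names are invented, and it assumes Larry's co_annotations prototype rather than any released Python:)

```
from __future__ import co_annotations  # per-module gate proposed by the PEP


def outer():
    MyAlias = int            # a local in the enclosing function scope

    def inner(x: MyAlias):   # under PEP 649 this becomes a closure reference (LOAD_DEFER)
        return x

    return inner


class Config:
    Key = str                # a class variable in the current class scope

    # The class dict is attached to the bound annotation function, so the
    # LOAD_NAME of "Key" still resolves when the annotation is evaluated lazily.
    name: Key


print(outer().__annotations__)   # expected: {'x': <class 'int'>}
print(Config.__annotations__)    # expected: {'name': <class 'str'>}
```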
I like! I really appreciate the work you've put into this to get it this far. Questions and comments:
PEP 563 also requires using ``eval()`` or ``typing.get_type_hints()`` to examine annotations. Code updated to work with PEP 563 that calls ``eval()`` directly would have to be updated simply to remove the ``eval()`` call. Code using ``typing.get_type_hints()`` would continue to work unchanged, though future use of that function would become optional in most cases.
I think it is worth noting somewhere that string annotations are still valid, and should still be evaluated if so.
Because this PEP makes semantic changes to how annotations are evaluated, this PEP will be initially gated with a per-module ``from __future__ import co_annotations`` before it eventually becomes the default behavior.
Is it safe to assume that a module that does not import co_annotations, but imports a module that does, will exhibit PEP 649 behavior when the former accesses an annotation defined in the latter?
* *Code that sets annotations on module or class attributes from inside any kind of flow control statement.* It's currently possible to set module and class attributes with annotations inside an ``if`` or ``try`` statement, and it works as one would expect. It's untenable to support this behavior when this PEP is active.
Is the following an example of the above?

@dataclass
class Foo:
    if some_condition:
        x: int
    else:
        x: float

If so, would the following still be valid?

if some_condition:
    type_ = int
else:
    type_ = float

@dataclass
class Foo:
    x: type_
* *Code in module or class scope that references or modifies the local* ``__annotations__`` *dict directly.* Currently, when setting annotations on module or class attributes, the generated code simply creates a local ``__annotations__`` dict, then sets mappings in it as needed. It's also possible for user code to directly modify this dict, though this doesn't seem like it's an intentional feature. Although it would be possible to support this after a fashion when this PEP was active, the semantics would likely be surprising and wouldn't make anyone happy.
I recognize the point you make later about its impact on static type checkers. Setting that aside, I'm wondering about cases where annotations can be dynamically generated, such as dataclasses.make_dataclass(...). And I could see reasons for overwriting values in __annotations__, especially in the case where it may be stored as a string and one wants to later affix its evaluated value. These are considerations specific to runtime (dynamic) type checking.

I wonder if it would make sense for each item in __annotations__ to be evaluated separately on first access of each key, rather than all __annotations__ on first access to the dict. Basically the dict would act as a LazyDict. It could also provide the benefit of lessening the expense of evaluating complex but otherwise unused annotations.

Paul

On Sun, 2021-04-11 at 18:55 -0700, Larry Hastings wrote:
Attached is my second draft of PEP 649. The PEP and the prototype have both seen a marked improvement since round 1 in January; PEP 649 now allows annotations to refer to any variable they could see under stock semantics:

* Local variables in the current function scope or in enclosing function scopes become closures and use LOAD_DEFER.
* Class variables in the current class scope are made available using a new mechanism, in which the class dict is attached to the bound annotation function, then loaded into f_locals when the annotation function is run, thus permitting LOAD_NAME opcodes to function normally.
I look forward to your comments,
/arry
On 4/11/21 7:55 PM, Paul Bryan wrote:
PEP 563 also requires using ``eval()`` or ``typing.get_type_hints()`` to examine annotations. Code updated to work with PEP 563 that calls ``eval()`` directly would have to be updated simply to remove the ``eval()`` call. Code using ``typing.get_type_hints()`` would continue to work unchanged, though future use of that function would become optional in most cases.
I think it is worth noting somewhere that string annotations are still valid, and should still be evaluated if so.
That's not up to me, it's up to the static type checkers who created that idiom. But I assume they'll continue to support stringized annotations, whether manually or automatically created.
Because this PEP makes semantic changes to how annotations are evaluated, this PEP will be initially gated with a per-module ``from __future__ import co_annotations`` before it eventually becomes the default behavior.
Is it safe to assume that a module that does not import co_annotations, but imports a module that does, will exhibit PEP 649 behavior when the former accesses an annotation defined in the latter?
Yes.
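(To make the question concrete, a hedged two-module sketch with invented names; since the annotation code is compiled by the module that opted in, the importing module sees the deferred behavior regardless of its own __future__ imports:)

```
# lib.py -- opts in to the new semantics
from __future__ import co_annotations

def greet(name: str) -> str:
    return f"hello {name}"

# app.py -- no __future__ import here
import lib

# The annotation code objects were created by lib.py, so accessing them from
# this module still uses PEP 649's deferred evaluation.
print(lib.greet.__annotations__)   # {'name': <class 'str'>, 'return': <class 'str'>}
```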
* *Code that sets annotations on module or class attributes from inside any kind of flow control statement.* It's currently possible to set module and class attributes with annotations inside an ``if`` or ``try`` statement, and it works as one would expect. It's untenable to support this behavior when this PEP is active.
Is the following an example of the above?
@dataclass
class Foo:
    if some_condition:
        x: int
    else:
        x: float

If so, would the following still be valid?
if some_condition:
    type_ = int
else:
    type_ = float

@dataclass
class Foo:
    x: type_
Your example was valid, and I think your workaround should be fine. Do you have a use case for this, or is this question motivated purely by curiosity?

* *Code in module or class scope that references or modifies the local* ``__annotations__`` *dict directly.* Currently, when setting annotations on module or class attributes, the generated code simply creates a local ``__annotations__`` dict, then sets mappings in it as needed. It's also possible for user code to directly modify this dict, though this doesn't seem like it's an intentional feature. Although it would be possible to support this after a fashion when this PEP was active, the semantics would likely be surprising and wouldn't make anyone happy.

I recognize the point you make later about its impact on static type checkers. Setting that aside, I'm wondering about cases where annotations can be dynamically generated, such as dataclasses.make_dataclass(...). And I could see reasons for overwriting values in __annotations__, especially in the case where it may be stored as a string and one wants to later affix its evaluated value. These are considerations specific to runtime (dynamic) type checking.

It's fine to modify the __annotations__ dict after the creation of the class or module. It's code that modifies "__annotations__" from within the class or module that is disallowed here. Similarly for dataclasses; once it creates a class object, it can explicitly set and/or modify the annotations dict on that class.
I wonder if it would make sense for each item in __annotations__ to be evaluated separately on first access /of each key/, rather than all __annotations__ on first access to the dict. Basically the dict would act as a LazyDict. It could also provide the benefit of lessening the expense of evaluating complex but otherwise unused annotations.
This would cause an immense proliferation of code objects (with some pre-bound to function objects). Rather than one code object per annotation dict, it would create one code object per annotation key. Also, we don't have a "lazy dict" object built in to Python, so we'd have to create one.

I don't have any problems that this would solve, so I'm not super interested in it. Personally I'd want to see a real compelling use case for this feature before I'd consider adding it to Python. Of course, I'm not on the steering committee, so my opinion is only worth so much.

//arry/
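(For anyone trying to picture the suggestion, a minimal, purely illustrative sketch of a per-key lazy mapping; nothing like this exists in the PEP or the prototype:)

```
from collections.abc import Mapping


class LazyAnnotations(Mapping):
    """Illustrative only: evaluate each annotation the first time its key is read."""

    def __init__(self, thunks):
        # thunks maps each key to a zero-argument callable producing its value
        self._thunks = dict(thunks)
        self._cache = {}

    def __getitem__(self, key):
        if key not in self._cache:
            self._cache[key] = self._thunks[key]()
        return self._cache[key]

    def __iter__(self):
        return iter(self._thunks)

    def __len__(self):
        return len(self._thunks)


# Hypothetical usage: each value is computed only when first accessed.
ann = LazyAnnotations({"x": lambda: int, "y": lambda: list[str]})
print(ann["x"])   # <class 'int'>; "y" has not been evaluated yet
```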
On Sun, 2021-04-11 at 23:34 -0700, Larry Hastings wrote:
Your example was valid, and I think your workaround should be fine. Do you have a use case for this, or is this question motivated purely by curiosity?
It took a few readings for me to understand the limitations in the PEP. My example and workaround were mostly for me to confirm I had read it correctly.
It's fine to modify the __annotations__ dict after the creation of the class or module. It's code that modifies "__annotations__" from within the class or module that is disallowed here. Similarly for dataclasses; once it creates a class object, it can explicitly set and / or modify the annotations dict on that class.
Thanks. I think this clarification should be added to the PEP. Paul
I'm a big fan of this PEP, for many reasons. But the fact that it addresses some of the issues with get_type_hints() is very important. dataclasses avoids calling get_type_hints() for performance reasons and because it doesn't always succeed, see https://github.com/python/typing/issues/508. I believe this issue is fixed by PEP 649.

On 4/12/2021 2:34 AM, Larry Hastings wrote:
On 4/11/21 7:55 PM, Paul Bryan wrote:
I recognize the point you make later about its impact on static type checkers. Setting that aside, I'm wondering about cases where annotations can be dynamically generated, such as dataclasses.make_dataclass(...). And I could see reasons for overwriting values in __annotations__, especially in the case where it may be stored as a string and one wants to later affix its evaluated value. These are considerations specific to runtime (dynamic) type checking.

It's fine to modify the __annotations__ dict after the creation of the class or module. It's code that modifies "__annotations__" from within the class or module that is disallowed here. Similarly for dataclasses; once it creates a class object, it can explicitly set and/or modify the annotations dict on that class.
There won't be any direct impact to make_dataclass(). It doesn't do anything tricky here: it just builds up the annotations dictionary and passes it as __annotations__ to the class namespace in types.new_class(). After creating the class, it just applies the normal dataclass() decorator.

Eric
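(For reference, a plain make_dataclass() call of the sort being discussed -- this is ordinary standard-library usage, nothing PEP-specific:)

```
from dataclasses import make_dataclass, fields

# Annotations supplied dynamically as (name, type) pairs.
Point = make_dataclass("Point", [("x", int), ("y", int)])

print(Point.__annotations__)            # {'x': <class 'int'>, 'y': <class 'int'>}
print([f.name for f in fields(Point)])  # ['x', 'y']
```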
I still prefer PEP 563. I will describe what we lost if PEP 597 is accepted and PEP 563 is rejected.

### Types not accessible in runtime

First of all, PEP 563 solves not only forward references. Note that PEP 563 says: "we'll call any name imported or defined within a `if TYPE_CHECKING: block` a forward reference, too." https://www.python.org/dev/peps/pep-0563/#forward-references

PEP 563 solves all problems relating to types not accessible in runtime. There are many reasons users can not get types used in annotations at runtime:

* To avoid circular import
* Types defined only in pyi files
* Optional dependency that is slow to import or hard to install

This is the most clear point where PEP 563 is better for some users. See this example:

```
from dataclasses import dataclass

if 0:
    from fakemod import FakeType

@dataclass
class C:
    a : FakeType = 0
```

This works on PEP 563 semantics (Python 3.10a7). User can get stringified annotation.

With stock semantics, it cause NameError when importing so author can notice they need to quote "FakeType".

With PEP 649 semantics, author may not notice this annotation cause error. User can not get any type hints at runtime.

### Type alias

Another PEP 563 benefit is user can see simple type alias. Consider this example.

```
from typing import *

AliasType = Union[List[Dict[Tuple[int, str], Set[int]]], Tuple[str, List[str]]]

def f() -> AliasType:
    pass

help(f)
```

Currently, help() calls `typing.get_type_hints()`. So it shows:

```
f() -> Union[List[Dict[Tuple[int, str], Set[int]]], Tuple[str, List[str]]]
```

But with PEP 563 semantics, we can stop evaluating annotations and user can see more readable alias type.

```
f() -> AliasType
```

As PEP 597 says, eval() is slow. But it can avoidable in many cases with PEP 563 semantics. I am not sure but I expect dataclass can avoid eval() too in PEP 563 semantics.

Sphinx uses this feature already. See https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#confval-a...

### Relaxing annotation syntax

As discussed in PEP 647 thread, we can consider having different syntax for annotation with PEP 597 semantics.

Regards,

On Mon, Apr 12, 2021 at 10:58 AM Larry Hastings <larry@hastings.org> wrote:
Attached is my second draft of PEP 649. The PEP and the prototype have both seen a marked improvement since round 1 in January; PEP 649 now allows annotations to refer to any variable they could see under stock semantics:
* Local variables in the current function scope or in enclosing function scopes become closures and use LOAD_DEFER.
* Class variables in the current class scope are made available using a new mechanism, in which the class dict is attached to the bound annotation function, then loaded into f_locals when the annotation function is run, thus permitting LOAD_NAME opcodes to function normally.
I look forward to your comments,
/arry
-- Inada Naoki <songofacandy@gmail.com>
On Tue, Apr 13, 2021 at 8:58 AM Larry Hastings <larry@hastings.org> wrote:
On 4/12/21 4:50 PM, Inada Naoki wrote:
As PEP 597 says, eval() is slow. But it can avoidable in many cases with PEP 563 semantics.
PEP 597 is "Add optional EncodingWarning". You said PEP 597 in one other place too. Did you mean PEP 649 in both places?
You're right. I meant PEP 649 vs PEP 563. I'm sorry.
Cheers,
/arry
-- Inada Naoki <songofacandy@gmail.com>
On 4/12/21 4:50 PM, Inada Naoki wrote:
PEP 563 solves all problems relating to types not accessible in runtime. There are many reasons users can not get types used in annotations at runtime:
* To avoid circular import
* Types defined only in pyi files
* Optional dependency that is slow to import or hard to install
It only "solves" these problems if you leave the annotation as a string. If PEP 563 is active, but you then use typing.get_type_hints() to examine the actual Python value of the annotation, all of these examples will fail with a NameError. So, in this case, "solves the problem" is a positive way of saying "hides a runtime error". I don't know what the use cases are for examining type hints at runtime, so I can't speak as to how convenient or inconvenient it is to deal with them strictly as strings. But it seems to me that examining annotations as their actual Python values would be preferable.
This is the most clear point where PEP 563 is better for some users. See this example:
```
from dataclasses import dataclass

if 0:
    from fakemod import FakeType

@dataclass
class C:
    a : FakeType = 0
```
This works on PEP 563 semantics (Python 3.10a7). User can get stringified annotation.
With stock semantics, it cause NameError when importing so author can notice they need to quote "FakeType".
With PEP 649 semantics, author may not notice this annotation cause error. User can not get any type hints at runtime.
Again, by "works on PEP 563 semantics", you mean "doesn't raise an error". But the code /has/ an error. It's just that it has been hidden by PEP 563 semantics. I don't agree that changing Python to automatically hide errors is an improvement. As the Zen says: "Errors should never pass silently." This is really the heart of the debate over PEP 649 vs PEP 563. If you examine an annotation, and it references an undefined symbol, should that throw an error? There is definitely a contingent of people who say "no, that's inconvenient for us". I think it should raise an error. Again from the Zen: "Special cases aren't special enough to break the rules." Annotations are expressions, and if evaluating an expression fails because of an undefined name, it should raise a NameError.
### Type alias
Another PEP 563 benefit is user can see simple type alias. Consider this example.
```
from typing import *

AliasType = Union[List[Dict[Tuple[int, str], Set[int]]], Tuple[str, List[str]]]

def f() -> AliasType:
    pass

help(f)
```
Currently, help() calls `typing.get_type_hints()`. So it shows:
```
f() -> Union[List[Dict[Tuple[int, str], Set[int]]], Tuple[str, List[str]]]
```
But with PEP 563 semantics, we can stop evaluating annotations and user can see more readable alias type.
```
f() -> AliasType
```
It's a matter of personal opinion whether "AliasType" or the full definition is better here. And it could lead to ambiguity, if the programmer assigns to "AliasType" more than once. If the programmer has a strong opinion that "AliasType" is better, they could use an annotation of 'AliasType'--in quotes. Although I haven't seen the topic discussed specifically, I assume that the static typing analysis tools will continue to support manually stringized annotations even if PEP 649 is accepted.

Either way, this hypothetical feature might be "nice-to-have", but I don't think it's very important. I would certainly forego this behavior in favor of accepting PEP 649.

Cheers,

//arry/
On Tue, Apr 13, 2021 at 9:57 AM Larry Hastings <larry@hastings.org> wrote:
On 4/12/21 4:50 PM, Inada Naoki wrote:
PEP 563 solves all problems relating to types not accessible in runtime. There are many reasons users can not get types used in annotations at runtime:
* To avoid circular import
* Types defined only in pyi files
* Optional dependency that is slow to import or hard to install
It only "solves" these problems if you leave the annotation as a string. If PEP 563 is active, but you then use typing.get_type_hints() to examine the actual Python value of the annotation, all of these examples will fail with a NameError. So, in this case, "solves the problem" is a positive way of saying "hides a runtime error".
Of course, "get type which is unavailable in runtime" is unsolvable problem. PEP 597 doesn't solve it too. Author needs to quote the hint manually, and `typing.get_type_hints()` raises NameError too. And if author forget to quote, user can not get any type hints.
I don't know what the use cases are for examining type hints at runtime, so I can't speak as to how convenient or inconvenient it is to deal with them strictly as strings. But it seems to me that examining annotations as their actual Python values would be preferable.
These are use cases for examining type hints at runtime where stringified hints are OK:

* Sphinx autodoc
* help()
* IPython and other REPLs showing type hints in a popup.
```
from dataclasses import dataclass

if 0:
    from fakemod import FakeType

@dataclass
class C:
    a : FakeType = 0
```
This works on PEP 563 semantics (Python 3.10a7). User can get stringified annotation.
With stock semantics, it cause NameError when importing so author can notice they need to quote "FakeType".
With PEP 649 semantics, author may not notice this annotation cause error. User can not get any type hints at runtime.
Again, by "works on PEP 563 semantics", you mean "doesn't raise an error". But the code has an error. It's just that it has been hidden by PEP 563 semantics.
I don't agree that changing Python to automatically hide errors is an improvement. As the Zen says: "Errors should never pass silently."
This is really the heart of the debate over PEP 649 vs PEP 563. If you examine an annotation, and it references an undefined symbol, should that throw an error? There is definitely a contingent of people who say "no, that's inconvenient for us". I think it should raise an error. Again from the Zen: "Special cases aren't special enough to break the rules." Annotations are expressions, and if evaluating an expression fails because of an undefined name, it should raise a NameError.
I agree that this is the heart of the debate. I think "annotations are for type hints". They are for:

* Static type checkers
* documentation.

So I don't think the `if TYPE_CHECKING` idiom is violating the Python Zen.

Regards,

-- Inada Naoki <songofacandy@gmail.com>
On Tue, 2021-04-13 at 10:47 +0900, Inada Naoki wrote:
On Tue, Apr 13, 2021 at 9:57 AM Larry Hastings <larry@hastings.org> wrote:
This is really the heart of the debate over PEP 649 vs PEP 563. If you examine an annotation, and it references an undefined symbol, should that throw an error? There is definitely a contingent of people who say "no, that's inconvenient for us". I think it should raise an error. Again from the Zen: "Special cases aren't special enough to break the rules." Annotations are expressions, and if evaluating an expression fails because of an undefined name, it should raise a NameError.
I agree that this is the heart of the debate. I think "annotations are for type hints". They are for:
* Static type checkers
* documentation.
+ dynamic type validation, encoding and decoding (Pydantic, FastAPI, Fondat, et al.)

Paul
On Tue, Apr 13, 2021 at 11:18 AM Paul Bryan <pbryan@anode.ca> wrote:
On Tue, 2021-04-13 at 10:47 +0900, Inada Naoki wrote:
On Tue, Apr 13, 2021 at 9:57 AM Larry Hastings <larry@hastings.org> wrote:
This is really the heart of the debate over PEP 649 vs PEP 563. If you examine an annotation, and it references an undefined symbol, should that throw an error? There is definitely a contingent of people who say "no, that's inconvenient for us". I think it should raise an error. Again from the Zen: "Special cases aren't special enough to break the rules." Annotations are expressions, and if evaluating an expression fails because of an undefined name, it should raise a NameError.
I agree that this is the heart of the debate. I think "annotations are for type hints". They are for:
* Static type checkers
* documentation.
+ dynamic type validation, encoding and decoding (Pydantic, FastAPI, Fondat, et al.)
Paul
OK. It is an important use case too.

Such use cases don't justify raising NameError instead of getting stringified type hints for document use cases, though.

On the other hand, if "dynamic type" is used heavily, eval() performance can be a problem.

-- Inada Naoki <songofacandy@gmail.com>
On Tue, 2021-04-13 at 11:33 +0900, Inada Naoki wrote:
On Tue, Apr 13, 2021 at 11:18 AM Paul Bryan <pbryan@anode.ca> wrote:
On Tue, 2021-04-13 at 10:47 +0900, Inada Naoki wrote:
On Tue, Apr 13, 2021 at 9:57 AM Larry Hastings <larry@hastings.org> wrote:
This is really the heart of the debate over PEP 649 vs PEP 563. If you examine an annotation, and it references an undefined symbol, should that throw an error? There is definitely a contingent of people who say "no, that's inconvenient for us". I think it should raise an error. Again from the Zen: "Special cases aren't special enough to break the rules." Annotations are expressions, and if evaluating an expression fails because of an undefined name, it should raise a NameError.
I agree that this is the heart of the debate. I think "annotations are for type hints". They are for:
* Static type checkers
* documentation.
+ dynamic type validation, encoding and decoding (Pydantic, FastAPI, Fondat, et al.)
Paul
OK. It is an important use case too.
Such use cases don't justify raising NameError instead of getting stringified type hints for document use cases, though.
On the other hand, if "dynamic type" is used heavily, eval() performance can be a problem.
In 3.9 this cost is paid once when a type is defined. However, in 3.10, it gets expensive, because when the string is evaluated by get_type_hints, its result is not stored/cached anywhere (repeated calls to get_type_hints result in repeated evaluation). As a workaround, I have code to "affix" the evaluated expression in the __annotations__ value. PEP 649 would resolve this and eliminate the need for such a hack.

Paul
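(A hedged sketch of the kind of workaround being described -- resolve the stringized hints once and write them back into __annotations__ so later lookups see real objects; the helper name follows Paul's "affix" wording and is otherwise invented:)

```
import typing


def affix_type_hints(obj):
    """Resolve string annotations once and store the results back on the object."""
    hints = typing.get_type_hints(obj)  # evaluates the stringized annotations
    obj.__annotations__ = hints         # later lookups see real objects, not strings
    return obj


@affix_type_hints
def scale(value: "int", factor: "float") -> "float":
    return value * factor


print(scale.__annotations__)
# {'value': <class 'int'>, 'factor': <class 'float'>, 'return': <class 'float'>}
```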
On Mon, Apr 12, 2021 at 7:47 PM Paul Bryan <pbryan@anode.ca> wrote:
In 3.9 this cost is paid once when a type is defined. However, in 3.10, it gets expensive, because when the string is evaluated by get_type_hints, its result is not stored/cached anywhere (repeated calls to get_type_hints result in repeated evaluation). As a workaround, I have code to "affix" the evaluated expression in the __annotations__ value. PEP 649 would resolve this and eliminate the need for such a hack.
Why not submit a PR that adds caching to get_type_hints(), rather than promote a paradigm shift? -- --Guido van Rossum (python.org/~guido) *Pronouns: he/him **(why is my pronoun here?)* <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
On Mon, 2021-04-12 at 19:52 -0700, Guido van Rossum wrote:
Why not submit a PR that adds caching to get_type_hints(), rather than promote a paradigm shift?
A couple of reasons:

1. In reviewing the code, I didn't find an obvious way to store cached values. Anything but a non-trivial change would suggest the need for a PEP of its own to document new behavior.
2. I've been hoping that PEP 649 would be adopted, making such a hack or any plan to cache type hints moot.

Paul
Hi Larry,

On 4/12/21, 6:57 PM, "Larry Hastings" <larry@midwinter.com on behalf of larry@hastings.org> wrote:

Again, by "works on PEP 563 semantics", you mean "doesn't raise an error". But the code has an error. It's just that it has been hidden by PEP 563 semantics. I don't agree that changing Python to automatically hide errors is an improvement. As the Zen says: "Errors should never pass silently." This is really the heart of the debate over PEP 649 vs PEP 563. If you examine an annotation, and it references an undefined symbol, should that throw an error? There is definitely a contingent of people who say "no, that's inconvenient for us". I think it should raise an error. Again from the Zen: "Special cases aren't special enough to break the rules." Annotations are expressions, and if evaluating an expression fails because of an undefined name, it should raise a NameError.

Normally in Python, if you reference a symbol in a function definition line, the symbol must be defined at that point in module execution. Forward references are not permitted, and will raise `NameError`. And yet you have implemented PEP 649, whose entire raison d'être is to implement a "special case" to "break the rules" by delaying evaluation of annotations such that a type annotation, unlike any other expression in the function definition line, may include forward reference names which will not be defined until later in the module.

The use case for `if TYPE_CHECKING` imports is effectively the same. They are just forward references to names in other modules which can't be imported eagerly, because e.g. it would cause a cycle. Those who have used type annotations in earnest are likely to confirm that such inter-module forward references are just as necessary as intra-module forward references for the usability of type annotations.

So it doesn't seem that what we have here is a firm stand on principle of the Zen; it appears to rather be a disagreement about exactly where to draw the line on the "special case" that we all already seem to agree is needed. The Zen argument seems to be a bit of a circular one: I have defined PEP 649 semantics in precisely this way, therefore code that works with PEP 649 does not have an error, and code that does not work with PEP 649 "has an error" which must be surfaced!

With PEP 563, although `get_type_hints()` cannot natively resolve inter-module forward references and raises `NameError`, it is possible to work around this by supplying a globals dict to `get_type_hints()` that has been augmented with those forward-referenced names. Under the current version of PEP 649, it becomes impossible to get access to such type annotations at runtime at all, without reverting to manually stringifying the annotation and then using something like `get_type_hints()`.

So for users of type annotations who need `if TYPE_CHECKING` (which I think is most users of type annotations), the best-case overall effect of PEP 649 will be that a) some type annotations have to go back to being ugly strings in the source, and b) if type annotation values are needed at runtime, `get_type_hints()` will still be as necessary as it ever was.

It is possible for PEP 649 to draw the line differently and support both intra-module and inter-module forward references in annotations, by doing something like https://github.com/larryhastings/co_annotations/pull/3 and replacing unknown names with forward-reference markers, so the annotation values are still accessible at runtime.
This meets the real needs of users of type annotations better, and gives up none of the benefits of PEP 649.

Carl
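(A hedged sketch of the PEP 563-era workaround Carl mentions -- supplying get_type_hints() with a globals dict augmented with the forward-referenced name. Here decimal merely stands in for a module that cannot be imported eagerly, e.g. because of an import cycle:)

```
from __future__ import annotations  # PEP 563: annotations are stored as strings
from typing import TYPE_CHECKING, get_type_hints

if TYPE_CHECKING:
    from decimal import Decimal  # pretend this import is impossible at module import time

def total(amount: Decimal) -> Decimal:
    return amount

# get_type_hints(total) by itself would raise NameError: name 'Decimal' is not defined.
# The workaround: augment the globals passed to get_type_hints() with the missing name,
# obtained however is convenient at call time.
import decimal
hints = get_type_hints(total, globalns={**globals(), "Decimal": decimal.Decimal})
print(hints)  # {'amount': <class 'decimal.Decimal'>, 'return': <class 'decimal.Decimal'>}
```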
I've been thinking about this a bit, and I think that the way forward is for Python to ignore the text of annotations ("relaxed annotation syntax"), not to try and make it available as an expression.

To be honest, the most pressing issue with annotations is the clumsy way that type variables have to be introduced. The current convention, `T = TypeVar('T')`, is both verbose (why do I have to repeat the name?) and widely misunderstood (many help request for mypy and pyright follow from users making a mistaken association between two type variables that are unrelated but share the same TypeVar definition). And relaxed annotation syntax alone doesn't solve this.

Nevertheless I think that it's time to accept that annotations are for types -- the intention of PEP 3107 was to experiment with different syntax and semantics for types, and that experiment has resulted in the successful adoption of a specific syntax for types that is wildly successful.

On Sun, Apr 11, 2021 at 7:02 PM Larry Hastings <larry@hastings.org> wrote:
Attached is my second draft of PEP 649. The PEP and the prototype have both seen a marked improvement since round 1 in January; PEP 649 now allows annotations to refer to any variable they could see under stock semantics:
- Local variables in the current function scope or in enclosing function scopes become closures and use LOAD_DEFER.
- Class variables in the current class scope are made available using a new mechanism, in which the class dict is attached to the bound annotation function, then loaded into f_locals when the annotation function is run, thus permitting LOAD_NAME opcodes to function normally.
I look forward to your comments,
*/arry*
-- --Guido van Rossum (python.org/~guido) *Pronouns: he/him **(why is my pronoun here?)* <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
Hi,

On 13/04/2021 at 04:24, Guido van Rossum wrote:
I've been thinking about this a bit, and I think that the way forward is for Python to ignore the text of annotations ("relaxed annotation syntax"), not to try and make it available as an expression.
Then, what's wrong with quoting? It's just 2 characters, and prevents the user (or their IDE) from trying to parse them as Python syntax.

As a comparison: docstrings do get quoting, even though they also have special semantics in the language.

Cheers,
Baptiste
On Tue, Apr 13, 2021 at 9:39 AM Baptiste Carvello < devel2021@baptiste-carvello.net> wrote:
On 13/04/2021 at 04:24, Guido van Rossum wrote:
I've been thinking about this a bit, and I think that the way forward is for Python to ignore the text of annotations ("relaxed annotation syntax"), not to try and make it available as an expression.
Then, what's wrong with quoting? It's just 2 characters, and prevents the user (or their IDE) from trying to parse them as Python syntax.
Informal user research has shown high resistance to quoting.
As a comparison: docstrings do get quoting, even though they also have special semantics in the language.
Not the same thing. Docstrings use English, which has no formal (enough) syntax. The idea for annotations is that they *do* have a formal syntax, it just evolves separately from that of Python itself. -- --Guido van Rossum (python.org/~guido) *Pronouns: he/him **(why is my pronoun here?)* <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
Hi,

tl;dr: imho the like or dislike of PEP 563 is related to whether people intend to learn a second syntax for typing, or would rather ignore it; both groups should be taken into account.

On 13/04/2021 at 19:30, Guido van Rossum wrote:
On Tue, Apr 13, 2021 at 9:39 AM Baptiste Carvello <devel2021@baptiste-carvello.net> wrote:
Then, what's wrong with quoting? It's just 2 characters, and prevents the user (or their IDE) from trying to parse them as Python syntax.
Informal user research has shown high resistance to quoting.
OK, but why? I'd bet it's due to an "aesthetic" concern: for typing users, type hints are code, not textual data. So it irks them to see them quoted and syntax-highlighted as text strings.
As a comparison: docstrings do get quoting, even though they also have special semantics in the language.
Not the same thing. Docstrings use English, which has no formal (enough) syntax. The idea for annotations is that they *do* have a formal syntax, it just evolves separately from that of Python itself.
If I may say it in my words: to both the parser and (more importantly) typing-savvy developers, type hints are code. I now see the point.

But what about developers who won't learn this (future) incompatible typing syntax, and only encounter it in the wild? To them, those annotations are akin to docstrings: pieces of textual data that Python manages specially because of their role in the greater ecosystem, but that they can ignore because the program behavior is not modified.

So it will irk them if annotations in this new syntax are not quoted or otherwise made distinguishable from code written in the normal Python syntax they understand. Again the "aesthetic" concern, and imho it explains in large part why some people dislike PEP 563.

Can the needs of both groups of developers be addressed? Could code in the new typing syntax be marked with a specific syntactic marker, distinguishing it from both normal Python syntax and text strings? Then this new marker could also be used outside of annotations, to mark analysis-time-only imports or statements?

Or is this all not worth the expense, and typing syntax can manage to stay compatible with normal Python syntax, in which case PEP 649 is the way to go?

Cheers,
Baptiste
On Wed, Apr 14, 2021 at 9:42 AM Baptiste Carvello < devel2021@baptiste-carvello.net> wrote:
Hi,
tl;dr: imho the like or dislike of PEP 563 is related to whether people intend to learn a second syntax for typing, or would rather ignore it; both groups should be taken into account.
On 13/04/2021 at 19:30, Guido van Rossum wrote:

On Tue, Apr 13, 2021 at 9:39 AM Baptiste Carvello <devel2021@baptiste-carvello.net> wrote:

Then, what's wrong with quoting? It's just 2 characters, and prevents the user (or their IDE) from trying to parse them as Python syntax.

Informal user research has shown high resistance to quoting.
OK, but why? I'd bet it's due to an "aesthetic" concern: for typing users, type hints are code, not textual data. So it irks them to see them quoted and syntax-highlighted as text strings.
No, what I heard is that, since in *most* cases the string quotes are not needed, people are surprised and annoyed when they encounter cases where they are needed. And if you have a large code base it takes an expensive run of the static type checker to find out that you've forgotten the quotes.
As a comparison: docstrings do get quoting, even though they also have special semantics in the language.
Not the same thing. Docstrings use English, which has no formal (enough) syntax. The idea for annotations is that they *do* have a formal syntax, it just evolves separately from that of Python itself.
If I may say it in my words: to both the parser and (more importantly) typing-savvy developers, type hints are code. I now see the point.
But what about developers who won't learn this (future) incompatible typing syntax, and only encounter it in the wild? To them, those annotations are akin to docstrings: pieces of textual data that Python manages specially because of their role in the greater ecosystem, but that they can ignore because the program behavior is not modified.
So it will irk them if annotations in this new syntax are not quoted or otherwise made distinguishable from code written in the normal Python syntax they understand. Again the "aesthetic" concern, and imho it explains in large part why some people dislike PEP 563.
They will treat it as anything else they don't quite understand -- they will ignore it unless it bites them. And the rule for finding the end of an annotation would be very simple -- just skip words until the next comma, close paren or colon, skipping matching brackets etc. Certainly I use this strategy all the time for quickly skimming code (in any language) that I don't need to completely understand -- "oh, this is where the parameters are processed, this is where the defaults are sorted out, and this is where the work is being done; and since I'm investigating why the default is weird, let me look at that part of the code in more detail."
Can the needs of both groups of developers be addressed? Could code in the new typing syntax be marked with a specific syntactic marker, distinguishing it from both normal Python syntax and text strings? Then this new marker could also be used outside of annotations, to mark analysis-time-only imports or statements?
There already is a special marker for annotations in function definitions -- for arguments, it's the colon following the parameter name, and for return types, it's the arrow after the parameter list. And for variable declarations ("x: int") the same colon also suffices.

The idea to use the same marker for other analysis-time code is interesting, but the syntactic requirements are somewhat different -- annotations live at the "expression" level (informally speaking) and are already in a clearly indicated syntactic position -- analysis-time code looks just like other code and can occur in any position where statements can appear. So an appropriate marker would probably be an if-statement with a special condition, like "if TYPE_CHECKING" (I am all for making that a built-in constant, BTW).

If you were thinking of backticks, sorry, I'm not biting. (There are folks who have other plans for those -- presumably because it's one of the few ASCII characters that currently has no meaning.)
Or is this all not worth the expense, and typing syntax can manage to stay compatible with normal Python syntax, in which case PEP 649 is the way to go?
I don't see much expense in the proposal to relax the syntax, and I see benefits for new kinds of type annotations (e.g. PEP 647 would have benefited).

And certainly PEP 649 has considerable cost as well -- besides the cost of closing the door to relaxed annotation syntax, there's the engineering work of undoing the work that was done to make `from __future__ import annotations` the default (doing this was a significant effort spread over many commits, and undoing will be just as hard). And we would still have to support stringification when that import is explicitly given for several more releases.

-- --Guido van Rossum (python.org/~guido) *Pronouns: he/him **(why is my pronoun here?)* <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
On 4/14/21 10:44 AM, Guido van Rossum wrote:
besides the cost of closing the door to relaxed annotation syntax, there's the engineering work of undoing the work that was done to make `from __future__ import annotations` the default (doing this was a significant effort spread over many commits, and undoing will be just as hard).
I'm not sure either of those statements is true.

Accepting PEP 649 as written would deprecate stringized annotations, it's true. But the SC can make any decision it wants here, including only accepting the new semantics of 649 without deprecating stringized annotations. They could remain in the language for another release (or two? or three?) while we "kick the can down the road". This is not without its costs too but it might be the best approach for now.

As for undoing the effort to make stringized annotations the default, git should do most of the heavy lifting here. There's a technique where you check out the revision that made the change, generate a reverse patch, apply it, and check that in. This creates a new head which you then merge. That's what I did when I created my co_annotations branch, and at the time it was literally the work of ten minutes. I gather the list of changes is more substantial now, so this would have to be done multiple times, and it may be more involved. Still, if PEP 649 is accepted, I would happily volunteer to undertake this part of the workload.

Cheers,

//arry/
Let's just wait for the SC to join the discussion. I'm sure they will, eventually.

On Wed, Apr 14, 2021 at 11:12 AM Larry Hastings <larry@hastings.org> wrote:
On 4/14/21 10:44 AM, Guido van Rossum wrote:
besides the cost of closing the door to relaxed annotation syntax, there's the engineering work of undoing the work that was done to make `from __future__ import annotations` the default (doing this was a significant effort spread over many commits, and undoing will be just as hard).
I'm not sure either of those statements is true.
Accepting PEP 649 as written would deprecate stringized annotations, it's true. But the SC can make any decision it wants here, including only accepting the new semantics of 649 without deprecating stringized annotations. They could remain in the language for another release (or two? or three?) while we "kick the can down the road". This is not without its costs too but it might be the best approach for now.
As for undoing the effort to make stringized annotations the default, git should do most of the heavy lifting here. There's a technique where you check out the revision that made the change, generate a reverse patch, apply it, and check that in. This creates a new head which you then merge. That's what I did when I created my co_annotations branch, and at the time it was literally the work of ten minutes. I gather the list of changes is more substantial now, so this would have to be done multiple times, and it may be more involved. Still, if PEP 649 is accepted, I would happily volunteer to undertake this part of the workload.
Cheers,
*/arry*
-- --Guido van Rossum (python.org/~guido) *Pronouns: he/him **(why is my pronoun here?)* <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
On Wed, Apr 14, 2021 at 12:08 PM Guido van Rossum <guido@python.org> wrote:
Let's just wait for the SC to join the discussion. I'm sure they will, eventually.
FYI the PEP has not been sent to us via https://github.com/python/steering-council/issues as ready for pronouncement, so we have not started officially discussing this PEP yet.

-Brett
On Wed, Apr 14, 2021 at 11:12 AM Larry Hastings <larry@hastings.org> wrote:
On 4/14/21 10:44 AM, Guido van Rossum wrote:
besides the cost of closing the door to relaxed annotation syntax, there's the engineering work of undoing the work that was done to make `from __future__ import annotations` the default (doing this was a significant effort spread over many commits, and undoing will be just as hard).
I'm not sure either of those statements is true.
Accepting PEP 649 as written would deprecate stringized annotations, it's true. But the SC can make any decision it wants here, including only accepting the new semantics of 649 without deprecating stringized annotations. They could remain in the language for another release (or two? or three?) while we "kick the can down the road". This is not without its costs too but it might be the best approach for now.
As for undoing the effort to make stringized annotations the default, git should do most of the heavy lifting here. There's a technique where you check out the revision that made the change, generate a reverse patch, apply it, and check that in. This creates a new head which you then merge. That's what I did when I created my co_annotations branch, and at the time it was literally the work of ten minutes. I gather the list of changes is more substantial now, so this would have to be done multiple times, and it may be more involved. Still, if PEP 649 is accepted, I would happily volunteer to undertake this part of the workload.
Cheers,
*/arry*
-- --Guido van Rossum (python.org/~guido) *Pronouns: he/him **(why is my pronoun here?)* <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
My plan was to post it here and see what the response was first. Back in January, when I posted the first draft, I got some very useful feedback that resulted in some dramatic changes. This time around, so far, nobody has suggested even minor changes. Folks have just expressed their opinions about it (which is fine).

Still left to do: ping the project leads of some other static type analysis projects and see if they have any feedback to contribute.

Once the dust completely settles around the conversation here, I expect to formally submit the PEP, hopefully later this week.

Cheers,

//arry/

On 4/14/21 12:22 PM, Brett Cannon wrote:
On Wed, Apr 14, 2021 at 12:08 PM Guido van Rossum <guido@python.org> wrote:
Let's just wait for the SC to join the discussion. I'm sure they will, eventually.
FYI the PEP has not been sent to us via https://github.com/python/steering-council/issues as ready for pronouncement, so we have not started officially discussing this PEP yet.
-Brett
On Wed, Apr 14, 2021 at 11:12 AM Larry Hastings <larry@hastings.org> wrote:
On 4/14/21 10:44 AM, Guido van Rossum wrote:
besides the cost of closing the door to relaxed annotation syntax, there's the engineering work of undoing the work that was done to make `from __future__ import annotations` the default (doing this was a significant effort spread over many commits, and undoing will be just as hard).
I'm not sure either of those statements is true.
Accepting PEP 649 as written would deprecate stringized annotations, it's true. But the SC can make any decision it wants here, including only accepting the new semantics of 649 without deprecating stringized annotations. They could remain in the language for another release (or two? or three?) while we "kick the can down the road". This is not without its costs too but it might be the best approach for now.
As for undoing the effort to make stringized annotations the default, git should do most of the heavy lifting here. There's a technique where you check out the revision that made the change, generate a reverse patch, apply it, and check that in. This creates a new head which you then merge. That's what I did when I created my co_annotations branch, and at the time it was literally the work of ten minutes. I gather the list of changes is more substantial now, so this would have to be done multiple times, and it may be more involved. Still, if PEP 649 is accepted, I would happily volunteer to undertake this part of the workload.
Cheers,
//arry/
-- --Guido van Rossum (python.org/~guido) /Pronouns: he/him //(why is my pronoun here?)/ <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
Hi Larry,

On 4/14/21, 1:56 PM, "Larry Hastings" <larry@midwinter.com on behalf of larry@hastings.org> wrote:
My plan was to post it here and see what the response was first. Back in January, when I posted the first draft, I got some very useful feedback that resulted in some dramatic changes. This time around, so far, nobody has suggested even minor changes. Folks have just expressed their opinions about it (which is fine).
This is not true. I suggested yesterday (in https://mail.python.org/archives/list/python-dev@python.org/message/DSZFE7XT... ) that PEP 649 could avoid making life worse for users of type annotations (relative to PEP 563) if it replaced runtime-undefined names with forward reference markers, as implemented in https://github.com/larryhastings/co_annotations/pull/3

Perhaps you've chosen to ignore the suggestion, but that's not the same as nobody suggesting any changes ;)

Carl
Hi,

On 14/04/2021 at 19:44, Guido van Rossum wrote:
No, what I heard is that, since in *most* cases the string quotes are not needed, people are surprised and annoyed when they encounter cases where they are needed. And if you have a large code base it takes an expensive run of the static type checker to find out that you've forgotten the quotes.
Well, I had assumed quotes would be used in all cases for consistency. Indeed, using them only if needed leads to surprises. Are there specific annoyances associated with quoting always, apart from the 2 more characters?
[...]
They will treat it as anything else they don't quite understand -- they will ignore it unless it bites them. And the rule for finding the end of an annotation would be very simple -- just skip words until the next comma, close paren or colon, skipping matching brackets etc.
That's assuming the syntax in the annotations doesn't diverge too much from the Python syntax as far as brackets etc are concerned. I must say I'm not too worried about typing. But the hypothetic "def foo(prec: --precision int):" is already less readable. Will finding the closing comma or colon always be obvious to the human reader? Cheers, Baptiste
On 4/14/21 1:42 PM, Baptiste Carvello wrote:
Are there specific annoyances associated with quoting always, apart from the 2 more characters?
Yes. Since the quoted strings aren't parsed by Python, syntax errors in these strings go undetected until somebody does parse them (e.g. your static type analyzer). Having the Python compiler de-compile them back into strings means they got successfully parsed. Though this doesn't rule out other errors, e.g. NameError.

I thought this was discussed in PEP 563, but now I can't find it, so unfortunately I can't steer you towards any more info on the subject.

Cheers,

//arry/
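(A small illustration of the point: the function below compiles and imports fine even though the stringized annotation is not valid Python; the error only surfaces when something finally parses the string:)

```
from typing import get_type_hints

def f(x: "List[int") -> None:   # missing "]" -- Python happily stores the string
    pass

print(f.__annotations__)  # {'x': 'List[int', 'return': None}
get_type_hints(f)         # only now does a SyntaxError surface, when the string is parsed
```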
Larry Hastings wrote:
On 4/14/21 1:42 PM, Baptiste Carvello wrote:
Are there specific annoyances associated with quoting always, apart from the 2 more characters?
Yes. Since the quoted strings aren't parsed by Python, syntax errors in these strings go undetected until somebody does parse them (e.g. your static type analyzer).
This is a real problem. But in theory, your code editor (and python) *could* parse the strings. They generally don't, but I'm not sure asking them to do that is much harder than asking them to deal with new syntax. -jJ
On Wed, 2021-04-14 at 22:42 +0200, Baptiste Carvello wrote:
That's assuming the syntax in the annotations doesn't diverge too much from the Python syntax as far as brackets etc are concerned. I must say I'm not too worried about typing. But the hypothetic "def foo(prec: --precision int):" is already less readable. Will finding the closing comma or colon always be obvious to the human reader?
To push the limit, let's add some default value:

def foo(prec: --precision int = 123): ...

vs.

def foo(prec: "--precision int" = 123): ...

And if a "type parameter" becomes numeric, for example:

def foo(prec: --max bar 3000 = 123): ...

vs.

def foo(prec: "--max bar 3000" = 123): ...

Now, the quotes seem even more readable as a delimiter.

Paul
Baptiste Carvello wrote:
On 14/04/2021 at 19:44, Guido van Rossum wrote:
No, what I heard is that, since in *most* cases the string quotes are not needed, people are surprised and annoyed when they encounter cases where they are needed.
Well, I had assumed quotes would be used in all cases for consistency.
That does seem like a reasonable solution. Redundant, ugly, and annoying, but safe and consistent. Sort of like using type constraints in the first place. :D
... the rule for finding the end of an annotation would be very simple -- just skip words until the next comma, close paren or colon, skipping matching brackets etc.
... But the hypothetic "def foo(prec: --precision int):" is already less readable. Will finding the closing comma or colon always be obvious to the human reader?
Nope. "--" sometimes means "ignore the rest of the line, including the ")". At the moment, I can't remember where I've seen this outside of SQL, but I can guarantee that if I read it late enough at night, the *best* case would be that I notice the ambiguity, guess correctly and am only annoyed. -jJ
On 4/12/21 7:24 PM, Guido van Rossum wrote:
I've been thinking about this a bit, and I think that the way forward is for Python to ignore the text of annotations ("relaxed annotation syntax"), not to try and make it available as an expression.
To be honest, the most pressing issue with annotations is the clumsy way that type variables have to be introduced. The current convention, `T = TypeVar('T')`, is both verbose (why do I have to repeat the name?) and widely misunderstood (many help request for mypy and pyright follow from users making a mistaken association between two type variables that are unrelated but share the same TypeVar definition). And relaxed annotation syntax alone doesn't solve this.
Nevertheless I think that it's time to accept that annotations are for types -- the intention of PEP 3107 was to experiment with different syntax and semantics for types, and that experiment has resulted in the successful adoption of a specific syntax for types that is wildly successful.
I don't follow your reasoning. I'm glad that type hints have found success, but I don't see why that implies "and therefore we should restrict the use of annotations solely for type hints". Annotations are a useful, general-purpose feature of Python, with legitimate uses besides type hints. Why would it make Python better to restrict their use now? //arry/
On Tue, Apr 13, 2021 at 12:32 PM Larry Hastings <larry@hastings.org> wrote:
On 4/12/21 7:24 PM, Guido van Rossum wrote:
I've been thinking about this a bit, and I think that the way forward is for Python to ignore the text of annotations ("relaxed annotation syntax"), not to try and make it available as an expression.
To be honest, the most pressing issue with annotations is the clumsy way that type variables have to be introduced. The current convention, `T = TypeVar('T')`, is both verbose (why do I have to repeat the name?) and widely misunderstood (many help request for mypy and pyright follow from users making a mistaken association between two type variables that are unrelated but share the same TypeVar definition). And relaxed annotation syntax alone doesn't solve this.
Nevertheless I think that it's time to accept that annotations are for types -- the intention of PEP 3107 was to experiment with different syntax and semantics for types, and that experiment has resulted in the successful adoption of a specific syntax for types that is wildly successful.
I don't follow your reasoning. I'm glad that type hints have found success, but I don't see why that implies "and therefore we should restrict the use of annotations solely for type hints". Annotations are a useful, general-purpose feature of Python, with legitimate uses besides type hints. Why would it make Python better to restrict their use now?
Because typing is, to many folks, a Really Important Concept, and it's confusing to use the same syntax ("x: blah blah") for different purposes, in a way that makes it hard to tell whether a particular "blah blah" is meant as a type or as something else -- because you have to know what's introspecting the annotations before you can tell. And that introspection could be signalled by a magical decorator, but it could also be implicit: maybe you have a driver that calls a function based on a CLI entry point name, and introspects that function even if it's not decorated. OTOH, not requiring that annotations are syntactically valid expressions might liberate such a CLI library too: you could write things like def foo(prec: --precision int): ... -- --Guido van Rossum (python.org/~guido) *Pronouns: he/him **(why is my pronoun here?)* <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
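For a concrete picture of the implicit introspection described above, here is a minimal sketch (serve and run_cli are invented names, not any real library's API) of a driver that reads annotations off an undecorated function and uses them as converters:

```
import inspect
import sys

def serve(port: int, host: str = "127.0.0.1"):
    print(f"serving on {host}:{port}")

def run_cli(func, argv):
    # Convert each command-line string using the parameter's annotation.
    sig = inspect.signature(func)
    args = []
    for raw, param in zip(argv, sig.parameters.values()):
        conv = param.annotation
        args.append(conv(raw) if conv is not inspect.Parameter.empty else raw)
    return func(*args)

run_cli(serve, sys.argv[1:])   # e.g. `python app.py 8080`
```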
On 4/13/21 1:52 PM, Guido van Rossum wrote:
On Tue, Apr 13, 2021 at 12:32 PM Larry Hastings <larry@hastings.org <mailto:larry@hastings.org>> wrote:
On 4/12/21 7:24 PM, Guido van Rossum wrote:
I've been thinking about this a bit, and I think that the way forward is for Python to ignore the text of annotations ("relaxed annotation syntax"), not to try and make it available as an expression.
To be honest, the most pressing issue with annotations is the clumsy way that type variables have to be introduced. The current convention, `T = TypeVar('T')`, is both verbose (why do I have to repeat the name?) and widely misunderstood (many help request for mypy and pyright follow from users making a mistaken association between two type variables that are unrelated but share the same TypeVar definition). And relaxed annotation syntax alone doesn't solve this.
Nevertheless I think that it's time to accept that annotations are for types -- the intention of PEP 3107 was to experiment with different syntax and semantics for types, and that experiment has resulted in the successful adoption of a specific syntax for types that is wildly successful.
I don't follow your reasoning. I'm glad that type hints have found success, but I don't see why that implies "and therefore we should restrict the use of annotations solely for type hints". Annotations are a useful, general-purpose feature of Python, with legitimate uses besides type hints. Why would it make Python better to restrict their use now?
Because typing is, to many folks, a Really Important Concept, and it's confusing to use the same syntax ("x: blah blah") for different purposes, in a way that makes it hard to tell whether a particular "blah blah" is meant as a type or as something else -- because you have to know what's introspecting the annotations before you can tell. And that introspection could be signalled by a magical decorator, but it could also be implicit: maybe you have a driver that calls a function based on a CLI entry point name, and introspects that function even if it's not decorated.
I'm not sure I understand your point. Are you saying that we need to take away the general-purpose functionality of annotations, that's been in the language since 3.0, and restrict annotations to just type hints... because otherwise an annotation might not be used for a type hint, and then the programmer would have to figure out what it means? We need to take away the functionality from all other use cases in order to lend /clarity/ to one use case? Also, if you're stating that programmers get confused reading source code because annotations get used for different things at different places--surely that confirms that annotations are /useful/ for more than just type hints, in real-world code, today. I genuinely have no sense of how important static type analysis is in Python--personally I have no need for it--but I find it hard to believe that type hints are so overwhelmingly important that they should become the sole use case for annotations, and we need to take away this long-standing functionality, that you suggest is being successfully used side-by-side with type hints today, merely to make type hints clearer. Cheers, //arry/
On Wed, Apr 14, 2021 at 10:44 AM Larry Hastings <larry@hastings.org> wrote:
On 4/13/21 1:52 PM, Guido van Rossum wrote:
Because typing is, to many folks, a Really Important Concept, and it's confusing to use the same syntax ("x: blah blah") for different purposes, in a way that makes it hard to tell whether a particular "blah blah" is meant as a type or as something else -- because you have to know what's introspecting the annotations before you can tell. And that introspection could be signalled by a magical decorator, but it could also be implicit: maybe you have a driver that calls a function based on a CLI entry point name, and introspects that function even if it's not decorated.
I'm not sure I understand your point. Are you saying that we need to take away the general-purpose functionality of annotations, that's been in the language since 3.0, and restrict annotations to just type hints... because otherwise an annotation might not be used for a type hint, and then the programmer would have to figure out what it means? We need to take away the functionality from all other use cases in order to lend clarity to one use case?
I don't think we need to take away "general purpose functionality". But if we define type hinting as the first-class use case of annotations, annotations should be optimized for type hinting, and general-purpose use cases should accept some limitations and overhead. On the other hand, if we decide general-purpose functionality is first class too, we shouldn't make annotation syntax diverge from Python syntax. Either way, annotations should be optimized for type hinting: general-purpose uses appear in only a limited part of an application, while type hints can appear almost everywhere in an application's code base, so they must be cheap enough. Regards, -- Inada Naoki <songofacandy@gmail.com>
For the record, Cython allows using annotations for typing: https://cython.readthedocs.io/en/latest/src/tutorial/pure.html#pep-484-type-... I don't know if they are fully compatible with the type hints we're talking about here. Regards Antoine. On Wed, 14 Apr 2021 10:58:07 +0900 Inada Naoki <songofacandy@gmail.com> wrote:
On Wed, Apr 14, 2021 at 10:44 AM Larry Hastings <larry@hastings.org> wrote:
On 4/13/21 1:52 PM, Guido van Rossum wrote:
Because typing is, to many folks, a Really Important Concept, and it's confusing to use the same syntax ("x: blah blah") for different purposes, in a way that makes it hard to tell whether a particular "blah blah" is meant as a type or as something else -- because you have to know what's introspecting the annotations before you can tell. And that introspection could be signalled by a magical decorator, but it could also be implicit: maybe you have a driver that calls a function based on a CLI entry point name, and introspects that function even if it's not decorated.
I'm not sure I understand your point. Are you saying that we need to take away the general-purpose functionality of annotations, that's been in the language since 3.0, and restrict annotations to just type hints... because otherwise an annotation might not be used for a type hint, and then the programmer would have to figure out what it means? We need to take away the functionality from all other use cases in order to lend clarity to one use case?
I don't think we need to take away "general purpose functionality". But if we define type hinting as the first-class use case of annotations, annotations should be optimized for type hinting, and general-purpose use cases should accept some limitations and overhead.
On the other hand, if we decide general-purpose functionality is first class too, we shouldn't make annotation syntax diverge from Python syntax.
But annotations should be optimized for type hinting anyway: general-purpose uses appear in only a limited part of an application, while type hints can appear almost everywhere in an application's code base, so they must be cheap enough.
Regards,
It looks like a small subset of PEP 484, syntactically. So it should be fine. Possibly cython might be interested in using a relaxed notation if it is ever introduced, e.g. ‘long long’ or ‘static int’ (for a return type)? On Wed, Apr 14, 2021 at 02:27 Antoine Pitrou <antoine@python.org> wrote:
For the record, Cython allows using annotations for typing:
https://cython.readthedocs.io/en/latest/src/tutorial/pure.html#pep-484-type-...
I don't know if they are fully compatible with the type hints we're talking about here.
Regards
Antoine.
On Wed, 14 Apr 2021 10:58:07 +0900 Inada Naoki <songofacandy@gmail.com> wrote:
On Wed, Apr 14, 2021 at 10:44 AM Larry Hastings <larry@hastings.org> wrote:
On 4/13/21 1:52 PM, Guido van Rossum wrote:
Because typing is, to many folks, a Really Important Concept, and it's
confusing to use the same syntax ("x: blah blah") for different purposes, in a way that makes it hard to tell whether a particular "blah blah" is meant as a type or as something else -- because you have to know what's introspecting the annotations before you can tell. And that introspection could be signalled by a magical decorator, but it could also be implicit: maybe you have a driver that calls a function based on a CLI entry point name, and introspects that function even if it's not decorated.
I'm not sure I understand your point. Are you saying that we need to
take away the general-purpose functionality of annotations, that's been in
the language since 3.0, and restrict annotations to just type hints... because otherwise an annotation might not be used for a type hint, and then the programmer would have to figure out what it means? We need to take away the functionality from all other use cases in order to lend clarity to one use case?
I don't think we need to take away "general purpose functionality". But if we define type hinting as the first-class use case of annotations, annotations should be optimized for type hinting, and general-purpose use cases should accept some limitations and overhead.
On the other hand, if we decide general-purpose functionality is first class too, we shouldn't make annotation syntax diverge from Python syntax.
But annotations should be optimized for type hinting anyway: general-purpose uses appear in only a limited part of an application, while type hints can appear almost everywhere in an application's code base, so they must be cheap enough.
Regards,
-- --Guido (mobile)
On Tue, Apr 13, 2021 at 6:58 PM Inada Naoki <songofacandy@gmail.com> wrote:
On Wed, Apr 14, 2021 at 10:44 AM Larry Hastings <larry@hastings.org> wrote:
On 4/13/21 1:52 PM, Guido van Rossum wrote:
Because typing is, to many folks, a Really Important Concept, and it's
confusing to use the same syntax ("x: blah blah") for different purposes, in a way that makes it hard to tell whether a particular "blah blah" is meant as a type or as something else -- because you have to know what's introspecting the annotations before you can tell. And that introspection could be signalled by a magical decorator, but it could also be implicit: maybe you have a driver that calls a function based on a CLI entry point name, and introspects that function even if it's not decorated.
I'm not sure I understand your point. Are you saying that we need to
take away the general-purpose functionality of annotations, that's been in the language since 3.0, and restrict annotations to just type hints... because otherwise an annotation might not be used for a type hint, and then the programmer would have to figure out what it means? We need to take away the functionality from all other use cases in order to lend clarity to one use case?
I don't think we need to take away "general purpose functionality". But if we define type hinting as the first-class use case of annotations, annotations should be optimized for type hinting, and general-purpose use cases should accept some limitations and overhead.
On the other hand, if we decide general-purpose functionality is first class too, we shouldn't make annotation syntax diverge from Python syntax.
Has anyone reached out to people like Pydantic, FastAPI, Typer, etc. to see what they think of this PEP? For instance, are they having enough issues with the way things are today that this would be a very clear win for them? -Brett
But annotations should be optimized for type hinting anyway: general-purpose uses appear in only a limited part of an application, while type hints can appear almost everywhere in an application's code base, so they must be cheap enough.
Regards,
-- Inada Naoki <songofacandy@gmail.com>
On Tue, Apr 13, 2021 at 6:48 PM Larry Hastings <larry@hastings.org> wrote:
On 4/13/21 1:52 PM, Guido van Rossum wrote:
On Tue, Apr 13, 2021 at 12:32 PM Larry Hastings <larry@hastings.org> wrote:
On 4/12/21 7:24 PM, Guido van Rossum wrote:
I've been thinking about this a bit, and I think that the way forward is for Python to ignore the text of annotations ("relaxed annotation syntax"), not to try and make it available as an expression.
To be honest, the most pressing issue with annotations is the clumsy way that type variables have to be introduced. The current convention, `T = TypeVar('T')`, is both verbose (why do I have to repeat the name?) and widely misunderstood (many help request for mypy and pyright follow from users making a mistaken association between two type variables that are unrelated but share the same TypeVar definition). And relaxed annotation syntax alone doesn't solve this.
Nevertheless I think that it's time to accept that annotations are for types -- the intention of PEP 3107 was to experiment with different syntax and semantics for types, and that experiment has resulted in the successful adoption of a specific syntax for types that is wildly successful.
I don't follow your reasoning. I'm glad that type hints have found success, but I don't see why that implies "and therefore we should restrict the use of annotations solely for type hints". Annotations are a useful, general-purpose feature of Python, with legitimate uses besides type hints. Why would it make Python better to restrict their use now?
Because typing is, to many folks, a Really Important Concept, and it's confusing to use the same syntax ("x: blah blah") for different purposes, in a way that makes it hard to tell whether a particular "blah blah" is meant as a type or as something else -- because you have to know what's introspecting the annotations before you can tell. And that introspection could be signalled by a magical decorator, but it could also be implicit: maybe you have a driver that calls a function based on a CLI entry point name, and introspects that function even if it's not decorated.
I'm not sure I understand your point. Are you saying that we need to take away the general-purpose functionality of annotations, that's been in the language since 3.0, and restrict annotations to just type hints... because otherwise an annotation might not be used for a type hint, and then the programmer would have to figure out what it means? We need to take away the functionality from all other use cases in order to lend *clarity* to one use case?
Yes, that's how I see it. And before you get too dramatic about it, the stringification of annotations has been in the making a long time, with the community's and the SC's support. You came up with a last-minute attempt to change it, using the PEP process to propose to *revert* the decision already codified in PEP 563 and implemented in the master branch. But you've waited until the last minute (feature freeze is in three weeks) and IMO you're making things awkward for the SC (who can and will speak for themselves).
Also, if you're stating that programmers get confused reading source code because annotations get used for different things at different places--surely that confirms that annotations are *useful* for more than just type hints, in real-world code, today.
No, it doesn't, it's just a hypothetical that they *would* be confused if there *were* other uses. Personally I haven't used any libraries that use non-type-hint annotations, but I've been told they exist.
I genuinely have no sense of how important static type analysis is in Python--personally I have no need for it--but I find it hard to believe that type hints are so overwhelmingly important that they should become the sole use case for annotations, and we need to take away this long-standing functionality, that you suggest is being successfully used side-by-side with type hints today, merely to make type hints clearer.
For projects and teams that use type hints, they are *very* important. For example, they are so important to the Instagram team at Facebook that they wrote their own static type checker when they found mypy wasn't fast enough for their million-line codebase. And of course they were so important to Dropbox that they sponsored a multi-year, multi-person effort to create mypy in the first place. The amount of feedback we've received for mypy indicates that it's not just those two companies that are using type hints. -- --Guido van Rossum (python.org/~guido) *Pronouns: he/him **(why is my pronoun here?)* <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
I favour annotations for type hints; the writing's been on the wall for some time. I think the necessary escape hatch for those using it for other purposes should be Annotated[Any, ...] (or a similar, nicer alternative). Guido, one of the difficulties I'm having is understanding the direction you're going with "relaxed syntax". PEP 649 is concrete; it's hard to weigh its merits against the usability—even feasibility—of incorporating an as yet undefined relaxed syntax. At the end of the day, such syntax is going to have to be represented in some structure. If one were to accept that annotations are for type hints only, is the debate then the difference between a Python type (which PEP 649 would yield) and some other as yet undefined structure? Paul On Wed, 2021-04-14 at 10:24 -0700, Guido van Rossum wrote:
On Tue, Apr 13, 2021 at 6:48 PM Larry Hastings <larry@hastings.org> wrote:
On 4/13/21 1:52 PM, Guido van Rossum wrote:
On Tue, Apr 13, 2021 at 12:32 PM Larry Hastings <larry@hastings.org> wrote:
On 4/12/21 7:24 PM, Guido van Rossum wrote:
I've been thinking about this a bit, and I think that the way forward is for Python to ignore the text of annotations ("relaxed annotation syntax"), not to try and make it available as an expression.
To be honest, the most pressing issue with annotations is the clumsy way that type variables have to be introduced. The current convention, `T = TypeVar('T')`, is both verbose (why do I have to repeat the name?) and widely misunderstood (many help request for mypy and pyright follow from users making a mistaken association between two type variables that are unrelated but share the same TypeVar definition). And relaxed annotation syntax alone doesn't solve this.
Nevertheless I think that it's time to accept that annotations are for types -- the intention of PEP 3107 was to experiment with different syntax and semantics for types, and that experiment has resulted in the successful adoption of a specific syntax for types that is wildly successful.
I don't follow your reasoning. I'm glad that type hints have found success, but I don't see why that implies "and therefore we should restrict the use of annotations solely for type hints". Annotations are a useful, general-purpose feature of Python, with legitimate uses besides type hints. Why would it make Python better to restrict their use now?
Because typing is, to many folks, a Really Important Concept, and it's confusing to use the same syntax ("x: blah blah") for different purposes, in a way that makes it hard to tell whether a particular "blah blah" is meant as a type or as something else -- because you have to know what's introspecting the annotations before you can tell. And that introspection could be signalled by a magical decorator, but it could also be implicit: maybe you have a driver that calls a function based on a CLI entry point name, and introspects that function even if it's not decorated.
I'm not sure I understand your point. Are you saying that we need to take away the general-purpose functionality of annotations, that's been in the language since 3.0, and restrict annotations to just type hints... because otherwise an annotation might not be used for a type hint, and then the programmer would have to figure out what it means? We need to take away the functionality from all other use cases in order to lend clarity to one use case?
Yes, that's how I see it.
And before you get too dramatic about it, the stringification of annotations has been in the making a long time, with the community's and the SC's support. You came up with a last-minute attempt to change it, using the PEP process to propose to *revert* the decision already codified in PEP 563 and implemented in the master branch. But you've waited until the last minute (feature freeze is in three weeks) and IMO you're making things awkward for the SC (who can and will speak for themselves).
Also, if you're stating that programmers get confused reading source code because annotations get used for different things at different places--surely that confirms that annotations are useful for more than just type hints, in real-world code, today.
No, it doesn't, it's just a hypothetical that they *would* be confused if there *were* other uses. Personally I haven't used any libraries that use non-type-hint annotations, but I've been told they exist.
I genuinely have no sense of how important static type analysis is in Python--personally I have no need for it--but I find it hard to believe that type hints are so overwhelmingly important that they should become the sole use case for annotations, and we need to take away this long-standing functionality, that you suggest is being successfully used side-by-side with type hints today, merely to make type hints clearer.
For projects and teams that use type hints, they are *very* important. For example, they are so important to the Instagram team at Facebook that they wrote their own static type checker when they found mypy wasn't fast enough for their million-line codebase. And of course they were so important to Dropbox that they sponsored a multi-year, multi-person effort to create mypy in the first place. The amount of feedback we've received for mypy indicates that it's not just those two companies that are using type hints.
On Wed, Apr 14, 2021 at 10:47 AM Paul Bryan <pbryan@anode.ca> wrote:
I favour annotations for type hints; the writing's been on the wall for some time. I think the necessary escape hatch for those using it for other purposes should be Annotated[Any, ...] (or a similar, nicer alternative).
Guido, one of the difficulties I'm having is understanding the direction you're going with "relaxed syntax". PEP 649 is concrete; it's hard to weigh its merits against the usability—even feasibility—of incorporating an as yet undefined relaxed syntax.
At the end of the day, such syntax is going to have to be represented in some structure. If one were to accept that annotations are for type hints only, is the debate then the difference between a Python type (which PEP 649 would yield) and some other as yet undefined structure?
In `__annotations__` it would be a string, as currently implemented in the 3.10 alpha code. The string just might not be parsable as an expression. In the AST, it will have to be a new node that just collects tokens and bracketed things; that could be an array of low-level tokens. -- --Guido van Rossum (python.org/~guido) *Pronouns: he/him **(why is my pronoun here?)* <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
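For reference, this is roughly what "a string in __annotations__" already looks like under PEP 563 in the 3.10 alpha; a relaxed syntax would simply also allow strings that don't parse as expressions:

```
from __future__ import annotations   # PEP 563 stringification

def f(x: list[int]) -> dict[str, float]:
    ...

print(f.__annotations__)
# {'x': 'list[int]', 'return': 'dict[str, float]'}
```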
What would you expect get_type_hints(...) to return with relaxed syntax? Today, for type hint annotations, it returns a type, which I'd argue is an important feature to preserve (in it or some successor). On Wed, 2021-04-14 at 10:54 -0700, Guido van Rossum wrote:
On Wed, Apr 14, 2021 at 10:47 AM Paul Bryan <pbryan@anode.ca> wrote:
I favour annotations for type hints; the writing's been on the wall for some time. I think the necessary escape hatch for those using it for other purposes should be Annotated[Any, ...] (or a similar, nicer alternative).
Guido, one of the difficulties I'm having is understanding the direction you're going with "relaxed syntax". PEP 649 is concrete; it's hard to weigh its merits against the usability—even feasibility—of incorporating an as yet undefined relaxed syntax.
At the end of the day, such syntax is going to have to be represented in some structure. If one were to accept that annotations are for type hints only, is the debate then the difference between a Python type (which PEP 649 would yield) and some other as yet undefined structure?
In `__annotations__` it would be a string, as currently implemented in the 3.10 alpha code. The string just might not be parsable as an expression.
In the AST, it will have to be a new node that just collects tokens and bracketed things; that could be an array of low-level tokens.
On Wed, Apr 14, 2021 at 11:03 AM Paul Bryan <pbryan@anode.ca> wrote:
What would you expect get_type_hints(...) to return with relaxed syntax? Today, for type hint annotations, it returns a type, which I'd argue is an important feature to preserve (in it or some successor).
It would have to return some other representation. Presumably the (purely hypothetical) new syntax would be syntactic sugar for something that can be expressed as an object, just like (as of PEP 604) X | Y is syntactic sugar for Union[X, Y]. -- --Guido van Rossum (python.org/~guido) *Pronouns: he/him **(why is my pronoun here?)* <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-change-the-world/>
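A quick check of that sugar at runtime (Python 3.10+, where PEP 604 unions exist as objects):

```
from typing import Union

hint = int | str                  # PEP 604 spelling
print(hint == Union[int, str])    # True -- same union, different syntax
print(type(hint))                 # <class 'types.UnionType'>
```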
On 4/12/21 7:24 PM, Guido van Rossum wrote:
To be honest, the most pressing issue with annotations is the clumsy way that type variables have to be introduced. The current convention, `T = TypeVar('T')`, is both verbose (why do I have to repeat the name?) and widely misunderstood (many help requests for mypy and pyright follow from users making a mistaken association between two type variables that are unrelated but share the same TypeVar definition).
This repeat-the-name behavior has been in Python for a long time, e.g.

    Point = namedtuple('Point', ['x', 'y'])

namedtuple() shipped with Python 2.6 in 2008. So if that's the most pressing issue with annotations, annotations must be going quite well, because we've known about this for at least 13 years without attempting to solve it. I've always assumed that this repetition was worth the minor inconvenience. You only have to retype the name once, and the resulting code is clear and readable, with predictable behavior. A small price to pay to preserve Python's famous readability.

For what it's worth--and forgive me for straying slightly into python-ideas territory--/if/ we wanted to eliminate the need to repeat the name, I'd prefer a general-purpose solution rather than something tailored specifically for type hints. In a recent private email conversation on a different topic, I proposed this syntax:

    bind <id> <expression>

This statement would be equivalent to

    id = expression('id')

Cheers, //arry/
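A small sketch of both patterns under discussion, plus what the hypothetical bind statement would desugar to (bind is only the proposal above, not real syntax):

```
from collections import namedtuple
from typing import TypeVar

Point = namedtuple('Point', ['x', 'y'])   # the name repeated, as since 2.6

T = TypeVar('T')                          # the name repeated again

def first(items: list[T]) -> T:           # T links the parameter to the result
    return items[0]

def identity(x: T) -> T:                  # reusing the same T here does NOT
    return x                              # relate identity() to first()

# The hypothetical statement  bind T TypeVar  would simply desugar to:
# T = TypeVar('T')
```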
Hi, On 12/04/2021 at 03:55, Larry Hastings wrote:
I look forward to your comments,
2 reading notes: * in section "Annotations That Refer To Class Variables":
If it's possible that an annotation function refers to class variables--if all these conditions are true:
* The annotation function is being defined inside a class scope. * The generated code for the annotation function has at least one ``LOAD_NAME`` instruction.
I'm afraid I don't really understand the second condition. Would it be possible to rephrase it in a less technical way, i.e. some condition on the user code itself, not on what the implementation does with it. * in section "Interactive REPL Shell":
For the sake of simplicity, in this case we forego delayed evaluation.
This has the unpleasant consequence that any code using forward references cannot be copy-pasted into the REPL. While such copy-pasting is a very casual practice and does already often break, it is sometimes useful in quick'n dirty prototyping. Would it be possible to specify that in this case, a possible NameError in evaluation is caught, and the annotation is set to None or some sentinel value? Cheers, Baptiste
On 4/13/2021 4:21 AM, Baptiste Carvello wrote:
On 12/04/2021 at 03:55, Larry Hastings wrote:
* in section "Interactive REPL Shell":
For the sake of simplicity, in this case we forego delayed evaluation.
The intention of the code + codeop modules is that people should be able to write interactive consoles that simulate the standard REPL. For example: Python 3.10.0a7+ (heads/master-dirty:a9cf69df2e, Apr 12 2021, 15:36:39) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information.
import code
code.interact()
Python 3.10.0a7+ (heads/master-dirty:a9cf69df2e, Apr 12 2021, 15:36:39) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
# Call has not returned. Prompt is from code.InteractiveConsole.
def f(x:int): -> float
f.__annotations__ # should match REPL result
^Z
now exiting InteractiveConsole...
Now back to repl
If the REPL compiles with mode='single', and the spec is changed to "when mode is 'single'", then the above should work. Larry, please test with your proposed implementation. -- Terry Jan Reedy
On 4/13/21 3:28 PM, Terry Reedy wrote:
On 4/13/2021 4:21 AM, Baptiste Carvello wrote:
On 12/04/2021 at 03:55, Larry Hastings wrote:
* in section "Interactive REPL Shell":
For the sake of simplicity, in this case we forego delayed evaluation.
The intention of the code + codeop modules is that people should be able to write interactive consoles that simulate the standard REPL. For example:
Python 3.10.0a7+ (heads/master-dirty:a9cf69df2e, Apr 12 2021, 15:36:39) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information.
import code
code.interact()
Python 3.10.0a7+ (heads/master-dirty:a9cf69df2e, Apr 12 2021, 15:36:39) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
# Call has not returned. Prompt is from code.InteractiveConsole.
def f(x:int): -> float
f.__annotations__ # should match REPL result
^Z
now exiting InteractiveConsole...
Now back to repl
If the REPL compiles with mode='single', and the spec is changed to "when mode is 'single'", then the above should work. Larry, please test with your proposed implementation.
A couple things!

1. I apologize if the PEP wasn't clear, but this section was talking about the problem of /module/ annotations in the implicit __main__ module when using the interactive REPL. Annotations on other objects (classes, functions, etc) defined in the interactive REPL work as expected.

2. The above example has a minor bug: when defining a return annotation on a function, the colon ending the function declaration goes /after/ the return annotation. It should have been "def f(x:int) -> float:".

3. The above example works fine when run in my branch.

4. You need to "from __future__ import co_annotations" in order to activate delayed evaluation of annotations using code objects in my branch. I added that (inside the code.interact() shell!) and it still works fine.

So I'm not sure what problem you're proposing to solve with this "mode is single" stuff.

* If you thought there was a problem with defining annotations on functions and classes defined in the REPL, good news! It was never a problem.
* If you're solving the problem of defining annotations on the interactive module /itself,/ I don't understand what your proposed solution is or how it would work. The problem is, how do you create a code object that defines all the annotations on a module, when the module never finishes being defined because it's the interactive shell?

Cheers, //arry/
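For contrast, this is the module-level case in question: under stock semantics each annotated statement in the REPL just updates __main__.__annotations__ as it executes, so there is never a single module body whose annotations could be compiled into one deferred code object.

```
>>> x: int                  # no value assigned; only the annotation is recorded
>>> __annotations__
{'x': <class 'int'>}
>>> y: "Forward"            # stock semantics: the quoted string is stored as-is
>>> __annotations__
{'x': <class 'int'>, 'y': 'Forward'}
```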
I created a simple benchmark: https://gist.github.com/methane/abb509e5f781cc4a103cc450e1e7925d

This benchmark creates 1000 annotated functions and measures the time to load and exec them. All interpreters are built without --pydebug, --enable-optimization, and --with-lto. And here is the result:

```
# Python 3.9 w/ stock semantics
$ python3 ~/ann_test.py 1
code size: 121011
unmarshal: avg: 0.33605549649801103 +/- 0.007382938279889738
exec: avg: 0.395090194279328 +/- 0.001004608380122509

# Python 3.9 w/ PEP 563 semantics
$ python3 ~/ann_test.py 2
code size: 121070
unmarshal: avg: 0.3407619891455397 +/- 0.0011833618746421965
exec: avg: 0.24590165729168803 +/- 0.0003123404336687428

# master branch w/ PEP 563 semantics
$ ./python ~/ann_test.py 2
code size: 149086
unmarshal: avg: 0.45410854648798704 +/- 0.00107521956753799
exec: avg: 0.11281821667216718 +/- 0.00011939747308270317

# master branch + optimization (*) w/ PEP 563 semantics
$ ./python ~/ann_test.py 2
code size: 110488
unmarshal: avg: 0.3184352931333706 +/- 0.0015278719180908732
exec: avg: 0.11042822999879717 +/- 0.00018108884723599264

# co_annotations reference implementation w/ PEP 649 semantics
$ ./python ~/ann_test.py 3
code size: 229679
unmarshal: avg: 0.6402394526172429 +/- 0.0006400500128250688
exec: avg: 0.09774857209995388 +/- 9.275466265195788e-05

# co_annotations reference implementation + optimization (*) w/ PEP 649 semantics
$ ./python ~/ann_test.py 3
code size: 204963
unmarshal: avg: 0.5824743471574039 +/- 0.007219086642131638
exec: avg: 0.09641968684736639 +/- 0.0001416784753249878
```

(*) I found that constant folding creates a new tuple every time even though the same tuple is in the constant table. See https://github.com/python/cpython/pull/25419 For co_annotations, I cherry-picked https://github.com/python/cpython/pull/23056 too.

-- Inada Naoki <songofacandy@gmail.com>
I added memory usage data by tracemalloc.

```
# Python 3.9 w/ old semantics
$ python3 ann_test.py 1
code size: 121011
memory: (385200, 385200)
unmarshal: avg: 0.3341682574478909 +/- 3.700437551781949e-05
exec: avg: 0.4067857594229281 +/- 0.0006858555167675445

# Python 3.9 w/ PEP 563 semantics
$ python3 ann_test.py 2
code size: 121070
memory: (398675, 398675)
unmarshal: avg: 0.3352349083404988 +/- 7.749102039824168e-05
exec: avg: 0.24610224328935146 +/- 0.0008628035427956459

# master + optimization w/ PEP 563 semantics
$ ./python ~/ann_test.py 2
code size: 110488
memory: (193572, 193572)
unmarshal: avg: 0.31316645480692384 +/- 0.00011766086337841035
exec: avg: 0.11456295938696712 +/- 0.0017481202239372398

# co_annotations + optimization w/ PEP 649 semantics
$ ./python ~/ann_test.py 3
code size: 204963
memory: (208273, 208273)
unmarshal: avg: 0.597023528907448 +/- 0.00016614519056599577
exec: avg: 0.09546191191766411 +/- 0.00018099485135812695
```

Summary:

* Both PEP 563 and PEP 649 have lower memory consumption than Python 3.9.
* Import time (unmarshal+exec) is about 0.7sec on old semantics and PEP 649, and 0.43sec on PEP 563.

On Thu, Apr 15, 2021 at 10:31 AM Inada Naoki <songofacandy@gmail.com> wrote:
I created simple benchmark: https://gist.github.com/methane/abb509e5f781cc4a103cc450e1e7925d
This benchmark creates 1000 annotated functions and measure time to load and exec. And here is the result. All interpreters are built without --pydebug, --enable-optimization, and --with-lto.
``` # Python 3.9 w/ stock semantics
$ python3 ~/ann_test.py 1 code size: 121011 unmarshal: avg: 0.33605549649801103 +/- 0.007382938279889738 exec: avg: 0.395090194279328 +/- 0.001004608380122509
# Python 3.9 w/ PEP 563 semantics
$ python3 ~/ann_test.py 2 code size: 121070 unmarshal: avg: 0.3407619891455397 +/- 0.0011833618746421965 exec: avg: 0.24590165729168803 +/- 0.0003123404336687428
# master branch w/ PEP 563 semantics
$ ./python ~/ann_test.py 2 code size: 149086 unmarshal: avg: 0.45410854648798704 +/- 0.00107521956753799 exec: avg: 0.11281821667216718 +/- 0.00011939747308270317
# master branch + optimization (*) w/ PEP 563 semantics $ ./python ~/ann_test.py 2 code size: 110488 unmarshal: avg: 0.3184352931333706 +/- 0.0015278719180908732 exec: avg: 0.11042822999879717 +/- 0.00018108884723599264
# co_annotations reference implementation w/ PEP 649 semantics
$ ./python ~/ann_test.py 3 code size: 229679 unmarshal: avg: 0.6402394526172429 +/- 0.0006400500128250688 exec: avg: 0.09774857209995388 +/- 9.275466265195788e-05
# co_annotations reference implementation + optimization (*) w/ PEP 649 semantics
$ ./python ~/ann_test.py 3 code size: 204963 unmarshal: avg: 0.5824743471574039 +/- 0.007219086642131638 exec: avg: 0.09641968684736639 +/- 0.0001416784753249878 ```
(*) I found constant folding creates new tuple every time even though same tuple is in constant table. See https://github.com/python/cpython/pull/25419 For co_annotations, I cherry-pick https://github.com/python/cpython/pull/23056 too.
-- Inada Naoki <songofacandy@gmail.com>
-- Inada Naoki <songofacandy@gmail.com>
Thanks for doing this! I don't think PEP 649 is going to be accepted or rejected based on either performance or memory usage, but it's nice to see you confirmed that its performance and memory impact is acceptable.

If I run "ann_test.py 1", the annotations are already turned into strings. Why do you do it that way? It makes stock semantics look better, because manually stringized annotations are much faster than evaluating real expressions. It seems to me that the test would be more fair if test 1 used real annotations. So I added this to "lines":

    from types import SimpleNamespace
    foo = SimpleNamespace()
    foo.bar = SimpleNamespace()
    foo.bar.baz = float

I also changed quote(t) so it always returned t unchanged. When I ran it that way, stock semantics "exec" time got larger.

Cheers, //arry/

On 4/14/21 6:44 PM, Inada Naoki wrote:
I added memory usage data by tracemalloc.
``` # Python 3.9 w/ old semantics $ python3 ann_test.py 1 code size: 121011 memory: (385200, 385200) unmarshal: avg: 0.3341682574478909 +/- 3.700437551781949e-05 exec: avg: 0.4067857594229281 +/- 0.0006858555167675445
# Python 3.9 w/ PEP 563 semantics $ python3 ann_test.py 2 code size: 121070 memory: (398675, 398675) unmarshal: avg: 0.3352349083404988 +/- 7.749102039824168e-05 exec: avg: 0.24610224328935146 +/- 0.0008628035427956459
# master + optimization w/ PEP 563 semantics $ ./python ~/ann_test.py 2 code size: 110488 memory: (193572, 193572) unmarshal: avg: 0.31316645480692384 +/- 0.00011766086337841035 exec: avg: 0.11456295938696712 +/- 0.0017481202239372398
# co_annotations + optimization w/ PEP 649 semantics $ ./python ~/ann_test.py 3 code size: 204963 memory: (208273, 208273) unmarshal: avg: 0.597023528907448 +/- 0.00016614519056599577 exec: avg: 0.09546191191766411 +/- 0.00018099485135812695 ```
Summary:
* Both PEP 563 and PEP 649 have lower memory consumption than Python 3.9. * Import time (unmarshal+exec) is about 0.7sec on old semantics and PEP 649, and 0.43sec on PEP 563.
On Thu, Apr 15, 2021 at 10:31 AM Inada Naoki <songofacandy@gmail.com> wrote:
I created simple benchmark: https://gist.github.com/methane/abb509e5f781cc4a103cc450e1e7925d
This benchmark creates 1000 annotated functions and measure time to load and exec. And here is the result. All interpreters are built without --pydebug, --enable-optimization, and --with-lto.
``` # Python 3.9 w/ stock semantics
$ python3 ~/ann_test.py 1 code size: 121011 unmarshal: avg: 0.33605549649801103 +/- 0.007382938279889738 exec: avg: 0.395090194279328 +/- 0.001004608380122509
# Python 3.9 w/ PEP 563 semantics
$ python3 ~/ann_test.py 2 code size: 121070 unmarshal: avg: 0.3407619891455397 +/- 0.0011833618746421965 exec: avg: 0.24590165729168803 +/- 0.0003123404336687428
# master branch w/ PEP 563 semantics
$ ./python ~/ann_test.py 2 code size: 149086 unmarshal: avg: 0.45410854648798704 +/- 0.00107521956753799 exec: avg: 0.11281821667216718 +/- 0.00011939747308270317
# master branch + optimization (*) w/ PEP 563 semantics $ ./python ~/ann_test.py 2 code size: 110488 unmarshal: avg: 0.3184352931333706 +/- 0.0015278719180908732 exec: avg: 0.11042822999879717 +/- 0.00018108884723599264
# co_annotations reference implementation w/ PEP 649 semantics
$ ./python ~/ann_test.py 3 code size: 229679 unmarshal: avg: 0.6402394526172429 +/- 0.0006400500128250688 exec: avg: 0.09774857209995388 +/- 9.275466265195788e-05
# co_annotations reference implementation + optimization (*) w/ PEP 649 semantics
$ ./python ~/ann_test.py 3 code size: 204963 unmarshal: avg: 0.5824743471574039 +/- 0.007219086642131638 exec: avg: 0.09641968684736639 +/- 0.0001416784753249878 ```
(*) I found constant folding creates new tuple every time even though same tuple is in constant table. See https://github.com/python/cpython/pull/25419 For co_annotations, I cherry-pick https://github.com/python/cpython/pull/23056 too.
-- Inada Naoki <songofacandy@gmail.com>
On Thu, Apr 15, 2021 at 11:09 AM Larry Hastings <larry@hastings.org> wrote:
Thanks for doing this! I don't think PEP 649 is going to be accepted or rejected based on either performance or memory usage, but it's nice to see you confirmed that its performance and memory impact is acceptable.
If I run "ann_test.py 1", the annotations are already turned into strings. Why do you do it that way? It makes stock semantics look better, because manually stringized annotations are much faster than evaluating real expressions.
Because `if TYPE_CHECKING` and manually stringified annotations are used in real-world applications. I wanted to mix both use cases. -- Inada Naoki <songofacandy@gmail.com>
I updated the benchmark a little:

* Added a no-annotation mode for baseline performance.
* Better stats output.

https://gist.github.com/methane/abb509e5f781cc4a103cc450e1e7925d

```
# No annotation (master + GH-25419)
$ ./python ~/ann_test.py 0
code size: 102967 bytes
memory: 181288 bytes
unmarshal: avg: 299.301ms +/-1.257ms
exec: avg: 104.019ms +/-0.038ms

# PEP 563 (master + GH-25419)
$ ./python ~/ann_test.py 2
code size: 110488 bytes
memory: 193572 bytes
unmarshal: avg: 313.032ms +/-0.068ms
exec: avg: 108.456ms +/-0.048ms

# PEP 649 (co_annotations + GH-25419 + GH-23056)
$ ./python ~/ann_test.py 3
code size: 204963 bytes
memory: 209257 bytes
unmarshal: avg: 587.336ms +/-2.073ms
exec: avg: 97.056ms +/-0.046ms

# Python 3.9
$ python3 ann_test.py 0
code size: 108955 bytes
memory: 173296 bytes
unmarshal: avg: 333.527ms +/-1.750ms
exec: avg: 90.810ms +/-0.347ms

$ python3 ann_test.py 1
code size: 121011 bytes
memory: 385200 bytes
unmarshal: avg: 334.929ms +/-0.055ms
exec: avg: 400.260ms +/-0.249ms
```

## Rough estimation of annotation overhead

Python 3.9 w/o PEP 563:

* code (pyc) size: +11%
* memory usage: +122% (211 bytes / function)
* import time: +73% (*)

PEP 563:

* code (pyc) size: +7.3%
* memory usage: +0.68% (13.3 bytes / function)
* import time: +4.5%

PEP 649:

* code (pyc) size: +99%
* memory usage: +15% (28 bytes / function)
* import time: +70% (*)

(*) import time can be much slower for complex annotations.

## Conclusion

* PEP 563 is close to "zero overhead" in memory consumption, and its import time overhead is ~5%. Users can write type annotations without worrying about overhead.
* PEP 649 is much better than the old semantics for memory usage and import time, but import time is still longer than for unannotated code.
* The import time overhead comes from unmarshal, not from eval(). If we implement a "lazy load" mechanism for docstrings and annotations, the overhead will become cheaper.
* pyc files become bigger (but who cares?)

I will read the PEP 649 implementation to find missing optimizations other than GH-25419 and GH-23056.

-- Inada Naoki <songofacandy@gmail.com>
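For readers who want to reproduce the shape of these numbers, here is a rough sketch of what such a benchmark measures (an approximation of the idea, not Inada's actual ann_test.py):

```
import marshal
import time

# Generate a module with many annotated functions, compile it once, then time
# unmarshal (marshal.loads) and exec separately, as the import machinery would.
src = "\n".join(
    f"def func{i}(a: int, b: str = 'x') -> float:\n    return {i}"
    for i in range(1000)
)
code = compile(src, "<ann_bench>", "exec")
data = marshal.dumps(code)

t0 = time.perf_counter()
loaded = marshal.loads(data)
t1 = time.perf_counter()
exec(loaded, {"__name__": "ann_bench"})
t2 = time.perf_counter()

print(f"code size: {len(data)} bytes")
print(f"unmarshal: {(t1 - t0) * 1000:.3f}ms   exec: {(t2 - t1) * 1000:.3f}ms")
```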
Thanks Brett Cannon for suggesting to get Samuel Colvin (Pydantic) and me, Sebastián Ramírez (FastAPI and Typer), involved in this.

TL;DR: it seems to me PEP 649 would be incredibly important/useful for Pydantic, FastAPI, Typer, and similar tools, and their communities.

## About FastAPI, Pydantic, Typer

Some of you probably don't know what these tools are, so, in short, FastAPI is a web API framework based on Pydantic. It has had massive growth and adoption, and very positive feedback. FastAPI was included for the first time in the last Python developers survey and it's already the third most used web framework, and apparently the fastest growing one: https://www.jetbrains.com/lp/python-developers-survey-2020/. It was also recently recommended by ThoughtWorks for enterprises: https://www.thoughtworks.com/radar/languages-and-frameworks?blipid=202104087 And it's currently used in lots of organizations, some widely known; chances are your orgs already use it in some way.

Pydantic, in very short, is a library that looks a lot like dataclasses (and also supports them), but it uses the same type annotations not only for type hints, but also for data validation, serialization (e.g. to JSON) and documentation (JSON Schema). The key feature of FastAPI (thanks to Pydantic) is using the same type annotations for _more_ than just type hinting: data validation, serialization, and documentation. All those features are provided by default when building an API with FastAPI and Pydantic.

Typer is a library for building CLIs, based on Click, but using the same ideas from type annotations, from FastAPI and Pydantic.

## Why PEP 649

You can read Samuel's message to this mailing list here: https://mail.python.org/archives/list/python-dev@python.org/thread/7VMJWFGHV... And a longer discussion of how PEP 563 affects Pydantic here: https://github.com/samuelcolvin/pydantic/issues/2678 He has done most of the work to support these additional features from type annotations, so he would have the deepest insight into the tradeoffs/issues.

From my point of view, just being able to use local variables in Pydantic models would be enough to justify PEP 649. With PEP 563, if a developer decides to create a Pydantic model inside a function (say a factory function) they would probably get an error. And it would probably not be obvious that they have to declare the model at the top level of the module.

The main feature of FastAPI and Pydantic is that they are very easy/intuitive to use. People from many backgrounds are now quickly and efficiently building APIs with best practices. I've read some isolated comments from people who were against type annotations in general, saying that these tools justify adopting them. And I've also seen comments from people coming from other languages and fields, adopting Python just to be able to use these tools. Many of these developers are not Python experts, and supporting them and their intuition as much as possible when using these tools would help towards the PSF goal to:
[...] support and facilitate the growth of a diverse and international community of Python programmers.
## Community support

To avoid asking people to spam here, Samuel and I are collecting "likes" in:

* This tweet: https://twitter.com/tiangolo/status/1382800928982642692
* This issue: https://github.com/samuelcolvin/pydantic/issues/2678

I just sent that tweet; I expect/hope it will collect some likes in support by the time you see it.

## Questions

I'm not very familiar with the internals of Python, and I'm not sure how the new syntax for `Union`s using the vertical bar character ("pipe", "|") works. But would PEP 649 still support things like this?

def run(arg: int | str = 0):
    pass

And would it be inspectable at runtime?

## Additional comments

The original author of PEP 563 was Łukasz Langa. I was recently chatting with him about Typer and annotations, and he showed interest, support, and help. I think he probably knows the benefits of the way these libraries use type annotations and I would like/hope to read his opinion on all this. Or alternatively, any possible ideas for how to handle these things in tools like Pydantic.
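A minimal sketch of the "local variables" problem described above, without Pydantic itself: under PEP 563 the annotation becomes a string, and typing.get_type_hints() cannot see names that exist only inside the factory function, whereas PEP 649 would evaluate the annotation as a closure.

```
from __future__ import annotations   # PEP 563 semantics
import typing

def make_model():
    class LocalType:                 # exists only inside make_model()
        pass

    class Model:
        field: LocalType             # stored as the string "LocalType"

    return Model

Model = make_model()
try:
    typing.get_type_hints(Model)     # evaluates the string in module scope
except NameError as e:
    print("runtime introspection fails:", e)
```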
On 4/15/21 2:02 PM, Sebastián Ramírez wrote:
## Questions
I'm not very familiar with the internals of Python, and I'm not sure how the new syntax for `Union`s using the vertical bar character ("pipe", "|") work.
But would PEP 649 still support things like this?:
def run(arg: int | str = 0): pass
And would it be inspectable at runtime?
As far as I can tell, absolutely PEP 649 would support this feature. Under the covers, all PEP 649 is really doing is changing the destination that annotation expressions get compiled to. So anything that works in an annotation with "stock" semantics would work fine with PEP 649 semantics too, barring the exceptions specifically listed in the PEP (e.g. annotations defined in conditionals, walrus operator, etc). Cheers, //arry/
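So at runtime the question above is answerable with ordinary introspection -- under stock semantics today, and under PEP 649 once __annotations__ is first accessed (a small sketch, Python 3.10+ where | unions exist at runtime):

```
import inspect

def run(arg: int | str = 0):
    pass

print(run.__annotations__)                                   # {'arg': int | str}
print(inspect.signature(run).parameters["arg"].annotation)   # int | str
```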
I will read the PEP 649 implementation to find missing optimizations other than GH-25419 and GH-23056.
I found that each "__co_annotation__" has its own name, like "func0.__co_annotation__". It increased pyc size a little. I created a draft pull request for cherry-picking GH-25419 and GH-23056 and using just "__co_annotation__" as the name: https://github.com/larryhastings/co_annotations/pull/9/commits/48a99e0aafa2d...

```
# previous result
$ ./python ~/ann_test.py 3
code size: 204963 bytes
memory: 209257 bytes
unmarshal: avg: 587.336ms +/-2.073ms
exec: avg: 97.056ms +/-0.046ms

# Use single name
$ ./python ~/ann_test.py 3
code size: 182088 bytes
memory: 209257 bytes
unmarshal: avg: 539.841ms +/-0.227ms
exec: avg: 98.351ms +/-0.064ms
```

It reduced code size and unmarshal time by 10%. I confirmed that GH-25419 and GH-23056 work very well: all identical constants are shared. Unmarshal time is still slow. It is caused by unmarshaling the code objects, but I don't know where the bottleneck is: parsing the marshal file, or creating the code objects.

---

Then, I tried to measure method annotation overhead. Code: https://gist.github.com/methane/abb509e5f781cc4a103cc450e1e7925d#file-ann_te... Result:

```
# No annotation
$ ./python ~/ann_test_method.py 0
code size: 113019 bytes
memory: 256008 bytes
unmarshal: avg: 336.665ms +/-6.185ms
exec: avg: 176.791ms +/-3.067ms

# PEP 563
$ ./python ~/ann_test_method.py 2
code size: 120532 bytes
memory: 269004 bytes
unmarshal: avg: 348.285ms +/-0.102ms
exec: avg: 176.933ms +/-4.343ms

# PEP 649 (all optimizations included)
$ ./python ~/ann_test_method.py 3
code size: 196135 bytes
memory: 436565 bytes
unmarshal: avg: 579.680ms +/-0.147ms
exec: avg: 259.781ms +/-7.087ms
```

PEP 563 vs 649:

* code size: +63%
* memory: +62%
* import time: +60%

PEP 649 annotation overhead (compared with no annotation):

* code size: +83 bytes/method
* memory: +180 bytes/method
* import time: +326 us/method

This is disappointing, because having thousands of methods is very common for web applications. Unlike the simple function case, PEP 649 creates a function object instead of a code object for the __co_annotation__ of methods. That causes this overhead. Can we avoid creating functions for each annotation?

-- Inada Naoki <songofacandy@gmail.com>
On 4/15/21 9:24 PM, Inada Naoki wrote:
Unlike the simple function case, PEP 649 creates a function object instead of a code object for the __co_annotation__ of methods. That causes this overhead. Can we avoid creating functions for each annotation?
As the implementation of PEP 649 currently stands, there are two reasons why the compiler might pre-bind the __co_annotations__ code object to a function, instead of simply storing the code object:

* If the annotations refer to a closure ("freevars" is nonzero), or
* If the annotations /possibly/ refer to a class variable (the annotations code object contains either LOAD_NAME or LOAD_CLASSDEREF).

If the annotations refer to a closure, then the code object also needs to be bound with the "closure" tuple. If the annotations possibly refer to a class variable, then the code object also needs to be bound with the current "f_locals" dict. (Both could be true.)

Unfortunately, when generating annotations on a method, references to builtins (e.g. "int", "str") seem to generate LOAD_NAME instructions instead of LOAD_GLOBAL. Which means pre-binding the function happens pretty often for methods. I believe in your benchmark it will happen every time.

There's a lot of code, and a lot of runtime data structures, inside compile.c and symtable.c behind the compiler's decision about whether something is NAME vs GLOBAL vs DEREF etc, and I wasn't comfortable with seeing if I could fix it. Anyway I assume it wasn't "fixable". The compiler would presumably already prefer to generate LOAD_GLOBAL vs LOAD_NAME, because LOAD_GLOBAL would be cheaper every time for a global or builtin. The fact that it already doesn't do so implies that it can't.

At the moment I have only one idea for a possible optimization, as follows. Instead of binding the function object immediately, it /might/ be cheaper to write the needed values into a tuple, then only actually bind the function object on demand (like normal). I haven't tried this because I assumed the difference at runtime would be negligible. On one hand, you're creating a function object; on the other you're creating a tuple. Either way you're creating an object at runtime, and I assumed that bound functions weren't /that/ much more expensive than tuples. Of course I could be very wrong about that. The other thing is, it would be a lot of work to even try the experiment. Also, it's an optimization, and I was more concerned with correctness... and getting it done and getting this discussion underway.

What follows are my specific thoughts about how to implement this optimization. In this scenario, the logic in the compiler that generates the code object would change to something like this:

    has_closure = co.co_freevars != 0
    has_load_name = co.co_code does not contain LOAD_NAME or LOAD_CLASSDEREF bytecodes
    if not (has_closure or has_load_name):
        co_ann = co
    elif has_closure and (not has_load_name):
        co_ann = (co, freevars)
    elif (not has_closure) and has_load_name:
        co_ann = (co, f_locals)
    else:
        co_ann = (co, freevars, f_locals)
    setattr(o, "__co_annotations__", co_ann)

(The compiler would have to generate instructions creating the tuple and setting its members, then storing the resulting object on the object with the annotations.)

Sadly, we can't pre-create this "co_ann" tuple as a constant and store it in the .pyc file, because the whole point of the tuple is to contain one or more objects only created at runtime.

The code implementing __co_annotations__ in the three objects (function, class, module) would examine the object it got.
If it was a code object, it would bind it; if it was a tuple, it would unpack the tuple and use the values based on their type:

    # co_ann = internal storage for __co_annotations__
    if isinstance(co_ann, FunctionType) or (co_ann is None):
        return co_ann

    co = freevars = locals = None
    if isinstance(co_ann, CodeType):
        co = co_ann
    else:
        assert isinstance(co_ann, tuple)
        assert 1 <= len(co_ann) <= 3
        for o in co_ann:
            if isinstance(o, CodeType):
                assert co is None
                co = o
            elif isinstance(o, tuple):
                assert freevars is None
                freevars = o
            elif isinstance(o, dict):
                assert locals is None
                locals = o
            else:
                raise ValueError(f"illegal value in co_annotations tuple: {o!r}")

    co_ann = make_function(co, freevars=freevars, locals=locals)
    return co_ann

If you experiment with this approach, I'd be glad to answer questions about it, either here or on GitHub, etc.

Cheers,

//arry/
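A rough pure-Python model of the lazy-binding idea sketched above (illustrative only: the helper name resolve_co_annotations is invented, the real logic would live in C, and the f_locals case needs interpreter support that has no pure-Python equivalent):

```
# Pure-Python model of binding __co_annotations__ on demand.  Purely
# illustrative; the stored layouts mirror the (co, freevars, f_locals)
# tuple described above, but none of these names are real CPython APIs.
import types

def resolve_co_annotations(co_ann, globals_dict):
    """Turn the stored co_ann value into a callable annotation function."""
    if co_ann is None or isinstance(co_ann, types.FunctionType):
        return co_ann                       # absent, or already bound

    code = closure = class_locals = None
    if isinstance(co_ann, types.CodeType):
        code = co_ann                       # the simple, common case
    else:
        for item in co_ann:                 # (code, freevars?, f_locals?)
            if isinstance(item, types.CodeType):
                code = item
            elif isinstance(item, tuple):
                closure = item              # cells for enclosing-scope names
            elif isinstance(item, dict):
                class_locals = item         # class dict for LOAD_NAME lookups
            else:
                raise ValueError(f"illegal value in co_annotations tuple: {item!r}")
    assert code is not None

    fn = types.FunctionType(code, globals_dict, closure=closure)
    # Making class_locals visible to LOAD_NAME when fn runs is the part
    # that needs interpreter support (injecting it into f_locals); there
    # is no faithful pure-Python equivalent, so it is omitted here.
    return fn
```

Whether building and later unpacking such a tuple actually beats creating the bound function eagerly is exactly the measurement left open above.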
On Fri, 16 Apr 2021, 3:14 pm Larry Hastings, <larry@hastings.org> wrote:
Anyway I assume it wasn't "fixable". The compiler would presumably already prefer to generate LOAD_GLOBAL vs LOAD_NAME, because LOAD_GLOBAL would be cheaper every time for a global or builtin. The fact that it already doesn't do so implies that it can't.
Metaclass __prepare__ methods can inject names into the class namespace that the compiler doesn't know about, so yeah, it unfortunately has to be conservative and use LOAD_NAME in class level code. Cheers, Nick.
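A small self-contained illustration of the conservatism Nick describes (the metaclass and names here are invented for the example): a __prepare__ method can pre-seed the class namespace so that a name that looks like a builtin resolves to something else while the class body runs, which is why the compiler must emit LOAD_NAME in class-level code.

```
# Why class bodies must use LOAD_NAME: __prepare__ can inject names the
# compiler cannot see.  (Illustrative example; names are made up.)
class SneakyMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwds):
        # Pre-populate the class namespace with a binding that shadows
        # the builtin 'int' while the class body executes.
        return {"int": "not the builtin"}

class Plain:
    seen = int                    # LOAD_NAME finds nothing local -> builtin int

class Sneaky(metaclass=SneakyMeta):
    seen = int                    # LOAD_NAME finds the injected entry first

print(Plain.seen)                 # <class 'int'>
print(Sneaky.seen)                # not the builtin
```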
On Sat, 17 Apr 2021 at 8:30, Nick Coghlan (<ncoghlan@gmail.com>) wrote:
On Fri, 16 Apr 2021, 3:14 pm Larry Hastings, <larry@hastings.org> wrote:
Anyway I assume it wasn't "fixable". The compiler would presumably already prefer to generate LOAD_GLOBAL vs LOAD_NAME, because LOAD_GLOBAL would be cheaper every time for a global or builtin. The fact that it already doesn't do so implies that it can't.
Metaclass __prepare__ methods can inject names into the class namespace that the compiler doesn't know about, so yeah, it unfortunately has to be conservative and use LOAD_NAME in class level code.
But of course, most metaclasses don't. I wonder if there are cases where
the compiler can statically figure out that there are no metaclass shenanigans going on, and emit LOAD_GLOBAL anyway. It seems safe at least when the class has no base classes and no metaclass=.
On Sun, 18 Apr 2021, 1:59 am Jelle Zijlstra, <jelle.zijlstra@gmail.com> wrote:
On Sat, 17 Apr 2021 at 8:30, Nick Coghlan (<ncoghlan@gmail.com>) wrote:
Metaclass __prepare__ methods can inject names into the class namespace that the compiler doesn't know about, so yeah, it unfortunately has to be conservative and use LOAD_NAME in class level code.
But of course, most metaclasses don't. I wonder if there are cases where
the compiler can statically figure out that there are no metaclass shenanigans going on, and emit LOAD_GLOBAL anyway. It seems safe at least when the class has no base classes and no metaclass=.
Aye, that particular case is one the symtable pass could at least theoretically identify. As soon as there is a name to resolve in the class header, though, it's no longer safe for the compiler to make assumptions :( Cheers, Nick.
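For concreteness, here is one way to observe the current behavior for both kinds of class header (an illustrative snippet, not from the thread): today even a bare class header compiles builtin lookups in its body to LOAD_NAME, and that bare-header case is the one the symtable pass could theoretically prove safe for LOAD_GLOBAL.

```
# Both class bodies below currently compile 'int' to LOAD_NAME; only the
# first ("class Bare:") is a candidate for the hypothetical LOAD_GLOBAL
# optimization discussed above, since its header names nothing.
import dis
import types

src = (
    "class Bare:\n"
    "    x = int\n"
    "\n"
    "class WithHeader(Base, metaclass=Meta):\n"
    "    x = int\n"
)
module_code = compile(src, "<demo>", "exec")   # Base/Meta never need to exist
for const in module_code.co_consts:
    if isinstance(const, types.CodeType):
        print(f"--- {const.co_name} ---")
        dis.dis(const)        # look for "LOAD_NAME  int" in both bodies
```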
participants (14)

- Antoine Pitrou
- Baptiste Carvello
- Brett Cannon
- Carl Meyer
- Eric V. Smith
- Guido van Rossum
- Inada Naoki
- Jelle Zijlstra
- Jim J. Jewett
- Larry Hastings
- Nick Coghlan
- Paul Bryan
- Sebastián Ramírez
- Terry Reedy