Hi everybody,

Firstly, this is just an email out of personal interest and has nothing to do directly with development, so this mailing list may not be quite the right place (I am going to hang around on #pypy...). I am a student, generally interested in PyPy development and especially in the RPython language, and I have some general questions.

What is your view on the new typing/mypy work that is happening on python-dev (PEP 484)? What I mean is: will this make the typing system of RPython evolve? Could the RTyper be adapted to work on PEP 484 annotations (and would it actually be useful)? I read a bit of the paper about RPython listed in the docs and I had the feeling that your typing is a bit more low-level. The quite different goals and constraints of the two type systems may explain why they look different, but could there be some interaction (in one way or another)?

Another related question: it may be too early to think about this, but would it be reasonable to expect that PyPy will better optimize PEP-484-annotated Python programs? Trusting these user annotations is of course a problem, so a PyPy option could specify that we want it to trust the type annotations. It might then be worthwhile to just write programs in RPython directly.

These questions are quite hypothetical, so I don't expect concrete answers, just thoughts! If someone wants to react to this or point me to other (theoretical) resources about RPython... :)

Bonsoir,
Peio
Hi,

On 25 January 2015 at 00:05, Ho33e5 <ho33e5@gmail.com> wrote:
> What is your view on the new typing/mypy work that is happening on python-dev (PEP 484)? What I mean is: will this make the typing system of RPython evolve? Could the RTyper be adapted to work on PEP 484 annotations (and would it actually be useful)?
You are mistaking RPython for "Python-with-type-annotations". It is not: RPython does not have explicit type annotations. I think this alone invalidates the rest of your discussion about RPython. So let's instead talk about "APython", which would be Python-with-type-annotations. (If we're designing some new language, it can be like Python 3.x for x large enough to include support for the PEP 484 syntax, as opposed to RPython, which is a subset of Python 2.)
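A concrete sketch of the contrast (purely illustrative, not code taken from either project): the first function is annotated in PEP 484 style; the second is the kind of plain, annotation-free code RPython accepts, where the translation toolchain infers all the types by whole-program analysis starting from an entry point.

    from typing import List

    # PEP 484 style: the types are spelled out in the source.
    def mean(xs: List[float]) -> float:
        return sum(xs) / len(xs)

    # RPython style: no annotations anywhere.  The toolchain sees that this
    # function is only ever reached with a list of floats and specializes it
    # accordingly (whole-program type inference from the entry point).
    def mean_rpython(xs):
        total = 0.0
        for x in xs:
            total += x
        return total / len(xs)

    def entry_point(argv):
        print(mean_rpython([1.0, 2.0, 3.0]))
        return 0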
> Another related question: it may be too early to think about this, but would it be reasonable to expect that PyPy will better optimize PEP-484-annotated Python programs? Trusting these user annotations is of course a problem, so a PyPy option could specify that we want it to trust the type annotations.
The type annotations have not been designed with low-level performance in mind. For example, there is no standard annotation that means "this is a machine-sized integer"; you only have "this is a Python 3 int object", which is a Python 2 "int-or-long". Similarly, PEP 484 says nothing about specifying the type of instance attributes, for example.

So APython would need a subtly different set of types to work on. Let's ignore the problem that this breaks compatibility with any other tool written for PEP 484. It is very unclear how much speed PyPy could potentially gain. Basically, we would trade removing an unknown but likely small fraction of the guards emitted by the JIT compiler against the very real result of PyPy segfaulting as soon as there is a mismatch somewhere. At least in C++ the compiler does some real type checking and reports the errors to you. Supporting the APython approach would basically be a lot of hard work (for us) with the end result of giving users a sure way to shoot themselves in the foot.

I would argue that there are plenty of old ways to shoot yourself in the foot which are at least better supported by a large number of tools. For example, C++ comes with two essential tools that would be missing here: the first is the C++ compiler itself, which does a better job at reporting typing errors than any best-effort type checker can; the second is gdb. I would argue that you first need equivalents of these two tools. Exact type checking is stricter than best-effort. I doubt that it is possible to write such a tool that would accept large 3rd-party Python libraries which have not been structured for type-checking in the first place. If you think otherwise, then I would say it is your job to write it --- this would seem like a reasonable first step along this path :-)

A bientôt,

Armin.
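To make the machine-sized-integer point above concrete, a tiny runnable sketch (assuming nothing beyond the standard typing module): the annotation below is satisfied by arbitrarily large integers, so it gives a compiler no license to use machine-word arithmetic.

    from typing import List

    def total(xs: List[int]) -> int:
        # PEP 484's `int` is Python's arbitrary-precision integer, so this
        # annotation says nothing about values fitting in a machine word:
        # the call below is perfectly well-typed.
        s = 0
        for x in xs:
            s += x
        return s

    print(total([2**100, 2**100]))  # 2535301200456458802993406410752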
Wouldn't it be possible to get a performance improvement from type annotations without sacrificing correctness?

From the perspective of static compilation, if you have an infinitely powerful type inference engine, there shouldn't be any difference between having type annotations or not. The reason that programmer-supplied type annotations are useful is that, in practice, compilers are not infinitely smart, and for efficiency you need to judiciously apply widening to the type inference process: e.g. telling the compiler "regardless of the actual types this function is called with, it's written to operate generically on super type C, so you can save work on the type inference and just assume type C here".

I think the same effect could be useful in dynamic compilation. You have to place guards to ensure correct behavior, but there are a lot of different places and methods you could use to insert guards with the same effect, and you pretty much have to guess which one is best. Programmer-supplied type annotations could be used as a hint to place guards more intelligently, without sacrificing correctness.

Of course, I haven't done any PyPy development, so I don't know how feasible this is in practice.
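To illustrate what "placing a guard" means here, a toy hand-written sketch (nothing like PyPy's actual machinery): a trace specialized for the int case keeps one type check, a fast path, and a fall-back to the generic code when the check fails.

    def generic_add(x, y):
        # the interpreter's fully generic path: works for any types
        return x + y

    def add_specialized_for_int(x, y, fallback=generic_add):
        # guard: the types that were observed when the trace was recorded
        if type(x) is int and type(y) is int:
            return x + y          # fast path baked into the trace
        return fallback(x, y)     # guard failed: leave the trace

    print(add_specialized_for_int(2, 3))    # stays on the fast path
    print(add_specialized_for_int(2.5, 3))  # guard fails, falls back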
In theory it's possible. In practice I struggle to imagine an example where it would make a real difference (unlike other hints, e.g. "this value tends to be smaller than 5" or "this list tends to store only/mostly integers", etc.).
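For what it's worth, the "list of only integers" case is a property PyPy already discovers at runtime through its list strategies (such a list is stored unboxed), with no annotation involved. A small runnable illustration, using only the standard library; the absolute numbers depend entirely on the interpreter and machine:

    import random
    import time

    def sum_list(xs):
        s = 0
        for x in xs:
            s = s + x
        return s

    homogeneous = [random.randrange(100) for _ in range(10**6)]  # ints only
    mixed = list(homogeneous)
    mixed[0] = 0.0  # a single float forces the generic object representation

    for name, xs in [("all ints", homogeneous), ("mixed", mixed)]:
        t0 = time.time()
        for _ in range(20):
            sum_list(xs)
        print(name, time.time() - t0)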
participants (4)

- Armin Rigo
- Ho33e5
- Maciej Fijalkowski
- Robert Grosse