Cofunctions PEP - Revision 4
Here's an updated version of the PEP reflecting my recent suggestions on how to eliminate 'codef'.

PEP: XXX
Title: Cofunctions
Version: $Revision$
Last-Modified: $Date$
Author: Gregory Ewing <greg.ewing@canterbury.ac.nz>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 13-Feb-2009
Python-Version: 3.x
Post-History:

Abstract
========

A syntax is proposed for defining and calling a special type of generator called a 'cofunction'. It is designed to provide a streamlined way of writing generator-based coroutines, and allow the early detection of certain kinds of error that are easily made when writing such code, which otherwise tend to cause hard-to-diagnose symptoms.

This proposal builds on the 'yield from' mechanism described in PEP 380, and describes some of the semantics of cofunctions in terms of it. However, it would be possible to define and implement cofunctions independently of PEP 380 if so desired.

Specification
=============

Cofunction definitions
----------------------

A cofunction is a special kind of generator, distinguished by the presence of the keyword ``cocall`` (defined below) at least once in its body. It may also contain ``yield`` and/or ``yield from`` expressions, which behave as they do in other generators.

From the outside, the distinguishing feature of a cofunction is that it cannot be called the same way as an ordinary function. An exception is raised if an ordinary call to a cofunction is attempted.

Cocalls
-------

Calls from one cofunction to another are made by marking the call with a new keyword ``cocall``. The expression

::

    cocall f(*args, **kwds)

is evaluated by first checking whether the object ``f`` implements a ``__cocall__`` method. If it does, the cocall expression is equivalent to

::

    yield from f.__cocall__(*args, **kwds)

except that the object returned by __cocall__ is expected to be an iterator, so the step of calling iter() on it is skipped.

If ``f`` does not have a ``__cocall__`` method, or the ``__cocall__`` method returns ``NotImplemented``, then the cocall expression is treated as an ordinary call, and the ``__call__`` method of ``f`` is invoked.

Objects which implement __cocall__ are expected to return an object obeying the iterator protocol. Cofunctions respond to __cocall__ the same way as ordinary generator functions respond to __call__, i.e. by returning a generator-iterator.

Certain objects that wrap other callable objects, notably bound methods, will be given __cocall__ implementations that delegate to the underlying object.

Grammar
-------

The full syntax of a cocall expression is described by the following grammar lines:

::

    atom: cocall | <existing alternatives for atom>
    cocall: 'cocall' atom cotrailer* '(' [arglist] ')'
    cotrailer: '[' subscriptlist ']' | '.' NAME

Note that this syntax allows cocalls to methods and elements of sequences or mappings to be expressed naturally. For example, the following are valid:

::

    y = cocall self.foo(x)
    y = cocall funcdict[key](x)
    y = cocall a.b.c[i].d(x)

Also note that the final calling parentheses are mandatory, so that for example the following is invalid syntax:

::

    y = cocall f    # INVALID

New builtins, attributes and C API functions
--------------------------------------------

To facilitate interfacing cofunctions with non-coroutine code, there will be a built-in function ``costart`` whose definition is equivalent to

::

    def costart(obj, *args, **kwds):
        try:
            m = obj.__cocall__
        except AttributeError:
            result = NotImplemented
        else:
            result = m(*args, **kwds)
        if result is NotImplemented:
            raise TypeError("Object does not support cocall")
        return result

There will also be a corresponding C API function

::

    PyObject *PyObject_CoCall(PyObject *obj, PyObject *args, PyObject *kwds)

It is left unspecified for now whether a cofunction is a distinct type of object or, like a generator function, is simply a specially-marked function instance. If the latter, a read-only boolean attribute ``__iscofunction__`` should be provided to allow testing whether a given function object is a cofunction.

Motivation and Rationale
========================

The ``yield from`` syntax is reasonably self-explanatory when used for the purpose of delegating part of the work of a generator to another function. It can also be used to good effect in the implementation of generator-based coroutines, but it reads somewhat awkwardly when used for that purpose, and tends to obscure the true intent of the code.

Furthermore, using generators as coroutines is somewhat error-prone. If one forgets to use ``yield from`` when it should have been used, or uses it when it shouldn't have, the symptoms that result can be extremely obscure and confusing.

Finally, sometimes there is a need for a function to be a coroutine even though it does not yield anything, and in these cases it is necessary to resort to kludges such as ``if 0: yield`` to force it to be a generator.

The ``cocall`` construct addresses the first issue by making the syntax directly reflect the intent, that is, that the function being called forms part of a coroutine.

The second issue is addressed by making it impossible to mix coroutine and non-coroutine code in ways that don't make sense. If the rules are violated, an exception is raised that points out exactly what and where the problem is.

Lastly, the need for dummy yields is eliminated by making it possible for a cofunction to call both cofunctions and ordinary functions with the same syntax, so that an ordinary function can be used in place of a cofunction that yields zero times.

Record of Discussion
====================

An earlier version of this proposal required a special keyword ``codef`` to be used in place of ``def`` when defining a cofunction, and disallowed calling an ordinary function using ``cocall``. However, it became evident that these features were not necessary, and the ``codef`` keyword was dropped in the interests of minimising the number of new keywords required.

The use of a decorator instead of ``codef`` was also suggested, but the current proposal makes this unnecessary as well.

It has been questioned whether some combination of decorators and functions could be used instead of a dedicated ``cocall`` syntax. While this might be possible, to achieve equivalent error-detecting power it would be necessary to write cofunction calls as something like

::

    yield from cocall(f)(args)

making them even more verbose and inelegant than an unadorned ``yield from``. It is also not clear whether it is possible to achieve all of the benefits of the cocall syntax using this kind of approach.

Prototype Implementation
========================

An implementation of an earlier version of this proposal in the form of patches to Python 3.1.2 can be found here:

http://www.cosc.canterbury.ac.nz/greg.ewing/python/generators/cofunctions.ht...

If this version of the proposal is received favourably, the implementation will be updated to match.

Copyright
=========

This document has been placed in the public domain.
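For readers following along without the patches, the proposed expansion can be emulated in present-day Python (3.3 or later, which has PEP 380's ``yield from``). The following is a rough sketch only: ``costart`` is copied from the PEP, but the ``cofunction`` class is a hypothetical stand-in for the compiler support the PEP proposes, and ``add_one``/``main`` are made-up examples.

::

    def costart(obj, *args, **kwds):
        # As specified above: dispatch on __cocall__, complain otherwise.
        try:
            m = obj.__cocall__
        except AttributeError:
            result = NotImplemented
        else:
            result = m(*args, **kwds)
        if result is NotImplemented:
            raise TypeError("Object does not support cocall")
        return result

    class cofunction:
        # Hypothetical stand-in for the proposed compiler magic: the wrapped
        # generator function is reachable only through __cocall__, and an
        # ordinary call raises, as the PEP specifies.
        def __init__(self, func):
            self.func = func
        def __cocall__(self, *args, **kwds):
            return self.func(*args, **kwds)    # a generator-iterator
        def __call__(self, *args, **kwds):
            raise TypeError("cofunctions cannot be called directly")

    @cofunction
    def add_one(x):
        yield                  # a bare suspension point
        return x + 1

    @cofunction
    def main():
        # 'result = cocall add_one(41)' would expand to roughly this:
        result = yield from costart(add_one, 41)
        print(result)          # -> 42

    for _ in costart(main):    # a trivial trampoline: drive to completion
        pass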
Perhaps it's the "cocall" keyword that could be removed, rather than "codef"? A revised semantics for "codef" could cause the body to use the most recent PEP revision's "__cocall__ or __call__" mechanism for all function calls, perhaps at the expense of some runtime efficiency.

p
Paul Du Bois wrote:
Perhaps it's the "cocall" keyword that could be removed, rather than "codef"? A revised semantics for "codef" could cause the body to use the most recent PEP revision's "__cocall__ or __call__" mechanism for all function calls, perhaps at the expense of some runtime efficiency.
Thinking about it overnight, I came to exactly the same conclusion! This is actually the idea I had in mind right back at the beginning.

I think there are some good arguments in favour of it. If cocall sites have to be marked in some way, then when you change your mind about whether a function is a generator or not, you have to track down all the places where the function is called and change them as well. If that causes the enclosing functions to also become generators, then you have to track down all the calls to them as well, etc. etc. I can envisage this being a major hassle in a large program.

Whereas if we mark the functions instead of the calls, although some changes will still be necessary, there ought to be far fewer of them. Generally one tends to call functions more often than one defines them.

Also, it seems to me that changing 'def' into 'codef' is a far less intrusive change than sprinkling some kind of call marker throughout the body. It means we don't have to invent a weird new calling syntax. It also means you can read the function and think about it as normal code instead of having to be aware at every point of what kind of thing you're calling. It's more duck-typish.

So now I'm thinking that my original instinct was right: cofunctions should be functions that call things in a different way.

-- Greg
On Wed, Aug 11, 2010 at 4:17 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Paul Du Bois wrote:
Perhaps it's the "cocall" keyword that could be removed, rather than "codef"? A revised semantics for "codef" could cause the body to use the most recent PEP revision's "__cocall__ or __call__" mechanism for all function calls, perhaps at the expense of some runtime efficiency.
Thinking about it overnight, I came to exactly the same conclusion! This is actually the idea I had in mind right back at the beginning.
I think there are some good arguments in favour of it. If cocall sites have to be marked in some way, then when you change your mind about whether a function is a generator or not, you have to track down all the places where the function is called and change them as well.
If that causes the enclosing functions to also become generators, then you have to track down all the calls to them as well, etc. etc. I can envisage this being a major hassle in a large program.
Whereas if we mark the functions instead of the calls, although some changes will still be necessary, there ought to be far fewer of them. Generally one tends to call functions more often than one defines them.
There still has to be some weird way to call cofunctions from regular functions. Changing the single definition of a function from "def" to "codef" means revisiting all the sites which call that function in the body of regular functions, and pushing the change up the stack as you mentioned. I think this is the wrong direction.

But, if you want to head that way, why not make calling a cofunction from a function also transparent, and exhaust the iterator when the function is called? Then it never matters which kind of function or cofunction you call from anywhere; you can just magically change the control flow by adding "co" to your "def"s.
Also, it seems to me that changing 'def' into 'codef' is a far less intrusive change than sprinkling some kind of call marker throughout the body. It means we don't have to invent a weird new calling syntax. It also means you can read the function and think about it as normal code instead of having to be aware at every point of what kind of thing you're calling. It's more duck-typish.
Again, marking the points at which your function could be suspended is a very important feature, in my mind. I would stick with explicitly using "yield from" where needed rather than magically hiding it. -Greg
ghazel@gmail.com wrote:
There still has to be some weird way to call cofunctions from regular functions. Changing the single definition of a function from "def" to "codef" means revisiting all the sites which call that function in the body of regular functions, and pushing the change up the stack as you mentioned.
Yes, but when using generators as coroutines, I believe that invoking a coroutine from a non-coroutine will be a relatively rare thing to do. Essentially you only do it when starting a new coroutine, and most of the time it can be hidden inside whatever library you're using to schedule your coroutines. In each of my scheduler examples, there is only one place where this happens.

It's not particularly weird, either -- just a matter of wrapping costart() around it, which is a normal function, no magic involved.
I think this is the wrong direction. But, if you want to head that way, why not make calling a cofunction from a function also transparent, and exhaust the iterator when the function is called?
Because this is almost always the *wrong* thing to do. The cofunction you're calling is expecting to be able to suspend the whole stack of calls right back up to the trampoline, and by automatically exhausting it you're preventing it from being able to do so. Calling a cofunction from a non-cofunction is overwhelmingly likely to be an error, and should be reported as such. For cases where you really do want to exhaust it, a function could be provided for that purpose, but you should have to make a conscious decision to use it.
Again, marking the points at which your function could be suspended is a very important feature, in my mind.
I'm still very far from convinced about that. Or at least I'm not convinced that the benefits of such awareness justify the maintenance cost of keeping the call markers up to date in the face of program changes.

Also, consider that if cocall is made to work on both ordinary functions and cofunctions, there is nothing to stop you from simply marking *every* call with cocall just on the offchance. People being basically lazy, I can well imagine someone doing this, and then they've lost any suspendability-awareness benefit that the call markers might bring.

Even if they don't go to that extreme, there is nothing to ensure that cocall markers are removed when no longer necessary, so redundant cocalls are likely to accumulate over time, to give misleading indications to future maintainers.

-- Greg
On Wed, Aug 11, 2010 at 11:31 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
ghazel@gmail.com wrote:
I think this is the wrong direction. But, if you want to head that way, why not make calling a cofunction from a function also transparent, and exhaust the iterator when the function is called?
Because this is almost always the *wrong* thing to do. The cofunction you're calling is expecting to be able to suspend the whole stack of calls right back up to the trampoline, and by automatically exhausting it you're preventing it from being able to do so.
Right, this is also why you do not always join a thread immediately after launching it.

So if cofunctions have a special way of launching and are preemptable at unmarked points and terminate at some time in the future which is based on an indeterminate number of iterations, why not just use threads? codef could simply be a decorator, something like this:

    def codef(f):
        t = threading.Thread(target=f)
        t.start()
        while t.is_alive():
            t.join(0.010)
            yield
Calling a cofunction from a non-cofunction is overwhelmingly likely to be an error, and should be reported as such. For cases where you really do want to exhaust it, a function could be provided for that purpose, but you should have to make a conscious decision to use it.
Again, marking the points at which your function could be suspended is a very important feature, in my mind.
I'm still very far from convinced about that. Or at least I'm not convinced that the benefits of such awareness justify the maintenance cost of keeping the call markers up to date in the face of program changes.
Also, consider that if cocall is made to work on both ordinary functions and cofunctions, there is nothing to stop you from simply marking *every* call with cocall just on the offchance. People being basically lazy, I can well imagine someone doing this, and then they've lost any suspendability-awareness benefit that the call markers might bring.
Even if they don't go to that extreme, there is nothing to ensure that cocall markers are removed when no longer necessary, so redundant cocalls are likely to accumulate over time, to give misleading indications to future maintainers.
The important thing about a cooperation point is that you are specifying where it is safe to pause your function - even if it is not paused. The accumulation of these could eventually be noisy, but it is safe. If someone decorates all their calls with cocall, they have just written a bunch of bugs, and it was hard to do that. Automatically adding cocall everywhere creates a bunch of bugs which they are unaware of. -Greg
On 12/08/10 18:44, ghazel@gmail.com wrote:
why not just use threads?
One reason not to use threads is that they're fairly heavyweight. They use OS resources, and each one needs its own C stack that has to be big enough for everything it might want to do. Switching between threads can be slow, too. In an application that requires thousands of small, cooperating processes, threads are not a good solution. And applications like that do exist -- discrete-event simulation is one example. -- Greg
Greg Ewing wrote:
On 12/08/10 18:44, ghazel@gmail.com wrote:
why not just use threads?
One reason not to use threads is that they're fairly heavyweight. They use OS resources, and each one needs its own C stack that has to be big enough for everything it might want to do. Switching between threads can be slow, too.
In an application that requires thousands of small, cooperating processes, threads are not a good solution. And applications like that do exist -- discrete-event simulation is one example.
Sure, and those use Stackless to solve the problem, which IMHO provides a much more Pythonic approach to these things.

Stackless also works across C function calls, a detail which will become more important as we think about JIT compilers for Python and which is not something we want the average Python programmer to have to worry about. Stackless hides all these details from the Python programmer and works well on the platforms that it supports.

So if we really want such functionality in general Python (which I'm not convinced of, but that may be just me), then I'd suggest to look at an existing and proven approach first.

The techniques used by Stackless to achieve this are nasty, but then Python also ships with ctypes which relies on similar nasty techniques (hidden away in libffi), so I guess the barrier for entry is lower nowadays than it was a few years ago.

-- Marc-Andre Lemburg
M.-A. Lemburg wrote:
Greg Ewing wrote:
In an application that requires thousands of small, cooperating processes,
Sure, and those use Stackless to solve the problem, which IMHO provides a much more Pythonic approach to these things.
At the expense of using a non-standard Python installation, though. I'm trying to design something that can be incorporated into standard Python and work without requiring any deep black magic. Guido has so far rejected any idea of merging Stackless into CPython.

Also I gather that Stackless works by copying pieces of C stack around, which is probably more lightweight than using an OS thread, but not as light as it could be.

And I'm not sure what criteria to judge pythonicity by in all this. Stackless tasklets work without requiring any kind of function or call markers -- everything looks exactly like normal Python code. But Guido and others seem to be objecting to my implicit-cocall proposal on the basis that it looks *too much* like normal code. It seems to me that the same criticism should apply even more to Stackless.
The techniques used by Stackless to achieve this are nasty, but then Python also ships with ctypes which relies on similar nasty techniques
But at least it works provided you write your ctypes code correctly and the library you're calling isn't buggy. I seem to remember that there are certain C libraries that break Stackless because they assume that their C stack frames don't move around. -- Greg
Hello, Greg Ewing:
M.-A. Lemburg wrote:
Sure, and those use Stackless to solve the problem, which IMHO provides a much more Pythonic approach to these things.
At the expense of using a non-standard Python installation, though. I'm trying to design something that can be incorporated into standard Python and work without requiring any deep black magic. Guido has so far rejected any idea of merging Stackless into CPython.
Note that there is also the greenlet library, which provides part of Stackless's functionality (except preemptive scheduling and pickling), and the eventlet/gevent libraries, quite popular solutions for network applications written in Python, which build on greenlet.
And I'm not sure what criteria to judge pythonicity by in all this. Stackless tasklets work without requiring any kind of function or call markers -- everything looks exactly like normal Python code. But Guido and others seem to be objecting to my implicit-cocall proposal on the basis that it looks *too much* like normal code. It seems to me that the same criticism should apply even more to Stackless.
For me, the fact that greenlet code looks like normal code makes it preferable to generator-based coroutines (I think they are an overuse of generator syntax). Also, I don't see the need for an explicit cocall:

1) It would increase the complexity of the language without necessity -- we have no special syntax for threading, so why should we have one for cooperative threads? The semantics are almost the same relative to unthreaded Python, except that with cooperative threading we must explicitly control execution, which has less semantic impact than preemptive threading code, I think.

2) It would hurt code reusability a lot, because we couldn't mix cocalls and calls.

All these issues are solved by the greenlet library, and I think that if Python needs cooperative threads, they should have an API and behave like greenlets.

-- Andrey Popp
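For concreteness, here is (lightly annotated) the canonical example from the greenlet documentation; it requires the third-party greenlet package to be installed. Note that both the definitions and the switch points are ordinary Python, with no new syntax anywhere:

    from greenlet import greenlet   # third-party: pip install greenlet

    def test1():
        print(12)
        gr2.switch()    # suspend test1, resume test2
        print(34)

    def test2():
        print(56)
        gr1.switch()    # suspend test2, resume test1
        print(78)       # never reached: test1 finishes first, and control
                        # then returns to the main greenlet

    gr1 = greenlet(test1)
    gr2 = greenlet(test2)
    gr1.switch()        # prints 12, 56, 34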
On Sat, 14 Aug 2010 20:40:52 +0400 Andrey Popp <8mayday@gmail.com> wrote:
2) That will affect code reusability a lot, because we can't mix cocalls and calls.
All this issues are solved with greenlet library and I think if Python needs cooperative threads they should have API and behave like greenlets.
As far as I understand, the only way Stackless and the like can make things "transparent" is that they monkey-patch core socket functionality (and perhaps other critical built-in functionalities). "Soft" cooperative multithreading isn't naturally transparent, which makes it quite different from OS-level multithreading. Regards Antoine.
Antoine Pitrou wrote:
Andrey Popp wrote:
2) That will affect code reusability a lot, because we can't mix cocalls and calls.
All this issues are solved with greenlet library and I think if Python needs cooperative threads they should have API and behave like greenlets.
As far as I understand, the only way Stackless and the like can make things "transparent" is that they monkey-patch core socket functionality (and perhaps other critical built-in functionalities). "Soft" cooperative multithreading isn't naturally transparent, which makes it quite different from OS-level multithreading.
Stackless does not monkey-patch socket; gevent and eventlet do, but that is not my point. We could just have another socket implementation that cooperates and use it in cooperative code. My point is about how to start coroutines and switch between them: the way it is done in greenlet (though not the implementation) is preferable to explicit cocalls.

-- Andrey Popp
Greg Ewing wrote:
M.-A. Lemburg wrote:
Greg Ewing wrote:
In an application that requires thousands of small, cooperating processes,
Sure, and those use Stackless to solve the problem, which IMHO provides a much more Pythonic approach to these things.
At the expense of using a non-standard Python installation, though. I'm trying to design something that can be incorporated into standard Python and work without requiring any deep black magic. Guido has so far rejected any idea of merging Stackless into CPython.
The problem with doing so is twofold:

1. The use case Stackless addresses is not something an everyday programmer will need, so making CPython more complicated just to add this one extra feature doesn't appear worth the trouble.

2. The Stackless implementation is not very portable, so the feature would only be available on a limited number of platforms.

Apart from that, every new feature will raise the bar for learning Python.

If you could turn your proposal into something more like the Stackless tasklets and move the implementation to an extension module (perhaps with some extra help from new CPython APIs), then I'm sure the proposal would get more followers.
Also I gather that Stackless works by copying pieces of C stack around, which is probably more lightweight than using an OS thread, but not as light as it could be.
Well, it works great in practice and is a proven approach. Copying in C is certainly fast enough for most needs and the black magic is well hidden in Stackless.
And I'm not sure what criteria to judge pythonicity by in all this.
"explicit is better than implicit". Tasklets are normal Python objects wrapping functions. The create of those tasklets is explicit, not implicit via some (special) yield burried deep in the code.
Stackless tasklets work without requiring any kind of function or call markers -- everything looks exactly like normal Python code.
Right, because everything *is* normal Python code. Tasklets are much more like threads to the programmer, i.e. a well understood concept.
But Guido and others seem to be objecting to my implicit-cocall proposal on the basis that it looks *too much* like normal code. It seems to me that the same criticism should apply even more to Stackless.
I think an important part of the criticism is hiding the fact that you are writing a cofunction away inside the function definition itself. Generators have the same problem, but at least you can call them as regular Python functions, and they only work a little differently from normal functions.
The techniques used by Stackless to achieve this are nasty, but then Python also ships with ctypes which relies on similar nasty techniques
But at least it works provided you write your ctypes code correctly and the library you're calling isn't buggy. I seem to remember that there are certain C libraries that break Stackless because they assume that their C stack frames don't move around.
Well, viruses will have a harder time for sure ;-) I am not aware of other use cases that would need to know the location of the stack frame in memory.

BTW: I'm sure that the functionality needed by Stackless could also be moved into a C lib for other languages to use (much like the libffi code).

-- Marc-Andre Lemburg
M.-A. Lemburg wrote:
If you could turn your proposal into something more like the Stackless tasklets and move the implementation to an extension module (perhaps with some extra help from new CPython APIs), then I'm sure the proposal would get more followers.
As far as I can see, it's impossible to do what I'm proposing with standard C and without language support. That's why greenlets have to resort to black magic.
"explicit is better than implicit".
This is a strange argument to be making in favour of Stackless, though, where the fact that you're dealing with suspendable code is almost completely *implicit*.
Tasklets are normal Python objects wrapping functions. The creation of those tasklets is explicit, not implicit via some (special) yield buried deep in the code.
So would you be more in favour of the alternative version, where there is 'codef' but no 'cocall'?
I think an important part of the criticism is hiding the fact that you are writing a cofunction away inside the function definition itself.
The only reason I did that is because Guido turned his nose up at the idea of defining a function using anything other than 'def'. I'm starting to wish I'd stuck to my guns a bit longer in the hope of changing his sense of smell. :-)
Well, viruses will have a harder time for sure ;-) I am not aware of other use cases that would need to know the location of the stack frame in memory.
One way it can happen is that a task sets up a callback referencing something in a stack frame, and then the callback gets invoked while the task is switched out, so the referenced data isn't in the right place. I believe this is the kind of thing that was causing the trouble with Tkinter. -- Greg
Hi Greg, dug this thing up while looking into the current async discussion. On 14.08.10 03:22, Greg Ewing wrote:
M.-A. Lemburg wrote:
Greg Ewing wrote:
In an application that requires thousands of small, cooperating processes,
Sure, and those use Stackless to solve the problem, which IMHO provides a much more Pythonic approach to these things.
At the expense of using a non-standard Python installation, though. I'm trying to design something that can be incorporated into standard Python and work without requiring any deep black magic. Guido has so far rejected any idea of merging Stackless into CPython.
Also I gather that Stackless works by copying pieces of C stack around, which is probably more lightweight than using an OS thread, but not as light as it could be.
So, here I need to correct a bit. What you are describing is the behavior of Stackless 2.0, and also what greenlet does (and eventlet too, for now).

The main thing that makes Stackless 3.x so difficult _is_ that it is as efficient as can be, because no stack slicing is done for 90% of all code. Stackless uses operations to unwind the C stack in most cases. If this were possible in _all_ cases, then all the stack copying would go away, and we would have no machine code at all! But the necessary change to Python would be quite heavy, undoable for a small team.

I left these ideas behind a long time ago and did other projects. But maybe things should be considered again, now that the world has changed so much. Maybe Python 4 could be decoupled from the C stack.

cheers - Chris

-- Christian Tismer
On Wed, Aug 11, 2010 at 11:31 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
ghazel@gmail.com wrote:
Again, marking the points at which your function could be suspended is a very important feature, in my mind.
I'm still very far from convinced about that. Or at least I'm not convinced that the benefits of such awareness justify the maintenance cost of keeping the call markers up to date in the face of program changes.
Also, consider that if cocall is made to work on both ordinary functions and cofunctions, there is nothing to stop you from simply marking *every* call with cocall just on the offchance. People being basically lazy, I can well imagine someone doing this, and then they've lost any suspendability-awareness benefit that the call markers might bring.
Even if they don't go to that extreme, there is nothing to ensure that cocall markers are removed when no longer necessary, so redundant cocalls are likely to accumulate over time, to give misleading indications to future maintainers.
I'm with ghazel on this one.

Long, long ago I used a system that effectively used implicit cocalls. It was a threading OS with non-preemptive scheduling, so instead of locking you'd just refrain from calling any one of the (very few) syscalls that would allow another thread to run. This worked fine when we just got started, but as we started building more powerful abstractions, a common bug was making a call to some abstraction which behind your back, sometimes, perhaps inside more abstraction, would make a syscall. This was nightmarish to debug (especially since it could happen that the offending abstraction was maintained by someone else and had just evolved to make its first syscall).

So, coming back to this, I think I am on the side of explicitly marking cocalls. But whether it's better to use cocall or yield-from, I don't know.

--
--Guido van Rossum (python.org/~guido)
Guido van Rossum wrote:
I'm with ghazel on this one. Long, long ago I used a system that effectively used implicit cocalls. It was a threading OS with non-preemptive scheduling, so instead of locking you'd just refrain from calling any one of the (very few) syscalls that would allow another thread to run.
What this says to me is that the mindset of "I'm using cooperative threading, so I don't need to bother with locks" is misguided. Even with non-preemptive scheduling, it's still a good idea to organise your program so that threads interact only at well defined points using appropriate synchronisation structures, because it makes the program easier to reason about.

Anyway, the scheme I'm proposing is not the same as the scenario you describe above. There are some things you can be sure of: a plain function (defined with 'def') won't ever block. A function defined with 'codef' *could* block, so this serves as a flag that you need to be careful what you do inside it.

If you need to make sure that some section of code can't block, you can factor it out into a plain function and put a comment at the top saying "Warning -- don't ever change this into a cofunction!" So you effectively have a way of creating a critical section, and a mechanism that will alert you if anything changes in a way that would break it.

I think this is a better approach than relying on call markers to alert you of potential blockages. Consider that to verify whether a stretch of code inside a cofunction is non-blocking that way, you need to scan it and examine every function call to ensure that it isn't marked. Whereas by looking up the top and seeing that it starts with 'def', you can tell immediately that blocking is impossible.

-- Greg
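A small runnable sketch of this argument, using plain generators in place of the proposed codef (all names here are made up): the scheduler can only switch tasks at an explicit yield, so a plain function called from a task is atomic by construction.

    from collections import deque

    def atomic_update(state):
        # Plain 'def': no yield inside, so no task switch can happen
        # between these two statements.
        # Warning -- don't ever change this into a cofunction!
        state["a"] += 1
        state["b"] += 1

    def task(state):
        for _ in range(2):
            atomic_update(state)   # runs to completion, uninterrupted
            yield                  # the only point where a switch can occur

    def run(tasks):
        # A trivial round-robin scheduler.
        ready = deque(tasks)
        while ready:
            t = ready.popleft()
            try:
                next(t)
                ready.append(t)
            except StopIteration:
                pass

    state = {"a": 0, "b": 0}
    run([task(state), task(state)])
    print(state)    # {'a': 4, 'b': 4}; a == b holds at every switch point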
On Wed, Aug 11, 2010 at 6:03 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
is evaluated by first checking whether the object ``f`` implements a ``__cocall__`` method. If it does, the cocall expression is equivalent to
::
yield from f.__cocall__(*args, **kwds)
except that the object returned by __cocall__ is expected to be an iterator, so the step of calling iter() on it is skipped.
I think I'd like to see this exist as:

    yield from f.cocall(*args, **kwds)

for a while after PEP 380 is implemented before it is given syntactic sugar. Similar to my other suggestion, a @cofunction decorator could easily provide a cocall method without implementing __call__. The compiler wouldn't pick up usages of f.cocall() without yield from with this approach, but tools like pychecker and pylint could certainly warn about it.
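A minimal sketch of what such a decorator might look like (the names here are illustrative, not an existing API):

    class cofunction:
        # Expose the generator via a cocall() method; refuse ordinary calls.
        def __init__(self, func):
            self.func = func
        def cocall(self, *args, **kwds):
            return self.func(*args, **kwds)    # a generator-iterator
        def __call__(self, *args, **kwds):
            raise TypeError("use 'yield from f.cocall(...)' instead")

    @cofunction
    def fetch(x):
        yield                   # suspension point
        return x + 1

    def caller():
        # The spelling suggested above -- no new syntax required:
        value = yield from fetch.cocall(41)
        print(value)            # -> 42

    for _ in caller():          # drive to completion
        pass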
If ``f`` does not have a ``__cocall__`` method, or the ``__cocall__`` method returns ``NotImplemented``, then the cocall expression is treated as an ordinary call, and the ``__call__`` method of ``f`` is invoked.
Objects which implement __cocall__ are expected to return an object obeying the iterator protocol. Cofunctions respond to __cocall__ the same way as ordinary generator functions respond to __call__, i.e. by returning a generator-iterator.
You want more than the iterator protocol - you want the whole generator API (i.e. send() and throw() as well as __next__()).
Certain objects that wrap other callable objects, notably bound methods, will be given __cocall__ implementations that delegate to the underlying object.
If you use a @cofunction decorator, you can define your own descriptor semantics, independent of those for ordinary functions.
The use of a decorator instead of ``codef`` was also suggested, but the current proposal makes this unnecessary as well.
I'm not sure that is really an advantage, given that using a decorator gives much greater control over the way cofunctions behave.
It has been questioned whether some combination of decorators and functions could be used instead of a dedicated ``cocall`` syntax. While this might be possible, to achieve equivalent error-detecting power it would be necessary to write cofunction calls as something like
::
yield from cocall(f)(args)
making them even more verbose and inelegant than an unadorned ``yield from``. It is also not clear whether it is possible to achieve all of the benefits of the cocall syntax using this kind of approach.
As far as I can see, the only thing dedicated syntax adds is the ability for the compiler to detect when a cofunction is called without correctly yielding control. But pylint/pychecker would still be able to do that with a decorator based approach. I'd really want to see a nice clean @cofunction decorator approach based on PEP 380 seriously attempted before we threw our hands up and said new syntax was the only way. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Nick Coghlan wrote:
You want more than the iterator protocol - you want the whole generator API (i.e. send() and throw() as well as __next__()).
I can't see why that should be necessary. A 'yield from' manages to degrade gracefully when given something that only supports next(), and there's no reason a cocall can't do the same. -- Greg
On Thu, Aug 12, 2010 at 8:42 AM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Nick Coghlan wrote:
You want more than the iterator protocol - you want the whole generator API (i.e. send() and throw() as well as __next__()).
I can't see why that should be necessary. A 'yield from' manages to degrade gracefully when given something that only supports next(), and there's no reason a cocall can't do the same.
Without send() and throw(), an object is just an iterator, never a cofunction (as there is no way for it to make cooperative calls - you need the extra two methods in order to receive the results of any such calls). Implementing __cocall__ without yourself being able to make cooperative calls doesn't make any sense. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Nick Coghlan wrote:
Without send() and throw(), an object is just an iterator, never a cofunction (as there is no way for it to make cooperative calls - you need the extra two methods in order to receive the results of any such calls).
There are plenty of uses for cofunctions that never send or receive any values using yield, but just use it as a suspension point. In that case, send() is never used, only next(). And I suspect that use of throw() will be even rarer. -- Greg
On Thu, Aug 12, 2010 at 10:51 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Nick Coghlan wrote:
Without send() and throw(), an object is just an iterator, never a cofunction (as there is no way for it to make cooperative calls - you need the extra two methods in order to receive the results of any such calls).
There are plenty of uses for cofunctions that never send or receive any values using yield, but just use it as a suspension point. In that case, send() is never used, only next(). And I suspect that use of throw() will be even rarer.
Could you name some of those uses please? If you aren't getting answers back, they sound like ordinary iterators to me. The whole *point* of cofunctions to my mind is that they let you do things like async I/O (where you expect a result back, in the form of a return value or an exception) in a way that feels more like normal imperative programming. So, you may consider there to be plenty of uses for iterate-only cofunctions, but I come up blank. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Fri, Aug 13, 2010 at 7:39 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On Thu, Aug 12, 2010 at 10:51 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Nick Coghlan wrote:
Without send() and throw(), an object is just an iterator, never a cofunction (as there is no way for it to make cooperative calls - you need the extra two methods in order to receive the results of any such calls).
There are plenty of uses for cofunctions that never send or receive any values using yield, but just use it as a suspension point. In that case, send() is never used, only next(). And I suspect that use of throw() will be even rarer.
Could you name some of those uses please? If you aren't getting answers back, they sound like ordinary iterators to me. The whole *point* of cofunctions to my mind is that they let you do things like async I/O (where you expect a result back, in the form of a return value or an exception) in a way that feels more like normal imperative programming.
So, you may consider there to be plenty of uses for iterate-only cofunctions, but I come up blank.
At the very least, a non-generator cofunction will need to offer close() and __del__() (or its weakref equivalent) to release resources in the event of an exception in any called cofunctions (independent of any expected exceptions, almost anything can throw KeyboardInterrupt).

I just don't see how further blurring the lines between cofunctions and ordinary generators is helping here. Providing dummy implementations of send() and throw() that ignore their arguments and devolve to next() is trivial, while still making the conceptual separation clearer.

PEP 342 is *called* "Coroutines via enhanced generators", and it still seems to me that the usage of send() and throw() is one of the key features distinguishing a cooperative scheduler from ordinary iteration.

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
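For what it's worth, a sketch of those dummy implementations (a hypothetical wrapper, not an existing class):

    class IteratorAsCoroutine:
        # Wrap a plain iterator so it also offers the generator API.
        def __init__(self, it):
            self._it = iter(it)
        def __iter__(self):
            return self
        def __next__(self):
            return next(self._it)
        def send(self, value):
            # Ignore the sent value and just advance, as described above.
            return next(self._it)
        def throw(self, typ, val=None, tb=None):
            # A plain iterator has no suspension point to inject into, so
            # re-raise here; this mirrors PEP 380's fallback for iterators
            # that have no throw() of their own.
            if val is None:
                raise typ
            raise val
        def close(self):
            pass    # nothing to release

    it = IteratorAsCoroutine(iter([1, 2]))
    print(next(it), it.send("ignored"))    # -> 1 2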
On 13/08/10 10:00, Nick Coghlan wrote:
At the very least, a non-generator cofunction will need to offer close() and __del__() (or its weakref equivalent) to release resources in the event of an exception in any called cofunctions
But if it doesn't call any other cofunctions and doesn't use any resources besides memory, there's no need for it to provide a close() method.
Providing dummy implementations of send() and throw() that ignore their arguments and devolve to next() is trivial,
But it seems perverse to force people to provide such implementations, given that yield-from is defined in such a way that the same effect results from simply omitting those methods.
PEP 342 is *called* "Coroutines via enhanced generators", and it still seems to me that the usage of send() and throw() is one of the key features distinguishing a cooperative scheduler from ordinary iteration.
Ahhh..... I've just looked at that PEP, and I can now see where the confusion is coming from. PEP 342 talks about using yield to communicate instructions to and from a coroutine-driving trampoline. However, that entire technique is a *workaround* for not having something like yield-from.

If you do have yield-from, then none of that is necessary, and you don't *need* generators to be "enhanced" with a send() facility in order to do coroutine scheduling -- plain next() is more than sufficient, as I hope my socket example demonstrates.

A similar thing applies to throw(). The PEP 342 motivation for it is so that the trampoline can propagate an exception raised in an inner generator back up the call stack, by manually throwing it into each generator along the way. But this technique is rendered obsolete by yield-from as well, because any exception occurring in an inner generator propagates up the yield-from chain automatically, without having to do anything special.

-- Greg
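A tiny runnable demonstration of that last point (Python 3.3+): the inner generator's exception surfaces in the delegating generator with no throw() involved anywhere:

    def inner():
        yield                      # suspension point
        raise ValueError("error in inner")

    def outer():
        try:
            yield from inner()     # no manual re-throwing needed
        except ValueError as e:
            print("outer caught:", e)

    g = outer()
    next(g)        # run to the suspension point inside inner()
    try:
        next(g)    # resuming raises inside inner(); outer() catches it
    except StopIteration:
        pass       # outer() finished normally after handling the error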
On 13/08/10 09:39, Nick Coghlan wrote:
On Thu, Aug 12, 2010 at 10:51 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
There are plenty of uses for cofunctions that never send or receive any values using yield
Could you name some of those uses please? If you aren't getting answers back, they sound like ordinary iterators to me. The whole *point* of cofunctions to my mind is that they let you do things like async I/O (where you expect a result back, in the form of a return value or an exception) in a way that feels more like normal imperative programming.
I provided an example of doing exactly that during the yield-from debate. A full discussion can be found here:

http://www.cosc.canterbury.ac.nz/greg.ewing/python/generators/yf_current/Exa...

Are you perhaps confusing the value produced by 'yield' with the function return value of a cofunction or a generator used with yield-from? They're different things, and it's the return value that gets seen by the function doing the cocall or yield-from. That's what enables you to think you're writing in a normal imperative style.

In the above example, for instance, I define a function sock_readline() that waits for data to arrive on a socket, reads it and returns it to the caller. It's used like this:

    line = yield from sock_readline(sock)

or if you're using cofunctions,

    line = cocall sock_readline(sock)

The definition of sock_readline looks like this:

    def sock_readline(sock):
        buf = ""
        while buf[-1:] != "\n":
            block_for_reading(sock)
            yield
            data = sock.recv(1024)
            if not data:
                break
            buf += data
        if not buf:
            close_fd(sock)
        return buf

The 'yield' in there is what suspends the coroutine, and it neither sends nor receives any value. The data read from the socket is returned to the caller by the return statement at the end. [Clarification: block_for_reading doesn't actually suspend, it just puts the current coroutine on a list to be woken up when the socket is ready.]

-- Greg
On Fri, Aug 13, 2010 at 12:34 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Are you perhaps confusing the value produced by 'yield' with the function return value of a cofunction or a generator used with yield-from? They're different things, and it's the return value that gets seen by the function doing the cocall or yield-from. That's what enables you to think you're writing in a normal imperative style.
I'll admit that I was forgetting the difference between the return value of yield and that of yield from. So send() isn't essential.
    def sock_readline(sock):
        buf = ""
        while buf[-1:] != "\n":
            block_for_reading(sock)
            yield
            data = sock.recv(1024)
            if not data:
                break
            buf += data
        if not buf:
            close_fd(sock)
        return buf
The 'yield' in there is what suspends the coroutine, and it neither sends or receives any value. The data read from the socket is returned to the caller by the return statement at the end. [Clarification: block_for_reading doesn't actually suspend, it just puts the current coroutine on a list to be woken up when the socket is ready.]
But the "yield" is also the point that allows the scheduler to throw in an exception to indicate that the socket has gone away and that the error should be propagated up the coroutine stack. You need throw() at least to participate in proper exception handling. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 13/08/10 16:13, Nick Coghlan wrote:
But the "yield" is also the point that allows the scheduler to throw in an exception to indicate that the socket has gone away and that the error should be propagated up the coroutine stack.
If by "gone away" you mean that the other end has been closed, that's already taken care of. The socket becomes ready to read, the coroutine wakes up, tries to read it and discovers the EOF condition for itself. The same thing applies if anything happens to the socket that would cause an exception if you tried to read it. All the scheduler needs to do is notice that the socket is being reported as ready by select(). When the coroutine tries to deal with it, the exception will occur in that coroutine and be propagated within it. I'm not saying that no scheduler will ever want to throw an exception into a coroutine, but it's not needed in this case. -- Greg
On Tue, Aug 10, 2010 at 10:03 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Here's an updated version of the PEP reflecting my recent suggestions on how to eliminate 'codef'.
...
Also note that the final calling parentheses are mandatory, so that for example the following is invalid syntax:
::
y = cocall f # INVALID
If this is the case, why not say "y = cocall f with (x)" or something like that instead of f(x)? When I see f(x), I think, "OK, so it's going to call f with the argument x, then it will do something cocall-ish to it." But actually the reality is that first it looks up the __cocall__ on f and only then passes in the args and kwargs. Of course, anyone who studies cocalls will learn how it really works before they get too deep into the theory behind them, but I still think it's kind of misleading for people new to the world of cocalling.

I also suspect that Nick is right that we should probably try out something like "y = yield from cocall(f, *args, **kwargs)" and see if it catches on before resorting to a new keyword…

-- Carl Johnson
Apologies if this already exists, but for the benefit of those less enlightened, I think it would be very helpful if the pep included or linked to an example of an algorithm implemented 3 ways:

- Straight python, no coroutines
- Coroutines implemented via "yield from"
- Coroutines implemented via "cocall"

iirc, the last two would not look much different, but maybe I'm mistaken.

As I understand it:

    cocall f(x, y, z)

is sugar for:

    yield from f.__cocall__(x, y, z)

and it now magically promotes the function that contains it to a cofunction (thus implementing __cocall__ for said function).

From what I understand, __cocall__ does not exist because you might want to also have __call__ with different behavior, but instead it exists to allow the "cocaller" to differentiate between cofunctions and normal functions? In theory though, I could implement an object myself that implemented both __call__ and __cocall__, correct? I suppose __cocall__ is to __call__ as __iter__ is to __call__ presently.

I'd say my main problem with this is conceptual complexity. Many folks have a hard time understanding generators, and the presence of the conceptually similar language concept of iterators doesn't help. This feels like yet another conceptually similar concept to generators, that just muddies the waters further.

What I'd love to see is a version of coroutines that isn't just sugared generators. It really seems to me that generators should be implemented on top of coroutines and not the reverse. That would lead to a more linear path to understanding: iterators -> generators -> coroutines. This proposal doesn't feel like that to me, it feels more like an adjunct thing that uses the generator machinery for different ends.

I'm surely confused, but that's part of my point 8^)

-Casey

On Aug 11, 2010, at 2:03 AM, Greg Ewing wrote:
Here's an updated version of the PEP reflecting my recent suggestions on how to eliminate 'codef'.
Casey Duncan wrote:
Apologies if this already exists, but for the benefit of those less enlightened, I think it would be very helpful if the pep included or linked to an example of an algorithm implemented 3 ways:
There isn't currently a single one implemented all three ways, but my parser example is implemented with plain Python and yield-from, and the philosophers and socket server are implemented using yield-from and cofunctions. http://www.cosc.canterbury.ac.nz/greg.ewing/python/generators/
iirc, the last two would not look much different, but maybe I'm mistaken.
You're not mistaken -- mainly it's just a matter of replacing 'yield from' with 'cocall'. If the implicit-cocalling version of cofunctions gains sway, it would be more different -- all the 'yield from's would disappear, and some function definitions would change from 'def' to 'codef'.
As I understand it:
cocall f(x, y, z)
is sugar for:
yield from f.__cocall__(x, y, z)
and it now magically promotes the function that contains it to a cofunction (thus implementing __cocall__ for said function).
That's essentially correct as the PEP now stands.
From what I understand, __cocall__ does not exist because you might want to also have __call__ with different behavior, but instead it exists to allow the "cocaller" to differentiate between cofunctions and normal functions?
Yes, that's right. A cofunction's __cocall__ method does exactly the same thing as a normal generator's __call__ method does.
In theory though, I could implement an object myself that implemented both __call__ and __cocall__, correct?
You could, and in fact one version of the cofunctions proposal suggests making ordinary functions behave as though they did implement both, with __cocall__ returning an iterator that yields zero times. There would be nothing to stop you creating an object that had arbitrarily different behaviour for __call__ and __cocall__ either, although I'm not sure what use such an object would be.
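For concreteness, a sketch of such a dual-protocol object; since today's Python won't invoke __cocall__ specially, the emulation below calls it explicitly where 'cocall obj(21)' would appear:

    class Both:
        def __call__(self, x):
            return 2 * x            # ordinary call: immediate result
        def __cocall__(self, x):
            def gen():
                yield               # one suspension point
                return 2 * x
            return gen()            # an iterator, as the proposal requires

    obj = Both()
    print(obj(21))                  # ordinary call -> 42

    def driver():
        result = yield from obj.__cocall__(21)   # i.e. 'cocall obj(21)'
        print(result)               # -> 42

    for _ in driver():              # drive to completion
        pass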
I suppose __cocall__ is to __call__ as __iter__ is to __call__ presently.
Not exactly. When you do

    for x in f():
        ...

__call__ and __iter__ are *both* involved -- __call__ is invoked first, and then __iter__ on the result. But when making a cocall, __cocall__ is invoked *instead* of __call__ (and the result is expected to already be an iterator, so __iter__ is not used).
It really seems to me that generators should be implemented on top of coroutines and the not the reverse. That would lead to a more linear path to understanding: iterators -> generators -> coroutines.
If generators didn't already exist, it might make sense to do it that way. It would be easy to create an @generator decorator that would turn a cofunction into a generator. (Such a thing might be good to have in any case.) But we're stuck with generators the way they are, so we might as well make the most of them, including using them as a foundation for a less-restricted form of suspendable function.

Also keep in mind that the way they're documented and taught doesn't necessarily have to reflect the implementation strategy. It would be possible to describe cofunctions and cocalls as an independent concept, and only later explain how they relate to generators.

-- Greg
participants (11)

- Andrey Popp
- Antoine Pitrou
- Carl M. Johnson
- Casey Duncan
- Christian Tismer
- ghazel@gmail.com
- Greg Ewing
- Guido van Rossum
- M.-A. Lemburg
- Nick Coghlan
- Paul Du Bois