micro-threading PEP proposal (long) -- take 2!

It was suggested that I rearrange the micro-threading PEP proposal to place the juicy Python stuff up front.  So I've done this here.  And now that I see that people are back from the toils of creating 3.0b3 and starting to comment again on python-ideas, it seems like a good time to repost this!  (I guess my first post was really bad timing..., sorry about that!)

I ask that you provide feedback.  I have no direct need for this, so don't really have a horse in the race.  But it was an idea that I thought might be very useful to the Python community, seeing the emphasis on web servers, so am making an effort here to run it up the flagpole...  If this ends up going into a trial version, I am prepared to help considerably with the implementation.  If you don't think that Python needs such silliness, that's OK and I'd like to hear that too (it will mean a lot less work for me! ;-) ).

I don't imagine that this PEP represents an /easy/ way to solve this problem, but I do imagine that it is the /right/ way to solve it.  Other similar proposals have been made in past years that looked at easier ways out.  These have all been rejected.  But I don't think that there are really any easy ways out that are robust solutions, and so I offer this one.  If I am wrong, and the reason that the prior proposals were rejected is a lack of need rather than a lack of robustness, then this proposal should also be rejected.  This might be the case if, for example, all Python programs end up being unavoidably CPU bound, so that micro-threading would provide little real benefit.

If there /is/ a perceived need for this, then I am sure that this PEP would benefit from your TLC and other ideas!

If you read the previous version, the only changes here are a little more specificity in the Python section.

Thank you for your attention on this!
-bruce

Abstract
========

This PEP adds micro-threading (or `green threads`_) at the C level so that micro-threading is built in and can be used with very little coding effort at the Python level.

The implementation is quite similar to the Twisted_ [#twisted-fn]_ Deferred_/Reactor_ model, but applied at the C level by extending the `C API`_ [#c_api]_ slightly.  Doing this provides the Twisted capabilities to Python, but without requiring the Python programmer to code in the Twisted event-driven style.  Thus, legacy Python code would gain the benefits that Twisted provides with very little modification.

Burying the event-driven mechanism in the C level also makes the same benefits available to Python GUI interface tools, so that Python programmers don't have to deal with event-driven programming there either.

This capability is also used to provide some of the features that `Stackless Python`_ [#stackless]_ provides, such as micro-threads and channels (here, called micro_pipes).

.. _Twisted: http://twistedmatrix.com/trac/
.. _Deferred:
   http://twistedmatrix.com/projects/core/documentation/howto/defer.html
.. _Reactor:
   http://twistedmatrix.com/projects/core/documentation/howto/reactor-basics.ht...
.. _C API: http://docs.python.org/api/api.html
.. _green threads: http://en.wikipedia.org/wiki/Green_threads

Motivation
==========

The popularity of the Twisted project has demonstrated the need for a micro-threading alternative to the standard Posix thread_ [#thread-module]_ and threading_ [#threading-module]_ packages.  Micro-threading allows large numbers (thousands) of simultaneous connections to Python servers, as well as fan-outs to large numbers of downstream connections.

The advantages of the Twisted approach over Posix threads are:

#. much less memory is required per thread
#. faster thread creation
#. faster context switching (I'm guessing on this one, is this really true?)
#. synchronization between threads is easier because there is no preemption, making it much easier to write critical sections of code

The disadvantages are:

#. the Python developer must write his/her program in an event-driven style
#. the approach cannot be used with standard Python code that wasn't written in this event-driven style
#. the approach does not take advantage of multiple-processor architectures
#. since there is no preemption, a long-running micro-thread will starve other micro-threads

This PEP attempts to retain all of the advantages that Twisted has demonstrated, and to resolve the first two disadvantages, making the advantages accessible to all Python programs, including legacy programs not written in the Twisted style.  This should make it very easy for legacy programs like WSGI apps, Django and TurboGears to reap the benefits of Twisted.

Another example of an event-driven mechanism is GUI/windows events.  This PEP also makes it easy for Python GUI interface toolkits (like wxpython and qtpython) to hide the GUI/windows event-driven style of programming from the Python programmer.  For example, you would no longer need to use modal dialog boxes just to make the programming easier.

This PEP does not address the last two disadvantages, and thus also has these disadvantages itself.

The primary inspiration for this PEP comes from the Twisted_ [#twisted-fn]_ project.  If the C level deals with the Deferred objects, then the Python level wouldn't have to.  And if that is the case, this would greatly lower the bar for Python programmers desiring the benefits that Twisted provides, and make those benefits available to all Python programmers essentially for free.

The secondary inspiration was to treat the Deferreds as a special case of exceptions, which are already designed to unwind the C stack.
This lets us take a more piecemeal approach to implementing the PEP at the C level, because an unmodified C function used in a situation where its execution would have to be deferred is gracefully caught as a standard exception.  In addition, this exception can report the name of the unmodified C function in its message.  So we don't need to change *everything* that might be affected on a first roll-out.

It also adds deferred processing without adding additional checks after each C function call to see whether to defer execution.  The check that is already being done for exceptions doubles as a check for deferred processing.

Finally, once Python has this deferred mechanism in place at the C level, many things become quite easy at the Python level.  This includes full micro-threading, micro-pipes between micro-threads, new-style generators that can delegate responsibility for generating values to called functions without having to intervene between their caller and the called function, and parallel execution constructs (``parallel_map``).  It is expected that many more of these kinds of devices will be easily implementable once the underlying deferred mechanism is in place.

Specification of Python Layer Enhancements
==========================================

Fortunately, at the Python level, the programmer does not see the underlying `C deferred`_, `reactor function`_, or notifier_ objects.  The Python programmer will see three things:

#. An addition of non-blocking modes of accessing files, sockets, time.sleep and other functions that may block.  It is not clear yet exactly what these will look like.  The possibilities are:

   - Add an argument to the object creation functions to specify blocking or non-blocking.
   - Add an operation to change the blocking mode after the object has been created.
   - Add new non-blocking versions of the methods on the objects that may block (e.g., read_d/write_d/send_d/recv_d/sleep_d).
   - Some combination of these.
   If an object is used in blocking mode, then all micro-threads (within its Posix thread_) will block.  So the Python programmer must set non-blocking mode on these objects as a first step towards taking advantage of micro-threading.

   It may also be useful to add a locking capability to files and sockets so that code (like traceback.print_exception) that outputs several lines can prevent other output from being intermingled with it.

#. Micro_thread objects.  Each of these will have a re-usable C deferred object attached to it, since each micro_thread can only be suspended waiting for one thing at a time.  The current micro_thread would be stored within a C global variable, much like ``_PyThreadState_Current``.  If the Python programmer isn't interested in micro-threading, micro_threads can be safely ignored (like Posix threads, you get one for free, but don't have to be aware of it).  If the programmer *is* interested in micro-threading, then s/he must create additional micro_threads.  Each micro_thread would be created to run a single Python function.  When that function returns, the micro_thread is finished.

   There are three usage scenarios, aided by three different functions to create micro_threads:

   #. Create a micro_thread to do something, without regard to the final value returned from *function*.  An example here would be a web server that has a top-level ``socket.accept`` loop that runs a ``handle_client`` function in a separate micro_thread on each new connection.  Once launched, the ``socket.accept`` thread is no longer interested in the ``handle_client`` threads.  In this case, the normal return value of the ``handle_client`` function can be discarded.  But what should be done with exceptions that are not caught in the child threads?
      Therefore, this style of use would allow a top-level exception handler for the new thread::

          start_and_forget(function, *args,
                           exception_handler=traceback.print_exception,
                           **kws)

      The parent thread does not need to do any kind of *wait* after the child thread is complete.  It will either complete normally and go away silently (with any final return value ignored), or raise an uncaught exception, which is passed to the indicated exception_handler, and then go away without further ado.

   #. Create micro_threads to run multiple long-running *functions* in parallel where the final return value from each *function* is needed by the parent thread::

          thread = start_in_parallel(function, *args, **kws)

      In this case, the parent thread is expected to do a *thread.wait()* when it is ready for the return value of the function.  Thus, completed micro_threads will form zombie threads until their parents retrieve their final return values (much like unix processes).  On doing the *wait*, an uncaught exception in the child micro_thread is re-raised in the parent micro_thread.

      It might be nice, for example, to have a ``parallel_map`` function that will create a micro_thread for each element of its *iterable* argument in order to run the mapping function on all of them in parallel and then return an iterable of the waited-for results.

   #. In the above examples, the child micro_threads are completely independent of each other -- i.e., they don't communicate with each other except for child threads returning a final value to their parents.  This final scenario uses *micro_pipes* to allow threads to cooperatively solve problems (much like unix pipes)::

          pipe = generate(function, *args, **kws)

      These micro_threads have a micro_pipe associated with them (called *stdout*).  When a micro_thread is finished, it goes away silently (and the final return value from the *function* is ignored).
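None of these creation functions exist yet, of course.  As a rough model of the intended semantics of the first two (zombie-until-*wait*, child exceptions re-raised in the parent, an exception_handler for fire-and-forget), here is a sketch using ordinary Posix threads via the ``threading`` module; the real proposal would use micro_threads, which these are not:

```python
# Sketch only: models the *semantics* of the proposed start_in_parallel /
# start_and_forget calls using ordinary Posix threads.
import threading
import traceback

class _MicroThreadModel:
    def __init__(self, function, args, kws):
        self._value = None
        self._exc = None
        def run():
            try:
                self._value = function(*args, **kws)
            except BaseException as e:
                self._exc = e
        self._thread = threading.Thread(target=run)
        self._thread.start()

    def wait(self):
        # Parent retrieves the final value; an uncaught exception in the
        # child is re-raised here, in the parent.
        self._thread.join()
        if self._exc is not None:
            raise self._exc
        return self._value

def start_in_parallel(function, *args, **kws):
    return _MicroThreadModel(function, args, kws)

def start_and_forget(function, *args,
                     exception_handler=traceback.print_exception, **kws):
    # Fire and forget: the return value is discarded, uncaught exceptions
    # go to exception_handler, and no wait() is ever needed.
    def run():
        try:
            function(*args, **kws)
        except BaseException as e:
            exception_handler(type(e), e, e.__traceback__)
    threading.Thread(target=run).start()
```

For example, ``start_in_parallel(pow, 2, 10).wait()`` returns the child's final value, while an exception raised in the child would only surface at the ``wait()`` call.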
      The pipe looks like a normal Python iterator, but is designed to be read by a different micro-thread than the one generating the values.  Uncaught exceptions in the micro_thread generating the values are propagated through the micro_pipe to the micro_pipe's reader.

#. Micro_pipes.  Micro_pipes are one-way pipes that allow synchronized communication between micro_threads.

   The protocol for the receiving side of the pipe is simply the standard Python iterator protocol.  Thus, for example, they can be directly used in ``for`` statements.

   The sending side has these methods:

   - ``put(object)`` to send *object* to the receiving side (retrieved with the ``__next__`` method).
   - ``take_from(iterable)`` to send a series of objects to the receiving side.
   - ``close()`` to cause a ``StopIteration`` on the ``__next__`` call.  A ``put`` done after a ``close`` silently terminates the micro_thread doing the ``put`` (in case the receiving side closes the micro_pipe).

   Micro_pipes are automatically associated with micro_threads, making it less likely to hang the program::

       pipe = micro_pipe()
       next(pipe)  # hangs the program!  No micro_thread created to feed the pipe...

   So each micro_thread may have a *stdout* micro_pipe assigned to it, and may also be assigned a *stdin* micro_pipe (some other micro_thread's stdout micro_pipe).  When the micro_thread terminates, it automatically calls ``close`` on its stdin and stdout micro_pipes.

   To easily access the stdout micro_pipe of the current micro_thread, new ``put`` and ``take_from`` built-in functions are provided::

       put(object)
       take_from(iterable)

   In addition, the current built-in ``iter`` and ``next`` functions would be modified so that they may be called with no arguments.  In this case, they would use the current micro_thread's *stdin* pipe as their argument.

   Micro_pipes let us write generator functions in a new way by having the generator do ``put(object)`` rather than ``yield object``.  In this case, the generator function has no ``yield`` statement, so is not treated specially by the compiler.
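To make the put/take_from/close protocol concrete, here is a sketch that models it with a queue between ordinary threads.  All names here are my own constructions, not part of the proposal, and a Posix thread stands in for each micro_thread; note also that the generating function receives the pipe explicitly here, where the PEP would supply it implicitly as the micro_thread's *stdout*:

```python
# Sketch only: models the proposed micro_pipe protocol (put / take_from /
# close, with the standard iterator protocol on the receiving side).
import queue
import threading

_CLOSED = object()  # sentinel marking end-of-pipe

class MicroPipeModel:
    def __init__(self):
        self._q = queue.Queue()

    # -- sending side -------------------------------------------------
    def put(self, obj):
        self._q.put(obj)

    def take_from(self, iterable):
        for obj in iterable:
            self.put(obj)

    def close(self):
        self._q.put(_CLOSED)

    # -- receiving side: the standard iterator protocol ---------------
    def __iter__(self):
        return self

    def __next__(self):
        obj = self._q.get()
        if obj is _CLOSED:
            raise StopIteration
        return obj

def generate(function, *args, **kws):
    # Run *function* with a fresh "stdout" pipe in its own thread; close
    # the pipe when the function returns (as the PEP specifies).
    pipe = MicroPipeModel()
    def run():
        try:
            function(pipe, *args, **kws)
        finally:
            pipe.close()
    threading.Thread(target=run).start()
    return pipe
```

With this model, ``generate(lambda pipe: pipe.take_from(range(3)))`` returns a pipe whose reader sees 0, 1, 2 and then a ``StopIteration``.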
   Basically this means that calling a new-style generator does not automatically create a new micro_thread (sort of what calling an old-style generator does).

   The ``put(object)`` does the same thing as ``yield object``, but allows the generator to share the micro_pipe with other new-style generator functions (by simply calling them) and with old-style generators (or any iterable) by calling ``take_from`` on them.  This lets the generator delegate to other generators without having to get involved with passing the results back to its caller.

   For example, a generator to output all the odd numbers from 1-n::

       def odd(n):
           take_from(range(1, n, 2))

   These "new-style" generators would have to be run in their own micro_thread (e.g., by passing them to ``generate``).
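A sketch of how that might look, assuming the ``generate`` call described earlier (hypothetical -- none of these names exist yet):

```python
# Hypothetical usage sketch; generate() and take_from() are the proposed
# (not yet existing) API.
def odd(n):
    take_from(range(1, n, 2))

pipe = generate(odd, 10)   # odd() runs in its own micro_thread;
                           # its stdout micro_pipe is returned
for i in pipe:             # reader side: plain iterator protocol
    print(i)               # 1, 3, 5, 7, 9
```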
   The generator is then not restricted to having its own micro_thread.  It could also be used as a helper by other generators, from the other generator's micro_thread, without having to create additional micro-threads or do "bucket brigades" to yield values from the helper back to the other generator's caller.  For example::

       def even(n):
           take_from(range(2, n, 2))

       def odd_even(n):
           odd(n)
           even(n)

   At this point ``generate`` could be called on any of these three generators (``odd``, ``even`` or ``odd_even``).

Specification of C Layer Enhancements
=====================================

This is where most of the work is to implement this PEP.  These are the underlying mechanisms that make the whole thing "tick".

Basically, this is a C Deferred that micro-thread aware C functions deal with to be put to sleep and avoid blocking; and a Reactor to wake the Deferreds back up when the event occurs that they are waiting for.  This is very similar in concept to the Twisted Deferred and Reactor, just done at the C level so that Python programmers don't have to deal with them.

C Deferred
----------

``PyDeferred_CDeferred`` is written as a new exception type for use by the C code to defer execution.  This is a subclass of ``NotImplementedError``.  Instances are not raised as a normal exception (e.g., with ``PyErr_SetObject``), but by calling ``PyNotifier_Defer`` (described in the Notifier_ section, below).  This registers the ``PyDeferred_CDeferred`` associated with the currently running micro_thread as the current error object, but also readies it for its primary job -- deferring execution.  As an exception, it creates its own error message, if needed, which is "Deferred execution not yet implemented by %s" % c_function_name.

``PyErr_ExceptionMatches`` may be used with these.  This allows them to be treated as exceptions by non micro-threading aware (*unmodified*) C functions.
But these C deferred objects serve as special indicators that are treated differently than normal exceptions by micro-threading aware (*modified*) C code.  Modified C functions do this by calling ``PyDeferred_AddCallback``, or explicitly checking ``PyErr_ExceptionMatches(PyDeferred_CDeferred)``, after receiving an error return status from a called function.

``PyDeferred_CDeferred`` instances offer the following methods (in addition to the normal exception methods):

- ``int PyDeferred_AddCallbackEx(PyObject *deferred, const char *caller_name, const char *called_name, PyObject *(*callback_fn)(PyObject *returned_object, void *state), void *state)``

  - The *caller_name* and *called_name* are case sensitive.  The *called_name* must match exactly the *caller_name* used by the called function when it dealt with this *deferred*.  If the names are different, the *deferred* knows that an intervening unmodified C function was called.  This is what triggers it to then act like an exception.

    The *called_name* must be ``NULL`` when called by the function that executed the ``PyNotifier_Defer`` to initiate the deferring process.

  - The *callback_fn* will be called with the ``PyObject`` of the results of the prior registered callback_fn.  An exception is passed to *callback_fn* by setting the exception and passing ``NULL`` (just like returning an exception from a C function).  In the case that the *deferred* initially accepts some *callback_fns* after a ``PyNotifier_Defer`` is done, and then later has to reject them (because of encountering the exception case, above), it will pass itself again, now acting like an exception, to all of these new callback_fns to allow them to clean up.  It then returns 0 to continue to be treated as an exception (see the explanation for ``PyDeferred_Callback``, below).

  - The *callback_fn* is always guaranteed to be called exactly once at some point in the future.  It will be passed the same *state* value as was passed with it to ``PyDeferred_AddCallback``.
    It is up to the *callback_fn* to deal with the memory management of this *state* object.

  - The *callback_fn* may be ``NULL`` if no callback is required.  But in this case ``PyDeferred_AddCallback`` must still be called to notify the *deferred* that the C function is micro-threading aware.

  - This returns 0 if it fails (is acting like an exception), 1 otherwise.  If it fails, the caller should do any needed clean-up, because the caller won't be resumed by the *deferred* (i.e., *callback_fn* will not be called).

- ``int PyDeferred_AddCallback(const char *caller_name, const char *called_name, PyObject *(*callback_fn)(PyObject *returned_object, void *state), void *state)``

  - Same as ``PyDeferred_AddCallbackEx``, except that the deferred object is taken from the *value* object returned by ``PyErr_Fetch``.  If the *type* returned by ``PyErr_Fetch`` is not ``PyDeferred_CDeferred``, 0 is returned.  Thus, this function can be called after any exception, and other standard exception processing done if 0 is returned (including checking for other kinds of exceptions).

- ``int PyDeferred_IsExceptionEx(PyObject *deferred)``

  - Returns 1 if *deferred* is in exception mode, 0 otherwise.

- ``int PyDeferred_IsException(void)``

  - Same as ``PyDeferred_IsExceptionEx``, except that the deferred object is taken from the *value* object returned by ``PyErr_Fetch``.  If the *type* returned by ``PyErr_Fetch`` is not ``PyDeferred_CDeferred``, 1 is returned.  Thus, this function can be called after any exception, and other standard exception processing done if 1 is returned (including checking for other kinds of exceptions).

- ``int PyDeferred_Callback(PyObject *deferred, PyObject *returned_object)``

  - This is called by the `reactor function`_ to resume execution of a micro_thread after the *deferred* has been scheduled with ``PyReactor_Schedule`` or ``PyReactor_ScheduleException``.
  - This calls the callback_fn sequence stored in *deferred*, passing *returned_object* to the first registered callback_fn, and each callback_fn's returned ``PyObject`` to the next registered callback_fn.

  - To signal an exception to the callbacks, first set the error indicator (e.g. with ``PyErr_SetString``) and then call ``PyDeferred_Callback`` passing ``NULL`` as the *returned_object* (just like returning ``NULL`` from a C function to signal an exception).

  - If a callback_fn wants to defer execution, this same *deferred* object will be used by ``PyNotifier_Defer`` (since the callback_fn is running in the same micro_thread).  The *deferred* keeps the newly added callback_fns in the proper sequence relative to the existing callback_fns that have not yet been executed (described below).  When *deferred* is returned from a callback_fn, no further callback_fns are called.

    Note that this check is also done on the starting *returned_object*, so that if this *deferred* exception is passed in, then none of its callback_fns are executed and it simply returns.

  - If a callback_fn defers, a final check is done to see if its name was the last one registered by a ``PyDeferred_AddCallback`` call.  If not, and if this *deferred* has not already been set into exception mode, the *deferred* sets itself into exception mode and raises itself through the entire callback_fn sequence.  This should end up terminating the micro_thread.

  - If a callback_fn starts to defer (by calling ``PyNotifier_Defer``) and then later raises some other exception, the *deferred* will know that it's been activated but not returned as the final error object by the callback_fn.  In this case, the *deferred* raises a ``SystemError``, attaching the other exception to it as its ``__cause__``, and runs this through all new callback_fns that were added subsequent to the ``PyNotifier_Defer``.
    The ``SystemError`` exception is then cleared and the other exception re-established (it will have the *deferred* as its ``__context__``).  The other exception is then passed to the remaining callback_fns to terminate the micro_thread.

  - If no callback_fn defers, then the micro_thread is finished executing.  The results of the last callback_fn are treated as the final result of the micro_thread.  If the micro_thread has an ``exception_handler``, the ``exception_handler`` is used on the final exception (if there is one) and the micro_thread is deleted.  If the micro_thread has no ``exception_handler``, the final return value (or exception) is stored in the micro_thread and the micro_thread is converted into a zombie state.  This will also result in a ``close`` being done on the micro_thread's stdout micro_pipe.

  - Returns 0 on error, 1 otherwise.  Note that an error from the final callback_fn does not cause a 0 to be returned here.  Only if ``PyDeferred_Callback`` itself has a problem that it can't deal with is 0 returned.

Each micro_thread has its own C deferred object associated with it.  This is possible because each micro_thread may only be suspended for one thing at a time.  This also allows us to re-use C deferreds and, through the following trick, means that we don't need a lot of C deferred instances when a micro_thread is deferred many times at different points in the call stack.

One peculiar thing about the stored callbacks is that they're not really a queue.  When the C deferred is first used and has no saved callbacks, the callbacks are saved in straight FIFO manner.
Let's say that four callbacks are saved in this order: ``D'``, ``C'``, ``B'``, ``A'`` (meaning that ``A`` called ``B``, which called ``C``, which called ``D``, which deferred):

- after ``D'`` is added, the queue looks like: ``D'``
- after ``C'`` is added, the queue looks like: ``D'``, ``C'``
- after ``B'`` is added, the queue looks like: ``D'``, ``C'``, ``B'``
- after ``A'`` is added, the queue looks like: ``D'``, ``C'``, ``B'``, ``A'``

Upon resumption, ``D'`` is called, then ``C'`` is called.  ``C'`` then calls ``E``, which calls ``F``, which now wants to defer execution again.  ``B'`` and ``A'`` are still in the deferred's callback queue.  When ``F'``, then ``E'``, then ``C''`` are pushed, they go in front of the callbacks still present from the last defer:

- after ``F'`` is added, the queue looks like: ``F'``, ``B'``, ``A'``
- after ``E'`` is added, the queue looks like: ``F'``, ``E'``, ``B'``, ``A'``
- after ``C''`` is added, the queue looks like: ``F'``, ``E'``, ``C''``, ``B'``, ``A'``

These callback functions are basically a reflection of the C stack at the point the micro_thread is deferred.

Reactor Design
--------------

The Reactor design is divided into two levels:

- The top level `reactor function`_.  There is only one long-running invocation of this function per standard Posix thread_.
- A list of Notifiers_.  Each of these knows how to check for a different type of external event, such as a file being ready for IO, a signal having been received, or a GUI/windows event.

.. _Notifiers: Notifier_

Reactor Function
''''''''''''''''

There is a reactor function instance for each Posix thread.  All instances share the same set of ``NotifierList``, ``TimedWaitSeconds`` and ``EventCheckingThreshold`` parameters.

The master ``NotifierList`` is a list of classes that are instantiated when the reactor function is created.  This list is maintained in descending ``PyNotifier_Priority`` order.
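The insertion discipline just described is easy to state in code: each new defer restarts insertion at the front of whatever callbacks are left over from earlier defers.  A small Python model of just the ordering rule (my own construction, for illustration only):

```python
# Sketch: models the callback-ordering discipline of the re-used C
# deferred.  On each new defer, insertion restarts at the front of the
# queue, ahead of callbacks left over from earlier defers.
class CallbackQueueModel:
    def __init__(self):
        self.queue = []
        self._insert_at = 0   # where the next callback goes

    def start_defer(self):
        # A new defer begins: new callbacks go in front of the leftovers.
        self._insert_at = 0

    def add_callback(self, name):
        self.queue.insert(self._insert_at, name)
        self._insert_at += 1

    def pop_callback(self):
        # Resumption runs callbacks from the front of the queue.
        return self.queue.pop(0)
```

Replaying the example above: adding ``D'``, ``C'``, ``B'``, ``A'`` after a fresh defer gives a plain FIFO queue; after popping ``D'`` and ``C'`` and deferring again, adding ``F'``, ``E'``, ``C''`` yields ``F'``, ``E'``, ``C''``, ``B'``, ``A'``.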
The reactor function pops (deferred, returned_object) pairs, doing ``PyDeferred_Callback`` on each, until either the ``EventCheckingThreshold`` number of deferreds have been popped, or there are no more deferreds scheduled.  It then runs its copy of the ``NotifierList`` to give each notifier_ a chance to poll for its events.  If there are then still no deferreds scheduled, it goes to each notifier in turn, asking it to do a ``PyNotifier_TimedWait`` for ``TimedWaitSeconds``, until one returns 1.  Then it polls the remaining notifiers again and goes back to running scheduled deferreds.

If there is only one notifier, a ``PyNotifier_WaitForever`` is used, rather than first polling with ``PyNotifier_Poll`` and then ``PyNotifier_TimedWait``.  If all but one notifier returns -1 on the initial poll pass (such that only one notifier has any deferreds), a ``PyNotifier_WaitForever`` is used on that notifier on the second pass, rather than ``PyNotifier_TimedWait``.  If all notifiers return -1 on the initial poll pass and there are no deferreds scheduled, the reactor function is done and returns to terminate its Posix thread.

The reactor function also manages a list of timers for the notifiers.  It calls ``PyNotifier_Timeout`` each time a timer pops.

The following functions use the reactor function for the current Posix thread:

- ``int PyReactor_Schedule(PyObject *deferred, PyObject *returned_object)``

  - Returns 0 on error, 1 otherwise.

- ``int PyReactor_ScheduleException(PyObject *deferred, PyObject *exc_type, PyObject *exc_value, PyObject *exc_traceback)``

  - Returns 0 on error, 1 otherwise.

- ``int PyReactor_Run(void)``

  - At least one ``PyReactor_Schedule`` must be done first, or ``PyReactor_Run`` will return immediately.
  - This only returns when there is nothing left to do.
  - Returns 0 on error, 1 otherwise.

- ``int PyReactor_SetTimer(PyObject *notifier, PyObject *deferred, double seconds)``

  - Returns 0 on error, 1 otherwise.
- ``int PyReactor_ClearTimer(PyObject *notifier, PyObject *deferred)``

  - Returns 0 on error, 1 otherwise.

These functions apply globally to all reactor functions (all Posix threads):

- ``int PyReactor_AddNotifier(PyObject *notifier_class)``

  - The *notifier_class* is added to the NotifierList in proper priority order.
  - The same NotifierList is used by all reactor functions (all Posix threads).
  - Returns 0 on error, 1 otherwise.

- ``int PyReactor_RemoveNotifier(PyObject *notifier_class)``

  - The *notifier_class* is removed from the NotifierList.
  - Returns 0 on error, 1 otherwise.  It is an error if the *notifier_class* was not in the NotifierList.

- ``int PyReactor_SetEventCheckingThreshold(long num_continues)``

  - Returns 0 on error, 1 otherwise.

- ``int PyReactor_SetTimedWaitSeconds(double seconds)``

  - Returns 0 on error, 1 otherwise.

Notifier
''''''''

Each notifier knows how to check for a different kind of event.  The notifiers must release the GIL prior to suspending the Posix thread.

- ``int PyNotifier_Priority(PyObject *notifier_class)``

  - Returns the priority of this *notifier_class* (-1 for error).  Higher numbers have higher priorities.

- ``int PyNotifier_RegisterDeferred(PyObject *notifier, PyObject *deferred, PyObject *wait_reason, double max_wait_seconds)``

  - A *max_wait_seconds* of 0.0 means no time limit.  Otherwise, register *deferred* with ``PyReactor_SetTimer`` (above).
  - Adds *deferred* to the list of waiting objects, for *wait_reason*.
  - The meaning of *wait_reason* is determined by the notifier.  It can be used, for example, to indicate whether to wait for input or output on a file.
  - Returns 0 on error, 1 otherwise.

- ``void PyNotifier_Defer(PyObject *notifier, PyObject *wait_reason, double max_wait_seconds)``

  - Passes the deferred of the current micro_thread to ``PyNotifier_RegisterDeferred``, and then raises the deferred as an exception.  *Wait_reason* and *max_wait_seconds* are passed on to ``PyNotifier_RegisterDeferred``.
  - This function has no return value.  It always generates an exception.

- ``int PyNotifier_Poll(PyObject *notifier)``

  - Poll for events and schedule the appropriate ``PyDeferred_CDeferreds``.  Do not cause the process to be put to sleep.  Return -1 if no deferreds are waiting for these events, 0 on error, 1 on success (whether or not any events were discovered).

- ``int PyNotifier_TimedWait(PyObject *notifier, double seconds)``

  - Wait for events and schedule the appropriate deferreds.  Do not cause the Posix thread to be put to sleep for more than the indicated number of *seconds*.  Return -2 if *notifier* is not capable of doing timed sleeps, -1 if no deferreds are waiting for events, 0 on error, 1 on success (whether or not any events were discovered).  Return 1 if the wait was terminated due to the process having received a signal.
  - If *notifier* is not capable of doing timed waits, it should still do a poll, and should still return -1 if no deferreds are waiting for events.

- ``int PyNotifier_WaitForever(PyObject *notifier)``

  - Suspend the process until an event occurs, and schedule the appropriate deferreds.  The process may be put to sleep indefinitely.  Return -1 if no deferreds are waiting for events, 0 on error, 1 on success (whether or not any ``PyDeferred_CDeferreds`` were scheduled).  Return 1 if the wait was terminated due to the process having received a signal.

- ``int PyNotifier_Timeout(PyObject *notifier, PyObject *deferred)``

  - Called by the `reactor function`_ when the timer set by ``PyReactor_SetTimer`` expires.
  - Deregisters *deferred*.
  - Passes a ``TimeoutException`` to *deferred* using ``PyDeferred_Callback``.
  - Returns 0 on error, 1 otherwise.

- ``int PyNotifier_DeregisterDeferred(PyObject *notifier, PyObject *deferred, PyObject *returned_object)``

  - Deregisters *deferred*.
  - Passes *returned_object* to *deferred* using ``PyDeferred_Callback``.
  - *Returned_object* may be ``NULL`` to indicate an exception to the callbacks.
  - Returns 0 on error, 1 otherwise.

Open Questions
==============

#. How are tracebacks handled?

#. Do we:

   #. Treat each Python-to-Python call as a separate C call, with its own callback_fn?
   #. Only register one callback_fn for each continuous string of Python-to-Python calls, and then process them iteratively rather than recursively in the callback_fn (but not in the original calls)? or
   #. Treat Python-to-Python calls iteratively both in the original calls and in the callback_fn?

#. How is process termination handled?

   - I guess we can keep a list of micro_threads and terminate each of them.  There's a question of whether to allow the micro_threads to complete or to abort them mid-stream.  Kind of like a unix shutdown.  Maybe two kinds of process termination?

#. How does this impact the debugger/profiler/sys.settrace?

#. Should functions (C and Python) that may defer be indicated with some naming convention (e.g., ends in '_d') to make it easier for programmers to avoid them within their critical sections of code (in terms of synchronization)?

#. Do we really need to expose micro_pipes to the Python programmer as anything more than iterables, or can we just use the built-in ``put`` and ``take_from`` functions?

Rationale
=========

Impact on Other Python Implementations
--------------------------------------

The heart of this approach -- the C deferred, reactor function and notifiers -- is not exposed to the Python level.  This leaves their implementation open, so that other implementations of Python (e.g., Jython_ [#jython-project]_, IronPython_ [#ironpy]_ and PyPy_ [#pypy_project]_) are not constrained by the choices made for CPython.

Also, the interfaces to the new Python-level objects (micro_threads, micro_pipes) are kept to a minimum, thus hiding design decisions made within the underlying implementation so as not to unduly constrain other Python implementations that wish to support compatible features.
Other Approaches
----------------

Here's a brief comparison to other approaches to micro-threading in
Python:

- `Stackless Python`_ [#stackless]_

  As near as I can tell, Stackless went through two incarnations:

  #. The first incarnation involved an implementation of frame
     continuations which were then used to provide the rest of the
     Stackless functionality.

     - A new ``Py_UnwindToken`` was created to unwind the stack.  This is
       similar to the new ``PyDeferred_CDeferred`` proposed in this PEP,
       except that ``Py_UnwindToken`` is treated as a special case of a
       normal ``PyObject`` return value, while the ``PyDeferred_CDeferred``
       is treated as a special case of a normal exception.  It's not clear
       whether C functions are exposed to this special value.  So either C
       functions can't be unwound, or unmodified C functions may behave
       strangely.  There is mention of trouble if a C function calls a
       Python function.  I also saw no mention of being able to defer
       execution rather than block the whole program.  This PEP treats
       requests to defer as special exceptions, which are already designed
       to unwind the C stack.
     - Another difference between the two styles of continuations is that
       a Stackless continuation is designed to be continued multiple
       times.  In other words, you can continue the execution of the
       program from the point the continuation was made as many times as
       you wish, passing different seed values each time.  The
       ``PyDeferred_CDeferred`` described in this PEP (like the Twisted
       Deferred) is designed to be continued only once.
     - The Stackless approach provides a Python-level continuation
       mechanism (at the frame level) that only makes Python functions
       continuable.  It provides no way for C functions to register
       continuations so that C functions can be unwound from the stack and
       later continued (other than those related to the byte code
       interpreter).  In contrast, this PEP proposes a C-level
       continuation mechanism very similar to the Twisted Deferred.
       Each C function registers a callback to be run when the deferred is
       continued.  From this perspective, the byte code interpreter is
       just another C function.

  #. The second incarnation involved a way of hacking the underlying C
     stack to copy it and later restore it as a means of continuing
     execution.

     - This doesn't appear to be portable to different CPU/C compiler
       configurations.
     - This doesn't deal with other global state (global/static variables,
       file pointers, etc.) that may also be used by the saved stack.
     - In contrast, this PEP uses a single C stack and makes no
       assumptions about the underlying C stack implementation.  It is
       completely portable to any CPU/C compiler configuration.

- `py.magic.greenlet: Lightweight concurrent programming`_ [#greenlets]_

  This takes its implementation from the second incarnation of Stackless
  and copies the C stack for re-use.  It has the same portability
  questions that the second incarnation of Stackless does.  It does not
  include a reactor component, though one could be written for it.

- `Implementing "weightless threads" with Python generators`_
  [#weightless]_

  - This requires you to code each thread as a generator.  The generator
    executes a ``yield`` to relinquish control.
  - It's not clear how this scales.  It seems that to pause in a lower
    Python function, it and all intermediate functions must be generators.

- python-safethread_ [#safethread]_

  - This is an alternate implementation to thread_ that adds monitors to
    mutable types, deadlock detection, improved exception propagation
    across threads and program finalization, and removes the GIL.  As
    such, it is not a "micro" threading approach, though by removing the
    GIL it may be able to better utilize multiple-processor configurations
    than the approach proposed in this PEP.
- `Sandboxed Threads in Python`_ [#sandboxed-threads]_

  - Another alternate implementation to thread_, this one only shares
    immutable objects between threads, modifying the reference counting
    system to avoid synchronization issues with the reference count for
    shared objects.  Again, not a "micro" threading approach, but perhaps
    also better with multiple processors.

.. _Jython: http://www.jython.org/Project/
.. _IronPython: http://www.codeplex.com/Wiki/View.aspx?ProjectName=IronPython
.. _PyPy: http://codespeak.net/pypy/dist/pypy/doc/home.html
.. _Implementing "weightless threads" with Python generators:
   http://www.ibm.com/developerworks/library/l-pythrd.html
.. _python-safethread: https://launchpad.net/python-safethread
.. _Sandboxed Threads in Python:
   http://mail.python.org/pipermail/python-dev/2005-October/057082.html
.. _Stackless Python: http://www.stackless.com/
.. _thread: http://docs.python.org/lib/module-thread.html
.. _threading: http://docs.python.org/lib/module-threading.html
.. _`py.magic.greenlet: Lightweight concurrent programming`:
   http://codespeak.net/py/dist/greenlet.html

Backwards Compatibility
=======================

This PEP doesn't break any existing code.  Existing code just won't take
advantage of any of the new features.  But there are two possible problem
areas:

#. Python code uses micro-threading, but then causes an unmodified C
   function to call a modified C function which tries to defer execution.
   In this case an exception will be generated stating that this C
   function needs to be converted before the program will work.

#. Python code originally written in a single-threaded environment is now
   used in a micro-threaded environment.  The old code was not written
   taking synchronization issues into account, which may cause problems if
   the old code calls a function which defers in the middle of a critical
   section.  This could cause very strange behavior, but can't result in
   any C-level errors (e.g., a segmentation violation).
This old code would have to be fixed to run with the new features.  I
expect that this will not be a frequent problem, as these interruptions
can only occur at a few places (where functions that defer are called).

References
==========

.. [#twisted-fn] Twisted, Twisted Matrix Labs
   (http://twistedmatrix.com/trac/)
.. [#c_api] Python/C API Reference Manual, van Rossum
   (http://docs.python.org/api/api.html)
.. [#stackless] Stackless Python, Tismer (http://www.stackless.com/)
.. [#thread-module] thread -- Multiple threads of control
   (http://docs.python.org/lib/module-thread.html)
.. [#threading-module] threading -- Higher-level threading interface
   (http://docs.python.org/lib/module-threading.html)
.. [#jython-project] The Jython Project (http://www.jython.org/Project/)
.. [#ironpy] IronPython
   (http://www.codeplex.com/Wiki/View.aspx?ProjectName=IronPython)
.. [#pypy_project] PyPy
   (http://codespeak.net/pypy/dist/pypy/doc/home.html)
.. [#greenlets] py.magic.greenlet: Lightweight concurrent programming
   (http://codespeak.net/py/dist/greenlet.html)
.. [#weightless] Charming Python: Implementing "weightless threads" with
   Python generators, Mertz
   (http://www.ibm.com/developerworks/library/l-pythrd.html)
.. [#safethread] Threading extensions to the Python Language
   (https://launchpad.net/python-safethread)
.. [#sandboxed-threads] Sandboxed Threads in Python, Olsen
   (http://mail.python.org/pipermail/python-dev/2005-October/057082.html)

Copyright
=========

This document has been placed in the public domain.

On Mon, Aug 25, 2008 at 9:48 AM, Bruce Frederiksen <dangyogi@gmail.com> wrote:
Actually, the toiling for the core developers has shifted to the release candidates, so I still would not count on much response out of them until we go final and are sure there is no emergency release. -Brett

Hello, I haven't read everything in detail, but a few comments.
The advantages to the Twisted approach over Posix threads are:
#. much less memory is required per thread
Yes, which also means better CPU cache utilization and thus potentially better scalability.
#. faster thread creation
Certainly.
#. faster context switching (I'm guessing on this one, is this really true?)
Depends. The CPU overhead of context switching is probably lower (although that's not certain in the case of Twisted, since the reactor is written in pure Python). However, in a cooperative threading model (rather than the traditional preemptive model), latencies tend to go up unless you have a lot of possible switching points.
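The latency trade-off described here can be seen in miniature with a generator-based cooperative scheduler. This is only an illustrative sketch (the names ``scheduler`` and ``task`` are made up, and this is not the mechanism the PEP proposes): each ``yield`` is a switching point, so a task that yields rarely delays every other task.

```python
from collections import deque

def scheduler(tasks):
    """Round-robin over generator-based tasks; each yield is a switching point."""
    ready = deque(tasks)
    order = []
    while ready:
        task = ready.popleft()
        try:
            order.append(next(task))   # run the task until its next yield
            ready.append(task)         # still alive: requeue it
        except StopIteration:
            pass                       # task finished; drop it
    return order

def task(name, steps):
    for i in range(steps):
        yield "%s%d" % (name, i)

# Two cooperative tasks interleave only at their yield points.
print(scheduler([task("a", 2), task("b", 2)]))  # ['a0', 'b0', 'a1', 'b1']
```

If ``task`` did a long computation between yields, the other task would simply wait, which is the latency concern raised above.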
#. synchronization between threads is easier because there is no preemption, making it much easier to write critical sections of code.
This is definitely the primary advantage of the Twisted approach.
Probably. However, since the core of your proposal is itself far from trivial, I suggest you concentrate on it in this PEP; the higher-level constructs can be deferred ( :-)) to another PEP.
Sounds ok. FWIW, the py3k IO stack is supposed to be ready for non-blocking IO, but this possibility is almost completely untested as of yet.
I don't think it's critical.
By "global", you mean "thread-local", no? That is, there is (at most) one currently running micro-thread per OS-level thread.
There are three usage scenarios, aided by three different functions to create micro-threads:
I suggest you fold those usage scenarios into one simple primitive that launches a single micro-thread and provides a way to wait for its result (using a CDeferred I suppose?). Higher-level stuff ("start_in_parallel") does not seem critical for the usefulness of the PEP.
What is the added value of "micro pipes" compared to, e.g., a standard Python list or deque? Are they non-blocking?
Silencing this sounds like a bad idea.
Hmm, is it really necessary? Shouldn't micro-threads just create their own pipes when they need them? The stdin/stdout analogy is only meaningful in certain types of workloads.
I'm not sure I understand this right. Does this mean there is a single, pre-constructed CDeferred object for each micro-thread? If yes, then this deviates slightly from the Twisted model where many deferreds can be created dynamically, chained together etc.
In this example, can you give the C pseudo-code and the equivalent Twisted Python (pseudo-)code? (I haven't read the Reactor part so I won't comment on it)
#. How is process termination handled?
Raising SystemExit (or another BaseException-derived exception, e.g. ThreadExit) in all micro-threads sounds reasonable.
#. How does this impact the debugger/profiler/sys.settrace?
:-) Last point: you should try to get some Twisted guys involved in the writing of the PEP if you want it to succeed. Regards Antoine.

Thank you for your response. I have written up a python-style pseudo code for both the C level code and the Python level code and posted them as two separate posts. I changed the subject somewhat on them, so they don't show up on the same thread... :-( Here is some specific feedback on your questions. Antoine Pitrou wrote:
I had considered that. But the core by itself accomplishes nothing, except to serve as a foundation for some kind of higher-level constructs, so I put them together. I guess having separate PEPs allows them to evolve more independently. (I'm new to this PEP process). If I split them, do I keep posting updated versions on python-ideas? Or do I just accumulate the changes offline and post the completed PEP much later?
Good to know. I'll have to look at this.
Yes!
I have a single micro_thread class with a couple of optional arguments that affect its operation, so you may be right. I have included the higher-level stuff ("start_in_parallel") in the Python level pseudo code to give everybody a feel of what's involved.
Micro_pipes connect two micro_threads, much like unix pipes join two unix processes. Each thread will suspend if the other thread isn't ready. The micro_pipes use the C_deferreds to suspend the thread and allow other threads to run. So micro_pipes don't store a sequence of values (like lists or deques), but pass individual values on from one thread to another. The implementation proposed in the Python level pseudo code only stores one value and will block the writer when it tries to write a second value before the reader has read the first value. This buffer size of one could be expanded, but I've been working on the premise that this should be kept as simple as possible for a first out, and then allowed to grow after more experience is gained with it. I've seen many software projects (and I'm guilty of this myself) where they include all kinds of stuff that really isn't that useful. And, once released, these things are hard to take back. So I'm consciously trying to keep the first out to a bare minimum of features that can grow later.
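The one-slot semantics described here can be sketched as a plain state machine. This is only an illustration (the names are taken from the discussion, and a ``WouldBlock`` exception stands in for the deferred-based suspension that the real implementation would use):

```python
# Illustrative sketch of a buffer-size-one micro_pipe.  In the real
# design, "would block" means the micro-thread is suspended via a
# deferred; here we just raise to make the contract visible.

class WouldBlock(Exception):
    """Stands in for 'suspend this micro-thread and run another'."""

class MicroPipe:
    _EMPTY = object()          # sentinel: the slot holds no value

    def __init__(self):
        self._slot = self._EMPTY

    def put(self, value):
        if self._slot is not self._EMPTY:
            # A second put before the reader has taken the first value:
            # the writer must wait.
            raise WouldBlock("writer must wait for the reader")
        self._slot = value

    def take_from(self):
        if self._slot is self._EMPTY:
            # Nothing written yet: the reader must wait.
            raise WouldBlock("reader must wait for the writer")
        value, self._slot = self._slot, self._EMPTY
        return value

pipe = MicroPipe()
pipe.put(1)
try:
    pipe.put(2)                # blocks (here: raises) until a take_from
except WouldBlock:
    pass
assert pipe.take_from() == 1
```

Growing the buffer beyond one value would only change the condition in ``put``, which is why deferring that decision seems cheap.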
Yes, I think "silently" means raising a MicroThreadExit exception in the ``put`` and then silently ignoring it when it is finally re-raised by the top function of the thread (thus, terminating the thread, but allowing clean code to run on the way down).
They seem necessary to handle exception situations. For example, when a reader thread on a pipe dies with an exception, how is the write thread notified? What mechanism knows that this pipe was being read by the errant thread so that it will never be read from again? Lacking some kind of mechanism like this may mean that the writer thread is suspended forever. And the same applies in reverse if the writer thread dies. The reader is left hanging forever. So the pipes need to be "attached" to the threads so that an exception in one thread can also affect other interested threads.
Yes, a single deferred for each micro-thread. And yes, this differs some from the Twisted model. But, again, this helps to "connect the dots" between threads for exception propagation. I think that it will also give slightly better performance because fewer memory allocations are required.
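For contrast with the one-deferred-per-micro-thread design, here is a minimal Twisted-style deferred where callbacks can be chained dynamically and each callback's return value feeds the next. This is a sketch of the idiom, not Twisted's actual implementation:

```python
# Minimal Twisted-style deferred: fires once, runs a dynamic chain of
# callbacks, each receiving the previous callback's return value.

class Deferred:
    def __init__(self):
        self._callbacks = []
        self._fired = False
        self._result = None

    def add_callback(self, fn):
        self._callbacks.append(fn)
        if self._fired:
            self._run()            # late callbacks run immediately
        return self                # allow chaining

    def callback(self, result):
        assert not self._fired, "a deferred fires only once"
        self._fired = True
        self._result = result
        self._run()

    def _run(self):
        while self._callbacks:
            fn = self._callbacks.pop(0)
            self._result = fn(self._result)

d = Deferred()
d.add_callback(lambda x: x + 1).add_callback(lambda x: x * 10)
d.callback(2)
assert d._result == 30
```

With one pre-built deferred per micro-thread, the dynamic-creation and chaining shown here disappears, which is where the memory-allocation savings would come from.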
Do you mean the pseudo code of the deferred implementation, or the pseudo code for using the deferreds?
Last point: you should try to get some Twisted guys involved in the writing of the PEP if you want it to succeed.
Good suggestion! I was hoping that some might show up here, but ... I guess I need to go looking for them! Thanks! -bruce

Hi,
Having separate PEPs also makes it easier to discuss the issues piecewise rather than a whole big chunk of additions.
I think it's better to post updated versions, at least as long as there seems to be some interest.
Yes, it's the kind of things that can be discussed as part of the implementation rather than as part of the spec itself.
A pipe could be divided into two half-objects: the receiving end and the sending end, each independently managed by the standard reference counting mechanism, and referencing each other through weakrefs. That way, when e.g. the last reference to the receiving end dies, the sending end will notice that and can raise an exception when a thread tries to write to it. In any case, I don't think creating standard "stdin" and "stdout" pipes for each thread makes things any easier. You still have to handle the case of whatever non-stdin/stdout pipes get created.
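The half-object idea can be sketched with the standard ``weakref`` module. All class and method names here are illustrative:

```python
# Sketch of a pipe split into two half-objects: the ends reference each
# other through weakrefs, so when the last strong reference to one end
# dies, the other end can notice and raise.
import gc
import weakref

class BrokenPipe(Exception):
    pass

class ReadEnd:
    pass

class WriteEnd:
    def __init__(self, read_end):
        self._peer = weakref.ref(read_end)   # weak: doesn't keep reader alive

    def write(self, value):
        peer = self._peer()
        if peer is None:
            raise BrokenPipe("no readers left")
        # ... deliver value to the reader here ...

def make_pipe():
    r = ReadEnd()
    return r, WriteEnd(r)

r, w = make_pipe()
w.write("ok")            # reader alive: fine
del r                    # last strong reference to the read end dies
gc.collect()             # be explicit for non-refcounting interpreters
try:
    w.write("lost")
except BrokenPipe:
    pass                 # writer is notified instead of hanging forever
```

Note the ``gc.collect()``: on Jython/IronPython/PyPy the weakref only clears after collection, which ties into the garbage-collector-dependence concern raised later in the thread.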
I mean the pseudo code for using the deferreds in the particular example which is outlined. Also, if possible, the corresponding Twisted code, to see where and how the two idioms diverge. You can of course give pseudo-code for the implementation as well, but I think discussing the implementation is premature if the API hasn't been discussed first :-)
Good suggestion! I was hoping that some might show up here, but ... I guess I need to go looking for them!
A Twisted developer told me that he had tried to read your PEP (the first version) but found it difficult to understand. Regards Antoine.

Antoine Pitrou wrote:
Having separate PEPs also makes it easier to discuss the issues piecewise rather than a whole big chunk of additions.
I am preparing 3 new PEPs now. One for the C level, one for micro-threads which provides some basic capabilities, and one for micro-pipes which I expect will be the one that will change the most and take the longest to put to bed. I'm also concentrating on APIs rather than implementations and will include examples of using these APIs.
I don't think that this would be very portable to other flavors of python (jython/ironpython/pypy) that don't use reference counting. It makes the thread termination dependent on the implementation of the garbage collector.
I imagine that the python programmer would not be allowed to create micro-pipes directly. I'm thinking that there would be a num_stdout (= 0) parameter on the micro_thread constructor that creates that many micro_pipes, and then the python programmer can connect these to the stdin of other micro-threads. The reason for this is so that the underlying implementation always knows how the dots are connected so that it can provide sensible exception/abort semantics. I'm working on another project that uses generators a _lot_ and there are problems there because 'for' loops don't call 'close' on generators to clean things up. I've also hit problems where the code works fine on CPython, but fails on jython and ironpython because I'm relying on the reference counting to immediately collect abandoned generators and run their 'finally' clauses.
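The generator-cleanup hazard described here is easy to demonstrate: an abandoned generator's ``finally`` clause does not run until the object is collected (immediately under CPython's refcounting, at some arbitrary later point elsewhere), while an explicit ``close()`` runs it deterministically on any implementation:

```python
# Why relying on refcounting for generator cleanup is fragile: only an
# explicit close() (or actual collection) runs the finally block.
log = []

def gen():
    try:
        yield 1
        yield 2
    finally:
        log.append("finally")

g = gen()
next(g)                     # start the generator, pause at the first yield
assert log == []            # abandoned so far: finally has not run
g.close()                   # deterministic cleanup on any implementation
assert log == ["finally"]
```

A 'for' loop that breaks out early leaves the generator in exactly the pre-``close()`` state above, which is the problem described.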
I hope that the new versions will be easier to follow! Thanks! -bruce

Bruce Frederiksen schrieb:
You can use "with closing" to ensure this. However, it bloats the code a tiny bit.

    from __future__ import with_statement
    from itertools import islice
    from contextlib import closing

    def gen():
        try:
            print "before"
            yield 1
            print "between"
            yield 2
            print "after"
        except GeneratorExit:
            print "exit"
        finally:
            print "finally"

    # e.g., abandon the generator after one item; closing() still
    # triggers the GeneratorExit/finally clauses:
    with closing(gen()) as g:
        for x in islice(g, 1):
            print x

On Mon, Aug 25, 2008 at 9:48 AM, Bruce Frederiksen <dangyogi@gmail.com> wrote: [snip]
It all depends on what you're doing. If you're waiting on a lot of RPCs to complete and doing light-weight operations to process the responses, then you're probably fine with micro-threads (unless, of course, those RPC responses are themselves pretty big and require a lot of deserialization work, in which case, micro-threads will hurt more than they help).
It in no way demonstrates that. I would say that the popularity of Twisted indicates that "a micro-threading alternative to the standard...threading packages" can survive and indeed thrive outside of the standard library. If you feel that Twisted's popularity does indeed demonstrate something in this area, please back up that assertion.
That you don't know is, frankly, not reassuring.
By long-running, you mean "non-yielding", right? Don't CPU-intensive operations generally fall into this category? Combined with the first two disadvantages, this means that a developer using this system has to vet all libraries they might want to use (and all libraries in their transitive dependency closure), looking for places that might destabilize the ability of micro-threads to cooperatively yield. That sounds like an incredibly error-prone and painstaking waste of developer time.
This PEP attempts to retain all of the advantages that Twisted has demonstrated,
Please don't assume that everyone reading your PEP is familiar with Twisted.
So you say, but I see nothing in this entire PEP (and I'll freely admit I started skimming it after page five or so) that specifically references these disadvantages or demonstrates how they're being solved.
This PEP does not address the last two disadvantages, and thus also has these disadvantages itself.
Starvation is a pretty big disadvantage to simply gloss over.
I don't understand this. Please explain in more detail why adding this new (and unexpected) functionality to iter() and next() is desirable as opposed to adding new functions/methods.
Why is this a subclass of NotImplementedError and not a direct subclass of Exception? This is an odd choice of parent class.
And what happens if I use PyErr_SetObject() instead of this new function? Is a TypeError raised?
So it's possible for non-micro-threading aware code to simply swallow these new exceptions? That seems...unwise. Collin Winter

participants (5): Antoine Pitrou, Brett Cannon, Bruce Frederiksen, Collin Winter, Mathias Panzenböck