[Python-ideas] New PEP proposal: C Micro-Threading (ala Twisted) (long!)

Bruce Frederiksen dangyogi at gmail.com
Sat Aug 2 21:20:15 CEST 2008


Here is a proposal for making the benefits of Twisted available without
having to write python code in an event driven style.

(I'm not sure whether this list accepts HTML, so am just copying plain
text -- you may want to run rst2html on this)...

I look forward to your ideas and feedback!

-bruce


Abstract
========

This PEP adds micro-threading (or `green threads`_) at the C level so that
micro-threading is built in and can be used with very little coding effort
at the python level.

The implementation is quite similar to the Twisted_ [#twisted-fn]_
Deferred_/Reactor_ model, but applied at the C level by extending the
`C API`_ [#c_api]_ slightly.  Doing this provides the Twisted
capabilities to python, without requiring the python programmer to code
in the Twisted event driven style.  Thus, legacy python code would gain the
benefits that Twisted provides with very little modification.

Burying the event driven mechanism in the C level should also give the same
benefits to python GUI interface tools so that the python programmers don't
have to deal with event driven programming there either.

This capability may also be used to provide some of the features that
`Stackless Python`_ [#stackless]_ provides, such as microthreads and
channels (here, called micro_pipes).

.. _Twisted: http://twistedmatrix.com/trac/
.. _Deferred:
   http://twistedmatrix.com/projects/core/documentation/howto/defer.html
.. _Reactor:
   http://twistedmatrix.com/projects/core/documentation/howto/reactor-basics.html
.. _C API: http://docs.python.org/api/api.html
.. _green threads: http://en.wikipedia.org/wiki/Green_threads

Motivation
==========

The popularity of the Twisted project has demonstrated the need for
micro-threads as an alternative to the standard thread_ [#thread-module]_ and
threading_ [#threading-module]_ packages.  These allow large numbers
(thousands) of simultaneous connections to python servers, as well as
fan-outs to large numbers of downstream connections.

The advantages to the Twisted approach over standard threads are:

#. much less memory is required per thread
#. faster thread creation
#. faster context switching (I'm guessing on this one, is this really true?)
#. synchronization between threads is easier because there is no preemption,
   making it much easier to write critical sections of code.

The disadvantages are:

#. the python developer must write his/her program in an event driven style
#. the approach can not be used with standard python code that wasn't
   written in this event driven style
#. the approach does not take advantage of multiple processor architectures
#. since there is no preemption, a long running micro-thread will starve
   other micro-threads

This PEP attempts to retain all of the advantages that Twisted has
demonstrated, but resolve the first two disadvantages to make these
advantages accessible to all python programs, including legacy programs
not written in the Twisted style.  This should make it very easy for legacy
programs like WSGI apps, Django and TurboGears to reap the Twisted benefits.

Another example of an event driven mechanism is GUI/windows events.  This
PEP would also make it easy for python GUI interface toolkits (like wxpython
and qtpython) to hide the GUI/windows event driven style of programming from
the python programmer.  For example, you would no longer need to use modal
dialog boxes just to make the programming easier.

This PEP does not address the last two disadvantages, and thus also has
these disadvantages itself.


Specification of C Layer Enhancements
=====================================

Deferred
--------

``PyDef_Deferred`` is written as a new exception type for use by the C code
to defer execution.  This is a subclass of ``NotImplementedError``.
Instances are not raised as a normal exception (e.g., with
``PyErr_SetObject``), but by calling ``PyWatch_Defer``.  This registers the
``PyDef_Deferred`` associated with the currently running micro_thread as the
current error object, but also readies it for its primary job -- deferring
execution.  As an exception, it creates its own error message, if needed,
which is "Deferred execution not yet implemented by %s" % c_function_name.

``PyErr_ExceptionMatches`` may be used with these.  This allows them to be
treated as exceptions by non micro-threading aware (unmodified) C functions.

But these ``PyDef_Deferred`` objects are special indicators that are treated
differently than normal exceptions by micro-threading aware (modified) C
code.  Modified C functions do this by calling ``PyDef_AddCallback``,
``PyDef_Final`` or explicitly checking
``PyErr_ExceptionMatches(PyDef_Deferred)`` after receiving an error return
status from a called function.

``PyDef_Deferred`` instances offer the following methods (in addition to the
normal exception methods):

- ``int PyDef_AddCallbackEx(PyObject *deferred, const char *caller_name,
  const char *called_name, PyObject *(*callback_fn)(PyObject *returned_object,
  void *state), void *state)``

  - The *caller_name* and *called_name* are case sensitive.  The
    *called_name* must match exactly the *caller_name* used by the called
    function when it dealt with this ``PyDef_Deferred``.  If the names are
    different, the ``PyDef_Deferred`` knows that an intervening unmodified C
    function was called.  This is what triggers it to act like an exception.
    (A sketch modeling this name handshake follows this method list.)

    The *called_name* must be ``NULL`` when called by the function that
    executed the ``PyWatch_Defer`` to defer execution.

  - The *callback_fn* will be called with the ``PyObject`` of the results of
    the prior registered callback_fn.  An exception is passed to
    *callback_fn* by setting the exception and passing ``NULL`` (just like
    returning an exception from a C function).  In the case that the
    ``PyDef_Deferred`` initially accepts *callback_fn* and then later has
    to reject it (because of the exception case, above), it will pass a
    ``SystemError`` to all registered callback_fns to allow them to clean
    up.  But this ``SystemError`` is only a "behind the scenes" measure that
    will only be seen by these callback_fns.  It will be cleared and the
    prior error indicator reestablished before ``PyDef_AddCallback``
    returns.
  - The *callback_fn* is always guaranteed to be called exactly once at some
    point in the future.  It will be passed the same *state* value as was
    passed with it to ``PyDef_AddCallback``.  It is up to the *callback_fn*
    to deal with the memory management of this *state* object.
  - The *callback_fn* may be ``NULL`` if no callback is required.  But in
    this case ``PyDef_AddCallback`` must still be called to notify the
    ``PyDef_Deferred`` that the C function is micro-threading aware.
  - This returns 0 if it fails (is acting like an exception), 1 otherwise.
    If it fails, the caller should do any needed clean up because the caller
    won't be resumed by the ``PyDef_Deferred`` (i.e., *callback_fn* will not
    be called).

- ``int PyDef_AddCallback(const char *caller_name, const char *called_name,
  PyObject *(*callback_fn)(PyObject *returned_object, void *state),
  void *state)``

  - Same as ``PyDef_AddCallbackEx``, except that the deferred object is
    taken from the *value* object returned by ``PyErr_Fetch``.  If the
    *type* returned by ``PyErr_Fetch`` is not ``PyDef_Deferred``, 0 is
    returned.  Thus, this function can be called after any exception and
    then other standard exception processing done if 0 is returned
    (including checking for other kinds of exceptions).

- ``int PyDef_FinalEx(PyObject *deferred, const char *called_fn)``

  - Only used by the top-level C function (a reactor) to verify that its
    *called_fn* is micro-threading aware.  Returns 1 if everything looks
    good, 0 otherwise.  If 0 is returned, then this instance is to be
    treated as an exception.

- ``int PyDef_Final(const char *called_fn)``

  - Same as ``PyDef_FinalEx``, except that the deferred object is taken
    from the *value* object returned by ``PyErr_Fetch``.  If the *type*
    returned by ``PyErr_Fetch`` is not ``PyDef_Deferred``, 0 is returned.
    Thus, this function can be called after any exception and then other
    standard exception processing done if 0 is returned (including checking
    for other kinds of exceptions).

- ``int PyDef_IsExceptionEx(PyObject *deferred)``

  - Only used by the top-level C function (a reactor) to determine whether
    to treat the *deferred* as an exception or to do deferred processing.

- ``int PyDef_IsException(void)``

  - Same as ``PyDef_IsExceptionEx``, except that the deferred object is
    taken from the *value* object returned by ``PyErr_Fetch``.  If the
    *type* returned by ``PyErr_Fetch`` is not ``PyDef_Deferred``, 1 is
    returned.  Thus, this function can be called after any exception and
    then other standard exception processing done if 1 is returned
    (including checking for other kinds of exceptions).

- ``PyObject *PyDef_Callback(PyObject *deferred, PyObject *returned_object)``

  - This calls the callback_fn sequence passing *returned_object* to the
    first registered callback_fn, and each callback_fn's returned
    ``PyObject`` to the next registered callback_fn.  The result of the
    final callback_fn is returned (which may be ``NULL`` if an exception was
    encountered).
  - To signal an exception to the callbacks, first set the error indicator
    (e.g. with ``PyErr_SetString``) and then call ``PyDef_Callback`` passing
    ``NULL`` as the returned_object (just like returning ``NULL`` from a C
    function to signal an exception).
  - If a callback_fn wants to defer execution, this same ``PyDef_Deferred``
    object will be used (since the callback_fn is running in the same
    micro_thread).  The ``PyDef_Deferred`` keeps the newly added
    callback_fns in the proper sequence relative to the existing
    callback_fns that have not yet been executed.  When ``PyDef_Deferred``
    is returned from the callback_fn, no further callback_fns are called.

    Note that this check is also done on the starting *returned_object*, so
    that if this ``PyDef_Deferred`` exception is passed in, then none of its
    callback_fns are executed and it simply returns.

  - If this function returns a ``PyDef_Deferred`` exception (itself), no
    ``PyDef_Final`` needs to be done on it, rather a ``PyDef_IsException``
    is done to see whether to treat it as an exception or not.

- ``void PyDef_Abort(PyObject *deferred)``

  - Acts as if ``SystemError`` has been raised to its callback_fns.  Clears
    the ``SystemError`` and reestablishes any previous error indicator
    before returning.
  - If *deferred* is currently registered with a Watcher_, deregister it.

- ``void PyDef_Terminate(PyObject *deferred)``

  - Acts as if ``SystemExit`` has been raised to its callback_fns.  Clears
    the ``SystemExit`` and reestablishes any previous error indicator
    before returning.
  - If *deferred* is currently registered with a Watcher_, deregister it.
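
To make the *caller_name*/*called_name* handshake concrete, here is a
minimal Python model of the bookkeeping described above (illustrative only;
the real object is C and the names here are made up)::

    # Hypothetical model: the deferred remembers the name given by the
    # function that last dealt with it; the next caller up must name that
    # same function as its called_name, or an unmodified C function sat in
    # between and the deferred falls back to acting like an exception.

    class DeferredHandshake:
        def __init__(self):
            self.last_caller = None   # None: only PyWatch_Defer has seen it
            self.broken = False       # True: act like an ordinary exception

        def add_callback(self, caller_name, called_name, callback_fn):
            # Models PyDef_AddCallbackEx; 0 on failure, 1 on success.
            if self.broken or called_name != self.last_caller:
                self.broken = True    # intervening unmodified function
                return 0              # caller cleans up; callback_fn not run
            self.last_caller = caller_name
            # (the real object would also queue callback_fn here)
            return 1

    d = DeferredHandshake()
    assert d.add_callback("B", None, None) == 1   # B did the PyWatch_Defer
    assert d.add_callback("A", "B", None) == 1    # A called B: names match
    assert d.add_callback("top", "X", None) == 0  # mismatch: exception mode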

Each micro_thread has its own ``PyDef_Deferred`` object associated with it.
This is possible because each micro_thread may only be suspended for one
thing at a time.  This also allows us to re-use ``PyDef_Deferreds`` and,
through the following trick, means that we don't need a lot of
``PyDef_Deferred`` instances when a micro_thread is deferred many times at
different points in the call stack.

One peculiar thing about the stored callbacks is that they're not really a
queue.  When the ``PyDef_Deferred`` is first used and has no saved
callbacks, the callbacks are saved in straight FIFO manner.  Let's say that
four callbacks are saved in this order: ``D'``, ``C'``, ``B'``, ``A'``
(meaning that ``A`` called ``B``, called ``C``, called ``D`` which
deferred):

- after ``D'`` is added, the queue looks like: ``D'``
- after ``C'`` is added, the queue looks like: ``D'``, ``C'``
- after ``B'`` is added, the queue looks like: ``D'``, ``C'``, ``B'``
- after ``A'`` is added, the queue looks like: ``D'``, ``C'``, ``B'``, ``A'``

Upon resumption, ``D'`` is called, then ``C'`` is called.  ``C'`` then calls
``E`` which calls ``F`` which now wants to defer execution again.  ``B'``
and ``A'`` are still in the deferred's callback queue.  When ``F'``, then
``E'``, then ``C''`` are pushed, they go in front of the callbacks still
present from the last defer:

- after ``F'`` is added, the queue looks like: ``F'``, ``B'``, ``A'``
- after ``E'`` is added, the queue looks like: ``F'``, ``E'``, ``B'``, ``A'``
- after ``C''`` is added, the queue looks like: ``F'``, ``E'``, ``C''``,
  ``B'``, ``A'``

These callback functions are basically a reflection of the C stack at the
point the micro_thread is deferred.
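
The ordering rule can be modeled in a few lines of Python (a sketch only;
the real queue lives inside the C ``PyDef_Deferred``)::

    # Hypothetical model: callbacks added while suspended go in FIFO order,
    # but callbacks added when the micro_thread re-defers are inserted in
    # front of the leftovers from the previous defer.

    DEFER = object()   # stand-in for "this callback deferred again"

    class CallbackQueue:
        def __init__(self):
            self.callbacks = []   # pending callbacks, in execution order
            self.insert_at = 0    # where the next added callback goes

        def add_callback(self, fn):
            self.callbacks.insert(self.insert_at, fn)
            self.insert_at += 1

        def resume(self, value):
            while self.callbacks:
                fn = self.callbacks.pop(0)
                self.insert_at = 0      # new callbacks now go in front
                value = fn(value)
                if value is DEFER:      # re-deferred: stop here
                    return DEFER
            return value

    log, q = [], CallbackQueue()

    def cb(name):
        def run(value):
            log.append(name)
            if name == "C'":            # C' calls E -> F, which re-defers
                for n in ("F'", "E'", "C''"):
                    q.add_callback(cb(n))
                return DEFER
            return value
        return run

    for name in ("D'", "C'", "B'", "A'"):
        q.add_callback(cb(name))

    q.resume(None)                      # runs D', then C' (which re-defers)
    q.resume(None)                      # runs F', E', C'', B', A'
    assert log == ["D'", "C'", "F'", "E'", "C''", "B'", "A'"]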


Reactor
-------

The Reactor logic is divided into two levels:

- The top level function.  There is only one long running invocation of
  this function (per standard thread_).
- A list of Watchers_.  Each of these knows how to watch for a different
  type of external event, such as a file being ready for IO or a signal
  having been received.

.. _Watchers: Watcher_


Top Level
'''''''''

The top level function pops (deferred, returned_object) pairs, doing the
``PyDef_Callback`` on each, until either the ``EventCheckingThreshold``
number of deferreds have been popped, or there are no more deferreds
scheduled.

It then runs through the ``WatcherList`` (which is maintained in descending
``PyWatch_Priority`` order) to give each watcher_ a chance to poll for its
events.  If there are then still no deferreds scheduled, it goes to each
watcher in turn asking it to do a ``PyWatch_TimedWait`` for
``TimedWaitSeconds`` until one doesn't return -1.  Then it polls the
remaining watchers again and goes back to running scheduled deferreds.

If there is only one watcher, a ``PyWatch_WaitForever`` is used, rather than
first polling with ``PyWatch_Poll`` and then ``PyWatch_TimedWait``.

The top level also manages a list of timers for the watchers.  It calls
``PyWatch_Timeout`` each time a timer pops.
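
In rough Python pseudocode, the loop described above might look like this
(a sketch only; the real loop is C, ``scheduled`` is a deque of (deferred,
returned_object) pairs, and the method names stand in for the C calls
listed below)::

    def top_level(scheduled, watchers, event_checking_threshold,
                  timed_wait_seconds):
        while True:
            # Run scheduled deferreds, up to EventCheckingThreshold of them.
            ran = 0
            while scheduled and ran < event_checking_threshold:
                deferred, returned_object = scheduled.popleft()
                deferred.callback(returned_object)      # PyDef_Callback
                ran += 1

            if len(watchers) == 1:
                if not scheduled:
                    watchers[0].wait_forever()          # PyWatch_WaitForever
                continue

            # Give each watcher (descending priority) a chance to poll.
            for watcher in watchers:
                watcher.poll()                          # PyWatch_Poll

            # Still nothing scheduled: timed waits until one watcher can
            # do them (-1 means it can't), then poll the rest again.
            if not scheduled:
                for i, watcher in enumerate(watchers):
                    if watcher.timed_wait(timed_wait_seconds) != -1:
                        for later in watchers[i + 1:]:
                            later.poll()
                        break

(Timer management is omitted from the sketch.)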

- ``int PyTop_Schedule(PyObject *deferred, PyObject *returned_object)``

  - Returns 0 on error, 1 otherwise.

- ``int PyTop_ScheduleException(PyObject *deferred,
  PyObject *exc_type, PyObject *exc_value, PyObject *exc_traceback)``

  - Returns 0 on error, 1 otherwise.

- ``int PyTop_SetTimer(PyObject *watcher, PyObject *deferred,
  double seconds)``

  - Returns 0 on error, 1 otherwise.

- ``int PyTop_ClearTimer(PyObject *watcher, PyObject *deferred)``

  - Returns 0 on error, 1 otherwise.

- ``int PyTop_SetEventCheckingThreshold(long num_continues)``

  - Returns 0 on error, 1 otherwise.

- ``int PyTop_SetTimedWaitSeconds(double seconds)``

  - Returns 0 on error, 1 otherwise.


Watcher
'''''''

- ``int PyWatch_Priority(PyObject *watcher)``
 
  - Returns the priority of this watcher.  (-1 for error).  Higher numbers
    have higher priorities.

- ``int PyWatch_RegisterDeferred(PyObject *watcher, PyObject *deferred,
  PyObject *wait_reason, double max_wait_seconds)``

  - *Max_wait_seconds* of 0.0 means no time limit.  Otherwise, register
    *deferred* with ``PyTop_SetTimer`` (above).
  - Adds *deferred* to the list of waiting objects, for *wait_reason*.
  - The meaning of *wait_reason* is determined by the watcher.  It can be
    used, for example, to indicate whether to wait for input or output on a
    file.
  - Returns 0 on error, 1 otherwise.

- ``void PyWatch_Defer(PyObject *watcher, PyObject *wait_reason,
  double max_wait_seconds)``

  - Passes the ``PyDef_Deferred`` of the current micro_thread to
    ``PyWatch_RegisterDeferred``, and then raises the ``PyDef_Deferred`` as
    an exception.  *Wait_reason* and *max_wait_seconds* are passed on to
    ``PyWatch_RegisterDeferred``.
  - This function has no return value.  It always generates an exception.
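
    (The sketch following this list shows these entry points in use.)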

- ``int PyWatch_Poll(PyObject *watcher)``

  - Poll for events and schedule the appropriate ``PyDef_Deferreds``.  Do
    not cause the process to be put to sleep.  Return 0 on error, 1 on
    success (whether or not any events were discovered).

- ``int PyWatch_TimedWait(PyObject *watcher, double seconds)``

  - Wait for events and schedule the appropriate ``PyDef_Deferreds``.  Do
    not cause the process to be put to sleep for more than the indicated
    number of *seconds*.  Return -1 if *watcher* is not capable of doing
    timed sleeps, 0 on error, 1 on success (whether or not any events were
    discovered).  Return 1 if the wait was terminated due to the process
    having received a signal.
  - If *watcher* is not capable of doing timed waits, it does a poll and
    returns -1.

- ``int PyWatch_WaitForever(PyObject *watcher)``

  - Suspend the process until an event occurs and schedule the appropriate
    ``PyDef_Deferreds``.  The process may be put to sleep indefinitely.
    Return 0 on error, 1 on success (whether or not any ``PyDef_Deferreds``
    were scheduled).  Return 1 if the wait was terminated due to the
    process having received a signal.

- ``int PyWatch_Timeout(PyObject *watcher, PyObject *deferred)``
 
  - Called by top level when the timer set by ``PyTop_SetTimer`` expires.
  - Passes a ``TimeoutException`` to the deferred using ``PyDef_Callback``.
  - Return 0 on error, 1 otherwise.

- ``int PyWatch_DeregisterDeferred(PyObject *watcher, PyObject *deferred,
  PyObject *returned_object)``

  - Deregisters *deferred*.
  - Passes *returned_object* to *deferred* using ``PyDef_Callback``.
  - *Returned_object* may be ``NULL`` to indicate an exception to the
    callbacks.
  - Returns 0 on error, 1 otherwise.
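
As an illustration (not part of the specification), a file-readiness
watcher modeled in Python on these entry points might look like the
following, with ``schedule`` standing in for ``PyTop_Schedule`` and the
*wait_reason* chosen to be a ``('read', fd)`` or ``('write', fd)`` pair::

    import select

    class FileWatcher:
        def __init__(self, schedule, priority=10):
            self.schedule = schedule        # stand-in for PyTop_Schedule
            self.priority = priority        # PyWatch_Priority
            self.readers = {}               # fd -> waiting deferred
            self.writers = {}               # fd -> waiting deferred

        def register_deferred(self, deferred, wait_reason, max_wait_seconds):
            # PyWatch_RegisterDeferred; a nonzero max_wait_seconds would
            # also set a timer via PyTop_SetTimer (omitted here).
            kind, fd = wait_reason
            (self.readers if kind == 'read' else self.writers)[fd] = deferred

        def poll(self):                     # PyWatch_Poll: never sleeps
            return self._wait(0.0)

        def timed_wait(self, seconds):      # PyWatch_TimedWait; select()
            return self._wait(seconds)      # can do timed waits, so this
                                            # watcher never returns -1

        def _wait(self, timeout):
            if not (self.readers or self.writers):
                return 1
            ready_r, ready_w, _ = select.select(
                list(self.readers), list(self.writers), [], timeout)
            for fd in ready_r:              # schedule the ready deferreds
                self.schedule(self.readers.pop(fd), fd)
            for fd in ready_w:
                self.schedule(self.writers.pop(fd), fd)
            return 1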


Specification of Python Layer Enhancements
==========================================

Fortunately, at the python level, the programmer does not see deferred,
reactor, or watcher objects.  The python programmer will see three things:

#. An addition of non_blocking modes of accessing files, sockets, time.sleep
   and other functions that may block.  It is not clear yet exactly what
   these will look like.  The possibilities are:

   - Add an argument to the object creation functions to specify blocking or
     non-blocking.
   - Add an operation to change the blocking mode after the object has been
     created.
   - Add new non-blocking versions of the methods on the objects that may
     block (e.g., read_d/write_d/send_d/recv_d/sleep_d).
   - Some combination of these.

   If an object is used in blocking mode, then all microthreads (within its
   Posix thread_) will block.  So the python programmer must set
   non-blocking mode on these objects as a first step to take advantage of
   micro-threading.

#. Micro_thread objects.  Each of these will have a re-usable
   ``PyDef_Deferred`` object attached to it, since each micro_thread can
   only be suspended waiting for one thing at a time.  The current
   micro_thread would be stored within a C global variable, much like
   _PyThreadState_Current.  If the python programmer isn't interested in
   micro_threading, micro_threads can be safely ignored (like posix
   threads_, you get one for free, but don't have to be aware of it).  If
   the programmer *is* interested in micro-threading, then s/he must create
   micro_threads.  This would be done with::

       micro_thread(function, *args, **kwargs)

   I am thinking that there are three usage scenarios:

   #. Create a micro-thread to do something, without regard to any final
      return value from *function*.  An example here would be a web server
      that has a top-level ``socket.accept`` loop that runs a
      ``handle_client`` function on each new connection.  Once launched,
      the ``socket.accept`` thread is no longer interested in the
      ``handle_client`` threads.

      In this case, the normal return value of the ``handle_client``
      function can be discarded.  But what should be done with exceptions
      that are not caught in the child threads?

      Therefore, this style of use would be indicated by providing a
      top-level exception handler for the new thread as a keyword argument,
      e.g. ``exception_handler=traceback.print_exception`` in a developer
      environment and ``exception_handler=my_exception_logger`` in a
      production environment.

      If this keyword argument is provided, then the parent thread does not
      need to do any kind of *wait* after the child thread is complete.  It
      will either complete normally and go away silently, or raise an
      uncaught exception, which is passed to the indicated
      exception_handler, and then go away with no more ado.  (A sketch of
      this scenario and the next follows this list.)

   #. Create micro_threads to run multiple long-running *functions* in
      parallel where the final return value from each *function* is needed
      in the parent thread.

      In this case, the ``exception_handler`` argument is not specified and
      the parent thread needs to *wait* on the child thread (when the
      parent is ready to do so).  Thus, completed micro_threads will form
      zombie threads until their parents retrieve their final return values
      (much like unix processes).

      This ends up being a kind of parallel execution strategy and it might
      be nice to have a ``threaded_map`` function that will create a
      micro_thread for each element of its *iterable* argument in order to
      run the *function* on them in parallel and then return an iterable of
      the waited-for results.

      On doing the *wait*, an uncaught exception in the child micro_thread
      is re-raised in the parent micro_thread.

   #. In the above examples, the child micro_threads are completely
      independent of each other.  This final scenario uses *micro_pipes* to
      allow threads to cooperatively solve problems (much like unix pipes).
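
   To illustrate the first two scenarios, here is a hypothetical sketch
   (``micro_thread`` as proposed above; ``wait``, ``handle_client``,
   ``fetch_page`` and ``urls`` are made up for the example)::

       import traceback

       # Scenario 1: fire-and-forget; uncaught child exceptions go to the
       # handler, and the parent never waits on the children.
       def accept_loop(server_socket):
           while True:
               conn, addr = server_socket.accept()
               micro_thread(handle_client, conn,
                            exception_handler=traceback.print_exception)

       # Scenario 2: run functions in parallel, then wait for each result;
       # an uncaught child exception would be re-raised by wait().
       threads = [micro_thread(fetch_page, url) for url in urls]
       results = [t.wait() for t in threads]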

#. Micro_pipes.  Micro_pipes are one-way pipes that allow synchronized
   communication between micro_threads.
   
   The protocol for the receiving side of the pipe is simply the standard
   python iterator protocol.

   The sending side has these methods:
   
   - ``put(object)`` to send *object* to the receiving side (retrieved with
     the ``__next__`` method).
   - ``take_from(iterable)`` to send a series of objects to the receiving
     side (retrieved with multiple ``__next__`` calls).
   - ``close()`` causes ``StopIteration`` on the next ``__next__`` call.
     A ``put`` done after a ``close`` silently terminates the micro_thread
     doing the ``put`` (in case the receiving side closes the micro_pipe).

   Micro_pipes are automatically associated with micro_threads, making it
   less likely to hang the program:

   >>> pipe = micro_pipe()
   >>> next(pipe)  # hangs!  No micro_thread created to feed the pipe...

   So each micro_thread will automatically have a stdout micro_pipe assigned
   to it and can (optionally) be assigned a stdin micro_pipe (some other
   micro_thread's stdout micro_pipe).  When the micro_thread terminates, it
   automatically calls ``close`` on its stdout micro_pipe.

   To access the stdout micro_pipe of the current micro_thread, new ``put``
   and ``take_from`` built-in functions are provided.
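
   For example, a hypothetical two-stage pipeline (``put`` and the
   ``.stdout`` attribute as described in this PEP; the rest is made up)::

       def squares(n):
           for i in range(n):
               put(i * i)        # write to this micro_thread's stdout
           # stdout is closed automatically when the micro_thread ends

       def total(numbers):
           put(sum(numbers))     # numbers: the producer's stdout micro_pipe

       producer = micro_thread(squares, 10)
       consumer = micro_thread(total, producer.stdout)
       print(next(consumer.stdout))    # 285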

   Micro_pipes let us write generator functions in a new way by having the
   generator do ``put(object)`` rather than ``yield object``.  In this case,
   the generator function has no ``yield`` statement, so is not treated
   specially by the compiler.  Basically this means that calling a new-style
   generator does not automatically create a new micro_thread (which is sort
   of what calling an old-style generator does).

   The ``put(object)`` does the same thing as ``yield object``, but
   allows the generator to share the micro_pipe with other new-style
   generator functions (by simply calling them) and old-style generators
   (or any iterable) by calling ``take_from`` on them.  This lets the
   generator delegate to other generators without having to get involved
   with passing the results back to its caller.

   For example, a generator to output all the even numbers from 1-n,
   followed by all of the odd numbers::

       def even_odd(n):
           take_from(range(2, n, 2))
           take_from(range(1, n, 2))

   These "new-style" generators would have to be run in their own
   micro_thread:

   >>> pipe = micro_thread(even_odd, 100).stdout
   >>> # now pipe is an iterable representing the generator:
   >>> print(tuple(pipe))

   But the generator is then not restricted to running within its own
   micro_thread.  It could also sometimes be used as a helper by other
   generators within their micro_thread.  This would allow generators to
   still use each other as helpers.  For example::

       def even(n):
           take_from(range(2, n, 2))

       def odd(n):
           take_from(range(1, n, 2))

       def even_odd(n):
           even(n)
           odd(n)

   At this point a micro_thread may be created on any of the above
   generators.


Open Questions
==============

#. How are exceptions propagated from one ``PyDef_Deferred`` to the next?

   - This would happen when the final result of a micro_thread is needed by
     another micro_thread.  This happens in two cases:

     - When a *wait* is done.  In this case the exception is propagated to
       the caller.
     - When dealing with pipes, it seems that an exception on the ``put``
       side should be propagated to the ``__next__`` side.  This is another
       reason to have the stdout pipe associated with the micro_thread.
       When the exception occurs, it will not be in the ``put`` call; so
       without attaching the pipe to the micro_thread, there would be no
       way of knowing which pipe that micro_thread was outputting to.

       Thus exceptions would propagate from the ``put`` side of a
       micro_pipe to the ``__next__`` side, but not in the other direction.

#. How are tracebacks handled?
#. Do we:

   #. Treat each python-to-python call as a separate C call, with its own
      callback_fn?
   #. Only register one callback_fn for each continuous string of
      python-to-python calls and then process them iteratively rather than
      recursively in the callback_fn (but not in the original calls)?
   #. Treat python-to-python calls iteratively both in the original calls
      and in the callback_fn?

#. How is process termination handled?
   
   - I guess we can keep a list of micro_threads and terminate each of them.
     There's a question of whether to allow the micro_threads to complete or
     to abort them mid-stream.  Kind of like a unix shutdown.  Maybe two
     kinds of process termination?

#. How does this interact with the existing posix thread_ package?

   - Each micro_thread would be associated with a posix thread.  Or,
     conversely, each posix thread would have its own list of micro_threads.

#. How does this impact the debugger/profiler/sys.settrace?
#. Should functions (C and python) that may defer be indicated with some
   naming convention (e.g., ends in '_d') to make it easier for programmers
   to avoid them within their critical sections of code (in terms of
   synchronization)?


Rationale
=========

The implementation is done by treating the C-level deferreds as a special
case of C-level exceptions so that the new ``PyDef_Deferred`` objects will
be treated like any other exception if *any* C function within the call
chain hasn't been modified to deal with them.  In this case, the normal
execution of the program is interrupted, but in a well understood way (by
an exception) with the name of the offending C function contained in the
exception message so that the python developer knows where to go to fix it.

Only when *all* C functions in the call chain properly recognize and deal
with the ``PyDef_Deferred`` is the new deferred object applied to implement
the new micro-threading behavior.
new micro-threading behavior.

No change is required for C functions that could never get a
``PyDef_Deferred``.

This takes advantage of the fact that the current exception mechanism
already unwinds the C stack.  It also adds deferred processing without
adding additional checks after each C function call to see whether to defer
execution.  The check that is already being done for exceptions doubles as
a check for deferred processing.
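
The effect can be modeled in Python (a sketch only): a deferral is an
exception, so an unmodified frame propagates it untouched, while a modified
frame recognizes it, registers its callback, and re-raises it toward the
reactor::

    # Hypothetical model of the rationale.  PyDef_Deferred subclasses
    # NotImplementedError, so any unmodified function simply propagates it.

    class Deferred(NotImplementedError):
        def __init__(self):
            super().__init__()
            self.callbacks = []

    def modified_leaf():
        raise Deferred()                # PyWatch_Defer: suspend here

    def modified_caller():
        try:
            return modified_leaf()
        except Deferred as d:
            d.callbacks.append(lambda v: v + 1)   # PyDef_AddCallback
            raise                       # keep unwinding toward the reactor

    def unmodified_caller():
        return modified_caller()        # knows nothing: the exception just
                                        # passes through (and the name
                                        # handshake detects this frame)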


Other Approaches
================

Here's a brief comparison to other approaches to micro-threading in python:

- `Stackless Python`_ [#stackless]_
 
  - As near as I can tell, stackless went through two incarnations:

    #. The first incarnation involved an implementation of Frame
       continuations which were then used to provide the rest of the
       stackless functionality.
       
       - A new ``Py_UnwindToken`` was created to unwind the C stack.  This
         is similar to the new ``PyDef_Deferred`` proposed in this PEP,
         except that ``Py_UnwindToken`` is treated as a special case of a
         normal ``PyObject`` return value, while the ``PyDef_Deferred`` is
         treated as a special case of a normal exception.

         Consider the case where unmodified C function ``A`` calls modified
         C function ``B``, and function ``B`` wants to defer execution.  In
         both cases function ``B`` returns a special value indicating it
         wants to defer.

         The difference comes in how function ``A`` (which isn't prepared to
         cooperate with this new type of request) handles this situation.

         With the stackless ``Py_UnwindToken`` as a return value, function
         ``A`` tries to act on ``Py_UnwindToken`` as an ordinary return
         value, which it is not.  This may or may not lead to some other
         exception being thrown due to a type inconsistency between
         ``Py_UnwindToken`` and what was expected by function ``A``.  It
         also means that further up the call chain in the code that called
         function ``A``, the fact that a ``Py_UnwindToken`` was returned
         isn't known if function ``A`` does not simply return it.  In short,
         who knows what will happen...

         But this PEP treats the request to defer as a special exception.
         So function ``A``, receiving a ``PyDef_Deferred`` exception, will
         treat it as an ordinary exception.  Function ``A``, not recognizing
         this exception, will perform any required clean up and simply
         forward the ``PyDef_Deferred`` on to its caller (re-raise it).
         The top-level code that receives this ``PyDef_Deferred`` knows that
         it's a broken ``PyDef_Deferred`` and raises it as a normal
         exception, which will state "Deferred execution not yet implemented
         by A".  This makes much more sense.

       - Another difference between the two styles of continuations is that
         the stackless continuation is designed to be able to be continued
         multiple times.  In other words, you can continue the execution of
         the program from the point the continuation was made as many times
         as you wish, passing different seed values each time.

         The ``PyDef_Deferred`` described in this PEP (like the Twisted
         Deferred) is designed to be continued once.
      
       - The stackless approach provides a python-level continuation
         mechanism (at the Frame level) that only makes python functions
         continuable.  It provides no way for C functions to register
         continuations so that C functions can be unwound from the stack
         and later continued (other than those related to the byte code
         interpreter).

         In contrast, this PEP proposes a C-level continuation mechanism
         very similar to the Twisted Deferred.  Each C function registers a
         callback to be run when the Deferred is continued.  From this
         perspective, the byte code interpreter is just another C function.

    #. The second incarnation involved a way of hacking the underlying C
       stack to copy it and later restore it as a means of continuing the
       execution.

       - This doesn't appear to be portable to different CPU/C Compiler
         configurations.
       - This doesn't deal with other global state (global/static variables,
         file pointers, etc) that may also be used by this saved stack.
       - In contrast, this PEP uses a single C stack and makes no
         assumptions about the underlying C stack implementation.  It is
         completely portable to any CPU/C compiler configuration.

- `Implementing "weightless threads" with Python generators`_ [#weightless]_

  - This requires you to code each thread as a generator.  The generator
    executes a ``yield`` to relinquish control.
  - It's not clear how this scales.  It seems that to pause in a lower
    python function, it and all intermediate functions must be generators
    (see the sketch after this list).

- python-safethread_ [#safethread]_

  - This is an alternate implementation to thread_ that adds monitors for
    mutable types, deadlock detection, improves exception propagation
    across threads and program finalization, and removes the GIL lock.  As
    such, it is not a "micro" threading approach, though by removing the GIL
    lock it may be able to better use multiple processor configurations
    than the approach proposed in this PEP.

- `Sandboxed Threads in Python`_ [#sandboxed-threads]_

  - Another alternate implementation to thread_, this one only shares
    immutable objects between threads, modifying the reference counting
    system to avoid synchronization issues with the reference count for
    shared objects.  Again, not a "micro" threading approach, but perhaps
    also better with multiple processors.
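
To make the "weightless threads" limitation above concrete, here is a
minimal sketch of the generator-based approach (illustrative, not taken
from the cited article)::

    import collections

    def scheduler(tasks):
        # Round-robin over generator-based "weightless threads".
        ready = collections.deque(tasks)
        while ready:
            task = ready.popleft()
            try:
                next(task)          # run the task up to its next yield
            except StopIteration:
                continue            # task finished
            ready.append(task)      # otherwise, back of the line

    def worker(name, n):
        for i in range(n):
            print(name, i)
            yield                   # relinquish control

    scheduler([worker("a", 2), worker("b", 2)])

Note that a helper called by ``worker`` could not pause the thread unless
it too were a generator driven by an explicit loop in ``worker``; that is
the scaling concern mentioned above.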

.. _Implementing "weightless threads" with Python generators:
   http://www.ibm.com/developerworks/library/l-pythrd.html
.. _python-safethread: https://launchpad.net/python-safethread
.. _Sandboxed Threads in Python:
   http://mail.python.org/pipermail/python-dev/2005-October/057082.html
.. _Stackless Python: http://www.stackless.com/
.. _thread: http://docs.python.org/lib/module-thread.html
.. _threading: http://docs.python.org/lib/module-threading.html


Backwards Compatibility
=======================

This PEP doesn't break any existing code.  Existing code just won't take
advantage of any of the new features.

But there are two possible problem areas:

#. Code uses micro-threading, but then causes an unmodified C function
   to call a modified C function which tries to defer execution.

   In this case an exception will be generated stating that the unmodified C
   function needs to be converted before this program will work.

#. Code originally written in a single threaded environment is now used in a
   micro-threaded environment.  The old code was not written taking
   synchronization issues into account, which may cause problems if the old
   code calls a function which causes it to defer in the middle of its
   critical section.  This could cause very strange behavior, but can't
   result in any C-level errors (e.g., segmentation violation).

   This old code would have to be fixed to run with the new features.  I
   expect that this will not be a frequent problem.


References
==========

.. [#twisted-fn] Twisted, Twisted Matrix Labs
   (http://twistedmatrix.com/trac/)
.. [#c_api] Python/C API Reference Manual, van Rossum
   (http://docs.python.org/api/api.html)
.. [#stackless] Stackless Python, Tismer
   (http://www.stackless.com/)
.. [#thread-module] thread -- Multiple threads of control
   (http://docs.python.org/lib/module-thread.html)
.. [#threading-module] threading -- Higher-level threading interface
   (http://docs.python.org/lib/module-threading.html)
.. [#weightless] Charming Python: Implementing "weightless threads" with
   Python generators, Mertz
   (http://www.ibm.com/developerworks/library/l-pythrd.html)
.. [#safethread] Threading extensions to the Python Language,
   (https://launchpad.net/python-safethread)
.. [#sandboxed-threads] Sandboxed Threads in Python, Olsen
   (http://mail.python.org/pipermail/python-dev/2005-October/057082.html)


Copyright
=========

This document has been placed in the public domain.



