[Python-checkins] peps: Huge update to various parts of the spec. Still not done.

guido.van.rossum python-checkins at python.org
Thu Oct 31 23:48:52 CET 2013


http://hg.python.org/peps/rev/9e960cac9e94
changeset:   5240:9e960cac9e94
user:        Guido van Rossum <guido at dropbox.com>
date:        Thu Oct 31 15:48:48 2013 -0700
summary:
  Huge update to various parts of the spec. Still not done.

files:
  pep-3156.txt |  630 +++++++++++++++++++++++---------------
  1 files changed, 372 insertions(+), 258 deletions(-)


diff --git a/pep-3156.txt b/pep-3156.txt
--- a/pep-3156.txt
+++ b/pep-3156.txt
@@ -39,7 +39,7 @@
 force acceptance of the PEP.  The expectation is that the package will
 keep provisional status in Python 3.4 and progress to final status in
 Python 3.5.  Development continues to occur primarily in the Tulip
-repo.
+repo, with changes occasionally merged into the CPython repo.
 
 Dependencies
 ------------
@@ -47,7 +47,7 @@
 Python 3.3 is required for many of the proposed features.  The
 reference implementation (Tulip) requires no new language or standard
 library features beyond Python 3.3, no third-party modules or
-packages, and no C code, except for the proactor-based event loop on
+packages, and no C code, except for the (optional) IOCP support on
 Windows.
 
 Module Namespace
@@ -99,10 +99,10 @@
 implementation.
 
 In order to support both directions of adaptation, two separate APIs
-are defined:
+are specified:
 
-- getting and setting the current event loop object
-- the interface of a conforming event loop and its minimum guarantees
+- An interface for managing the current event loop
+- The interface of a conforming event loop
 
 An event loop implementation may provide additional methods and
 guarantees, as long as these are called out in the documentation as
@@ -110,17 +110,19 @@
 methods unimplemented if they cannot be implemented in the given
 environment; however, such deviations from the standard API should be
 considered only as a last resort, and only if the platform or
-environment forces the issue.  (An example could be a platform where
-there is a system event loop that cannot be started or stopped.)
+environment forces the issue.  (An example would be a platform where
+there is a system event loop that cannot be started or stopped; see
+"Embedded Event Loops" below.)
 
 The event loop API does not depend on ``yield from``.  Rather, it uses
 a combination of callbacks, additional interfaces (transports and
 protocols), and Futures.  The latter are similar to those defined in
 PEP 3148, but have a different implementation and are not tied to
-threads.  In particular, they have no wait() method; the user is
-expected to use callbacks.
+threads.  In particular, the ``result()`` method raises an exception
+instead of blocking when a result is not yet ready; the user is
+expected to use callbacks (or ``yield from``) to wait for the result.
 
-All event loop methods documented as returning a coroutine are allowed
+All event loop methods specified as returning a coroutine are allowed
 to return either a Future or a coroutine, at the implementation's
 choice (the standard implementation always returns coroutines).  All
 event loop methods documented as accepting coroutine arguments *must*
@@ -180,10 +182,11 @@
 that can be viewed this way, for example an SSH session or a pair of
 UNIX pipes.  Typically there aren't many different transport
 implementations, and most of them come with the event loop
-implementation.  (But there is no requirement that all transports must
-be created by calling an event loop method -- a third party module may
-well implement a new transport and provide a constructor or factory
-function for it that simply takes an event loop as an argument.)
+implementation.  However, there is no requirement that all transports
+must be created by calling an event loop method: a third party module
+may well implement a new transport and provide a constructor or
+factory function for it that simply takes an event loop as an argument
+or calls ``get_event_loop()``.
 
 Note that transports don't need to use sockets, not even if they use
 TCP -- sockets are a platform-specific implementation detail.
@@ -233,6 +236,7 @@
 document their policy and at what point during their initialization
 sequence the policy is set, in order to avoid undefined behavior when
 multiple active frameworks want to override the default policy.
+(See also "Embedded Event Loops" below.)
 
 To get the event loop for current context, use ``get_event_loop()``.
 This returns an event loop object implementing the interface specified
@@ -280,6 +284,28 @@
 ``DefaultEventLoopPolicy``.  The current event loop policy object can
 be retrieved by calling ``get_event_loop_policy()``.
 
+Specifying Times
+----------------
+
+As usual in Python, all timeouts, intervals and delays are measured in
+seconds, and may be ints or floats.  However, absolute times are not
+specified as POSIX timestamps.  The accuracy, precision and epoch of
+the clock are up to the implementation.
+
+The default implementation uses ``time.monotonic()``.  Books could be
+written about the implications of this choice.  Better read the docs
+for the standard library ``time`` module.
+
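As an illustrative sketch (using the stdlib ``asyncio`` package that implements this spec), delays are expressed in seconds relative to the loop's own clock, not as POSIX timestamps:

```python
import asyncio

loop = asyncio.new_event_loop()
t0 = loop.time()                  # the loop's clock, not a POSIX timestamp
loop.call_later(0.05, loop.stop)  # delay in seconds; floats are fine
loop.run_forever()
elapsed = loop.time() - t0        # measured on the same monotonic clock
loop.close()
print(elapsed >= 0.04)            # True
```

Because ``loop.time()`` and the scheduled delay use the same clock, the elapsed time is meaningful even if the system wall clock is adjusted meanwhile.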
+Embedded Event Loops
+--------------------
+
+On some platforms an event loop is provided by the system.  Such a
+loop may already be running when the user code starts, and there may
+be no way to stop or close it without exiting from the program.  In
+this case, the methods for starting, stopping and closing the event
+loop may not be implementable, and ``is_running()`` may always return
+``True``.
+
 Event Loop Classes
 ------------------
 
@@ -300,89 +326,59 @@
   API for "overlapped I/O").  The constructor takes one optional
   argument, a ``Proactor`` object.  By default an instance of
   ``IocpProactor`` is created and used.  (The ``IocpProactor`` class
-  is not specified by this PEP.  Its inclusion in the standard library
-  is not currently under consideration; it is just an implementation
+  is not specified by this PEP; it is just an implementation
   detail of the ``ProactorEventLoop`` class.)
 
 Event Loop Methods Overview
 ---------------------------
 
 The methods of a conforming event loop are grouped into several
-categories.  A brief overview of the categories.  The first set of
-categories must be supported by all conforming event loop
-implementations.  (However, in some cases a partially-conforming
-implementation may choose not to implement the internet/socket
-methods, and still conform to the other methods.  Also, it is possible
-that a partially-conforming implementation has no way to create,
-close, start, or stop a loop.)
+categories.  The first set of categories must be supported by all
+conforming event loop implementations, with the exception that
+embedded event loops may not implement the methods for starting,
+stopping and closing.  (However, a partially-conforming event loop is
+still better than nothing. :-)
 
-- Miscellaneous: ``close()``, ``time()``.
+- Starting, stopping and closing: ``run_forever()``,
+  ``run_until_complete()``, ``stop()``, ``is_running()``, ``close()``.
 
-- Starting and stopping: ``run_forever()``, ``run_until_complete()``,
-  ``stop()``, ``is_running()``.
-
-- Basic callbacks: ``call_soon()``, ``call_later()``, ``call_at()``.
+- Basic and timed callbacks: ``call_soon()``, ``call_later()``,
+  ``call_at()``, ``time()``.
 
 - Thread interaction: ``call_soon_threadsafe()``,
   ``run_in_executor()``, ``set_default_executor()``.
 
 - Internet name lookups: ``getaddrinfo()``, ``getnameinfo()``.
 
-- Internet connections: ``create_connection()``, ``start_serving()``,
-  ``stop_serving()``, ``create_datagram_endpoint()``.
+- Internet connections: ``create_connection()``, ``create_server()``,
+  ``create_datagram_endpoint()``.
 
 - Wrapped socket methods: ``sock_recv()``, ``sock_sendall()``,
   ``sock_connect()``, ``sock_accept()``.
 
 The second set of categories *may* be supported by conforming event
 loop implementations.  If not supported, they will raise
-``NotImplementedError``.  (In the current state of Tulip,
+``NotImplementedError``.  (In the default implementation,
 ``SelectorEventLoop`` on UNIX systems supports all of these;
 ``SelectorEventLoop`` on Windows supports the I/O event handling
-category; ``ProactorEventLoop`` on Windows supports None.  The
-intention is to add support for pipes and subprocesses on Windows as
-well, using the ``subprocess`` module in the standard library.)
+category; ``ProactorEventLoop`` on Windows supports the pipes and
+subprocess category.)
 
 - I/O callbacks: ``add_reader()``, ``remove_reader()``,
   ``add_writer()``, ``remove_writer()``.
 
 - Pipes and subprocesses: ``connect_read_pipe()``,
-  ``connect_write_pipe()``, ``spawn_subprocess()``.
+  ``connect_write_pipe()``, ``subprocess_shell()``,
+  ``subprocess_exec()``.
 
 - Signal callbacks: ``add_signal_handler()``,
   ``remove_signal_handler()``.
 
-Specifying Times
-----------------
+Event Loop Methods
+------------------
 
-As usual in Python, all timeouts, intervals and delays are measured in
-seconds, and may be ints or floats.  The accuracy and precision of the
-clock are up to the implementation; the default implementation uses
-``time.monotonic()``.  Books could be written about the implications
-of this choice.  Better read the docs for the stdandard library
-``time`` module.
-
-Required Event Loop Methods
----------------------------
-
-Miscellaneous
-'''''''''''''
-
-- ``close()``.  Closes the event loop, releasing any resources it may
-  hold, such as the file descriptor used by ``epoll()`` or
-  ``kqueue()``.  This should not be called while the event loop is
-  running.  After it has been called the event loop may not be used
-  again.  It may be called multiple times; subsequent calls are
-  no-ops.
-
-- ``time()``.  Returns the current time according to the event loop's
-  clock.  This may be ``time.time()`` or ``time.monotonic()`` or some
-  other system-specific clock, but it must return a float expressing
-  the time in units of approximately one second since some epoch.
-  (No clock is perfect -- see PEP 418.)
-
-Starting and Stopping
-'''''''''''''''''''''
+Starting, Stopping and Closing
+''''''''''''''''''''''''''''''
 
 An (unclosed) event loop can be in one of two states: running or
 stopped.  These methods deal with starting and stopping an event loop:
@@ -412,9 +408,14 @@
   ``call_soon()``) it processes before stopping.
 
 - ``is_running()``.  Returns ``True`` if the event loop is currently
-  running, ``False`` if it is stopped.  (TBD: Do we need another
-  inquiry method to tell whether the loop is in the process of
-  stopping?)
+  running, ``False`` if it is stopped.
+
+- ``close()``.  Closes the event loop, releasing any resources it may
+  hold, such as the file descriptor used by ``epoll()`` or
+  ``kqueue()``, and the default executor.  This should not be called
+  while the event loop is running.  After it has been called the event
+  loop should not be used again.  It may be called multiple times;
+  subsequent calls are no-ops.
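A minimal sketch of the lifecycle described above, using the stdlib ``asyncio`` implementation:

```python
import asyncio

loop = asyncio.new_event_loop()
ran = []
loop.call_soon(ran.append, "ran")  # queued before the loop starts
loop.call_soon(loop.stop)          # makes run_forever() return
loop.run_forever()                 # runs the queued callbacks, then stops
running = loop.is_running()        # False again once stopped
loop.close()                       # releases resources
loop.close()                       # a second close() is a no-op
print(ran, running)
```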
 
 Basic Callbacks
 '''''''''''''''
@@ -426,24 +427,30 @@
 shared state isn't changed by another callback.
 
 - ``call_soon(callback, *args)``.  This schedules a callback to be
-  called as soon as possible.  Returns a Handle representing the
-  callback, whose ``cancel()`` method can be used to cancel the
-  callback.  It guarantees that callbacks are called in the order in
-  which they were scheduled.
+  called as soon as possible.  Returns a ``Handle`` (see below)
+  representing the callback, whose ``cancel()`` method can be used to
+  cancel the callback.  It guarantees that callbacks are called in the
+  order in which they were scheduled.
 
 - ``call_later(delay, callback, *args)``.  Arrange for
   ``callback(*args)`` to be called approximately ``delay`` seconds in
-  the future, once, unless cancelled.  Returns a Handle representing
+  the future, once, unless cancelled.  Returns a ``Handle`` representing
   the callback, whose ``cancel()`` method can be used to cancel the
   callback.  Callbacks scheduled in the past or at exactly the same
   time will be called in an undefined order.
 
 - ``call_at(when, callback, *args)``.  This is like ``call_later()``,
   but the time is expressed as an absolute time.  Returns a similar
-  Handle.  There is a simple equivalency: ``loop.call_later(delay,
+  ``Handle``.  There is a simple equivalence: ``loop.call_later(delay,
   callback, *args)`` is the same as ``loop.call_at(loop.time() +
   delay, callback, *args)``.
 
+- ``time()``.  Returns the current time according to the event loop's
+  clock.  This may be ``time.time()`` or ``time.monotonic()`` or some
+  other system-specific clock, but it must return a float expressing
+  the time in units of approximately one second since some epoch.
+  (No clock is perfect -- see PEP 418.)
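The ordering guarantee, the ``call_at()``/``call_later()`` equivalence, and cancellation via the returned ``Handle`` can be sketched as follows (stdlib ``asyncio`` implementation):

```python
import asyncio

loop = asyncio.new_event_loop()
order = []
loop.call_soon(order.append, "first")    # call_soon() preserves FIFO order
loop.call_soon(order.append, "second")
# call_at(loop.time() + delay, ...) is equivalent to call_later(delay, ...):
loop.call_at(loop.time() + 0.01, order.append, "timed")
handle = loop.call_later(0.01, order.append, "never")
handle.cancel()                          # the Handle cancels its callback
loop.call_later(0.05, loop.stop)
loop.run_forever()
loop.close()
print(order)                             # ['first', 'second', 'timed']
```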
+
 Note: A previous version of this PEP defined a method named
 ``call_repeatedly()``, which promised to call a callback at regular
 intervals.  This has been withdrawn because the design of such a
@@ -464,22 +471,25 @@
 - ``call_soon_threadsafe(callback, *args)``.  Like
   ``call_soon(callback, *args)``, but when called from another thread
   while the event loop is blocked waiting for I/O, unblocks the event
-  loop.  This is the *only* method that is safe to call from another
-  thread.  (To schedule a callback for a later time in a threadsafe
-  manner, you can use ``loop.call_soon_threadsafe(loop.call_later,
-  when, callback, *args)``.)  Note: this is not safe to call from a
-  signal handler (since it may use locks).  In fact, no API is
-  signal-safe; if you want to handle signals, use
-  ``add_signal_handler()`` described below.
+  loop.  Returns a ``Handle``.  This is the *only* method that is safe
+  to call from another thread.  (To schedule a callback for a later
+  time in a threadsafe manner, you can use
+  ``loop.call_soon_threadsafe(loop.call_later, when, callback,
+  *args)``.)  Note: this is not safe to call from a signal handler
+  (since it may use locks).  In fact, no API is signal-safe; if you
+  want to handle signals, use ``add_signal_handler()`` described
+  below.
 
 - ``run_in_executor(executor, callback, *args)``.  Arrange to call
-  ``callback(*args)`` in an executor (see PEP 3148).  Returns a Future
-  whose result on success is the return value of that call.  This is
-  equivalent to ``wrap_future(executor.submit(callback, *args))``.  If
-  ``executor`` is ``None``, the default executor set by
-  ``set_default_executor()`` is used.  If no default executor has been
-  set yet, a ``ThreadPoolExecutor`` with 5 threads is created and set
-  as the default executor.
+  ``callback(*args)`` in an executor (see PEP 3148).  Returns an
+  ``asyncio.Future`` instance whose result on success is the return
+  value of that call.  This is equivalent to
+  ``wrap_future(executor.submit(callback, *args))``.  If ``executor``
+  is ``None``, the default executor set by ``set_default_executor()``
+  is used.  If no default executor has been set yet, a
+  ``ThreadPoolExecutor`` with a default number of threads is created
+  and set as the default executor.  (The default implementation uses
+  5 threads in this case.)
 
 - ``set_default_executor(executor)``.  Set the default executor used
   by ``run_in_executor()``.  The argument must be a PEP 3148
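For illustration, the executor mechanism can be sketched as below (``blocking_call`` is a hypothetical name; passing ``None`` selects the default executor):

```python
import asyncio
import threading

loop = asyncio.new_event_loop()

def blocking_call():
    # runs in a worker thread, so the event loop itself is never blocked
    return threading.current_thread() is not threading.main_thread()

fut = loop.run_in_executor(None, blocking_call)  # None: default executor
ran_in_worker = loop.run_until_complete(fut)     # a Future, so this works
loop.close()
print(ran_in_worker)                             # True
```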
@@ -495,7 +505,7 @@
 These methods are useful if you want to connect or bind a socket to an
 address without the risk of blocking for the name lookup.  They are
 usually called implicitly by ``create_connection()``,
-``start_serving()`` or ``create_datagram_endpoint()``.
+``create_server()`` or ``create_datagram_endpoint()``.
 
 - ``getaddrinfo(host, port, family=0, type=0, proto=0, flags=0)``.
   Similar to the ``socket.getaddrinfo()`` function but returns a
@@ -545,8 +555,8 @@
 - ``create_connection(protocol_factory, host, port, <options>)``.
   Creates a stream connection to a given internet host and port.  This
   is a task that is typically called from the client side of the
-  connection.  It creates an implementation-dependent (bidirectional
-  stream) Transport to represent the connection, then calls
+  connection.  It creates an implementation-dependent bidirectional
+  stream Transport to represent the connection, then calls
   ``protocol_factory()`` to instantiate (or retrieve) the user's
   Protocol implementation, and finally ties the two together.  (See
   below for the definitions of Transport and Protocol.)  The user's
@@ -560,7 +570,7 @@
 
   (*) There is no requirement that ``protocol_factory`` is a class.
   If your protocol class needs to have specific arguments passed to
-  its constructor, you can use ``lambda`` or ``functools.partial()``.
+  its constructor, you can use ``lambda``.
   You can also pass a trivial ``lambda`` that returns a previously
   constructed Protocol instance.
 
@@ -568,14 +578,13 @@
 
   - ``ssl``: Pass ``True`` to create an SSL/TLS transport (by default
     a plain TCP transport is created).  Or pass an ``ssl.SSLContext``
-    object (or something else that implements the same interface) to
-    override the default SSL context object to be used.  If a default
-    context is created it is up to the implementation to configure
-    reasonable defaults.  The reference implementation currently uses
-    ``PROTOCOL_SSLv23`` and sets the ``OP_NO_SSLv2`` option, calls
-    ``set_default_verify_paths()`` and sets verify_mode to
-    ``CERT_REQUIRED``.  In addition, whenever the context (default or
-    otherwise) specifies a verify_mode of ``CERT_REQUIRED`` or
+    object to override the default SSL context object to be used.  If
+    a default context is created it is up to the implementation to
+    configure reasonable defaults.  The reference implementation
+    currently uses ``PROTOCOL_SSLv23`` and sets the ``OP_NO_SSLv2``
+    option, calls ``set_default_verify_paths()`` and sets verify_mode
+    to ``CERT_REQUIRED``.  In addition, whenever the context (default
+    or otherwise) specifies a verify_mode of ``CERT_REQUIRED`` or
     ``CERT_OPTIONAL``, if a hostname is given, immediately after a
     successful handshake ``ssl.match_hostname(peercert, hostname)`` is
    called, and if this raises an exception the connection is closed.
@@ -592,9 +601,9 @@
     Protocol concept or the ``protocol_factory`` argument.
 
   - ``sock``: An optional socket to be used instead of using the
-    ``host``, ``port``, ``family``, ``proto``, and ``flags``
-    arguments.  If this is given, host and port must be omitted;
-    otherwise, host and port are required.
+    ``host``, ``port``, ``family``, ``proto`` and ``flags``
+    arguments.  If this is given, ``host`` and ``port`` must be
+    explicitly set to ``None``.
 
   - ``local_addr``: If given, a ``(host, port)`` tuple used to bind
     the socket to locally.  This is rarely needed but on multi-homed
@@ -602,21 +611,31 @@
     specific address.  This is how you would do that.  The host and
     port are looked up using ``getaddrinfo()``.
 
+  - ``server_hostname``: This is only relevant when using SSL/TLS; it
+    should not be used when ``ssl`` is not set.  When ``ssl`` is set,
+    this sets or overrides the hostname that will be verified.  By
+    default the value of the ``host`` argument is used.  If ``host``
+    is empty, there is no default and you must pass a value for
+    ``server_hostname``.  To disable hostname verification (which is a
+    serious security risk) you must pass an empty string here and pass
+    an ``ssl.SSLContext`` object whose ``verify_mode`` is set to
+    ``ssl.CERT_NONE`` as the ``ssl`` argument.
+
 - ``create_server(protocol_factory, host, port, <options>)``.
   Enters a serving loop that accepts connections.
   This is a coroutine that completes once the serving loop is set up
   to serve.  The return value is a ``Server`` object which can be used
-  to stop the serving loop in a controlled fashion by calling its
-  ``close()`` method.
+  to stop the serving loop in a controlled fashion (see below).
+  Multiple sockets may be bound if the specified address allows
+  both IPv4 and IPv6 connections.
 
-  Multiple sockets may be bound if the specified address allows
-  both IPv4 and IPv6 connections.  Each time a connection is accepted,
-  ``protocol_factory`` is called without arguments(*) to create a
-  Protocol, a (bidirectional stream) Transport is created to represent
+  Each time a connection is accepted,
+  ``protocol_factory`` is called without arguments(**) to create a
+  Protocol, a bidirectional stream Transport is created to represent
   the network side of the connection, and the two are tied together by
   calling ``protocol.connection_made(transport)``.
 
-  (*) See footnote above for ``create_connection()``.  However, since
+  (**) See previous footnote for ``create_connection()``.  However, since
   ``protocol_factory()`` is called once for each new incoming
   connection, it should return a new Protocol object each time it is
   called.
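Tying ``create_server()`` and ``create_connection()`` together, a loopback echo exchange might look like this sketch (port ``0`` asks the OS for a free port; the protocol class names are illustrative):

```python
import asyncio

class Echo(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
    def data_received(self, data):
        self.transport.write(data)        # echo everything back

class Client(asyncio.Protocol):
    def __init__(self, done):
        self.done = done                  # Future resolved with the reply
    def connection_made(self, transport):
        transport.write(b"ping")
    def data_received(self, data):
        self.done.set_result(data)

loop = asyncio.new_event_loop()
server = loop.run_until_complete(
    loop.create_server(Echo, "127.0.0.1", 0))   # port 0: OS picks a port
port = server.sockets[0].getsockname()[1]
done = loop.create_future()
transport, _ = loop.run_until_complete(
    loop.create_connection(lambda: Client(done), "127.0.0.1", port))
reply = loop.run_until_complete(done)
transport.close()
server.close()
loop.run_until_complete(server.wait_closed())
loop.close()
print(reply)                                    # b'ping'
```

Note that ``Echo`` is passed directly as the ``protocol_factory`` (it is called without arguments per connection), while the client uses a ``lambda`` to bind the ``done`` future.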
@@ -625,12 +644,13 @@
 
   - ``ssl``: Pass an ``ssl.SSLContext`` object (or an object with the
     same interface) to override the default SSL context object to be
-    used.  (Unlike ``create_connection()``, passing ``True`` does not
-    make sense -- the ``SSLContext`` object is required to specify the
-    certificate and key.)
+    used.  (Unlike for ``create_connection()``, passing ``True`` does
+    not make sense here -- the ``SSLContext`` object is needed to
+    specify the certificate and key.)
 
   - ``backlog``: Backlog value to be passed to the ``listen()`` call.
-    Defaults to ``100``.
+    The default is implementation-dependent; in the default
+    implementation the default value is ``100``.
 
   - ``reuse_address``: Whether to set the ``SO_REUSEADDR`` option on
     the socket.  The default is ``True`` on UNIX, ``False`` on
@@ -643,21 +663,11 @@
      ``0``, to let ``getaddrinfo()`` choose.)
 
   - ``sock``: An optional socket to be used instead of using the
-    ``host``, ``port``, ``family``, and ``flags`` arguments.  If this
-    is given, host and port must be omitted; otherwise, host and port
-    are required.  The return value will be the one-element list
-    ``[sock]``.
-
-- ``stop_serving(sock)``.  The argument should be a socket from the
-  list returned by ``start_serving()``.  The serving loop associated
-  with that socket will be stopped.  Connections that have already
-  been accepted will not be affected.  (TBD: What if
-  ``start_serving()`` doesn't use sockets?  Then it should probably
-  return a list of opaque objects that can be passed to
-  ``stop_serving()``.)
+    ``host``, ``port``, ``family`` and ``flags`` arguments.  If this
+    is given, ``host`` and ``port`` must be explicitly set to ``None``.
 
 - ``create_datagram_endpoint(protocol_factory, local_addr=None,
-  remote_addr=None, **kwds)``.  Creates an endpoint for sending and
+  remote_addr=None, <options>)``.  Creates an endpoint for sending and
   receiving datagrams (typically UDP packets).  Because of the nature
   of datagram traffic, there are no separate calls to set up client
   and server side, since usually a single endpoint acts as both client
@@ -666,16 +676,19 @@
   the coroutine returns successfully, the transport will call
   callbacks on the protocol whenever a datagram is received or the
   socket is closed; it is up to the protocol to call methods on the
-  protocol to send datagrams.  Note that the transport and protocol
-  interfaces used here are different than those for stream
-  connections.
+  protocol to send datagrams.  The transport returned is a
+  ``DatagramTransport``.  The protocol returned is a
+  ``DatagramProtocol``.  These are described later.
 
-  Arguments:
+  Mandatory positional argument:
 
   - ``protocol_factory``: A class or factory function that will be
-    called, without arguments, to construct the protocol object to be
-    returned.  The interface between datagram transport and protocol
-    is described below.
+    called exactly once, without arguments, to construct the protocol
+    object to be returned.  The interface between datagram transport
+    and protocol is described below.
+
+  Optional arguments that may be specified positionally or as keyword
+  arguments:
 
   - ``local_addr``: An optional tuple indicating the address to which
     the socket will be bound.  If given this must be a ``(host,
@@ -693,7 +706,9 @@
     to be resolved and the result will be passed to ``sock_connect()``
     together with the socket created.  If ``getaddrinfo()`` returns
     more than one address, they will be tried in turn.  If omitted,
-    no ``sock_connect()`` will be made.
+    no ``sock_connect()`` call will be made.
+
+  The <options> are all specified using optional keyword arguments:
 
   - ``family``, ``proto``, ``flags``: Address family, protocol and
     flags to be passed through to ``getaddrinfo()``.  These all
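The datagram methods above can be exercised as a loopback echo sketch (again, the protocol class names here are illustrative):

```python
import asyncio

class EchoServer(asyncio.DatagramProtocol):
    def connection_made(self, transport):
        self.transport = transport
    def datagram_received(self, data, addr):
        self.transport.sendto(data, addr)       # echo back to the sender

class EchoClient(asyncio.DatagramProtocol):
    def __init__(self, done):
        self.done = done
    def connection_made(self, transport):
        transport.sendto(b"ping")               # remote_addr was given
    def datagram_received(self, data, addr):
        self.done.set_result(data)

loop = asyncio.new_event_loop()
server_tr, _ = loop.run_until_complete(
    loop.create_datagram_endpoint(EchoServer, local_addr=("127.0.0.1", 0)))
port = server_tr.get_extra_info("sockname")[1]
done = loop.create_future()
client_tr, _ = loop.run_until_complete(
    loop.create_datagram_endpoint(lambda: EchoClient(done),
                                  remote_addr=("127.0.0.1", port)))
reply = loop.run_until_complete(done)
client_tr.close()
server_tr.close()
loop.close()
print(reply)
```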
@@ -737,9 +752,6 @@
   where ``conn`` is a connected non-blocking socket and ``peer`` is
   the peer address.
 
-Optional Event Loop Methods
----------------------------
-
 I/O Callbacks
 '''''''''''''
 
@@ -756,7 +768,7 @@
 and possibly tty devices are also supported, but disk files are not.
 Exactly which special file types are supported may vary by platform
 and per selector implementation.  (Experimentally, there is at least
-one kind of pseudo-tty on OSX that is supported by ``select`` and
+one kind of pseudo-tty on OS X that is supported by ``select`` and
 ``poll`` but not by ``kqueue``: it is used by Emacs shell windows.)
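For illustration, a connected socket pair (one supported file type) can be monitored with the I/O callback methods in this category:

```python
import asyncio
import socket

loop = asyncio.new_event_loop()
rsock, wsock = socket.socketpair()     # stands in for any supported FD
rsock.setblocking(False)
received = []

def on_readable():
    received.append(rsock.recv(100))
    loop.remove_reader(rsock.fileno()) # unregister the callback
    loop.stop()

loop.add_reader(rsock.fileno(), on_readable)
wsock.send(b"data")                    # makes rsock readable
loop.run_forever()
rsock.close()
wsock.close()
loop.close()
print(received)                        # [b'data']
```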
 
 - ``add_reader(fd, callback, *args)``.  Arrange for
@@ -779,33 +791,90 @@
 Pipes and Subprocesses
 ''''''''''''''''''''''
 
-TBD: Should these really take stream objects?  The stream objects are
-not useful for reading or writing because they would cause blocking
-I/O.  This section of the API is clearly not yet ready for review.
+These methods are supported by ``SelectorEventLoop`` on UNIX and
+``ProactorEventLoop`` on Windows.
+
+The transports and protocols used with pipes and subprocesses differ
+from those used with regular stream connections.  These are described
+later.
+
+Each of the methods below has a ``protocol_factory`` argument, similar
+to ``create_connection()``; this will be called exactly once, without
+arguments, to construct the protocol object to be returned.
+
+Each method is a coroutine that returns a ``(transport, protocol)``
+pair on success, or raises an exception on failure.
 
 - ``connect_read_pipe(protocol_factory, pipe)``: Create a
  unidirectional stream connection from a file-like object wrapping the
-  read end of a UNIX pipe.  The protocol/transport interface is the
-  read half of the bidirectional stream interface.
+  read end of a UNIX pipe, which must be in non-blocking mode.  The
+  transport returned is a ``ReadTransport``.
 
 - ``connect_write_pipe(protocol_factory, pipe)``: Create a
  unidirectional stream connection from a file-like object wrapping the
-  write end of a UNIX pipe.  The protocol/transport interface is the
-  write half of the bidirectional stream interface.
+  write end of a UNIX pipe, which must be in non-blocking mode.  The
+  transport returned is a ``WriteTransport``; it does not have any
+  read-related methods.  The protocol returned is a ``BaseProtocol``.
 
-- TBD: A way to run a subprocess with stdin, stdout and stderr
-  connected to pipe transports.  (This is implemented now but not yet
-  documented.)  (TBD: Document that the subprocess's
-  connection_closed() won't be called until the process has exited
-  *and* all pipes are closed, and why -- it's a race condition.)
+- ``subprocess_shell(protocol_factory, cmd, <options>)``: Create a
+  subprocess from ``cmd``, which is a string using the platform's
+  "shell" syntax.  This is similar to the standard library
+  ``subprocess.Popen()`` class called with ``shell=True``.  The
+  remaining arguments and return value are described below.
 
-TBD: offer the same interface on Windows for e.g. named pipes.  (This
-should be possible given that the standard library ``subprocess``
-module is supported on Windows.)
+- ``subprocess_exec(protocol_factory, *args, <options>)``: Create a
+  subprocess from one or more string arguments, where the first string
+  specifies the program to execute, and the remaining strings specify
+  the program's arguments.  (Thus, together the string arguments form
+  the ``sys.argv`` value of the program, assuming it is a Python
+  script.)  This is similar to the standard library
+  ``subprocess.Popen()`` class called with ``shell=False`` and the
+  list of strings passed as the first argument; however, where
+  ``Popen()`` takes a single argument which is a list of strings,
+  ``subprocess_exec()`` takes multiple string arguments.  The
+  remaining arguments and return value are described below.
+
+Apart from the way the program to execute is specified, the two
+``subprocess_*()`` methods behave the same.  The transport returned is
+a ``SubprocessTransport`` which has a different interface than the
+common bidirectional stream transport.  The protocol returned is a
+``SubprocessProtocol`` which also has a custom interface.
+
+The <options> are all specified using optional keyword arguments:
+
+- ``stdin``: Either a file-like object representing the pipe to be
+  connected to the subprocess's standard input stream using
+  ``create_write_pipe()``, or the constant ``subprocess.PIPE`` (the
+  default).  By default a new pipe will be created and connected.
+
+- ``stdout``: Either a file-like object representing the pipe to be
+  connected to the subprocess's standard output stream using
+  ``create_read_pipe()``, or the constant ``subprocess.PIPE`` (the
+  default).  By default a new pipe will be created and connected.
+
+- ``stderr``: Either a file-like object representing the pipe to be
+  connected to the subprocess's standard error stream using
+  ``create_read_pipe()``, or one of the constants ``subprocess.PIPE``
+  (the default) or ``subprocess.STDOUT``.  By default a new pipe will
+  be created and connected.  When ``subprocess.STDOUT`` is specified,
+  the subprocess's standard error stream will be connected to the same
+  pipe as the standard output stream.
+
+- ``bufsize``: The buffer size to be used when creating a pipe; this
+  is passed to ``subprocess.Popen()``.  In the default implementation
+  this defaults to zero, and on Windows it must be zero; these
+  defaults deviate from ``subprocess.Popen()``.
+
+- ``executable``, ``preexec_fn``, ``close_fds``, ``cwd``, ``env``,
+  ``startupinfo``, ``creationflags``, ``restore_signals``,
+  ``start_new_session``, ``pass_fds``: These optional arguments are
+  passed to ``subprocess.Popen()`` without interpretation.
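A sketch of ``subprocess_exec()`` with a minimal ``SubprocessProtocol`` that collects stdout; it waits for both process exit and pipe closure, since those two events may arrive in either order (the protocol class name is illustrative):

```python
import asyncio
import sys

class CollectStdout(asyncio.SubprocessProtocol):
    def __init__(self, done):
        self.done = done
        self.chunks = []
        self.exited = False
        self.stdout_closed = False

    def pipe_data_received(self, fd, data):
        if fd == 1:                     # fd 1 is the child's stdout
            self.chunks.append(data)

    def pipe_connection_lost(self, fd, exc):
        if fd == 1:
            self.stdout_closed = True
            self._maybe_finish()

    def process_exited(self):
        self.exited = True
        self._maybe_finish()

    def _maybe_finish(self):
        # exit and pipe EOF can arrive in either order; wait for both
        if self.exited and self.stdout_closed and not self.done.done():
            self.done.set_result(b"".join(self.chunks))

loop = asyncio.new_event_loop()
done = loop.create_future()
transport, protocol = loop.run_until_complete(
    loop.subprocess_exec(lambda: CollectStdout(done),
                         sys.executable, "-c", "print('hello')"))
output = loop.run_until_complete(done)
transport.close()
loop.close()
print(output.decode().strip())
```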
 
 Signal callbacks
 ''''''''''''''''
 
+These methods are only supported on UNIX.
+
 - ``add_signal_handler(sig, callback, *args)``.  Whenever signal
   ``sig`` is received, arrange for ``callback(*args)`` to be called.
   Specifying another callback for the same signal replaces the
@@ -826,36 +895,17 @@
   handler was set.
 
 Note: If these methods are statically known to be unsupported, they
-may return ``NotImplementedError`` instead of ``RuntimeError``.
+may raise ``NotImplementedError`` instead of ``RuntimeError``.
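A sketch of the signal methods (UNIX only; the process signals itself with ``os.kill()`` for demonstration):

```python
import asyncio
import os
import signal

loop = asyncio.new_event_loop()
caught = []

def on_usr1():
    caught.append("SIGUSR1")
    loop.stop()

loop.add_signal_handler(signal.SIGUSR1, on_usr1)
loop.call_soon(os.kill, os.getpid(), signal.SIGUSR1)  # signal ourselves
loop.run_forever()                                    # handler runs, stops loop
removed = loop.remove_signal_handler(signal.SIGUSR1)  # True: handler was set
loop.close()
print(caught, removed)
```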
 
-Callback Sequencing
--------------------
+Mutual Exclusion of Callbacks
+-----------------------------
 
-When two callbacks are scheduled for the same time, they are run
-in the order in which they are registered.  For example::
-
-  loop.call_soon(foo)
-  loop.call_soon(bar)
-
-guarantees that ``foo()`` is called before ``bar()``.
-
-If ``call_soon()`` is used, this guarantee is true even if the system
-clock were to run backwards.  This is also the case for
-``call_later(0, callback, *args)``.  However, if ``call_later()`` is
-used with a nonzero delay, all bets are off if the system
-clock were to runs backwards.  (A good event loop implementation
-should use ``time.monotonic()`` to avoid problems when the clock runs
-backward.  See PEP 418.)
-
-Context
--------
-
-All event loops have a notion of context.  For the default event loop
-implementation, the context is a thread.  An event loop implementation
-should run all callbacks in the same context.  An event loop
-implementation should run only one callback at a time, so callbacks
-can assume automatic mutual exclusion with other callbacks scheduled
-in the same event loop.
+An event loop should enforce mutual exclusion of callbacks, i.e. it
+should never start a callback while a previous callback is still
+running.  This should apply across all types of callbacks, regardless
+of whether they are scheduled using ``call_soon()``, ``call_later()``,
+``call_at()``, ``call_soon_threadsafe()``, ``add_reader()``,
+``add_writer()``, or ``add_signal_handler()``.
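
In addition, ``call_soon()`` runs callbacks in registration order, so
each completes before the next begins; a minimal sketch::

```python
import asyncio

order = []
loop = asyncio.new_event_loop()

def make_callback(name):
    def callback():
        # Callbacks are mutually exclusive: each runs to completion
        # before the loop starts the next one.
        order.append(name)
    return callback

loop.call_soon(make_callback("first"))
loop.call_soon(make_callback("second"))
loop.call_later(0.01, loop.stop)
loop.run_forever()
loop.close()
print(order)  # ['first', 'second']
```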
 
 Exceptions
 ----------
@@ -867,11 +917,11 @@
 be passed through by Futures, and they will be logged and ignored when
 they occur in a callback.
 
-However, exceptions deriving only from ``BaseException`` are never
-caught, and will usually cause the program to terminate with a
-traceback.  (Examples of this category include ``KeyboardInterrupt``
-and ``SystemExit``; it is usually unwise to treat these the same as
-most other exceptions.)
+However, exceptions deriving only from ``BaseException`` are typically
+not caught, and will usually cause the program to terminate with a
+traceback.  In some cases they are caught and re-raised.  (Examples of
+this category include ``KeyboardInterrupt`` and ``SystemExit``; it is
+usually unwise to treat these the same as most other exceptions.)
 
 Handles
 -------
@@ -880,19 +930,35 @@
 (``call_soon()``, ``call_later()``, ``call_at()`` and
 ``call_soon_threadsafe()``) all return an object representing the
 registration that can be used to cancel the callback.  This object is
-called a Handle (although its class name is not necessarily
-``Handle``).  Handles are opaque and have only one public method:
+called a ``Handle``.  Handles are opaque and have only one public
+method:
 
-- ``cancel()``.  Cancel the callback.
+- ``cancel()``: Cancel the callback.
 
+Note that ``add_reader()``, ``add_writer()`` and
+``add_signal_handler()`` do not return Handles.
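
A sketch of cancelling a scheduled callback through its Handle::

```python
import asyncio

calls = []
loop = asyncio.new_event_loop()

# Schedule a callback, then cancel it before it fires.
handle = loop.call_later(0.01, calls.append, "ran")
handle.cancel()  # the callback will never be invoked

loop.call_later(0.05, loop.stop)
loop.run_forever()
loop.close()
print(calls)  # []
```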
+
+Servers
+-------
+
+The ``create_server()`` method returns a ``Server`` instance, which
+wraps the sockets (or other network objects) used to accept requests.
+This class has two public methods:
+
+- ``close()``: Close the service.  This stops accepting new requests
+  but does not cancel requests that have already been accepted and are
+  currently being handled.
+
+- ``wait_closed()``: A coroutine that blocks until the service is
+  closed and all accepted requests have been handled.
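
A sketch of the shutdown sequence, written in the ``async``/``await``
spelling that post-dates this draft (which uses ``yield from``)::

```python
import asyncio

async def demo():
    loop = asyncio.get_running_loop()
    # A trivial protocol factory; the server is closed before any
    # connection is accepted.
    server = await loop.create_server(asyncio.Protocol, "127.0.0.1", 0)
    host = server.sockets[0].getsockname()[0]
    server.close()              # stop accepting new connections
    await server.wait_closed()  # wait until fully closed
    return host

result = asyncio.run(demo())
print(result)  # 127.0.0.1
```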
 
 Futures
 -------
 
-The ``tulip.Future`` class here is intentionally similar to the
+The ``asyncio.Future`` class here is intentionally similar to the
 ``concurrent.futures.Future`` class specified by PEP 3148, but there
 are slight differences.  Whenever this PEP talks about Futures or
-futures this should be understood to refer to ``tulip.Future`` unless
+futures this should be understood to refer to ``asyncio.Future`` unless
 ``concurrent.futures.Future`` is explicitly mentioned.  The supported
 public API is as follows, indicating the differences with PEP 3148:
 
@@ -968,12 +1034,12 @@
 A Future is associated with the default event loop when it is created.
 (TBD: Optionally pass in an alternative event loop instance?)
 
-A ``tulip.Future`` object is not acceptable to the ``wait()`` and
+An ``asyncio.Future`` object is not acceptable to the ``wait()`` and
 ``as_completed()`` functions in the ``concurrent.futures`` package.
-However, there are similar APIs ``tulip.wait()`` and
-``tulip.as_completed()``, described below.
+However, there are similar APIs ``asyncio.wait()`` and
+``asyncio.as_completed()``, described below.
 
-A ``tulip.Future`` object is acceptable to a ``yield from`` expression
+An ``asyncio.Future`` object is acceptable to a ``yield from`` expression
 when used in a coroutine.  This is implemented through the
 ``__iter__()`` interface on the Future.  See the section "Coroutines
 and the Scheduler" below.
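
A sketch of a coroutine suspending on a Future until another callback
fulfils it (using the ``await`` spelling that post-dates this draft)::

```python
import asyncio

async def demo():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # Some other callback fulfils the Future later; awaiting it
    # suspends this coroutine until the result is set.
    loop.call_soon(fut.set_result, 42)
    return await fut

value = asyncio.run(demo())
print(value)  # 42
```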
@@ -981,9 +1047,9 @@
 When a Future is garbage-collected, if it has an associated exception
 but neither ``result()`` nor ``exception()`` nor ``__iter__()`` has
 ever been called (or the latter hasn't raised the exception yet --
-details TBD), the exception should be logged.  TBD: At what level?
+details TBD), the exception is logged.
 
-In the future (pun intended) we may unify ``tulip.Future`` and
+In the future (pun intended) we may unify ``asyncio.Future`` and
 ``concurrent.futures.Future``, e.g. by adding an ``__iter__()`` method
 to the latter that works with ``yield from``.  To prevent accidentally
 blocking the event loop by calling e.g. ``result()`` on a Future
@@ -995,16 +1061,16 @@
 
 There are some public functions related to Futures:
 
-- ``async(arg)``.  This takes an argument that is either a coroutine
-  object or a Future (i.e., anything you can use with ``yield from``)
-  and returns a Future.  If the argument is a Future, it is returned
-  unchanged; if it is a coroutine object, it wraps it in a Task
-  (remember that ``Task`` is a subclass of ``Future``).
+- ``asyncio.async(arg)``.  This takes an argument that is either a
+  coroutine object or a Future (i.e., anything you can use with
+  ``yield from``) and returns a Future.  If the argument is a Future,
+  it is returned unchanged; if it is a coroutine object, it wraps it
+  in a Task (remember that ``Task`` is a subclass of ``Future``).
 
-- ``wrap_future(future)``.  This takes a PEP 3148 Future (i.e., an
-  instance of ``concurrent.futures.Future``) and returns a Future
-  compatible with the event loop (i.e., a ``tulip.Future`` instance).
-
+- ``asyncio.wrap_future(future)``.  This takes a PEP 3148 Future
+  (i.e., an instance of ``concurrent.futures.Future``) and returns a
+  Future compatible with the event loop (i.e., an ``asyncio.Future``
+  instance).
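
A sketch of ``wrap_future()`` bridging an executor's PEP 3148 Future
into the event loop::

```python
import asyncio
import concurrent.futures

async def demo():
    with concurrent.futures.ThreadPoolExecutor() as pool:
        # A PEP 3148 Future produced by an executor...
        cf = pool.submit(pow, 2, 10)
        # ...wrapped so the event loop can wait on it.
        return await asyncio.wrap_future(cf)

result = asyncio.run(demo())
print(result)  # 1024
```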
 
 Transports
 ----------
@@ -1172,7 +1238,7 @@
 
 Datagram transports call the following methods on the associated
 protocol object: ``connection_made()``, ``connection_lost()``,
-``connection_refused()``, and ``datagram_received()``.  ("Connection"
+``connection_refused()`` and ``datagram_received()``.  ("Connection"
 in these method names is a slight misnomer, but the concepts still
 exist: ``connection_made()`` means the transport representing the
 endpoint has been created, and ``connection_lost()`` means the
@@ -1182,6 +1248,11 @@
 feature).  (TBD: Do we need the latter?  It seems easy enough to
 implement this in the protocol if it needs to make the distinction.)
 
+Subprocess Transports
+'''''''''''''''''''''
+
+TBD.
+
 Protocols
 ---------
 
@@ -1275,6 +1346,11 @@
   3. ``connection_refused()`` -- at most once
   4. ``connection_lost()`` -- exactly once
 
+Subprocess Protocol
+'''''''''''''''''''
+
+TBD.
+
 Callback Style
 --------------
 
@@ -1290,10 +1366,9 @@
 the callback.  This allows graceful evolution of the API without
 having to worry about whether a keyword might be significant to a
 callee somewhere.  If you have a callback that *must* be called with a
-keyword argument, you can use a lambda or ``functools.partial``.  For
-example::
+keyword argument, you can use a lambda.  For example::
 
-  loop.call_soon(functools.partial(foo, "abc", repeat=42))
+  loop.call_soon(lambda: foo('abc', repeat=42))
 
 
 Coroutines and the Scheduler
@@ -1310,7 +1385,7 @@
 
 A coroutine is a generator that follows certain conventions.  For
 documentation purposes, all coroutines should be decorated with
-``@tulip.coroutine``, but this cannot be strictly enforced.
+``@asyncio.coroutine``, but this cannot be strictly enforced.
 
 Coroutines use the ``yield from`` syntax introduced in PEP 380,
 instead of the original ``yield`` syntax.
@@ -1319,7 +1394,7 @@
 different (though related) concepts:
 
 - The function that defines a coroutine (a function definition
-  decorated with ``tulip.coroutine``).  If disambiguation is needed
+  decorated with ``asyncio.coroutine``).  If disambiguation is needed
   we will call this a *coroutine function*.
 
 - The object obtained by calling a coroutine function.  This object
@@ -1362,7 +1437,7 @@
 ``wait()`` and ``as_completed()`` APIs in the ``concurrent.futures``
 package are provided:
 
-- ``tulip.wait(fs, timeout=None, return_when=ALL_COMPLETED)``.  This
+- ``asyncio.wait(fs, timeout=None, return_when=ALL_COMPLETED)``.  This
   is a coroutine that waits for the Futures or coroutines given by
   ``fs`` to complete.  Coroutine arguments will be wrapped in Tasks
   (see below).  This returns a Future whose result on success is a
@@ -1389,7 +1464,7 @@
     Futures from the condition is surprising, but PEP 3148 does it
     this way.)
 
-- ``tulip.as_completed(fs, timeout=None)``.  Returns an iterator whose
+- ``asyncio.as_completed(fs, timeout=None)``.  Returns an iterator whose
   values are Futures or coroutines; waiting for successive values
   waits until the next Future or coroutine from the set ``fs``
   completes, and returns its result (or raises its exception).  The
@@ -1406,13 +1481,13 @@
   your ``for`` loop may not make progress (since you are not allowing
   other tasks to run).
 
-- ``tulip.wait_for(f, timeout)``.  This is a convenience to wait for a
+- ``asyncio.wait_for(f, timeout)``.  This is a convenience to wait for a
   single coroutine or Future with a timeout.  It is a simple wrapper
-  around ``tulip.wait()`` with a single item in the first argument,
+  around ``asyncio.wait()`` with a single item in the first argument,
   returning the result or raising the exception if it is completed
   within the timeout, raising ``TimeoutError`` otherwise.
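
A sketch of the timeout behavior::

```python
import asyncio

async def slow():
    await asyncio.sleep(10)
    return "done"

async def demo():
    try:
        # The wrapped coroutine is given only 10 ms to finish.
        return await asyncio.wait_for(slow(), timeout=0.01)
    except asyncio.TimeoutError:
        return "timed out"

result = asyncio.run(demo())
print(result)  # timed out
```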
 
-- ``tulip.gather(f1, f2, ...)``.  Returns a Future which waits until
+- ``asyncio.gather(f1, f2, ...)``.  Returns a Future which waits until
  all arguments (Futures or coroutines) are done and returns a list of
   their corresponding results.  If one or more of the arguments is
   cancelled or raises an exception, the returned Future is cancelled
@@ -1420,12 +1495,12 @@
   argument), and the remaining arguments are left running in the
   background.  Cancelling the returned Future does not affect the
   arguments.  Note that coroutine arguments are converted to Futures
-  using ``tulip.async()``.
+  using ``asyncio.async()``.
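
A sketch showing that results come back in argument order, not
completion order::

```python
import asyncio

async def square(x):
    await asyncio.sleep(0)
    return x * x

async def demo():
    # The returned list matches the order of the arguments.
    return await asyncio.gather(square(2), square(3), square(4))

results = asyncio.run(demo())
print(results)  # [4, 9, 16]
```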
 
 Sleeping
 --------
 
-The coroutine ``sleep(delay)`` returns after a given time delay.
+The coroutine ``asyncio.sleep(delay)`` returns after a given time delay.
 
 (TBD: Should the optional second argument, ``result``, be part of the
 spec?)
@@ -1440,22 +1515,22 @@
 becomes the task's result, if it raises an exception, that becomes the
 task's exception.
 
-Cancelling a task that's not done yet prevents its coroutine from
-completing.  In this case a ``CancelledError`` exception is thrown
-into the coroutine, which it may catch to propagate cancellation to
-other Futures.  If the exception is not caught, the generator will be
-properly finalized anyway, as described in PEP 342.
+Cancelling a task that's not done yet throws an
+``asyncio.CancelledError`` exception into the coroutine.  If the
+coroutine doesn't catch this (or if it re-raises it) the task will be
+marked as cancelled; but if the coroutine somehow catches and ignores
+the exception it may continue to execute.
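
A sketch of the cancellation flow: the exception is thrown into the
coroutine, and re-raising it marks the task as cancelled::

```python
import asyncio

async def worker():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        # Re-raising (or not catching at all) marks the task as
        # cancelled; swallowing it would let the coroutine continue.
        raise

async def demo():
    task = asyncio.ensure_future(worker())
    await asyncio.sleep(0)  # let the task start running
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return task.cancelled()

cancelled = asyncio.run(demo())
print(cancelled)  # True
```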
 
 Tasks are also useful for interoperating between coroutines and
 callback-based frameworks like Twisted.  After converting a coroutine
 into a Task, callbacks can be added to the Task.
 
 To convert a coroutine into a task, call the coroutine function and
-pass the resulting coroutine object to the ``tulip.Task()``
-constructor.  You may also use ``tulip.async()`` for this purpose.
+pass the resulting coroutine object to the ``asyncio.Task()``
+constructor.  You may also use ``asyncio.async()`` for this purpose.
 
 You may ask, why not automatically convert all coroutines to Tasks?
-The ``@tulip.coroutine`` decorator could do this.  However, this would
+The ``@asyncio.coroutine`` decorator could do this.  However, this would
 slow things down considerably in the case where one coroutine calls
 another (and so on), as switching to a "bare" coroutine has much less
 overhead than switching to a Task.
@@ -1487,23 +1562,49 @@
 off the coroutine when ``connection_made()`` is called.
 
 
+TO DO
+=====
+
+- Document pause/resume reading/writing.
+
+- Document subprocess/pipe protocols/transports.
+
+- Document locks and queues.
+
+- Describe task cancellation details (and update future cancellation
+  spec to promise less).
+
+- Explain that eof_received() shouldn't return True with SSL/TLS.
+
+- Document SIGCHILD handling API (once it lands).
+
+- Compare all APIs with the source code to be sure there aren't any
+  undocumented or unimplemented features.
+
+
+Wish List
+=========
+
+- Support a "start TLS" operation to upgrade a TCP socket to SSL/TLS.
+
+- UNIX domain sockets.
+
+- A per-loop error handling callback.
+
+
 Open Issues
 ===========
 
-- Make ``loop.stop()`` optional (and hence
-  ``loop.run_until_complete()`` too).
+- Why do ``create_connection()`` and ``create_datagram_endpoint()``
+  have a ``proto`` argument but not ``create_server()``?  And why are
+  the family, flags, proto arguments for ``getaddrinfo()`` sometimes
+  zero and sometimes named constants (whose value is also zero)?
+
+- Do we need another inquiry method to tell whether the loop is in the
+  process of stopping?
 
 - A fuller public API for Handle?  What's the use case?
 
-- Should we require all event loops to implement ``sock_recv()`` and
-  friends?  Is the use case strong enough?  (Maybe some transports
-  usable with both ``SelectorEventLoop`` and ``ProactorEventLoop`` use
-  them?  That'd be a good enough use case for me.)
-
-- Should we require callers of ``create_connection()`` to create and
-  pass an ``SSLContext`` explicitly, instead of allowing ssl=True?
-  (For ``start_serving()`` we already require an ``SSLContext``.)
-
 - A debugging API?  E.g. something that logs a lot of stuff, or logs
   unusual conditions (like queues filling up faster than they drain)
   or even callbacks taking too much time...
@@ -1511,16 +1612,11 @@
 - Do we need introspection APIs?  E.g. asking for the read callback
   given a file descriptor.  Or when the next scheduled call is.  Or
   the list of file descriptors registered with callbacks.  Right now
-  these would all require using Tulip internals.
+  these all require using internals.
 
-- Locks and queues?  The Tulip implementation contains implementations
-  of most types of locks and queues modeled after the standard library
-  ``threading`` and ``queue`` modules.  These should be incorporated
-  into the PEP.
-
-- Probably need more socket I/O methods, e.g. ``sock_sendto()`` and
-  ``sock_recvfrom()``, and perhaps others like ``pipe_read()``.  Or
-  users can write their own (it's not rocket science).
+- Do we need more socket I/O methods, e.g. ``sock_sendto()`` and
+  ``sock_recvfrom()``, and perhaps others like ``pipe_read()``?
+  I guess users can write their own (it's not rocket science).
 
 - We may need APIs to control various timeouts.  E.g. we may want to
   limit the time spent in DNS resolution, connecting, ssl/tls handshake,
@@ -1564,16 +1660,34 @@
 ===============
 
 Apart from PEP 3153, influences include PEP 380 and Greg Ewing's
-tutorial for ``yield from``, Twisted, Tornado, ZeroMQ, pyftpdlib, tulip
-(the author's attempts at synthesis of all these), wattle (Steve
-Dower's counter-proposal), numerous discussions on python-ideas from
-September through December 2012, a Skype session with Steve Dower and
-Dino Viehland, email exchanges with Ben Darnell, an audience with
-Niels Provos (original author of libevent), and two in-person meetings
-with several Twisted developers, including Glyph, Brian Warner, David
-Reid, and Duncan McGreggor.  Also, the author's previous work on async
-support in the NDB library for Google App Engine was an important
-influence.
+tutorial for ``yield from``, Twisted, Tornado, ZeroMQ, pyftpdlib, and
+wattle (Steve Dower's counter-proposal).  My previous work on
+asynchronous support in the NDB library for Google App Engine provided
+an important starting point.
+
+I am grateful for the numerous discussions on python-ideas from
+September through December 2012, and many more on python-tulip since
+then; a Skype session with Steve Dower and Dino Viehland; email
+exchanges with and a visit by Ben Darnell; an audience with Niels
+Provos (original author of libevent); and in-person meetings (as well
+as frequent email exchanges) with several Twisted developers,
+including Glyph, Brian Warner, David Reid, and Duncan McGreggor.
+
+Contributors to the implementation include 
+Eli Bendersky,
+Saúl Ibarra Corretgé,
+Geert Jansen,
+A. Jesse Jiryu Davis,
+Nikolay Kim,
+Charles-François Natali, 
+Richard Oudkerk,
+Antoine Pitrou,
+Giampaolo Rodolá,
+Andrew Svetlov,
+and many others who submitted bugs and/or fixes.
+
+I thank Antoine Pitrou for his feedback in his role of official PEP
+BDFL.
 
 
 Copyright

-- 
Repository URL: http://hg.python.org/peps

