>
> Date: Fri, 13 Oct 2017 08:57:00 -0700
> From: Guido van Rossum <guido(a)python.org>
> To: Martin Teichmann <lkb.teichmann(a)gmail.com>
> Cc: Python-Dev <python-dev(a)python.org>
> Subject: Re: [Python-Dev] What is the design purpose of metaclasses vs
> code generating decorators? (was Re: PEP 557: Data Classes)
> Message-ID:
> <
> CAP7+vJKBVuDqf09zTWDAuvQ-cCNM+cF82c22s2NJOj+A9k7_kA(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> This is food for thought. I'll have to let it sink in a bit, but you may be
> on to something.
>
> Since the question was asked at some point, yes, metaclasses are much older
> than class decorators. At some point I found the book Putting Metaclasses
> to Work by Ira Forman and Scott Danforth (
> https://www.amazon.com/Putting-Metaclasses-Work-Ira-Forman/dp/0201433052)
> and translated the book's ideas from C++ to Python, except for the
> automatic merging of multiple inherited metaclasses.
>
> But in many cases class decorators are more useful.
>
> I do worry that things like your autoslots decorator example might be
> problematic because they create a new class, throwing away a lot of work
> that was already done. But perhaps the right way to address this would be
> to move the decision about the instance layout to a later phase? (Not sure
> if that makes sense though.)
>
> --Guido
>
Just FYI, recreating the class with slots runs into problems with regards
to PEP 3135 (New Super). In attrs we resort to black magic to update the
__class__ cell in existing methods. Being able to add slotness later would
be good, but not that useful for us since we have to support down to 2.7.
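The `__class__`-cell problem mentioned above can be sketched directly (names here are illustrative, not attrs' actual code): recreating a class, as a slots-adding decorator must, leaves zero-argument `super()` pointing at the old class.

```python
class Base:
    def greet(self):
        return 'base'

class C(Base):
    def greet(self):
        # Zero-arg super() compiles to a hidden __class__ closure cell.
        return 'sub:' + super().greet()

# Recreate the class, as a slots-adding decorator would:
NewC = type('C', (Base,), dict(C.__dict__))

# The method's closure cell still points at the original C ...
assert NewC.greet.__closure__[0].cell_contents is C

# ... so zero-arg super() fails on instances of the recreated class:
try:
    NewC().greet()
except TypeError:
    pass  # super(C, self): self is not an instance of the original C
```

This is why attrs has to patch the `__class__` cell in each method when it rebuilds the class.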

I've written a PEP for what might be thought of as "mutable namedtuples
with defaults, but not inheriting tuple's behavior" (a mouthful, but it
sounded simpler when I first thought of it). It's heavily influenced by
the attrs project. It uses PEP 526 type annotations to define fields.
From the overview section:
    @dataclass
    class InventoryItem:
        name: str
        unit_price: float
        quantity_on_hand: int = 0

        def total_cost(self) -> float:
            return self.unit_price * self.quantity_on_hand

Will automatically add these methods:

    def __init__(self, name: str, unit_price: float,
                 quantity_on_hand: int = 0) -> None:
        self.name = name
        self.unit_price = unit_price
        self.quantity_on_hand = quantity_on_hand

    def __repr__(self):
        return (f'InventoryItem(name={self.name!r},'
                f'unit_price={self.unit_price!r},'
                f'quantity_on_hand={self.quantity_on_hand!r})')

    def __eq__(self, other):
        if other.__class__ is self.__class__:
            return ((self.name, self.unit_price, self.quantity_on_hand) ==
                    (other.name, other.unit_price, other.quantity_on_hand))
        return NotImplemented

    def __ne__(self, other):
        if other.__class__ is self.__class__:
            return ((self.name, self.unit_price, self.quantity_on_hand) !=
                    (other.name, other.unit_price, other.quantity_on_hand))
        return NotImplemented

    def __lt__(self, other):
        if other.__class__ is self.__class__:
            return ((self.name, self.unit_price, self.quantity_on_hand) <
                    (other.name, other.unit_price, other.quantity_on_hand))
        return NotImplemented

    def __le__(self, other):
        if other.__class__ is self.__class__:
            return ((self.name, self.unit_price, self.quantity_on_hand) <=
                    (other.name, other.unit_price, other.quantity_on_hand))
        return NotImplemented

    def __gt__(self, other):
        if other.__class__ is self.__class__:
            return ((self.name, self.unit_price, self.quantity_on_hand) >
                    (other.name, other.unit_price, other.quantity_on_hand))
        return NotImplemented

    def __ge__(self, other):
        if other.__class__ is self.__class__:
            return ((self.name, self.unit_price, self.quantity_on_hand) >=
                    (other.name, other.unit_price, other.quantity_on_hand))
        return NotImplemented
Data Classes saves you from writing and maintaining these functions.
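A hypothetical toy decorator (`autoinit` is made up here, not the PEP's actual implementation) shows how PEP 526 class annotations can drive this kind of code generation:

```python
# Hypothetical sketch only: generate __init__ from the class's
# PEP 526 annotations, in the spirit of the proposed decorator.
def autoinit(cls):
    fields = list(cls.__annotations__)   # annotation order is preserved
    def __init__(self, *args):
        for name, value in zip(fields, args):
            setattr(self, name, value)
    cls.__init__ = __init__
    return cls

@autoinit
class Point:
    x: int
    y: int

p = Point(1, 2)
assert (p.x, p.y) == (1, 2)
```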
The PEP is largely complete, but could use some filling out in places.
Comments welcome!
Eric.
P.S. I wrote this PEP when I was in my happy place.
richard@debian:~/Python-3.5.4$ ./python -m test -v test_gdb
== CPython 3.5.4 (default, Oct 10 2017, 00:27:44) [GCC 4.7.2]
== Linux-3.2.0-4-686-pae-i686-with-debian-7.11 little-endian
== hash algorithm: siphash24 32bit
== /home/richard/Python-3.5.4/build/test_python_5566
Testing with flags: sys.flags(debug=0, inspect=0, interactive=0,
optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0,
ignore_environment=0, verbose=0, bytes_warning=0, quiet=0,
hash_randomization=1, isolated=0)
0:00:00 load avg: 0.10 [1/1] test_gdb
test_gdb skipped -- gdb not built with embedded python support
1 test skipped:
test_gdb
Please note I have downloaded a new version of GDB and compiled it with
the config option --with-python.
I then installed it on my system.
I get the same error
Thank You
I've read the current version of PEP 552 over and I think everything looks
good for acceptance. I believe there are no outstanding objections (or they
have been adequately addressed in responses).
Therefore I intend to accept PEP 552 this Friday, unless grave objections
are raised on this mailing list (python-dev).
Congratulations Benjamin. Gotta love those tristate options!
--
--Guido van Rossum (python.org/~guido)
I've updated PEP 554 in response to feedback. (thanks all!) There
are a few unresolved points (some of them added to the Open Questions
section), but the current PEP has changed enough that I wanted to get
it out there first.
Notably changed:
* the API relative to object passing has changed somewhat drastically
(hopefully simpler and easier to understand), replacing "FIFO" with
"channel"
* added an examples section
* added an open questions section
* added a rejected ideas section
* added more items to the deferred functionality section
* the rationale section has moved down below the examples
Please let me know what you think. I'm especially interested in
feedback about the channels. Thanks!
-eric
++++++++++++++++++++++++++++++++++++++++++++++++
PEP: 554
Title: Multiple Interpreters in the Stdlib
Author: Eric Snow <ericsnowcurrently(a)gmail.com>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 2017-09-05
Python-Version: 3.7
Post-History:
Abstract
========
CPython has supported subinterpreters, with increasing levels of
support, since version 1.5. The feature has been available via the
C-API. [c-api]_ Subinterpreters operate in
`relative isolation from one another <Interpreter Isolation_>`_, which
provides the basis for an
`alternative concurrency model <Concurrency_>`_.
This proposal introduces the stdlib ``interpreters`` module. The module
will be `provisional <Provisional Status_>`_. It exposes the basic
functionality of subinterpreters already provided by the C-API.
Proposal
========
The ``interpreters`` module will be added to the stdlib. It will
provide a high-level interface to subinterpreters and wrap the low-level
``_interpreters`` module. The proposed API is inspired by the
``threading`` module. See the `Examples`_ section for concrete usage
and use cases.
API for interpreters
--------------------
The module provides the following functions:
``list_all()``::

   Return a list of all existing interpreters.

``get_current()``::

   Return the currently running interpreter.

``create()``::

   Initialize a new Python interpreter and return it. The
   interpreter will be created in the current thread and will remain
   idle until something is run in it. The interpreter may be used
   in any thread and will run in whichever thread calls
   ``interp.run()``.
The module also provides the following class:
``Interpreter(id)``::

   id:

      The interpreter's ID (read-only).

   is_running():

      Return whether or not the interpreter is currently executing code.
      Calling this on the current interpreter will always return True.

   destroy():

      Finalize and destroy the interpreter.

      This may not be called on an already running interpreter. Doing
      so results in a RuntimeError.

   run(source_str, /, **shared):

      Run the provided Python source code in the interpreter. Any
      keyword arguments are added to the interpreter's execution
      namespace. If any of the values are not supported for sharing
      between interpreters then RuntimeError gets raised. Currently
      only channels (see "create_channel()" below) are supported.

      This may not be called on an already running interpreter. Doing
      so results in a RuntimeError.

      A "run()" call is quite similar to any other function call. Once
      it completes, the code that called "run()" continues executing
      (in the original interpreter). Likewise, if there is any uncaught
      exception, it propagates into the code where "run()" was called.

      The big difference is that "run()" executes the code in an
      entirely different interpreter, with entirely separate state.
      The state of the current interpreter in the current OS thread
      is swapped out with the state of the target interpreter (the one
      that will execute the code). When the target finishes executing,
      the original interpreter gets swapped back in and its execution
      resumes.

      So calling "run()" will effectively cause the current Python
      thread to pause. Sometimes you won't want that pause, in which
      case you should make the "run()" call in another thread. To do
      so, add a function that calls "run()" and then run that function
      in a normal "threading.Thread".

      Note that the interpreter's state is never reset, neither before
      "run()" executes the code nor after. Thus the interpreter
      state is preserved between calls to "run()". This includes
      "sys.modules", the "builtins" module, and the internal state
      of C extension modules.

      Also note that "run()" executes in the namespace of the "__main__"
      module, just like scripts, the REPL, "-m", and "-c". Just as
      the interpreter's state is not ever reset, the "__main__" module
      is never reset. You can imagine concatenating the code from each
      "run()" call into one long script. This is the same as how the
      REPL operates.

      Supported code: source text.
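The persistent-``__main__`` semantics described above can be simulated with a plain dict and ``exec`` (a sketch for illustration only, not the proposed module):

```python
# Simulate run()'s persistent namespace: each exec() sees the results
# of the previous one, just as successive run() calls share __main__.
main_ns = {}
exec("x = 1", main_ns)
exec("x += 1", main_ns)
assert main_ns["x"] == 2   # state carried over between calls
```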
API for sharing data
--------------------
The mechanism for passing objects between interpreters is through
channels. A channel is a simplex FIFO similar to a pipe. The main
difference is that channels can be associated with zero or more
interpreters on either end. Unlike queues, which are also many-to-many,
channels have no buffer.
``create_channel()``::

   Create a new channel and return (recv, send), the RecvChannel and
   SendChannel corresponding to the ends of the channel. The channel
   is not closed and destroyed (i.e. garbage-collected) until the number
   of associated interpreters returns to 0.

   An interpreter gets associated with a channel by calling its "send()"
   or "recv()" method. That association gets dropped by calling
   "close()" on the channel.

   Both ends of the channel are supported "shared" objects (i.e. they
   may be safely shared by different interpreters). Thus they may be
   passed as keyword arguments to "Interpreter.run()".

``list_all_channels()``::

   Return a list of all open (RecvChannel, SendChannel) pairs.
``RecvChannel(id)``::

   The receiving end of a channel. An interpreter may use this to
   receive objects from another interpreter. At first only bytes will
   be supported.

   id:

      The channel's unique ID.

   interpreters:

      The list of associated interpreters (those that have called
      the "recv()" method).

   __next__():

      Return the next object from the channel. If none have been sent
      then wait until the next send.

   recv():

      Return the next object from the channel. If none have been sent
      then wait until the next send. If the channel has been closed
      then EOFError is raised.

   recv_nowait(default=None):

      Return the next object from the channel. If none have been sent
      then return the default. If the channel has been closed
      then EOFError is raised.

   close():

      No longer associate the current interpreter with the channel (on
      the receiving end). This is a noop if the interpreter isn't
      already associated. Once an interpreter is no longer associated
      with the channel, subsequent (or current) send() and recv() calls
      from that interpreter will raise EOFError.

      Once the number of associated interpreters on both ends drops to
      0, the channel is actually marked as closed. The Python runtime
      will garbage collect all closed channels. Note that "close()" is
      automatically called when it is no longer used in the current
      interpreter.

      This operation is idempotent. Return True if the current
      interpreter was still associated with the receiving end of the
      channel and False otherwise.
``SendChannel(id)``::

   The sending end of a channel. An interpreter may use this to send
   objects to another interpreter. At first only bytes will be
   supported.

   id:

      The channel's unique ID.

   interpreters:

      The list of associated interpreters (those that have called
      the "send()" method).

   send(obj):

      Send the object to the receiving end of the channel. Wait until
      the object is received. If the channel does not support the
      object then TypeError is raised. Currently only bytes are
      supported. If the channel has been closed then EOFError is
      raised.

   send_nowait(obj):

      Send the object to the receiving end of the channel. If the
      object is received then return True. Otherwise return False.
      If the channel does not support the object then TypeError is
      raised. If the channel has been closed then EOFError is raised.

   close():

      No longer associate the current interpreter with the channel (on
      the sending end). This is a noop if the interpreter isn't already
      associated. Once an interpreter is no longer associated with the
      channel, subsequent (or current) send() and recv() calls from that
      interpreter will raise EOFError.

      Once the number of associated interpreters on both ends drops to
      0, the channel is actually marked as closed. The Python runtime
      will garbage collect all closed channels. Note that "close()" is
      automatically called when it is no longer used in the current
      interpreter.

      This operation is idempotent. Return True if the current
      interpreter was still associated with the sending end of the
      channel and False otherwise.
Examples
========
Run isolated code
-----------------
::

   interp = interpreters.create()
   print('before')
   interp.run('print("during")')
   print('after')
Run in a thread
---------------
::

   interp = interpreters.create()

   def run():
       interp.run('print("during")')

   t = threading.Thread(target=run)
   print('before')
   t.start()
   print('after')
Pre-populate an interpreter
---------------------------
::

   interp = interpreters.create()
   interp.run("""if True:
       import some_lib
       import an_expensive_module
       some_lib.set_up()
       """)
   wait_for_request()
   interp.run("""if True:
       some_lib.handle_request()
       """)
Handling an exception
---------------------
::

   interp = interpreters.create()
   try:
       interp.run("""if True:
           raise KeyError
           """)
   except KeyError:
       print("got the error from the subinterpreter")
Synchronize using a channel
---------------------------
::

   interp = interpreters.create()
   r, s = interpreters.create_channel()

   def run():
       interp.run("""if True:
           reader.recv()
           print("during")
           reader.close()
           """,
           reader=r)

   t = threading.Thread(target=run)
   print('before')
   t.start()
   print('after')
   s.send(b'')
   s.close()
Sharing a file descriptor
-------------------------
::

   interp = interpreters.create()
   r1, s1 = interpreters.create_channel()
   r2, s2 = interpreters.create_channel()

   def run():
       interp.run("""if True:
           fd = int.from_bytes(
                   reader.recv(), 'big')
           for line in os.fdopen(fd):
               print(line)
           writer.send(b'')
           """,
           reader=r1, writer=s2)

   t = threading.Thread(target=run)
   t.start()
   with open('spamspamspam') as infile:
       fd = infile.fileno().to_bytes(1, 'big')
       s1.send(fd)
       r2.recv()
Passing objects via pickle
--------------------------
::

   interp = interpreters.create()
   r, s = interpreters.create_channel()
   interp.run("""if True:
       import pickle
       """,
       reader=r)

   def run():
       interp.run("""if True:
           data = reader.recv()
           while data:
               obj = pickle.loads(data)
               do_something(obj)
               data = reader.recv()
           reader.close()
           """,
           reader=r)

   t = threading.Thread(target=run)
   t.start()

   for obj in input:
       data = pickle.dumps(obj)
       s.send(data)
   s.send(b'')
Rationale
=========
Running code in multiple interpreters provides a useful level of
isolation within the same process. This can be leveraged in a number
of ways. Furthermore, subinterpreters provide a well-defined framework
in which such isolation may be extended.
CPython has supported subinterpreters, with increasing levels of
support, since version 1.5. While the feature has the potential
to be a powerful tool, subinterpreters have suffered from neglect
because they are not available directly from Python. Exposing the
existing functionality in the stdlib will help reverse the situation.
This proposal is focused on enabling the fundamental capability of
multiple isolated interpreters in the same Python process. This is a
new area for Python so there is relative uncertainty about the best
tools to provide as companions to subinterpreters. Thus we minimize
the functionality we add in the proposal as much as possible.
Concerns
--------
* "subinterpreters are not worth the trouble"
Some have argued that subinterpreters do not add sufficient benefit
to justify making them an official part of Python. Adding features
to the language (or stdlib) has a cost in increasing the size of
the language. So it must pay for itself. In this case, subinterpreters
provide a novel concurrency model focused on isolated threads of
execution. Furthermore, they present an opportunity for changes in
CPython that will allow simultaneous use of multiple CPU cores (currently
prevented by the GIL).
Alternatives to subinterpreters include threading, async, and
multiprocessing. Threading is limited by the GIL and async isn't
the right solution for every problem (nor for every person).
Multiprocessing is likewise valuable in some but not all situations.
Direct IPC (rather than via the multiprocessing module) provides
similar benefits but with the same caveat.
Notably, subinterpreters are not intended as a replacement for any of
the above. Certainly they overlap in some areas, but the benefits of
subinterpreters include isolation and (potentially) performance. In
particular, subinterpreters provide a direct route to an alternate
concurrency model (e.g. CSP) which has found success elsewhere and
will appeal to some Python users. That is the core value that the
``interpreters`` module will provide.
* "stdlib support for subinterpreters adds extra burden
on C extension authors"
In the `Interpreter Isolation`_ section below we identify ways in
which isolation in CPython's subinterpreters is incomplete. Most
notable is extension modules that use C globals to store internal
state. PEP 3121 and PEP 489 provide a solution for most of the
problem, but one still remains. [petr-c-ext]_ Until that is resolved,
C extension authors will face extra difficulty in supporting
subinterpreters.
Consequently, projects that publish extension modules may face an
increased maintenance burden as their users start using subinterpreters,
where their modules may break. This situation is limited to modules
that use C globals (or use libraries that use C globals) to store
internal state.
Ultimately this comes down to a question of how often it will be a
problem in practice: how many projects would be affected, how often
their users will be affected, what the additional maintenance burden
will be for projects, and what the overall benefit of subinterpreters
is to offset those costs. The position of this PEP is that the actual
extra maintenance burden will be small and well below the threshold at
which subinterpreters are worth it.
About Subinterpreters
=====================
Shared data
-----------
Subinterpreters are inherently isolated (with caveats explained below),
in contrast to threads. This enables `a different concurrency model
<Concurrency_>`_ than is currently readily available in Python.
`Communicating Sequential Processes`_ (CSP) is the prime example.
A key component of this approach to concurrency is message passing. So
providing a message/object passing mechanism alongside ``Interpreter``
is a fundamental requirement. This proposal includes a basic mechanism
upon which more complex machinery may be built. That basic mechanism
draws inspiration from pipes, queues, and CSP's channels. [fifo]_
The key challenge here is that sharing objects between interpreters
faces complexity due in part to CPython's current memory model.
Furthermore, in this class of concurrency, the ideal is that objects
only exist in one interpreter at a time. However, this is not practical
for Python so we initially constrain supported objects to ``bytes``.
There are a number of strategies we may pursue in the future to expand
supported objects and object sharing strategies.
Note that the complexity of object sharing increases as subinterpreters
become more isolated, e.g. after GIL removal. So the mechanism for
message passing needs to be carefully considered. Keeping the API
minimal and initially restricting the supported types helps us avoid
further exposing any underlying complexity to Python users.
To make this work, the mutable shared state will be managed by the
Python runtime, not by any of the interpreters. Initially we will
support only one type of objects for shared state: the channels provided
by ``create_channel()``. Channels, in turn, will carefully manage
passing objects between interpreters.
Interpreter Isolation
---------------------
CPython's interpreters are intended to be strictly isolated from each
other. Each interpreter has its own copy of all modules, classes,
functions, and variables. The same applies to state in C, including in
extension modules. The CPython C-API docs explain more. [caveats]_
However, there are ways in which interpreters share some state. First
of all, some process-global state remains shared:
* file descriptors
* builtin types (e.g. dict, bytes)
* singletons (e.g. None)
* underlying static module data (e.g. functions) for
builtin/extension/frozen modules
There are no plans to change this.
Second, some isolation is faulty due to bugs or implementations that did
not take subinterpreters into account. This includes things like
extension modules that rely on C globals. [cryptography]_ In these
cases bugs should be opened (some are already):
* readline module hook functions (http://bugs.python.org/issue4202)
* memory leaks on re-init (http://bugs.python.org/issue21387)
Finally, some potential isolation is missing due to the current design
of CPython. Improvements are currently going on to address gaps in this
area:
* interpreters share the GIL
* interpreters share memory management (e.g. allocators, gc)
* GC is not run per-interpreter [global-gc]_
* at-exit handlers are not run per-interpreter [global-atexit]_
* extensions using the ``PyGILState_*`` API are incompatible [gilstate]_
Concurrency
-----------
Concurrency is a challenging area of software development. Decades of
research and practice have led to a wide variety of concurrency models,
each with different goals. Most center on correctness and usability.
One class of concurrency models focuses on isolated threads of
execution that interoperate through some message passing scheme. A
notable example is `Communicating Sequential Processes`_ (CSP), upon
which Go's concurrency is based. The isolation inherent to
subinterpreters makes them well-suited to this approach.
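The unbuffered, blocking semantics of CSP-style channels can be sketched with ordinary threads (a toy model assuming one sender and one receiver at a time; not the PEP's implementation):

```python
import threading

class RendezvousChannel:
    """Toy unbuffered channel: send() blocks until recv() takes the item."""
    def __init__(self):
        self._item = None
        self._ready = threading.Semaphore(0)   # item available
        self._taken = threading.Semaphore(0)   # item consumed

    def send(self, obj):
        self._item = obj
        self._ready.release()
        self._taken.acquire()   # block until the receiver has the item

    def recv(self):
        self._ready.acquire()   # block until a sender provides an item
        obj = self._item
        self._taken.release()
        return obj

ch = RendezvousChannel()
t = threading.Thread(target=lambda: ch.send(b'ping'))
t.start()
assert ch.recv() == b'ping'
t.join()
```

Because there is no buffer, each `send()` is a rendezvous with a matching `recv()`, which is exactly the blocking behavior the proposed channels describe.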
Existing Usage
--------------
Subinterpreters are not a widely used feature. In fact, the only
documented case of widespread usage is
`mod_wsgi <https://github.com/GrahamDumpleton/mod_wsgi>`_. On the one
hand, this case provides confidence that existing subinterpreter support
is relatively stable. On the other hand, there isn't much of a sample
size from which to judge the utility of the feature.
Provisional Status
==================
The new ``interpreters`` module will be added with "provisional" status
(see PEP 411). This allows Python users to experiment with the feature
and provide feedback while still allowing us to adjust to that feedback.
The module will be provisional in Python 3.7 and we will make a decision
before the 3.8 release whether to keep it provisional, graduate it, or
remove it.
Alternate Python Implementations
================================
TBD
Open Questions
==============
Leaking exceptions across interpreters
--------------------------------------
As currently proposed, uncaught exceptions from ``run()`` propagate
to the frame that called it. However, this means that exception
objects are leaking across the inter-interpreter boundary. Likewise,
the frames in the traceback potentially leak.
While that might not be a problem currently, it would be a problem once
interpreters get better isolation relative to memory management (which
is necessary to stop sharing the GIL between interpreters). So the
semantics of how the exceptions propagate needs to be resolved.
Initial support for buffers in channels
---------------------------------------
An alternative to support for bytes in channels is support for
read-only buffers (the PEP 3118 kind). Then ``recv()`` would return
a memoryview to expose the buffer in a zero-copy way. This is similar
to what ``multiprocessing.Connection`` supports. [mp-conn]_
Switching to such an approach would help resolve questions of how
passing bytes through channels will work once we isolate memory
management in interpreters.
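The zero-copy property of a buffer-based ``recv()`` can be illustrated with ``memoryview`` directly (independent of the proposed API):

```python
# A memoryview exposes a buffer without copying it: mutations of the
# underlying bytearray are visible through the view.
data = bytearray(b'x' * 1024)
view = memoryview(data)[:10]   # slicing a memoryview does not copy
data[0] = ord('y')
assert view[0] == ord('y')     # the view reflects the mutation
```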
Deferred Functionality
======================
In the interest of keeping this proposal minimal, the following
functionality has been left out for future consideration. Note that
this is not a judgement against any of these capabilities, but rather
a deferment. That said, each is arguably valid.
Interpreter.call()
------------------
It would be convenient to run existing functions in subinterpreters
directly. ``Interpreter.run()`` could be adjusted to support this or
a ``call()`` method could be added::
Interpreter.call(f, *args, **kwargs)
This suffers from the same problem as sharing objects between
interpreters via queues. The minimal solution (running a source string)
is sufficient for us to get the feature out where it can be explored.
timeout arg to pop() and push()
-------------------------------
Typically functions that have a ``block`` argument also have a
``timeout`` argument. We can add it later if needed.
get_main()
----------
CPython has a concept of a "main" interpreter. This is the initial
interpreter created during CPython's runtime initialization. It may
be useful to identify the main interpreter. For instance, the main
interpreter should not be destroyed. However, for the basic
functionality of a high-level API a ``get_main()`` function is not
necessary. Furthermore, there is no requirement that a Python
implementation have a concept of a main interpreter. So until there's
a clear need we'll leave ``get_main()`` out.
Interpreter.run_in_thread()
---------------------------
This method would make a ``run()`` call for you in a thread. Doing this
using only ``threading.Thread`` and ``run()`` is relatively trivial so
we've left it out.
Synchronization Primitives
--------------------------
The ``threading`` module provides a number of synchronization primitives
for coordinating concurrent operations. This is especially necessary
due to the shared-state nature of threading. In contrast,
subinterpreters do not share state. Data sharing is restricted to
channels, which do away with the need for explicit synchronization. If
any sort of opt-in shared state support is added to subinterpreters in
the future, that same effort can introduce synchronization primitives
to meet that need.
CSP Library
-----------
A ``csp`` module would not be a large step away from the functionality
provided by this PEP. However, adding such a module is outside the
minimalist goals of this proposal.
Syntactic Support
-----------------
The ``Go`` language provides a concurrency model based on CSP, so
it's similar to the concurrency model that subinterpreters support.
``Go`` provides syntactic support, as well several builtin concurrency
primitives, to make concurrency a first-class feature. Conceivably,
similar syntactic (and builtin) support could be added to Python using
subinterpreters. However, that is *way* outside the scope of this PEP!
Multiprocessing
---------------
The ``multiprocessing`` module could support subinterpreters in the same
way it supports threads and processes. In fact, the module's
maintainer, Davin Potts, has indicated this is a reasonable feature
request. However, it is outside the narrow scope of this PEP.
C-extension opt-in/opt-out
--------------------------
By using the ``PyModuleDef_Slot`` introduced by PEP 489, we could easily
add a mechanism by which C-extension modules could opt out of support
for subinterpreters. Then the import machinery, when operating in
a subinterpreter, would need to check the module for support. It would
raise an ImportError if unsupported.
Alternately we could support opting in to subinterpreter support.
However, that would probably exclude many more modules (unnecessarily)
than the opt-out approach.
The scope of adding the ModuleDef slot and fixing up the import
machinery is non-trivial, but could be worth it. It all depends on
how many extension modules break under subinterpreters. Given the
relatively few cases we know of through mod_wsgi, we can leave this
for later.
Poisoning channels
------------------
CSP has the concept of poisoning a channel. Once a channel has been
poisoned, any ``send()`` or ``recv()`` call on it will raise a special
exception, effectively ending execution in the interpreter that tried
to use the poisoned channel.
This could be accomplished by adding a ``poison()`` method to both ends
of the channel. The ``close()`` method could work if it had a ``force``
option to force the channel closed. Regardless, these semantics are
relatively specialized and can wait.
Sending channels over channels
------------------------------
Some advanced usage of subinterpreters could take advantage of the
ability to send channels over channels, in addition to bytes. Given
that channels will already be multi-interpreter safe, supporting them
in ``RecvChannel.recv()`` wouldn't be a big change. However, this can
wait until the basic functionality has been ironed out.
Resetting __main__
------------------
As proposed, every call to ``Interpreter.run()`` will execute in the
namespace of the interpreter's existing ``__main__`` module. This means
that data persists there between ``run()`` calls. Sometimes this isn't
desirable and you want to execute in a fresh ``__main__``. Also,
you don't necessarily want to leak objects there that you aren't using
any more.
Solutions include:
* a ``create()`` arg to indicate resetting ``__main__`` after each
``run`` call
* an ``Interpreter.reset_main`` flag to support opting in or out
after the fact
* an ``Interpreter.reset_main()`` method to opt in when desired
This isn't a critical feature initially. It can wait until later
if desirable.
Support passing ints in channels
--------------------------------
Passing ints around should be fine and ultimately is probably
desirable. However, we can get by with serializing them as bytes
for now. The goal is a minimal API for the sake of basic
functionality at first.
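The serialization workaround is already trivial with the int API (a sketch of what channel users could do today, not part of the proposal):

```python
# Round-trip an int through bytes, suitable for sending over a channel.
n = 1234567
data = n.to_bytes((n.bit_length() + 7) // 8, 'big')
assert int.from_bytes(data, 'big') == n
```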
File descriptors and sockets in channels
----------------------------------------
Given that file descriptors and sockets are process-global resources,
support for passing them through channels is a reasonable idea. They
would be a good candidate for the first effort at expanding the types
that channels support. They aren't strictly necessary for the initial
API.
Rejected Ideas
==============
Explicit channel association
----------------------------
Interpreters are implicitly associated with channels upon ``recv()`` and
``send()`` calls. They are de-associated with ``close()`` calls. The
alternative would be explicit methods. It would be either
``add_channel()`` and ``remove_channel()`` methods on ``Interpreter``
objects or something similar on channel objects.
In practice, this level of management shouldn't be necessary for users.
So adding more explicit support would only add clutter to the API.
Use pipes instead of channels
-----------------------------
A pipe would be a simplex FIFO between exactly two interpreters. For
most use cases this would be sufficient. It could potentially simplify
the implementation as well. However, it isn't a big step to supporting
a many-to-many simplex FIFO via channels. Also, with pipes the API
ends up being slightly more complicated, requiring naming the pipes.
Use queues instead of channels
------------------------------
The main difference between queues and channels is that queues support
buffering. This would complicate the blocking semantics of ``recv()``
and ``send()``. Also, queues can be built on top of channels.
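That layering can be sketched with a toy stand-in for a channel; none
of these names come from the proposed API, and a 1-slot handoff merely
approximates an unbuffered channel:

```python
import queue
import threading

class Channel:
    """Toy stand-in for an unbuffered channel: a one-slot handoff."""
    def __init__(self):
        self._slot = queue.Queue(maxsize=1)
    def send(self, obj):
        self._slot.put(obj)      # blocks while the slot is occupied
    def recv(self):
        return self._slot.get()

class BufferedQueue:
    """A buffered queue layered over a channel: a pump thread drains the
    channel into a local buffer so senders rarely block."""
    def __init__(self, channel):
        self._buf = queue.Queue()
        threading.Thread(target=self._pump, args=(channel,),
                         daemon=True).start()
    def _pump(self, channel):
        while True:
            self._buf.put(channel.recv())
    def get(self, timeout=None):
        return self._buf.get(timeout=timeout)
```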
"enumerate"
-----------
The ``list_all()`` function provides the list of all interpreters.
In the threading module, which partly inspired the proposed API, the
function is called ``enumerate()``. The name is different here to
avoid confusing Python users that are not already familiar with the
threading API. For them "enumerate" is rather unclear, whereas
"list_all" is clear.
References
==========
.. [c-api]
https://docs.python.org/3/c-api/init.html#sub-interpreter-support
.. _Communicating Sequential Processes:
.. [CSP]
https://en.wikipedia.org/wiki/Communicating_sequential_processes
https://github.com/futurecore/python-csp
.. [fifo]
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Pipe
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue
https://docs.python.org/3/library/queue.html#module-queue
http://stackless.readthedocs.io/en/2.7-slp/library/stackless/channels.html
https://golang.org/doc/effective_go.html#sharing
http://www.jtolds.com/writing/2016/03/go-channels-are-bad-and-you-should-fe…
.. [caveats]
https://docs.python.org/3/c-api/init.html#bugs-and-caveats
.. [petr-c-ext]
https://mail.python.org/pipermail/import-sig/2016-June/001062.html
https://mail.python.org/pipermail/python-ideas/2016-April/039748.html
.. [cryptography]
https://github.com/pyca/cryptography/issues/2299
.. [global-gc]
http://bugs.python.org/issue24554
.. [gilstate]
https://bugs.python.org/issue10915
http://bugs.python.org/issue15751
.. [global-atexit]
https://bugs.python.org/issue6531
.. [mp-conn]
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Conn…
Copyright
=========
This document has been placed in the public domain.
Hope you don't mind me CC'ing python-dev.
On Sun, Oct 1, 2017 at 9:38 AM, Barry Warsaw <barry(a)python.org> wrote:
> You seem to be in PEP review mode :)
>
> What do you think about 553? Still want to wait, or do you think it’s
> missing anything? So far, all the feedback has been positive, and I think
> we can basically just close the open issues and both the PEP and
> implementation are ready to go.
>
> Happy to do more work on it if you feel it’s necessary.
> -B
>
I'm basically in agreement. Some quick notes:
- There's a comma instead of a period at the end of the 4th bullet in the
Rationale: "Breaking the idiom up into two lines further complicates the
use of the debugger," . Also I don't understand how this complicates use
(even though I also always type it as one line). And the lint warning is
actually useful when you accidentally leave this in code you send to CI
(assuming that runs a linter, as it should). TBH the biggest argument (to
me) is that I simply don't know *how* I would enter some IDE's debugger
programmatically. I think it should also be pointed out that even if an IDE
has a way to specify conditional breakpoints, the UI may be such that it's
easier to just add the check to the code -- and then the breakpoint()
option is much more attractive than having to look up how it's done in your
particular IDE (especially since this is not all that common).
- There's no rationale for the *args, **kwds part of the breakpoint()
signature. (I vaguely recall someone on the mailing list asking for it but
it seemed far-fetched at best.)
- The explanation of the relationship between sys.breakpointhook() and
sys.__breakpointhook__ was unclear to me -- I had to go to the docs for
__displayhook__ (
https://docs.python.org/3/library/sys.html#sys.__displayhook__) to
understand that sys.__breakpointhook__ is simply initialized to the same
function as sys.breakpointhook, and the idea is that if you screw up you
can restore sys.breakpointhook from the other rather than having to restart
your process. Is that in fact it? The text "``sys.__breakpointhook__`` then
stashes the default value of ``sys.breakpointhook()`` to make it easy to
reset" seems to imply some action that would happen... when? how?
- Some pseudo-code would be nice. It seems that it's like this:

  # in builtins
  def breakpoint(*args, **kwds):
      import sys
      return sys.breakpointhook(*args, **kwds)

  # in sys
  def breakpointhook(*args, **kwds):
      import os
      hook = os.getenv('PYTHONBREAKPOINT')
      if hook == '0':
          return None
      if not hook:
          import pdb
          return pdb.set_trace(*args, **kwds)
      if '.' not in hook:
          import builtins
          mod = builtins
          funcname = hook
      else:
          modname, funcname = hook.rsplit('.', 1)
          __import__(modname)
          import sys
          mod = sys.modules[modname]
      func = getattr(mod, funcname)
      return func(*args, **kwds)

  __breakpointhook__ = breakpointhook
Except that the error handling should be a bit better. (In particular the
PEP specifies a try/except around most of the code in sys.breakpointhook()
that issues a RuntimeWarning and returns None.)
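Folding that in, the lookup portion might look like the following. This
is my reading of the PEP's description, not its reference
implementation; note the hook call itself stays outside the try block:

```python
import importlib
import os
import warnings

def breakpointhook(*args, **kwds):
    hook = os.environ.get('PYTHONBREAKPOINT')
    if hook == '0':
        return None  # fast path: breakpoints disabled
    if not hook:
        import pdb
        return pdb.set_trace(*args, **kwds)
    try:
        modname, _, funcname = hook.rpartition('.')
        mod = importlib.import_module(modname or 'builtins')
        func = getattr(mod, funcname)
    except Exception:
        warnings.warn('Ignoring unimportable $PYTHONBREAKPOINT: %r' % hook,
                      RuntimeWarning)
        return None
    return func(*args, **kwds)
```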
- Not sure what the PEP's language around evaluation of PYTHONBREAKPOINT
means for the above pseudo code. I *think* the PEP author's opinion is that
the above pseudo-code is fine. Since programs can mutate their own
environment, I think something like `os.environ['PYTHONBREAKPOINT'] =
'foo.bar.baz'; breakpoint()` should result in foo.bar.baz() being imported
and called, right?
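For what it's worth, the variable is consulted on each call (unless -E
suppressed the environment), so this works on 3.7+; builtins.print is
just a stand-in for a real hook:

```python
import io
import os
import sys
from contextlib import redirect_stdout

if not sys.flags.ignore_environment:       # -E would hide the variable
    os.environ['PYTHONBREAKPOINT'] = 'builtins.print'
    buf = io.StringIO()
    with redirect_stdout(buf):
        breakpoint('hello from the hook')  # dispatches to print()
    del os.environ['PYTHONBREAKPOINT']
    assert 'hello from the hook' in buf.getvalue()
```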
- I'm not quite sure what sort of fast-tracking for PYTHONBREAKPOINT=0 you
had in mind beyond putting it first in the code above.
- Did you get confirmation from other debuggers? E.g. does it work for
IDLE, Wing IDE, PyCharm, and VS 2015?
- I'm not sure what the point would be of making a call to breakpoint() a
special opcode (it seems a lot of work for something of this nature). ISTM
that if some IDE modifies bytecode it can do whatever it well please
without a PEP.
- I don't see the point of calling `pdb.pm()` at breakpoint time. But it
could be done using the PEP with `import pdb; sys.breakpointhook = pdb.pm`
right? So this hardly deserves an open issue.
- I haven't read the actual implementation in the PR. A PEP should not
depend on the actual proposed implementation for disambiguation of its
specification (hence my proposal to add pseudo-code to the PEP).
That's what I have!
--
--Guido van Rossum (python.org/~guido <http://python.org/%7Eguido>)
Hi,
I am looking for the implementation of open() in the source, but so far
I have not been able to find it.
From my observation, the implementation of open() in python2/3 does
not employ the open(2) system call. However without open(2) how can
one possibly obtain a file descriptor?
Yubin