[Python-Dev] PEP 554 v2 (new "interpreters" module)
Eric Snow
ericsnowcurrently at gmail.com
Tue Sep 12 17:43:43 EDT 2017
On Sat, Sep 9, 2017 at 5:05 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Fri, 8 Sep 2017 16:04:27 -0700, Eric Snow <ericsnowcurrently at gmail.com> wrote:
>> ``list()``::
>
> It's called ``enumerate()`` in the threading module. Not sure there's
> a point in choosing a different name here.
Yeah, in the first version of the PEP it was called "enumerate()". I
changed it to "list()" at Raymond's recommendation. The reasoning is
that it's less confusing to most people that way. TBH, I'd rather
leave it "list()", but could be swayed. Perhaps it would be enough
for the PEP to not mention any relationship to "threading"?
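For comparison, a rough side-by-side sketch (assuming the module name
and draft API from the PEP; "interpreters" doesn't exist yet, so the
second line is illustrative only):

    import threading
    import interpreters  # module name proposed in the PEP draft

    threads = threading.enumerate()  # existing stdlib spelling: all live Thread objects
    interps = interpreters.list()    # draft PEP spelling: all existing Interpreter objects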
>> The current interpreter (which called ``run()``) will block
>> until the subinterpreter finishes running the requested code. Any
>> uncaught exception in that code will bubble up to the current
>> interpreter.
>
> Why does it block? How is concurrency supposed to be achieved in that
> model? It would be more flexible if run(code) returned an object that
> can later be waited on. Something like... a Future :-)
I expect this is more a problem with my description than with the
feature. :) I've already rewritten this bit to be clearer. It's
not that the thread blocks. It's more like a function call, where the
current frame is paused while the call is executed and control then
returns to the calling frame. Likewise the interpreter in the current
thread gets swapped out for the target interpreter, the code gets run,
and then the original interpreter gets swapped back in. This is how
you do it in the C-API and it made sense (to me) to do it the same way
in Python.
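Roughly like this, assuming the draft API from the PEP (the
"interpreters" module is still only a proposal, so this is a sketch):

    import interpreters  # module proposed by the PEP draft

    interp = interpreters.create()
    print('before')
    # Runs in the current OS thread: the current interpreter is swapped
    # out, the code executes in the subinterpreter, and then control
    # returns here, just like a function call.
    interp.run("print('hello from the subinterpreter')")
    print('after')  # only reached once the subinterpreter code finishes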
>
> And why guarantee that it executes in the "current OS thread"?
> I would say you don't want to specify where it executes exactly, as it
> opens the door for more sophisticated implementations (such as
> automatic assignment of subinterpreters inside a pool of threads).
Again, I had explained this poorly in the PEP. The key thing here is
that subinterpreters don't do anything special relative to threading.
If you want to call "Interpreter.run()" in a thread then you stick it
in a "threading.Thread". If you want to auto-assign to a pool of
threads then you treat it like any other function you would
auto-assign to a pool of threads.
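For example (again assuming the draft "interpreters" API; the code
strings are just placeholders):

    import threading
    from concurrent.futures import ThreadPoolExecutor
    import interpreters  # module proposed by the PEP draft

    interp = interpreters.create()

    # Run the subinterpreter's code in its own thread, like any other
    # blocking call.
    t = threading.Thread(target=interp.run, args=("print('in a thread')",))
    t.start()
    t.join()

    # Or hand it to a thread pool, the same as any other callable.
    with ThreadPoolExecutor() as pool:
        pool.submit(interp.run, "print('in a pool thread')").result()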
>> get_fifo(name):
>> list_fifos():
>
> If fifos are uniquely named, why not return a name->fifo mapping?
I suppose we could. Then we could get rid of "get_fifo()" too. I'm
still mulling over the right API for the FIFO parts of the PEP.
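Something along these lines, purely hypothetical since the FIFO API
isn't settled:

    import interpreters  # module proposed by the PEP draft

    # Current draft spelling:
    fifo = interpreters.get_fifo('requests')

    # Antoine's suggestion: have list_fifos() return a name -> FIFO
    # mapping, which would make get_fifo() redundant:
    fifo = interpreters.list_fifos()['requests']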
>
>> ``FIFOReader(name)``::
>> [...]
>
> I don't think the method naming choice is very adequate here. The API
> model for the FIFO objects can either be a (threading or
> multiprocessing) Queue or a multiprocessing Pipe.
>
> - if a Queue, then it should have a get() / put() pair of methods
> - if a Pipe, then it should have a recv() / send() pair of methods
>
> Now, since Queues are multi-producer multi-consumer, while Pipes are
> single-producer single-consumer (they aren't "synchronized"), the
> better analogy seems to the multiprocessing Pipe here, so I would vote
> for recv() / send().
>
> But, in any case, definitely not a pop() / push() pair.
Thanks for pointing that out. Prior art, FTW! I'll factor that in.
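For reference, the prior art Antoine points to, plus a sketch of what a
Pipe-style FIFO API might look like (the send()/recv() names are
hypothetical, not what the current draft uses):

    from multiprocessing import Pipe

    # Existing prior art: multiprocessing.Pipe uses a recv()/send() pair.
    reader, writer = Pipe(duplex=False)
    writer.send(b'hello')
    assert reader.recv() == b'hello'

    # A FIFO API following that precedent might read (hypothetical names):
    #   fifo_writer.send(obj)
    #   obj = fifo_reader.recv()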
> Has any thought been given to how FIFOs could integrate with async code
> driven by an event loop (e.g. asyncio)? I think the model of executing
> several asyncio (or Tornado) applications each in their own
> subinterpreter may prove quite interesting to reconcile multi-core
> concurrency with ease of programming. That would require the FIFOs to
> be able to synchronize on something an event loop can wait on (probably
> a file descriptor?).
Personally I've given pretty much no thought to the relationship with
async. TBH, the FIFO parts of the PEP were added only recently and
haven't fully baked yet. I'd be interested in more feedback on async
relative to the PEP, not just the FIFO bits; my experience with async
is pretty limited thus far.
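For what it's worth, the kind of integration Antoine describes would
presumably look something like the following, with a plain OS pipe
standing in for the file descriptor a FIFO might expose (entirely
speculative; nothing like this is in the PEP):

    import asyncio
    import os

    # A plain OS pipe stands in for the fd a FIFO might expose.
    r, w = os.pipe()

    loop = asyncio.get_event_loop()
    data_ready = asyncio.Event()
    loop.add_reader(r, data_ready.set)  # wake the loop when the fd is readable

    async def consume():
        await data_ready.wait()
        print(os.read(r, 1024))
        loop.remove_reader(r)

    os.write(w, b'ping')  # pretend a subinterpreter wrote this
    loop.run_until_complete(consume())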
-eric