2.6, 3.0, and truly independent interpreters

"Martin v. Löwis" martin at v.loewis.de
Mon Oct 27 04:05:27 EDT 2008


Andy O'Meara wrote:
> On Oct 24, 9:52 pm, "Martin v. Löwis" <mar... at v.loewis.de> wrote:
>>>> A c-level module, on the other hand, can sidestep/release
>>>> the GIL at will, and go on its merry way and process away.
>>> ...Unless part of the C module execution involves the need to do CPU-
>>> bound work on another thread through a different python interpreter,
>>> right?
>> Wrong.
[...]
> 
> So I think the disconnect here is that maybe you're envisioning
> threads being created *in* Python.  To be clear, we're talking about
> making threads at the app level and making it a given for the app to
> take its safety in its own hands.

No. Whether or not threads are created by Python or the application
does not matter for my "Wrong" evaluation: in either case, C module
execution can easily side-step/release the GIL.
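A minimal sketch of what releasing the GIL buys (my illustration, not part of the original mail): here time.sleep() stands in for any C-level call that drops the GIL while it works (zlib.compress on a large buffer behaves the same way); while one thread is inside such a call, the other threads keep running.

```python
import threading
import time

def worker():
    # time.sleep() is implemented in C and releases the GIL while
    # blocking, just as a C extension can around its own CPU-bound work.
    time.sleep(0.3)

start = time.perf_counter()
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(f"elapsed: {elapsed:.2f}s")
```

If each call held the GIL throughout, the four waits would serialize to about 1.2 s; because the GIL is released inside the call, they overlap and the whole run takes roughly 0.3 s.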

>>> As far as I can tell, it seems
CPython's current state can't do CPU-bound parallelization in the same
>>> address space.
>> That's not true.
>>
> 
> Well, when you're talking about large, intricate data structures
> (which include opaque OS object refs that use process-associated
> allocators), even a shared memory region between the child process and
> the parent can't do the job.  Otherwise, please describe in detail how
> I'd get an opaque OS object (e.g. an OS ref that refers to memory-
> resident video) from the child process back to the parent process.

WHAT PARENT PROCESS? "In the same address space", to me, means
"a single process only, not multiple processes, and no parent process
anywhere". If you have just multiple threads, the notion of passing
data from a "child process" back to the "parent process" is
meaningless.

> Again, the big picture that I'm trying to plant here is that there
> really is a serious need for truly independent interpreters/contexts
> in a shared address space.

I understand that this is your mission in this thread. However, why
is that your problem? Why can't you just use the existing (limited)
multiple-interpreters machinery, and solve your problems with that?

> For most
> industry-caliber packages, the expectation and convention (unless
> documented otherwise) is that the app can make as many contexts as it
> wants in whatever threads it wants because the convention is that the
> app must (a) never use one context's objects in another context,
> and (b) never use a context at the same time from more than one
> thread.  That's all I'm really trying to look at here.

And that's indeed the case for Python, too. The app can make as many
subinterpreters as it wants to, and it must not pass objects from one
subinterpreter to another one, nor should it use a single interpreter
from more than one thread (although that is actually supported by
Python - but it surely won't hurt if you restrict yourself to a single
thread per interpreter).

Regards,
Martin
