Re: [pypy-dev] multiple isolated VMs in a single process
Tobias.Oberstein at gmx.de
Fri Feb 7 21:11:30 CET 2003
> > Are there any design plans regarding threading architecture yet?
> good question.
> > In particular, are you planning to support having multiple
> > interpreters in one process _without_ sharing objects (no free
> > threading but VM-wise locking). Supporting this would require
> > early design decisions like
> > - no global data for the interpreter/VM
> > - no global data in object implementations
> > - but what about extension modules? many have global data,
> > don't they? Also, in a C embedding API, will the VM state be
> > taken from thread-local storage or passed into every function
> > by the embedding application via an opaque pointer?
> IMO we should defer this specific question to a later point. I don't
> expect a C-API to PyPython any time soon. We probably will go for
> a Python-only system first. At least I wouldn't consider
> backward-compatibility to CPython's C-API a mandatory feature.
Just my view: CPython API compatibility - I don't care. Any C API
which exposes the VM - that I probably would care about.
Think about it: a fast, compact, portable Python VM with good multithreading
support and good integration capabilities could be a real competitor
in the embedding world. On the other hand, how likely is it that PyPython
becomes a threat to "stand-alone" Python any time soon? OK, I've just read
a bit of the performance stuff and PyCo - that's of course cool. Anyway,
I submitted an item for this issue too ;)
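To illustrate the opaque-pointer style asked about above: a minimal sketch of an embedding API in which all interpreter state is passed explicitly, so two isolated VMs can coexist in one process. All names (`vm_state`, `vm_new`, `vm_eval`) are hypothetical, not any real Python API.

```c
#include <stdlib.h>

/* All per-interpreter data lives behind an opaque pointer; nothing
   is global, nothing is in thread-local storage. */
typedef struct vm_state vm_state;

struct vm_state {
    int object_count;   /* stand-in for real interpreter state */
};

static vm_state *vm_new(void) {
    return calloc(1, sizeof(vm_state));
}

static void vm_free(vm_state *vm) {
    free(vm);
}

/* Every API call takes the VM it operates on, supplied by the
   embedding application. */
static void vm_eval(vm_state *vm, const char *src) {
    (void)src;          /* a real VM would compile and run src */
    vm->object_count++;
}
```

With such an API the embedder can create as many interpreters as it likes; work done in one VM leaves the others untouched, because there is simply no shared state for it to touch.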
> We might as well run on ObjC for that matter :-)
> > I'm thinking of embedding a dynamic language into an
> > multithreaded OODBMS. The different VMs within the server process
> > would share data via objects instantiated from a special
> > extension class that wraps up the OODBMS. Synchronisation on
> > these objects is taken care of by the OODBMS. No need to share
> > objects local to each VM.
> A valid use case, IMO, although some people would say
> that you might use ZODB which does it with multiple
> processes. To me the biggest point for using processes
> is stability. It does come at a cost, though. You have
> to deal with "copies" of objects and only have "shared"
> objects at a higher level. This might suit your needs, though.
It imposes a big performance hit, since every call on an object
from a persistable class must go over IPC, unless the object is
cached in the process. But then all the synchronization issues
(pessimistic via locking or optimistic via multi-versioning)
arise again. I don't like it.
Regarding the stability issue: agreed when speaking of components
hacked in C/C++. But if I have components in Python, then I don't
see any problems - unless my VM is buggy, of course.
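The sharing scheme described earlier could be sketched as follows: each VM holds only a proxy into the store, and all synchronization is delegated to the OODBMS, so the VMs themselves share no interpreter state. Everything here (`oodbms_record`, `vm_proxy`, the lock flag) is a hypothetical stand-in, not a real API.

```c
/* A record owned by the store; VMs never touch `value` directly. */
typedef struct {
    long value;     /* the persistent datum */
    int  locked;    /* stand-in for the OODBMS's real locking */
} oodbms_record;

static long oodbms_read(oodbms_record *r) {
    r->locked = 1;              /* acquire (sketch only) */
    long v = r->value;
    r->locked = 0;              /* release */
    return v;
}

static void oodbms_write(oodbms_record *r, long v) {
    r->locked = 1;
    r->value = v;
    r->locked = 0;
}

/* Per-VM proxy object: holds no shared interpreter state, only a
   pointer into the store. Two VMs can each hold their own proxy
   to the same record without sharing any Python-level objects. */
typedef struct {
    oodbms_record *target;
} vm_proxy;

static long proxy_get(vm_proxy *p)         { return oodbms_read(p->target); }
static void proxy_set(vm_proxy *p, long v) { oodbms_write(p->target, v); }
```

The point of the design is that the only cross-VM traffic goes through the store's own synchronization, keeping each interpreter free of shared mutable state.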
> > Sadly, as I was told, fixing the above issues for CPython
> > would be extremely hard. It's a shame that global data was not
> > avoided in CPython from the very beginning.
> I am sure we will try to avoid global data structures.
> We are all good and nice programmers, aren't we? :-)
> Tobias, would you mind going to our issue-tracker
> registering yourself and posting your "issue" as a
> "wishlist" item? This way we are sure not to loose
> valuable suggestions and use cases.
> Better yet, join the upcoming sprint right next door :-)