2.6, 3.0, and truly independent interpreters
andy55 at gmail.com
Thu Oct 30 18:54:14 CET 2008
On Oct 30, 1:00 pm, "Jesse Noller" <jnol... at gmail.com> wrote:
> Multiprocessing is written in C, so as for the "less agile" - I don't
> see how it's any less agile than what you've talked about.
Sorry for not being more specific there, but by "less agile" I meant
that an app's codebase is less agile if python is an absolute
requirement. If I were told tomorrow that for some reason we had to
drop python and go with something else, it's my job to have chosen a
codebase path/roadmap such that my response back isn't just "well,
we're screwed then." Consider modern PC games: they have huge code
bases that use DirectX and OpenGL, and because a roadmap of
flexibility is paramount, the packages they choose are used in a
contained and hedged fashion. It's a survival tactic for a company not
to entrench itself in a package or technology if it doesn't have to
(and that's what I keep trying to raise in this thread--that the
python dev community should embrace development that makes python a
leading candidate for lightweight use). Companies want to build
flexible, powerful codebases that are married to as few components as
possible.
> > - Shared memory -- for the reasons listed in my other posts, IPC or a
> > shared/mapped memory region doesn't work for our situation (and I
> > venture to say, for many real world situations otherwise you'd see end-
> > user/common apps use forking more often than threading).
> I would argue that the reason most people use threads as opposed to
> processes is simply based on "ease of use and entry" (which is ironic,
> given how many problems it causes).
No, we're in agreement here -- I was just trying to offer a more
detailed explanation of "ease of use". It's "easy" because memory is
shared and no IPC, serialization, or special allocator code is
required. And as we both agree, it's far from "easy" once those
threads need to interact with each other. But again, my goal here is
to stay on the "embarrassingly easy" parallelization scenarios.
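To make the "embarrassingly easy" case concrete, here's a toy sketch (nobody's production code -- the buffer, slice boundaries, and fill values are all made up): worker threads mutate a shared list directly, with no IPC, serialization, or special allocator code, because every thread sees the same memory.

```python
import threading

# A shared buffer: every thread sees the same memory, so no IPC,
# serialization, or special allocator code is needed.
data = [0] * 1_000_000

def fill(start, stop, value):
    # "Embarrassingly easy" parallel work: each thread writes to a
    # disjoint slice, so the threads never need to interact either.
    for i in range(start, stop):
        data[i] = value

threads = [
    threading.Thread(target=fill, args=(0, 500_000, 1)),
    threading.Thread(target=fill, args=(500_000, 1_000_000, 2)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(data[0], data[-1])  # prints: 1 2 -- main thread sees both results
```

With separate processes, by contrast, `data` would be copied (pickled) into each child unless it were placed in explicitly shared memory, e.g. via `multiprocessing.Array` -- which is exactly the "ease of entry" gap being discussed.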
> I would argue that most of the people taking part in this discussion
> are working on "real world" applications - sure, multiprocessing as it
> exists today, right now - may not support your use case, but it was
> evaluated to fit *many* use cases.
And as I've mentioned, it's a totally great endeavor to be super proud
of. That suite of functionality alone opens some *huge* doors for
python, and I hope folks who use it appreciate how much time and
thought undoubtedly had to go into it. You get total props, for sure,
and your work is a huge and unique credit to the community.
> Please correct me if I am wrong in understanding what you want: You
> are making threads in another language (not via the threading API),
> embed python in those threads, but you want to be able to share
> objects/state between those threads, and independent interpreters. You
> want to be able to pass state from one interpreter to another via
> shared memory (e.g. pointers/contexts/etc).
> ParentAppFoo makes 10 threads (in C)
> Each thread gets an itty bitty python interpreter
> ParentAppFoo gets a object(video) to render
> Rather than marshal that object, you pass a pointer to the object to
> the children
> You want to pass that pointer to an existing, or newly created itty
> bitty python interpreter for mangling
> Itty bitty python interpreter passes the object back to a C module via
> a pointer/context
> If the above is wrong, I think possibly outlining it in the above form
> may help people conceptualize it - I really don't think you're talking
> about python-level processes or threads.
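The pointer-passing step in that outline can be sketched from pure Python with ctypes (a toy stand-in for the real C-side code -- the 16-byte buffer, `address`, and `child_worker` are all invented here for illustration): the parent hands the child only a raw address, and the child mangles the memory in place rather than receiving a marshalled copy.

```python
import ctypes

# Hypothetical stand-in for the object the C parent owns: a raw
# 16-byte buffer allocated outside the "itty bitty" interpreter.
buf = ctypes.create_string_buffer(16)
address = ctypes.addressof(buf)   # the only thing handed to the child

def child_worker(ptr, size):
    # Child side: reattach to the parent's memory from the bare
    # pointer/context and mutate it in place -- nothing is marshalled.
    view = (ctypes.c_char * size).from_address(ptr)
    ctypes.memmove(view, b"done", 4)

child_worker(address, 16)
print(buf.raw[:4])  # parent observes the in-place mutation: b'done'
```

In the real scenario described above the pointer would cross a C/Python boundary (e.g. wrapped in a capsule-style C object) instead of staying inside one interpreter, but the key property is the same: state moves by address, not by copy.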
Yeah, you have it right-on there, with the added fact that the C and
python execution (and data access) are highly intertwined (so getting
and releasing the GIL would have to be happening all over). For
example, consider the dynamics, logic, algorithms, and data structures
associated with image and video effects.
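That constant GIL hand-off can be seen even from pure Python (a toy timing sketch, not the C API itself): when a thread enters C-level code that drops the GIL -- `time.sleep` stands in here for a real C routine releasing the GIL around its heavy lifting -- other threads get to run concurrently.

```python
import threading
import time

def c_level_work():
    # time.sleep releases the GIL while it blocks, standing in for a
    # C routine that drops the GIL around its own work.
    time.sleep(0.3)

start = time.perf_counter()
threads = [threading.Thread(target=c_level_work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Both 0.3 s sleeps overlap because each released the GIL, so the
# total is well under the 0.6 s a serialized run would take.
print(f"{elapsed:.2f}s")
```

In the intertwined C/python scenario described above, that acquire/release dance (`PyGILState_Ensure`/`PyGILState_Release` on the C side) would be happening at every boundary crossing, which is exactly the overhead being pointed at.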
More information about the Python-list