2.6, 3.0, and truly independent interpreters

Michael Sparks ms at cerenity.org
Sat Oct 25 14:14:41 CEST 2008

Andy O'Meara wrote:

> basically, it seems that we're talking about the
> "embarrassingly parallel" scenario raised in that paper

We build applications in Kamaelia and then discover afterwards that they're
embarrassingly parallel and just work. (We have an introspector that can
look inside running systems and show us the structure of what's going on;
very useful for debugging.)

My current favourite example of this is a tool created to teach small
children to read and write:

It uses gesture recognition and speech synthesis, and has a top-level view of
around 15 concurrent components, with significant numbers of nested ones.

(OK, that's not embarrassingly parallel, since it's only around 50 things,
but the whiteboard application, with around 200 concurrent things, is.)

The trick is to stop viewing concurrency as the problem, and instead find a
way to use it as a tool that makes code easier to write. That program was a
10-hour or so hack. You end up focusing on the problem you want to solve,
and naturally gain a concurrency-friendly system.
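To make the component idea concrete, here's a minimal sketch of the model
Kamaelia uses: each concurrent "thing" is a generator with an inbox and an
outbox, and a simple round-robin scheduler moves messages between stages.
The class and function names here (Component, Doubler, Collector,
run_pipeline) are hypothetical illustrations, not Kamaelia's actual Axon API.

```python
from collections import deque

class Component:
    """A concurrent 'thing': owns an inbox and an outbox, and runs
    as a cooperative generator. (Sketch only, not the real Axon API.)"""
    def __init__(self):
        self.inbox = deque()
        self.outbox = deque()

    def main(self):
        raise NotImplementedError

class Doubler(Component):
    """Reads numbers from its inbox, emits their doubles."""
    def main(self):
        while True:
            while self.inbox:
                self.outbox.append(2 * self.inbox.popleft())
            yield  # hand control back to the scheduler

class Collector(Component):
    """Accumulates everything it receives, so we can inspect results."""
    def __init__(self):
        super().__init__()
        self.received = []

    def main(self):
        while True:
            while self.inbox:
                self.received.append(self.inbox.popleft())
            yield

def run_pipeline(stages, steps=10):
    """Round-robin scheduler: step each component, then deliver its
    outbox messages to the next stage's inbox."""
    tasks = [stage.main() for stage in stages]
    for _ in range(steps):
        for i, task in enumerate(tasks):
            next(task)
            if i + 1 < len(stages):
                while stages[i].outbox:
                    stages[i + 1].inbox.append(stages[i].outbox.popleft())

doubler, collector = Doubler(), Collector()
doubler.inbox.extend([1, 2, 3])
run_pipeline([doubler, collector])
print(collector.received)  # -> [2, 4, 6]
```

Because components only touch their own mailboxes, wiring more of them
together (or, later, moving delivery onto threads or processes) is a
scheduling detail rather than a rewrite of the components themselves.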

Everything else (GILs, shared memory, etc.) then "just" becomes an
optimisation problem - something to be done only if you need it.

My previous favourite examples were based around digital TV, or
user-generated content transcode pipelines.

My reason for preferring the speak-and-write tool at the moment is that it's
a problem you wouldn't normally think of as benefiting from concurrency,
when in this case it benefited by being made easier to write in the first
place.

