Python advocacy in scientific computation
lycka at carmen.se
Tue Mar 7 16:57:37 CET 2006
Terry Reedy wrote:
> I believe it is Guido's current view, perhaps Google's collective view, and
> a general *nix view that such increases can just as well come thru parallel
> processes. I believe one can run separate Python processes on separate
> cores just as well as one can run separate processes on separate chips or
> separate machines. Your view has also been presented and discussed on the
> pydev list. (But I am not one for thread versus process debate.)
That's OK for me. I usually have lots of different things
happening on the same computer, but for someone who writes
an application and wants to make that particular program faster,
there is not a lot of support for building simple multi-process
systems in Python. While multi-threading is nasty, it makes
it possible to perform tasks in the same program in parallel.
I could well imagine something similar to Twisted, where the
different tasks handled by the event loop were dispatched to
parallel execution on different cores/CPUs by some clever
mechanism under the hood.
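A sketch of what such a mechanism could look like, assuming a process-pool abstraction along the lines of today's concurrent.futures.ProcessPoolExecutor (which did not exist when this was written) — tasks are dispatched to worker processes, so the GIL in any one interpreter doesn't serialize them:

```python
import concurrent.futures

def cpu_bound(n):
    # A stand-in CPU-intensive task: sum of squares below n.
    return sum(i * i for i in range(n))

def run_parallel(inputs):
    # Dispatch each task to a separate worker process; each worker
    # has its own interpreter and its own GIL, so they run in parallel.
    with concurrent.futures.ProcessPoolExecutor() as pool:
        return list(pool.map(cpu_bound, inputs))

if __name__ == "__main__":
    print(run_parallel([10, 100, 1000]))
```

The application code looks like an ordinary map over tasks; the "clever mechanism under the hood" is just a pool of worker processes.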
If the typical CPU X years from now is a 5GHz processor with
16 cores, we probably want a single Python "program" to be able
to use more than one core for CPU intensive tasks.
>>least if writing threaded applications becomes less error prone
>>in competing languages, this might well be the weak point of Python
>>in the future.
> Queue.Queue was added to help people write correct threaded programs.
What I'm trying to say is that multi-threaded programming
(in all languages) is difficult. If *other* languages manage
to make it less difficult than today, they will achieve a
convenient performance boost that Python can't compete with
when the GIL prevents parallel execution of Python code.
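For reference, the Queue.Queue approach mentioned above does remove a whole class of locking bugs. A minimal producer-consumer sketch (in Python 3 the module is spelled `queue`; the worker count and sentinel convention here are illustrative choices, not anything the original post specifies):

```python
import queue
import threading

def worker(tasks, results):
    # Queue's blocking get/put do all the locking; no explicit
    # locks are needed in user code.
    while True:
        item = tasks.get()
        if item is None:  # sentinel: tells this worker to shut down
            break
        results.put(item * item)

def squares(numbers, n_workers=4):
    tasks, results = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for n in numbers:
        tasks.put(n)
    for _ in threads:       # one sentinel per worker
        tasks.put(None)
    for t in threads:
        t.join()
    return sorted(results.get() for _ in numbers)
```

This is correct threaded code, but because of the GIL the threads still take turns running Python bytecode, so CPU-bound work gains nothing from the extra cores.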
- For many years, CPU performance has increased year after
year through higher and higher processor clock speeds: the
number of instructions pushed through a single processing
pipeline per second kept growing. This simple kind of speed
increase is flattening out. The first 1GHz CPUs from Intel
appeared about five years ago, when CPU speed still doubled
every two years. At that pace, we would have had 6GHz CPUs
by now. We don't! Perhaps someone will make some new invention
and the race will be on again...but that's not the case right
now.
- The hardware trend right now is to make CPUs allow more
parallel execution. Today, dual-core CPUs are becoming
common, and in a few years there will be many more cores
on each CPU.
- While most computers have a number of processes running in
parallel, there is typically one process / application on a
computer that is performance critical.
- To utilize the performance in these multi-core processors,
we need to use multi-threading, multiple processes or some
other technology that executes code in parallel.
- I think languages and application design patterns will evolve
to better support this parallel execution. I guess it's only
a few languages such as Erlang that support it well today.
- If Python isn't to fall far behind the competition, it has
to manage this shift in how to utilize processor power. I
don't know if this will happen through core language changes,
or via some convenient modules that make fork/spawn/whatever
and IPC much more convenient, or if there is something
entirely different waiting around the corner. Something is
needed, I think.
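A convenience module of the kind hinted at in the last point might wrap fork/spawn and IPC behind one call. A sketch using the multiprocessing module (which landed in the standard library in Python 2.6, after this post; the helper names here are made up for illustration):

```python
import multiprocessing

def _child(conn, func, arg):
    # Run func in the child process and ship the result back
    # to the parent over the pipe.
    conn.send(func(arg))
    conn.close()

def run_in_subprocess(func, arg):
    # Hypothetical convenience wrapper: spawn a worker process,
    # pass one argument, receive one result -- fork + IPC in one call.
    parent_conn, child_conn = multiprocessing.Pipe()
    p = multiprocessing.Process(target=_child, args=(child_conn, func, arg))
    p.start()
    result = parent_conn.recv()
    p.join()
    return result

def double(x):
    return 2 * x

if __name__ == "__main__":
    print(run_in_subprocess(double, 21))  # 42
```

The caller never touches fork, pipes, or serialization directly, which is roughly the level of convenience the post is asking for.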