dual processor

Steve Jorgensen nospam at nospam.nospam
Tue Sep 6 00:10:09 CEST 2005

On Mon, 05 Sep 2005 21:43:07 +0100, Michael Sparks <ms at cerenity.org> wrote:

>Steve Jorgensen wrote:
>> On 05 Sep 2005 10:29:48 GMT, Nick Craig-Wood <nick at craig-wood.com> wrote:
>>>Jeremy Jones <zanesdad at bellsouth.net> wrote:
>>>>  One Python process will only saturate one CPU (at a time) because
>>>>  of the GIL (global interpreter lock).
>>>I'm hoping python won't always be like this.
>> I don't get that.  Python was never designed to be a high performance
>> language, so why add complexity to its implementation by giving it
>> high-performance capabilities like SMP? 
>It depends on personal perspective. If in a few years time we all have
>machines with multiple cores (eg the CELL with effective 9 CPUs on a chip,
>albeit 8 more specialised ones), would you prefer that your code *could*
>utilise your hardware sensibly rather than not.
>Or put another way - would you prefer to write your code mainly in a
>language like python, or mainly in a language like C or Java? If python,
>it's worth worrying about!
>If it was python (or similar) you might "only" have to worry about
>concurrency issues. If it's a language like C you might have to worry
>about  memory management, typing AND concurrency (oh my!).
>(Let alone C++'s TMP :-)
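For context on the GIL point quoted above, here is a minimal sketch (assuming CPython; the variable names are illustrative) of why two CPU-bound threads in one process don't use two CPUs:

```python
import threading

# Two CPU-bound threads in one CPython process take turns holding
# the global interpreter lock, so between them they saturate only
# one CPU; wall-clock time is roughly the same as running the two
# workloads back to back.
def sum_to(n, results, slot):
    total = 0
    while n > 0:
        total += n
        n -= 1
    results[slot] = total

results = [0, 0]
t1 = threading.Thread(target=sum_to, args=(1000000, results, 0))
t2 = threading.Thread(target=sum_to, args=(1000000, results, 1))
t1.start(); t2.start()
t1.join(); t2.join()

# Both threads compute the correct answer; the GIL costs
# parallelism, not correctness.
print(results[0] == results[1])
```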

That argument makes some sense, but I'm still not sure I agree.  Rather than
making Python programmers deal with concurrency issues in every application
just to make good use of the hardware it runs on, why not have many of the
common libraries Python uses for heavy processing take advantage of SMP
internally?  A database server is a good example of how we can already do
some of that today.  Also, what if operations like hash table updates were
made lazy (if they aren't already) and processed in the background, so the
table is more likely to be ready when the next lookup occurs?
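The "put the SMP in the library" idea can be sketched as follows: the library farms CPU-bound work out to worker processes, each with its own interpreter and its own GIL, while the application code just calls an ordinary function. This uses the standard library's multiprocessing module (which did not exist at the time of this post; it was added in Python 2.6), and `parallel_map`/`square` are hypothetical names for illustration:

```python
from multiprocessing import Pool

def square(x):
    # CPU-bound work; it runs in a worker process, each of which
    # has its own interpreter and therefore its own GIL.
    return x * x

def parallel_map(func, items):
    # The "library" entry point: callers see a plain function and
    # never deal with the concurrency underneath.
    with Pool() as pool:
        return pool.map(func, items)

if __name__ == "__main__":
    print(parallel_map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The design point is the same one the database-server example makes: parallelism lives behind an interface, so the application stays sequential.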
