tim.arnold at sas.com
Thu Sep 3 16:48:32 CEST 2009
"Jan Kaliszewski" <zuo at chopin.edu.pl> wrote in message
news:mailman.895.1251958800.2854.python-list at python.org...
> 06:49:13 Scott David Daniels <Scott.Daniels at acm.org> wrote:
>> Tim Arnold wrote:
>>> (1) what's wrong with having each chapter in a separate thread? Too
>>> much going on for a single processor?
>> Many more threads than cores and you spend a lot of your CPU switching
>> between them rather than doing useful work.
> In fact, Python threads work best on a powerful single core; with
> more cores they become surprisingly inefficient.
> The culprit is Python's GIL and the way it [mis]cooperates with the OS.
> See: http://www.dabeaz.com/python/GIL.pdf
> Jan Kaliszewski (zuo) <zuo at chopin.edu.pl>
I've read about the GIL (I think I understand the problem there)--thanks. In
my example, the actual job for each chapter ended up being a subprocess call
(which ran a different Python program). I figured that would save me from
GIL problems, since each subprocess gets its own interpreter and hence its
own GIL. In the words of Tom Waits, "the world just keeps getting bigger
when you get out on your own". So I'm re-reading now, and maybe what I've
been doing would have been better served by the multiprocessing package.
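For what it's worth, here's a minimal sketch of the threads-plus-subprocess
approach I've been using (the chapter names and the echo command are just
stand-ins for the real build call):

```python
import subprocess
import threading

results = []  # return codes from each chapter build

def build(chapter):
    # subprocess.call blocks in the OS, releasing the GIL while it waits,
    # so threads mostly just sit idle here rather than fighting over the GIL.
    rc = subprocess.call(["echo", "building", chapter])
    results.append(rc)

chapters = ["ch1", "ch2"]
threads = [threading.Thread(target=build, args=(c,)) for c in chapters]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for all chapter builds before the main job continues
```

Since each thread spends nearly all its time blocked waiting on the child
process, the GIL contention from the dabeaz slides shouldn't really bite here.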
I'm running python2.6 on FreeBSD with a dual quadcore cpu. Now my questions:
(1) what the heck should I be doing to get concurrent builds of the
chapters, wait for them all to finish, and pick up processing the main job
again? The separate chapter builds have no need for communication--they're
completely independent.
(2) is using threads with the target function calling subprocess a bad idea?
(3) should I study up on the multiprocessing package and/or pprocessing?
thanks for your inputs,