Python threading (was: Re: global interpreter lock not working as it should)

Mark Hammond mhammond at skippinet.com.au
Fri Aug 9 01:15:08 EDT 2002


Bengt Richter wrote:
> On 08 Aug 2002 09:19:26 +0200, martin at v.loewis.de (Martin v. Loewis) wrote:
> 
> 
>>bokr at oz.net (Bengt Richter) writes:
>>
>>
>>>If you're talking about the compute-bound situation, as we have been,
>>>yes, but typically not everything is computation. I don't expect you mean
>>>that in general multithreading always slows down a *system* ;-)
>>
>>Compared to what? A single-threaded solution? I do think that
> 
> Yes, compared to a single-threaded system, when the programmer does not
> have the patience or skill to satisfy all the if's you mention below ;-)
> 
> 
>>multi-threading creates a higher CPU load, and if you manage not to
>>block in system calls when there is work to do, and to avoid
> 
> Don't forget that a disk controller is effectively blocking and waiting
> for attention if you don't give it work to do when there is disk work to do
> (although that can be mitigated with OS/file system readahead for sequential
> access etc.) So part of managing "not to block in system calls" may be getting
> the disk controller to start filling a new buffer in parallel with your single
> thread as soon as it's ready to, so by the time you need the data, you won't block.
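
One threaded reading of what Bengt is after: a helper thread keeps the next
buffer queued while the main thread chews on the current one. A minimal sketch
in modern Python (handle_chunk is a hypothetical callback standing in for the
CPU-bound work; none of this is code from the thread):

import threading
import queue

def start_prefetch(path, chunk_size=1 << 20, depth=2):
    # The reader thread keeps up to `depth` buffers queued, so the disk is
    # already filling the next chunk while the consumer processes this one.
    q = queue.Queue(maxsize=depth)

    def reader():
        with open(path, "rb") as f:
            while True:
                buf = f.read(chunk_size)
                q.put(buf)
                if not buf:            # empty read means EOF; acts as a sentinel
                    return

    threading.Thread(target=reader, daemon=True).start()
    return q

def consume(path, handle_chunk):
    q = start_prefetch(path)
    while True:
        buf = q.get()                  # usually already read in; no disk stall here
        if not buf:
            break
        handle_chunk(buf)              # CPU work overlaps the next disk read

The bounded Queue is what gives the double-buffering: while handle_chunk()
runs, the reader thread is already asking the disk for the next chunk.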

No, the technique is to write your single-threaded program as a state 
machine.  Effectively, this defines "when you need it" as "as soon as it 
is ready" <wink>

> In a single thread, the code to do that will likely be ugly and/or inefficient.
> Polling is effectively a time-distributed busy wait, so if you need to poll
> in order to keep I/O going, you are not really avoiding busy waiting; you are
> just diluting it with added latency. And worse, if you do it by writing Python
> code to poll, you will be hugely less efficient than letting ceval.c do it
> in the bytecode loop, even if the latter is not as optimal as it could be.

Agreed - but you don't do that.  Look at asyncore for an example - I 
think you will find that it both out-scales and simply out-performs 
a threaded solution.
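
For reference, asyncore packages exactly that select()-and-dispatch loop: you
fill in the handle_* callbacks and it runs the state machine for you, blocking
in select() rather than polling in Python code. A minimal echo-server sketch of
the 2002-era API (asyncore has since been removed from the standard library in
current Python):

import asyncore
import socket

class EchoHandler(asyncore.dispatcher_with_send):
    def handle_read(self):
        data = self.recv(4096)
        if data:
            self.send(data)            # dispatcher_with_send buffers partial writes

class EchoServer(asyncore.dispatcher):
    def __init__(self, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind(("", port))
        self.listen(5)

    def handle_accept(self):
        pair = self.accept()
        if pair is not None:
            conn, addr = pair
            EchoHandler(conn)

if __name__ == "__main__":
    EchoServer(8000)
    asyncore.loop()                    # one thread, one select() loop

Because the loop sleeps in select() until a socket is actually ready, none of
the Python-level polling Bengt worries about is involved.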

>>Threads are for convenience, not for performance.
> 
> Yes, but for many situations convenience is crucial in getting programmers
> to deal with problems of managing parallel system activity so as to have
> at least one unblocked thread available most of the time to keep the CPU busy.

Of course, you are both right <wink>.  High-level languages, and 
therefore Python, exist purely for convenience.  That something exists 
only for convenience does not decrease its worth.
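
As a footnote to the convenience point: the practical thing threads buy you in
CPython is that a thread blocked in a system call releases the GIL, so
overlapping I/O with computation costs a few lines rather than a hand-written
state machine. A minimal sketch (modern Python; the URL is only a placeholder):

import threading
import urllib.request

pages = []

def fetch(url):
    # Blocking network I/O: the GIL is released while we wait on the socket,
    # so the main thread below keeps the CPU busy in the meantime.
    pages.append(urllib.request.urlopen(url).read())

t = threading.Thread(target=fetch, args=("http://example.com/",))
t.start()

checksum = sum(i * i for i in range(10**6))    # CPU work proceeds while the fetch blocks
t.join()
print(len(pages[0]), checksum)

If the fetch fails, pages stays empty, so real code would want error handling;
the point here is only the overlap.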

Mark.



