xeon hyperthreading & GIL
seb.bacon at jamkit.com
Mon Feb 10 14:48:55 CET 2003
> In article <1044623847.893400 at ananke.eclipse.net.uk>,
> Seb Bacon <seb.bacon at jamkit.com> wrote:
>>Let me try again:
>>Currently python's global interpreter lock will often cause SMP system
>>performance to *decrease* due to the cost of context switching.
> Wrong. It's not the cost of context switching that causes the
> performance decrease (or at least no more than for an equivalent C
> program). It's the inability to schedule multiple simultaneous Python
> threads. In order to get a performance boost (and this applies to
> single-CPU machines just as much as SMP), you need to call out to an
> extension that releases the GIL.
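To illustrate the point above with nothing but the standard library: a call that releases the GIL while it waits (here `time.sleep`, standing in for any well-behaved extension call) lets Python threads genuinely overlap, which pure bytecode cannot do. A minimal sketch:

```python
import threading
import time

def blocking_call():
    # time.sleep releases the GIL while waiting, just like an
    # extension that wraps its work in Py_BEGIN_ALLOW_THREADS.
    time.sleep(0.2)

threads = [threading.Thread(target=blocking_call) for _ in range(4)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The four 0.2 s waits overlap because each releases the GIL,
# so total wall time is close to 0.2 s rather than 0.8 s.
print("elapsed: %.2fs" % elapsed)
```

Replace the sleep with a pure-Python busy loop and the threads serialize on the GIL, on one CPU or many.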
Stating the completely obvious, but there is also an inability to
schedule multiple simultaneous threads on a single CPU.
Therefore, the fact that you can't schedule them on SMP does not explain
why SMP performs worse than single-CPU in many Python applications.

I understood that SMP is slower because context switching between
processors is more expensive than switching contexts on a single
processor. You can certainly improve SMP performance by increasing the
check interval on the lock, which kind of implies that my understanding
is correct.
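For reference, the knob in question is `sys.setcheckinterval()` in the Python 2 line, measured in bytecode instructions between GIL checks; modern CPython replaced it with `sys.setswitchinterval()`, measured in seconds. A minimal sketch of tuning it:

```python
import sys

# Python 2.x spelling (bytecode instructions between GIL checks):
#   sys.setcheckinterval(1000)

# Modern CPython spelling: the interval is wall-clock seconds
# between forced thread switches. Raising it reduces switch
# overhead at the cost of coarser thread scheduling.
sys.setswitchinterval(0.05)
print(sys.getswitchinterval())
```

Raising the interval means threads fight over the GIL less often, which is exactly why it masks some of the SMP cost being discussed.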
The relevant part of your response to my original question is "at least
no more than for an equivalent C program".
Given an equivalent C program, and no rewriting of the Python code
outside the GIL, how will the performance of a hyperthreaded Xeon
compare to that of an equivalent SMP system?