multi-core software
Jon Harrop
jon at ffconsultancy.com
Sun Jun 7 11:20:56 EDT 2009
George Neuner wrote:
> On Fri, 05 Jun 2009 16:26:37 -0700, Roedy Green
> <see_website at mindprod.com.invalid> wrote:
>>On Fri, 5 Jun 2009 18:15:00 +0000 (UTC), Kaz Kylheku
>><kkylheku at gmail.com> wrote, quoted or indirectly quoted someone who
>>said :
>>>Even for problems where it appears trivial, there can be hidden
>>>issues, like false sharing, where cache-coherency traffic occurs even
>>>though no actual sharing is taking place. Or locks that appear to have
>>>low contention and negligible performance impact on "only" 8 processors
>>>suddenly turn into bottlenecks. Then there is NUMA: a given address in
>>>memory may be RAM attached to the processor accessing it, or to another
>>>processor, with very different access costs.
>>
>>Could what you are saying be summed up by saying, "The more threads
>>you have, the more important it is to keep your threads independent,
>>sharing as little data as possible"?
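Kaz's false-sharing point is easy to demonstrate. In the minimal pthreads
sketch below (the struct layout, iteration count and 64-byte line size are my
own illustrative assumptions), each thread writes only its own counter, yet
the two counters sit in the same cache line, so the line ping-pongs between
cores and the "independent" threads slow each other down:

#include <pthread.h>
#include <stdio.h>

#define ITERS 100000000L

/* Two logically independent counters that happen to share a cache line. */
static struct {
    volatile long a;   /* written only by thread 1 */
    volatile long b;   /* written only by thread 2 */
    /* Putting ~56 bytes of padding between a and b (one counter per
       64-byte line) removes the slowdown. */
} counters;

static void *bump_a(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++)
        counters.a++;
    return NULL;
}

static void *bump_b(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++)
        counters.b++;
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump_a, NULL);
    pthread_create(&t2, NULL, bump_b, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("a = %ld, b = %ld\n", counters.a, counters.b);
    return 0;
}

On most multi-core machines the padded variant runs several times faster,
even though the two threads never touch the same variable.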
>
> And therein lies the problem of leveraging many cores. There is a lot
> of potential parallelism in programs (even in Java :) that is lost
> because it is too fine a grain for threads.
That will always be true, so it conveys no useful information to the
practitioner.
> Even the lightest-weight
> user-space ("green") threads need a few hundred instructions, minimum,
> to amortize the cost of context switching.
Work items in Cilk are much cheaper than that: an unstolen spawn costs only a
few times as much as an ordinary function call.
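For a sense of how that is possible, here is a minimal sketch in Cilk Plus /
OpenCilk syntax (it assumes a Cilk-enabled C compiler such as OpenCilk's
clang; the example itself is mine, not taken from Cilk's documentation):

#include <cilk/cilk.h>
#include <stdio.h>

static long fib(int n)
{
    if (n < 2)
        return n;
    /* cilk_spawn creates a work item, not an OS thread: unless an idle
       worker steals it, the child runs on the same worker at a cost
       close to an ordinary function call. */
    long x = cilk_spawn fib(n - 1);
    long y = fib(n - 2);
    cilk_sync;  /* wait for the spawned child before using x */
    return x + y;
}

int main(void)
{
    printf("fib(30) = %ld\n", fib(30));
    return 0;
}

Spawning at every level of a recursion like this would be hopeless with OS
threads and painful even with green threads; with lazy work stealing it costs
little unless a steal actually happens.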
> Add to that the fact that programmers have shown themselves, on
> average, to be remarkably bad at figuring out what _should_ be done in
> parallel - as opposed to what _can_ be done - and you've got a clear
> indicator that threads, as we know them, are not scalable except under
> a limited set of conditions.
Parallelism is inherently not scalable. I see no merit in speculating about
the ramifications of "average" programmers' alleged inabilities.
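To put that more precisely: if a fraction p of a program's work parallelises
perfectly and the rest is serial, Amdahl's law gives a speedup on n cores of

  S(n) = 1 / ((1 - p) + p/n)  <=  1 / (1 - p)

so even p = 0.95 caps the speedup at 20x however many cores you add. That
limit is a property of the computation, not of threads in particular.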
--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u