vend82 at virgilio.it
Sat Jun 6 02:49:38 CEST 2009
On Jun 6, 1:26 am, Roedy Green <see_webs... at mindprod.com.invalid> wrote:
> On Fri, 5 Jun 2009 18:15:00 +0000 (UTC), Kaz Kylheku
> <kkylh... at gmail.com> wrote, quoted or indirectly quoted someone who
> said :
> >Even for problems where parallelism appears trivial, there can be
> >hidden issues, such as false sharing, where cache-coherency traffic
> >occurs even though no actual data sharing is taking place. Locks that
> >appear to have low contention and negligible performance impact on
> >``only'' 8 processors can suddenly turn into bottlenecks. Then there
> >is NUMA: a given memory address may refer to RAM attached to the
> >processor accessing it, or to another processor, with very different
> >access costs.
> Could what you are saying be summed up as: "The more threads you
> have, the more important it is to keep your threads independent,
> sharing as little data as possible"?
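A minimal Python sketch of that idea (names and chunking strategy are my own, not from the thread): give each worker its own slice of the data, share no mutable state, and combine results only at the end.

```python
# Sketch: keep workers independent by handing each one its own chunk.
# No shared mutable state means no locks to contend on and no
# cross-worker interference on shared data.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker touches only its own chunk of the input.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the input into roughly equal, disjoint chunks.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        # Results are combined only once, after the parallel phase.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # prints 332833500
```

The design point is that communication happens exactly twice, at fork and at join, rather than continuously through shared memory.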
Besides technical issues such as cache conflicts and synchronization
latencies, there are also more theoretical issues of task
decomposability: it is not always feasible to decompose an algorithm
into subprograms that can be executed in parallel. An algorithm whose
every step depends on the result of the previous step is inherently
sequential.
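As an illustration of such a non-decomposable computation (my own example, not from the thread), consider iterating a recurrence: each value depends on the one before it, so the loop cannot be split into independent pieces for parallel workers.

```python
# Illustration: an inherently sequential recurrence (the logistic map).
# Computing x[n+1] requires x[n], so there is no way to hand different
# ranges of n to independent workers.
def logistic_orbit(x0, r, steps):
    x = x0
    orbit = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)  # x[n+1] depends directly on x[n]
        orbit.append(x)
    return orbit

if __name__ == "__main__":
    print(logistic_orbit(0.5, 3.7, 5))
```

The dependency chain here is the whole computation; contrast this with a sum, where partial results can be computed in any order and merged.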
More information about the Python-list mailing list