Can a parallelized program run slower than a single-process program?

Dave Angel d at davea.name
Thu Jun 21 07:58:29 CEST 2012


On 06/21/2012 01:05 AM, Yesterday Paid wrote:
> from multiprocessing import Pool
> from itertools import product
>
> def sym(lst):
>     x,y=lst
>     tmp=x*y
>     if rec(tmp):
>         return tmp
>     else:
>         return None
>
> def rec(num):
>     num=str(num)
>     if num == "".join(reversed(num)):    return True
>     else:    return False

Those last two lines could be replaced by simply
         return num == num[::-1]
which should be much faster.  I haven't checked that, however.
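Spelled out, the slicing version looks like this (a quick sketch of the suggestion above, Python 3 syntax):

```python
def rec(num):
    # A number is palindromic exactly when its string form equals its
    # reverse; the [::-1] slice reverses the string in C without the
    # intermediate list that "".join(reversed(...)) builds.
    num = str(num)
    return num == num[::-1]
```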

> if __name__ == "__main__":
>     pool=Pool(processes=8)
>     lst=product(xrange(100,1000),repeat=2)
>     rst=pool.map(sym,lst)
>     #rst=sym(lst)
>     print max(rst)
>
>
> in my old computer(2006),
> when run it on single process, it takes 2 seconds
> but using multiprocessing.pool, it takes almost 8 or 9 seconds
> is it possible?

Sure, it's possible.  Neither multithreading nor multiprocessing will
help a CPU-bound program if you don't have multiple processors to run
it on.  All you get is the overhead of the pickling and unpickling
needed to move data between processes, with no reduction in useful
processing time.

I suspect that in your case, the amount of work you're giving each
process is small, compared to the work that the multiprocessing module
is doing getting the processes to cooperate.
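One standard way to shift that balance is the chunksize argument to Pool.map, which ships work to the workers in batches so the pickling cost is paid per chunk rather than per pair.  A sketch of your program with that change (Python 3 syntax; the value 1000 is just a guess you'd have to tune):

```python
from itertools import product
from multiprocessing import Pool

def sym(pair):
    # Return the product if it is a palindrome, else None.
    x, y = pair
    tmp = x * y
    s = str(tmp)
    return tmp if s == s[::-1] else None

if __name__ == "__main__":
    with Pool(processes=8) as pool:
        lst = product(range(100, 1000), repeat=2)
        # chunksize=1000 sends 1000 pairs per pickle round-trip instead
        # of paying the inter-process cost for every single pair.
        rst = pool.map(sym, lst, chunksize=1000)
    print(max(r for r in rst if r is not None))  # prints 906609
```

Whether this beats the single-process version still depends on how cheap sym is relative to the remaining IPC cost; for work this light it may well not.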

Now, if you had a quad-core CPU (8 hyperthreads), and if the work each
process were doing were non-trivial, then I'd expect the balance to go
in the other direction.

Or if the operation involved I/O or other non-CPU-intensive work.
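For I/O-bound work, plain threads are usually enough, since a thread blocked on I/O releases the GIL.  A sketch using multiprocessing.dummy (the thread-backed twin of Pool, same API), with time.sleep standing in for a blocking read or network call:

```python
import time
from multiprocessing.dummy import Pool as ThreadPool  # threads, not processes

def slow_io(x):
    time.sleep(0.1)  # stand-in for a blocking read/network call
    return x * 2

start = time.time()
with ThreadPool(8) as pool:
    results = pool.map(slow_io, range(8))
elapsed = time.time() - start
# The eight 0.1 s waits overlap, so this takes roughly 0.1 s rather
# than the ~0.8 s a sequential loop would need.
print(results, round(elapsed, 1))
```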

-- 

DaveA


