problem in implementing multiprocessing

James Mills prologic at shortcircuit.net.au
Mon Jan 19 07:00:45 CET 2009


On Mon, Jan 19, 2009 at 3:50 PM, gopal mishra <gopalm at infotechsw.com> wrote:
> I know this is not an I/O-bound problem. I am creating heavy
> objects in a subprocess, putting those objects on a queue, and
> getting them back in my main program from the queue. You can
> test this sample code:
> import time
> from multiprocessing import Process, Queue
>
> class Data(object):
>    def __init__(self):
>        self.y = range(1, 1000000)
>
> def getdata(queue):
>    data = Data()
>    queue.put(data)
>
> if __name__=='__main__':
>    t1 = time.time()
>    d1 = Data()
>    d2 = Data()
>    t2 = time.time()
>    print "without multiProcessing total time:", t2-t1
>    #multiProcessing
>    t1 = time.time()  # reset the timer so only this section is measured
>    queue = Queue()
>    Process(target= getdata, args=(queue, )).start()
>    Process(target= getdata, args=(queue, )).start()
>    s1 = queue.get()
>    s2 = queue.get()
>    t2 = time.time()
>    print "multiProcessing total time::", t2-t1

The reason your code doesn't behave as you expect, and the
multiprocessing part takes longer, is that each Data object
builds a rather large list of a million ints. That list then
has to be pickled, sent through the Queue, and unpickled in
the parent process, which is where most of the time goes.
Use xrange instead of range -- it yields the numbers lazily
without ever building the list.
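A rough way to see that serialization cost (just a sketch;
list(range(...)) stands in for the Data object's payload):

```python
import pickle
import time

# The million-int list inside each Data instance must be pickled to
# travel through a multiprocessing.Queue to the parent process; that
# serialization (plus the matching unpickle on the other side) is
# where most of the "multiprocessing" time goes.
big = list(range(1, 1000000))
t0 = time.time()
blob = pickle.dumps(big)
print("pickled %d ints into %d bytes in %.3f s"
      % (len(big), len(blob), time.time() - t0))
```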

Here's what I get (using xrange):

$ python test.py
without multiProcessing total time: 1.50203704834e-05
multiProcessing total time:: 0.116630077362
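The difference is that xrange never materializes the list at
all -- its memory footprint is constant no matter how long the
range is. A quick sketch to confirm (works on Python 2 or 3,
where range is already lazy):

```python
import sys

try:
    lazy_range = xrange  # Python 2: xrange is the lazy variant
except NameError:
    lazy_range = range   # Python 3: range is already lazy

r = lazy_range(1, 1000000)  # constant-size object, no list built
lst = list(r)               # materializes all ~1e6 ints
print(sys.getsizeof(r), sys.getsizeof(lst))
```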

cheers
James


