rmorgan466 at gmail.com
Tue Sep 29 03:20:16 CEST 2015
I am using multiprocessing with apply_async to do some work. Each task
takes a few seconds, but I have several thousand tasks. I was wondering if
there is a more efficient method, especially since I plan to operate on
large in-memory (NumPy) arrays.
Here is what I have now:
import multiprocessing as mp

def f(arg):
    # body elided in the original post; the surviving fragments
    # suggest a point-in-circle test (x*x + y*y <= 1) inside a loop
    ...

pool = mp.Pool()
resultObj = [pool.apply_async(f, (arg,)) for arg in xrange(n)]
result = [i.get() for i in resultObj]
1) Does multiprocessing fork a new process for each task?
2) If so, I assume that's costly due to setup and teardown. Would this be
3) I plan to pass large arrays to the function, f; is there a more
efficient method to achieve this?
--- Get your facts first, then you can distort them as you please. ---