parallel ftp uploads and pool size

ben
Wed Jan 9 21:10:43 CET 2013


I have a Python script that uploads multiple files from the local machine to a remote server in parallel over FTP, using a process pool:

p = Pool(processes=x)

Now, as I increase the value of x, the overall upload time for all files drops, as expected. If I set x too high, however, an exception is thrown. The exact value at which this happens varies, but it is around 20:
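For reference, the pattern described above looks roughly like this. The worker function, file list, host, and credentials are all placeholders I've invented for illustration - the real FTP transfer is sketched in comments so the example runs without a server:

```python
from multiprocessing import Pool

def upload_file(path):
    # In the real script this would open an FTP connection and STOR the
    # file, roughly like so (host/credentials are placeholders):
    #
    #   from ftplib import FTP
    #   ftp = FTP("ftp.example.com")
    #   ftp.login("user", "password")
    #   with open(path, "rb") as f:
    #       ftp.storbinary("STOR " + path, f)
    #   ftp.quit()
    #
    # Here we just return the path to simulate a completed upload.
    return path

if __name__ == "__main__":
    files = ["a.txt", "b.txt", "c.txt"]
    p = Pool(processes=4)                 # x = 4 worker processes
    results = p.map(upload_file, files)   # blocks until all tasks finish
    p.close()
    p.join()
    print(results)
```

Each worker process opens its own FTP connection, so a pool of size x means up to x simultaneous connections to the server.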

Traceback (most recent call last):
  File "", line 59, in <module>
  File "", line 56, in multiupload
  File "/usr/lib64/python2.6/multiprocessing/", line 148, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/usr/lib64/python2.6/multiprocessing/", line 422, in get
    raise self._value

Now, this is not a problem in itself - 20 is more than enough - but I'm trying to understand the mechanisms involved, and why the exact number of processes at which this exception occurs seems to vary.

I guess it comes down to the current resources of the server itself... but any insight would be much appreciated!
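One detail worth noting about the traceback: `Pool.map` collects worker results via `map_async(...).get()`, and if any worker raised, `get()` re-raises that exception in the parent (the `raise self._value` line), aborting the whole map. A common workaround - assuming the variable cap really is the server refusing extra connections (FTP servers typically answer with a "421 Too many connections" reply, and the limit depends on server configuration and current load) - is to wrap the worker so failures are returned as data instead of raised. Everything here is an illustrative sketch, not the original script:

```python
from multiprocessing import Pool

def upload_file(path):
    # Placeholder for the real FTP transfer; one path fails deliberately
    # to mimic the server refusing a connection.
    if path == "bad.txt":
        raise IOError("421 Too many connections")
    return path

def safe_upload(path):
    # Catch per-task errors so one refused connection does not abort
    # the entire Pool.map() call.
    try:
        return ("ok", upload_file(path))
    except Exception as exc:
        return ("error", path, str(exc))

if __name__ == "__main__":
    p = Pool(processes=4)
    results = p.map(safe_upload, ["a.txt", "bad.txt", "c.txt"])
    p.close()
    p.join()
    failures = [r for r in results if r[0] == "error"]
    print(failures)
```

With this pattern the script can retry just the failed files, or back off and reduce the pool size, instead of crashing near the server's connection limit.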
