Is this a bug in multiprocessing or in my script?

ryles rylesny at gmail.com
Wed Aug 5 21:48:24 CEST 2009


On Aug 4, 10:37 pm, erikcw <erikwickst... at gmail.com> wrote:
> Traceback (most recent call last):
>   File "scraper.py", line 144, in <module>
>     print pool.map(scrape, range(10))
>   File "/usr/lib/python2.6/multiprocessing/pool.py", line 148, in map
>     return self.map_async(func, iterable, chunksize).get()
>   File "/usr/lib/python2.6/multiprocessing/pool.py", line 422, in get
>     raise self._value
> TypeError: expected string or buffer

This is almost certainly due to your scrape call raising an exception.
In the parent process, multiprocessing detects when one of its
workers has terminated with an exception and then re-raises it.
However, only the exception itself, not the original traceback, is
made available, which makes debugging more difficult for you. Here's a
simple example which demonstrates this behavior:

>>> from multiprocessing import Pool
>>> def evil_on_8(x):
...   if x == 8: raise ValueError("I DONT LIKE THE NUMBER 8")
...   return x + 1
...
>>> pool = Pool(processes=4)
>>> pool.map(evil_on_8, range(5))
[1, 2, 3, 4, 5]
>>> pool.map(evil_on_8, range(10)) # 8 will cause evilness.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/bb/real/3ps/lib/python2.6/multiprocessing/pool.py", line 148, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/bb/real/3ps/lib/python2.6/multiprocessing/pool.py", line 422, in get
    raise self._value
ValueError: I DONT LIKE THE NUMBER 8
>>>

My recommendation is that you wrap your scrape code in a try/except
and log any exception. I usually do this with logging.exception(),
or, if logging is not in use, with the traceback module. After that
you can simply re-raise it.
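A minimal sketch of that pattern, with a placeholder scrape() standing in
for your real one (names here are just for illustration):

```python
import logging
import traceback

logging.basicConfig(level=logging.ERROR)

def scrape(x):
    # Placeholder for your real scrape(); fails on one input
    # to show what the wrapper catches.
    if x == 3:
        raise TypeError("expected string or buffer")
    return x * 2

def scrape_logged(x):
    try:
        return scrape(x)
    except Exception:
        # The worker still has the full traceback at this point, so
        # record it here before multiprocessing discards it on the way
        # back to the parent.
        logging.exception("scrape(%r) failed", x)
        # Or, if logging is not configured: traceback.print_exc()
        raise  # re-raise so pool.map() still reports the failure
```

Then pass scrape_logged instead of scrape to pool.map(), and the worker's
log will show you exactly which line of scrape blew up.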
