[Multiprocessing-sig] Logging when using multiprocessing...

Mike Meyer mwm at mired.org
Mon May 7 13:58:29 CEST 2012


The multiprocessing module docs just have a warning about "the logging
package does not use process shared locks so it is possible (depending
on the handler type) for messages from different processes to get
mixed up."

This seems at best inadequate. It also misses issue 13697, which means
that the logging module is unsafe to use with the multiprocessing
module on Unix (as of 3.2, anyway).

Googling for "multiprocessing logging python" turns up lots of
suggestions that you avoid the mixing issue by setting up a server
process and writing a new handler for logging that queues messages to
the server. This seems like overkill for a module that's pushed as the
alternative to threading because of the GIL.
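FWIW, the queue-based setup people describe doesn't have to be a whole
server process; as of 3.2 the stdlib already has the pieces
(logging.handlers.QueueHandler and QueueListener). A minimal sketch,
assuming 3.2+ - workers only ever push records onto a shared queue, and
one listener in the main process does all the file I/O, so records
can't interleave:

```python
# Sketch of the queue-based approach using the 3.2 stdlib pieces:
# workers push LogRecords onto a multiprocessing.Queue; a single
# QueueListener in the parent is the only thing that touches the file.
import logging
import logging.handlers
import multiprocessing

def worker(queue):
    # Each worker logs only through a QueueHandler -- no file I/O here.
    logger = logging.getLogger("worker")
    logger.addHandler(logging.handlers.QueueHandler(queue))
    logger.setLevel(logging.INFO)
    logger.info("message from pid %d", multiprocessing.current_process().pid)

def run_demo(logfile):
    queue = multiprocessing.Queue()
    # The listener owns the FileHandler; it is the only writer.
    listener = logging.handlers.QueueListener(
        queue, logging.FileHandler(logfile))
    listener.start()
    procs = [multiprocessing.Process(target=worker, args=(queue,))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    listener.stop()

if __name__ == "__main__":
    run_demo("app.log")
```

Still more plumbing than you'd hope for, but at least it's plumbing
the stdlib ships.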

What's funny is that I couldn't find any mention of using one of the
IP-style handlers (SMTPHandler, HTTPHandler) or SysLogHandler to avoid
the "mixing" problem completely (though that doesn't help with the
RLock bug). For that matter, even a FileHandler won't have much of a
problem if you keep your writes short enough that each record goes out
in a single write.
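The syslog route in particular is almost no code. A sketch, assuming a
syslog daemon listening on UDP (the address is a placeholder) - since
each record leaves as one datagram, records from different processes
can't interleave mid-message:

```python
# Per-process SysLogHandler sketch: each process creates its own
# handler; every record is sent as a single UDP datagram, so there is
# nothing for the OS to interleave.
import logging
import logging.handlers

def make_syslog_logger(address=("localhost", 514)):
    # `address` is a placeholder -- point it at your actual syslogd.
    logger = logging.getLogger("mp-syslog")
    handler = logging.handlers.SysLogHandler(address=address)
    # Tag each record with the originating pid so the log is sortable.
    handler.setFormatter(logging.Formatter("%(process)d %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```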

On the other hand, the FileHandlers that close and reopen the file can
do a lot worse than "mix up the messages": one process can wipe out
earlier messages written by another process entirely.
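Easy to see with a toy example (two handlers in one script standing in
for two processes, a hypothetical setup just to illustrate): a handler
that reopens the file in truncating mode destroys whatever the first
writer logged.

```python
# Illustration of the reopen hazard: the second FileHandler reopens
# with mode="w", truncating the file and destroying the first
# writer's record.
import logging

def reopen_clobbers(path):
    first = logging.FileHandler(path, mode="w")
    first.emit(logging.LogRecord("demo", logging.INFO, __file__, 1,
                                 "first message", None, None))
    first.close()
    # Second "process" reopens in truncating mode -- goodbye, record 1.
    second = logging.FileHandler(path, mode="w")
    second.emit(logging.LogRecord("demo", logging.INFO, __file__, 2,
                                  "second message", None, None))
    second.close()
    with open(path) as f:
        return f.read()
```

Opening in append mode avoids the truncation, but the rotating
handlers rename and reopen behind your back, which is where the real
trouble starts.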

Anyone got any thoughts about fixing these issues? Or at least
documenting them?

    <mike
-- 
Mike Meyer <mwm at mired.org>		http://www.mired.org/
Independent Software developer/SCM consultant, email for more information.

O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
