[Python-Dev] Threading idea -- exposing a global thread lock

Tim Peters tim.peters at gmail.com
Tue Mar 14 20:40:47 CET 2006


[Raymond Hettinger]
> ...
> I disagree that the need is rare.  My own use case is that I sometimes
> add some debugging print statements that need to execute
> atomically -- it is a PITA because PRINT_ITEM and PRINT_NEWLINE
> are two different opcodes and are not guaranteed to pair atomically.

Well, it's much worse than that, right?  If you have a print list with
N items, there are N PRINT_ITEM opcodes.

> The current RightWay(tm) is for me to create a separate daemon
> thread for printing and to send lines to it via the queue module
> (even that is tricky because you don't want the main thread to exit
> before a queued print item is completed).  I suggest that that is too
> complex for a simple debugging print statement.
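
For the record, a minimal sketch of that queue-based pattern might look
like the following (the dprint/shutdown names are purely illustrative,
and the sentinel-plus-join is just one way to handle the "main thread
exits too early" problem Raymond mentions):

import sys, threading, Queue   # the module is named queue in Python 3

_q = Queue.Queue()

def _printer():
    while True:
        line = _q.get()
        if line is None:       # sentinel:  no more messages are coming
            break
        sys.stdout.write(line + '\n')
        sys.stdout.flush()

_t = threading.Thread(target=_printer)
_t.setDaemon(True)   # don't block interpreter exit if shutdown() is never called
_t.start()

def dprint(line):
    _q.put(line)

def shutdown():
    # Make sure everything queued so far is printed before the main
    # thread exits.
    _q.put(None)
    _t.join()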

It sure is.  You're welcome to use my thread-safe debug-print function :-):

import sys

# logfile is assumed to be an already-open file object.
def msg(fmt, *args):
    s = fmt % args + '\n'
    for stream in sys.stdout, logfile:
        stream.write(s)
        stream.flush()

I use that for long-running (days) multi-threaded apps, where I want
to see progress messages on stdout but save them to a log file too. 
It assumes that the underlying C library writes a single string
atomically.  If I couldn't assume that, it would be easy to
acquire/release a lock inside the function.  For example, as-is, the
order of lines displayed on stdout isn't always exactly the same as
the order in the log file, and when I care about that (I rarely do)
adding a lock can make it deterministic.
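
A locked variant is a minimal sketch along these lines (assuming, as in
the function above, that logfile is an already-open file object):

import sys, threading

_msg_lock = threading.Lock()

def msg(fmt, *args):
    s = fmt % args + '\n'
    _msg_lock.acquire()
    try:
        # Holding the lock across both writes keeps the stdout order
        # and the log-file order identical.
        for stream in sys.stdout, logfile:
            stream.write(s)
            stream.flush()
    finally:
        _msg_lock.release()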

I also have minor variants of that function, some that prepend a
timestamp to each message, and/or prepend the id or name of the
current thread (a sketch of one such variant appears at the end of
this message).  Because all such decisions are hiding inside the
msg() function, it's very easy to change the debug output as needed. 
Or to do

def msg(*args):
    pass

when I don't want to see output at all.
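
For example, a sketch of one such variant, prepending a timestamp and
the current thread's name (again assuming logfile is an already-open
file object):

import sys, time, threading

def msg(fmt, *args):
    prefix = '%s %s: ' % (time.strftime('%Y-%m-%d %H:%M:%S'),
                          threading.currentThread().getName())
    s = prefix + fmt % args + '\n'
    for stream in sys.stdout, logfile:
        stream.write(s)
        stream.flush()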

