Closing Output files

Doug Stanfield DOUGS at
Fri May 26 15:30:34 EDT 2000

[Michaell Taylor had a question:]

> I have what I am sure is a simple problem to which there should be a
> simple answer.
[..............problem description snipped............]
> Not closing the file means that one process dominates.  I need all
> instances to be able to read and write to the file - not at the same
> time.  In general, a process will need rw access to the file to write
> a single number before wandering off to perform roughly half an hour
> of computations.  What am I missing?

Instead of trying to use operating system functions to handle the process
coordination, maybe you could write a Python program to do it. ;-)

I assume from your description that you expected to have a shared file
system on one of the available computers and that was where you expected to
let each of the processes write their results.  I'd suggest looking into a
client/server design with only the server process owning the files and
dispatching the subtasks to the workers.

My current favorite for talking between processes on different machines is
xml-rpc.  There are others, such as pyro and dopy.  Check the Vaults of
Parnassus at

One possible design: the server starts an xmlrpc server.  The clients clock
in as they are started and request a data set from the server.  The server
opens the results file, tracks which client gets which data set, waits for
the clients to check in a result, and then dispatches more data.  Because
you have only one process writing to the file, that particular coordination
problem is eliminated.
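To make that concrete, here is a minimal sketch of the dispatcher idea using
the xmlrpc modules from the standard library (in newer Pythons they live in
xmlrpc.server and xmlrpc.client; back then the equivalent was xmlrpclib).
All the names here - Dispatcher, get_dataset, submit_result, the port number,
the file format - are illustrative choices, not anything from the original
post:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer

class Dispatcher:
    """Owns the results file; hands out data sets and records results."""

    def __init__(self, datasets, results_path):
        self.lock = threading.Lock()   # serialize access to shared state
        self.pending = list(datasets)  # work not yet handed out
        self.in_progress = {}          # client_id -> dataset being worked on
        self.results_path = results_path

    def get_dataset(self, client_id):
        """A client 'clocks in' and requests work; None means no work left."""
        with self.lock:
            if not self.pending:
                return None
            dataset = self.pending.pop()
            self.in_progress[client_id] = dataset
            return dataset

    def submit_result(self, client_id, result):
        """Record a finished result.  Only this server process ever
        writes the results file, so there is no write contention."""
        with self.lock:
            dataset = self.in_progress.pop(client_id)
            with open(self.results_path, "a") as f:
                f.write("%s %s\n" % (dataset, result))
            return True

def serve(dispatcher, port=8000):
    """Expose the dispatcher's methods over XML-RPC."""
    server = SimpleXMLRPCServer(("localhost", port), allow_none=True)
    server.register_instance(dispatcher)
    server.serve_forever()
```

A worker client would then do something along the lines of:

    import xmlrpc.client
    proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
    dataset = proxy.get_dataset("worker-1")
    # ... roughly half an hour of computation ...
    proxy.submit_result("worker-1", answer)
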

Whether the description above generates fears of worse problems may have
much to do with your experience of Python.  I've found that the ability of
Python and its libraries to simplify a possibly complicated design makes it
easy to try things out and either discover very powerful techniques or
discard the unworkable with minimal investment of time and effort.

Good luck,


More information about the Python-list mailing list