bedouglas at earthlink.net
Sun Mar 1 18:14:54 CET 2009
To your question... no file is acquired by multiple processes. And yeah,
I've thought of copying the files to a separate dir for each child/client
process to work with... and in all honesty, this is as fast as I can envision
at this time...
From: python-list-bounces+bedouglas=earthlink.net at python.org
[mailto:python-list-bounces+bedouglas=earthlink.net at python.org] On Behalf Of
Sent: Sunday, March 01, 2009 8:04 AM
To: python-list at python.org
Subject: Re: file locking...
> Got a bit of a question/issue that I'm trying to resolve. I'm asking this in
> a few groups, so bear with me.
> I'm considering a situation where I have multiple processes running, and
> each process is going to access a number of files in a dir. Each process
> accesses a unique group of files, and then writes the group of files to
> another dir. I can easily handle this by using a form of locking, where I
> have the processes lock/read a lockfile and only access a group of files in
> the dir based on the open/free status of the lockfile.
> However, the issue with this approach is that it's somewhat synchronous. I'm
> looking for something that might be more asynchronous/parallel, in that I'd
> like to have multiple processes each access a unique group of files from a
> given dir as fast as possible.
> So.. Any thoughts/pointers/comments would be greatly appreciated. Any
> pointers to academic research, etc.. would be useful.
You say "each process accesses a unique group of files". Does this mean
that no two processes access the same file?
If yes, then why do you need locking?
If no, then could you lock, move the files into a work folder, and then
unlock? (Remember that moving a file on the same volume (disk) should be
a quick renaming.) There could be one work folder per process, although
that isn't necessary because each process would know (or be told) which
of the files in the work folder belonged to it.