A new and very robust method for doing file locking over NFS?
Douglas Alan
nessus at mit.edu
Thu Apr 17 15:56:17 EDT 2003
I'd like to do file locking over NFS without using lockd. The reason
I want to avoid using lockd is that many lockd implementations are
too buggy.
It is fairly easy to avoid using lockd -- just avoid using lockf() to
lock a file. Instead of using lockf(), lock a file by creating a lock
file that you open with the O_CREAT | O_EXCL flags. To unlock the
file, you merely unlink the lock file. This method is fairly
reliable, except that there is a small chance with every file lock or
unlock that something will go wrong.
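Concretely, what I mean is something along these lines (the ".lock"
suffix and the function names are just illustrative):

import os

def acquire_lock(path):
    # Take the lock by exclusively creating a lock file; O_EXCL makes
    # the creation fail if the lock file already exists.
    lockfile = path + ".lock"
    try:
        fd = os.open(lockfile, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError:
        return False        # somebody else already holds the lock
    os.close(fd)
    return True

def release_lock(path):
    # Unlock by removing the lock file.
    os.unlink(path + ".lock")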
The two failure symptoms, as I understand things, are as follows:
(1) The lock file might be created without the client realizing that
it has been created if the file creation acknowledgement is lost due
to severe network problems. The file being locked would then remain
locked forever (until someone manually deletes the lock) because no
process would take responsibility for having locked the file. This
failure symptom is relatively benign for my purposes, and, if needed,
it can be fixed via the approach described in the Red Hat man page for
the open() system call.
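For reference, my understanding of that man-page workaround is: create
a uniquely named temporary file on the same filesystem, link() it to
the lock file name, and if link() appears to fail, stat() the temporary
file and check whether its link count went to 2. A rough sketch
(function and file names are only illustrative):

import os, socket

def acquire_lock(path):
    lockfile = path + ".lock"
    # Create a file with a name unique to this host and process.
    unique = "%s.%s.%d" % (lockfile, socket.gethostname(), os.getpid())
    fd = os.open(unique, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    os.close(fd)
    try:
        # Hard-link the unique file to the real lock file name.
        os.link(unique, lockfile)
        got_it = True
    except OSError:
        # link() may have genuinely failed (lock already held), or only
        # its acknowledgement was lost; the unique file's link count
        # tells us which.
        got_it = (os.stat(unique).st_nlink == 2)
    os.unlink(unique)
    return got_it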
(2) When a process goes to remove its file lock, the acknowledgement
for the unlink() could be lost. If this happens, then the NFS driver
on the client could accidentally unlink a lock file created by another
process when it retries the unlink() request. This failure symptom is
pretty bad for my purposes, since it could cause a structured file to
become corrupt.
I have an idea for a slightly different way of doing file locking that
I think solves problem #2 (and also solves problem #1). What if,
instead of using a lock file to lock a file, we rename the file to
something like "filename.locked.hostname.pid"? If the rename()
acknowledgement gets lost, the client will see the rename() system
call as having failed due to the file not existing. But in this case
it can then check for the existence of "filename.locked.hostname.pid".
If this file exists, then the process knows that the rename() system
call didn't actually fail--the acknowledgement just got lost. Later,
when the process goes to unlock the file, it will rename the file back
to "filename". Again, if the rename system call appears to fail, the
process can check for the existence of "filename.locked.hostname.pid".
If the file no longer exists, then it knows the rename call really did
succeed, and again the acknowledgement just got lost.
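A rough sketch of what I have in mind (again, names are only
illustrative):

import errno, os, socket

def lock_by_rename(path):
    # Lock the file by renaming it to filename.locked.hostname.pid.
    locked = "%s.locked.%s.%d" % (path, socket.gethostname(), os.getpid())
    try:
        os.rename(path, locked)
    except OSError as e:
        # A retried rename whose first reply was lost reports ENOENT;
        # if the locked name now exists, the rename actually succeeded.
        if not (e.errno == errno.ENOENT and os.path.exists(locked)):
            raise
    return locked

def unlock_by_rename(path, locked):
    # Unlock by renaming the file back to its original name.
    try:
        os.rename(locked, path)
    except OSError as e:
        # Again, an apparent ENOENT plus the locked name being gone
        # means the rename really went through.
        if not (e.errno == errno.ENOENT and not os.path.exists(locked)):
            raise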
How does this sound? Is this close to foolproof, or am I missing
something?
I'm not much of an NFS expert, so I am a bit worried that there are
details of NFS client-side caching that I don't understand that would
prevent this scheme from working without some modification.
|>oug