Locking around
Nikolaus Rath
Nikolaus at rath.org
Wed Aug 6 05:30:57 EDT 2008
Ulrich Eckhardt <eckhardt at satorlaser.com> writes:
> Nikolaus Rath wrote:
>> I need to synchronize access to a couple of hundred thousand
>> files[1]. It seems to me that creating one lock object for each of the
>> files is a waste of resources, but I cannot use a global lock for all
>> of them either (since the locked operations go over the network, this
>> would make the whole application essentially single-threaded even
>> though most operations act on different files).
>
> Just wondering, but at what time do you know what files are needed?
As soon as I have read a client request. Also, I will only need one
file per request, not multiple.
> If you know that rather early, you could simply 'check out' the
> required files, do whatever you want with them and then release them
> again. If one of the requested files is marked as already in use,
> you simply wait (without reserving the others) until someone
> releases files and then try again. You could also wait for that
> precise file to be available, but that would require that you
> already reserve the other files, which might unnecessarily block
> other accesses.
>
> Note that this idea requires that each access locks one set of files at the
> beginning and releases them at the end, i.e. no attempts to lock files in
> between, which would otherwise easily lead to deadlocks.
I am not sure that I understand your idea. To me this sounds exactly
like what I'm already doing, just replace 'check out' by 'lock' in
your description... Am I missing something?
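
Concretely, what I am doing now looks roughly like this (a much
simplified sketch, not the exact code from my original posting; the
class and attribute names are made up):

import threading

class LockManager(object):
    def __init__(self):
        self.global_lock = threading.Lock()
        self.locks = dict()   # filename -> (lock, user_count)

    def acquire(self, fname):
        # Creation and destruction of per-file locks is protected
        # by the global lock
        self.global_lock.acquire()
        try:
            if fname in self.locks:
                lock, count = self.locks[fname]
                self.locks[fname] = (lock, count + 1)
            else:
                lock = threading.Lock()
                self.locks[fname] = (lock, 1)
        finally:
            self.global_lock.release()
        # Block only on this one file, not on the global lock
        lock.acquire()

    def release(self, fname):
        self.global_lock.acquire()
        try:
            lock, count = self.locks[fname]
            lock.release()
            if count == 1:
                del self.locks[fname]   # last user, destroy the lock
            else:
                self.locks[fname] = (lock, count - 1)
        finally:
            self.global_lock.release()
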
>> My idea is therefore to create and destroy per-file locks "on-demand"
>> and to protect the creation and destruction by a global lock
>> (self.global_lock). For that, I add a "usage counter"
>> (wlock.user_count) to each lock, and destroy the lock when it reaches
>> zero.
> [...code...]
>
>> - Does that look like a proper solution, or does anyone have a better
>> one?
>
> This should work, at least the idea is not flawed. However, I'd say
> there are too many locks involved. Rather, you just need a simple
> flag and the global lock. Further, you need a condition/event that
> tells waiting threads that some files have been released, so that they
> can check again whether the ones they want are available.
I have to agree that this sounds like an easier implementation. I just
have to think about how to do the signalling. Thanks a lot!
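
For the archives, here is roughly how I picture the signalling with a
single lock, a set of the names currently in use, and a
threading.Condition (untested sketch, the class and method names are
invented):

import threading

class FileLocker(object):
    def __init__(self):
        # The condition owns its own lock; the "flag" is simply the
        # set of file names that are currently checked out.
        self.cond = threading.Condition()
        self.in_use = set()

    def acquire(self, fname):
        self.cond.acquire()
        try:
            while fname in self.in_use:
                # wait() returns after someone calls notifyAll() in
                # release(); the while loop then re-checks the flag
                self.cond.wait()
            self.in_use.add(fname)
        finally:
            self.cond.release()

    def release(self, fname):
        self.cond.acquire()
        try:
            self.in_use.remove(fname)
            # Wake all waiters so they can re-check their file names
            self.cond.notifyAll()
        finally:
            self.cond.release()

If I have understood your suggestion correctly, that would get rid of
all the per-file lock objects and the usage counters.
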
Best,
-Nikolaus
--
»It is not worth an intelligent man's time to be in the majority.
By definition, there are already enough people to do that.«
-G.H. Hardy
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C