pavlovevidence at gmail.com
Wed Aug 6 12:49:43 CEST 2008
On Aug 4, 9:30 am, Nikolaus Rath <Nikol... at rath.org> wrote:
> I need to synchronize the access to a couple of hundred-thousand
> files. It seems to me that creating one lock object for each of the
> files is a waste of resources, but I cannot use a global lock for all
> of them either (since the locked operations go over the network, this
> would make the whole application essentially single-threaded even
> though most operations act on different files).
> My idea is therefore to create and destroy per-file locks "on-demand"
> and to protect the creation and destruction by a global lock
> (self.global_lock). For that, I add a "usage counter"
> (wlock.user_count) to each lock, and destroy the lock when it reaches
> zero.
>
> My questions:
> - Does that look like a proper solution, or does anyone have a better
>   idea?
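(For reference, the quoted design, per-file locks created on demand under a global lock and destroyed when the usage counter hits zero, might look like the sketch below. The `_KeyLock` class and the `acquire_key`/`release_key` names are my own invention; only `global_lock` and `user_count` come from the quoted message.)

```python
import threading

global_lock = threading.Lock()  # guards creation/destruction of per-key locks
locks = {}                      # key -> _KeyLock

class _KeyLock:
    """A per-file lock plus the usage counter described above."""
    def __init__(self):
        self.lock = threading.Lock()
        self.user_count = 0

def acquire_key(key):
    # Create the per-key lock on demand under the global lock,
    # then block on the per-key lock *outside* it, so waiting on
    # one file does not stall access to every other file.
    with global_lock:
        wlock = locks.setdefault(key, _KeyLock())
        wlock.user_count += 1
    wlock.lock.acquire()

def release_key(key):
    # Drop the usage counter and destroy the lock when it reaches zero.
    with global_lock:
        wlock = locks[key]
        wlock.lock.release()
        wlock.user_count -= 1
        if wlock.user_count == 0:
            del locks[key]
```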
You don't need the per-file locks at all if you use a global lock like
this. Here's a way to do it using threading.Condition objects. I
suspect it might not perform so well if there is a lot of competition
for certain keys but it doesn't sound like that's the case for you.
Performance and robustness improvements left as an exercise. (Note:
I'm not sure where self comes from in your examples so I left it out.)
import threading

global_lock = threading.Condition()
locked_keys = set()

def lock_s3key(s3key):
    with global_lock:
        while s3key in locked_keys:
            global_lock.wait()
        locked_keys.add(s3key)
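The archived message is cut off at this point. A complete, runnable version of the pattern it describes — one Condition guarding a set of busy keys, with waiters re-checking the set in a loop — could look like this (`unlock_s3key` and the `notify_all` call are my reconstruction, not necessarily what the original post contained):

```python
import threading

global_lock = threading.Condition()
locked_keys = set()

def lock_s3key(s3key):
    # Block until no other thread holds this key, then claim it.
    # The while loop (not an if) guards against spurious wakeups and
    # against another waiter grabbing the key first.
    with global_lock:
        while s3key in locked_keys:
            global_lock.wait()
        locked_keys.add(s3key)

def unlock_s3key(s3key):
    # Release the key and wake every waiter so each can re-check the set.
    with global_lock:
        locked_keys.remove(s3key)
        global_lock.notify_all()
```

Threads working on different keys never block each other here; only threads competing for the same key serialize, which matches the poster's goal of avoiding an effectively single-threaded application.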
More information about the Python-list mailing list