Handling locked db tables...
nagle at animats.com
Fri Feb 22 05:04:21 CET 2008
> Thanks for the reply. I understand that this is normal locking
> behavior. What I am looking for is a standard method to either loop
> the query until the table is unlocked, or put the query into some sort
> of queue. Basically my queries work like this.
> Request comes in
> PART I:
> LOCK TABLE port_usage READ;
> SELECT * FROM port_usage WHERE in_use = 0;
> Get available port
> UPDATE port_usage SET in_use = 1 WHERE port = available_port;
> UNLOCK TABLES;
> send request to available port and do some stuff until finished with
> PART II:
> LOCK TABLE port_usage READ
> UPDATE port_usage SET in_use = 0 WHERE port = available_port;
> UNLOCK TABLES;
> Several of these *may* be happening simultaneously so when a second
> request comes in, and the first one has the table locked, I want to
> have the PART I sql still work. Any suggestions here?
Ah. You're just using MySQL as a lock manager. Check out
GET_LOCK, RELEASE_LOCK, and IS_FREE_LOCK. That may be simpler
for this application.
I use those functions regularly, for coordinating multiple
processes and servers. They're quite useful when you have
multiple servers, and OS-level locking isn't enough.
But what you're doing should work. It could be improved:
use "SELECT * FROM port_usage WHERE in_use = 0 LIMIT 1;",
since you only need one row back. Also, unless port usage must
persist across reboots or you have millions of ports, use the
MEMORY engine for the table; then it lives entirely in RAM. Each
restart of MySQL will clear the table.
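Putting those two suggestions together, the table definition and the lookup might look like this (the column definitions are an assumption based on your description):

```sql
-- Hypothetical DDL: keep the port table in RAM (cleared on restart).
CREATE TABLE port_usage (
    port   INT NOT NULL PRIMARY KEY,
    in_use TINYINT NOT NULL DEFAULT 0
) ENGINE=MEMORY;

-- Only one free port is needed, so stop after the first match.
SELECT * FROM port_usage WHERE in_use = 0 LIMIT 1;
```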
You shouldn't get an error if the table is locked; the
MySQL connection just waits. What error are you getting?
You wrote "But, when another cursor then tries to do the read
I get an error unless the first process has been completed...
unlocking the tables." Bear in mind that you can only have one
cursor per database connection. The MySQLdb API makes it look
like you can have multiple cursors, but they don't actually give
you independent connections; all cursors from one connection
share a single session on the server.
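So if each worker needs its own locking context, give each its own connection rather than sharing one. A sketch, with a stand-in class where real code would call MySQLdb.connect():

```python
import threading

_lock = threading.Lock()

class FakeConnection:
    """Stand-in for MySQLdb.connect(); each instance represents a
    separate server session with its own lock/transaction state."""
    _counter = 0

    def __init__(self):
        with _lock:
            FakeConnection._counter += 1
            self.session_id = FakeConnection._counter

sessions = []

def worker():
    # One connection per thread: cursors made from a single shared
    # connection would all ride on the same server session.
    conn = FakeConnection()          # real code: MySQLdb.connect(...)
    sessions.append(conn.session_id)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(sessions))   # -> [1, 2, 3], three distinct sessions
```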