[Twisted-Python] OT - adbapi, connection timeouts, mysql - OT
![](https://secure.gravatar.com/avatar/275dbf027f6e280894a60f83a480e083.jpg?s=120&d=mm&r=g)
Hello everyone, I have been experiencing the ConnectionError with adbapi & cp_reconnect=True. I know that because of the cp_reconnect=True param it reconnects, but the query is not re-run. I have written some code that should re-run the query in that case (if I get a Failure back because of a ConnectionError), but it doesn't seem to work. My question is whether anyone knows how to make MySQL's idle timeouts shorter so that I can debug my code? I searched Google and the MySQL site with no luck. Thank you, Gabriel
![](https://secure.gravatar.com/avatar/485d482a94574df2721dde13185b81fe.jpg?s=120&d=mm&r=g)
Hi Gabriel

had the same problem, solved it by having keepAlive() called in a LoopingCall(), MySQL sitting at defaults timing-wise.

```python
DB_DRIVER = "MySQLdb"
USERDB_ARGS = {
    'host': '',
    'db': '',
    'user': '',
    'passwd': '',
    'cp_reconnect': True
}

storekeeper = StoreKeeper(DB_DRIVER, **USERDB_ARGS)

ka = task.LoopingCall(storekeeper.store.keepAlive)
ka.start(300)


class StoreKeeper(object):
    def __init__(self, dbapiName, **params):
        self.store = Store(dbapiName, **params)

    def dbdisconn(self, reason):
        print 'db disconnected for ', reason

    def keepAlive(self):
        d = self.store.runQuery('SELECT 1')
        d.addErrback(self.dbdisconn)


# with store being something like:
class Store(object):
    def __init__(self, dbapiName, **params):
        self.__pool = adbapi.ConnectionPool(dbapiName, **params)
        print self.__pool.__getstate__()
        self.runOperation('SET autocommit = %s', 1)

    def runQuery(self, query, *args):
        d = self.__pool.runInteraction(self.mapQuery, query, args)
        return d

    def mapQuery(self, curs, query, *args):
        try:
            curs.execute(query, *args)
        except adbapi.ConnectionLost:
            print
            print '++++++++++++ rerunning query'
            print
            curs.execute(query, *args)  # simply resend query, assuming cp_reconnect=True
        result = curs.fetchall()
        columns = [d[0] for d in curs.description]
        return [dict(zip(columns, r)) for r in result]

    def runOperation(self, query, *args):
        d = self.__pool.runOperation(query, args)
        return d

    def runInteraction(self, fun, queries=(), args=()):
        d = self.__pool.runInteraction(fun, queries, args)
        return d
```

HTH, Werner
![](https://secure.gravatar.com/avatar/1629fa1d8b1a39cd63f4e3c4b3c2a6db.jpg?s=120&d=mm&r=g)
Regarding the original question: "how to make mysql's idle timeouts shorter so that I can debug my code?"

You should be able to do that in the mysql shell:

```
mysql> show variables like '%timeout%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| connect_timeout            | 5     |
| delayed_insert_timeout     | 300   |
| innodb_lock_wait_timeout   | 50    |
| innodb_rollback_on_timeout | OFF   |
| interactive_timeout        | 600   |
| net_read_timeout           | 30    |
| net_write_timeout          | 60    |
| slave_net_timeout          | 3600  |
| table_lock_wait_timeout    | 50    |
| wait_timeout               | 600   |
+----------------------------+-------+
10 rows in set (0.00 sec)

mysql> SET GLOBAL interactive_timeout = 5;
```
But in my experience MySQLdb makes the idle connection timeout very difficult to debug effectively.

Will twisted.adbapi.ConnectionPool ever offer a pool_recycle kw like sqlalchemy?

Clay Gerrard
Office: 210-312-3443 Mobile: 210-788-9431
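For anyone trying to reproduce the failure quickly, a self-contained way to trigger the idle-timeout error from plain Python is sketched below. It assumes MySQLdb is installed and the connection parameters are placeholders; note that MySQLdb connects as a non-interactive client, so wait_timeout (rather than interactive_timeout) is normally the value that applies to it.

```python
import time
import MySQLdb

# Placeholder connection parameters -- substitute your own.
conn = MySQLdb.connect(host='localhost', user='user', passwd='passwd', db='test')
curs = conn.cursor()

# Ask the server to drop this session after 5 idle seconds.
curs.execute("SET SESSION wait_timeout = 5")

time.sleep(10)  # stay idle longer than the timeout

try:
    curs.execute("SELECT 1")
except MySQLdb.OperationalError, e:
    # Typically (2006, 'MySQL server has gone away') or
    # (2013, 'Lost connection to MySQL server during query').
    print "idle connection was dropped:", e
```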
![](https://secure.gravatar.com/avatar/d5894734b9f67c07b276319fdc2e5d88.jpg?s=120&d=mm&r=g)
Instead of trying to keep the connection alive you can also just reconnect when necessary. Example code here: http://stackoverflow.com/questions/207981/how-to-enable-mysql-client-auto-re...
![](https://secure.gravatar.com/avatar/275dbf027f6e280894a60f83a480e083.jpg?s=120&d=mm&r=g)
Hello Garret, yes, I finally did something like that; I am currently testing the code. Thanks, Gabriel
![](https://secure.gravatar.com/avatar/1629fa1d8b1a39cd63f4e3c4b3c2a6db.jpg?s=120&d=mm&r=g)
In my experience "re-running the query" has not been sufficient. All of the connections in the pool time out around the same time. Using cp_reconnect just forces the "idle" connection (cursor) to be removed from the pool before raising the generic "ConnectionLost": def rollback(self): if not self._pool.reconnect: self._connection.rollback() return try: self._connection.rollback() curs = self._connection.cursor() curs.execute(self._pool.good_sql) curs.close() self._connection.commit() return except: log.err(None, "Rollback failed") self._pool.disconnect(self._connection) if self._pool.noisy: log.msg("Connection lost.") raise ConnectionLost() But when I go to re-run the query it's very unlikely that I'll get that same thread id again (and therefore a fresh connection). More than likely I'll get another stale connection which will also get dropped. Repeat ad infinitum I'm fairly sure the right thing to do is to make adbapi.Connection objects aware of their created time, and allow adbapi.ConnectionPool.connect to potentially refresh "old" connections based on an optional kwarg when creating the connection pool: "pool_recycle" That's what I'm working on anyway... Clay Gerrard Office: 210-312-3443 Mobile: 210-788-9431 -----Original Message----- From: twisted-python-bounces@twistedmatrix.com [mailto:twisted-python-bounces@twistedmatrix.com] On Behalf Of Gabriel Rossetti Sent: Wednesday, July 22, 2009 1:34 AM To: Twisted general discussion Subject: Re: [Twisted-Python] OT - adbapi, connection timeouts, mysql - OT Hello Garret, yes, I finally did did something like that, I am currently testing the code. Thanks, Gabriel Garret Heaton wrote:
![](https://secure.gravatar.com/avatar/d5894734b9f67c07b276319fdc2e5d88.jpg?s=120&d=mm&r=g)
There is a solution to the reconnection issue posted here <http://www.gelens.org/2008/09/12/reinitializing-twisted-connectionpool/>, which one of my friends improved upon. He wasn't on the list at the time of this posting so I'm replying to this on his behalf. The improved solution is:

```python
import MySQLdb

from twisted.enterprise import adbapi
from twisted.python import log


class ReconnectingConnectionPool(adbapi.ConnectionPool):
    def _runInteraction(self, interaction, *args, **kw):
        try:
            return adbapi.ConnectionPool._runInteraction(
                self, interaction, *args, **kw)
        except MySQLdb.OperationalError, e:
            if e[0] not in (2006, 2013):
                raise
            log.msg("RCP: got error %s, retrying operation" % (e,))
            conn = self.connections.get(self.threadID())
            self.disconnect(conn)
            # try the interaction again
            return adbapi.ConnectionPool._runInteraction(
                self, interaction, *args, **kw)
```

Also on GitHub: http://gist.github.com/174056

It's working well for both of us so far.
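For context, a hypothetical way this subclass might be wired up in place of a plain adbapi.ConnectionPool (the connection arguments are placeholders, not taken from the original post):

```python
# Hypothetical usage sketch -- placeholder credentials.
dbpool = ReconnectingConnectionPool(
    "MySQLdb", host='localhost', db='test', user='user', passwd='passwd',
    cp_reconnect=True)

def printResult(rows):
    print rows

d = dbpool.runQuery("SELECT 1")
d.addCallback(printResult)
```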
In my experience "re-running the query" has not been sufficient.
All of the connections in the pool time out around the same time.
Using cp_reconnect just forces the "idle" connection (cursor) to be removed from the pool before raising the generic "ConnectionLost":
def rollback(self): if not self._pool.reconnect: self._connection.rollback() return
try: self._connection.rollback() curs = self._connection.cursor() curs.execute(self._pool.good_sql) curs.close() self._connection.commit() return except: log.err(None, "Rollback failed")
self._pool.disconnect(self._connection)
if self._pool.noisy: log.msg("Connection lost.")
raise ConnectionLost()
But when I go to re-run the query it's very unlikely that I'll get that same thread id again (and therefore a fresh connection). More than likely I'll get another stale connection which will also get dropped. Repeat ad infinitum
I'm fairly sure the right thing to do is to make adbapi.Connection objects aware of their created time, and allow adbapi.ConnectionPool.connect to potentially refresh "old" connections based on an optional kwarg when creating the connection pool:
"pool_recycle"
That's what I'm working on anyway...
Clay Gerrard Office: 210-312-3443 Mobile: 210-788-9431 -----Original Message----- From: twisted-python-bounces@twistedmatrix.com [mailto: twisted-python-bounces@twistedmatrix.com] On Behalf Of Gabriel Rossetti Sent: Wednesday, July 22, 2009 1:34 AM To: Twisted general discussion Subject: Re: [Twisted-Python] OT - adbapi, connection timeouts, mysql - OT
Hello Garret,
yes, I finally did did something like that, I am currently testing the code.
Thanks, Gabriel
Garret Heaton wrote:
Instead of trying to keep the connection alive you can also just reconnect when necessary. Example code here:
http://stackoverflow.com/questions/207981/how-to-enable-mysql-client-auto-re...
On Tue, Jul 21, 2009 at 2:18 PM, Clay Gerrard <clay.gerrard@rackspace.com <mailto:clay.gerrard@rackspace.com>> wrote:
Regarding the original question: "how to make mysql's idle timeouts shorter so that I can debug my code?"
You should be able to do that in the mysql shell: mysql> show variables like '%timeout%'; +----------------------------+-------+ | Variable_name | Value | +----------------------------+-------+ | connect_timeout | 5 | | delayed_insert_timeout | 300 | | innodb_lock_wait_timeout | 50 | | innodb_rollback_on_timeout | OFF | | interactive_timeout | 600 | | net_read_timeout | 30 | | net_write_timeout | 60 | | slave_net_timeout | 3600 | | table_lock_wait_timeout | 50 | | wait_timeout | 600 | +----------------------------+-------+ 10 rows in set (0.00 sec)
> set global variable interactive_timeout = 5;
But in my experience MySQLdb makes the idle connection timeout very difficult to debug effectively.
Will twisted.adbapi.ConnectionPool ever offer a pool_recycle kw like sqlalchemy?
Clay Gerrard Office: 210-312-3443 Mobile: 210-788-9431 -----Original Message----- From: twisted-python-bounces@twistedmatrix.com <mailto:twisted-python-bounces@twistedmatrix.com> [mailto:twisted-python-bounces@twistedmatrix.com <mailto:twisted-python-bounces@twistedmatrix.com>] On Behalf Of Werner Thie Sent: Tuesday, July 21, 2009 4:05 PM To: Twisted general discussion Subject: Re: [Twisted-Python] OT - adbapi, connection timeouts, mysql - OT
Hi Gabriel
had the same problem, solved it by having keepalive() called in a LoopingCall(), MySQL sitting at defaults timingwise.
DB_DRIVER = "MySQLdb"
USERDB_ARGS = { 'host': '', 'db': '', 'user': '', 'passwd': '', 'cp_reconnect': True }
storekeeper = StoreKeeper(DB_DRIVER, **USERDB_ARGS)
ka = task.LoopingCall(storekeeper.store.keepAlive) ka.start(300)
class StoreKeeper(object): def __init__(self, dbapiName, **params): self.store = Store(dbapiName, **params)
def dbdisconn(self, reason): print 'db disconnected for ', reason
def keepAlive(self): d = self.store.runQuery('SELECT 1') d.addErrback(self.dbdisconn)
#with store being something like:
class Store(object): def __init__(self, dbapiName, **params): self.__pool = adbapi.ConnectionPool(dbapiName, **params) print self.__pool.__getstate__() self.runOperation('SET autocommit = %s', 1)
def runQuery(self, query, *args): d = self.__pool.runInteraction(self.mapQuery, query, args) return d
def mapQuery(self, curs, query, *args): try: curs.execute(query, *args) except adbapi.ConnectionLost: print print '++++++++++++ rerunning query' print curs.execute(query, *args) #simply resend query, assuming cp_reconnect=True result = curs.fetchall() columns = [d[0] for d in curs.description] return [dict(zip(columns, r)) for r in result]
def runOperation(self, query, *args): d = self.__pool.runOperation(query, args) return d
def runInteraction(self, fun, queries=(), args=()): d = self.__pool.runInteraction(fun, queries, args) return d
HTH, Werner
Gabriel Rossetti wrote: > Hello everyone, > > I have been experiencing the ConnectionError with adbapi & > cp_reconnect=True. I know that because of the cp_reconnect=True param > tha is reconnects and that the query is not re-run. I have written some > code that should re-run the query in that case (if I get a Failure back > because of a ConnectionError), but it doesn't seem to work. My question > is if anyone knows how to make mysql's idle timeouts shorter so that I > can debug my code? I searched google and the mysql site with no luck. > > thank you, > Gabriel > > _______________________________________________ > Twisted-Python mailing list > Twisted-Python@twistedmatrix.com <mailto:Twisted-Python@twistedmatrix.com> > http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
_______________________________________________ Twisted-Python mailing list Twisted-Python@twistedmatrix.com <mailto:Twisted-Python@twistedmatrix.com> http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
Confidentiality Notice: This e-mail message (including any attached
or
embedded documents) is intended for the exclusive and confidential use of the individual or entity to which this message is addressed, and unless otherwise expressly indicated, is confidential and privileged information of Rackspace. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse@rackspace.com <mailto:abuse@rackspace.com>, and delete the original message. Your cooperation is appreciated.
_______________________________________________ Twisted-Python mailing list Twisted-Python@twistedmatrix.com <mailto:Twisted-Python@twistedmatrix.com> http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
------------------------------------------------------------------------
_______________________________________________ Twisted-Python mailing list Twisted-Python@twistedmatrix.com http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
_______________________________________________ Twisted-Python mailing list Twisted-Python@twistedmatrix.com http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
Confidentiality Notice: This e-mail message (including any attached or embedded documents) is intended for the exclusive and confidential use of the individual or entity to which this message is addressed, and unless otherwise expressly indicated, is confidential and privileged information of Rackspace. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse@rackspace.com, and delete the original message. Your cooperation is appreciated.
_______________________________________________ Twisted-Python mailing list Twisted-Python@twistedmatrix.com http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
![](https://secure.gravatar.com/avatar/dd1243740a09f0676ef225404105cfc0.jpg?s=120&d=mm&r=g)
On Aug 24, 2009, at 2:58 PM, Garret Heaton wrote:
There is a solution to the reconnection issue posted here which one of my friends improved upon. He wasn't on the list at the time of this posting so I'm replying to this on his behalf. The improved solution is: [snip] # try the interaction again return adbapi.ConnectionPool._runInteraction(self, interaction, *args, **kw)
The thing that would concern me is that you can get a ConnectionLost error for a variety of different reasons, and they might not be raised by the first query issued in your interaction.

I feel like I can think of any number of ways in which running the same series of queries twice would be a bad thing. I have some very important queries that are run inside an interaction because they do a series of operations in sequence. I can't use transactions because my tables are MyISAM.

Perhaps I'm jumping at shadows, but automatic re-querying just seems dangerous to me. Only the particular application knows when it's safe.

-phil
![](https://secure.gravatar.com/avatar/5bd935f17a824b2ac0f9293b033ac2b4.jpg?s=120&d=mm&r=g)
Phil Christensen wrote:
The thing that would concern me is that you can get a ConnectionLost error for a variety of different reasons, and they might not be raised by the first query issued in your interaction.
I feel like I can think of any number of ways in which running the same series of queries twice would be a bad thing. I have some very important queries that are run inside an interaction because they do a series of operations in sequence. I can't use transactions because my tables are MyISAM.
Perhaps I'm jumping at shadows, but automatic re-querying just seems dangerous to me. Only the particular application knows when it's safe.
I agree strongly. A case I've run into with SQL Server accessed through ODBC: some work is done, resulting in some open connections in the pool. At some point the DB server goes down, or maybe connectivity is lost and later restored. Any following attempt to use one of the pool connections will fail, with one of several error messages, and I at least was unable to use them to clearly distinguish this case from an error on the request itself. I wound up putting a wrapper around adbapi that closes the connection after each request. -- Don Dwiggins Advanced Publishing Technology
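A wrapper along those lines might look something like the sketch below. This is not Don's actual code; the class name is invented, and it simply leans on the ConnectionPool internals already quoted elsewhere in this thread (connections, threadID(), disconnect()). The cost is one reconnect per request, which may or may not be acceptable for a given workload.

```python
from twisted.enterprise import adbapi


class DisposingConnectionPool(adbapi.ConnectionPool):
    """Hypothetical sketch: throw away the thread's connection after every
    interaction, so the next request always starts from a fresh connect."""

    def _runInteraction(self, interaction, *args, **kw):
        try:
            return adbapi.ConnectionPool._runInteraction(
                self, interaction, *args, **kw)
        finally:
            # This runs in the pool thread, so threadID() identifies the
            # connection that was just used; disconnect() closes it and
            # removes it from the pool.
            conn = self.connections.get(self.threadID())
            if conn is not None:
                self.disconnect(conn)
```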
![](https://secure.gravatar.com/avatar/d5894734b9f67c07b276319fdc2e5d88.jpg?s=120&d=mm&r=g)
On Mon, Aug 24, 2009 at 3:53 PM, Phil Christensen <phil@bubblehouse.org> wrote:
On Aug 24, 2009, at 2:58 PM, Garret Heaton wrote:
There is a solution to the reconnection issue posted here <http://www.gelens.org/2008/09/12/reinitializing-twisted-connectionpool/>, which one of my friends improved upon. He wasn't on the list at the time of this posting so I'm replying to this on his behalf. The improved solution is:
[snip]
# try the interaction again return adbapi.ConnectionPool._runInteraction(self, interaction, *args, **kw)
The thing that would concern me is that you can get a ConnectionLost error for a variety of different reasons, and they might not be raised by the first query issued in your interaction.
Interesting, I'll have to look in to this a bit more.
I feel like I can think of any number of ways running the same series of queries twice would be a bad thing. I have some very important queries that are run inside a interaction because they do a series of operations in sequence. I can't use transactions because my tables are MyISAM.
Perhaps I'm jumping at shadows, but automatic re-querying just seems dangerous to me. Only the particular application knows when it's safe.
This is very true, and I should have mentioned it in my reply. Re-running queries is only possible when you know your application can do it safely. How would you solve the issue of a lost connection in your case?
-phil
![](https://secure.gravatar.com/avatar/dd1243740a09f0676ef225404105cfc0.jpg?s=120&d=mm&r=g)
On Aug 24, 2009, at 9:14 PM, Garret Heaton wrote:
On Mon, Aug 24, 2009 at 3:53 PM, Phil Christensen <phil@bubblehouse.org> wrote: I feel like I can think of any number of ways in which running the same series of queries twice would be a bad thing. I have some very important queries that are run inside an interaction because they do a series of operations in sequence. I can't use transactions because my tables are MyISAM.
Perhaps I'm jumping at shadows, but automatic re-querying just seems dangerous to me. Only the particular application knows when it's safe.
This is very true, and I should have mentioned it in my reply. Re-running queries is only possible when you know your application can do it safely.
How would you solve the issue of a lost connection in your case?
Honestly, I have never actually solved it. I pretty much only write webapps these days, so when a ConnectionLost happens, it just propagates up the stack and displays an error to the user. Not ideal, by any means. However, in my case, the only time I really get ConnectionLost errors is when the MySQL connection timeout closes idle connections in the pool. I accidentally fixed this months ago by setting up Nagios to monitor the web services, and it runs often enough that it usually hits every open connection within that timeout (8hrs). The only safe way I can think of to do this is run a simple query (like SELECT 1) before every interaction. Then you would only reconnect/requery if that fails. The only problem with this is that it still doesn't protect you from lost connections that occur during a multi-query interaction. -phil
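A rough sketch of that preflight-ping idea follows (the class name is hypothetical; as noted above, it still cannot protect a multi-query interaction whose connection dies partway through):

```python
from twisted.enterprise import adbapi


class PingingConnectionPool(adbapi.ConnectionPool):
    """Hypothetical sketch: run a cheap query before each interaction and
    recycle the thread's connection if the ping fails."""

    def _runInteraction(self, interaction, *args, **kw):
        # Runs in a pool thread; connect() returns this thread's connection.
        conn = self.connect()
        try:
            curs = conn.cursor()
            curs.execute("SELECT 1")
            curs.close()
        except:
            # The ping failed, so the connection is stale: drop it and let
            # the base class open a fresh one for the real interaction.
            self.disconnect(conn)
        return adbapi.ConnectionPool._runInteraction(
            self, interaction, *args, **kw)
```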
![](https://secure.gravatar.com/avatar/426d6dbf6554a9b3fca1fd04e6b75f38.jpg?s=120&d=mm&r=g)
Phil Christensen wrote:
On Aug 24, 2009, at 9:14 PM, Garret Heaton wrote:
On Mon, Aug 24, 2009 at 3:53 PM, Phil Christensen <phil@bubblehouse.org> wrote: I feel like I can think of any number of ways in which running the same series of queries twice would be a bad thing. I have some very important queries that are run inside an interaction because they do a series of operations in sequence. I can't use transactions because my tables are MyISAM.
Perhaps I'm jumping at shadows, but automatic re-querying just seems dangerous to me. Only the particular application knows when it's safe. This is very true, and I should have mentioned it in my reply. Re-running queries is only possible when you know your application can do it safely.
How would you solve the issue of a lost connection in your case?
Honestly, I have never actually solved it. I pretty much only write webapps these days, so when a ConnectionLost happens, it just propagates up the stack and displays an error to the user. Not ideal, by any means.
It's hard to apply in the general case, but I like the way Zope handles this using the per-request transaction machinery. Basically, if any processing generates a "retry"able exception, all transactions attached to the request are rolled back and the entire request re-submitted. This works really, really well for HTML<->SQL CRUD apps. I can't see how Twisted could support that currently, given the deferToThread == 1 transaction nature of adbapi.
![](https://secure.gravatar.com/avatar/dd1243740a09f0676ef225404105cfc0.jpg?s=120&d=mm&r=g)
On Aug 25, 2009, at 11:25 AM, Phil Mayers wrote:
Phil Christensen wrote:
Honestly, I have never actually solved it. I pretty much only write webapps these days, so when a ConnectionLost happens, it just propagates up the stack and displays an error to the user. Not ideal, by any means.
It's hard to apply in the general case, but I like the way Zope handles this using the per-request transaction machinery.
Basically, if any processing generates a "retry"able exception, all transactions attached to the request are rolled back and the entire request re-submitted.
That's definitely the preferable solution, but an additional problem is you still can't use transactions with MyISAM-backed tables. Yeah yeah, we should be using Postgres, SQLite, or even InnoDB ;-) Of course sometimes that's just not an option... -phil
![](https://secure.gravatar.com/avatar/1629fa1d8b1a39cd63f4e3c4b3c2a6db.jpg?s=120&d=mm&r=g)
I see this issue (ConnectionLost on MySQL) come up at least once a month. It's actually the reason I originally joined this discussion list, but I've yet to see a wholly satisfactory solution.

Since there is no one true perfect generic way to safely rerun a failed interaction after ConnectionLost, the smart thing to do is just raise the exception and let the application deal with (or bomb out on) what *should* be a rare occurrence indicating some sort of problem.

adbapi does exactly this.

However, with MySQL (>=v5.0 at least) you HAVE to make plans to deal with ConnectionLost (or "MySQL server has gone away" if cp_reconnect != True) or your Twisted app WILL be quite broken. Unfortunately, there's no obvious way to separate a real connection "problem" from the perfectly normal behavior of the MySQL server closing an idle connection without making your code provider specific (e.g. from MySQLdb import OperationalError).

I've seen (and even begrudgingly implemented) non provider specific solutions that just create a new connection every time, or that qualify connections returned from the pool with "good_sql" before handing them over to the app. But that overhead is obviously not acceptable in many situations.

I suggest "the right way(tm)" to fix this is with an optional pool_recycle kwarg that sets the maximum time that a connection can be reused before it must be recycled (many ORMs provide precedent for this solution - sqlalchemy, django.db, etc.)

reconnectionpool.py:

```python
'''
Created on Aug 26, 2009

@author: clayg
'''
from time import time

from twisted.enterprise import adbapi
from twisted.python import log


class ReConnectionPool(adbapi.ConnectionPool):
    '''
    subclass of adbapi.ConnectionPool that supports pool_recycle
    pool_recycle disabled by default (-1)
    '''

    def __init__(self, *args, **kwargs):
        self.pool_recycle = kwargs.pop('pool_recycle', -1)
        self.conn_starttime = {}  # connections' start times, hashed on thread id
        adbapi.ConnectionPool.__init__(self, *args, **kwargs)

    def connect(self, *args, **kwargs):
        # ask ConnectionPool for a connection
        conn = adbapi.ConnectionPool.connect(self, *args, **kwargs)
        # if pool_recycling is enabled
        if self.pool_recycle > -1:
            # get the start time for this connection
            tid = self.threadID()
            starttime = self.conn_starttime.get(tid)
            now = time()
            if not starttime:
                # init starttime for new conn
                self.conn_starttime[tid] = starttime = now
                log.msg("Connection %s was created at %s." % (tid, now))
            # if the connection is older than the limit in pool_recycle
            if now - starttime >= self.pool_recycle:
                self.disconnect(conn)
                log.msg("Connection %s was recycled at %s." % (tid, now))
                conn = adbapi.ConnectionPool.connect(self, *args, **kwargs)
                self.conn_starttime[tid] = now
        return conn
```

I think it would be quite a bit less messy if it was implemented inside the ConnectionPool class instead of a subclass. Someone else could probably do a much better job than I, although I wouldn't mind taking a crack at it.

Thoughts?

Clay Gerrard
Office: 210-312-3443 Mobile: 210-788-9431
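For illustration, a hypothetical way the pool_recycle knob might be used (placeholder connection arguments; the one-hour value is arbitrary but comfortably inside MySQL's default 8-hour wait_timeout):

```python
# Recycle any pooled connection older than an hour.
dbpool = ReConnectionPool(
    "MySQLdb", host='localhost', db='test', user='user', passwd='passwd',
    cp_reconnect=True, pool_recycle=3600)
```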
![](https://secure.gravatar.com/avatar/607cfd4a5b41fe6c886c978128b9c03e.jpg?s=120&d=mm&r=g)
On 09:49 pm, clay.gerrard@rackspace.com wrote:
I see this issue (ConnectionLost on MySQL) come up at least once a month. It's actually the reason I originally joined this discussion list, but I've yet to see a wholly satisfactory solution.
Since there is no one true perfect generic way to safely rerun a failed interaction after ConnectionLost - the smart thing to do is just raise the exception and let the application deal with (or bomb out on) what *should* be a rare occurrence indicating some sort of problem.
adbapi does exactly this.
However, with MySQL (>=v5.0 at least) you HAVE to make plans to deal with ConnectionLost (or "MySQL server has gone away" if cp_reconnect != True) or your twisted app WILL be quite broken. Unfortunately, there's no obvious way to separate a real connection "problem" from the perfectly normal behavior of the MySQL server closing an idle connection without making your code provider specific. e.g. from MySQLdb import OperationalError
I've seen (and even begrudgingly implemented) non provider specific solutions that just create a new connection every time, or that qualify connections returned from the pool with "good_sql" before handing them over to the app. But, that overhead is obviously not acceptable in many situations.
I suggest "the right way(tm)" to fix this is with an optional pool_recycle kwarg that sets the maximum time that a connection can be reused before it must be recycled (many ORM's provide precedent for this solution - sqlalchemy, dajngo.db, etc.)
Is this irrespective of idle time? It sounds like not, though the MySQL idle timeout is the motivation you mention above.
I think it would be quite a bit less messy if it was implemented inside the ConnectionPool class instead of a subclass. Someone else could probably do a much better job than I, although I wouldn't mind taking a crack at it.
Thoughts?
This could be a cool feature. I'm sure a lot of people would certainly be happy for there to be a simple answer to the monthly MySQL question you referred to above. Some random thoughts: * you should use the reactor to keep track of time - reactor.seconds() should someday actually be more reliable than time.time() (but it's the same today); also, parameterize the reactor and use reactor.seconds() and the code is more easily testable. * this will throw reconnect overhead in front of whichever query happens to get issued to a thread which has a connection which needs to be recycled. If it avoids a connection lost error, that's great; if it doesn't, then it's just overhead. This, combined with the fact that queries are dispatched to basically a random connection, might not be too nice. It could be cool to do the reconnecting more proactively in idle threads and improve the dispatch logic to give preference to fresh, available connections when possible. * tests tests tests ;) Jean-Paul
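To make the "parameterize the reactor" suggestion concrete, here is a sketch of how the recycling pool's __init__ might accept one. The attribute names are invented and this is only an assumption about how it could look, not an existing Twisted API:

```python
from twisted.enterprise import adbapi


class ReConnectionPool(adbapi.ConnectionPool):
    def __init__(self, *args, **kwargs):
        self.pool_recycle = kwargs.pop('pool_recycle', -1)
        # Accept a reactor so tests can substitute something like
        # twisted.internet.task.Clock; fall back to the global reactor.
        self._reactor = kwargs.pop('reactor', None)
        if self._reactor is None:
            from twisted.internet import reactor
            self._reactor = reactor
        self.conn_starttime = {}
        adbapi.ConnectionPool.__init__(self, *args, **kwargs)

    def _now(self):
        # reactor.seconds() rather than time.time(), per the note above.
        return self._reactor.seconds()
```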
![](https://secure.gravatar.com/avatar/1629fa1d8b1a39cd63f4e3c4b3c2a6db.jpg?s=120&d=mm&r=g)
The way that sqlalchemy implements pool_recycle is "irrespective of idle time" best I can tell from looking at it. Keeping track of the last_used_time seems like it would have been equally reasonable, although perhaps even additional overhead.

I'm not sure I understand what you mean by "parameterize the reactor" - pass it as an argument when you initialize the (Re)ConnectionPool? Can you point me to a Twisted class that does this as an example?

I agree it would be very cool "to do the reconnecting more proactively in idle threads" - I'll have to think on that, do you know off-hand of any analogs?

Thanks for the feedback,

Clay Gerrard
Office: 210-312-3443 Mobile: 210-788-9431
![](https://secure.gravatar.com/avatar/607cfd4a5b41fe6c886c978128b9c03e.jpg?s=120&d=mm&r=g)
On 26 Aug, 11:08 pm, clay.gerrard@rackspace.com wrote:
The way that sqlalchemy implements pool_recycle is "irrespective of idle time", as best I can tell from looking at it. Keeping track of the last_used_time seems like it would have been equally reasonable, although perhaps with even more overhead.
It probably wouldn't be much more overhead. The important question to consider is whether SQLAlchemy (and whoever else) had another good reason for doing it the way they do it. Is monitoring idle time not sufficient for some systems?
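For comparison, a rough sketch of what a last-used (idle-time) variant of the connect() override above might look like; conn_lastused is an illustrative name, assumed to be a dict initialized in __init__ alongside conn_starttime:

# last-used variant: the timestamp is refreshed on every checkout, so only
# connections that have sat idle longer than pool_recycle get recycled;
# conn_lastused is an illustrative attribute, initialized in __init__
def connect(self, *args, **kwargs):
    conn = adbapi.ConnectionPool.connect(self, *args, **kwargs)
    if self.pool_recycle > -1:
        tid = self.threadID()
        now = time()
        last_used = self.conn_lastused.get(tid)
        if last_used is not None and now - last_used >= self.pool_recycle:
            self.disconnect(conn)
            conn = adbapi.ConnectionPool.connect(self, *args, **kwargs)
        self.conn_lastused[tid] = now  # reset on every use, not just creation
    return conn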
I'm not sure I understand what you mean by "parameterize the reactor" - pass it as an argument when you initialize the (Re)ConnectionPool? Can you point me to a twisted class that does this as an example?
Yep, that's what I meant. You can find examples of this in some of the newer code in Twisted (as we only started to encourage it relatively recently). One example is twisted.web.wsgi.WSGIResource; another is twisted.internet.task.deferLater.
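A minimal sketch of what that could look like for the pool in this thread; the reactor keyword argument is my own addition for illustration, not an existing adbapi.ConnectionPool parameter:

# sketch of parameterizing the reactor so tests can substitute
# twisted.internet.task.Clock; the 'reactor' kwarg is illustrative only
from twisted.enterprise import adbapi

class ReConnectionPool(adbapi.ConnectionPool):
    def __init__(self, *args, **kwargs):
        self.pool_recycle = kwargs.pop('pool_recycle', -1)
        reactor = kwargs.pop('reactor', None)
        if reactor is None:
            from twisted.internet import reactor
        self._reactor = reactor
        self.conn_starttime = {}
        adbapi.ConnectionPool.__init__(self, *args, **kwargs)

    def _now(self):
        # reactor.seconds() rather than time.time(), per the suggestion above
        return self._reactor.seconds()

A test could then construct the pool with reactor=task.Clock() and advance the clock to age connections deterministically.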
I agree it would be very cool "to do the reconnecting more proactively in idle threads" - I'll have to think on that, do you know off-hand of any analogs?
Hm. I can't think of any analogs, no. I hope the implementation would be straightforward, though. The connection pool can use reactor.callLater in the main thread to force old connections to be re-established, and it can take those connections out of the lineup while it's doing that, to make sure no queries are dispatched to them. One tricky part, though, may be ensuring that a connection isn't actively servicing a query when the time comes to recycle it. The thread pool used by the connection pool may not currently have the features required for this (it doesn't allow messages to be directed at specific threads, for example).

This feature may also not be terribly important. I don't know what the reconnect overhead actually is.

Jean-Paul
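To make the proactive idea slightly more concrete, a very rough scheduling skeleton only; recycle_old_connections() is a hypothetical helper, and as noted above the hard part (directing the work at a specific pooled thread without interrupting an in-flight query) is not something adbapi's thread pool supports today:

# scheduling skeleton for proactive recycling; recycle_old_connections()
# is hypothetical -- adbapi has no such API, and safely targeting a
# specific pooled thread is the unsolved part discussed above
from twisted.internet import task

def start_recycler(pool, interval=300):
    # periodically ask the pool, from the main thread, to tear down and
    # re-establish connections older than pool.pool_recycle
    loop = task.LoopingCall(pool.recycle_old_connections)
    loop.start(interval, now=False)
    return loop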
![](https://secure.gravatar.com/avatar/275dbf027f6e280894a60f83a480e083.jpg?s=120&d=mm&r=g)
Hello Clay, thanks! It will help me debug my code by setting a low timeout. Gabriel

Clay Gerrard wrote:
Regarding the original question: "how to make mysql's idle timeouts shorter so that I can debug my code?"
You should be able to do that in the mysql shell:
SET GLOBAL interactive_timeout = 5;
But in my experience MySQLdb makes the idle connection timeout very difficult to debug effectively.
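As a side note on the debugging question, a small sketch of one way to shorten the idle timeout from code rather than the mysql shell; credentials are placeholders, the statements typically require the SUPER privilege, and the new values only apply to connections opened after the change. wait_timeout is the variable that governs non-interactive clients such as MySQLdb connections, while interactive_timeout governs the mysql shell:

# sketch: shorten MySQL's idle timeouts for debugging; credentials are
# placeholders, and the change affects only connections opened afterwards
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="root", passwd="secret")
curs = conn.cursor()
curs.execute("SET GLOBAL wait_timeout = 5")         # non-interactive clients (MySQLdb)
curs.execute("SET GLOBAL interactive_timeout = 5")  # interactive clients (mysql shell)
conn.close()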
Will twisted.adbapi.ConnectionPool ever offer a pool_recycle kw like sqlalchemy?
Clay Gerrard
![](https://secure.gravatar.com/avatar/275dbf027f6e280894a60f83a480e083.jpg?s=120&d=mm&r=g)
Hello Werner,

Thanks for your idea, I will give it a try if my current "fix" doesn't work well (reconnecting and re-running the query if I get a connection-lost exception).

Gabriel
participants (8)
- Clay Gerrard
- Don Dwiggins
- exarkun@twistedmatrix.com
- Gabriel Rossetti
- Garret Heaton
- Phil Christensen
- Phil Mayers
- Werner Thie