Strange problem: MySQL and python logging using two separate cursors

Frank Aune Frank.Aune at broadpark.no
Wed Jan 9 10:11:09 CET 2008


Hi,

Explaining this problem is quite tricky, but please bear with me. 

I've got an application which uses MySQL (InnoDB) for two things:

1. Store and retrieve application data (including viewing the application log)
2. Store the application log produced by the python logging module

The connection and cursor for the two tasks are completely separate, and they 
connect, query, execute and commit independently of each other. But they both 
use the same SQL database.
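For concreteness, the setup is roughly shaped like this (a sketch with invented 
table and file names, using sqlite3 in place of MySQLdb so it runs anywhere; 
with MySQLdb you would call MySQLdb.connect(host=..., user=..., passwd=..., 
db=...) twice, but the DB-API connection/cursor structure is the same):

```python
import os
import sqlite3  # stand-in for MySQLdb; connect()/cursor()/commit() follow the same DB-API shape
import tempfile

# Hypothetical shared database file (one MySQL database in the real setup).
DB = os.path.join(tempfile.mkdtemp(), "app.db")

# Task 1: application-data connection and cursor.
app_conn = sqlite3.connect(DB)
app_cur = app_conn.cursor()

# Task 2: logging connection and cursor, completely separate.
log_conn = sqlite3.connect(DB)
log_cur = log_conn.cursor()

app_cur.execute("CREATE TABLE IF NOT EXISTS log (created REAL, message TEXT)")
app_conn.commit()

# Task 2 writes and commits on its own...
log_cur.execute("INSERT INTO log VALUES (0.0, 'application started')")
log_conn.commit()

# ...and task 1 queries the same table on its own.
app_cur.execute("SELECT message FROM log")
rows = app_cur.fetchall()
```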

Task 1 typically consists of a lot of SQL queries.

For task 2 I'm using the logging_test14.py example from the logging module as 
a template. (Basically the only thing I've added is an escape_string() call 
to properly escape any SQL queries from task 1 that task 2 logs to the 
database.)
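The handler in that template looks roughly like the sketch below (column and 
class names are my guesses, and sqlite3 stands in for MySQLdb so the example 
is self-contained). One aside: with parameter substitution the driver escapes 
values itself, so a manual escape_string() call becomes unnecessary even when 
the logged message is itself an SQL statement; MySQLdb uses "%s" placeholders 
where sqlite3 uses "?":

```python
import logging
import sqlite3

class DBHandler(logging.Handler):
    """Rough shape of the logging_test14.py SQL handler (a sketch, not the original)."""

    def __init__(self, conn):
        logging.Handler.__init__(self)
        self.conn = conn
        self.cursor = conn.cursor()
        self.cursor.execute(
            "CREATE TABLE IF NOT EXISTS log"
            " (created REAL, name TEXT, level TEXT, message TEXT)")
        self.conn.commit()

    def emit(self, record):
        # Parameterized insert: the message (which may contain SQL text from
        # task 1) is passed as data, never spliced into the statement string.
        self.cursor.execute(
            "INSERT INTO log VALUES (?, ?, ?, ?)",
            (record.created, record.name, record.levelname,
             record.getMessage()))
        self.conn.commit()

conn = sqlite3.connect(":memory:")
logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)
logger.addHandler(DBHandler(conn))

# Log a message that is itself a quoted SQL query.
logger.info("SELECT * FROM data WHERE id = 'O''Brien'")

stored = conn.execute("SELECT message FROM log").fetchone()[0]
```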

From a MySQL shell I can view the logging updates as they are committed, i.e. 
the log continues to grow while the application is being used.

But if I try to do the same from inside the application, the cursor in task 1 
will only return between 50 and 60 log entries, even though more exist (I can 
still watch them grow from the MySQL shell). If I re-run the same query, the 
same 50-60 log entries are returned. No error, no message - nothing.

To recap: task 2 writes all log messages to the database, and task 1 reads 
these same log messages based on user input and presents them in a GUI.

I don't know if this is explained well enough, but it's kind of tricky to 
explain such weird behaviour.

The only clue I have so far is that the cursor in task 1 seems unable to 
"register" any new entries in the log table produced by task 2 once task 1 
has performed an SQL query of some kind.

I'm contemplating using the same cursor for tasks 1 and 2, but I think keeping 
them separate is a better design - if only it worked :)

Any input on this "nutcracker"?

Thanks,
Frank
