SQL/Python question -- slow... What is the fixed cost?
sholden at holdenweb.com
Thu Oct 4 19:03:03 CEST 2001
"Leonardo B Lopes" <leo at iems.nwu.edu> wrote in message
news:3BBC8E99.56BD59FB at iems.nwu.edu...
> Thanks Mark! Actually I am doing my best to keep my system independent
> of mysql. And my db is quite particular: It is only written to once.
> After that, it is only queried, and even so in a quite particular way.
> But my db is pretty big. This instance, for example, has a table with
> 83000+ records. It is not so much that the query joining that table with
> 2 others takes .001 cpusec that worries me. That is OK. The problem is
> that some other pretty simple queries also take the same time. If that
> is always the case, then I will have to find another quite more
> sophisticated solution for data storage, and that is what I am trying to
> determine.
The best starting point for efficient database performance is a clean,
normalized database design. I take it your database is structured in third
normal form?
If you are *absolutely sure* that the data will not change (which I take to
be the implication of your "only written to once") then you might
permissibly deviate from normalized structures solely to improve query
performance.
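For instance, since the data never changes after the initial load, the join
could be materialized once into a flattened table, trading redundancy for
join-free reads. A minimal sketch of the idea, using the sqlite3 module and a
hypothetical customers/orders schema (your tables will differ):

```python
import sqlite3

# Hypothetical schema for illustration: 'orders' references 'customers'.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'acme'), (2, 'globex');
    INSERT INTO orders VALUES (10, 1, 5.0), (11, 2, 7.5);
""")

# Because the data is written only once, it is safe to denormalize:
# materialize the join a single time, then query the flat table directly.
con.execute("""
    CREATE TABLE orders_flat AS
    SELECT o.id AS id, c.name AS name, o.total AS total
    FROM orders o JOIN customers c ON o.customer_id = c.id
""")

rows = con.execute("SELECT name, total FROM orders_flat ORDER BY id").fetchall()
print(rows)  # [('acme', 5.0), ('globex', 7.5)]
```

Every later query against orders_flat then avoids the join entirely, at the
cost of storing the customer name redundantly in each row.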
However, 83,000 rows is not by any stretch of the imagination a large
database. If you could outline your DB structures and the usage, maybe you
would get more assistance.
The particular database module need not be of concern. If you wanted, you
could use Gadfly (although it uses deprecated libraries, it does still work)
and keep everything in memory!