how not to run out of memory in cursor.execute

amberite ldanielburr at mac.com
Tue Jun 6 16:27:03 EDT 2006


Steve Holden wrote:
>
> The MySQLdb solution you give is way more complicated than it needs to
> be, thereby skewing your opinion towards cx_Oracle unnecessarily.
>

I respectfully disagree with your assertion here.  The code I presented
for MySQLdb is what you *have* to do to avoid using up too much
memory.  This thread is about how to avoid running out of memory when
handling large resultsets, and the approach I outlined (sketched below)
is really the only way to do that with MySQLdb's DBAPI support.
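
For anyone who missed the earlier message, the general shape of that
approach is something like this.  This is a sketch, not my original
code; the table, columns, chunk size, and connection details are all
illustrative:

import MySQLdb

def process(row):
    # Stand-in for whatever per-row work you actually do.
    print row

conn = MySQLdb.connect(host="localhost", user="me",
                       passwd="secret", db="test")
cursor = conn.cursor()

CHUNK = 1000   # rows per query; tune for your workload
offset = 0
while True:
    # Each pass issues a fresh, bounded query, so the client never
    # holds more than CHUNK rows at a time.  ORDER BY keeps the
    # paging deterministic; LIMIT offset, count is MySQL syntax.
    cursor.execute("SELECT id, payload FROM big_table "
                   "ORDER BY id LIMIT %s, %s", (offset, CHUNK))
    rows = cursor.fetchall()
    if not rows:
        break
    for row in rows:
        process(row)
    offset += CHUNK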

> Look up the .fetchmany() method of cursors in the DB API. There is only
> any need to execute a single query no matter how large the result set:
> you simply need to keep calling .fetchmany(N) (where N is whatever
> you've decided by testing is your optimum chunk size) until it returns
> less than N rows, at which point you have exhausted the query.
>

.fetchmany() buys you nothing in the case of large resultsets.  With
MySQLdb's default cursor, the entire result set is transferred into
client memory the moment .execute() returns; .fetchmany() merely
slices rows off that in-memory copy.  This isn't the fault of MySQLdb,
it is just that MySQL doesn't provide server-side cursors, so MySQLdb
has to fake them on the client.
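
To make the point concrete, here is the single-query .fetchmany() loop
as I understand your proposal (again a sketch; the connection details
and table are illustrative):

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="me",
                       passwd="secret", db="test")
cursor = conn.cursor()

# With the default cursor, this one call already pulls the *entire*
# result set into client memory (mysql_store_result under the hood).
cursor.execute("SELECT id, payload FROM big_table")

while True:
    rows = cursor.fetchmany(1000)
    if not rows:
        break
    for row in rows:
        pass  # every one of these rows was in memory before the loop began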

> It's very little more effort to wrap this all up as a generator that
> effectively allows you to use the same solution as you quote for cx_Oracle.
>

Again, I respectfully disagree.  Your proposed solution, while
resulting in shorter code, will still hold the entire resultset in
client memory: the generator only changes how the rows are consumed,
not how they are fetched.
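
Here is roughly what that generator wrapper would look like (a sketch):

def iter_rows(cursor, size=1000):
    # Yield rows one at a time, fetching them in chunks via fetchmany.
    while True:
        rows = cursor.fetchmany(size)
        if not rows:
            return
        for row in rows:
            yield row

The calling code then reads just like the cx_Oracle version, but with
MySQLdb's default cursor the whole resultset is already resident in
client memory by the time the first row is yielded, which is exactly
the problem this thread is about.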

L. Daniel Burr



