[DB-SIG] New take on PostgreSQL bindings for Python

Cristian Gafton gafton at rpath.com
Wed Sep 6 17:53:31 CEST 2006


On Wed, 6 Sep 2006, M.-A. Lemburg wrote:

> I'm not sure I understand your comment on escaping things - you
> normally pass the statement (with the binding parameter markers)
> and the binding parameters separately to the database.

That's not how the MySQL and PostgreSQL bindings I looked at work. Given a
cursor.execute(query, args), they go to some pains to escape the args
tuple, then simply do "query % args" and pass the resulting string to the
backend as a single, parameterless query.
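
To make that concrete, here is a rough sketch of the pattern; the
escape_value() and backend_execute() names are made up for illustration
and are not any particular driver's internals:

def escape_value(value):
    # Client-side quoting, done once per argument; real drivers are
    # more careful, but the idea is the same.
    if value is None:
        return "NULL"
    if isinstance(value, (int, float)):
        return str(value)
    return "'" + str(value).replace("'", "''") + "'"

def execute_interpolated(backend_execute, query, args):
    # The pattern described above: escape each argument, interpolate
    # with "query % args", and hand the backend a single parameterless
    # string.
    escaped = tuple(escape_value(a) for a in args)
    backend_execute(query % escaped)

# Contrast with true parameter binding, where the statement and the
# values travel to the server separately and no client-side escaping
# happens at all:
#
#     backend_execute_params("SELECT * FROM t WHERE id = %s", (42,))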

> This allows the database to create an query plan for the statement
> and then apply the parameters to this query plan one or more times.
>
> The main benefit is that you don't have to do any escaping in the
> SQL statement, which as a side-effect, also prevent the typical
> SQL injection vulnerabilities.

> Server side cursors vs. client side cursors is usually something
> that's implemented and managed by the database driver - why should
> the Python programmer have to think about this detail ?

Because it is only the programmer that knows "I am expecting 1 million
rows out of this query, you'd better not load it all up in RAM at once".

> The Python programmer can use .fetchone() or .fetchmany() to
> indicate whether she wants to read rows in chunks or one-by-one.
> The Python interface can then map these requests to whatever
> the database driver has to offer.

Not all database drivers are rich enough, smart enough, or sufficiently
evolved (MySQL and PostgreSQL are two such examples); you either retrieve
all the results at once at the cost of client memory, or you retrieve them
in chunks using FETCH at the cost of speed. Again, it is the application
programmer who knows which is appropriate for each case.
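
A rough sketch of the trade-off, assuming a DB-API connection "conn" to a
PostgreSQL backend (the cursor name and chunk size are arbitrary, and the
DECLARE/FETCH statements need to run inside a transaction):

def fetch_all_client_side(conn, query):
    # One round trip, but the entire result set lives in client RAM.
    cur = conn.cursor()
    cur.execute(query)
    return cur.fetchall()

def fetch_in_chunks(conn, query, chunk_size=1000):
    # Server-side cursor via DECLARE/FETCH: bounded client memory,
    # at the cost of one extra round trip per chunk.
    cur = conn.cursor()
    cur.execute("DECLARE big_cursor CURSOR FOR " + query)
    while True:
        cur.execute("FETCH %d FROM big_cursor" % chunk_size)
        rows = cur.fetchall()
        if not rows:
            break
        for row in rows:
            yield row
    cur.execute("CLOSE big_cursor")

The first wins on speed for small result sets; the second is what you want
for that 1 million row query.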

Cristian
-- 
Cristian Gafton
rPath, Inc.


