[DB-SIG] Optional DB API Extensions

Stuart Bishop zen@shangri-la.dropbear.id.au
Fri, 26 Oct 2001 15:14:58 +1000

On Thursday, October 25, 2001, at 10:37  PM, Matthew T. Kromer wrote:

> On Thursday, October 25, 2001, at 06:52 AM, Stuart Bishop wrote:
>> I also would like the connection method:
>>         quote(object)
>>             Returns a SQL quoted version of the given variable as a
>>             string, suitable for execution by the database backend.
>>             For example:

> Since the primary reason you want quoting is generally to handle cases 
> where you can't directly BIND values to your database, I'm of the 
> opinion that this should be heavily leaned against.  Instead, use of 
> parameter binding should be highly encouraged.

Definitely encouraged, which is best done with documentation. The two
things I do with this sort of method (with Perl in a past life, and now
Zope) are:
	- Generate logs of SQL commands sent to the RDBMS.
	- Generate hairy single-use SQL dynamically for data mining type queries.
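For the logging case, the sort of quote() behaviour I have in mind could
be sketched roughly like this (my own illustration of the idea, not any
adapter's actual API, and handling only strings, numbers and NULL):

```python
# Hypothetical sketch of the proposed quote() method -- not part of any
# shipping adapter.  A real implementation would be backend-specific.
def quote(value):
    """Return a SQL literal for value, suitable for pasting into a
    statement when parameters cannot be bound directly."""
    if value is None:
        return 'NULL'
    if isinstance(value, (int, float)):
        return str(value)
    # Double any embedded single quotes, per standard SQL escaping.
    return "'" + str(value).replace("'", "''") + "'"
```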

> I would be open to having extension methods which helped construct 
> statements using the API's preferred bind specification method, or a 
> mechanism whereby the application can negotiate with the adapter to 
> choose a quoting mechanism.

A different method of getting this functionality would be:

     >>> con.sql('SELECT * FROM users WHERE username = :p1',('zen',))

This would be useful for the first case I mentioned, but more obtuse for
the second case, since I would end up doing ''' con.sql(':p1',('zen',)) '''
to retrieve the quoted parameter.
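To make the idea concrete, con.sql() could amount to something like the
following sketch, which substitutes :pN markers with quoted parameters to
produce the literal SQL string the driver would send (the :pN paramstyle
and the quoting rules are illustrative only):

```python
# Sketch of a hypothetical con.sql() -- interpolate :pN markers with
# SQL-quoted parameters and return the resulting statement string.
import re

def sql(statement, params):
    def _quote(value):
        if value is None:
            return 'NULL'
        if isinstance(value, (int, float)):
            return str(value)
        return "'" + str(value).replace("'", "''") + "'"

    def _substitute(match):
        index = int(match.group(1)) - 1   # :p1 refers to params[0]
        return _quote(params[index])

    return re.sub(r':p(\d+)', _substitute, statement)
```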

> Having said all of that, this makes me lean towards a bilevel (or 
> trilevel) approach where there is a standard module which all adapters 
> subclass, and an optional module to attempt to provide further adapter 
> abstractions (inter-adapter abstractions).  However, I'd leave the 
> current API spec largely untouched except for DRIVER level features; 
> APPLICATION level features (including application convenience 
> functions) belong elsewhere.

This is similar to the approach the DB API 3.0 strawman I put out a few
months ago was heading for. I think this sort of thing is beyond the
scope of Marc-Andre's proposal as it stands (although he may decide to
grow it).

> This is an application convenience issue to me; and comes at a cost of 
> making things run slower.  I don't like preempting the application's 
> speed for the programmer's convenience.  To give an example, in 
> DCOracle2, the underlying C module might fetch 40,000 records in .7 
> seconds -- but wrapping these results as python objects bumps that up 
> to 3.5 seconds, and going through the entire fetchmany() layer (which 
> for DCOracle2 involves converting the column-based results to row-based 
> results) bumps the whole thing up to almost 9 seconds.  While this 
> example is an instance which I'm sure can be improved upon, I'm 
> strongly opposed to making the situation worse rather than better.

But is it worth having as an optional method? If this behavior is
already being exposed by drivers, then it's worth having a common
invocation just to make things more consistent. The documentation should
come with a comment along the lines of "don't use dictionaries if you
want your code to run fast".

> I would endorse anything which attempts to expose more features of the 
> underlying RDBMS; e.g. Oracle can do the transaction levels you 
> mentioned, and can also set transaction IDs for doing transaction 
> coordination and distributed two-phase commit.

What methods would transaction coordination and distributed two-phase 
commit require?
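To give a sense of what I'm asking: I imagine a coordinator driving
several connections through prepare and commit phases, something like
the toy below. The xa_* method names are invented here purely for
illustration; a real proposal would need agreement on this list.

```python
# Toy two-phase commit coordinator over hypothetical xa_* extension
# methods -- a sketch of the required interface, not a spec.
class FakeConnection:
    """Stand-in connection that records the 2PC calls made against it."""
    def __init__(self):
        self.calls = []
    def xa_begin(self, xid):
        self.calls.append(('begin', xid))
    def xa_prepare(self, xid):
        self.calls.append(('prepare', xid))
        return True                      # vote to commit
    def xa_commit(self, xid):
        self.calls.append(('commit', xid))
    def xa_rollback(self, xid):
        self.calls.append(('rollback', xid))

def two_phase_commit(connections, xid):
    """Commit only if every participant votes yes during prepare."""
    if all(con.xa_prepare(xid) for con in connections):
        for con in connections:
            con.xa_commit(xid)
        return True
    for con in connections:
        con.xa_rollback(xid)
    return False
```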

Stuart Bishop <zen@shangri-la.dropbear.id.au>