executemany ('SELECT ...') (was: [DB-SIG] DBAPI-2.0 clarifications)

brian zimmer bzimmer@ziclix.com
Wed, 21 Mar 2001 08:55:14 -0600


At 02:45 PM 3/21/2001 +0100, Federico Di Gregorio wrote:
>Scavenging the mail folder uncovered M.-A. Lemburg's letter:
> > Here's a list:
> > - transaction behaviour (implicit start of transactions when creating
> >   a cursor, implicit rollback when cursor is closed)
>
>er... em... no. in a previous mail you said that cursors
>are not transaction-aware. this is very similar to the per-cursor
>commit extension implemented (and ruled out by the dbapi) by
>psycopg, popy and maybe other drivers.
>
> > - handling of multiple result returns using .nextset()
> > - handling of SELECT with .executemany()

Just so I'm clear, .executemany() with SELECT will result in many 
.nextset() calls.  As an implementor, I don't want to have to parse SQL 
to figure out whether the call should be allowed.
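To make the calling pattern concrete, here is a minimal sketch of how a caller might consume the result sets that an .executemany('SELECT ...') could produce, one set per parameter row, advancing with .nextset().  MockCursor is an illustrative stand-in, not part of any real driver:

```python
# Hypothetical sketch: executemany() of a SELECT yields one result set per
# parameter row; the caller walks them with nextset().  MockCursor fakes the
# driver side so the calling convention can be shown end to end.

class MockCursor:
    def __init__(self):
        self._sets = []
        self._current = 0

    def executemany(self, sql, seq_of_params):
        # Pretend each parameter row produced its own result set.
        self._sets = [[(pid, "row for %s" % pid)] for (pid,) in seq_of_params]
        self._current = 0

    def fetchall(self):
        return list(self._sets[self._current])

    def nextset(self):
        # DB-API: return a true value if another set exists, else None.
        if self._current + 1 < len(self._sets):
            self._current += 1
            return True
        return None


c = MockCursor()
c.executemany("SELECT * FROM yy WHERE id = ?", [(1,), (2,), (3,)])
results = []
while True:
    results.append(c.fetchall())
    if not c.nextset():
        break
print(len(results))  # 3 result sets, one per parameter row
```

The open question above is whether a driver should permit this at all without inspecting the SQL text.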

>ok, we are discussing it.
>
> > - how to treat special database data types such as decimals
> >   and file column bindings
>
>i have some ideas on how to implement type-casting from db types
>to python types and classes. psycopg implements two module functions
>(new_type() and register_type()) that can be used to register a new
>singleton type object (just like STRING, NUMBER, etc...) and let
>the driver use it to typecast from the db. the default types (STRING,
>etc...) are just created using those functions at module loading
>time. if you are interested i can write some lines about that or you
>can download psycopg sources and look for the types_test.py script.
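A minimal sketch of that registry idea, assuming the psycopg-style interface described above (the function names mirror psycopg's new_type()/register_type(), but the internals and the INTERVAL type code 9999 here are made up for illustration):

```python
# Sketch of a psycopg-style typecasting registry.  new_type() builds a
# singleton type object from a tuple of database type codes plus a cast
# function; register_type() installs it in a table the driver would consult
# when converting each fetched column value.

_typecasters = {}

class TypeObject:
    def __init__(self, values, name, castfunc):
        self.values = values      # database-level type codes handled
        self.name = name
        self.cast = castfunc      # raw db string -> Python object

def new_type(values, name, castfunc):
    return TypeObject(values, name, castfunc)

def register_type(type_obj):
    for code in type_obj.values:
        _typecasters[code] = type_obj

def typecast(type_code, raw_value):
    # What the driver would do per column; unknown codes pass through.
    caster = _typecasters.get(type_code)
    return caster.cast(raw_value) if caster else raw_value

# Register a caster for a fictitious INTERVAL type code (9999 is made up).
INTERVAL = new_type((9999,), "INTERVAL",
                    lambda s: tuple(int(p) for p in s.split(":")))
register_type(INTERVAL)

print(typecast(9999, "10:30:00"))  # -> (10, 30, 0)
```

The default STRING, NUMBER, etc. singletons would then just be registrations made at module load time, as the quoted mail notes.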

I work primarily with Informix, which has a couple of types not 
currently defined, such as Interval.  To handle these cases I added the 
following to .execute*() and callproc():

c.execute("insert into yy (id, game_interval) values (?, ?)",
          [(8, interval)], bindings={1: INTERVAL})

  - where bindings is a dict mapping the ? index to an integer type code 
as defined in JDBC, ODBC, and optionally by the vendor
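A sketch of what the driver side of that might look like, i.e. how .execute() could consult the proposed bindings mapping when binding each parameter (bind_params() and the INTERVAL code are hypothetical, not zxJDBC's actual internals):

```python
# Hypothetical: pair each parameter value with an explicit type code from
# the caller's bindings dict, or None to mean "let the driver guess".

INTERVAL = 10  # made-up vendor type code for illustration

def bind_params(params, bindings=None):
    """Return (value, type_code) pairs for the statement's parameters."""
    bindings = bindings or {}
    return [(value, bindings.get(i)) for i, value in enumerate(params)]

bound = bind_params((8, "10:30:00"), bindings={1: INTERVAL})
print(bound)  # [(8, None), ('10:30:00', 10)]
```

The driver would then route each pair to the vendor-specific setter, as the Java interface below does with setJDBCObject(..., int type).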

I am a little confused by the setinputsizes() functionality, and it seems 
a lot of "compliant" modules just pass on it.  If someone could advise on 
its most common use, I'd appreciate it.

I use the following Java interface internally to take the value of the 
bindings and pass it to a more vendor-specific implementation:

  public PyObject getPyObject(ResultSet set, int col, int type)
      throws SQLException;
  public void setJDBCObject(PreparedStatement stmt, int index,
      PyObject object) throws SQLException;
  public void setJDBCObject(PreparedStatement stmt, int index,
      PyObject object, int type) throws SQLException;
  public void preExecute(Statement stmt) throws SQLException;
  public void postExecute(Statement stmt) throws SQLException;

Then I allow every cursor to add arbitrary DataHandlers to handle special 
cases such as these for both inserts/updates and queries.  I've found it 
works quite well and allows the maximum amount of control for different 
database engines.  For instance, it lets me retrieve the last Informix 
serial or MySQL auto_increment value without writing a ton of custom code 
in the core of zxJDBC.  By allowing DataHandlers to be plugged in, all 
type-casting behaviour can be controlled by the end user if they so 
desire.  Of course, by default I handle every case I'm aware of, and a 
DataHandler can be implemented in either Java or Python.
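The chain-of-handlers idea can be sketched in Python terms as follows; the class and method names here are illustrative stand-ins, not zxJDBC's actual API, and the SERIAL type code is made up:

```python
# Sketch: a cursor keeps a chain of handlers and asks each one, in order,
# to convert a fetched value; the first handler that recognizes the type
# wins.  User-installed handlers take precedence over the default.

class DataHandler:
    """Default behaviour: pass values through unchanged."""
    def get_object(self, value, type_code):
        return value

class SerialHandler(DataHandler):
    """Example special case: widen a vendor 'serial' column to an int."""
    SERIAL = 262  # made-up type code

    def get_object(self, value, type_code):
        if type_code == self.SERIAL:
            return int(value)
        return super().get_object(value, type_code)

class Cursor:
    def __init__(self):
        self.handlers = [DataHandler()]   # default installed up front

    def add_handler(self, handler):
        self.handlers.insert(0, handler)  # user handlers run first

    def convert(self, value, type_code):
        for h in self.handlers:
            converted = h.get_object(value, type_code)
            if converted is not value:
                return converted
        return value


c = Cursor()
c.add_handler(SerialHandler())
print(c.convert("42", SerialHandler.SERIAL))  # -> 42 (as an int)
```

This mirrors the design point above: the default handler covers the common cases, and end users override only the vendor quirks they care about.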

Another area where DataHandlers have helped tremendously is with CLOB and 
BLOB columns.  I've had a hard time adding support for CLOBs to Informix, 
Oracle and MySQL in any kind of uniform way.  I used the DataHandler 
abstraction to get it working for each vendor.  (Note: my implementation 
is fairly weak because I treat them more as big byte arrays than as true 
CLOBs with locators and all, but the whole point of the DataHandler is to 
allow someone to change this behaviour for their needs without needing me 
to do it.)

I don't know whether you find any of this interesting or whether it 
warrants discussion, but handling all the different varieties of vendors 
with a single abstract code base required it, and it has given me a 
really strong cross-vendor implementation: end users have reported that 
zxJDBC works out of the box with a number of different vendors.

thanks,

brian