[DB-SIG] Controlling return types for DB APIs
Michael Bayer
mike_mp at zzzcomputing.com
Sun Apr 22 22:11:55 CEST 2007
(assuming this is meant for on-list)
On Apr 22, 2007, at 3:42 PM, Jim Patterson wrote:
> For the most part I have been thinking about simple type mappings, but
> some of the examples raised discuss this kind of greater flexibility. I
> had been mostly thinking about the database-specific problem of mapping
> the database type to the Python universe. A number can come back as an
> int/long, or a float, or a decimal, or a complex number. How that is
> accomplished is very specific to the database.
>
> If I'm following the capability you are talking about with dates in
> strings and pickled classes and jpegs, then that seems to me to be a
> layer on top of the database-specific part. I see the hierarchy as:
>
> advanced type conversion library
> python dbapi module
> database provided api
> database
>
> It would seem that if we get enough power and flexibility in the dbapi
> type specifiers, then the "advanced type conversion library" can be
> common code that does not care what type of dbapi it is sitting on. To
> handle jpegs or pickled classes it needs to be able to tell the dbapi
> that it wants to use BINARY objects. To handle the dates in strings it
> needs to be able to tell the dbapi that it wants to use strings.
Currently, you can't exactly write the "advanced type conversion
library" in a totally DBAPI-neutral way, because you can't always be
sure that a DBAPI supports the native types used by a particular
"advanced conversion" type. I mention dates in particular because
SQLite/pysqlite has no date type - you can *only* get a datetime
object to/from a SQLite database using string formatting of some
kind. So if datestring parsing is part of a "layer on top of DBAPI",
in the case of SQLite you need to use this layer, while in the case of
most other databases you don't.
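To make the SQLite case concrete, here's a minimal sketch of that datestring layer using the stdlib sqlite3 module; the helper names `dump_date` and `load_date` are hypothetical, and ISO-8601 text is just one common convention:

```python
import sqlite3
from datetime import date

# SQLite has no native DATE type, so a date must round-trip through a
# string column. Helper names here are illustrative, not part of any API.
def dump_date(d):
    return d.isoformat()                  # date -> "YYYY-MM-DD"

def load_date(s):
    return date(*map(int, s.split("-")))  # "YYYY-MM-DD" -> date

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (created TEXT)")
conn.execute("INSERT INTO t VALUES (?)", (dump_date(date(2007, 4, 22)),))
raw = conn.execute("SELECT created FROM t").fetchone()[0]
assert load_date(raw) == date(2007, 4, 22)
```

On most other DBAPIs the driver hands back a datetime object directly, so this conversion layer simply isn't needed there - which is exactly the asymmetry being described.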
Another example would be an "advanced" type that relies upon array
values. Lots of folks seem to like using Postgres' array type, a
type which is not available in other DBs. So such a type, which
depends on underlying arrays, would also need to vary its
implementation depending on the DBAPI.
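A sketch of what that backend-dependent variance looks like - the function names and the comma-delimited fallback encoding are assumptions for illustration, not any real driver's behavior:

```python
# An "array" value type whose implementation must vary per backend:
# on Postgres the Python list could pass through to a native ARRAY
# column; on a backend without arrays it has to be flattened to text.
def bind_array(values, has_native_arrays):
    if has_native_arrays:
        return values                          # driver handles list -> ARRAY
    return ",".join(str(v) for v in values)    # fallback: store as TEXT

def result_array(raw, has_native_arrays):
    if has_native_arrays:
        return raw
    return [int(v) for v in raw.split(",")]

stored = bind_array([1, 2, 3], has_native_arrays=False)
assert result_array(stored, has_native_arrays=False) == [1, 2, 3]
```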
Not that converting binary -> picklestream shouldn't be performed
externally to DBAPI... but because of the variance in available type
support it's hard to draw a crisp line between what's "on top" of
DBAPI and what's not, which is why with dates in particular I put
them in the "native" category, if for no other reason than SQLite's
non-support of them (well, and also that dates are pretty darn
important).
SQLAlchemy also expresses the "native type"/"advanced type" dichotomy
explicitly. For things like dates (which are non-standard in SQLite),
binary objects (which return a specialized LOB object on Oracle that
is normalized to act like the other DBAPIs), and numbers (which are
returned as Decimal in Postgres and as floats in all others), SA
implements whole modules of different TypeEngine implementations
tailored to each supported DBAPI - these types form the "lower level"
set of types. The "translation on top of a type" operation is handled
by subclasses of TypeDecorator, which references a TypeEngine (the
lower-level type base class) compositionally - currently PickleType
is the only standard type within this second hierarchy. Other folks
have also implemented Enums in this layer (which, ironically, is a
native type in MySQL).
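The compositional pattern can be sketched in a few lines - note these class names are illustrative stand-ins, not SQLAlchemy's actual TypeEngine/TypeDecorator API:

```python
import pickle

class BinaryEngine:
    """Lower-level type: passes raw bytes through to the DBAPI."""
    def bind(self, value):
        return value
    def result(self, value):
        return value

class PickleDecorator:
    """Higher-level type: adds a pickling translation on top of a
    wrapped lower-level type, held by reference (composition)."""
    def __init__(self, impl):
        self.impl = impl
    def bind(self, value):
        return self.impl.bind(pickle.dumps(value))
    def result(self, value):
        return pickle.loads(self.impl.result(value))

t = PickleDecorator(BinaryEngine())
assert t.result(t.bind({"a": 1})) == {"a": 1}
```

Because the higher-level type only talks to the wrapped type's interface, the same pickling logic works no matter which backend-specific binary implementation sits underneath it.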
So I guess the reason I conflate the "native"/"advanced" types is
that from DBAPI to DBAPI there's no clear line as to which category
a particular kind of type falls into.