[PYTHON DB-SIG] API conformant sybase module
Wed, 8 Jan 97 14:57:57 -0600
Bertil Reinhammar <Bertil_Reinhammar@ivab.se> wrote:
> Hi, nice to see some activity here ;-)
> > 1) Is there a reason that closing a connection has to render it
> > useless ?  In my current module, I can connect and disconnect at
> > will.  This might even be useful if you wanted to simulate cursors
> > with multiple connections but keep your actual connection count
> > down.
>
> If you say db.close(), is this not a message that you explicitly wish
> to make the kill permanent ?  In the Informix module (which I expect
> to release real soon now), connections are handled by allowing the
> user to open as many connections as required, each giving an object
> (of course), and transparently making the connections dormant or
> current depending on which object is used.
I just don't see the point of having a method that renders an object
completely useless, but keeps it around. I guess I will implement the close
method as stated, but keep my connect and disconnect methods as well.
> > 2) I don't understand how the variable binding is supposed to work.
> > Are the items in the tuple just replaced by the values, or is there
> > actually some sort of binding to a name in some namespace ?
>
> Hmm, I understand this regards variable binding in the execute()
> method, no ?  (Sorry if I restate the obvious.)  If so, the tuple in
> the execute() call provides the values to be used.  In the case of
> SELECT, the values in each row are returned as a tuple from the
> fetch() method.  In case one wishes to be generic w.r.t. the table,
> use the description() method, which is supposed to return column
> descriptions in the same order as the result comes from the fetch()
> method.
Sorry, but I still don't get it. Looking at the spec again, I may be missing
something significant. I was assuming that the 'operation' argument to
execute is a string of arbitrary SQL. If this is the case, then I don't see
any role for the 'params'. I also don't see any benefit to storing the string.
I must be missing something, somebody please help ! A python code example
would be very useful here.
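For what it is worth, here is one reading of the spec, sketched in plain
Python.  The '?' placeholder style and the bind() helper are my own
illustration, not part of any existing module: the point of 'params' would
be that the values are substituted for placeholders in 'operation', which
also means a driver could prepare the stored string once and re-execute it
cheaply with new values.

```python
# Illustration only: substitute each '?' placeholder in the operation
# string with the corresponding value from the params tuple.

def bind(operation, params):
    """Return the operation with params bound into its '?' placeholders."""
    parts = operation.split('?')
    if len(parts) - 1 != len(params):
        raise ValueError('placeholder/parameter count mismatch')
    out = parts[0]
    for value, part in zip(params, parts[1:]):
        if isinstance(value, str):
            # quote strings, doubling embedded quotes the SQL way
            out += "'" + value.replace("'", "''") + "'"
        else:
            out += str(value)
        out += part
    return out

print(bind('insert into employee values (?, ?)', ('ted', 42)))
# insert into employee values ('ted', 42)
```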
> > 3) How, when, and where are the DBI objects used ?
>
> There are e.g. BLOB objects.  If I have a BLOB column in my table, I
> get a dbiRaw object element in my return tuple (if I select), and I
> access the BLOB data via the dbiRaw object.
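As an illustration only (assuming, as in the dbi module, that the raw
data sits behind a .value attribute), such a wrapper can be very small:

```python
# Sketch of a dbiRaw-style wrapper: it marks a value as raw (BLOB) data
# so the driver passes it through untranslated.  The .value attribute is
# assumed here; check your dbi module for the real interface.

class dbiRaw:
    def __init__(self, value):
        self.value = value

# a fetched row with a name column and a BLOB column
row = ('logo.gif', dbiRaw(b'GIF89a...'))
name, image = row
print(image.value[:6])
```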
> > 4) I assume that it's ok to add methods, but what would people
> > think about having settable attributes (e.g. for the database or
> > the connection info) ?
>
> What do you have in mind ?
Currently I can do things like:
>>> syb1 = Sybase(user = 'ted', server = 'local', database = 'emp',
... password = 'haha', interface = '/usr/sybase/interfaces')
>>> syb1.sql('select * from employee')
[<Result instance at 165b74, 1629 rows, 10 columns>]
>>> syb1.database = 'acct'
>>> syb1.sql('select * from accounts')
[<Result instance at 16ab8c, 157 rows, 3 columns>]
>>> syb1.user = 'guest'
>>> syb1.database = 'cat'
>>> syb1.sql('select * from catalog')
[<Result instance at 15732c, 210 rows, 17 columns>]
Right now this is done in a Python wrapper class, and I guess it should just ...
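A rough sketch of what such a wrapper class might look like (the class
name, attribute list, and reconnect logic are all hypothetical):

```python
# Hypothetical connection wrapper with settable attributes: reassigning
# a setting drops the current session, and the next query reconnects
# using the updated settings.

class Connection:
    SETTINGS = ('user', 'password', 'server', 'database', 'interface')

    def __init__(self, **kw):
        self.__dict__['_session'] = None
        for name in self.SETTINGS:
            self.__dict__[name] = kw.get(name)

    def __setattr__(self, name, value):
        self.__dict__[name] = value
        if name in self.SETTINGS:
            self.__dict__['_session'] = None   # force a reconnect

    def sql(self, operation):
        if self._session is None:
            # a real module would open a server connection here
            self.__dict__['_session'] = (self.user, self.database)
        return 'executed %r as %s on %s' % (operation, self.user, self.database)

syb = Connection(user='ted', server='local', database='emp')
syb.sql('select * from employee')
syb.database = 'acct'          # next call reconnects to the new database
print(syb.sql('select * from accounts'))
```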
> > 5) What would people think about replacing the connection function
> > with something more generic ?  Rather than
> > modulename(connection_string), have connect(**kw).  Then
> > some_module.connect(user = "ted", server = "local", ...) could work
> > on different implementations and unused parameters could just be
> > ignored.
>
> I like this.

Please see the example above. We could come up with a standard set of
keywords (user, password, server, etc.) that would be common to most
databases, but you could put whatever you wanted in the argument list.
The implementor would just pick out the things that are needed for
their implementation, possibly providing defaults, and ignore the rest.
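A minimal sketch of that entry point (the keyword names and defaults
here are just examples, not a proposed standard):

```python
# Generic connect(**kw): pick out the keywords this implementation
# needs, supply defaults for missing ones, and ignore the rest.

def connect(**kw):
    user     = kw.get('user', 'guest')     # default when omitted
    server   = kw.get('server', 'local')
    database = kw.get('database')
    # keywords this implementation does not know are simply ignored
    return {'user': user, 'server': server, 'database': database}

conn = connect(user='ted', server='local', database='emp',
               interface='/usr/sybase/interfaces')   # extra key ignored
```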
> > Also, have people written generic software on top of this API ?
> > Is there a test suite ?
>
> Test suite ?  Not that I know of.  Generic code ?  No, there is a
> need for a standardized error report mechanism before this is even
> worth trying.  As of now, when you try to write something
> non-trivial, you end up handling database specifics in the error
> reports, either through direct error value fetch (which is not
> defined as yet) or by parsing the error string from the exception
> (and parsing messages meant for humans to guide my software does not
> appeal to me).
> What can we do to improve ?  Suggestion:
>
> 1) Analyse all possible errors and classify them into error classes:
>
>    error           comment
>    --------------  -----------------------------------------------
>    DBERR_LOCK      Some part of the query failed due to some lock,
>                    timeout, deadlock.
>    DBERR_QUERY     Malformed queries: syntax error, missing table.
>    DBERR_DATA      Insert failed due to constraints, duplicate
>                    keys etc.
>    DBERR_SERVER    Server down, connection lost etc.
>    DBERR_ACCESS    Authorization problems.
>    DBERR_RESOURCE  Memory full, disk full, blobspace full, licence.
>    DBERR_BUG       Internal errors to be reported to the vendor.
>    DBERR_ERROR     The rest, that cannot be successfully mapped
>                    onto the list above.
>
> 2) Upon exception, provide a tuple with:
>
>    - the error class as of 1);
>    - a string as reported from the database and "this" API, for
>      human consumption;
>    - a dictionary with error information such as the numeric code
>      and (in the case of Informix) the ISAM code.  Some of these
>      entries may be standardized, some left to the particular
>      database API.  Using a dictionary to report general information
>      allows descriptive keys as to how to interpret the values.
>
> This allows a generic interface as long as the error handling keeps
> to a reasonable level of sophistication.  When you require your
> software to be even smarter, I feel genericity falls, as you then
> probably require knowledge about the underlying components.  We have
> used a model similar to this for some time in our proprietary
> software.
OK, sounds fine.
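A sketch of what such an error report might look like as a Python
exception (the class name, the DBERR_* constants, and the dictionary
keys are illustrative only, not an agreed interface):

```python
# Exception carrying the suggested triple: (error class, human-readable
# string, dictionary of driver-specific details).

DBERR_LOCK, DBERR_QUERY, DBERR_DATA, DBERR_SERVER, \
    DBERR_ACCESS, DBERR_RESOURCE, DBERR_BUG, DBERR_ERROR = range(8)

class DatabaseError(Exception):
    def __init__(self, err_class, message, info):
        Exception.__init__(self, err_class, message, info)
        self.err_class = err_class   # one of the DBERR_* classes above
        self.message = message       # for human consumption
        self.info = info             # standardized + driver-specific keys

try:
    raise DatabaseError(DBERR_QUERY,
                        'syntax error near "selct"',
                        {'sqlcode': -201, 'isam': 0})
except DatabaseError as e:
    if e.err_class == DBERR_QUERY:
        print('bad query:', e.message, e.info['sqlcode'])
```

Generic code could then dispatch on e.err_class alone, while smarter,
database-aware handlers dig into e.info.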
DB-SIG - SIG on Tabular Databases in Python
send messages to: firstname.lastname@example.org
administrivia to: email@example.com