From tjg@avalongroup.net Fri Jan 21 01:51:26 2000
From: tjg@avalongroup.net (Timothy Grant)
Date: Thu, 20 Jan 2000 17:51:26 -0800
Subject: [DB-SIG] DBI Commentary
Message-ID: <3887BB9E.B3FA5174@exceptionalminds.com>
Hi all,
Thought I would take an item that I have been working on and commenting
on over on the main list, and move it here in the hope of being more
on-topic. Please forgive this newbie for jumping in on the deep end of a
discussion that has probably been going on for a long time, with many
minds more familiar with the subject than I.
There has been some discussion on the main list regarding the
retrieving of column headings using DBI databases. As far as I can tell
from the spec, associating column names with data retrieved from a
SELECT does not happen automatically.
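(For what it's worth, the names *are* available through the DB API's
cursor.description attribute; something like the following pulls them out
after a SELECT, assuming an open cursor cur and a table name of your own:)

# Each entry of cursor.description is a 7-tuple whose first
# element is the column name.
cur.execute("SELECT * FROM sometable")
names = []
for d in cur.description:
    names.append(d[0])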
Thanks particularly go to Gordon, Aahz and Lance as well as a number of
others in my journey to understanding DBI. Now that I think I understand
it, I'd like to make some recommendations (As I said last night, my PSA
membership form is on its way today so I can fully participate in the DB
sig).
First, embedding SQL in Python, even using the """SQL""" method, is
really ugly, especially while keeping indentation consistent, and it
gets phenomenally ugly if the queries get at all complex. I don't
think there is a solution to this problem, and I would hazard a
guess that no matter what language is in use it would be equally ugly.
That said, anything that can be done to keep the ugliness to a minimum
would be a good thing.
Aahz made the very valid point that specifying columns in your query
*guarantees* tuple return order. I had not fully appreciated that, and I
have put that into use in a number of places. However, I have to agree
with Lance when he says that "SELECT * FROM table" is not as often a bad
thing as Aahz claimed. My reasons for that are far less technical than
Lance's, my reasoning is just that it adds to the ugliness mentioned in
the prior paragraph. Using "SELECT field1, field2, field3 FROM table"
simply becomes *far* too cumbersome and difficult to read/debug if you
get beyond five or so fields. If you are dealing with a large number of
columns (my current project has one table with 33), debugging an error
becomes an exercise in frustration, especially when using the """SQL"""
method, as the interpreter points the error message at the first line of
the multi-line execute statement.
Lance posted some very nice code to build dictionaries out of the
results from a query...
# Assumes "import string" and a cursor "cur" on which a query has
# just been executed; this is the body of a function.
dsc = cur.description
l = len(dsc)
lst = cur.fetchall()
rtn = []
for data in lst:
    dict = {}
    for i2 in range(l):
        dict[string.lower(dsc[i2][0])] = data[i2]
    rtn.append(dict)
return rtn
It seems to me that the DBI spec *should* mandate that the results of a
fetchXXX() return not a tuple, but a dictionary, or a list of
dictionaries. The problem is that a change like this would break a large
amount of existing code. So, I'm trying to think of an appropriate
extension (perhaps a new argument to the fetchXXX() methods).
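To make that concrete, such an extension could first be tried as a plain
helper layered on top of the current spec (dictfetchall is a made-up
name, not anything in the spec):

import string

def dictfetchall(cur):
    # One dictionary per row, keyed by lowercased column name.
    names = []
    for d in cur.description:
        names.append(string.lower(d[0]))
    rows = []
    for data in cur.fetchall():
        row = {}
        for i in range(len(names)):
            row[names[i]] = data[i]
        rows.append(row)
    return rows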
It is possible that I am missing something already specified that has
not been implemented in my tools, but I have been unable to find it.
--
Stand Fast,
tjg.
Chief Technology Officer tjg@exceptionalminds.com
Red Hat Certified Engineer www.exceptionalminds.com
Avalon Technology Group, Inc. (503) 246-3630
>>>>>>>>>>>>EXCEPTIONAL MINDS, INNOVATIVE PRODUCTS<<<<<<<<<<<<
From Frank McGeough
Subject: [DB-SIG] ODBC Access
Hi All,
I've built a fairly full-featured odbc access layer for my C++
programmers. I've spent a lot of time optimizing it and dealing with
the idiosyncrasies of various odbc drivers. I'd like to extend it to
the Python folk. I looked at some of the extensions that are
included with Python like the odbc.pyd in win32 land. All the
extensions I've examined have been pretty messy 'C' code. I get the
general idea --- there seems to be a table of function pointers and
everything seems to be a PyObject --- but I was wondering if there are
more extensive samples of class libraries converted to extensions?
Also are there automated tools in this area? Where can I turn for help
in this process?
I have the following general set of classes:
CDBIDatabase (used to open up a particular data source)
CDBIConnection (obtained from a database. Can be more than one per
database).
CDBITable (encapsulates the idea of a database table).
CDBIColumn (encapsulates the idea of a database column).
CDBISchema (description of a schema - made up of CDBIColumns)
CDBIForeignKey (description of fkey).
CDBIRow (a row of result information, made up of an ordered
collection of CDBIValue's)
CDBICursor (a db cursor - may be forward only or bi-directional).
CDBIValue (a database value obtained by executing some sql statement).
I've also provided a set of classes to wrap database types - like Blob
and String and DateTime. My class library doesn't seem to fit into dbi
either but that's a lesser consideration for my internal folk. Each of
my classes is coded as a reference counted interface/implementation. I
currently deliver this to the C++ programmers as Windows (win32) DLL.
Any help or pointers appreciated.
Frank
Synchrologic, Inc.
http://www.synchrologic.com
T: 770.754.5600
F: 770.619.5612
From mal@lemburg.com Mon Jan 24 09:22:52 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 24 Jan 2000 10:22:52 +0100
Subject: [DB-SIG] DBI Commentary
References: <3887BB9E.B3FA5174@exceptionalminds.com>
Message-ID: <388C19EC.C38E7EA0@lemburg.com>
Timothy Grant wrote:
>
> Hi all,
>
> Thought I would take an item that I have been working on and commenting
> on over on the main list, and move it here in the hope of being more
> on-topic. Please forgive this newbie for jumping in on the deep end of a
> discussion that has probably been going on for a long time, with many
> minds more familiar with the subject than I.
>
> There has been some discussion on the main list regarding the
> retrieving of column headings using DBI databases. As far as I can tell
> from the spec, associating column names with data retrieved from a
> SELECT does not happen automatically.
>
> Thanks particularly go to Gordon, Aahz and Lance as well as a number of
> others in my journey to understanding DBI. Now that I think I understand
> it, I'd like to make some recommendations (As I said last night, my PSA
> membership form is on its way today so I can fully participate in the DB
> sig).
>
> First, embedding SQL in Python, even using the """SQL""" method, is
> really ugly, especially while keeping indentation consistent, and it
> gets phenomenally ugly if the queries get at all complex. I don't
> think there is a solution to this problem, and I would hazard a
> guess that no matter what language is in use it would be equally ugly.
> That said, anything that can be done to keep the ugliness to a minimum
> would be a good thing.
>
> Aahz made the very valid point that specifying columns in your query
> *guarantees* tuple return order. I had not fully appreciated that, and I
> have put that into use in a number of places. However, I have to agree
> with Lance when he says that "SELECT * FROM table" is not as often a bad
> thing as Aahz claimed. My reasons for that are far less technical than
> Lance's, my reasoning is just that it adds to the ugliness mentioned in
> the prior paragraph. Using "SELECT field1, field2, field3 FROM table"
> simply becomes *far* too cumbersome and difficult to read/debug if you
> get beyond five or so fields. If you are dealing with a large number of
> columns (my current project has one table with 33), debugging an error
> becomes an exercise in frustration, especially when using the """SQL"""
> method, as the interpreter points the error message at the first line of
> the multi-line execute statement.
>
> Lance posted some very nice code to build dictionaries out of the
> results from a query...
>
> dsc = cur.description
> l = len(dsc)
> lst = cur.fetchall()
> rtn = []
> for data in lst:
>     dict = {}
>     for i2 in range(l):
>         dict[string.lower(dsc[i2][0])] = data[i2]
>     rtn.append(dict)
> return rtn
>
> It seems to me that the DBI spec *should* mandate that the results of a
> fetchXXX() return not a tuple, but a dictionary, or a list of
> dictionaries. The problem is that a change like this would break a large
> amount of existing code. So, I'm trying to think of an appropriate
> extension (perhaps a new argument to the fetchXXX() methods).
Too much breakage, I suppose. These enhancements can easily
be implemented in the database abstraction layer of your
application.
Note that doing this in general will yield some confusion:
there often are columns which do not refer to a table column,
e.g. return values of SQL functions. Databases usually create
some wild names for them, so you'd still get the data, but
it would be hard to address the column due to those "wild
names" not being standardised.
Note that Jim Fulton implemented a special result set type
for storing the return values. These types (being sequences
of sequences) are in line with the DB API while still providing
some additional functionality. Perhaps that's the way to go...
The DB SIG would have to provide a reference implementation
which could then be made part of the next DB API version.
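To illustrate the idea (a rough sketch only, not Jim's actual code): a
row type can satisfy the sequence protocol the DB API requires while
also allowing access by column name:

class Row:
    # A sequence of values (DB API compatible) that can also
    # be looked up by column name.
    def __init__(self, names, values):
        self.names = names          # list of column names
        self.values = values
    def __getitem__(self, i):
        return self.values[i]       # sequence access (and iteration)
    def __len__(self):
        return len(self.values)
    def byname(self, name):
        return self.values[self.names.index(name)]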
> It is possible that I am missing something already specified that has
> not been implemented in my tools, but I have been unable to find it.
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From mal@lemburg.com Mon Jan 24 11:20:07 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 24 Jan 2000 12:20:07 +0100
Subject: [DB-SIG] ODBC Access
References: <000201bf6604$53d4f1b0$289b90d1@synchrologic.com>
Message-ID: <388C3567.B5E2BE4D@lemburg.com>
Frank McGeough wrote:
>
> Hi All,
>
> I've built a fairly full featured odbc access layer for my C++
> programmers. I've spent a lot of time optimizing it and dealing with
> the idiosyncrasies of various odbc drivers. I'd like to extend it to
> the Python folk. I looked at some of the extensions that are
> included with Python like the odbc.pyd in win32 land. All the
> extensions I've examined have been pretty messy 'C' code. I get the
> general idea --- there seems to be a table of function pointers and
> everything seems to be a PyObject --- but I was wondering if there are
> more extensive samples of class libraries converted to extensions?
> Also are there automated tools in this area? Where can I turn for help
> in this process?
Check out SWIG which works great for semi-automagically wrapping
C++ libs for use in Python (and other languages): http://www.swig.org
> I have the following general set of classes:
>
> CDBIDatabase (used to open up a particular data source)
> CDBIConnection (obtained from a database. Can be more than one per
> database).
> CDBITable (encapsulates the idea of a database table).
> CDBIColumn (encapsulates the idea of a database column).
> CDBISchema (description of a schema - made up of CDBIColumns)
> CDBIForeignKey (description of fkey).
> CDBIRow (a row of result information, made up of an ordered
> collection of CDBIValue's)
> CDBICursor (a db cursor - may be forward only or bi-directional).
> CDBIValue (a database value obtained by executing some sql statement).
Do you have some sample code using these classes ? What
methods do tables and columns have in your lib ?
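Once wrapped, I'd expect usage from Python to look roughly like this
(hypothetical module and method names, based only on your class list):

import cdbi                          # hypothetical wrapped module

db = cdbi.CDBIDatabase("mydsn")      # open a data source
conn = db.connect()                  # a CDBIConnection (hypothetical)
cursor = conn.cursor()               # a CDBICursor (hypothetical)
cursor.execute("select * from mytable")
row = cursor.fetch()                 # a CDBIRow of CDBIValues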
> I've also provided a set of classes to wrap database types - like Blob
> and String and DateTime. My class library doesn't seem to fit into dbi
> either but that's a lesser consideration for my internal folk. Each of
> my classes is coded as a reference counted interface/implementation. I
> currently deliver this to the C++ programmers as Windows (win32) DLL.
>
> Any help or pointers appreciated.
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From jjsa@gvtc.com Tue Jan 25 04:38:47 2000
From: jjsa@gvtc.com (Jim Johannsen)
Date: Mon, 24 Jan 2000 22:38:47 -0600
Subject: [DB-SIG] PyGresSql help
Message-ID: <388D28D7.7286E6EC@gvtc.com>
I am trying to use PyGresSQL to connect Python and Postgres. I have
managed to connect to the db, but cannot update the database with a list
or with strings. I have tried the following code w/o success:
cnx.query("INSERT INTO county VALUES(%s,%s,%s)") %(Ldata[0], Ldata[1],
Ldata[3])
cnx.query("INSERT INTO county VALUES('%s,%s,%s')") %(Ldata[0],
Ldata[1], Ldata[3])
cnx.query("INSERT INTO county VALUES(%s,%s,%s)") %(x,y,z) where x,y
and z are strings.
and countless variations of the above. Every variation but the correct
one.
I have also tried to insert lists through the inserttable function.
Can't get it to update either.
I can insert hard coded values into the database so I know I can
insert, just not by reference.
Could someone point out the error of my ways?
From deirdre@deirdre.net Tue Jan 25 04:55:42 2000
From: deirdre@deirdre.net (Deirdre Saoirse)
Date: Mon, 24 Jan 2000 20:55:42 -0800 (PST)
Subject: [DB-SIG] PyGresSql help
In-Reply-To: <388D28D7.7286E6EC@gvtc.com>
Message-ID:
On Mon, 24 Jan 2000, Jim Johannsen wrote:
> cnx.query("INSERT INTO county VALUES(%s,%s,%s)") %(Ldata[0], Ldata[1],
> Ldata[3])
I haven't used this, but it seems to me:
cnx.query("INSERT INTO county VALUES('%s', '%s', '%s')")
% (Ldata[0], Ldata[1], Ldata[3])
At least, that's how I've done it on other sql implementations.
--
_Deirdre * http://www.linuxcabal.net * http://www.deirdre.net
"Mars has been a tough target" -- Peter G. Neumann, Risks Digest Moderator
"That's because the Martians keep shooting things down." -- Harlan Rosenthal
, retorting in Risks Digest 20.60
From tbryan@server.python.net Tue Jan 25 12:51:58 2000
From: tbryan@server.python.net (Tom Bryan)
Date: Tue, 25 Jan 2000 07:51:58 -0500 (EST)
Subject: [DB-SIG] PyGresSql help
In-Reply-To: <388D28D7.7286E6EC@gvtc.com>
Message-ID:
On Mon, 24 Jan 2000, Jim Johannsen wrote:
> I am trying to use PyGresSQL to connect Python and Postgres. I have
> managed to connect to the db, but cannot update the database with a list
> or with strings. I have tried the following code w/o success:
>
> cnx.query("INSERT INTO county VALUES(%s,%s,%s)") %(Ldata[0], Ldata[1],
> Ldata[3])
...
> and countless variations of the above. Every variation but the correct
> one.
I think that this is more of a generic Python syntax problem than a
database problem. I would guess that the string interpolation must
occur inside of the function argument. For example,
Python 1.5.2 (#1, Apr 18 1999, 16:03:16)  [GCC pgcc-2.91.60 19981201 (egcs-1.1.1)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> def foo(text):
...     print text
...
>>> foo("%s, %s, %s") % ('one','two','three')
%s, %s, %s
Traceback (innermost last):
  File "<stdin>", line 1, in ?
TypeError: bad operand type(s) for %
>>> foo( "%s, %s, %s" % ('one','two','three') )
one, two, three
>>>
So, I'd suggest the following. (Note that the whitespace isn't really
necessary: I'm just trying to emphasize how the parentheses nest.)
cnx.query(
    "INSERT INTO county VALUES(%s,%s,%s)" % (
        Ldata[0], Ldata[1], Ldata[3]
    )
)
---Tom
From Cary Collett Tue Jan 25 14:19:44 2000
From: Cary Collett (Cary Collett)
Date: Tue, 25 Jan 2000 08:19:44 -0600
Subject: [DB-SIG] PyGresSql help
In-Reply-To:
References: <388D28D7.7286E6EC@gvtc.com>
Message-ID: <20000125081944.A18036@ratatosk.org>
Thus spake Tom Bryan (tbryan@server.python.net):
>
>
> On Mon, 24 Jan 2000, Jim Johannsen wrote:
>
> > I am trying to use PyGresSQL to connect Python and Postgres. I have
> > managed to connect to the db, but cannot update the database with a list
> > or with strings. I have tried the following code w/o success:
> >
> > cnx.query("INSERT INTO county VALUES(%s,%s,%s)") %(Ldata[0], Ldata[1],
> > Ldata[3])
>
> So, I'd suggest the following. (Note that the whitespace isn't really
> necessary: I'm just trying to emphasize how the parentheses nest.)
>
> cnx.query(
>     "INSERT INTO county VALUES(%s,%s,%s)" % (
>         Ldata[0], Ldata[1], Ldata[3]
>     )
> )
>
I don't use PostgreSQL, but every other DB I've used will
expect single quotes around strings:
cnx.query(
    "INSERT INTO county VALUES('%s','%s','%s')" % (
        Ldata[0], Ldata[1], Ldata[3]
    )
)
Cary
--
Cary Collett cary@ratatosk.org
http://ratatosk.org/~cary
"To me boxing is like ballet, except that there's no music, no choreography,
and the dancers hit each other." -- Jack Handy
From hniksic@iskon.hr Tue Jan 25 14:33:01 2000
From: hniksic@iskon.hr (Hrvoje Niksic)
Date: 25 Jan 2000 15:33:01 +0100
Subject: [DB-SIG] Preparing statement API
Message-ID: <9t9r9f6p6g2.fsf@mraz.iskon.hr>
My idle mind has played with the option of Python's DB API supporting
the concept of preparing an SQL statement. I know you can pass the
same string object to the db module and it can reuse it, but I
consider that feature very non-obvious and (gasp) un-Pythonical --
look at how the regexp compilation is handled in Python's regexp
module.
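(For concreteness, the implicit reuse referred to above looks roughly
like this; a driver that supports it may notice that the very same
string object is passed again and skip re-parsing:)

stmt = "INSERT INTO foo (user, name) VALUES (?, ?)"
for user, name in lst:
    # Passing the same string object each time lets the
    # module reuse the already-parsed statement.
    cur.execute(stmt, (user, name))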
I would feel much safer with an explicit prepare interface that looked
like this:
statement = cur.prepare("INSERT INTO foo (user, name) VALUES (?, ?)")
for user, name in lst:
    cur.execute(statement, (user, name))
The interface change is minimal: only one more function needs to be
implemented, and all the fetching functions remain unchanged. The
advantage of this form over the reuse-old-sql-statement-string method
is that here the programmer can control what statements to cache, and
how long to keep the statement handles around.
Finally, the databases that don't support preparing SQL statements can
simply define cursor.prepare(str) to return str unchanged, and
everything will work as before.
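(A sketch of that trivial fallback, inside a hypothetical driver's
cursor class:)

class Cursor:
    # ... the usual DB API cursor methods ...
    def prepare(self, statement):
        # No native prepare support: hand the string back
        # unchanged; .execute() parses it as it always has.
        return statement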
From mal@lemburg.com Tue Jan 25 18:12:31 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 25 Jan 2000 19:12:31 +0100
Subject: [DB-SIG] Preparing statement API
References: <9t9r9f6p6g2.fsf@mraz.iskon.hr>
Message-ID: <388DE78F.AA271807@lemburg.com>
Hrvoje Niksic wrote:
>
> My idle mind has played with the option of Python's DB API supporting
> the concept of preparing an SQL statement. I know you can pass the
> same string object to the db module and it can reuse it, but I
> consider that feature very non-obvious and (gasp) un-Pythonical --
> look at how the regexp compilation is handled in Python's regexp
> module.
>
> I would feel much safer with an explicit prepare interface that looked
> like this:
>
> statement = cur.prepare("INSERT INTO foo (user, name) VALUES (?, ?)")
>
> for user, name in lst:
>     cur.execute(statement, (user, name))
>
> The interface change is minimal: only one more function needs to be
> implemented, and all the fetching functions remain unchanged. The
> advantage of this form over the reuse-old-sql-statement-string method
> is that here the programmer can control what statements to cache, and
> how long to keep the statement handles around.
>
> Finally, the databases that don't support preparing SQL statements can
> simply define cursor.prepare(str) to return str unchanged, and
> everything will work as before.
The problem with this approach is that you'd have to store
the prepared information somewhere. This is usually done
by the cursor which is not under Python's control.
FYI, the next version of mxODBC will provide this interface:
cursor.prepare(operation)
Prepare a database operation (query or command) statement for
later execution and set cursor.command. To execute a prepared
statement, pass cursor.command to one of the .executeXXX() methods.
cursor.command
Provides access to the current prepared SQL command available
through the cursor. This is set by .prepare(), all catalog
methods, .execute() and .executemany().
It uses the hidden reuse optimization as a basis. The main
difference between .prepare and the .execute method is that
no statement execution takes place; the command is only parsed
and verified.
This allows keeping a cache of cursors with prepared statements
around which speeds up processing of often recurring queries.
In addition to the above, mxODBC will also have a .freeset()
method which clears the result set currently available on
the cursor -- this is needed in cases where you don't fetch the
whole set, as a memory optimization.
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From brunson@Level3.net Tue Jan 25 19:04:06 2000
From: brunson@Level3.net (Eric Brunson)
Date: Tue, 25 Jan 2000 12:04:06 -0700
Subject: [DB-SIG] Preparing statement API
In-Reply-To: <388DE78F.AA271807@lemburg.com>; from mal@lemburg.com on Tue, Jan 25, 2000 at 07:12:31PM +0100
References: <9t9r9f6p6g2.fsf@mraz.iskon.hr> <388DE78F.AA271807@lemburg.com>
Message-ID: <20000125120406.A18415@level3.net>
* M.-A. Lemburg (mal@lemburg.com) [000125 18:25]:
> Hrvoje Niksic wrote:
> >
> > My idle mind has played with the option of Python's DB API supporting
> > the concept of preparing an SQL statement. I know you can pass the
> > same string object to the db module and it can reuse it, but I
> > consider that feature very non-obvious and (gasp) un-Pythonical --
> > look at how the regexp compilation is handled in Python's regexp
> > module.
> >
> > I would feel much safer with an explicit prepare interface that looked
> > like this:
> >
> > statement = cur.prepare("INSERT INTO foo (user, name) VALUES (?, ?)")
> >
> > for user, name in lst:
> >     cur.execute(statement, (user, name))
> >
> > The interface change is minimal: only one more function needs to be
> > implemented, and all the fetching functions remain unchanged. The
> > advantage of this form over the reuse-old-sql-statement-string method
> > is that here the programmer can control what statements to cache, and
> > how long to keep the statement handles around.
> >
> > Finally, the databases that don't support preparing SQL statements can
> > simply define cursor.prepare(str) to return str unchanged, and
> > everything will work as before.
>
> The problem with this approach is that you'd have to store
> the prepared information somewhere. This is usually done
> by the cursor which is not under Python's control.
How does what Hrvoje is asking for differ from the Handle object in
the DCOracle module?
From the DCOracle docs:
prepare(sql) --
Returns a new Handle Object. A handle is a pre-prepared
notion of a cursor, where some handling of the SQL
parsing has already been done.
You use it like this:
(Assume a DB connection called "dbc")
inshndl = dbc.prepare( "insert into mytable ( attribute, value ) "
                       "values ( :p1, :p2 )" )
for attr, val in list:
    inshndl.execute( (attr, val) )
:p1 and :p2 can also be named parameters, but as I use them in that
example, they are simply positional.
This is a pretty common usage from within an API like Pro*C. I'm
sorry that I don't have experience with anything but Oracle and the
DCOracle python module, so perhaps I'm missing something in the
generalization of the problem to other DB vendors.
For what it's worth,
e.
--
Eric Brunson * _ o * Faster and faster,
brunson@brunson.com * / //\ until the thrill of speed
brunson@level3.net \>>| * overcomes the fear of death
page-eric@level3.net \\,
From mal@lemburg.com Tue Jan 25 20:17:08 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 25 Jan 2000 21:17:08 +0100
Subject: [DB-SIG] Preparing statement API
References: <9t9r9f6p6g2.fsf@mraz.iskon.hr> <388DE78F.AA271807@lemburg.com> <20000125120406.A18415@level3.net>
Message-ID: <388E04C4.2894BD5B@lemburg.com>
Eric Brunson wrote:
>
> How does what Hrvoje is asking for differ from the Handle object in
> the DCOracle module?
>
> >From the DCOracle docs:
>
> prepare(sql) --
> Returns a new Handle Object. A handle is a pre-prepared
> notion of a cursor, where some handling of the SQL
> parsing has already been done.
>
> You use it like this:
>
> (Assume a DB connection called "dbc")
>
> inshndl = dbc.prepare( "insert into mytable ( attribute, value ) "
>                        "values ( :p1, :p2 )" )
> for attr, val in list:
>     inshndl.execute( (attr, val) )
>
> :p1 and :p2 can also be named parameters, but as I use them in that
> example, they are simply positional.
With mxODBC this would look something like this:
cursor = dbc.cursor()
cursor.prepare("insert into mytable (attribute, value ) values (?,?)")
for attr, val in list:
    cursor.execute(cursor.command, (attr, val))
Not all that different and without the need to invent new
APIs or objects... seriously, all these gimmicks can be done
using abstraction layers on top of the existing DB API interface.
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From dcinege@psychosis.com Tue Jan 25 23:56:05 2000
From: dcinege@psychosis.com (Dave Cinege)
Date: Tue, 25 Jan 2000 18:56:05 -0500
Subject: [DB-SIG] DCOracle -- Need help and examples
Message-ID: <388E3815.941D536B@psychosis.com>
I'm a Python novice and trying to get this module going with little luck.
Maybe it's not installed right?
(The 2 .so's and all the .py's I put in site-packages/)
dbc = oci_.Connect(user/pass)
Works without error.
Anything else I try to do errors. Even trying to close it.
#!/usr/local/bin/python
import sys, os
import Buffer, oci_
dbc = oci_.Connect(user/pass)
dbc.close()
$ ./dbtest.py
Traceback (innermost last):
File "./dbtest.py", line 14, in ?
dbc.close()
AttributeError: 'Connection' object has no attribute 'close'
If someone could throw me a pile of example code it would help me greatly.
$ v /usr/local/lib/python1.5/site-packages/
total 6496
-r-xr-xr-x 1 root other 31864 Jan 25 18:12 Buffer.so
-rw-r--r-- 1 root other 2620 Jan 25 18:05 __init__.py
-rw-r--r-- 1 root other 3973 Jan 25 18:05 dbi.py
-rw-r--r-- 1 root other 11965 Jan 25 18:05 ociBind.py
-rw-r--r-- 1 root other 13813 Jan 25 18:05 ociCurs.py
-rw-r--r-- 1 root other 11327 Jan 25 18:05 ociProc.py
-rw-r--r-- 1 root other 2424 Jan 25 18:05 ociUtil.py
-r-xr-xr-x 1 root other 6547188 Jan 25 18:12 oci_.so
-rw-r--r-- 1 root other 4893 Jan 25 18:05 ocidb.py
-rw-r--r-- 1 root other 2808 Jan 25 18:05 ocitypes.py
From brunson@Level3.net Wed Jan 26 00:06:16 2000
From: brunson@Level3.net (Eric Brunson)
Date: Tue, 25 Jan 2000 17:06:16 -0700
Subject: [DB-SIG] DCOracle -- Need help and examples
In-Reply-To: <388E3815.941D536B@psychosis.com>; from dcinege@psychosis.com on Tue, Jan 25, 2000 at 06:56:05PM -0500
References: <388E3815.941D536B@psychosis.com>
Message-ID: <20000125170615.A26041@level3.net>
* Dave Cinege (dcinege@psychosis.com) [000125 23:48]:
> I'm a Python novice and trying to get this module going with little luck.
> Maybe it's not installed right?
> (The 2 .so's and all the .py's I put in site-packages/)
See below.
> dbc = oci_.Connect(user/pass)
You shouldn't be calling the oci_ functions directly.
orb(~)$ python
Python 1.5.2 (#1, Dec 22 1999, 14:55:37) [GCC 2.8.1] on sunos5
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> import DCOracle
>>> dbc = DCOracle.Connect( "prov_owner/secret" )
>>> curs = dbc.cursor()
>>> curs.execute( "select table_name from tabs" )
>>> curs.execute( "select table_name from tabs" )
>>> for ( name, ) in curs.fetchall():
...     print name
...
SERVICE
SERVICE_ATTRIBUTE
SHARED_DIAL_POOL
SITE
SVC_COMMENTS
SVC_ERROR_HISTORY
SVC_NET_ELEMENT
SVC_PROV_SYSTEM
SWITCH
>>>
That said, if you know what you're doing, you should probably be able
to call the oci_ module directly, but why?
> $ v /usr/local/lib/python1.5/site-packages/
> total 6496
> -r-xr-xr-x 1 root other 31864 Jan 25 18:12 Buffer.so
> -rw-r--r-- 1 root other 2620 Jan 25 18:05 __init__.py
> -rw-r--r-- 1 root other 3973 Jan 25 18:05 dbi.py
> -rw-r--r-- 1 root other 11965 Jan 25 18:05 ociBind.py
> -rw-r--r-- 1 root other 13813 Jan 25 18:05 ociCurs.py
> -rw-r--r-- 1 root other 11327 Jan 25 18:05 ociProc.py
> -rw-r--r-- 1 root other 2424 Jan 25 18:05 ociUtil.py
> -r-xr-xr-x 1 root other 6547188 Jan 25 18:12 oci_.so
> -rw-r--r-- 1 root other 4893 Jan 25 18:05 ocidb.py
> -rw-r--r-- 1 root other 2808 Jan 25 18:05 ocitypes.py
>
Finish reading the installation directions.
From README.txt:
The DCOracle package is a Python package. To install it, simply
copy (or on Unix link) the 'DCOracle' directory to a directory in
your Python search path.
Then you'll be able to import DCOracle.
Hope this helps.
Sincerely,
e.
--
Eric Brunson * _ o * Faster and faster,
brunson@brunson.com * / //\ until the thrill of speed
brunson@level3.net \>>| * overcomes the fear of death
page-eric@level3.net \\,
From hniksic@iskon.hr Wed Jan 26 08:10:27 2000
From: hniksic@iskon.hr (Hrvoje Niksic)
Date: 26 Jan 2000 09:10:27 +0100
Subject: [DB-SIG] Preparing statement API
In-Reply-To: "M.-A. Lemburg"'s message of "Tue, 25 Jan 2000 19:12:31 +0100"
References: <9t9r9f6p6g2.fsf@mraz.iskon.hr> <388DE78F.AA271807@lemburg.com>
Message-ID: <9t9oga9l0cs.fsf@mraz.iskon.hr>
"M.-A. Lemburg" writes:
> > statement = cur.prepare("INSERT INTO foo (user, name) VALUES (?, ?)")
[...]
> The problem with this approach is that you'd have to store the
> prepared information somewhere. This is usually done by the cursor
> which is not under Python's control.
You'd store it (or a pointer to it) in the statement object, returned
by cursor.prepare(). Given that prepare() is a cursor method, the
cursor can be made aware of each such object.
> cursor.prepare(operation)
> Prepare a database operation (query or command) statement for
> later execution and set cursor.command. To execute a prepared
> > statement, pass cursor.command to one of the .executeXXX()
> methods.
>
> cursor.command
> Provides access to the current prepared SQL command available
> through the cursor. This is set by .prepare(), all catalog
> methods, .execute() and .executemany().
This is very close to what I proposed, except that my cursor.prepare()
returns the statement (which you provide through cursor.command).
This allows one to prepare more than one statement using one cursor.
From hniksic@iskon.hr Wed Jan 26 08:13:31 2000
From: hniksic@iskon.hr (Hrvoje Niksic)
Date: 26 Jan 2000 09:13:31 +0100
Subject: [DB-SIG] Preparing statement API
In-Reply-To: Eric Brunson's message of "Tue, 25 Jan 2000 12:04:06 -0700"
References: <9t9r9f6p6g2.fsf@mraz.iskon.hr> <388DE78F.AA271807@lemburg.com> <20000125120406.A18415@level3.net>
Message-ID: <9t9k8kxl07o.fsf@mraz.iskon.hr>
Eric Brunson writes:
> From the DCOracle docs:
>
> prepare(sql) --
> Returns a new Handle Object. A handle is a pre-prepared
> notion of a cursor, where some handling of the SQL
> parsing has already been done.
>
> You use it like this:
>
> (Assume a DB connection called "dbc")
>
> inshndl = dbc.prepare( "insert into mytable ( attribute, value ) "
>                        "values ( :p1, :p2 )" )
> for attr, val in list:
>     inshndl.execute( (attr, val) )
The difference is that the interface of my proposal seems (to me) more
in line with the general cursor-based model of DB API. It only adds
one more function, and everything else happens naturally -- for
instance, you still execute statements through cursor.execute,
retrieve rows through cursor.fetch*, etc.
Everyone please note that my proposal comes from a mere *user* of the
DB modules. If what I propose is hard to implement, don't hesitate to
let me know.
From mal@lemburg.com Wed Jan 26 09:22:05 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 26 Jan 2000 10:22:05 +0100
Subject: [DB-SIG] Preparing statement API
References: <9t9r9f6p6g2.fsf@mraz.iskon.hr> <388DE78F.AA271807@lemburg.com> <9t9oga9l0cs.fsf@mraz.iskon.hr>
Message-ID: <388EBCBD.4833F613@lemburg.com>
Hrvoje Niksic wrote:
>
> "M.-A. Lemburg" writes:
>
> > > statement = cur.prepare("INSERT INTO foo (user, name) VALUES (?, ?)")
> [...]
> > The problem with this approach is that you'd have to store the
> > prepared information somewhere. This is usually done by the cursor
> > which is not under Python's control.
>
> You'd store it (or a pointer to it) in the statement object, returned
> by cursor.prepare(). Given that prepare() is a cursor method, the
> cursor can be made aware of each such object.
>
> > cursor.prepare(operation)
> > Prepare a database operation (query or command) statement for
> > later execution and set cursor.command. To execute a prepared
> > > statement, pass cursor.command to one of the .executeXXX()
> > methods.
> >
> > cursor.command
> > Provides access to the current prepared SQL command available
> > through the cursor. This is set by .prepare(), all catalog
> > methods, .execute() and .executemany().
>
> This is very close to what I proposed, except that my cursor.prepare()
> returns the statement (which you provide through cursor.command).
> This allows one to prepare more than one statement using one cursor.
Maybe this is just an ODBC thing, but you cannot prepare more
than one statement on a cursor (in ODBC the notion of cursors
is captured by the HSTMT or statement handle). For ODBC, your
.prepare() function would have to return a new cursor.
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From brunson@Level3.net Wed Jan 26 17:17:23 2000
From: brunson@Level3.net (Eric Brunson)
Date: Wed, 26 Jan 2000 10:17:23 -0700
Subject: [DB-SIG] Preparing statement API
In-Reply-To: <9t9k8kxl07o.fsf@mraz.iskon.hr>; from hniksic@iskon.hr on Wed, Jan 26, 2000 at 09:13:31AM +0100
References: <9t9r9f6p6g2.fsf@mraz.iskon.hr> <388DE78F.AA271807@lemburg.com> <20000125120406.A18415@level3.net> <9t9k8kxl07o.fsf@mraz.iskon.hr>
Message-ID: <20000126101723.A10128@level3.net>
* Hrvoje Niksic (hniksic@iskon.hr) [000126 08:13]:
> Eric Brunson writes:
>
> > From the DCOracle docs:
> >
> > prepare(sql) --
> > Returns a new Handle Object. A handle is a pre-prepared
> > notion of a cursor, where some handling of the SQL
> > parsing has already been done.
> >
> > You use it like this:
> >
> > (Assume a DB connection called "dbc")
> >
> > inshndl = dbc.prepare( "insert into mytable ( attribute, value ) "
> >                        "values ( :p1, :p2 )" )
> > for attr, val in list:
> >     inshndl.execute( (attr, val) )
>
> The difference is that the interface of my proposal seems (to me) more
> in line with the general cursor-based model of DB API. It only adds
> one more function, and everything else happens naturally -- for
> instance, you still execute statements through cursor.execute,
> retrieve rows through cursor.fetch*, etc.
>
> Everyone please note that my proposal comes from a mere *user* of the
> DB modules. If what I propose is hard to implement, don't hesitate to
> let me know.
>
I guess my point was the DCOracle package already has the
functionality you are describing, the Handle object inherits directly
from the Cursor object but they add the prepare() function to the
connection object and then overload execute() from the Cursor object
to take a list of parameters.
So, this is almost exactly what you're asking, except you apparently
want the prepare() function to be a method of the Cursor object
instead of the Connection. The handle is still a Cursor and is
manipulated through the fetch*() functions.
I'm thinking that putting the prepare in the Connection object
probably avoids having to tote the baggage of runtime binding around
in a Cursor object if you aren't going to use it. It does limit you
by having to instantiate a different Handle object for each different
query, but you'd have to do some careful benchmarking to determine if
that is outweighed by a performance gain in using normal cursors. If
you look at the actual C code generated by the Pro*C precompiler of a
simple static select statement vs. one with runtime variable binding,
you'll see quite a difference. That could very well be performance
impacting.
What I'm saying is the code is there, it exists. I'm afraid I've only
ever used the DCOracle package and not the built in DB API stuff, but
from what I've read, the interfaces are pretty darn similar, almost
identical. I believe the DCOracle implementation using :arbitraryname
to be a little more elegant since you can use named parameter lists
*or* positional parameters, rather than being restricted to positional
parameters by question marks as place holders. This is also more in
line of the syntax of all the other Oracle implementations I've seen
in APIs such as Pro*C, SQL*Plus and PL/SQL.
So, if you want the functionality right away, use the DCOracle package
from Zope. If it doesn't suit your tastes, then feel free to ignore
this.
e.
P.S.
I only have an in depth understanding of Oracle's implementation
specifics, so if what I've described above does not translate to
Sybase, Informix and the Open Source SQL DBs, please enlighten me.
Thanks.
--
Eric Brunson * _ o * Faster and faster,
brunson@brunson.com * / //\ until the thrill of speed
brunson@level3.net \>>| * overcomes the fear of death
page-eric@level3.net \\,
From brunson@Level3.net Wed Jan 26 17:22:49 2000
From: brunson@Level3.net (Eric Brunson)
Date: Wed, 26 Jan 2000 10:22:49 -0700
Subject: [DB-SIG] Preparing statement API
In-Reply-To: <388EBCBD.4833F613@lemburg.com>; from mal@lemburg.com on Wed, Jan 26, 2000 at 10:22:05AM +0100
References: <9t9r9f6p6g2.fsf@mraz.iskon.hr> <388DE78F.AA271807@lemburg.com> <9t9oga9l0cs.fsf@mraz.iskon.hr> <388EBCBD.4833F613@lemburg.com>
Message-ID: <20000126102248.B10128@level3.net>
* M.-A. Lemburg (mal@lemburg.com) [000126 10:06]:
> Hrvoje Niksic wrote:
> >
> > "M.-A. Lemburg" writes:
> >
> > > > statement = cur.prepare("INSERT INTO foo (user, name) VALUES (?, ?)")
> > [...]
> > > The problem with this approach is that you'd have to store the
> > > prepared information somewhere. This is usually done by the cursor
> > > which is not under Python's control.
> >
> > You'd store it (or a pointer to it) in the statement object, returned
> > by cursor.prepare(). Given that prepare() is a cursor method, the
> > cursor can be made aware of each such object.
> >
> > > cursor.prepare(operation)
> > > Prepare a database operation (query or command) statement for
> > > later execution and set cursor.command. To execute a prepared
> > > statement, pass cursor.command to one of the .executeXXX()
> > > methods.
> > >
> > > cursor.command
> > > Provides access to the current prepared SQL command available
> > > through the cursor. This is set by .prepare(), all catalog
> > > methods, .execute() and .executemany().
> >
> > This is very close to what I proposed, except that my cursor.prepare()
> > returns the statement (which you provide through cursor.command).
> > This allows one to prepare more than one statement using one cursor.
>
> Maybe this is just an ODBC thing, but you cannot prepare more
> than one statement on a cursor (in ODBC the notion of cursors
> is captured by the HSTMT or statement handle). For ODBC, your
> .prepare() function would have to return a new cursor.
>
That seems to be true of the DCOracle implementation, also. Yesterday
I tried to reuse a cursor object and it failed, I had to declare a new
one. This may be why they put the prepare() method in the connection
object.
--
Eric Brunson * _ o * Faster and faster,
brunson@brunson.com * / //\ until the thrill of speed
brunson@level3.net \>>| * overcomes the fear of death
page-eric@level3.net \\,
From hniksic@iskon.hr Wed Jan 26 17:28:05 2000
From: hniksic@iskon.hr (Hrvoje Niksic)
Date: 26 Jan 2000 18:28:05 +0100
Subject: [DB-SIG] Preparing statement API
In-Reply-To: Eric Brunson's message of "Wed, 26 Jan 2000 10:22:49 -0700"
References: <9t9r9f6p6g2.fsf@mraz.iskon.hr> <388DE78F.AA271807@lemburg.com> <9t9oga9l0cs.fsf@mraz.iskon.hr> <388EBCBD.4833F613@lemburg.com> <20000126102248.B10128@level3.net>
Message-ID: <9t9r9f4g2u2.fsf@mraz.iskon.hr>
Eric Brunson writes:
> That seems to be true of the DCOracle implementation, also.
> Yesterday I tried to reuse a cursor object and it failed,
You mean, you tried:
cursor.execute("<some statement>")
and then:
cursor.execute("<another statement>")
And the second operation failed? This is against the DB API, I think.
From brunson@Level3.net Wed Jan 26 17:36:32 2000
From: brunson@Level3.net (Eric Brunson)
Date: Wed, 26 Jan 2000 10:36:32 -0700
Subject: [DB-SIG] Preparing statement API
In-Reply-To: <9t9r9f4g2u2.fsf@mraz.iskon.hr>; from hniksic@iskon.hr on Wed, Jan 26, 2000 at 06:28:05PM +0100
References: <9t9r9f6p6g2.fsf@mraz.iskon.hr> <388DE78F.AA271807@lemburg.com> <9t9oga9l0cs.fsf@mraz.iskon.hr> <388EBCBD.4833F613@lemburg.com> <20000126102248.B10128@level3.net> <9t9r9f4g2u2.fsf@mraz.iskon.hr>
Message-ID: <20000126103632.A11338@level3.net>
* Hrvoje Niksic (hniksic@iskon.hr) [000126 17:29]:
> Eric Brunson writes:
>
> > That seems to be true of the DCOracle implementation, also.
> > Yesterday I tried to reuse a cursor object and it failed,
>
> You mean, you tried:
>
> cursor.execute("<some statement>")
>
> and then:
>
> cursor.execute("<another statement>")
>
> And the second operation failed? This is against the DB API, I think.
>
Now that I think about it, I may have closed the cursor between
attempts to execute, so it may have been my fault. I'll try it again
today and if it does violate the API spec, I'll make a note of it to
the list.
--
Eric Brunson * _ o * Faster and faster,
brunson@brunson.com * / //\ until the thrill of speed
brunson@level3.net \>>| * overcomes the fear of death
page-eric@level3.net \\,
From gstein@lyra.org Thu Jan 27 02:24:28 2000
From: gstein@lyra.org (Greg Stein)
Date: Wed, 26 Jan 2000 18:24:28 -0800 (PST)
Subject: [DB-SIG] Preparing statement API
In-Reply-To: <388E04C4.2894BD5B@lemburg.com>
Message-ID:
On Tue, 25 Jan 2000, M.-A. Lemburg wrote:
>...
> Not all that different and without the need to invent new
> APIs or objects... seriously, all these gimmicks can be done
> using abstraction layers on top of the existing DB API interface.
This was the specific intent behind the current design. An unsophisticated
provider (or database) need not worry about caching, reuse,
preparation, or implementing additional methods. The provider can add the
reuse functionality and performance will increase.
On the other side, a client can code with the understanding that reuse can
occur, but it will continue to work for those providers that do not
provide the functionality. It also eliminates feature testing.
The design lowers the bar for providers, reduces the breadth of the
interface, yet provides a smooth upgrade path for more performance.
As with most of the design of DBAPI, it provides a simple/minimal
requirement on the provider implementor, a not-broad interface for the
client to worry about, and provides for more sophisticated layers of
functionality at the Python level.
Cheers,
-g
--
Greg Stein, http://www.lyra.org/
From gstein@lyra.org Thu Jan 27 02:46:33 2000
From: gstein@lyra.org (Greg Stein)
Date: Wed, 26 Jan 2000 18:46:33 -0800 (PST)
Subject: [DB-SIG] Preparing statement API
In-Reply-To: <388EBCBD.4833F613@lemburg.com>
Message-ID:
On Wed, 26 Jan 2000, M.-A. Lemburg wrote:
> Hrvoje Niksic wrote:
>...
> > This is very close to what I proposed, except that my cursor.prepare()
> > returns the statement (which you provide through cursor.command).
> > This allows one to prepare more than one statement using one cursor.
>
> Maybe this is just an ODBC thing, but you cannot prepare more
> than one statement on a cursor (in ODBC the notion of cursors
> is captured by the HSTMT or statement handle). For ODBC, your
> .prepare() function would have to return a new cursor.
This is common to all databases that I know of -- a cursor can only deal
with a single statement at a time. You cannot prepare multiple statements
on a given cursor: almost by definition, the cursor is used to hold the
results of a prepare() operation (it also holds execution context during
a fetch; actually, prepare is just one phase of a fetch).
Cheers,
-g
--
Greg Stein, http://www.lyra.org/
From mal@lemburg.com Thu Jan 27 16:28:47 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 27 Jan 2000 17:28:47 +0100
Subject: [DB-SIG] Preparing statement API
References:
Message-ID: <3890723F.EC5D02E9@lemburg.com>
Greg Stein wrote:
>
> On Tue, 25 Jan 2000, M.-A. Lemburg wrote:
> >...
> > Not all that different and without the need to invent new
> > APIs or objects... seriously, all these gimmicks can be done
> > using abstraction layers on top of the existing DB API interface.
>
> This was the specific intent behind the current design. An unsophisticated
> provider (or database) need not worry about caching, reuse,
> preparation, or implementing additional methods. The provider can add the
> reuse functionality and performance will increase.
>
> On the other side, a client can code with the understanding that reuse can
> occur, but it will continue to work for those providers that do not
> provide the functionality. It also eliminates feature testing.
>
> The design lowers the bar for providers, reduces the breadth of the
> interface, yet provides a smooth upgrade path for more performance.
>
> As with most of the design of DBAPI, it provides a simple/minimal
> requirement on the provider implementor, a not-broad interface for the
> client to worry about, and provides for more sophisticated layers of
> functionality at the Python level.
All true. Still, Hrvoje has got a point there: currently
it is not possible to prepare a statement for execution
(without actually executing it). This is needed in case
you plan to have one cursor per (often used) statement and
store these in a cache of your database abstraction
layer.
Therefore, I propose to add the new features I mentioned in a
previous mail (cursor.command and cursor.prepare()) to the next
release of the DB API spec.
Interface modules not having access to the prepare step of the
DB can then implement cursor.prepare(cmd) by simply setting
cursor.command to cmd and leaving the actual parsing and
prepare step to the execute method as is currently done.
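A cache along those lines could then be written on top of the new API,
e.g. (a rough sketch, with a hypothetical helper class, assuming the
cursor.prepare()/cursor.command extension proposed above):

class StatementCache:
    # One prepared cursor per often-used statement.
    def __init__(self, conn):
        self.conn = conn
        self.cursors = {}
    def execute(self, statement, params):
        cursor = self.cursors.get(statement)
        if cursor is None:
            cursor = self.conn.cursor()
            cursor.prepare(statement)   # parse/verify once
            self.cursors[statement] = cursor
        cursor.execute(cursor.command, params)
        return cursor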
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From roy@endeavor.med.nyu.edu Thu Jan 27 16:37:48 2000
From: roy@endeavor.med.nyu.edu (Roy Smith)
Date: Thu, 27 Jan 2000 11:37:48 -0500
Subject: [DB-SIG] Preparing statement API
In-Reply-To: <3890723F.EC5D02E9@lemburg.com>
Message-ID: <581400.3157961868@qwerky.med.nyu.edu>
Seems like a good plan to me. I can't see any downside. Old code still
works as written, and databases that don't support the new functionality
can still expose the same API with only an efficiency loss.
--On Thu, Jan 27, 2000 5:28 PM +0100 "M.-A. Lemburg"
wrote:
> Therefore, I propose to add the new features I mentioned in a
> previous mail (cursor.command and cursor.prepare()) to the next
> release of the DB API spec.
>
> Interface modules not having access to the prepare step of the
> DB can then implement cursor.prepare(cmd) by simply setting
> cursor.command to cmd and leaving the actual parsing and
> prepare step to the execute method as is currently done.
Roy Smith
New York University School of Medicine
550 First Avenue, New York, NY 10016
From hniksic@iskon.hr Thu Jan 27 16:38:59 2000
From: hniksic@iskon.hr (Hrvoje Niksic)
Date: 27 Jan 2000 17:38:59 +0100
Subject: [DB-SIG] Preparing statement API
In-Reply-To: "M.-A. Lemburg"'s message of "Thu, 27 Jan 2000 17:28:47 +0100"
References: <3890723F.EC5D02E9@lemburg.com>
Message-ID: <9t9r9f3ii58.fsf@mraz.iskon.hr>
"M.-A. Lemburg" writes:
> > This was the specific intent behind the current design. An
> > unsophisticated provider (or database) need not worry about
> > caching, reuse, preparation, or implementing additional
> > methods. The provider can add the reuse functionality and
> > performance will increase.
That's good design, and I tried to preserve it. As I said, an
unsophisticated database can implement cursor.prepare() as
lambda x: x. What you cannot do with the current API is have control
over whether a statement is cached.
> Therefore, I propose to add the new features I mentioned in a
> previous mail (cursor.command and cursor.prepare()) to the next
> release of the DB API spec.
Now that you've convinced me that it is meaningful to associate only
one statement to a cursor (at a time), I agree with your proposal.
From mal@lemburg.com Thu Jan 27 17:10:48 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 27 Jan 2000 18:10:48 +0100
Subject: [DB-SIG] Preparing statement API
References: <3890723F.EC5D02E9@lemburg.com> <9t9r9f3ii58.fsf@mraz.iskon.hr>
Message-ID: <38907C18.2309FF5D@lemburg.com>
Hrvoje Niksic wrote:
>
> "M.-A. Lemburg" writes:
> > Therefore, I propose to add the new features I mentioned in a
> > previous mail (cursor.command and cursor.prepare()) to the next
> > release of the DB API spec.
>
> Now that you've convinced me that it is meaningful to associate only
> one statement to a cursor (at a time), I agree with your proposal.
Cool. I'll keep it in mind for the next version of the
DB API spec. BTW, are there any other things which should
be done for the next version ?
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From zen@cs.rmit.edu.au Thu Jan 27 22:23:37 2000
From: zen@cs.rmit.edu.au (Stuart 'Zen' Bishop)
Date: Fri, 28 Jan 2000 09:23:37 +1100 (EST)
Subject: [DB-SIG] Next Version [was Re: Preparing statement API]
In-Reply-To: <38907C18.2309FF5D@lemburg.com>
Message-ID:
On Thu, 27 Jan 2000, M.-A. Lemburg wrote:
> Cool. I'll keep it in mind for the next version of the
> DB API spec. BTW, are there any other things which should
> be done for the next version ?
Things I (as a user) would like to see:
Common call to enable/disable autocommit. Default would have to be
'driver specific' for compatibility. Drivers could simply throw
an exception if a programmer tries to set an unsupported mode.
A common parameter style supported by all drivers. Or a method
that converts 'generic' parameter style to 'paramstyle' parameter
style (a rough sketch of such a converter follows below).
Whatever additions Digital Creations requires to enable a Generic
ZopeDA to be written that supports all compliant Python Database API
drivers (well... drivers that don't throw an exception when
con.autocommit(0) is called).
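(The converter mentioned above might look something like this for one
direction, qmark to format style; a hypothetical helper that ignores the
quoting issues a real version would have to handle:)

import string

def qmark_to_format(statement):
    # Convert '?' placeholders (qmark style) to '%s' (format style).
    # A real version must not touch '?' inside SQL string literals.
    return string.join(string.split(statement, '?'), '%s')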
--
___
// Zen (alias Stuart Bishop) Work: zen@cs.rmit.edu.au
// E N Senior Systems Alchemist Play: zen@shangri-la.dropbear.id.au
//__ Computer Science, RMIT WWW: http://www.cs.rmit.edu.au/~zen
From gstein@lyra.org Thu Jan 27 22:58:17 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 27 Jan 2000 14:58:17 -0800 (PST)
Subject: [DB-SIG] Preparing statement API
In-Reply-To: <9t9r9f3ii58.fsf@mraz.iskon.hr>
Message-ID:
On 27 Jan 2000, Hrvoje Niksic wrote:
> "M.-A. Lemburg" writes:
> > > This was the specific intent behind the current design. An
> > > unsophisticated provider (or database) need not worrying about
> > > caching, reuse, preparation, or implementing additional
> > > methods. The provider can add the reuse functionality and
> > > performance will increase.
>
> That's good design, and I tried to preserve it. As I said, an
> unsophisticated database can implement cursor.prepare() as
> lambda x: x. What you cannot do with the current API is have control
> over whether a statement is cached.
And in your change, you *still* do not have that control.
If the DBAPI provider does not want to write that much code, for whatever
reason, then they won't write it. Adding a prepare() is not going to
magically force them to do anything other than a lambda x: x. You're
fooling yourself if you think otherwise.
> > Therefore, I propose to add the new features I mentioned in a
> > previous mail (cursor.command and cursor.prepare()) to the next
> > release of the DB API spec.
>
> Now that you've convinced me that it is meaningful to associate only
> one statement to a cursor (at a time), I agree with your proposal.
I think it adds needless complication, and raises the bar for new DBAPI
implementors. The implementor must now deal with additional interface
elements and maybe some feature testing. Remember: the cursor objects
could very well be written in C, so raising the bar _at_all_ is not always
a good thing. Especially when it does not specifically introduce a new
piece of functionality -- just a different name for it.
Cheers,
-g
--
Greg Stein, http://www.lyra.org/
From gstein@lyra.org Thu Jan 27 23:25:16 2000
From: gstein@lyra.org (Greg Stein)
Date: Thu, 27 Jan 2000 15:25:16 -0800 (PST)
Subject: [DB-SIG] Preparing statement API
In-Reply-To: <3890723F.EC5D02E9@lemburg.com>
Message-ID:
On Thu, 27 Jan 2000, M.-A. Lemburg wrote:
>...
> All true. Still, Hrvoje has got a point there: currently
> it is not possible to prepare a statement for execution
> (without actually executing it). This is needed in case
> you plan to have one cursor per (often used) statement and
> store these in a cache of your database abstraction
> layer.
class PreparedCursor:
    "This class allows a user to prepare a statement without executing it."
    def __init__(self, conn, statement):
        self.cursor = conn.cursor()
        self.statement = statement
    def __getattr__(self, name):
        return getattr(self.cursor, name)
    def run(self, params):
        return self.cursor.execute(self.statement, params)
Write the code in Python. Don't force DBAPI implementors to write yet more
code.
If you continue on this track, then the DBAPI is going to have a hundred
functions. Some of them optional, some required. Implementors will be
forced to consider each one, deciding whether it can be done with their
database, whether it should be done, and whether they want to write the
necessary code to get it done.
Users will have a hundred functions to review and figure out which they
need, try to determine through feature testing whether the database
supports it, and write alternate code paths for those that don't. Why
present the user with a lot of functions to consider, when some may not
even apply?
The current design provides for silent, behind-the-scenes, worry-free
optimization for the cases where a statement is used over and over.
>...
> Interface modules not having access to the prepare step of the
> > DB can then implement cursor.prepare(cmd) by simply setting
> cursor.command to cmd and leaving the actual parsing and
> prepare step to the execute method as is currently done.
Or the user can implement a teeny little class like the one that I
provided above. The DBAPI implementor doesn't have to worry about more
methods, cursor state, or execution paths through his code.
The concept of a prepared statements exists solely as a performance
benefit. We don't have to expand the interface just to achieve each and
every possible benefit. The current interface does that just fine. And
users determined to *achieve* those benefits *can*.
Cheers,
-g
--
Greg Stein, http://www.lyra.org/
From mal@lemburg.com Fri Jan 28 10:45:06 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 28 Jan 2000 11:45:06 +0100
Subject: [DB-SIG] Preparing statement API
References:
Message-ID: <38917332.50E5C54C@lemburg.com>
Greg Stein wrote:
>
> On Thu, 27 Jan 2000, M.-A. Lemburg wrote:
> >...
> > All true. Still, Hrvoje has got a point there: currently
> > it is not possible to prepare a statement for execution
> > (without actually executing it). This is needed in case
> > you plan to have one cursor per (often used) statement and
> > store these in a cache of your database abstraction
> > layer.
>
> class PreparedCursor:
>     "This class allows a user to prepare a statement without executing it."
>     def __init__(self, conn, statement):
>         self.cursor = conn.cursor()
>         self.statement = statement
>     def __getattr__(self, name):
>         return getattr(self.cursor, name)
>     def run(self, params):
>         return self.cursor.execute(self.statement, params)
>
> Write the code in Python. Don't force DBAPI implementors to write yet more
> code.
The problem here is that syntax errors etc. will not be found
until the command is executed.
Anyway, perhaps you are right in stating that this need not
be part of the DB API spec. I'll keep it for mxODBC though.
Perhaps we should add a section to the DB API spec listing
and specifying common extensions ?!
> If you continue on this track, then the DBAPI is going to have a hundred
> functions. Some of them optional, some required. Implementors will be
> forced to consider each one, deciding whether it can be done with their
> database, whether it should be done, and whether they want to write the
> necessary code to get it done.
> Users will have a hundred functions to review and figure out which they
> need, try to determine through feature testing whether the database
> supports it, and write alternate code paths for those that don't. Why
> present the user with a lot of functions to consider, when some may not
> even apply?
>
> The current design provides for silent, behind-the-scenes, worry-free
> optimization for the cases where a statement is used over and over.
>
> >...
> > Interface modules not having access to the prepare step of the
> > DB can then implement cursor.prepare(cmd) by simply setting
> > cursor.command to cmd and leaving the actual parsing and
> > prepare step to the execute method as is currently done.
>
> Or the user can implement a teeny little class like the one that I
> provided above. The DBAPI implementor doesn't have to worry about more
> methods, cursor state, or execution paths through his code.
>
> The concept of a prepared statement exists solely as a performance
> benefit. We don't have to expand the interface just to achieve each and
> every possible benefit. The current interface does that just fine. And
> users determined to *achieve* those benefits *can*.
Well, not all the way. ODBC separates the two terms "prepare"
and "execute" for good reason: catching syntax errors
is usually not easily done in loops which were written to
execute and fetch data, since these already have to deal with
type errors and data truncations. A separate prepare step
allows the application to verify the statement and apply
special error handling related only to the executed command.
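For illustration, with the proposed cursor.prepare() and cursor.command
extensions the two error domains could be kept apart like this (a sketch,
assuming DB API 2.0 exception names on a hypothetical dbmodule):

try:
    cursor.prepare('SELECT name FROM parrots WHERE dead = ?')
except dbmodule.ProgrammingError, msg:
    print 'statement rejected:', msg    # syntax errors caught up front
else:
    for flag in ('T', 'F'):
        try:
            cursor.execute(cursor.command, (flag,))
            print cursor.fetchall()
        except dbmodule.DataError, msg:
            print 'data problem:', msg  # only data errors in the loop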
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From mal@lemburg.com Fri Jan 28 10:40:44 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 28 Jan 2000 11:40:44 +0100
Subject: [DB-SIG] Next Version [was Re: Preparing statement API]
References:
Message-ID: <3891722C.10EB69EB@lemburg.com>
Stuart 'Zen' Bishop wrote:
>
> On Thu, 27 Jan 2000, M.-A. Lemburg wrote:
>
> > Cool. I'll keep it in mind for the next version of the
> > DB API spec. BTW, are there any other things which should
> > be done for the next version ?
>
> Things I (as a user) would like to see:
>
> Common call to enable/disable autocommit. Default would have to be
> 'driver specific' for compatibility. Drivers could simply throw
> an exception if a programmer tries to set an unsupported mode.
Hmm, not all DBs provide transactions and even those that
do sometimes have different thoughts about what isolation
level to choose as default.
I don't think we'll ever have a common API for transaction
mechanisms... I guess this remains for the abstraction layer
to be implemented.
> A common parameter style supported by all drivers. Or a method
> that converts 'generic' parameter style to 'paramstyle' parameter
> style.
Ditto.
> Whatever additions Digital Creations requires to enable a Generic
> ZopeDA to be written that supports all compliant Python Database API
> drivers (well... drivers that don't throw an exception when
> con.autocommit(0) is called).
Note that as soon as you deal with DB managers such as ODBC
managers, you don't know about transaction capabilities of
a DB until you connect to it.
Why does Zope rely on the DB doing transactions ? I thought it
had its own transaction mechanism ?
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From Frank McGeough" <3891722C.10EB69EB@lemburg.com>
Message-ID: <001501bf6995$bcd25dd0$529b90d1@synchrologic.com>
Is there any notion in the dbi layer of the app discovering the capabilities
of the driver as in the ODBC spec where I can do a SQLGetInfo call to find
out some information about the driver? I just think it's awkward to throw
exceptions for information that might be readily available and easily
handled by the app.
----- Original Message -----
From: M.-A. Lemburg
To:
Cc: Database SIG
Sent: Friday, January 28, 2000 5:40 AM
Subject: Re: [DB-SIG] Next Version [was Re: Preparing statement API]
> Stuart 'Zen' Bishop wrote:
> >
> > On Thu, 27 Jan 2000, M.-A. Lemburg wrote:
> >
> > > Cool. I'll keep it in mind for the next version of the
> > > DB API spec. BTW, are there any other things which should
> > > be done for the next version ?
> >
> > Things I (as a user) would like to see:
> >
> > Common call to enable/disable autocommit. Default would have to be
> > 'driver specific' for compatibility. Drivers could simply throw
> > an exception if a programmer tries to set an unsupported mode.
>
> Hmm, not all DBs provide transactions and even those that
> do sometimes have different thoughts about what isolation
> level to choose as default.
>
> I don't think we'll ever have a common API for transaction
> mechanisms... I guess this remains for the abstraction layer
> to be implemented.
>
> > A common parameter style supported by all drivers. Or a method
> > that converts 'generic' parameter style to 'paramstyle' parameter
> > style.
>
> Ditto.
>
> > Whatever additions Digital Creations requires to enable a Generic
> > ZopeDA to be written that supports all compliant Python Database API
> > drivers (well... drivers that don't throw an exception when
> > con.autocommit(0) is called).
>
> Note that as soon as you deal with DB managers such as ODBC
> managers, you don't know about transaction capabilities of
> a DB until you connect to it.
>
> Why does Zope rely on the DB doing transactions ? I thought it
> had its own transaction mechanism ?
>
> --
> Marc-Andre Lemburg
> ______________________________________________________________________
> Business: http://www.lemburg.com/
> Python Pages: http://www.lemburg.com/python/
>
>
>
> _______________________________________________
> DB-SIG maillist - DB-SIG@python.org
> http://www.python.org/mailman/listinfo/db-sig
From gstein@lyra.org Fri Jan 28 14:38:17 2000
From: gstein@lyra.org (Greg Stein)
Date: Fri, 28 Jan 2000 06:38:17 -0800 (PST)
Subject: [DB-SIG] Next Version [was Re: Preparing statement API]
In-Reply-To: <001501bf6995$bcd25dd0$529b90d1@synchrologic.com>
Message-ID:
Capability discovery of the underlying database is very database-specific.
We have not defined any functions like that.
Unlike ODBC, the DBAPI is actually intended to be something of a
least-common-denominator. Capabilities and extensions that are specific to
a database can be implemented by that specific database provider. Trying
to create an entirely generic API for database interaction is very tough,
and is beyond the scope of what most people would like to see. DBAPI
provides a minimum set of functionality and creates a very good level of
consistency between the different providers.
Cheers,
-g
On Fri, 28 Jan 2000, Frank McGeough wrote:
> Is there any notion in the dbi layer of the app discovering the capabilities
> of the driver as in the ODBC spec where I can do a SQLGetInfo call to find
> out some information about the driver? I just think it's awkward to throw
> exceptions for information that might be readily available and easily
> handled by the app.
>
> ----- Original Message -----
> From: M.-A. Lemburg
> To:
> Cc: Database SIG
> Sent: Friday, January 28, 2000 5:40 AM
> Subject: Re: [DB-SIG] Next Version [was Re: Preparing statement API]
>
>
> > Stuart 'Zen' Bishop wrote:
> > >
> > > On Thu, 27 Jan 2000, M.-A. Lemburg wrote:
> > >
> > > > Cool. I'll keep it in mind for the next version of the
> > > > DB API spec. BTW, are there any other things which should
> > > > be done for the next version ?
> > >
> > > Things I (as a user) would like to see:
> > >
> > > Common call to enable/disable autocommit. Default would have to be
> > > 'driver specific' for compatibility. Drivers could simply throw
> > > an exception if a programmer tries to set an unsupported mode.
> >
> > Hmm, not all DBs provide transactions and even those that
> > do sometimes have different thoughts about what isolation
> > level to choose as default.
> >
> > I don't think we'll ever have a common API for transaction
> > mechanisms... I guess this remains for the abstraction layer
> > to be implemented.
> >
> > > A common parameter style supported by all drivers. Or a method
> > > that converts 'generic' parameter style to 'paramstyle' parameter
> > > style.
> >
> > Ditto.
> >
> > > Whatever additions Digital Creations requires to enable a Generic
> > > ZopeDA to be written that supports all compliant Python Database API
> > > drivers (well... drivers that don't throw an exception when
> > > con.autocommit(0) is called).
> >
> > Note that as soon as you deal with DB managers such as ODBC
> > managers, you don't know about transaction capabilities of
> > a DB until you connect to it.
> >
> > Why does Zope rely on the DB doing transactions ? I thought it
> > had its own transaction mechanism ?
> >
> > --
> > Marc-Andre Lemburg
> > ______________________________________________________________________
> > Business: http://www.lemburg.com/
> > Python Pages: http://www.lemburg.com/python/
> >
> >
> >
> > _______________________________________________
> > DB-SIG maillist - DB-SIG@python.org
> > http://www.python.org/mailman/listinfo/db-sig
>
>
> _______________________________________________
> DB-SIG maillist - DB-SIG@python.org
> http://www.python.org/mailman/listinfo/db-sig
>
--
Greg Stein, http://www.lyra.org/
From mal@lemburg.com Fri Jan 28 13:41:37 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 28 Jan 2000 14:41:37 +0100
Subject: [DB-SIG] Next Version [was Re: Preparing statement API]
References: <3891722C.10EB69EB@lemburg.com> <001501bf6995$bcd25dd0$529b90d1@synchrologic.com>
Message-ID: <38919C91.8511B480@lemburg.com>
Frank McGeough wrote:
>
> Is there any notion in the dbi layer of the app discovering the capabilities
> of the driver as in the ODBC spec where I can do a SQLGetInfo call to find
> out some information about the driver? I just think it's awkward to throw
> exceptions for information that might be readily available and easily
> handled by the app.
No. The intention is that applications should check for database
object attributes and then use this abstract information to
pursue different directions.
FYI, mxODBC provides access to SQLGetInfo via connection.getinfo().
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From zen@cs.rmit.edu.au Fri Jan 28 23:30:57 2000
From: zen@cs.rmit.edu.au (Stuart 'Zen' Bishop)
Date: Sat, 29 Jan 2000 10:30:57 +1100 (EST)
Subject: [DB-SIG] Next Version [was Re: Preparing statement API]
In-Reply-To:
Message-ID:
On Fri, 28 Jan 2000, Greg Stein wrote:
> Capability discovery of the underlying database is very database-specific.
> We have not defined any functions like that.
I understand this, and have not suggested anything that is particularly
RDBMS specific.
A common paramstyle could easily be faked with regular expressions (although
I agree this would be best left up to a higher abstraction layer - perhaps
a db library included with a future python distribution).
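Something like this, say, for rewriting qmark style to format style (a
naive sketch: it does not skip '?' inside quoted SQL literals, which a
real layer would have to handle):

import re

def qmark_to_format(statement):
    # Rewrite qmark placeholders ('?') as format placeholders ('%s').
    return re.sub(r'\?', '%s', statement)

# qmark_to_format('select * from parrots where dead = ?')
# -> 'select * from parrots where dead = %s'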
At the moment, an autocommit mode _cannot_ be handled at a higher abstraction
layer by checking for the existence of the rollback method and explicitly
calling commit after every transaction if autocommit mode is on. For reasons
why, have a look at footnote 3 of the API. Perhaps the addition of a module
global 'transactionsupport' would help:
0 - No rollback
1 - rollback to last commit
Higher integers would be used if the API ever supports more advanced
transaction mechanisms (are any of these standard?). This module global
cannot be assumed to be a constant, on account of the 'dynamically configured
interfaces' mentioned in footnote 3. Not that I can think of any that exist :-)
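An abstraction layer could then branch on it, roughly like this
('transactionsupport' is only the name proposed above -- no current
driver defines it, hence the fallback):

# Re-check on every use, since the value need not be a constant.
if not hasattr(dbmodule, 'transactionsupport'):
    # Old driver: fall back to probing for a working .rollback().
    have_rollback = hasattr(conn, 'rollback')
elif dbmodule.transactionsupport == 0:
    have_rollback = 0   # no rollback at all
else:
    have_rollback = 1   # level 1: rollback to last commit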
Could we have a Footnote stating that a higher level abstraction layer
is envisaged to add functionality such as:
autocommit
generic parameter style
minimum thread safety (?)
advanced row retrieval (rows returned as dictionary or user specified
class)
other frequent requests as approved :-)
BTW - is anyone working on this, or should I sketch out an API for
general mirth and ridicule?
> Unlike ODBC, the DBAPI is actually intended to be something of a
> least-common-denominator. Capabilities and extensions that are specific to
> a database can be implemented by that specific database provider. Trying
> to create an entirely generic API for database interaction is very tough,
> and is beyond the scope of what most people would like to see. DBAPI
> provides a minimum set of functionality and creates a very good level of
> consistency between the different providers.
It's not too tough though, as we have the advantages of being able to
steal ideas from Perl's DBI which has been in production for ages now,
as well as the advantages of Python (exception handling makes an interface
like this sooo much cleaner).
--
___
// Zen (alias Stuart Bishop) Work: zen@cs.rmit.edu.au
// E N Senior Systems Alchemist Play: zen@shangri-la.dropbear.id.au
//__ Computer Science, RMIT WWW: http://www.cs.rmit.edu.au/~zen
From als@ank-sia.com Sat Jan 29 06:13:08 2000
From: als@ank-sia.com (alexander smishlajev)
Date: Sat, 29 Jan 2000 08:13:08 +0200
Subject: [DB-SIG] Next Version
References:
Message-ID: <389284F4.B4C34160@turnhere.com>
Stuart 'Zen' Bishop wrote:
>
> A common paramstyle could easily be faked with regular expressions
> (although I agree this would be best left up to a higher
> abstraction layer - perhaps a db library included with a future
> python distribution).
what are you talking about? i failed to find any indication that
such a thing exists anywhere.
> At the moment, an autocommit mode _cannot_ be handled at a higher
> abstraction layer by checking for the existence of the rollback
> method and explicitly calling commit after every transaction if
> autocommit mode is on. For reasons why, have a look at footnote 3
> of the API. Perhaps the addition of a module global
> 'transactionsupport' would help:
> 0 - No rollback
> 1 - rollback to last commit
>
> Higher integers would be used if the API ever supports more advanced
> transaction mechanisms (are any of these standard?).
i think i saw mechanisms like nested transactions and named
transactions. by 'named' i mean having an identifier of any kind,
e.g. the tuple (gtrid, bqual) for oracle global transactions. named
transactions may also be nested to a known level (including zero,
i.e. no embedded transactions).
> Could we have a Footnote stating that a higher level abstraction layer
> is envisaged to add functionality such as:
> autocommit
> generic parameter style
> minimum thread safety (?)
> advanced row retrieval (rows returned as dictionary or user specified
> class)
> other frequent requests as approved :-)
>
> BTW - is anyone working on this, or should I sketch out an API for
> general mirth and ridicule?
i was hoping to do this, but have not yet managed to produce
anything. please do it if you can. i miss the comfort of perl
DBI very much.
best wishes,
alex.
From als@ank-sia.com Sat Jan 29 07:51:05 2000
From: als@ank-sia.com (alexander smishlajev)
Date: Sat, 29 Jan 2000 09:51:05 +0200
Subject: [DB-SIG] Next Version (was: Preparing statement API)
References: <3890723F.EC5D02E9@lemburg.com> <9t9r9f3ii58.fsf@mraz.iskon.hr> <38907C18.2309FF5D@lemburg.com>
Message-ID: <38929BE9.276C8208@turnhere.com>
"M.-A. Lemburg" wrote:
>
> BTW, are there any other things which should be done for the
> next version ?
it would be helpful if db drivers could implement methods
returning:
- list of available (configured) data sources.
- list of database users: logon name, groups/roles
- list of user groups (roles): name, members
- list of database objects: owner (schema), name, type (table,
view, alias/synonym, procedure etc.), temporary object flag,
server-specific info and maybe permission list? synonyms must
point to corresponding object.
such information is available at almost any db server, but each of
them has its own way of obtaining it.
and... in api spec v2.0 i cannot find anything about writing
blobs to database. am i missing something?
best wishes,
alex.
From Anthony Baxter Sat Jan 29 08:06:39 2000
From: Anthony Baxter (Anthony Baxter)
Date: Sat, 29 Jan 2000 19:06:39 +1100
Subject: [DB-SIG] Next Version (was: Preparing statement API)
In-Reply-To: Message from alexander smishlajev
of "Sat, 29 Jan 2000 09:51:05 +0200." <38929BE9.276C8208@turnhere.com>
Message-ID: <200001290806.TAA25882@mbuna.arbhome.com.au>
>>> alexander smishlajev wrote
> - list of available (configured) data sources.
This isn't available for all databases.
> - list of database users: logon name, groups/roles
how can I get this from, e.g., a remote Oracle server, without logging
in to the server with a user with some sort of administrative privileges?
> - list of user groups (roles): name, members
ditto.
> - list of database objects: owner (schema), name, type (table,
> view, alias/synonym, procedure etc.), temporary object flag,
> server-specific info and maybe permission list? synonyms must
> point to corresponding object.
this is so totally and utterly db-specific that it _must_ be done
in a higher level layer. To do otherwise is to risk insanity.
> and... in api spec v2.0 i cannot find anything about writing
> blobs to database. am i missing something?
last time I looked, you had to do special magic for each different
type of database. it would be nice if the DB-SIG API included this,
but I can see it being pretty nasty.
Anthony
--
Anthony Baxter
It's never too late to have a happy childhood.
From mal@lemburg.com Sat Jan 29 10:40:46 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 29 Jan 2000 11:40:46 +0100
Subject: [DB-SIG] Next Version [was Re: Preparing statement API]
References:
Message-ID: <3892C3AE.F1355616@lemburg.com>
Stuart 'Zen' Bishop wrote:
>
> On Fri, 28 Jan 2000, Greg Stein wrote:
>
> > Capability discovery of the underlying database is very database-specific.
> > We have not defined any functions like that.
>
> I understand this, and have not suggested anything that is particularly
> RDBMS specific.
>
> A common paramstyle could easily be faked with regular expressions (although
> I agree this would be best left up to a higher abstraction layer - perhaps
> a db library included with a future python distribution).
>
> At the moment, an autocommit mode _cannot_ be handled at a higher abstraction
> layer by checking for the existence of the rollback method and explicitly
> calling commit after every transaction if autocommit mode is on. For reasons
> why, have a look at footnote 3 of the API. Perhaps the addition of a module
> global 'transactionsupport' would help:
> 0 - No rollback
> 1 - rollback to last commit
While this would help, it is not needed. All you have to do is
check whether the connection object has the .rollback() method
and if it does, whether it works:
def haveTransactions(conn):
    if not hasattr(conn, 'rollback'):
        return 0
    try:
        conn.rollback()
    except:
        return 0
    else:
        return 1
The function has to be called on newly created connection objects
so as not to cause unwanted .rollback()s.
> Higher integers would be used if the API ever supports more advanced
> transaction mechanisms (are any of these standard?). This module global
> cannot be assumed to be a constant, on account of the 'dynamically configured
> interfaces' mentioned in footnote 3. Not that I can think of any that exist :-)
>
> Could we have a Footnote stating that a higher level abstraction layer
> is envisaged to add functionality such as:
> autocommit
> generic parameter style
> minimum thread safety (?)
> advanced row retrieval (rows returned as dictionary or user specified
> class)
> other frequent requests as approved :-)
>
> BTW - is anyone working on this, or should I sketch out an API for
> general mirth and ridicule?
Such a higher level layer written in Python would indeed be nice.
It should ideally focus on an OO-style representation of DB
access similar to the RogueWave OO-layer on top of JDBC... well, IMHO
anyway ;-)
> > Unlike ODBC, the DBAPI is actually intended to be something of a
> > least-common-denominator. Capabilities and extensions that are specific to
> > a database can be implemented by that specific database provider. Trying
> > to create an entirely generic API for database interaction is very tough,
> > and is beyond the scope of what most people would like to see. DBAPI
> > provides a minimum set of functionality and creates a very good level of
> > consistency between the different providers.
>
> It's not too tough though, as we have the advantages of being able to
> steal ideas from Perl's DBI which has been in production for ages now,
> as well as the advantages of Python (exception handling makes an interface
> like this sooo much cleaner).
Perhaps someone could summarize these features in Perl's DBI or
provide pointers ?! I, for one, don't have any experience with Perl's
DBI.
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From mal@lemburg.com Sat Jan 29 10:34:25 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 29 Jan 2000 11:34:25 +0100
Subject: [DB-SIG] Next Version (was: Preparing statement API)
References: <3890723F.EC5D02E9@lemburg.com> <9t9r9f3ii58.fsf@mraz.iskon.hr> <38907C18.2309FF5D@lemburg.com> <38929BE9.276C8208@turnhere.com>
Message-ID: <3892C231.B592FCC1@lemburg.com>
alexander smishlajev wrote:
>
> "M.-A. Lemburg" wrote:
> >
> > BTW, are there any other things which should be done for the
> > next version ?
>
> it would be helpful if db drivers could implement methods
> returning:
>
> - list of available (configured) data sources.
This can only be done for database managers, e.g. the ODBC
managers in Windows, iODBC and unixODBC. mxODBC will have
support for this in a future version.
> - list of database users: logon name, groups/roles
> - list of user groups (roles): name, members
Don't know where you would get these infos... you normally
have to have admin privileges to even look at those sys tables.
An abstraction layer could query the sys tables directly, BTW.
> - list of database objects: owner (schema), name, type (table,
> view, alias/synonym, procedure etc.), temporary object flag,
> server-specific info and maybe permission list? synonyms must
> point to corresponding object.
Don't know how other DBs implement this, but mxODBC has the
catalog methods for these things.
> such information is available at almost any db server, but each of
> them has its own way of obtaining it.
>
> and... in api spec v2.0 i cannot find anything about writing
> blobs to database. am i missing something?
See the type object section: blobs should be wrapped into
Binary(string) objects and then passed to the .executeXXX()
methods as parameter. On output you will see a column type
equal to the BINARY type object in the cursor description.
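For example (a sketch -- the table and column names are made up, and the
placeholder assumes a driver using qmark paramstyle):

data = open('parrot.gif', 'rb').read()
cursor.execute('INSERT INTO pics (name, pic) VALUES (?, ?)',
               ('parrot', dbmodule.Binary(data)))
# On output, the column's type code compares equal to dbmodule.BINARY:
cursor.execute('SELECT name, pic FROM pics')
for column in cursor.description:
    if column[1] == dbmodule.BINARY:
        print column[0], 'holds binary data'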
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From zen@cs.rmit.edu.au Sun Jan 30 01:15:48 2000
From: zen@cs.rmit.edu.au (Stuart 'Zen' Bishop)
Date: Sun, 30 Jan 2000 12:15:48 +1100 (EST)
Subject: [DB-SIG] Next Version [was Re: Preparing statement API]
In-Reply-To: <3892C3AE.F1355616@lemburg.com>
Message-ID:
On Sat, 29 Jan 2000, M.-A. Lemburg wrote:
> While this would help, it is not needed. All you have to do is
> check whether the connection object has the .rollback() method
> and if it does, whether it works:
>
> def haveTransactions(conn):
>     if not hasattr(conn, 'rollback'):
>         return 0
>     try:
>         conn.rollback()
>     except:
>         return 0
>     else:
>         return 1
>
> The function has to be called on newly created connection objects
> so as not to cause unwanted .rollback()s.
I've realized the obvious: faking autocommit mode in a higher level
abstraction layer will double the number of calls to the RDBMS. I think
there is definitely a need for a conn.autocommit() method for drivers that
support it.
> Such a higher level layer written in Python would indeed be nice.
> It should ideally focus on an OO-style representation of DB
> access similar to the RogueWave OO-layer on top of JDBC... well, IMHO
> anyway ;-)
I was thinking of just building on the existing Driver design.
I feel the design of the driver API would provide an excellent basis
for an abstraction layer. I've experimented with OO access to RDBMS tables,
and found that in many cases (the ones I was working on at the time) it creates
more problems than it solves.
> Perhaps someone could summarize these features in Perl's DBI or
> provide pointers ?! I, for one, don't have any experience with Perl's
> DBI.
I've attached the DBI docs to this email for people who want a read.
It's probably a few months out of date.
Below is a summary/comparison/general mess.
What PerlDBI has that Python DB doesn't (translated to pseudo-python).
I've ignored the sections that are irrelevant (we thankfully don't have
to worry about Perl's error handling and references):
con.quote("It's an ex-parrot") == "'It''s an ex-parrot'"
con.quote(None) == "NULL"
con.quote(dbi.Date()) == "TO_DATE('1999-04-01 12:53','YYYY-MM-DD HH:MM')"
etc.
This method allows us to dynamically build SQL without worrying
about the specifics of our RDBMS quoting. Actually, I don't think
PerlDBI does the last one, but the Python version could, as PyDB
handles dates. Really useful for when you are building table joins
and selection criteria from command line arguments, HTML form input
etc. They also allow con.quote(arg, type), but I don't know how
useful this would be in python - perhaps some obscure case where
python cannot easily cast a datatype but the RDBMS can.
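A minimal Python version might look like this (a sketch handling only the
two easy cases above; dates and the optional type argument are left out):

import string

def quote(value):
    if value is None:
        return 'NULL'
    return "'%s'" % string.replace(str(value), "'", "''")

# quote("It's an ex-parrot") -> "'It''s an ex-parrot'"
# quote(None)                -> 'NULL'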
con.autocommit(1)
mode = con.autocommit()
As discussed above. An interesting point from the docs - 'Drivers
should always default to AutoCommit mode. (An unfortunate choice
forced on the DBI by ODBC and JDBC conventions.)'. I see no
reason why we can't do it properly :-)
sth = con.prepare('select * from parrots')
sth.execute()
PerlDBI allows you to explicitly prepare your statement. PyDB
neatly sidesteps this issue, but it is non-obvious to people
who come from a DBI/ODBC/JDBC background. A small snippet
of code should be added to the PyDB API to demonstrate the reuse:
q = 'select * from parrots where dead = ?'
con = Connect(...)
cur = con.cursor()
cur.execute(q, ('T',))
print cur.fetchall()
cur.execute(q, ('F',))   # Reuses same RDBMS cursor if possible.
                         # Major performance gain on some RDBMS
print cur.fetchall()
q2 = 'select * from parrots where %s = ?' % 'dead'
cur.execute(q2, ('T',))  # Note that a new RDBMS cursor will be created
                         # because a different string instance is passed.
                         # String equality is not enough.
print cur.fetchall()
cur.fetch_mapping() == {'COLA': 'A Value', 'COLB': 'B Value'}
Also a fetchall variant. This would live in an abstraction layer,
not in the driver layer.
DBI.available_drivers()
List of available drivers. Leave this for version 2 of the abstraction
layer, as it would mean it couldn't support PyDB 1 or PyDB 2 drivers.
DBI.data_sources(driver)
List of available data sources. 'Note that many drivers have no way
of knowing what data sources might be available for it and thus,
typically, return an empty or incomplete list.'. Might be worth
defining the call for those few drivers that can support it.
cur.longlength = 2 * 1024 * 1024
cur.truncok = 1
Specify the maximum size of a blob returned from the database,
and if an exception should be raised if the data would need to
be truncated. Does any interface handle BLOBS nicely? I've never
used them myself, as they just seem too painful.
Things Python DB has that PerlDBI doesn't:
Better date handling
PerlDBI claims Y2K compliance because it knows nothing about dates
(although some drivers contain some date handling ability).
Generic functions for input/output optimization.
PyDB provides the arraysize attribute and the setXXXsize() methods to allow
the programmer to specify hints to the RDBMS. These hints are
driver specific in PerlDBI.
Generic function for calling stored procedures.
PerlDBI requires you to do this through the standard cursor
interface, which makes it a pain to obtain stored function output.
cur.nextset() method
PerlDBI doesn't know about multiple result sets (and neither do
I for that matter...)
Driver.threadsafety
There is no generic way of interrogating a PerlDBI driver for
thread safety - if you want to write generic DB code, it needs
to be single threaded.
An interesting bit that I think relates to Digital Creations issues
(I wasn't paying too much attention at the time...):
Autocommit
[...]
* Databases in which a transaction must be explicitly started
For these databases the intention is to have them act
like databases in which a transaction is always active
(as described above).
To do this the DBI driver will automatically begin a
transaction when AutoCommit is turned off (from the
default on state) and will automatically begin another
transaction after a commit or a rollback.
In this way, the application does not have to treat
these databases as a special case.
--
___
// Zen (alias Stuart Bishop) Work: zen@cs.rmit.edu.au
// E N Senior Systems Alchemist Play: zen@shangri-la.dropbear.id.au
//__ Computer Science, RMIT WWW: http://www.cs.rmit.edu.au/~zen
From mal@lemburg.com Sun Jan 30 21:18:41 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sun, 30 Jan 2000 22:18:41 +0100
Subject: [DB-SIG] Next Version [was Re: Preparing statement API]
References:
Message-ID: <3894AAB1.B2B965DC@lemburg.com>
Stuart 'Zen' Bishop wrote:
>
> On Sat, 29 Jan 2000, M.-A. Lemburg wrote:
>
> > While this would help, it is not needed. All you have to do is
> > check whether the connection object has the .rollback() method
> > and if it does, whether it works:
> >
> > def haveTransactions(conn):
> >     if not hasattr(conn, 'rollback'):
> >         return 0
> >     try:
> >         conn.rollback()
> >     except:
> >         return 0
> >     else:
> >         return 1
> >
> > The function has to be called on newly created connection objects
> > so as not to cause unwanted .rollback()s.
>
> I've realized the obvious: faking autocommit mode in a higher level
> abstraction layer will double the number of calls to the RDBMS. I think
> there is definitely a need for a conn.autocommit() method for drivers that
> support it.
Just my 2cents: auto commit is a *bad thing* and a pretty evil
invention of ODBC. While it does make writing ODBC drivers
simpler (ones which don't support transactions that is), it
is potentially dangerous at times, e.g. take a crashing
program: there is no way to recover from errors because the
database has no way of knowing which data is valid and which is
not. No commercial application handling "mission critical" (I
love that term ;-) data would ever want to run in auto-commit
mode.
Note that the DB API 2.0 explicitly states that
"""
Note that if the database supports an
auto-commit feature, this must be initially off. An interface method may be
provided to turn it back on.
"""
> > Such a higher level layer written in Python would indeed be nice.
> > It should ideally focus on an OO-style representation of DB
> > access similar to the RogueWave OO-layer on top of JDBC... well, IMHO
> > anyway ;-)
>
> I was thinking of just building on the existing Driver design.
> I feel the design of the driver API would provide an excellent basis
> for an abstraction layer. I've experimented with OO access to RDBMS tables,
> and found that in many cases (the ones I was working on at the time) it creates
> more problems than it solves.
The main advantage is that it offers more flexibility when it
comes to dealing with columns of different types. Anyway, this
is just my view of things, so it might well not capture the
needs of others.
> > Perhaps someone could summarize these features in Perl's DBI or
> > provide pointers ?! I, for one, don't have any experience with Perl's
> > DBI.
>
> I've attached the DBI docs to this email for people who want a read.
> It's probably a few months out of date.
> Below is a summary/comparison/general mess.
Thanks for the summary. Comments follow...
> What PerlDBI has that Python DB doesn't (translated to pseudo-python).
> I've ignored the sections that are irrelevant (we thankfully don't have
> to worry about Perl's error handling and references):
>
> con.quote("It's an ex-parrot") == "'It''s an ex-parrot'"
> con.quote(None) == "NULL"
> con.quote(dbi.Date()) == "TO_DATE('1999-04-01 12:53','YYYY-MM-DD HH:MM')"
> etc.
>
> This method allows us to dynamically build SQL without worrying
> about the specifics of our RDBMS quoting. Actually, I don't think
> PerlDBI does the last one, but the Python version could, as PyDB
> handles dates. Really useful for when you are building table joins
> and selection criteria from command line arguments, HTML form input
> etc. They also allow con.quote(arg, type), but I don't know how
> useful this would be in python - perhaps some obscure case where
> python cannot easily cast a datatype but the RDBMS can.
Quoting is generally not needed since you pass in the data
via bound parameters. The DB interface does the needed quoting
(if any) without user intervention.
> con.autocommit(1)
> mode = con.autocommit()
>
> As discussed above. An interesting point from the docs - 'Drivers
> should always default to AutoCommit mode. (An unfortunate choice
> forced on the DBI by ODBC and JDBC conventions.)'. I see no
> reason why we can't do it properly :-)
See above... all drivers should default to auto-commit mode
being *disabled*.
> sth = con.prepare('select * from parrots')
> sth.execute()
>
> PerlDBI allows you to explicitly prepare your statement. PyDB
> neatly sidesteps this issue, but it is non-obvious to people
> who come from a DBI/ODBC/JDBC background. A small snippet
> of code should be added to the PyDB API to demonstrate the reuse:
>
> q = 'select * from parrots where dead = ?'
> con = Connect(...)
> cur = con.cursor()
> cur.execute(q, ('T',))
> print cur.fetchall()
> cur.execute(q, ('F',))   # Reuses same RDBMS cursor if possible.
>                          # Major performance gain on some RDBMS
> print cur.fetchall()
> q2 = 'select * from parrots where %s = ?' % 'dead'
> cur.execute(q2, ('T',))  # Note that a new RDBMS cursor will be created
>                          # because a different string instance is passed.
>                          # String equality is not enough.
Hmm, the spec is a little unclear about this: it should of course
say that "equality is enough". mxODBC first checks for identity
and then for equality, BTW.
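I.e. roughly this check before re-preparing (a sketch of the behaviour
just described, as it might appear inside a driver's .execute()):

def can_reuse(last_statement, statement):
    # Identity is the cheap test; equality also catches equal copies.
    return last_statement is statement or last_statement == statement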
> print cur.fetchall()
>
> cur.fetch_mapping() == {'COLA': 'A Value', 'COLB': 'B Value'}
>
> Also a fetchall variant. This would live in an abstraction layer,
> not in the driver layer.
Right.
> DBI.available_drivers()
>
> List of available drivers. Leave this for version 2 of the abstraction
> layer, as it would mean it couldn't support PyDB 1 or PyDB 2 drivers.
>
> DBI.data_sources(driver)
>
> List of available data sources. 'Note that many drivers have no way
> of knowing what data sources might be available for it and thus,
> typically, return an empty or incomplete list.'. Might be worth
> defining the call for those few drivers that can support it.
>
> cur.longlength = 2 * 1024 * 1024
> cur.truncok = 1
The truncok attribute is interesting... I would suggest having
a .dontreportwarnings attribute (or something along those lines)
on both cursor and connection objects (or some other way to turn off
reporting of warnings via exceptions). There are sometimes problems
with ODBC functions returning warnings and mxODBC having to
raise a Warning exception: since some sequences cannot be completed
because of the warning exceptions, this results in a considerable loss
of functionality for some DB backends (MS SQL Server being
a prominent example). mxODBC currently solves this by simply ignoring
some warnings...
> Specify the maximum size of a blob returned from the database,
> and if an exception should be raised if the data would need to
> be truncated. Does any interface handle BLOBS nicely? I've never
> used them myself, as they just seem too painful.
These two are only relevant for interfaces to DB managers
such as ODBC managers which arbitrate DB requests to multiple
ODBC drivers. A single driver usually only handles one data source.
> Things Python DB has that PerlDBI doesn't:
>
> Better date handling
>
> PerlDBI claims Y2K compliance because it knows nothing about dates
> (although some drivers contain some date handling ability).
:-)
> Generic functions for input/output optimization.
>
> PyDB provides the arraysize attribute and the setXXXsize() methods to allow
> the programmer to specify hints to the RDBMS. These hints are
> driver specific in PerlDBI.
>
> Generic function for calling stored procedures.
>
> PerlDBI requires you to do this through the standard cursor
> interface, which makes it a pain to obtain stored function output.
>
> cur.nextset() method
>
> PerlDBI doesn't know about multiple result sets (and neither do
> I for that matter...)
>
> Driver.threadsafety
>
> There is no generic way of interrogating a PerlDBI driver for
> thread safety - if you want to write generic DB code, it needs
> to be single threaded.
>
> An interesting bit that I think relates to Digital Creations issues
> (I wasn't paying too much attention at the time...):
Not sure what you're talking about here... what issues were there ?
> Autocommit
>
> [...]
>
> * Databases in which a transaction must be explicitly started
>
> For these databases the intention is to have them act
> like databases in which a transaction is always active
> (as described above).
>
> To do this the DBI driver will automatically begin a
> transaction when AutoCommit is turned off (from the
> default on state) and will automatically begin another
> transaction after a commit or a rollback.
>
> In this way, the application does not have to treat
> these databases as a special case.
We should also have a similar paragraph in the DB API spec.
Just curious: In what state would a DB be if no transaction
was explicitly started ?
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From zen@cs.rmit.edu.au Sun Jan 30 23:56:35 2000
From: zen@cs.rmit.edu.au (Stuart 'Zen' Bishop)
Date: Mon, 31 Jan 2000 10:56:35 +1100 (EST)
Subject: [DB-SIG] Next Version [was Re: Preparing statement API]
In-Reply-To: <3894AAB1.B2B965DC@lemburg.com>
Message-ID:
On Sun, 30 Jan 2000, M.-A. Lemburg wrote:
> Quoting is generally not needed since you pass in the data
> via bound parameters. The DB interface does the needed quoting
> (if any) without user intervention.
I have a largish Perl program that couldn't function without it - sort of
a command line data miner. Now that I think of it, it probably could have
been written using named parameters (rather than positional - which is what
generic PerlDBI supports). It would still be nice to have access to this
though (eg. a GUI SQL query builder or DBA tool that needs to be able to
show the generated SQL). Every DA needs the code, so it would be nice to
access it :-)
> > con.autocommit(1)
> > mode = con.autocommit()
> >
> > As discussed above. An interesting point from the docs - 'Drivers
> > should always default to AutoCommit mode. (An unfortunate choice
> > forced on the DBI by ODBC and JDBC conventions.)'. I see no
> > reason why we can't do it properly :-)
>
> See above... all drivers should default to auto-commit mode
> being *disabled*.
Yes - properly :-) Death to MySQL! *duck*.
> Not sure what you're talking about here... what issues were there ?
I'm not sure either :-) I seem to remember some discussion about transaction
starts, but I can't find it in the archives :-( It is either FUD, an
overactive imagination on my part, or relates to the following from
http://www.zope.org/Members/petrilli/DARoadmap :
self._begin - This is called at the beginning of every transaction.
Depending on your database, and its implementation of "implicit
transactions", you may or may not have to do anything here besides a
pass.
If the PyDB API states that a new transaction is implicitly started on
connection, rollback or commit, or the first executable command afterwards,
then self._begin in a ZopeDA->PyDB adaptor can just be 'pass'. This
might be part of the SQL spec - I'm not sure.
I can't think of any databases I've dealt with where self._begin would
be needed. Can anyone think of situations where it *is* needed? I think
ZopeDA's need the call, as they are designed to support non-SQL databases
as well, such as IMAP, LDAP, plain text, ad infinitum.
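In code, the adaptor method in question would reduce to something like
this (a sketch -- the class and attribute names are made up, only
self._begin comes from the roadmap page above):

class PyDBAdapter:
    def __init__(self, connection):
        self.db = connection
    def _begin(self):
        # The backend opens a transaction implicitly on the first
        # statement after connect/commit/rollback, so nothing to do.
        pass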
--
___
// Zen (alias Stuart Bishop) Work: zen@cs.rmit.edu.au
// E N Senior Systems Alchemist Play: zen@shangri-la.dropbear.id.au
//__ Computer Science, RMIT WWW: http://www.cs.rmit.edu.au/~zen
From petrilli@amber.org Mon Jan 31 03:58:20 2000
From: petrilli@amber.org (Christopher Petrilli)
Date: Sun, 30 Jan 2000 22:58:20 -0500
Subject: [DB-SIG] Next Version [was Re: Preparing statement API]
In-Reply-To: <3894AAB1.B2B965DC@lemburg.com>; from mal@lemburg.com on Sun, Jan 30, 2000 at 10:18:41PM +0100
References: <3894AAB1.B2B965DC@lemburg.com>
Message-ID: <20000130225820.B25682@trump.amber.org>
M.-A. Lemburg [mal@lemburg.com] wrote:
>
> Just my 2cents: auto commit is a *bad thing* and a pretty evil
> invention of ODBC. While it does make writing ODBC drivers
> simpler (ones which don't support transactions that is), it
> is potentially dangerous at times, e.g. take a crashing
> program: there is no way to recover from errors because the
> database has no way of knowing which data is valid and which is
> not. No commercial application handling "mission critical" (I
> love that term ;-) data would ever want to run in auto-commit
> mode.
I'm going to back this up. ANY serious application MUST manage its
own transactions, as otherwise you can't ever hope to control
failure modes. Zope will ALWAYS run databases in non-autocommit mode.
Note that X/Open CLI specifies that it must be off by default, I believe;
I can check the spec tomorrow. ODBC violates this.
> Quoting is generally not needed since you pass in the data
> via bound parameters. The DB interface does the needed quoting
> (if any) without user intervention.
I can imagine some situations where it would be useful, but no Zope user
has EVER asked for this. And it's probably one of the bigger database
application users in the Python world :-)
> See above... all drivers should default to auto-commit mode
> being *disabled*.
Amen, and all drivers that Digital Creations is involved in will always
behave this way, period.
> Just curious: In what state would a DB be if no transaction
> was explicitly started ?
Most databases have an implicit BEGIN TRANSACTION on any of the following:
* All DDL
* All DML
* SELECT FOR UPDATE
Ta'da :-) This is true for Oracle, DB2 and Sybase that I know of.
Chris
--
| Christopher Petrilli
| petrilli@amber.org
From mal@lemburg.com Mon Jan 31 09:42:45 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 31 Jan 2000 10:42:45 +0100
Subject: [DB-SIG] Next Version [was Re: Preparing statement API]
References:
Message-ID: <38955915.B83E1B1A@lemburg.com>
Stuart 'Zen' Bishop wrote:
>
> On Sun, 30 Jan 2000, M.-A. Lemburg wrote:
>
> > Quoting is generally not needed since you pass in the data
> > via bound parameters. The DB interface does the needed quoting
> > (if any) without user intervention.
>
> I have a largish Perl program that couldn't function without it - sort of
> a command line data miner. Now that I think of it, it probably could have
> been written using named parameters (rather than positional - which is what
> generic PerlDBI supports). It would still be nice to have access to this
> though (eg. a GUI SQL query builder or DBA tool that needs to be able to
> show the generated SQL). Every DA needs the code, so it would be nice to
> access it :-)
I'm not sure what you would do with a quoting method... it is
usually only needed when you want to pass raw data to the DB
embedded in the SQL command itself, and that case is already nicely
dealt with in the .execute() method through the use of bound
parameters.
Perhaps I'm missing something obvious ?
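For the record, the bound-parameter form referred to above looks like
this (qmark paramstyle assumed):

# The driver quotes the value itself; embedded apostrophes are safe.
cursor.execute('INSERT INTO parrots (name) VALUES (?)',
               ("It's an ex-parrot",))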
> > > con.autocommit(1)
> > > mode = con.autocommit()
> > >
> > > As discussed above. An interesting point from the docs - 'Drivers
> > > should always default to AutoCommit mode. (An unfortunate choice
> > > forced on the DBI by ODBC and JDBC conventions.)'. I see no
> > > reason why we can't do it properly :-)
> >
> > See above... all drivers should default to auto-commit mode
> > being *disabled*.
>
> Yes - properly :-) Death to MySQL! *duck*.
MySQL was created as a read-only database -- where you typically
don't need transactions ;-)
Seriously, the above is of course only applicable to data
sources providing transactions. If they don't, you wouldn't
want to use them for critical applications and would instead go
with one of the largish DBs, e.g. Oracle, DB2, etc.
> > Not sure what you're talking about here... what issues were there ?
>
> I'm not sure either :-) I seem to remember some discussion about transaction
> starts, but I can't find it in the archives :-( It is either FUD, an
> overactive imagination on my part, or relates to the following from
> http://www.zope.org/Members/petrilli/DARoadmap :
>
> self._begin - This is called at the beginning of every transaction.
> Depending on your database, and its implementation of "implicit
> transactions", you may or may not have to do anything here besides a
> pass.
>
> If the PyDB API states that a new transaction is implicitly started on
> connection, rollback or commit, or the first executable command afterwards,
> then self._begin in a ZopeDA->PyDB adaptor can just be 'pass'. This
> might be part of the SQL spec - I'm not sure.
Not sure about the SQL spec, but ODBC clearly states that a
transaction is implicitly started when the cursor is created.
> I can't think of any databases I've dealt with where self._begin would
> be needed. Can anyone think of situations where it *is* needed? I think
> ZopeDA's need the call, as they are designed to support non-SQL databases
> as well such as IMAP, LDAP, plain text, ad infinitum.
Those examples don't support transactions and thus wouldn't need
the ._begin() call anyways ;-) I think the ._begin() method
is an abstraction in the DA layer to initialize a cursor and
not necessarily related to starting a transaction... could
be wrong though.
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/
From alex@ank-sia.com Mon Jan 31 14:37:05 2000
From: alex@ank-sia.com (alexander smishlajev)
Date: Mon, 31 Jan 2000 16:37:05 +0200
Subject: [DB-SIG] Next Version
References: <3890723F.EC5D02E9@lemburg.com> <9t9r9f3ii58.fsf@mraz.iskon.hr> <38907C18.2309FF5D@lemburg.com> <38929BE9.276C8208@turnhere.com> <3892C231.B592FCC1@lemburg.com>
Message-ID: <38959E11.9C6BFBDA@turnhere.com>
"M.-A. Lemburg" wrote:
>
> > - list of available (configured) data sources.
>
> This can only be done for database manager, e.g. the ODBC
> managers in Windows, iODBC and unixODBC. mxODBC will have
> support for this in a future version.
such information is available at least for Oracle connections. for
PostgreSQL, a list of host databases may be obtained after a successful
connection.
> > - list of database users: logon name, groups/roles
> > - list of user groups (roles): name, members
>
> Don't know where you would get these infos... you normally
> have to have admin privileges to even look at those sys tables.
not always. as far as i know, this information is by default available
to the public on PostgreSQL, MS SQL and Oracle. of course, the db admin may
reconfigure this, but at least a list of the groups/roles assigned to the
current login can usually be obtained, as well as the login name itself.
> An abstraction layer could query the sys tables directly, BTW.
yes, it could. but that would require some knowledge about the
implementation of such things in _each_ database. isn't that a job for
the db driver?
> > - list of database objects: owner (schema), name, type (table,
> > view, alias/synonym, procedure etc.), temporary object flag,
> > server-specific info and maybe permission list? synonyms must
> > point to corresponding object.
>
> Don't know how other DBs implement this, but mxODBC has the
> catalog methods for these things.
as far as i know, each server has its own way to access catalog
information. again, i think that the db driver should know a way of doing
this for the database it handles.
> > and... in api spec v2.0 i cannot find anything about writing
> > blobs to database. am i missing something?
>
> See the type object section: blobs should be wrapped into
> Binary(string) objects and then passed to the .executeXXX()
> methods as parameter. On output you will see a column type
> equal to the BINARY type object in the cursor description.
i was told that this approach does not work for DCOracle, at least not
with procedure calls. perhaps the allowed LOB operations should somehow
be reported by the db driver?
best wishes,
alex.