From akuchlin@mems-exchange.org Mon May 1 20:14:32 2000 From: akuchlin@mems-exchange.org (Andrew M. Kuchling) Date: Mon, 1 May 2000 15:14:32 -0400 (EDT) Subject: [DB-SIG] Object/relational mapper code Message-ID: <200005011914.PAA08507@amarok.cnri.reston.va.us> I've released some unmaintained object/relational mapping code; see http://starship.python.net/crew/amk/python/unmaintained/ordb.html The idea is that you write a schema as a set of S-expressions (see the page for an example) and can then read and write objects out of a set of corresponding relational tables. This code is far from finished, and will not be maintained (though I'll answer simple questions about it); I wrote the code during our quest to choose an object storage medium, and the RDBMS idea has mostly been abandoned (though I thought it was promising). If anyone wants to take over maintenance of this code, please do! -- A.M. Kuchling http://starship.python.net/crew/amk/ Extraordinary people survive under the most terrible circumstances and they become more extraordinary because of it. -- Robertson Davies From lucy@madeforchina.com Tue May 2 16:13:09 2000 From: lucy@madeforchina.com (lucy) Date: Tue, 2 May 2000 23:13:09 +0800 Subject: [DB-SIG] (no subject) Message-ID: <010801bfb448$edcc56e0$9308a8c0@mfc.com> This is a multi-part message in MIME format. 
[empty message] From pieper@viaregio.de Tue May 2 17:48:35 2000 From: pieper@viaregio.de (Oliver Pieper) Date: Tue, 02 May 2000 18:48:35 +0200 Subject: [DB-SIG] Remote connection to Oracle DB possible? References: <390953F9.6F1EA49E@viaregio.de> <39096856.F5B098F1@lemburg.com> Message-ID: <390F06E3.A60A9E21@viaregio.de> Thanks a lot for the information, those were the hints I'd been looking for! I hope you don't mind if I start to ask you a few more specific questions in the future (and in German...). 8-) Oliver -- Oliver Pieper E: pieper@viaregio.de W: www.oliverpieper.de "M.-A. Lemburg" wrote: > > Oliver Pieper wrote: > > > > Hello, > > > > I hope I found the right mailing list to ask that question. If not, please point me in the right direction. > > I guess so :-) > > > What I'd like to know: is there a way to connect Python/Zope to an Oracle database on a remote server (over an Internet connection)? The readmes of DCOracle and ZOracleDA mention environment variables that have to be set to local Oracle paths, which seems to imply that you can only connect to a DB on the same machine. Is this correct? > > > > If so, is there any other way to make a remote connection to Oracle? 
> > You could use an ODBC-ODBC bridge or one of the ODBC multi-tier > driver kits together with mxODBC and the recently announced > Zope DA for it. > > -- > Marc-Andre Lemburg > ______________________________________________________________________ > Business: http://www.lemburg.com/ > Python Pages: http://www.lemburg.com/python/ From jsturman@cvtci.com.ar Wed May 3 14:13:15 2000 From: jsturman@cvtci.com.ar (Javier Sturman) Date: Wed, 3 May 2000 10:13:15 -0300 (ART) Subject: [DB-SIG] MySQL Message-ID: Maybe this is a stupid question but is there any module for MySQL? From thorsten@mats.gmd.de Wed May 3 14:45:07 2000 From: thorsten@mats.gmd.de (Thorsten Horstmann) Date: Wed, 03 May 2000 15:45:07 +0200 Subject: [DB-SIG] MySQL References: Message-ID: <39102D63.7CED51B4@mats.gmd.de> Javier Sturman wrote: > > Maybe this is a stupid question but is there any module for MySQL? Hi, try: http://dustman.net/andy/python/MySQLdb/ I have had good experiences with this module. -Thorsten -- Life is like a beta release - bug reports to: god@heaven.org From nnorwitz@yahoo.com Wed May 3 18:08:52 2000 From: nnorwitz@yahoo.com (Neal Norwitz) Date: Wed, 03 May 2000 13:08:52 -0400 Subject: [DB-SIG] Using CLOBs in Oracle 8.1.5 Message-ID: <39105D24.7EED38D0@staff.reverseauction.com> Hi! I am able to send CLOBs to Oracle 8.1.5, but I cannot read them from Oracle. I get the error: ORA-24813: cannot send or receive an unsupported LOB when I try to do a fetch (fetchall or fetchone). I am able to get the description and it shows the CLOB column as a STRING of the correct size (4000). This code works on all tables without a CLOB. 
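One workaround sometimes used for fetch errors like ORA-24813 (an assumption for illustration, not something confirmed in this thread) is to have Oracle hand the CLOB back as a plain string in SQL via DBMS_LOB.SUBSTR, so the driver never has to fetch a LOB locator. A minimal sketch; the table and column names are hypothetical:

```python
def clob_fetch_sql(table, clob_col, key_col, max_len=4000):
    """Build Oracle SQL that returns a CLOB column as a plain string
    via DBMS_LOB.SUBSTR(lob, amount, offset), avoiding LOB locators
    in the fetch path. Hypothetical helper; names are illustrative."""
    return ("SELECT DBMS_LOB.SUBSTR(%s, %d, 1) FROM %s WHERE %s = :1"
            % (clob_col, max_len, table, key_col))

sql = clob_fetch_sql("docs", "body", "id")
# pass `sql` to cursor.execute() with the row key as the bind parameter
```

Note that DBMS_LOB.SUBSTR in a SQL statement is limited to 4000 characters, the same limit as Neal's column, so this only helps when the CLOB fits in that bound.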
Thanks, Neal From jeremy@cnri.reston.va.us Wed May 3 20:47:20 2000 From: jeremy@cnri.reston.va.us (Jeremy Hylton) Date: Wed, 3 May 2000 15:47:20 -0400 (EDT) Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL In-Reply-To: References: Message-ID: <14608.33352.474711.599202@goon.cnri.reston.va.us> I am having some trouble figuring out how to use the new PyGreSQL DB-API interface. The issues that are puzzling me are transactions, cursors, and database-Python type conversions. I've addressed this message to both lists (pygresql and db-sig) because it's not clear where the people with the answers are. The pgdb module is new and the users of the pygresql list don't seem to have much experience with the DB-API. I'm a relative novice at all this. My knowledge of SQL has been cobbled together from Phil Greenspun's Web pages, the PostgreSQL docs, and The Practical SQL Handbook (Bowman et al.). I expect that I'm puzzled largely because I am a novice. Issue #1: Transactions This may be a culture issue. PostgreSQL executes transactions in unchained mode -- aka autocommit. According to Greenspun's SQL for Web Nerds, Oracle's SQL*Plus works the same way. I assumed that this was the way things ought to be :-). It looks like I was wrong, because the DB-API specifies implicit transactions that are started automatically and committed or rolled back on demand. I think I prefer explicit creation of transactions and autocommit when not specified. (Hard to say for sure, since it's all I've ever used.) Is there a standard way to accomplish this using the DB-API? Also, there seem to be methods for commit and rollback, but not begin. I assume that the omission is because the DB API does not do autocommit by default. However, if there is a way to enable autocommit, there ought to be a method for creating a transaction. Issue #2: Cursors The DB API describes a cursor tersely. (Dare I say cursorily?) The current PyGreSQL implementation does not use the database's CURSOR facility; it merely emulates one. 
Does that matter? Is the DB API definition of cursor intended to use a database cursor? Issue #3: Types The new PyGreSQL module returns floating point numbers for int types using the fetch methods. The old, non-DB API version of the module returned an int. I can't find any description of how to handle conversions between database types and Python types in the DB API. Did I miss it, or is there a standard way to handle the conversions? The current behavior (DB int -> PY float) seems broken, but I can't tell if that's just following the spec. Jeremy From python-db-sig@teleo.net Fri May 5 21:50:28 2000 From: python-db-sig@teleo.net (Patrick Phalen) Date: Fri, 5 May 2000 13:50:28 -0700 Subject: [DB-SIG] Fwd: [ANN] InterBase open sourced and offer of Python support Message-ID: <00050513565304.01169@quadra.teleo.net> I'm forwarding this, since it has information relevant to Python. ---------- ## Forwarded Message ## ---------- Subject: [Zope-Annce] Zope and InterBase Date: Fri, 05 May 2000 10:13:11 -0700 From: "Markus Kemper" I would like to share with you all our latest news in that we are taking InterBase 6.0 open source under the IPL (a MPL derivative). Our draft can be viewed at: http://www.interbase.com/IPL.html We expect InterBase 6.0 and the source to be officially released in the June or July time frame. Currently, the InterBase 6.0 beta for Linux, Solaris and Windows is available for download from our new site at: http://www.interbase.com/ We have been receiving a lot of requests for support of InterBase with Zope and Python. We would very much like to assist in this effort if we can. InterBase would be delighted to see full product support via Python and Zope. I believe that there exists an adaptor, a Python database module (at www.python.org) for InterBase (Kinterbasdb 0.2, written by Alexander Kuznetsov). Again, if we can be of help in this effort we would like to do what we can to assist. Please visit our new site and let us know what you think. 
Kind regards, Markus Kemper InterBase Software mkemper@interbase.com From Bradley McCrorey Hi all, I usually write all my database interfaces for web-enabled apps using php on apache webserver. However, I'm thinking about building a windows interface for a change. I've done a BIT of python programming, basically just running through the basics to get a feel for the syntax and so on. I found the mysqldb module and downloaded the latest version, but I'm unclear as to how to install it under windows. Has anyone ever used this module under windows? Ideally I'd like to get it running under windows, then build a tcl/tk interface using python which would connect to my database. If there are any resources on the web for this sort of thing, please let me know. Any help is greatly appreciated. Cheers, Bradley McCrorey globeproductions.net
From adustman@comstar.net Sat May 6 04:18:55 2000 From: adustman@comstar.net (Andy Dustman) Date: Fri, 5 May 2000 23:18:55 -0400 (EDT) Subject: [DB-SIG] NEWBIE QUESTION - MySQLdb for Windows In-Reply-To: <007001bfb702$ec7a0320$0100a8c0@globe> Message-ID: On Sat, 6 May 2000, Bradley McCrorey wrote: > Hi all, > > I usually write all my database interfaces for web-enabled apps using > php on apache webserver. however, I'm thinking about building a > windows interface for a change. I've done a BIT of python programming, > basically just running through the basics to get a feel for the syntax > and so on. I found the mysqldb module and downloaded the latest > version, but I'm unclear as to how to install it under windows. > > Has anyone ever used this module under windows? Ideally I'd like to > get it running under windows then build a tcl/tk interface using > python which would connect to my database. Someone has. Not me, mind you, but other people have told me it works. The compile.py script is supposed to get the job done. Obviously (or perhaps not) you need a C compiler as well. -- andy dustman | programmer/analyst | comstar.net, inc. telephone: 770.485.6025 / 706.549.7689 | icq: 32922760 | pgp: 0xc72f3f1d "Therefore, sweet knights, if you may doubt your strength or courage, come no further, for death awaits you all, with nasty, big, pointy teeth!" From brunson@level3.net Mon May 8 21:51:42 2000 From: brunson@level3.net (Eric Brunson) Date: Mon, 8 May 2000 14:51:42 -0600 Subject: [DB-SIG] API Specs document Message-ID: <20000508145142.A16274@level3.net> Does there exist a formal DB API specifications document? Thanks, e. -- Eric Brunson - brunson@level3.net - page-eric@level3.net "When governments fear the people there is liberty. When the people fear the government there is tyranny." 
- Thomas Jefferson From gstein@lyra.org Mon May 8 22:07:53 2000 From: gstein@lyra.org (Greg Stein) Date: Mon, 8 May 2000 14:07:53 -0700 (PDT) Subject: [DB-SIG] API Specs document In-Reply-To: <20000508145142.A16274@level3.net> Message-ID: On Mon, 8 May 2000, Eric Brunson wrote: > Does there exist a formal DB API specifications document? A simple perusal of the python.org web site would point you to the right place: http://www.python.org/topics/database/ And from there to the spec. -g -- Greg Stein, http://www.lyra.org/ From Benjamin.Schollnick@usa.xerox.com Tue May 9 13:02:22 2000 From: Benjamin.Schollnick@usa.xerox.com (Schollnick, Benjamin) Date: Tue, 09 May 2000 08:02:22 -0400 Subject: [DB-SIG] Help w/ODBC + Access 2000 / CGI problem? Message-ID: <8B049D471C61D111847C00805FA67B1C04FE2F54@usa0129ms1.ess.mc.xerox.com> Folks, I'm running into a slight problem here that I'm having difficulty tracking down. I'm running Python v1.52 with Win32 extensions using ODBC to communicate with a MS Access 2000 database. I've successfully used both MxODBC & the ODBC with the Win32 extensions.... Here's the "funny" part. One of the tables in the database is a "Users" table, containing, name, password, comments, etc. I can access my user record (Record 0), but if I attempt to login as any other user, the login is successful, but if I try any comments, the CGI seems to lockup until the process times out from IIS. I don't see any obvious problems, with the database, nor the ODBC code that I am using, in fact, the USER STATUS command I'm issuing reads all the data back from a COOKIE, not from the Database.... To illustrate: - Login (Read / compare data from database) |- Writes user record information to a cookie - User Status |- Reads data from cookie, displays on screen - User Edit |- Reads account from cookie, displays for editing, saves back to database I'm using the Cookie module from Timothy O'Malley, v2.25 05/03/2000. Has anyone hit a problem like this before? 
I tried going through the archives, but didn't see anything obvious in the web searches that I performed. Here's the ODBC wrapper I wrote... feel free to point out any problems. Keep in mind, this is my first real attempt at ODBC-related code.

import dbi, ODBC  # ODBC modules
import time       # standard time module

def login_to_homepage(username='', passwd=''):
    login_string = ''
    # open a database connection: 'datasource/user/password'
    database_conn = ODBC.Windows.Connect('homepages', username, passwd)
    return database_conn

def logout_database(database):
    database.close()

def do_SQL_statement(sql_statement, database_handle, database_description=None):
    crsr = database_handle.cursor()              # create a cursor
    crsr.execute(sql_statement)                  # execute some SQL
    database_description = crsr.description
    results = crsr.fetchall()
    crsr.close()
    return results, database_description

def do_update_statement(sql_statement, database_handle):
    crsr = database_handle.cursor()              # create a cursor
    results = crsr.execute(sql_statement)        # execute some SQL
    #results = crsr.fetchall()
    crsr.close()
    return results

def do_insert_into_statement(sql_statement, data, database_handle):
    crsr = database_handle.cursor()              # create a cursor
    results = crsr.execute(sql_statement, data)  # execute some SQL
    crsr.close()
    return results

From Benjamin.Schollnick@usa.xerox.com Tue May 9 16:21:58 2000 From: Benjamin.Schollnick@usa.xerox.com (Schollnick, Benjamin) Date: Tue, 09 May 2000 11:21:58 -0400 Subject: [DB-SIG] Insane question & correction.... Message-ID: <8B049D471C61D111847C00805FA67B1C04FE2F57@usa0129ms1.ess.mc.xerox.com> Folks, two things... 1) My earlier message was slightly mistaken. I'm running MS Access 97 with that CGI problem that I mentioned earlier today. 2) I don't see any way to validate the state of the ODBC connection from the DB specs v2. For example, I'm logging into the ODBC connection currently without a password or username. 
I've experimented, and don't get an exception if the database requires a username/password even if the username/password are incorrect (but it still seems to work!?!?!). I'm suspecting that the python process hang that I'm seeing may be due to the fact that the CGI app can't connect for some reason. An exception does not appear to be happening (no trace messages in browser window), so how can I check for this? - Benjamin From jmcgraw Hello all, I'm working with a multiplatform (Linux/Unix/Win32/...) database server called Ovrimos. It has some nice features, but _no_ way to access the server from within Python on Unix (no Python interface, no Unix ODBC driver). I'm trying to develop an application based on FreeBSD. The following is a snippet from Dimitrios Souflis at Altera, the company that makes Ovrimos: >Although I have absolutely no knowledge of Python, I've been posing >the issue every now and then for the past 2 years. Unfortunately, >it doesn't seem to convince the management. Currently, they're >infatuated with PHP. Do you have any concrete numbers of the >following of Python, that I could use? (Or even convince many of >your friends to email requesting a Python interface? ;-> ) > >We'd probably have already written the ODBC driver for Unix, had we >had an SDK for it. As it stands, I suspect that we have to write >sql.h and sqlext.h by hand. And I don't even know how such a driver >is _called_. Is it dlopen'ed directly, is there a Driver Manager? >Our ODBC Reference covers only Windows (to be frank, I didn't even >know there were ODBC drivers for Unix till a year ago). > >If you already use ODBC for Unix, you could point us to relevant >resources. Since we put a lot of weight on ODBC, and the Windows >driver is already written, a Unix port seems less distant than >writing the Python interface from scratch. If you provide me with >some starting point, I promise to raise the issue again. 
> >Best regards, >-- >Dimitrios Souflis dsouflis@ovrimos.com >Ovrimos S.A., Greece http://www.altera.gr/dsouflis >>>TinyScheme download site: http://www.altera.gr/dsouflis/tinyscm.html I would appreciate anyone that's interested sending him an email (dsouflis@ovrimos.com) indicating interest. Apparently Python is not even on their radar screen. I apologize for the lengthiness of this message. Cheers, Joel Mc Graw DataBill, LLC From mal@lemburg.com Wed May 10 21:44:10 2000 From: mal@lemburg.com (M.-A. Lemburg) Date: Wed, 10 May 2000 22:44:10 +0200 Subject: [DB-SIG] Python advocacy References: <016f01bfbabc$e7a3e880$1400000a@jmcgraw.databill.com> Message-ID: <3919CA1A.7F286EE4@lemburg.com> jmcgraw wrote: > > Hello all, > > I'm working with a multiplatform (Linux/Unix/Win32/...) database server > called Ovrimos. It has some nice features, but _no_ way to access the > server from within Python on Unix (no Python interface, no Unix ODBC > driver). I'm trying to develop an application based on FreeBSD. You could use mxODBC to access the database provided they port the ODBC driver to Unix. > The following is an a snip from Dimitrios Souflis at Altera, the company > that makes Ovrimos: > > >Although I have absolutely no knowledge of Python, I've been posing > >the issue every now and then for the past 2 years. Unfortunately, > >it doesn't seem to convince the management. Currently, they're > >infatuated with PHP. Do you have any concrete numbers of the > >following of Python, that I could use? (Or even convince many of > >your friends to email requesting a Python interface? ;-> ) > > > >We'd probably have already written the ODBC driver for Unix, had we > >had an SDK for it. As it stands, I suspect that we have to write > >sql.h and sqlext.h by hand. And I don't even know how such a driver > >is _called_. Is it dlopen'ed directly, is there a Driver Manager? 
> >Our ODBC Reference covers only Windows (to be frank, I didn't even > >know there were ODBC drivers for Unix till a year ago). It should be possible to port a Windows ODBC driver (provided it exists) to Unix. For reference on how this can be done, see the public domain ODBC driver available for MySQL (www.mysql.org). > >If you already use ODBC for Unix, you could point us to relevant > >resources. Since we put a lot of weight on ODBC, and the Windows > >driver is already written, a Unix port seems less distant than > >writing the Python interface from scratch. If you provide me with > >some starting point, I promise to raise the issue again. > > > >Best regards, > >-- > >Dimitrios Souflis dsouflis@ovrimos.com > >Ovrimos S.A., Greece http://www.altera.gr/dsouflis > >>>TinyScheme download site: http://www.altera.gr/dsouflis/tinyscm.html > > I would appreciate anyone that's interested sending him an email > (dsouflis@ovrimos.com) indicating interest. Apparently Python is not even > on their radar screen. > > I apologize for the lengthiness of this message. > > Cheers, > > Joel Mc Graw > DataBill, LLC -- Marc-Andre Lemburg ______________________________________________________________________ Business: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From jlp@apti.com Thu May 11 19:35:59 2000 From: jlp@apti.com (James L. Preston) Date: Thu, 11 May 2000 14:35:59 -0400 Subject: [DB-SIG] mxODBC and large objects (BLOBS) Message-ID: <391AFD8F.7505A1A6@apti.com> I am trying a few different methods of working with a kind of document management system under PostgreSQL on Solaris. I have had some success with PygreSQL, but I thought that I would try to use mxODBC, since this is supposed to conform to the 2.0 API, etc. I got the unixODBC driver and some things seem to work OK, but I can't figure out how to handle large objects (BLOBs). Are these not supported by mxODBC, or not by the ODBC drivers for PostgreSQL, or am I missing something? 
At this point, I think that I will go back to PygreSQL, which has a "large object" python type that wraps around the lo_export kind of postgreSQL functions, which is very helpful, but I would like to get a better idea of what is going on with the ODBC stuff. thanks, jlp From mal@lemburg.com Thu May 11 20:47:45 2000 From: mal@lemburg.com (M.-A. Lemburg) Date: Thu, 11 May 2000 21:47:45 +0200 Subject: [DB-SIG] mxODBC and large objects (BLOBS) References: <391AFD8F.7505A1A6@apti.com> Message-ID: <391B0E61.50CABFF9@lemburg.com> "James L. Preston" wrote: > > I am trying few different methods of working with a kind of document > management system > under PostgreSQL on Solaris. > I have had some success with PygreSQL, but I thought that I would try to > use mxODBC, since this > is supposed to conform the 2.0-api, etc. I got the unixODBC driver and > some things seem to work OK, > but I can't figure out how to handle large objects (BLOBS) . Are these > not supported by mxODBC, > or not by the ODBC drivers for PostgreSQL, or am I missing something? At > this point, > I think that I will go back to PygreSQL, which has a "large object" > python type that wraps around > the lo_export kind of postgreSQL functions, which is very helpful, but > I would like to get a better idea > of what is going on with the ODBC stuff. There's really no magic to it: you simply pass the BLOB as normal Python string to the .execute() method and ODBC does the rest for you. Output is a string too, BTW. 
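The pass-it-as-a-string model described above can be illustrated with a minimal sketch; sqlite3 is used here as a stand-in DB-API module (an assumption for illustration, since the thread concerns mxODBC and PostgreSQL): the BLOB goes in as an ordinary bound parameter and comes back out as a plain value, with no separate large-object API.

```python
import sqlite3

# In-memory database as a stand-in for any DB-API connection.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body BLOB)")

# The BLOB is just an ordinary parameter to .execute() ...
payload = b"\x00\x01binary document contents\xff"
cur.execute("INSERT INTO docs (body) VALUES (?)", (payload,))
conn.commit()

# ... and fetching it back yields a plain value, not a LOB handle.
cur.execute("SELECT body FROM docs WHERE id = 1")
(body,) = cur.fetchone()
```

The same pattern applies under mxODBC per MAL's description, with the module's own paramstyle substituted for the `?` placeholder.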
-- Marc-Andre Lemburg ______________________________________________________________________ Business: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From jeremy@cnri.reston.va.us Thu May 11 22:24:06 2000 From: jeremy@cnri.reston.va.us (Jeremy Hylton) Date: Thu, 11 May 2000 17:24:06 -0400 (EDT) Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL Message-ID: <14619.9462.983643.36518@goon.cnri.reston.va.us> Has anyone had a chance to review the questions I posted to this list last week? I was surprised by the behavior of the new pgsqldb module with respect to transactions, cursors, and types. I read the spec, but I still can't tell if the implementation is correct. The spec is silent, for example, on what Python type should be returned for an SQL int type. Does the DB spec have an author? Is this the right place to ask questions? (I realize there are lots of reasons for questions to go unanswered.) jeremy From billtut@microsoft.com Thu May 11 22:38:32 2000 From: billtut@microsoft.com (Bill Tutt) Date: Thu, 11 May 2000 14:38:32 -0700 Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL Message-ID: <4D0A23B3F74DD111ACCD00805F31D8101D8BD094@RED-MSG-50> > -----Original Message----- > From: Jeremy Hylton [mailto:jeremy@cnri.reston.va.us] > > > Has anyone had a chance to review the questions I posted to this list > last week? I was surprised by the behavior of the new pgsqldb module > with respect to transactions, cursors, and types. I read the spec, > but I still can't tell if the implementation is correct. The spec is > silent, for example, on what Python type should be returned for an SQL > int type. > A Python int should be returned here, anything else is just plain silly. A DBI cursor doesn't have to map to a real database cursor. The existence of the cursor is just a way to allow a separation between a database connection and individual statements on that connection, as some databases allow you to do, IIRC. > Does the DB spec have an author? 
Is this the right place to ask > questions? (I realize there are lots of reasons for questions to go > unanswered.) > MA Lemburg was the main driving force behind the v2.0 DBI spec. WRT the autocommit issue, ODBC also starts in autocommit mode as well. This is probably one of those you like what you know questions. There are arguments on both sides of the issue. Bill From jeremy@cnri.reston.va.us Thu May 11 22:48:03 2000 From: jeremy@cnri.reston.va.us (Jeremy Hylton) Date: Thu, 11 May 2000 17:48:03 -0400 (EDT) Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL In-Reply-To: <4D0A23B3F74DD111ACCD00805F31D8101D8BD094@RED-MSG-50> References: <4D0A23B3F74DD111ACCD00805F31D8101D8BD094@RED-MSG-50> Message-ID: <14619.10900.610709.211637@goon.cnri.reston.va.us> >>>>> "BT" == Bill Tutt writes: BT> A Python int should be returned here, anything else is just BT> plain silly. I thought so. But the spec says nothing, so I wasn't sure. BT> A DBI cursor doesn't have to map to a real BT> database cursor. The existence of the cursor is just a way to BT> allow a separation between a database connection and individual BT> statements on that connection, as some databases allow you to BT> do, IIRC. It would be helpful to have a high-level mechanism to use real DB cursors. I needed them for an application that was scanning a very large table. Without a cursor provided by the DB, the pgsql library copies the entire table out of the database; then the pgsqldb module copies the entire table into Python objects. >> Does the DB spec have an author? Is this the right place to ask >> questions? (I realize there are lots of reasons for questions to >> go unanswered.) >> BT> MA Lemburg was the main driving force behind the v2.0 DBI spec. BT> WRT the autocommit issue, ODBC also starts in autocommit mode as BT> well. This is probably one of those you like what you know BT> questions. There are arguments on both sides of the issue. 
It's fine for it to be a "know what you like" issue, except that there isn't any standard way to put yourself in the other mode. There ought to be an autocommit flag to the database connection and a beginTransaction method. Are there any other issues outstanding that would lead to a third version of the API? Jeremy From gstein@lyra.org Thu May 11 23:00:12 2000 From: gstein@lyra.org (Greg Stein) Date: Thu, 11 May 2000 15:00:12 -0700 (PDT) Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL In-Reply-To: <14619.10900.610709.211637@goon.cnri.reston.va.us> Message-ID: On Thu, 11 May 2000, Jeremy Hylton wrote: >... > BT> A DBI cursor doesn't have to map to a real > BT> database cursor. The existence of the cursor is just a way to > BT> allow a separation between a database connection and individual > BT> statements on that connection, as some databases allow you to > BT> do, IIRC. > > It would be helpful to have a high-level mechanism to use real DB > cursors. Um... there is. The DBAPI allows for real DB cursors. Did you mean something else? > I needed them for an application that was scanning a very > large table. Without a cursor provided by the DB, the pgsql library > copies the entire table out of the database; then the pgsqldb module > copies the entire table into Python objects. Poor implementation. The DBAPI is designed around true cursors, so pgsqldb should use that design. I believe the problem may be that pgsqldb was designed against an older PostgreSQL, which didn't have cursors. > >> Does the DB spec have an author? Is this the right place to ask > >> questions? (I realize there are lots of reasons for questions to > >> go unanswered.) > >> > > BT> MA Lemburg was the main driving force behind the v2.0 DBI spec. > > BT> WRT the autocommit issue, ODBC also starts in autocommit mode as > BT> well. This is probably one of those you like what you know > BT> questions. There are arguments on both sides of the issue. 
> > It's fine for it to be a "know what you like" issue, except that there > isn't any standard way to put yourself in the other mode. There ought > to be an autocommit flag to the database connection and a > beginTransaction method. The specification states that autocommit must be OFF by DEFAULT. That is a solid definition, and all DBAPI-compatible modules obey that. Bill was a bit misleading there :-) Turning autocommit on/off is performed in a module-specific manner. It is not part of the DBAPI spec. beginTransaction is implied by issuing a statement. The transaction ends when you call .commit(). > Are there any other issues outstanding that would lead to a third > version of the API? Not really. A few here and there. Dunno if anybody has been collecting them. Cheers, -g -- Greg Stein, http://www.lyra.org/ From gstein@lyra.org Thu May 11 23:08:19 2000 From: gstein@lyra.org (Greg Stein) Date: Thu, 11 May 2000 15:08:19 -0700 (PDT) Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL In-Reply-To: <14619.9462.983643.36518@goon.cnri.reston.va.us> Message-ID: On Thu, 11 May 2000, Jeremy Hylton wrote: >... > Does the DB spec have an author? Is this the right place to ask > questions? (I realize there are lots of reasons for questions to go > unanswered.) A bit more here: The DBAPI 1.0 spec was primarily myself and Michael Lorton while we were still at eShop. The DBAPI 2.0 spec was primarily Marc-Andre and Andy Dustman, with me pestering them :-). There was also plenty of other input from the members of the DB-SIG, but I'd say most of the changes were MAL and Andy. MAL would be the guy who might have a wish list and/or suggested list of changes for a 3.0 spec. 
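The transaction model Greg describes (no begin(); a transaction opens implicitly with the first statement and ends at .commit() or .rollback()) can be sketched with sqlite3 standing in for a DB-API module (sqlite3 is an assumption here, chosen only because it is self-contained):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
conn.commit()

cur.execute("INSERT INTO t (x) VALUES (1)")  # implicitly opens a transaction
conn.rollback()                              # discard the uncommitted insert
cur.execute("SELECT COUNT(*) FROM t")
after_rollback = cur.fetchone()[0]           # the insert was rolled back

cur.execute("INSERT INTO t (x) VALUES (1)")
conn.commit()                                # now the insert is durable
cur.execute("SELECT COUNT(*) FROM t")
after_commit = cur.fetchone()[0]
```

Per the spec, autocommit is off by default, so the first SELECT sees zero rows and only the committed insert survives; how to switch a given module into autocommit mode, as Greg notes, is module-specific.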
Cheers, -g -- Greg Stein, http://www.lyra.org/ From tbryan@starship.python.net Thu May 11 23:58:58 2000 From: tbryan@starship.python.net (Tom Bryan) Date: Thu, 11 May 2000 18:58:58 -0400 (EDT) Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL In-Reply-To: <14619.9462.983643.36518@goon.cnri.reston.va.us> Message-ID: On Thu, 11 May 2000, Jeremy Hylton wrote: > Has anyone had a chance to review the questions I posted to this list > last week? I haven't, but I can answer a couple of quick questions. > Does the DB spec have an author? Yes. This group. :-) From time to time, someone raises valid concerns with the current specification. See the archives for threads and discussions concerning the API, particularly the long thread that led to the new specification. > Is this the right place to ask questions? Yes. ---Tom From mal@lemburg.com Fri May 12 09:27:37 2000 From: mal@lemburg.com (M.-A. Lemburg) Date: Fri, 12 May 2000 10:27:37 +0200 Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL References: Message-ID: <391BC079.13F375CC@lemburg.com> Greg Stein wrote: > > On Thu, 11 May 2000, Jeremy Hylton wrote: > >... > > Does the DB spec have an author? Is this the right place to ask > > questions? (I realize there are lots of reasons for questions to go > > unanswered.) > > A bit more here: > > The DBAPI 1.0 spec was primarily myself and Michael Lorton while we were > still at eShop. > > The DBAPI 2.0 spec was primarily Marc-Andre and Andy Dustman, with me > pestering them :-). There was also plenty of other input from the members > of the DB-SIG, but I'd say most of the changes were MAL and Andy. > > MAL would be the guy who might have a wish list and/or suggested list of > changes for a 3.0 spec. There were some suggestions for 3.0 (or maybe just 2.1). The archives have the details as usual ;-) One addition which I tried in mxODBC was the addition of a .prepare(command) method and a .command read-only attribute. 
The .prepare() method allows preparing a command for execution on the cursor *without* actually executing it. The .command attribute is always set to the currently cached command on the cursor. Now, to execute a prepared command, all you have to do is pass .command to the .execute() method as command. Then the re-use optimization will notice that you are executing a prepared command and skip the normally needed prepare step -- very useful for pooling cursors with often used commands such as id-name mappings etc. Modules not capable of providing such a method should not implement it and ones which only know at run-time should raise a NotSupported error (like for all other optional features). -- Marc-Andre Lemburg ______________________________________________________________________ Business: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From gstein@lyra.org Fri May 12 09:48:22 2000 From: gstein@lyra.org (Greg Stein) Date: Fri, 12 May 2000 01:48:22 -0700 (PDT) Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL In-Reply-To: <391BC079.13F375CC@lemburg.com> Message-ID: On Fri, 12 May 2000, M.-A. Lemburg wrote: >... > One addition which I tried in the mxODBC was the addition > of a .prepare(command) method and a .command read-only > attribute. > > The .prepare() method allows preparing a command for execution > on the cursor *without* actually executing it. The .command > attribute is always set to the currently cached command > on the cursor. Now, to execute a prepared command, all you > have to do is pass .command to the .execute() method as > command. Then the re-use optimization will notice that you > are executing a prepared command and skip the normally > needed prepare step -- very useful for pooling cursors > with often used commands such as id-name mappings etc. Understood, but I've never seen the utility of doing the prepare() ahead of time (as opposed to simply waiting for that first execution to occur).
After that point, the current mechanism and your prepare/command are equivalent. Introducing prepare/command is Yet More Stuff for a person to deal with. Cheers, -g -- Greg Stein, http://www.lyra.org/ From mal@lemburg.com Fri May 12 11:37:17 2000 From: mal@lemburg.com (M.-A. Lemburg) Date: Fri, 12 May 2000 12:37:17 +0200 Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL References: Message-ID: <391BDEDD.ABAA9D3B@lemburg.com> Greg Stein wrote: > > On Fri, 12 May 2000, M.-A. Lemburg wrote: > >... > > One addition which I tried in the mxODBC was the addition > > of a .prepare(command) method and a .command read-only > > attribute. > > > > The .prepare() method allows preparing a command for execution > > on the cursor *without* actually executing it. The .command > > attribute is always set to the currently cached command > > on the cursor. Now, to execute a prepared command, all you > > have to do is pass .command to the .execute() method as > > command. Then the re-use optimization will notice that you > > are executing a prepared command and skip the normally > > needed prepare step -- very useful for pooling cursors > > with often used commands such as id-name mappings etc. > > Understood, but I've never seen the utility of doing the prepare() ahead > of time (as opposed to simply waiting for that first execution to occur). > After that point, the current mechanism and your prepare/command are > equivalent. As I said: it's very useful for implementing a cursor pool. That's what I use it for with great success (ODBC allows you to separate the prepare and execute step, so adding .prepare() was no big deal). > Introducing prepare/command is Yet More Stuff for a person to deal with. It's optional, so if a DB interface writer doesn't like it, she doesn't have to implement it.
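As an illustration of the proposal, here is a rough sketch of the .prepare()/.command protocol from the caller's side. The Cursor class below is a hypothetical stand-in, not mxODBC's actual implementation; its prepare_calls counter merely simulates counting the expensive server-side prepare step:

```python
class Cursor:
    """Hypothetical cursor with the optional .prepare()/.command extension."""
    def __init__(self):
        self.command = None        # read-only in a real implementation
        self._plan = None
        self.prepare_calls = 0     # instrumentation for this sketch only

    def prepare(self, operation):
        # Parse/plan the command *without* executing it.
        self.prepare_calls += 1    # stands in for the server-side parse
        self._plan = operation     # stands in for the parsed plan
        self.command = operation

    def execute(self, operation, params=()):
        # Re-use the cached plan when the prepared command is passed back in.
        if operation is not self.command:
            self.prepare(operation)
        return (self._plan, params)  # a real cursor would run the plan


cur = Cursor()
cur.prepare("SELECT name FROM t WHERE id = ?")   # parse once, up front
for i in range(3):
    cur.execute(cur.command, (i,))               # no re-parse on any iteration
print(cur.prepare_calls)  # -> 1
```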
-- Marc-Andre Lemburg ______________________________________________________________________ Business: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From gstein@lyra.org Fri May 12 12:05:13 2000 From: gstein@lyra.org (Greg Stein) Date: Fri, 12 May 2000 04:05:13 -0700 (PDT) Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL In-Reply-To: <391BDEDD.ABAA9D3B@lemburg.com> Message-ID: On Fri, 12 May 2000, M.-A. Lemburg wrote: > Greg Stein wrote: >... > > Understood, but I've never seen the utility of doing the prepare() ahead > > of time (as opposed to simply waiting for that first execution to occur). > > After that point, the current mechanism and your prepare/command are > > equivalent. > > As I said: it's very useful for implementing a cursor pool. That's > what I use it for with great success (ODBC allows you to separate > the prepare and execute step, so adding .prepare() was no big > deal). But why not create cursors and let them hang around? Why prepare them first? Seems simple to just do this:

class PreparedCursor:
    def __init__(self, conn, statement):
        self.cursor = conn.cursor()
        self.statement = statement
    def run(self, args):
        return self.cursor.execute(self.statement, args)
    def __getattr__(self, name):
        return getattr(self.cursor, name)

No need to change the API or to prepare cursors ahead of time. It just doesn't seem to buy you much to add the prepare/command stuff. > > Introducing prepare/command is Yet More Stuff for a person to deal with. > > It's optional, so if a DB interface writer doesn't like, > she doesn't have to implement it. Woah. Bad attitude :-) ODBC is a mess because parts are optional. It is so big and with many "optional" features or varying ways to do things. One of the nice things about the DBAPI is that you are pretty well guaranteed that ALL of its methods are implemented.
Some of that guarantee actually broke in DBAPI 2.0, though :-( By keeping the definition small, you can guarantee that it will be available and implemented. The bigger and more optional you make it, the more you do disservice to the users. They can't rely on a feature without doing feature-tests first, thus introducing additional code paths and complexity into their code. Back to prepare/command: the feature seems marginal, and then to say "well, you don't have to implement it" is just the icing on the cake... :-) The "cursor pool" just isn't selling me... Cheers, -g -- Greg Stein, http://www.lyra.org/ From mal@lemburg.com Fri May 12 13:49:54 2000 From: mal@lemburg.com (M.-A. Lemburg) Date: Fri, 12 May 2000 14:49:54 +0200 Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL References: Message-ID: <391BFDF2.5B64853@lemburg.com> Greg Stein wrote: > > On Fri, 12 May 2000, M.-A. Lemburg wrote: > > Greg Stein wrote: > >... > > > Understood, but I've never seen the utility of doing the prepare() ahead > > > of time (as opposed to simply waiting for that first execution to occur). > > > After that point, the current mechanism and your prepare/command are > > > equivalent. > > > > As I said: it's very useful for implementing a cursor pool. That's > > what I use it for with great success (ODBC allows you to separate > > the prepare and execute step, so adding .prepare() was no big > > deal). > > But why not create cursors and let them hang around? Why prepare them > first? > > Seems simple to just do this: > > class PreparedCursor: > def __init__(self, conn, statement): > self.cursor = conn.cursor() > self.statement = statement > def run(self, args): > return self.cursor.execute(self.statement, args) > def __getattr__(self, name): > return getattr(self.cursor, name) > > No need to change the API or to prepare cursors ahead of time. > > It just doesn't seem to buy you much to add the prepare/command stuff. 
Well, it does pay off to have some more control over when things happen (e.g. syntax errors are usually raised by the prepare step). But never mind... it was just an idea that was very straightforward to implement for mxODBC. > > > Introducing prepare/command is Yet More Stuff for a person to deal with. > > > > It's optional, so if a DB interface writer doesn't like, > > she doesn't have to implement it. > > Woah. Bad attitude :-) > > ODBC is a mess because parts are optional. It is so big and with many > "optional" features or varying ways to do things. > > One of the nice things about the DBAPI is that you are pretty well > guaranteed that ALL of its methods are implemented. Some of that guarantee > actually broke in DBAPI 2.0, though :-( Mostly because a few databases don't provide the needed services... not implementing a feature due to one or two interfaces not providing them (while the remaining 20 could without problem), doesn't seem like the right way to go, IMHO. > By keeping the definition small, you can guarantee that it will be > available and implemented. The bigger and more optional you make it, the > more you do disservice to the users. They can't rely on a feature without > doing feature-tests first, thus introducing additional code paths and > complexity into their code. > > Back to prepare/command: the feature seems marginal, and then to say > "well, you don't have to implement it" is just the icing on the cake... > :-) The "cursor pool" just isn't selling me... I'll let others chime in here... I only use mxODBC, so I'm pretty much satisfied already ;-) -- Marc-Andre Lemburg ______________________________________________________________________ Business: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From hniksic@iskon.hr Fri May 12 14:21:46 2000 From: hniksic@iskon.hr (Hrvoje Niksic) Date: 12 May 2000 15:21:46 +0200 Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL In-Reply-To: "M.-A.
Lemburg"'s message of "Fri, 12 May 2000 14:49:54 +0200" References: <391BFDF2.5B64853@lemburg.com> Message-ID: "M.-A. Lemburg" writes: > > One of the nice things about the DBAPI is that you are pretty well > > guaranteed that ALL of its methods are implemented. Some of that > > guarantee actually broke in DBAPI 2.0, though :-( > > Mostly because a few databases don't provide the needed services... > not implemeting a feature due to one or two interfaces not providing > them (while the remaining 20 could without problem), doesn't seem > like the right way to go, IMHO. Agreed. The Python DB API seems to have been designed as the least common denominator of database capabilities. While it is a nice thing to be able to guarantee that all the methods are implemented, most people desire better support. For example, IMO Perl's DBI seems to be much more feature-rich, and the stuff it supports is generally implemented by the database backends I've been using. > > Back to prepare/command: the feature seems marginal, and then to > > say "well, you don't have to implement it" is just the icing on > > the cake... :-) The "cursor pool" just isn't selling me... > > I'll let others chime in here... I only use mxODBC, so I'm pretty > much satisfied already ;-) I like the idea of an explicit prepare step for repeated queries, although I'm not sure I really like the proposed interface. From ajung@sz-sb.de Fri May 12 14:37:59 2000 From: ajung@sz-sb.de (Andreas Jung) Date: Fri, 12 May 2000 15:37:59 +0200 Subject: [DB-SIG] Using CLOBs in Oracle 8.1.5 In-Reply-To: <39105D24.7EED38D0@staff.reverseauction.com>; from nnorwitz@staff.reverseauction.com on Wed, May 03, 2000 at 01:08:52PM -0400 References: <39105D24.7EED38D0@staff.reverseauction.com> Message-ID: <20000512153759.A14874@sz-sb.de> On Wed, May 03, 2000 at 01:08:52PM -0400, Neal Norwitz wrote: > Hi! > > I am able to send CLOBs Oracle 8.1.5, but I cannot read them from > Oracle. 
> > I get the error: ORA-24813: cannot send or receive an unsupported LOB > when I try to do a fetch (fetchall or fetchone). > > I am able to get the description and it shows the CLOB column as a > STRING of the correct size (4000). > This code works on all tables without a CLOB. Hi, did you get rid of the problem? I am still working on a solution to the same problem, but with much less success :-( Andreas From nnorwitz@yahoo.com Fri May 12 14:56:37 2000 From: nnorwitz@yahoo.com (Neal Norwitz) Date: Fri, 12 May 2000 09:56:37 -0400 Subject: [DB-SIG] Using CLOBs in Oracle 8.1.5 References: <39105D24.7EED38D0@staff.reverseauction.com> <20000512153759.A14874@sz-sb.de> Message-ID: <391C0D95.EA461C5B@yahoo.com> A colleague investigated this problem; as far as I know, it appears that the oci_.c code is using the Oracle version 7 APIs to get the data. I think CLOBs weren't supported in version 7. So it appears that oci_.c needs to be upgraded (possibly a lot of work) to use the version 8 APIs in order to make CLOBs work. We, at least temporarily, switched to use LONGs which do work. Neal -- Andreas Jung wrote: > On Wed, May 03, 2000 at 01:08:52PM -0400, Neal Norwitz wrote: > > Hi! > > > > I am able to send CLOBs Oracle 8.1.5, but I cannot read them from > > Oracle. > > > > I get the error: ORA-24813: cannot send or receive an unsupported LOB > > when I try to do a fetch (fetchall or fetchone). > > > > I am able to get the description and it shows the CLOB column as a > > STRING of the correct size (4000). > > This code works on all tables without a CLOB. > > Hi, > > did you get rid of the problem? I am still working on a solution > to the same problem, but with much less success :-( > > Andreas __________________________________________________ Do You Yahoo!? Talk to your friends online with Yahoo! Messenger.
http://im.yahoo.com From adustman@comstar.net Tue May 16 05:50:18 2000 From: adustman@comstar.net (Andy Dustman) Date: Tue, 16 May 2000 00:50:18 -0400 (EDT) Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL In-Reply-To: <391BC079.13F375CC@lemburg.com> Message-ID: On Fri, 12 May 2000, M.-A. Lemburg wrote: > One addition which I tried in the mxODBC was the addition > of a .prepare(command) method and a .command read-only > attribute. > > The .prepare() method allows preparing a command for execution > on the cursor *without* actually executing it. The .command > attribute is always set to the currently cached command > on the cursor. Now, to execute a prepared command, all you > have to do is pass .command to the .execute() method as > command. Then the re-use optimization will notice that you > are executing a prepared command and skip the normally > needed prepare step -- very useful for pooling cursors > with often used commands such as id-name mappings etc. > > Modules not capable of providing such a method should > not implement it and ones which only know at run-time > should raise a NotSupported error (like for all other > optional features). Well, actually, prepare() and command should be mandatory, in fact. For any database that they don't make sense on (MySQL), prepare() should be implemented as:

def prepare(self, command):
    self.command = command

Also trivial to implement in a C interface (this is left as an exorcism for the reader). Some general digression/spewing: Why prepare()? Because, queries take time to parse. Complex queries take longer. In most databases, queries are parsed in the server. If you have a commonly used query, one that properly factored out the actual query parameters, having a "pre-compiled" query in the server is a big win. Even better is if these queries could have some persistence in the server so they could be reused by many clients, which may each supply a different parameter set.
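Note that the common "one query, many parameter sets" case described above is also what the standard .executemany() method covers: the module is free to prepare the statement once and re-bind parameters per row. A small sketch, with sqlite3 assumed as the DB API module purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ids (id INTEGER, name TEXT)")

# One statement, many parameter sets: the module may parse the command
# once and re-bind the parameters for each row.
rows = [(i, "name-%d" % i) for i in range(100)]
cur.executemany("INSERT INTO ids VALUES (?, ?)", rows)
conn.commit()

cur.execute("SELECT COUNT(*) FROM ids")
print(cur.fetchone()[0])  # -> 100
```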
I have direct experience with two different native database APIs. The first is ODBC, which I think most people would justifiably consider a bloated monstrosity. The other is MySQL, which is very simple. In fact, it's too damn simple: All queries are literal. It does no type conversion at all. (Maybe this is a good thing.) Result sets are either entirely stored in the client or fetched row-by-row from the server (unless you want to do multiple queries with LIMIT). Almost nothing does BLOBs very well (try doing a 1MB BLOB in MySQL; try bigger). But there can be, I think, something closer to an idealized universal database interface API. And it will use CORBA. I've been thinking about this for a while now. CORBA can pass all the normal string and numeric types via GIOP/IIOPs. BLOBs you can't directly pass, but you can create a BLOB within the ORB and then write data to it (maybe seek on it like a big file). The data transfer would be transparent to the application. What would it look like?

a) Database object: somewhat like a connection object, only it would be a unique object that implements the database.

b) Query object: pre-parsed/compiled query. Create with QueryCreate(query). Persistent.

c) Transaction object: Provides a context for executing queries, i.e. its creation implicitly begins a transaction, there is an explicit commit() method, and implicit rollback() upon destruction (and explicit rollback()). Not persistent.

d) Cursor object: Executes queries in the context of a transaction. Would have two sets of parameters: a sequence of query parameters, and a sequence of sequences of data parameters (for multi-row insertion). Would implement various fetching methods. Not persistent.

e) In a clean-sheet database implementation, columns could contain any sort of conceivable CORBA object, thus becoming an object-oriented relational database.

No database that I've ever heard of does this, yet. (Oracle might have some CORBA stuff, somewhere.)
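The proposed objects (a)-(d) can be mocked up in plain Python to show how they would relate. Everything below is a hypothetical sketch of the interface, not an existing CORBA binding; the class and method names are only the ones suggested above:

```python
class Query:
    """(b) A pre-parsed query; persistent, shareable between clients."""
    def __init__(self, text):
        self.text = text  # a real ORB would hold the server-side parse tree

class Transaction:
    """(c) Creation implicitly begins a transaction; commit() is explicit,
    and rollback() is implicit on destruction if commit() never happened."""
    def __init__(self):
        self.state = "active"
    def commit(self):
        self.state = "committed"
    def rollback(self):
        self.state = "rolled back"
    def __del__(self):
        if self.state == "active":
            self.rollback()

class Cursor:
    """(d) Executes queries in the context of a transaction."""
    def __init__(self, txn):
        self.txn = txn
    def execute(self, query, params=()):
        assert self.txn.state == "active", "transaction already ended"
        return (query.text, params)  # a real cursor would run the query

class Database:
    """(a) The unique object that implements the database."""
    def QueryCreate(self, text):
        return Query(text)
    def transaction(self):
        return Transaction()


db = Database()
q = db.QueryCreate("SELECT name FROM t WHERE id = ?")  # persistent
txn = db.transaction()                                  # implicit begin
cur = Cursor(txn)
print(cur.execute(q, (1,)))  # -> ('SELECT name FROM t WHERE id = ?', (1,))
txn.commit()                 # explicit end
```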
CORBA interfaces could be constructed for existing databases. But, this may not be necessary or practical. The Python CORBA bindings are very Python-like. They follow the usual Python conventions of package.module.class.method, attributes. So the DB API NG (Next Generation) could be designed to operate as the CORBA interface would, if it really existed (pseudo-CORBA). Realistically, we are not too terribly far from that point now. From a performance perspective, a native CORBA database interface ought to be pretty good, certainly no worse than ODBC. I am thinking enterprise-level, industrial-strength database here, something that would be on a par with Oracle or Informix. Comments? If I get inspired, I may try doing something like this as an alternate/experimental interface for MySQLdb, as a proof-of-concept. -- andy dustman | programmer/analyst | comstar.net, inc. telephone: 770.485.6025 / 706.549.7689 | icq: 32922760 | pgp: 0xc72f3f1d "Therefore, sweet knights, if you may doubt your strength or courage, come no further, for death awaits you all, with nasty, big, pointy teeth!" From billtut@microsoft.com Tue May 16 07:29:01 2000 From: billtut@microsoft.com (Bill Tutt) Date: Mon, 15 May 2000 23:29:01 -0700 Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL Message-ID: <4D0A23B3F74DD111ACCD00805F31D8101D8BD0B6@RED-MSG-50> > From: Andy Dustman [mailto:adustman@comstar.net] > > > Some general digression/spewing: > > Why prepare()? Because, queries take time to parse. Complex > queries take > longer. In most databases, queries are parsed in the server. > If you have a > commonly used query, one that properly factored out the actual query > parameters, having a "pre-compiled" query in the server is a > big win. Even > better is if these queries could have some persistence in the > server so > they could be reused by many clients, which may each supply a > different > parameter set. > Well, duh...
However, there's no reason you need to expose this concept to the user in an explicit API. Example implementation:

# Dictionary mapping query text -> opaque prepared query reference (sproc
# name, underlying db access mechanism or something entirely different)
preparedQuery = {}

def execute(self, cmd):
    if not preparedQuery.has_key(cmd):
        preparedQuery[cmd] = conn.prepare(cmd)
    conn.execute(preparedQuery[cmd])

> I have direct experience with two different native database APIs. The > first is ODBC, which I think most people would justifiably consider a > bloated monstrosity. ODBC certainly is annoying, but then most flexible APIs into databases from the C level are. Databases are complex enough that life starts to suck bigtime. OLE DB is better than ODBC by a long shot, but it's also a very complex beast. [.....] > > But there can be, I think, something closer to an idealized universal > database interface API. And it will use CORBA. > > I've been thinking about this for awhile now. CORBA can pass all the > normal string and numeric types via GIOP/IIOPs. BLOBs you > can't directly > pass, but you can create a BLOB within the ORB and then write > data to it > (maybe seek on it like a big file). The data transfer would > be transparent > to the application. > > What would it look like? > > a) Database object: somewhat like a connection object, only > it would be a > unique object that implements the database. > > b) Query object: pre-parsed/compiled query. Create with > QueryCreate(query). Persistent. > > c) Transaction object: Provides a context for executing > queries, i.e. it's > creation implicitly begins a transaction, there is an explicit > commit() method, and implicit rollback() upon destruction > (and explicit > rollback()). Not persistent. > > d) Cursor object: Executes queries in the context of a > transaction. Would > have two sets of parameters: a sequence of query parameters, and a > sequence of sequences of data parameters (for multi-row > insertion).
Would > implement various fetching methods. Not persistent. > > e) In a clean-sheet database implementation, columns could contain any > sort of conceivable CORBA object, thus becoming an object-oriented > relational database. > > No database that I've ever heard of does this, yet. (Oracle might have > some CORBA stuff, somewhere.) CORBA interfaces could be > constructed for > existing databases. But, this may not be necessary or practical. The > Python CORBA bindings are very Python-like. They follow the > usual Python > conventions of package.module.class.method, attributes. So > the DB API NG > (Next Generation) could be designed to operate as the CORBA interface > would, if it really existed (psuedo-CORBA). Realistically, we > are not too > terribly far from that point now. From a performance > perspective, a native > CORBA database interface ought to be pretty good, certainly > no worse than > ODBC. I am thinking enterprise-level, industrial-strength > database here, > something that would be on a par with Oracle or Informix. > > Comments? If I get inspired, I may try doing something like this as an > alternate/experimental interface for MySQLdb, as a proof-of-concept. > You really should read the OLE DB specs. They specify COM interfaces for everything you've specified above and mroe... SQL 7 is completly built using a superset of these OLE DB interfaces. OLE DB lets SQL 7 easily perform and optimize joins between different databases. Utterly cheesy example: Joe wants to join table A in an Oracle database against table B in a SQL 7 database. Depending on the index statistics information (on table B and on table A) the SQL 7 query optimizer can decide how much of the query can be pushed into the Oracle database, and how much of it should be done locally. The OLE DB interfaces exposes all the necessary information that allows SQL 7 do perform this cool task. (Yes, people really do use this feature in real life.) 
Bill From mal@lemburg.com Tue May 16 09:17:11 2000 From: mal@lemburg.com (M.-A. Lemburg) Date: Tue, 16 May 2000 10:17:11 +0200 Subject: [DB-SIG] .prepare() References: <4D0A23B3F74DD111ACCD00805F31D8101D8BD0B6@RED-MSG-50> Message-ID: <39210407.E80FFE4@lemburg.com> Bill Tutt wrote: > > > From: Andy Dustman [mailto:adustman@comstar.net] > > > > > > Some general digression/spewing: > > > > Why prepare()? Because, queries take time to parse. Complex > > queries take > > longer. In most databases, queries are parsed in the server. > > If you have a > > commonly used query, one that properly factored out the actual query > > parameters, having a "pre-compiled" query in the server is a > > big win. Even > > better is if these queries could have some persistence in the > > server so > > they could be reused by many clients, which may each supply a > > different > > parameter set. > > > > Well, duh... However, there's no reason you need to expose this concept to > the user in an explicit API. > example implmentation: > > # Dictionary mapping query text -> opqaue prepared query reference (sproc > name, underlying db access > # mechanism or something entirely different) > preparedQuery = {} > > def execute(self, cmd): > if not preparedQuery.has_key(cmd): > preparedQuery[cmd] = conn.prepare(cmd) > else: > conn.execute(preparedQuery[cmd]) Don't know about other databases, but ODBC can only prepare one statement on one cursor. Once you prepare another statement on the cursor, the previous one is lost, so making the currently prepared command available via an attribute looked like the right way for mxODBC. > > I have direct experience with two different native database APIs. The > > first is ODBC, which I think most people would justifiably consider a > > bloated monstrosity. > > ODBC certainly is annoying, but then most flexible APIs into databases from > the C level are. Databases are complex enough that life starts to suck > bigtime. 
> > OLE DB is better than ODBC by a long shot, but its also a very complex > beast. But it's nowhere near as portable as ODBC... on NT or Win2k OLE DB may be the way to go, but on other platforms such as Unix or Mac you'd have to revert to other means. > [Andy's CORBA DB API] > > You really should read the OLE DB specs. They specify COM interfaces for > everything you've specified above and mroe... AFAIK, COM is MS centric and I think Andy is targeting Unix platforms here as well. > SQL 7 is completly built using a superset of these OLE DB interfaces. OLE DB > lets SQL 7 easily perform and optimize joins between different databases. > > Utterly cheesy example: > Joe wants to join table A in an Oracle database against table B in a SQL 7 > database. > Depending on the index statistics information (on table B and on table A) > the SQL 7 query optimizer can decide how much of the query can be pushed > into the Oracle database, and how much of it should be done locally. > > The OLE DB interfaces exposes all the necessary information that allows SQL > 7 do perform this cool task. > (Yes, people really do use this feature in real life.) FYI, in ODBC you would do the same using an ODBC DB engine like the one sold by EasySoft. -- Marc-Andre Lemburg ______________________________________________________________________ Business: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From gstein@lyra.org Tue May 16 09:44:28 2000 From: gstein@lyra.org (Greg Stein) Date: Tue, 16 May 2000 01:44:28 -0700 (PDT) Subject: [DB-SIG] .prepare() In-Reply-To: <39210407.E80FFE4@lemburg.com> Message-ID: On Tue, 16 May 2000, M.-A. Lemburg wrote: > Bill Tutt wrote: > > > From: Andy Dustman [mailto:adustman@comstar.net] > > > > > > Some general digression/spewing: > > > > > > Why prepare()? Because, queries take time to parse. Complex > > > queries take > > > longer. In most databases, queries are parsed in the server.
> > > If you have a > > > commonly used query, one that properly factored out the actual query > > > parameters, having a "pre-compiled" query in the server is a > > > big win. Even > > > better is if these queries could have some persistence in the > > > server so > > > they could be reused by many clients, which may each supply a > > > different > > > parameter set. > > > > > > > Well, duh... However, there's no reason you need to expose this concept to > > the user in an explicit API. Exactly. The whole prepare/reuse thing is performed using the "is" concept of the command passed to the .execute() method. The API remains simple and straightforward; all APIs are implemented and available; DBAPI modules *at their discretion* can choose to provide additional optimizations. Throwing prepare/command in there simply serves to toss even more concepts at the user for them to deal with, think about, and attempt to use in their software. It creates a higher bar for users in understanding the API. "but they don't need that -- it is advanced functionality!" you say. Yah, right. Users will read the whole API; they aren't going to know what is advanced or not. Even if you mark it that way, they'll still read through it trying to decide whether *they* are advanced enough to figure out what it does and whether they should use it. A good API has few methods and attributes, is simple to understand and use, and it scales up well to the most demanding tasks.
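A minimal sketch of the identity-based re-use described here (hypothetical module internals, not any shipping driver): the cursor re-parses only when .execute() is handed a *different* string object than last time.

```python
class Cursor:
    """Hypothetical cursor that skips re-parsing when .execute() receives
    the very same string object it parsed last time."""
    def __init__(self):
        self._last_operation = None   # compared by identity, not equality
        self._plan = None
        self.parse_count = 0          # instrumentation for this sketch only

    def execute(self, operation, params=()):
        if operation is not self._last_operation:
            self.parse_count += 1     # the expensive prepare step
            self._plan = operation    # stands in for a parsed plan
            self._last_operation = operation
        return (self._plan, params)


cur = Cursor()
QUERY = "SELECT name FROM t WHERE id = ?"
for i in range(5):
    cur.execute(QUERY, (i,))          # same object every time: parsed once
print(cur.parse_count)  # -> 1

# Equal text, but a distinct object built at run time: re-parsed.
same_text = "".join(["SELECT name FROM t ", "WHERE id = ?"])
cur.execute(same_text, (0,))
print(cur.parse_count)  # -> 2
```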
.prepare() is simply shifting the time when a prepare is done; it provides no advantages over doing the prepare on the first call to .execute() [and let additional calls note the same string and reuse the parsed query] > > example implmentation: > > > > # Dictionary mapping query text -> opqaue prepared query reference (sproc > > name, underlying db access > > # mechanism or something entirely different) > > preparedQuery = {} > > > > def execute(self, cmd): > > if not preparedQuery.has_key(cmd): > > preparedQuery[cmd] = conn.prepare(cmd) > > else: > > conn.execute(preparedQuery[cmd]) > > Don't know about other databases, but ODBC can only prepare > one statement on one cursor. Once you prepare another statement > on the cursor, the previous one is lost, so making the currently > prepared command available via an attribute looked like the > right way for mxODBC. Other databases have the same restriction. Bill brain-farted on this one. :-) Cursors maintain context -- in particular, the parsed query and the current result information. They can't deal with multiple, parsed queries. Cheers, -g -- Greg Stein, http://www.lyra.org/ From billtut@microsoft.com Tue May 16 10:25:07 2000 From: billtut@microsoft.com (Bill Tutt) Date: Tue, 16 May 2000 02:25:07 -0700 Subject: [DB-SIG] .prepare() Message-ID: <4D0A23B3F74DD111ACCD00805F31D8101D8BD0B7@RED-MSG-50> > From: Greg Stein [mailto:gstein@lyra.org] > > > example implmentation: > > > > > > # Dictionary mapping query text -> opqaue prepared query > reference (sproc > > > name, underlying db access > > > # mechanism or something entirely different) > > > preparedQuery = {} > > > > > > def execute(self, cmd): > > > if not preparedQuery.has_key(cmd): > > > preparedQuery[cmd] = conn.prepare(cmd) > > > else: > > > conn.execute(preparedQuery[cmd]) > > > > Don't know about other databases, but ODBC can only prepare > > one statement on one cursor. 
Once you prepare another statement > > on the cursor, the previous one is lost, so making the currently > > prepared command available via an attribute looked like the > > right way for mxODBC. > > Other databases have the same restriction. Bill brain-farted > on this one. > :-) > The above was more of pseudo-code than anything specific that has any chance of working. It was merely to illustrate the concept. Some databases may impose some sort of restriction upon what ODBC thinks of as prepared queries, yes. However, consider: A stored procedure is a prepared and preparsed query. Hrm.... So to fill in the details of the above a little more:

preparedQuery = {}

def execute(self, ...):
    if not preparedQuery.has_key(cmd):
        preparedQuery[cmd] = createStoredProcedure(conn, cmd)
    conn.execute("exec %s" % preparedQuery[cmd])

Bill From billtut@microsoft.com Tue May 16 10:33:04 2000 From: billtut@microsoft.com (Bill Tutt) Date: Tue, 16 May 2000 02:33:04 -0700 Subject: [DB-SIG] .prepare() Message-ID: <4D0A23B3F74DD111ACCD00805F31D8101D8BD0B8@RED-MSG-50> > From: M.-A. Lemburg [mailto:mal@lemburg.com] > > Bill Tutt wrote: > > > > > OLE DB is better than ODBC by a long shot, but its also a > very complex > > beast. > > But it's nowhere near as portable as ODBC... on NT or Win2k > OLE DB may be the way to go, but on other platforms such as > Unix or Mac you'd have to revert to other means. > I never said life was easy, just mentioning that if you wanted to come up with a cool database API that OLE DB is the coolest thing since sliced bread in this area. > > [Andy's CORBA DB API] > > > > You really should read the OLE DB specs. They specify COM > interfaces for > > everything you've specified above and mroe... > > AFAIK, COM is MS centric and I think Andy is targetting Unix > platforms here as well. > *sigh* Screw COM. There are two very important concepts with OLE DB that make it very powerful. The interfaces that the objects expose, and aggregating IUnknowns.
You can implement these core concepts on any semi-object oriented system. In C++, in CORBA, or whatever.

He was thinking about a blue-sky concept, so I pointed him at what I think the blue sky should look like on every platform. :) My top DB API related wish is that the core OLE DB libraries were open source, and people could use them on Unix or whatever, but it certainly doesn't seem likely to happen. Data conversion helper libraries are such a pain in the ass to write. :(

> > SQL 7 is completely built using a superset of these OLE DB interfaces.
> > OLE DB lets SQL 7 easily perform and optimize joins between
> > different databases.
> >
> > Utterly cheesy example:
> > Joe wants to join table A in an Oracle database against table B in a
> > SQL 7 database.
> > Depending on the index statistics information (on table B and on table A)
> > the SQL 7 query optimizer can decide how much of the query can be pushed
> > into the Oracle database, and how much of it should be done locally.
> >
> > The OLE DB interfaces expose all the necessary information that allows
> > SQL 7 to perform this cool task.
> > (Yes, people really do use this feature in real life.)
>
> FYI, in ODBC you would do the same using an ODBC DB engine like the
> one sold by EasySoft.

That may be, but writing such a beast is much easier using OLE DB than it ever would be with ODBC. ODBC doesn't expose interfaces that allow you to query for an index's statistics information.

Bill

From thierry@geltrude.mixad.it Tue May 16 12:22:59 2000
From: thierry@geltrude.mixad.it (Thierry MICHEL)
Date: Tue, 16 May 2000 13:22:59 +0200
Subject: [DB-SIG] Mandatory parameters in execute
Message-ID: <20000516132259.A2218@geltrude.mixad.it>

hi *,

a simple question: the arguments parameter in the execute() function (DBAPI 2.0) is marked as optional. that's right because if you do a SELECT you don't need to pass any parameters? but what about INSERTs?
if the use of `arguments' is mandatory it is really easy to control the types of the arguments and do some conversions (arrays and large objects, for example.) so the question is: if the driver sets paramstyle to pyformat, can the user build an INSERT string outside of execute() and then use it, or is he forced to build the INSERT string by passing arguments Python-style?

thanx,
federico and thierry

From mal@lemburg.com Tue May 16 13:17:57 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 16 May 2000 14:17:57 +0200
Subject: [DB-SIG] .prepare()
References: <4D0A23B3F74DD111ACCD00805F31D8101D8BD0B8@RED-MSG-50>
Message-ID: <39213C75.8B57CE88@lemburg.com>

Bill Tutt wrote:
>
> > From: M.-A. Lemburg [mailto:mal@lemburg.com]
> >
> > Bill Tutt wrote:
> > >
> > > OLE DB is better than ODBC by a long shot, but it's also a
> > > very complex beast.
> >
> > But it's nowhere near as portable as ODBC... on NT or Win2k
> > OLE DB may be the way to go, but on other platforms such as
> > Unix or Mac you'd have to revert to other means.
>
> I never said life was easy, just mentioning that if you wanted to come up
> with a cool database API that OLE DB is the coolest thing since sliced bread
> in this area.

Hmm, Andy could probably steal a few ideas from OLE DB and reimplement them using CORBA as backend.

I'm not sure whether it's worth the effort though.

> > > [Andy's CORBA DB API]
> > >
> > > You really should read the OLE DB specs. They specify COM
> > > interfaces for everything you've specified above and more...
> >
> > AFAIK, COM is MS centric and I think Andy is targeting Unix
> > platforms here as well.
>
> *sigh* Screw COM. There are two very important concepts with OLE DB that
> make it very powerful. The interfaces that the objects expose, and
> aggregating IUnknowns.
> You can implement these core concepts on any semi-object oriented system.
> In C++, in CORBA, or whatever.
> He was thinking about a blue-sky concept, so I pointed him at what I think
> the blue sky should look like on every platform. :) My top DB API related
> wish is that the core OLE DB libraries were open source,

That would be cool and certainly help push OLE DB... but would this be possible at all? I would guess that OLE DB relies heavily on COM and other MS techniques.

> and people could
> use them on Unix or whatever, but it certainly doesn't seem likely to
> happen. Data conversion helper libraries are such a pain in the ass to
> write. :(
>
> > > SQL 7 is completely built using a superset of these OLE DB interfaces.
> > > OLE DB lets SQL 7 easily perform and optimize joins between
> > > different databases.
> > >
> > > Utterly cheesy example:
> > > Joe wants to join table A in an Oracle database against table B in a
> > > SQL 7 database.
> > > Depending on the index statistics information (on table B and on table A)
> > > the SQL 7 query optimizer can decide how much of the query can be pushed
> > > into the Oracle database, and how much of it should be done locally.
> > >
> > > The OLE DB interfaces expose all the necessary information that allows
> > > SQL 7 to perform this cool task.
> > > (Yes, people really do use this feature in real life.)
> >
> > FYI, in ODBC you would do the same using an ODBC DB engine like the
> > one sold by EasySoft.
>
> That may be, but writing such a beast is much easier using OLE DB than it
> ever would be with ODBC.
> ODBC doesn't expose interfaces that allow you to query for an index's
> statistics information.

I guess this could be done by hooking into the DBs system tables... at the cost of not being portable anymore; probably too big a project though.
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/

From adustman@comstar.net Tue May 16 16:14:08 2000
From: adustman@comstar.net (Andy Dustman)
Date: Tue, 16 May 2000 11:14:08 -0400 (EDT)
Subject: [DB-SIG] The CORBA DB API fantasy
In-Reply-To: <39213C75.8B57CE88@lemburg.com>
Message-ID:

On Tue, 16 May 2000, M.-A. Lemburg wrote:

> Bill Tutt wrote:
> > You really should read the OLE DB specs. They specify COM interfaces for
> > everything you've specified above and more...
>
> AFAIK, COM is MS centric and I think Andy is targeting Unix
> platforms here as well.

Well, not exactly. I'm targeting all non-MS platforms. :) But seriously, one of the major design goals of CORBA is platform-independence. Of course, that's also a goal of Python (or Guido, if you want to argue semantics). So screw COM.

On Tue, 16 May 2000, Greg Stein wrote:

> .prepare() is simply shifting the time when a prepare is done; it provides
> no advantages over doing the prepare on the first call to .execute() [and
> let additional calls note the same string and reuse the parsed query]

Not necessarily. My thinking on this was that, while prepare() should be a mandatory part of the API (often doing nothing), its use in an application should be optional. Also that cursor.command would be the prepared query (some kind of object class). But anyway, I'm more interested in the CORBA DB API fantasy...

On Tue, 16 May 2000, Bill Tutt wrote:

> *sigh* Screw COM. There are two very important concepts with OLE DB that
> make it very powerful. The interfaces that the objects expose, and
> aggregating IUnknowns.
> You can implement these core concepts on any semi-object oriented system.
> In C++, in CORBA, or whatever.
>
> He was thinking about a blue-sky concept, so I pointed him at what I think
> the blue sky should look like on every platform.
:) My top DB API related > wish is that the core OLE DB libraries were open source, and people could > use them on Unix or whatever, but it certainly doesn't seem likely to > happen. Data conversion helper libraries are such a pain in the ass to > write. :( Okay, there may be some ideas worth stealing here. On Tue, 16 May 2000, M.-A. Lemburg wrote: > Hmm, Andy could probably steal a few ideas from OLE DB and reimplement > them using CORBA as backend. > > I'm not sure whether it's worth the effort though. Heh. I think I'd just as soon try to come up with something that wasn't based on an existing framework, get stuck, and then poke around for some new ideas. -- andy dustman | programmer/analyst | comstar.net, inc. telephone: 770.485.6025 / 706.549.7689 | icq: 32922760 | pgp: 0xc72f3f1d "Therefore, sweet knights, if you may doubt your strength or courage, come no further, for death awaits you all, with nasty, big, pointy teeth!" From adustman@comstar.net Tue May 16 17:23:44 2000 From: adustman@comstar.net (Andy Dustman) Date: Tue, 16 May 2000 12:23:44 -0400 (EDT) Subject: [DB-SIG] Mandatory parameters in execute In-Reply-To: <20000516132259.A2218@geltrude.mixad.it> Message-ID: On Tue, 16 May 2000, Thierry MICHEL wrote: > hi *, > > a imple question: the arguments prameter in the execute() > function (DBAPI 2.0) is marked as optional. that's right because > if you do a SELECT you don't need to pass any parameters? but what > about INSERTs? if the use of `arguments' is mandatory it is really > easy to control the types of the argument and do some conversions > (arrays and large objects, for example.) so the question is, if > the driver sets paramstyle to pyformat the user can build an INSERT > string outside of execute() and then use it or is he forced to build > the INSERT string by passing arguments python-style? I would think that all databases would allow you to build your own literal query statement that didn't use any parameter substitutions. 
In fact, Zope's Database Adapter interface depends upon this. MySQL, of course, doesn't actually use any parameter substitutions at all; these are implemented in the Python portion of MySQLdb. So, yes, the user ought to be able to build the INSERT query outside of execute() in all cases.

--
andy dustman | programmer/analyst | comstar.net, inc.
telephone: 770.485.6025 / 706.549.7689 | icq: 32922760 | pgp: 0xc72f3f1d
"Therefore, sweet knights, if you may doubt your strength or courage, come no further, for death awaits you all, with nasty, big, pointy teeth!"

From mal@lemburg.com Tue May 16 18:17:46 2000
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 16 May 2000 19:17:46 +0200
Subject: [DB-SIG] The CORBA DB API fantasy
References:
Message-ID: <392182BA.AABC1F7@lemburg.com>

Andy Dustman wrote:
>
> > Hmm, Andy could probably steal a few ideas from OLE DB and reimplement
> > them using CORBA as backend.
> >
> > I'm not sure whether it's worth the effort though.
>
> Heh. I think I'd just as soon try to come up with something that wasn't
> based on an existing framework, get stuck, and then poke around for some
> new ideas.

I was referring to the effort needed to get the CORBA DB API up and running ... you have two problems here: the server *and* the client side. While the Python client is probably manageable, the server side will certainly be a huge undertaking.
--
Marc-Andre Lemburg
______________________________________________________________________
Business: http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/

From thierry@geltrude.mixad.it Tue May 16 18:32:20 2000
From: thierry@geltrude.mixad.it (Thierry MICHEL)
Date: Tue, 16 May 2000 19:32:20 +0200
Subject: [DB-SIG] Mandatory parameters in execute
In-Reply-To: ; from adustman@comstar.net on Tue, May 16, 2000 at 12:23:44PM -0400
References: <20000516132259.A2218@geltrude.mixad.it>
Message-ID: <20000516193220.A7234@geltrude.mixad.it>

Hi,

thanks for your answer. I still have a question: the DB API 2.0 specification includes the following statement:

"To overcome this problem, a module must provide the constructors defined below to create objects that can hold special values. When passed to the cursor methods, the module can then detect the proper type of the input parameter and bind it accordingly."

So, if I want to bind large objects, I have to put parameters in the cursor's methods?

Thanks.

Federico and Thierry

From gstein@lyra.org Tue May 16 19:09:56 2000
From: gstein@lyra.org (Greg Stein)
Date: Tue, 16 May 2000 11:09:56 -0700 (PDT)
Subject: [DB-SIG] The CORBA DB API fantasy
In-Reply-To:
Message-ID:

On Tue, 16 May 2000, Andy Dustman wrote:
>...
> On Tue, 16 May 2000, M.-A. Lemburg wrote:
> > Hmm, Andy could probably steal a few ideas from OLE DB and reimplement
> > them using CORBA as backend.
> >
> > I'm not sure whether it's worth the effort though.
>
> Heh. I think I'd just as soon try to come up with something that wasn't
> based on an existing framework, get stuck, and then poke around for some
> new ideas.

Boy, does this conversation sound like a lot of "Not Invented Here." A lot of "I think that I can design something from scratch that is better than what other people have spent years working on."
:-) Cheers, -g -- Greg Stein, http://www.lyra.org/ From adustman@comstar.net Tue May 16 19:21:11 2000 From: adustman@comstar.net (Andy Dustman) Date: Tue, 16 May 2000 14:21:11 -0400 (EDT) Subject: [DB-SIG] The CORBA DB API fantasy In-Reply-To: Message-ID: On Tue, 16 May 2000, Greg Stein wrote: > Boy, does this conversation sound like a lot of "Not Invented Here." A lot > of "I think that I can design something from scratch that is better than > what other people have spent years working on." No, I'd just like to see if I have some different ideas that will work better, without getting into a mindset of, "well, other people did it this way, so I better do it this way too..." I may well end up doing things the same way, or incorporating other ideas, or deciding the whole thing's pointless. It's an experiment. Research is what you're doing when you don't know what you're doing. Plus, the actual CORBA DB API would be something that doesn't really exist yet: A universal (or close to it) database API. -- andy dustman | programmer/analyst | comstar.net, inc. telephone: 770.485.6025 / 706.549.7689 | icq: 32922760 | pgp: 0xc72f3f1d "Therefore, sweet knights, if you may doubt your strength or courage, come no further, for death awaits you all, with nasty, big, pointy teeth!" From adustman@comstar.net Tue May 16 19:24:40 2000 From: adustman@comstar.net (Andy Dustman) Date: Tue, 16 May 2000 14:24:40 -0400 (EDT) Subject: [DB-SIG] The CORBA DB API fantasy In-Reply-To: <392182BA.AABC1F7@lemburg.com> Message-ID: On Tue, 16 May 2000, M.-A. Lemburg wrote: > Andy Dustman wrote: > > > > > Hmm, Andy could probably steal a few ideas from OLE DB and reimplement > > > them using CORBA as backend. > > > > > > I'm not sure whether it's worth the effort though. > > > > Heh. I think I'd just as soon try to come up with something that wasn't > > based on an existing framework, get stuck, and then poke around for some > > new ideas. 
> I was referring to the effort needed to get the CORBA DB API
> up and running ... you have two problems here: the server *and*
> the client side. While the Python client is probably manageable,
> the server side will certainly be a huge undertaking.

Well, I'd say it's "non-trivial". :) The IDL for the interface, of course, can be used for both the stubs (client-side) and the skeletons (server-side). Being able to implement the server side means you have server sources.

However, think about the Python client side. The IDL maps one-to-one to the Python binding. So what I am proposing is two-fold: a) an actual CORBA interface defined by IDL; b) a Python interface that is functionally identical to the CORBA one, using the standard CORBA Python language mapping, but actually using no CORBA. With the CORBA Python language mapping, all the CORBA stuff is really hidden behind the scenes, so we can just pretend it's there and sorta transliterate into the native API, which requires no server changes.

Brief example, cribbed from ILU. First, the IDL:

module Tutorial {
    exception DivideByZero {};

    interface Calculator {
        // Set the value of the calculator to _v_
        void SetValue (in double v);
        // Return the value of the calculator
        double GetValue ();
        // Adds _v_ to the calculator's value
        void Add (in double v);
        // Subtracts _v_ from the calculator's value
        void Subtract (in double v);
        // Multiplies the calculator's value by _v_
        void Multiply (in double v);
        // Divides the calculator's value by _v_
        void Divide (in double v) raises (DivideByZero);
    };

    interface Factory {
        // Create and return an instance of a Calculator object
        Calculator CreateCalculator();
    };
};

The code looks something like this:

def Get_Tutorial_Calculator (sid, ih):
    f = ilu.LookupObject (sid, ih, Tutorial.Factory)
    c = f.CreateCalculator()
    return (c)

Then c is a Calculator object, which has SetValue(), GetValue(), Add(), Subtract(), Multiply(), and Divide() methods.
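To make the "no CORBA underneath" point concrete, the same interface can be mocked in plain Python. This is a hypothetical sketch, not ILU or ORB code; every class body below is invented, but client code written against the CORBA Python mapping would look identical when calling it:

```python
# Plain-Python stand-in for the Tutorial IDL above -- no ORB involved.
class DivideByZero(Exception):
    """Maps the IDL 'exception DivideByZero {}'."""

class Calculator:
    """Implements the IDL Calculator interface method-for-method."""
    def __init__(self):
        self._value = 0.0
    def SetValue(self, v):
        self._value = float(v)
    def GetValue(self):
        return self._value
    def Add(self, v):
        self._value = self._value + v
    def Subtract(self, v):
        self._value = self._value - v
    def Multiply(self, v):
        self._value = self._value * v
    def Divide(self, v):
        if v == 0:
            raise DivideByZero()
        self._value = self._value / v

class Factory:
    """Implements the IDL Factory interface."""
    def CreateCalculator(self):
        return Calculator()
```

A client obtains a Factory (here by construction, in CORBA via an object lookup) and uses the result exactly as described above.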
Once you've actually got an object, manipulating it is just standard Python stuff. So any CORBA-ish, implementation-dependent stuff should obviously be hidden away.

--
andy dustman | programmer/analyst | comstar.net, inc.
telephone: 770.485.6025 / 706.549.7689 | icq: 32922760 | pgp: 0xc72f3f1d
"Therefore, sweet knights, if you may doubt your strength or courage, come no further, for death awaits you all, with nasty, big, pointy teeth!"

From petrilli@amber.org Tue May 16 20:40:15 2000
From: petrilli@amber.org (Christopher Petrilli)
Date: Tue, 16 May 2000 15:40:15 -0400
Subject: [DB-SIG] The CORBA DB API fantasy
In-Reply-To: ; from gstein@lyra.org on Tue, May 16, 2000 at 11:09:56AM -0700
References:
Message-ID: <20000516154015.A9205@trump.amber.org>

Greg Stein [gstein@lyra.org] wrote:
> On Tue, 16 May 2000, Andy Dustman wrote:
> >...
> > On Tue, 16 May 2000, M.-A. Lemburg wrote:
> > > Hmm, Andy could probably steal a few ideas from OLE DB and reimplement
> > > them using CORBA as backend.
> > >
> > > I'm not sure whether it's worth the effort though.
> >
> > Heh. I think I'd just as soon try to come up with something that wasn't
> > based on an existing framework, get stuck, and then poke around for some
> > new ideas.
>
> Boy, does this conversation sound like a lot of "Not Invented Here." A lot
> of "I think that I can design something from scratch that is better than
> what other people have spent years working on."

Run, don't walk, to the Enterprise Objects Framework by NeXT (now Apple), and begin to witness what happens when people who truly understand OO design and modeling apply that power to Relational/Object mapping systems. They really, really get it right from my perspective (and other people here at DC as well). We're definitely looking at this as the future.
http://developer.apple.com/techpubs/webobjects/System/Documentation/Developer/EnterpriseObjects/DevGuide/GuideTOC.html Chris -- | Christopher Petrilli | petrilli@amber.org From mal@lemburg.com Tue May 16 23:42:33 2000 From: mal@lemburg.com (M.-A. Lemburg) Date: Wed, 17 May 2000 00:42:33 +0200 Subject: [DB-SIG] The CORBA DB API fantasy References: Message-ID: <3921CED9.AC1C3B51@lemburg.com> Andy Dustman wrote: > > On Tue, 16 May 2000, M.-A. Lemburg wrote: > > > Andy Dustman wrote: > > > > > > > Hmm, Andy could probably steal a few ideas from OLE DB and reimplement > > > > them using CORBA as backend. > > > > > > > > I'm not sure whether it's worth the effort though. > > > > > > Heh. I think I'd just as soon try to come up with something that wasn't > > > based on an existing framework, get stuck, and then poke around for some > > > new ideas. > > > > I was referring to the effort needed to get the CORBA DB API > > up and running ... you have two problems here: the server *and* > > the client sice. While the Python client is probably manageable, > > the server side will certainly be a huge undertaking. > > Well, it'd say it's "non-trivial". :) The IDL for the interface, of > course, can be used for both the stubs (client-side) and the skeletons > (server-side). Being able to implement the server-side means you have > server sources. > > However, think about the Python client side. The IDL results maps > one-to-one to the Python binding. So what I am proposing is > two-fold: a) an actualy CORBA interface defined by IDL. b) a Python > interface that is functionally identical to the CORBA one using the > standard CORBA Python language mapping, but actually using no CORBA. With > the CORBA Python language mapping, all the CORBA stuff is really hidden > behind the scenes, so we can just pretend it's there and sorta > transliterate into the native API, which requires no server changes. 
Sounds like a cool idea :-), but the native APIs are probably too different to ever be able to actually use a CORBA wrapper around them, so going the WebObjects way might be more interesting. -- Marc-Andre Lemburg ______________________________________________________________________ Business: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From dieter@sz-sb.de Wed May 17 08:23:46 2000 From: dieter@sz-sb.de (Dr. Dieter Maurer) Date: Wed, 17 May 2000 09:23:46 +0200 (CEST) Subject: [DB-SIG] The CORBA DB API fantasy In-Reply-To: <392182BA.AABC1F7@lemburg.com> References: <392182BA.AABC1F7@lemburg.com> Message-ID: <14626.18367.133013.684267@dieter.sz-sb.de> M.-A. Lemburg writes: > I was referring to the effort needed to get the CORBA DB API > up and running ... you have two problems here: the server *and* > the client sice. While the Python client is probably manageable, > the server side will certainly be a huge undertaking. There is at least one scenario, where the effort on client and server would be comparable: a three-tier architecture. On the server side, an application server implements the CORBA DB API through use of current DB API calls. On the client side, the DB API calls are mapped to services from the current DB API. The main problem, I see, is the static typing of IDL. We would probably need some form of marshalling/pickling above the ORB to handle parameters to "execute". - Dieter From alex@ank-sia.com Wed May 17 11:19:34 2000 From: alex@ank-sia.com (alexander smishlajev) Date: Wed, 17 May 2000 12:19:34 +0200 Subject: [DB-SIG] DCOracle number handling still broken? 
References: <20000406105005.B26075@trump.amber.org> <20000406170708.A34650@ubkaaix6.ubka.uni-karlsruhe.de> <20000406112447.D26075@trump.amber.org> Message-ID: <39227236.22C39096@turnhere.com> Christopher Petrilli wrote: > > Guenter Radestock [guenter@ubka.uni-karlsruhe.de] wrote: > > > > I do use the description "number" in my schema and I only store integer > > values in the database itself. Cursor.description after a select > > returns the tuple (('AVG(AUSGELIEFERT-ERZEUGT)', 'NUMBER', 40, 22, 0, 0, > > 1),) The problem must be with avg() here. I find the following: > > So long as we're clear, NUMBER by itself creates a floating-point with > a precision of 38. Scale is not relevent. These should be handled > correctly, though I don't know if they've been tested recently. they aren't. i have encountered this error yesterday. columns created as NUMBER without (p,s) are converted to ints as well as NUMBER(p,0). the following patch may be applied to obtain best-fit result for numbers: long if possible (i.e. 
no decimal digits, and no more than 18 digits) or float otherwise:

=== cut ===
diff -ruN DCOracle/ociCurs.py DCOracle.my/ociCurs.py
--- DCOracle/ociCurs.py	Fri Aug 27 20:11:31 1999
+++ DCOracle.my/ociCurs.py	Wed May 17 11:50:12 2000
@@ -47,7 +47,7 @@
 """DBI-Compliant Oracle Database Interface
 """
-jimwashere=0
+jimwashere=1
 import oci_, dbi, ociProc
@@ -58,7 +58,7 @@
 from Buffer import Buffer
 import ociBind
-from ociUtil import RowID, oDate, error
+from ociUtil import varnum, RowID, oDate, error
 from string import join
 try: import Missing
@@ -133,10 +133,10 @@
                 continue
             elif dbtype==OCI_INTERNAL_NUMBER:
                 if jimwashere:
-                    buf=Buffer(1,21)
+                    buf=Buffer(arraysize,21)
                     dbtype=OCI_EXTERNAL_VARNUM
                     dbsize=21
-                    f=None
+                    f=varnum
                 else:
                     if scale:
                         buf=Buffer(arraysize,'d')
diff -ruN DCOracle/ociUtil.py DCOracle.my/ociUtil.py
--- DCOracle/ociUtil.py	Fri Aug 27 16:12:05 1999
+++ DCOracle.my/ociUtil.py	Tue May 16 20:48:30 2000
@@ -51,8 +51,25 @@
 from Buffer import Buffer, BufferType
 from time import mktime
 import time
+from array import array
 error="oci.error"
+
+def varnum(buf):
+    """convert OCI_EXTERNAL_VARNUM to number (long or float)"""
+    a =array('b', buf).tolist()
+    _mlen =a[0] -1  # mantissa length
+    _exp =a[1]      # exponent & sign
+    if _exp <0: s ='+0'; _neg =0
+    else: s ='-0'; _neg =1
+    _exp =_exp & 0177 -65  # exponent
+    for _byte in a[2:_mlen+2]:
+        if _exp ==-1: s =s +'.'
+        if _neg: s =s +'%02i' %(101 -_byte)
+        else: s =s +'%02i' %(_byte -1)
+        _exp =_exp -1
+    if ('.' not in s) and (len(s) <20): return long(s)
+    else: return float(s)
 def oDate(s, ts=time.localtime(time.time())[6:]):
     'Convert Oracle date data to a dbiDate.'
diff -ruN DCOracle/ocidb.py DCOracle.my/ocidb.py
--- DCOracle/ocidb.py	Fri Aug 27 20:12:10 1999
+++ DCOracle.my/ocidb.py	Tue Oct 05 16:59:42 1999
@@ -136,6 +136,7 @@
             c=self.cursor()  # Allocate a cursor for procs
             d[name]=p=c.procedures
             return p
+        if name =="_d": return d["_d"]
         if d.has_key('_c'): c=d['_c']
         else: d['_c']=c=self.cursor()
         if name=='_c': return c
=== cut ===

best wishes,
alex.

From ajung@sz-sb.de Wed May 17 12:59:42 2000
From: ajung@sz-sb.de (Andreas Jung)
Date: Wed, 17 May 2000 13:59:42 +0200
Subject: [DB-SIG] DCOracle and handling of package OUT parameters broken ?!
Message-ID: <20000517135941.A23544@sz-sb.de>

Dear all,

I have the following stored proc:

create or replace procedure select_lob(
    tabname IN varchar2,
    primkey IN varchar2,
    primval IN varchar2,
    retstr OUT varchar2)
IS
    mylen integer;
    myclob clob;
    sql_smt varchar(1024);
    bufb varchar2(32767);
BEGIN
    sql_smt := 'select inhalt from ojs_de where docnum='''||primval||'''';
    execute immediate sql_smt into myclob;
    mylen := DBMS_LOB.GETLENGTH(myclob);
    DBMS_LOB.READ(myclob, mylen, 1, bufb);
    retstr := substr(bufb,1,2900);
END;

Its task is to deliver the content of a CLOB (XML files < 32000 characters). The procedure is being called from search.py:

...
curs.execute('select docnum from ojs_de order by docnum')
data = curs.fetchall()
for d in data:
    docnum = d[0]
    print docnum
    res= curs.procedures.select_lob('ojs_de','docnum',docnum)
    print res
.....

Python fails with the following traceback:

Traceback (innermost last):
  File "./search.py", line 23, in ?
    res= curs.procedures.select_lob('ojs_de','docnum',docnum)
  File "DCOracle/DCOracle/ociProc.py", line 133, in __call__
    if oci_.oexec(oc): self._error()
  File "DCOracle/DCOracle/ociProc.py", line 106, in _error
    raise error, (rc, oci_.OracleErrorMessage(c.lda, rc))
oci.error: (6502, 'ORA-06502: PL/SQL: numeric or value error: character string buffer too small\012ORA-06512: at "AJUNG.SELECT_LOB", line 22\012ORA-06512: at line 2\012')

When I reduce the size of the output string (substr()) to less than about 2000 I receive the data correctly. Is this DCOracle-related or a problem of Oracle?!

Our environment: Oracle 8.16 EE, Solaris 2.7, Python 1.5.2, DCOracle 1.3

Thanks for helping,
Andreas Jung

From tbryan@starship.python.net Fri May 19 01:48:39 2000
From: tbryan@starship.python.net (Tom Bryan)
Date: Thu, 18 May 2000 20:48:39 -0400 (EDT)
Subject: [DB-SIG] Re: [PyGreSQL] Version 3.0 PyGreSQL
In-Reply-To:
Message-ID:

> e) In a clean-sheet database implementation, columns could contain any
> sort of conceivable CORBA object, thus becoming an object-oriented
> relational database.
>
> No database that I've ever heard of does this, yet. (Oracle might have
> some CORBA stuff, somewhere.)

Oracle 8i has an ORB *inside* the database. I'm not sure what its capabilities are for serving up data straight out of tables, which is what I think you have in mind. We're using it to handle Java instances that live inside the database (Oracle 8i has a JVM inside the DB too that can run code from .class files that have been loaded into special tables). Perhaps they're headed to a CORBA-ish interface.

---Tom

From alex@ank-sia.com Sat May 20 06:51:21 2000
From: alex@ank-sia.com (alexander smishlajev)
Date: Sat, 20 May 2000 07:51:21 +0200
Subject: [DB-SIG] DCOracle number handling still broken?
References: <20000406105005.B26075@trump.amber.org> <20000406170708.A34650@ubkaaix6.ubka.uni-karlsruhe.de> <20000406112447.D26075@trump.amber.org> <39227236.22C39096@turnhere.com>
Message-ID: <392627D9.295CCF2E@turnhere.com>

alexander smishlajev wrote:
>
> the following patch may be applied to obtain best-fit result for
> numbers: long if possible (i.e. no decimal digits, and no more
> than 18 digits) or float otherwise:

i am sorry, that code was far from functional. now the following conversion procedure is used here:

=== cut ===
def varnum(buf):
    """convert OCI_EXTERNAL_VARNUM to number (long or float)"""
    a =array('b', buf).tolist()
    _mlen =a[0] -1  # mantissa length
    _exp =a[1]      # exponent & sign
    if _exp <0: s ='+0'; _neg =0; _exp =_exp +64
    else: s ='-0'; _neg =1; _exp =63 -_exp
    if (_mlen <19) or (a[20] ==102): _mlen =_mlen -1
    if _exp <0: s =s +'.' +'00' *(-_exp)
    for _byte in a[2:_mlen+2]:
        if _exp ==0: s =s +'.'
        _exp =_exp -1
        if _neg: s =s +'%02i' %(101 -_byte)
        else: s =s +'%02i' %(_byte -1)
    if _exp >0: s =s +'00' *_exp
    if ('.' not in s) and (len(s) <12): return int(s)
    elif ('.' not in s) and (len(s) <20): return long(s)
    else: return float(s)
=== cut ===

best wishes,
alex.

From bert@roedding.de Thu May 25 00:57:10 2000
From: bert@roedding.de (Bert M. Schuldes)
Date: Thu, 25 May 2000 01:57:10 +0200
Subject: [DB-SIG] DCOracle question
Message-ID: <392C6C56.44BDEA0E@roedding.de>

Hallo.

I got DCOracle to compile under Linux; running the test gave:

Import succeeded
Connect succeeded

But when trying this script:

#!/usr/bin/python
import Buffer, oci_, sys
print 'Import succeeded'
dbc=oci_.Connect('bms/bms@ORAC')
print 'Connect succeeded'
print dbc
cur = dbc.cursor()

I received the following:

Import succeeded
Connect succeeded
Traceback (innermost last):
  File "./mytest.py", line 8, in ?
    cur = dbc.cursor()
AttributeError: 'Connection' object has no attribute 'cursor'

Any hints for a python/dcoracle newbie?

Thank you,
Bert.
From j.andrusaitis@konts.lv Thu May 25 10:13:39 2000
From: j.andrusaitis@konts.lv (=?ISO-8859-1?B?SudrYWJzIEFuZHJ18GFpdGlz?=)
Date: Thu, 25 May 2000 12:13:39 +0300
Subject: [DB-SIG] DCOracle question
In-Reply-To: <392C6C56.44BDEA0E@roedding.de>
Message-ID: <000101bfc629$84660320$262a949f@konts.lv>

You should import the DCOracle module (which wraps all the different modules under one roof):

import DCOracle

dbc= DCOracle.Connect('username/password@tnsname')
cur= dbc.cursor()

I think the problem with your code was importing oci_ directly.

Jekabs Andrušaitis
Systems analyst
Tieto Konts Financial Systems
Kr.Barona 30, Riga, LV-1011, Latvia
tel.: (371) 7286660, (371) 7283689
fax: (371) 7243000
e-mail: J.Andrusaitis@konts.lv
http://www.konts.lv

From db3l@fitlinxx.com Mon May 29 02:19:50 2000
From: db3l@fitlinxx.com (David Bolen)
Date: Sun, 28 May 2000 21:19:50 -0400
Subject: [DB-SIG] mxODBC performance question
Message-ID: <1DB8BA4BAC88D3118B2300508B5A552CD92517@mail.fitlinxx.com>

I've been converting over some legacy Perl scripts that handle internal database synchronization to use Python, and have been using mxODBC for the database interface (the platform is Windows NT). I'm running into a significant performance degradation with the move that I was hoping someone might help shed some light on.

For some reason, the converted scripts in a production environment (where the update data is at one site and the database is accessed remotely at another of our sites) perform extremely poorly when compared to the existing Perl approach. For example, updating about 2500 rows in 3-4 tables takes the Python script 25 minutes, whereas the Perl script does the same job in 10 minutes (!). The sequence of delete/insert operations between the two scripts is identical (designed that way for verification during the transition). The Perl script is using an older ODBC module (970208, by Dave Roth), and for my Python script I'm using mxODBC (1.1.1, with mxDateTime 1.3.0).
The performance does not appear to be in the script itself - if I comment out the "cursor.execute()" calls the Python script runs within a few percent of the Perl script even for large data sets, and total script execution time is on the order of seconds. Both the Python and Perl scripts in testing are run on the same client machine (same ODBC driver and configuration) and against the same remote database. So from what I can see the only real difference is the Perl ODBC module versus mxODBC on the Python side, along with whatever translation needs to be performed between the host script environment and the ODBC interface.

The database operations are not really designed for efficiency, but that was to be a later evolution - the sequence consists of processing records one by one, deleting any prior record and inserting the new record. That is, the sequence of SQL operations looks something like:

delete from <table> where <key1> = <value1> and <key2> = <value2>
insert into <table>
() values () There are no parameter replacements - both of these commands are precomputed strings containing all values expanded into the string, given in their entirety to execute(). So I would think the overhead of converting them from Python/Perl into the internal strings to pass on to the ODBC driver would be small. I did initially run into some problems with data values not seeming to get into the target database initially (even when testing from the interactive interpreter), and ended up using the "clear_auto_commit=0" parameter on the connect() call. I'm guessing our default database behavior is non-transaction (still have to verify) which the Perl script - and thus the Python version - was assuming, and the Perl module probably doesn't clear that flag. Would anyone happen to have any thoughts? I'd really like to jettison the Perl stuff (it's typical organic-growth scripts that is fairly unmaintainable), but as it stands, it's extremely unlikely that I'd be able to move away from the legacy scripts with this sort of wall-clock performance hit. But by the same token, given that it's so dramatically different, I can't help wondering that I'm just missing something obvious? Thanks for any help. -- David PS: I thought of trying a comparison against the older odbc driver in the Win32 package just for kicks, but haven't had a chance yet. Some very small tests I had done previously on my local machine seemed to show it was even a little slower than mxODBC, but I haven't tried a large data set. From db3l@fitlinxx.com Mon May 29 03:25:30 2000 From: db3l@fitlinxx.com (David Bolen) Date: Sun, 28 May 2000 22:25:30 -0400 Subject: [DB-SIG] RE: mxODBC performance question Message-ID: <1DB8BA4BAC88D3118B2300508B5A552CD92519@mail.fitlinxx.com> Probably bad form immediately following up on my own question, but I think I may have a handle on at least a significant contributor to my problem. 
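A minimal sketch of the per-row delete-then-insert pattern described above, using sqlite3 as a stand-in for the ODBC connection; the members table and its columns are made up for illustration:

```python
import sqlite3

# Stand-in database; the original setup used mxODBC against a remote server.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE members (site_id INTEGER, member_id INTEGER, name TEXT)")

records = [(1, 100, "alice"), (1, 101, "bob"), (1, 100, "alice v2")]

for site_id, member_id, name in records:
    # Precomputed full-string SQL, one complete statement per execute(),
    # mirroring the delete-any-prior-row / insert-new-row sequence.
    cur.execute("DELETE FROM members WHERE site_id = %d AND member_id = %d"
                % (site_id, member_id))
    cur.execute("INSERT INTO members (site_id, member_id, name) VALUES (%d, %d, '%s')"
                % (site_id, member_id, name))

conn.commit()
```

Since every value is expanded into the SQL string, each row costs two full statements, and with an execute() that always prepares first, each statement costs two network round trips.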
In my last note I mentioned: "So from what I can see the only real difference is the Perl ODBC module versus mxODBC on the Python side, along with whatever translation needs to be performed between the host script environment and the ODBC interface."

I was browsing the Windows module from mxODBC, and noticed that the execute() command always does a SQLPrepare()+SQLExecute() approach, and never an SQLExecDirect(). Of course, the prepare+execute makes sense in order to cover the case of parameters, but I think that in my case (one shot commands without parameters) I'm probably seeing my performance hit from the multiple network accesses. ODBC performance over a WAN seems to be quite poor to start with (high overhead per request) so doubling the requests (unnecessarily in my case) is a bad thing :-) This would also explain why I didn't see as much of an issue when operating over a LAN during testing.

While I wasn't able to track down sources to the exact Perl module we're using (our now-gone Perl guy must have chosen some intermediate build) I found releases just before and after and verified that the source in both cases is using SQLExecDirect(). So if the per-request overhead of the WAN ODBC setup is a large fraction of my runtime, it would explain a good part of the 2:1 ratio I was getting compared to the Perl script.

I've seen some comments in the mailing list archives about permitting direct access to the prepare/execute sequence, but haven't seen anything about access to a non-prepared approach. It would seem to me that if I know that I won't be using any parameters, then preparing the statement is wasteful. But I'm not sure the module itself could determine this (is scanning for the parameter character code in the statement enough?), so permitting some level of application control over this might be helpful.
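The kind of application-level control suggested above might look roughly like the wrapper below; the executedirect method name is hypothetical, standing in for an SQLExecDirect()-backed single-round-trip call, and sqlite3 is used only as a runnable stand-in (it has no such method, so both calls take the normal path):

```python
def run(cursor, statement, params=None):
    """Route parameterless statements to a hypothetical direct-execution
    entry point; otherwise fall back to the usual prepare-and-bind execute()."""
    if params is None and hasattr(cursor, "executedirect"):
        return cursor.executedirect(statement)      # one network round trip
    return cursor.execute(statement, params or ())  # SQLPrepare + SQLExecute

import sqlite3
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
run(cur, "CREATE TABLE t (x INTEGER)")
run(cur, "INSERT INTO t (x) VALUES (?)", (42,))
```

The point of the explicit wrapper is that the caller, not the module, declares that no parameters will ever be bound, sidestepping any guesswork about parameter markers in the statement text.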
Of course, it may well be that the lack of a prepare+parameters approach in the Perl module is what led to the current full-string individual command implementation, in which case I should be able to restructure things post-transition to be more efficient. But the current approach of preparing everything makes that transition difficult. -- David /-----------------------------------------------------------------------\ \ David Bolen \ E-mail: db3l@fitlinxx.com / | FitLinxx, Inc. \ Phone: (203) 708-5192 | / 860 Canal Street, Stamford, CT 06902 \ Fax: (203) 316-5150 \ \-----------------------------------------------------------------------/ From mal@lemburg.com Mon May 29 08:24:40 2000 From: mal@lemburg.com (M.-A. Lemburg) Date: Mon, 29 May 2000 09:24:40 +0200 Subject: [DB-SIG] RE: mxODBC performance question References: <1DB8BA4BAC88D3118B2300508B5A552CD92519@mail.fitlinxx.com> Message-ID: <39321B38.240F2CE0@lemburg.com> David Bolen wrote: > > I was browsing the Windows module from mxODBC, and noticed that the > execute() command always does a SQLPrepare()+SQLExecute() approach, and > never an SQLExecDirect(). Of course, the prepare+execute makes sense in > order to cover the case of parameters, but I think that in my case (one shot > commands without parameters) I'm probably seeing my performance hit by the > multiple network access. ODBC performance over a WAN seems to be quite poor > to start with (high overhead per request) so doubling the requests > (unnecessarily in my case) is a bad thing :-) This would also explain why I > didn't see as much of an issue when operating over a LAN during testing. > > While I wasn't able to track down sources to the exact Perl module we're > using (our now-gone Perl guy must have chosen some intermediate build) I > found releases just before and after and verified that the source in both > cases is using SQLExecDirect(). 
So if the per-request overhead of the WAN > ODBC setup is a large fraction of my runtime, it would explain a good part > of the 2:1 ratio I was getting compared to the Perl script. This could be one of the reasons. mxODBC is designed and enhanced to make good use of bound parameters. SQLExecDirect() is currently not used, but would be a good candidate for the case where you don't have any parameters and keeping the prepared command around is not important. I'll add that to my TODO list. > I've seen some comments in the mailing list archives about permitting direct > access to the prepare/execute sequence, but haven't seen anything about > access to a non-prepared approach. It would seem to me that if I know that > I won't be using any parameters, then preparing the statement is wasteful. > But I'm not sure the module itself could determine this (is scanning for the > parameter character code in the statement enough?), so permitting some level > of application control over this might be helpful. > > Of course, it may well be that the lack of a prepare+parameters approach in > the Perl module is what led to the current full-string individual command > implementation, in which case I should be able to restructure things > post-transition to be more efficient. But the current approach of preparing > everything makes that transition difficult. You should definitely consider using bound parameters, since these fit your problem quite nicely (fixed SQL statement + variable query values). -- Marc-Andre Lemburg ______________________________________________________________________ Business: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From db3l@fitlinxx.com Mon May 29 20:18:28 2000 From: db3l@fitlinxx.com (David Bolen) Date: Mon, 29 May 2000 15:18:28 -0400 Subject: [DB-SIG] RE: mxODBC performance question Message-ID: <1DB8BA4BAC88D3118B2300508B5A552CD9251C@mail.fitlinxx.com> M.-A. 
Lemburg [mal@lemburg.com] writes: > SQLExecDirect() is currently not used, but would be a good candidate > for the case where you don't have any parameters and keeping the > prepared command around is not important. I'll add that to my TODO > list. Thanks - FYI, I experimented with adding a method for that command to mxODBC and the Python script's runtime immediately dropped down to virtually the same as that of the original Perl script. So at least in WAN cases, permitting access to the SQLExecDirect() if parameters aren't used can be a big time saver. > You should definitely consider using bound parameters, since > these fit your problem quite nicely (fixed SQL statement + > variable query values). Precisely, and I expected to do that as the next step - the problem was that the easiest method of verification of the new script was a comparison of the commands being executed, so I sort of need to stick with the full strings initially until we have some parallel runtime under our belts. Thanks. -- David /-----------------------------------------------------------------------\ \ David Bolen \ E-mail: db3l@fitlinxx.com / | FitLinxx, Inc. \ Phone: (203) 708-5192 | / 860 Canal Street, Stamford, CT 06902 \ Fax: (203) 316-5150 \ \-----------------------------------------------------------------------/ From mal@lemburg.com Tue May 30 09:23:58 2000 From: mal@lemburg.com (M.-A. Lemburg) Date: Tue, 30 May 2000 10:23:58 +0200 Subject: [DB-SIG] RE: mxODBC performance question References: <1DB8BA4BAC88D3118B2300508B5A552CD9251C@mail.fitlinxx.com> Message-ID: <39337A9E.3BFC4013@lemburg.com> David Bolen wrote: > > M.-A. Lemburg [mal@lemburg.com] writes: > > > SQLExecDirect() is currently not used, but would be a good candidate > > for the case where you don't have any parameters and keeping the > > prepared command around is not important. I'll add that to my TODO > > list. 
> Thanks - FYI, I experimented with adding a method for that command to mxODBC
> and the Python script's runtime immediately dropped down to virtually the
> same as that of the original Perl script. So at least in WAN cases,
> permitting access to the SQLExecDirect() if parameters aren't used can be a
> big time saver.

Sounds like there is a real need for SQLExecDirect()...

> > You should definitely consider using bound parameters, since
> > these fit your problem quite nicely (fixed SQL statement +
> > variable query values).
>
> Precisely, and I expected to do that as the next step - the problem was that
> the easiest method of verification of the new script was a comparison of the
> commands being executed, so I sort of need to stick with the full strings
> initially until we have some parallel runtime under our belts.

I'd be interested in whether you get the same (or better) performance with bound variables.

Thanks, -- Marc-Andre Lemburg ______________________________________________________________________ Business: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/

From oivvio@cajal.mbb.ki.se Tue May 30 11:03:36 2000 From: oivvio@cajal.mbb.ki.se (oivvio polite) Date: Tue, 30 May 2000 12:03:36 +0200 Subject: [DB-SIG] OO wrapper for SQL? Message-ID: Hello everybody. I've been tinkering with a python/sql project for some time now. Basically I use python to generate long sql statements and send them off to the dbms. The sql statements typically create new views or tables out of old ones. Yesterday I went back to revise something I wrote one month or so ago and it did not look pretty at all.
    sqlpart = ""
    for enzyme in enzymes:
        tmp = "id in (SELECT id FROM %s..%s where enzyme = '%s') AND\n" % (db, nomatchtable, enzyme)
        sqlpart = sqlpart + tmp
    sqlpart = sqlpart[:-4]
    sql = "INSERT INTO %s..%s SELECT id FROM %s..%s WHERE %s " % (db, no_cut_ids, db, base, sqlpart)
    dbc.ExecSQL(sql, constants.adCmdText, timeout = 3600)

A few hundred lines of that can make one somewhat dizzy. What I'd want to have instead is some kind of OO wrapper for SQL, probably something working on top of the Python DB API. Something that would let me say:

    oldtable1 = dboo.table.existing(connection, database, tablename1)
    oldtable2 = dboo.table.existing(connection, database, tablename2)
    newtable = dboo.table.join(newtablename, oldtable1, oldtable2, some_condition)

This would take a couple of old tables in my database, join them with some condition and insert the result as a new table in the database. Of course there should be tons of other possible operations. I want to be able to do pretty much everything I can do with SQL, without having to maintain that horrible SQL code. Have you seen anything like this for Python, or maybe for some other language?

oivvio polite

From brunson@level3.net Tue May 30 23:44:53 2000 From: brunson@level3.net (Eric Brunson) Date: Tue, 30 May 2000 16:44:53 -0600 Subject: [DB-SIG] Buffer objects in DCOracle Message-ID: <20000530164453.A18490@level3.net> I'm having a problem with passing a Buffer (type 'string') and having Oracle be able to manipulate it as a string, i.e. concatenate to it.
Here is some code I'm using:

-------------
    errtextbuff = DCOracle.Buffer( 1, 1000 )
    sid = 1
    dbh = dbconn.prepare( """
        declare
            errs service_pkg.ErrorList;
            i number;
        begin
            service_pkg.validate( :service_id, errs );
            for i in 1..errs.COUNT LOOP
                :errtext := :errtext || errs(i);
            end loop;
        end;
    """ )
    dbh.execute( service_id = sid, errtext = errtextbuff )
--------------

ErrorList is declared as a table of varchar2(255) and within the validate procedure each row is set to a newline terminated error message. When this is run I get the following error:

    oci.error: (6502, 'ORA-06502: PL/SQL: numeric or value error\012ORA-06512: at line 8\012')

If I change line 8 to read:

    :errtext := errs(i);

then I, correctly, get the last error message reported, followed by the remaining space in the buffer, errtextbuff[0], set to "^@"s (ascii 0) padded to 1000 places.

Am I doing something wrong? I would like to understand why the Buffer object is not behaving as I would expect it to, rather than have people suggest "why don't you not use a table for err", or "why don't you use a real variable for the concatenation, then set :errtext equal to that" (which is what I'm actually having to do).

Thanks,
e.

-- Eric Brunson - brunson@level3.net - page-eric@level3.net "When governments fear the people there is liberty. When the people fear the government there is tyranny." - Thomas Jefferson
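For reference, the variable-based workaround Eric says he is actually using would presumably look something like the sketch below (assuming the same service_pkg interface; the local msg variable and its size are made up): accumulate into a local PL/SQL variable, then assign to the bind variable once at the end.

```sql
declare
    errs service_pkg.ErrorList;
    msg  varchar2(1000) := '';
begin
    service_pkg.validate( :service_id, errs );
    for i in 1..errs.COUNT loop
        msg := msg || errs(i);
    end loop;
    :errtext := msg;  -- single assignment into the bound Buffer
end;
```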