From kjcole@gri.gallaudet.edu Fri Jun 1 19:00:43 2001 From: kjcole@gri.gallaudet.edu (Kevin Cole) Date: Fri, 1 Jun 2001 14:00:43 -0400 (EDT) Subject: [DB-SIG] Newbie trying to fetch individual rows w/ PostgreSQL Message-ID: Hi, I've got a little test code that sorta works, but not as well as I'd like. fetchall() does what I want, fetchone() appears to fetch the same row cursor.rowcount times, rather than fetching the next row. Here's what I'm doing: import pgdb mydb = pgdb.connect("localhost:mydb") curse = mydb.cursor() curse.execute("select * from mytable where state = 'MD') for hits in range(curse.rowcount): print curse.fetchone() If I use "for hits in curse.fetchall():" I get what I expect, and printing curse.rowcount yields the correct number of rows. What am I misunderstanding? -- Kevin Cole, RHCE, Linux Admin | E-mail: kjcole@gri.gallaudet.edu Gallaudet Research Institute | WWW: http://gri.gallaudet.edu/~kjcole/ Hall Memorial Bldg S-419 | Voice: (202) 651-5135 Washington, D.C. 20002-3695 | FAX: (202) 651-5746 From kjcole@gri.gallaudet.edu Fri Jun 1 20:23:24 2001 From: kjcole@gri.gallaudet.edu (Kevin Cole) Date: Fri, 1 Jun 2001 15:23:24 -0400 (EDT) Subject: [DB-SIG] And another thing... cursor.description bug? In-Reply-To: Message-ID: The cursor.description that I get back from PostgreSQL has "None" for null_ok, in spite of what "\d tablename" in psql tells me. (field_name, type_code, display_size, and internal_size appear to be fine. I'm not doing anything with precision and scale.) This is with postgresql 7.1-1 and python 1.5.2-30. -- Kevin Cole, RHCE, Linux Admin | E-mail: kjcole@gri.gallaudet.edu Gallaudet Research Institute | WWW: http://gri.gallaudet.edu/~kjcole/ Hall Memorial Bldg S-419 | Voice: (202) 651-5135 Washington, D.C. 20002-3695 | FAX: (202) 651-5746 From kromag@nsacom.net Fri Jun 1 23:29:44 2001 From: kromag@nsacom.net (kromag@nsacom.net) Date: Fri, 1 Jun 2001 15:29:44 -0700 (PDT) Subject: [DB-SIG] file locking error bonking access (long!) Message-ID: <200106012229.f51MTig00168@pop.nsacom.net> I am having some weirdness occour while experimenting with access 97 SR2. I am using python 2.0 under windows 95B. When I run the scripts copied below, I occaisionally get the following: --------begin barfage-------------------- 816 817 817 records written in 167.529999971 seconds. Traceback (most recent call last): File "c:windowsdesktoptryme.py", line 19, in ? dbwrite.writetwo(pack) File "c:python20dbwrite.py", line 26, in writetwo db=engine.OpenDatabase("\windows\desktop\db1.mdb") File "", line 2, in OpenDatabase pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, 'DAO.Workspace', "Couldn't lock file.", None, -1, -2146825238), None) --------end barfage--------------- Hrm. The script writes 500 lines with 6 values each into two access tables. ---------begin script-------------- import time import random import dbwrite ## my attempt at a module... tock=1 then=time.time() try: while tock < 1000: tick=time.time() surprise=random.randint(1,10000) a=random.choice(['a','e','i','o','u']) b=random.choice(['a','e','i','o','u']) c=random.choice(['a','e','i','o','u']) d=random.choice(['a','e','i','o','u']) pack=(tick,surprise, a,b,c,d) dbwrite.write(pack) print tock tock=tock+1 dbwrite.writetwo(pack) print tock tock=tock+1 finally: now=time.time() - then print ( '%s' +' records written in ' + '%s' + ' seconds.')% (tock -1, now) --------------end script------------ I had previously written this module as an excersise in.... writing a module! :-). 
I know I really should write a way to identify the tables without the redundantcy. Next week. --------begin weak attempt at module, probably better left as functions!---- import win32com.client # import string ## Function write() writes a list to the database def write(inputtage): time=inputtage[0] number=inputtage[1] str1=inputtage[2] str2=inputtage[3] str3=inputtage[4] str4=inputtage[5] engine=win32com.client.Dispatch("DAO.DBEngine.35") db=engine.OpenDatabase("\windows\desktop\db1.mdb") db.Execute("insert into food values(%f, '%s', '%s','%s','%s','%s')" %(time, number, str1, str2, str3, str4)) return 'ok' ## Function writetwo() writes a list to the database def writetwo(inputtage): time=inputtage[0] number=inputtage[1] str1=inputtage[2] str2=inputtage[3] str3=inputtage[4] str4=inputtage[5] engine=win32com.client.Dispatch("DAO.DBEngine.35") db=engine.OpenDatabase("\windows\desktop\db1.mdb") db.Execute("insert into baboon values(%f, '%s', '%s','%s','%s','%s')" %(time, number, str1, str2, str3, str4)) return 'ok' def wipe(): engine=win32com.client.Dispatch("DAO.DBEngine.35") db=engine.OpenDatabase("\windows\desktop\db1.mdb") db.execute("delete * from food") return 'wiped' def help(): print 'write() - Writes a list of 6 values to the database file,' print 'first value is a float and the following five are string.' print 'wipe() - Wipes database file.' print 'help() - Prints this message.' return 0 ---------------end-------------------- Any clues as to why this fails? It only happens once out of every 10 or so. It also seems to be a bit slower than I would have guessed. A successful run: 998 999 1000 1000 records written in 124.460000038 seconds. Is 10 per second too many to ask from a P3 7xx with 256 megs of ram? This is about what I get on a P166 running linux postgresql and psycopg using essentially the same script with a slightly different module.. (I'd copy them here, but I am at work... and this is already too long!) I am _EMPHATICALLY_NOT_ trying to start an MS bashing thread here. I just wonder if I am missing a speed/safety technique that is not immediately obvious. Do I need to add threading? Will that prevent lock issues? Thanks! From andy@dustman.net Mon Jun 4 07:44:20 2001 From: andy@dustman.net (Andy Dustman) Date: Mon, 4 Jun 2001 02:44:20 -0400 (EDT) Subject: [DB-SIG] Newbie trying to fetch individual rows w/ PostgreSQL In-Reply-To: Message-ID: On Fri, 1 Jun 2001, Kevin Cole wrote: > Hi, > > I've got a little test code that sorta works, but not as well as I'd > like. fetchall() does what I want, fetchone() appears to fetch the same > row cursor.rowcount times, rather than fetching the next row. Here's > what I'm doing: > > import pgdb > mydb = pgdb.connect("localhost:mydb") > curse = mydb.cursor() > curse.execute("select * from mytable where state = 'MD') > for hits in range(curse.rowcount): > print curse.fetchone() > > If I use "for hits in curse.fetchall():" I get what I expect, and > printing curse.rowcount yields the correct number of rows. What am I > misunderstanding? Nothing that I can see. Each fetchone() invocation should return a new row, so this sounds like a pgdb bug. I wouldn't use range(curse.rowcount) myself; more likely I'd use fetchall(). If the result set can be arbitrarily large, I'd do something like this: row = curse.fetchone() while row: print row row = curse.fetchone() -- Andy Dustman PGP: 0xC72F3F1D @ .net http://dustman.net/andy "I'd rather listen to Newton than to Mundie. 
He may have been dead for almost three hundred years, but despite that he stinks up the room less." -- Linus T. From yong@linuxkorea.co.kr Mon Jun 4 17:12:31 2001 From: yong@linuxkorea.co.kr (=?ks_c_5601-1987?B?wMy4uL/r?=) Date: Tue, 5 Jun 2001 01:12:31 +0900 Subject: [DB-SIG] DB2 module version 0.99 Message-ID: <001701c0ed11$290208c0$cd1dd8d2@linuxkorea.co.kr> SGksIGV2ZXJ5b25lDQoNCldpdGggdGhlIGhlbHAgZnJvbSBtYW55IHBlb3BsZSwgSUJNIERCMiBt b2R1bGUgZm9yIHB5dGhvbiBoYXMgcmVhY2hlZCB0aGUgdmVyc2lvbiAwLjk5Lg0KDQpJbXByb3Zl bWVudDoNCg0KKiBpbXByb3ZlZCBCTE9CIHN1cHBvcnQgKEd1bnRlciBCYWNoKQ0KKiBXaW4zMiBj b21wbGlhdGlvbiAoSmFjaHltIFNpbWVjZWspDQoqIHNtYWxsIGNoYW5nZXMgaW4gc291cmNlIGNv ZGUgdG8gY29tcGlsZSB3aXRoIG90aGVyIEMgY29tcGlsZXJzIG90aGVyIHRoYW4gZ2NjDQoNClRv IGRvOg0KDQoqIGNvbXBsZXRlIHRocmVhZC1zYWZlbmVzcw0KDQpZb3UgY2FuIGRvd25sb2FkIGl0 IGFzIGZvbGxvd3M6DQoNCmZ0cDovL3Blb3BsZS5saW51eGtvcmVhLmNvLmtyL3B1Yi9EQjIvDQoN CkhhdmUgYSBuaWNlIGRheSENCg0KLS0NCkhhcHB5IFB5dGhvbiENCkJyeWFuKE1hbi1Zb25nKSBM ZWUgeW9uZ0BsaW51eGtvcmVhLmNvLmtyDQoNCg== From kromag@nsacom.net Mon Jun 4 19:26:44 2001 From: kromag@nsacom.net (kromag@nsacom.net) Date: Mon, 4 Jun 2001 11:26:44 -0700 (PDT) Subject: [DB-SIG] file locking error bonking access (long!) Message-ID: <200106041826.f54IQig32727@pop.nsacom.net> After looking over my incredibly clever code and getting a good clout on the head, I saw that I was opening and closing the db connection within my while loop. Opening the connection outside of said loop got my numbers up to: 1000 records written in 12.9600000381 seconds. Which is just a tad better! :-) Hope this helps someone else. d kromag@nsacom.net said: > I am having some weirdness occour while experimenting with access 97 SR2. > > I am using python 2.0 under windows 95B. > > When I run the scripts copied below, I occaisionally get the following: > > --------begin barfage-------------------- > > 816 > 817 > 817 records written in 167.529999971 seconds. > Traceback (most recent call last): > File "c:windowsdesktoptryme.py", line 19, in ? > dbwrite.writetwo(pack) > File "c:python20dbwrite.py", line 26, in writetwo > db=engine.OpenDatabase("windowsdesktopdb1.mdb") > File "", line 2, in OpenDatabase > pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, > 'DAO.Workspace', > "Couldn't lock file.", None, -1, -2146825238), None) > From andy@dustman.net Mon Jun 4 19:21:15 2001 From: andy@dustman.net (Andy Dustman) Date: Mon, 4 Jun 2001 14:21:15 -0400 (EDT) Subject: [DB-SIG] MySQLdb-0.9.0 and ZMySQLDA-2.0.7 released Message-ID: Go get 'em at http://sourceforge.net/projects/mysql-python -- Andy Dustman PGP: 0xC72F3F1D @ .net http://dustman.net/andy "I'd rather listen to Newton than to Mundie. He may have been dead for almost three hundred years, but despite that he stinks up the room less." -- Linus T. From kjcole@gri.gallaudet.edu Tue Jun 5 03:46:17 2001 From: kjcole@gri.gallaudet.edu (Kevin Cole) Date: Mon, 4 Jun 2001 22:46:17 -0400 (EDT) Subject: [DB-SIG] Bug in PostgreSQL/Python (7.1.1/1.5) fetchone() Message-ID: Hi again, It appears that the bug in pgdb occurs for both examples below. Both return the first row multiple times. (This is with PostgreSQL 7.1.1 and Python 1.5.2.) 
_________________________________________ for hits in range(curse.rowcount): print curse.fetchone() _________________________________________ row = curse.fetchone() while row: print row row = curse.fetchone() _________________________________________ On Mon, 4 Jun 2001, Andy Dustman wrote: >On Fri, 1 Jun 2001, Kevin Cole wrote: > > Hi, > > > > I've got a little test code that sorta works, but not as well as I'd > > like. fetchall() does what I want, fetchone() appears to fetch the > > same row cursor.rowcount times, rather than fetching the next row. > > Here's what I'm doing: > > > > import pgdb > > mydb = pgdb.connect("localhost:mydb") > > curse = mydb.cursor() > > curse.execute("select * from mytable where state = 'MD') > > for hits in range(curse.rowcount): > > print curse.fetchone() > > > > If I use "for hits in curse.fetchall():" I get what I expect, and > > printing curse.rowcount yields the correct number of rows. What am > > I misunderstanding? > Nothing that I can see. Each fetchone() invocation should return a new > row, so this sounds like a pgdb bug. I wouldn't use range(curse.rowcount) > myself; more likely I'd use fetchall(). If the result set can be > arbitrarily large, I'd do something like this: > > row = curse.fetchone() > while row: > print row > row = curse.fetchone() > > -- > Andy Dustman PGP: 0xC72F3F1D > @ .net http://dustman.net/andy -- Kevin Cole | E-mail: kjcole@gri.gallaudet.edu Gallaudet Research Institute | WWW: http://gri.gallaudet.edu/~kjcole/ Hall Memorial Bldg S-419 | Voice: (202) 651-5135 Washington, D.C. 20002-3695 | FAX: (202) 651-5746 From fog@mixadlive.com Tue Jun 5 10:43:26 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: 05 Jun 2001 11:43:26 +0200 Subject: [DB-SIG] about module.Binary() Message-ID: <991734207.831.2.camel@lola> just one fast question: does the dbapi require Binary objects to quote their contents? i.e., is the following code right (assuming postgresql or any other database that uses ' to enclose strings and that need it to be quoted)? >>> import psycopg >>> bin = psycopg.Binary("this is a string that needs 'quoting'") >>> print str(bin) 'this is a string that needs ''quoting''' >>> print repr(bin) "'this is a string that needs ''quoting'''" note how the module enclosed the string in ' to allow for: >>> cursor.execute("INSERT INTO test VALUES (%(str)s)", {'str', bin}) in a more general way, the object returned by a constructor has to provide the quotes (when requested by the db) or not? for example: t = psycopg.Date(2001, 7, 5) print str(t) should print 2001-07-05 or '2001-07-05'? enough for now, ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org Debian. The best software from the best people [see above] -- brought to you by One Line Spam From matt@bane.mi.org Tue Jun 5 18:59:07 2001 From: matt@bane.mi.org (Matthew T. Kromer) Date: Tue, 5 Jun 2001 13:59:07 -0400 Subject: [DB-SIG] DCOracle2 Beta1 Announcement Message-ID: <001b01c0ede9$371108c0$0501010a@mi.org> This is my announcment for DCOracle2 Beta 1; while the bulk text below is aimed at Zope users, the DCOracle2 module should be fully utilizable for any Python application. Description DCOracle2 is a replacement for DCOracle, written primarily in C. DCOracle 1 uses OCI 7 bindings for most Oracle calls, with OCI 8 mixed in for LOB support. Oracle 8i disallows mixing of calls within a statement, and so breaks LOB support. 
DCO2 uses entirely OCI 8 calls, and thus can use LOBs. New in this Release Beta 1 Stored procedure input works properly, cycles in stored procedures removed. Stored procedures now have meaningful docstrings (describing their parameters). Type coercion change from a tuple kludge to a TypeCoercion object. Set ability (and default) to do static binding for BindingArrays, working around dynamic fetch occasional NULL bug on Linux. Batch executemany(). Add backward compatable dbiRaw and execute modes. Alpha 6 Nested Cursors, e.g. SELECT ENAME, CURSOR(SELECT ENAME FROM EMP WHERE MGR=7908) FROM EMP WHERE EMPNO=7908). API 2.0 type objects. Add trim() to LobLocators. Wrap LOB permissions for Zope in ZOracleDA. Change result on execute w/o results to None, not []. Return statement type code after execute(). Alpha 5 Stored procedure fixes, and debugging enhancements. Alpha 4 Stored procedure IN/OUT variables, changes to executemany() Alpha 3 Changed ZOracleDA to not use method call to connect(). Partial stored procedure work Alpha 2 Bug fixes, largely packaging, from Alpha 1. Added SQLT_AFC handler, SPARC alignment fixes. Alpha 1 First www.zope.org release Contents This release contains both DCOracle2 and a slightly modified ZOracleDA; it will register as ZOracleDA would (to silently upgrade Oracle connections) and thus cannot be run concurrently with ZOracleDA/DCOracle. Installation To replace ZOracleDA, untar into lib/python/Products and make, move ZOracleDA out of lib/python/Products, and rename lib/python/Products/DCO2 to lib/python/Products/ZOracleDA. Usage This release is intended for testing with ZOracleDA feature compatibility (including LOB support) and is also intended for general use. Platforms NT support has been tested, Microsoft Visual Studio project files are included; this has only received testing with Oracle 8.0 and Oracle 8.1 on Linux, Solaris, and Windows NT; a wider variety of platform experience is welcomed. Download The product is available at http://www.zope.org/Members/matt/dco2. From julian.gollop@ntlworld.com Wed Jun 6 13:10:33 2001 From: julian.gollop@ntlworld.com (Julian Gollop) Date: Wed, 6 Jun 2001 13:10:33 +0100 Subject: [DB-SIG] db.update() in pygresql fails! Message-ID: Hello everybody, I have just started using pygresql with postgreSQL version 7.0.2, and python 1.5.2 The db wrapper class seems really convenient, but my code fails from pg import * conn = DB('codo', user='postgres' ) tgame = conn.get( 'lsn_game', gameVar['game_id], 'game_id' ) #get record for game conn.update( 'lsn_game', tgame ) #update record the update() fails with KeyError: 'lsn_game' I am sure this should work. I am not sure what version of the pg module this is (it came with the python 1.5.2) Any help would be appreciated. Julian From darcy@druid.net Wed Jun 6 15:16:31 2001 From: darcy@druid.net (D'Arcy J.M. Cain) Date: Wed, 6 Jun 2001 10:16:31 -0400 (EDT) Subject: [DB-SIG] db.update() in pygresql fails! In-Reply-To: "from Julian Gollop at Jun 6, 2001 01:10:33 pm" Message-ID: <20010606141631.956021A8C@druid.net> Thus spake Julian Gollop > Hello everybody, > I have just started using pygresql with postgreSQL version 7.0.2, and python > 1.5.2 > The db wrapper class seems really convenient, but my code fails > > from pg import * > > conn = DB('codo', user='postgres' ) > tgame = conn.get( 'lsn_game', gameVar['game_id], 'game_id' ) #get > record for game > conn.update( 'lsn_game', tgame ) > #update record > > the update() fails with > > KeyError: 'lsn_game' > > I am sure this should work. 
I am not sure what version of the pg module this > is (it came with the python 1.5.2) You should really be on the PygreSQL list for this as this is using the "Classic" interface, not the DB-API. Anyway, now that we are here, the only thing I can think of without seeing your schema is that you don't have a primary key on lsn_game although it is supposed to handle that. Send me your schema privately and I will try to help you. -- D'Arcy J.M. Cain | Democracy is three wolves http://www.druid.net/darcy/ | and a sheep voting on +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner. From ryanw@inktomi.com Wed Jun 6 23:48:16 2001 From: ryanw@inktomi.com (Ryan Weisenberger) Date: Wed, 06 Jun 2001 15:48:16 -0700 Subject: [DB-SIG] International characters in database table names Message-ID: <4.3.2.7.2.20010606154208.02763370@inkt-3.inktomi.com> We're currently using mxODBC 1.1.1 to retrieve content from databases for our search engine product. Recently we've started running into the problem of international characters in database table names and column names. What is the standard for this? Currently, trying to send a unicode SQL query in mxODBC 1.1.1 causes an error (exceptions.TypeError: SQL command must be a string). I'm assuming that this is resolved in mxODBC 2.0, but I'm not sure how to use it. Can anyone describe to me what the "standard" method is for accessing content stored in a table or column with an international character set? (using mxODBC, if possible.) Thanks a lot, Ryan Weisenberger Inktomi Corp. From mal@lemburg.com Thu Jun 7 09:20:42 2001 From: mal@lemburg.com (M.-A. Lemburg) Date: Thu, 07 Jun 2001 10:20:42 +0200 Subject: [DB-SIG] International characters in database table names References: <4.3.2.7.2.20010606154208.02763370@inkt-3.inktomi.com> Message-ID: <3B1F395A.80038B47@lemburg.com> Ryan Weisenberger wrote: > > We're currently using mxODBC 1.1.1 to retrieve content from databases for > our search engine product. > > Recently we've started running into the problem of international characters > in database table names and column names. > > What is the standard for this? Currently, trying to send a unicode SQL > query in mxODBC 1.1.1 causes an error (exceptions.TypeError: SQL command > must be a string). I'm assuming that this is resolved in mxODBC 2.0, but > I'm not sure how to use it. mxODBC 2.0 has experimental Unicode support for the column data on input and output. It does not support sending SQL statements using Unicode, nor does it provide a way to extract the column names (in cursor.description) using Unicode. The reason for this is that most ODBC drivers don't work well with Unicode and mxODBC would have to use a complete new ODBC API to get at those Unicode table names (the SQLxxxW() APIs). > Can anyone describe to me what the "standard" method is for accessing > content stored in a table or column with an international character set? > (using mxODBC, if possible.) It should be possible to tell the database or the ODBC driver to translate the Unicode names into e.g. UTF-8. These can then be accessed using mxODBC (even 1.1.1) in the normal way. Please refer to the ODBC driver or database documentation on how this can be achieved. 
Hope this helps, -- Marc-Andre Lemburg CEO eGenix.com Software GmbH ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Software: http://www.lemburg.com/python/ From matt@digicool.com Fri Jun 8 04:06:19 2001 From: matt@digicool.com (Matt Kromer) Date: Thu, 07 Jun 2001 23:06:19 -0400 Subject: [DB-SIG] DCOracle2 Beta 2 Message-ID: Description DCOracle2 is a Python binding to Oracle 8. DCOracle2 is a replacement for DCOracle, written primarily in C. DCOracle 1 uses OCI 7 bindings for most Oracle calls, with OCI 8 mixed in for LOB support. Oracle 8i disallows mixing of calls within a statement, and so breaks LOB support. DCO2 uses entirely OCI 8 calls, and thus can use LOBs. New in this Release Beta 2 Fix ZOracleDA attempting to fetch on non-select. Add explicit C Cursor.close() to break cycle allowing cursor to be deallocated. Change fetchmany() and fetchall() to return None if no results remain rather than []. Several connection/cursor and cursor/procedure cycle deferring actions. SPARC byteorder fixes for stored procedures. Also, binary builds are available for Solaris and Linux i386. Contents This release contains both DCOracle2 and a slightly modified ZOracleDA; it will register as ZOracleDA would (to silently upgrade Oracle connections) and thus cannot be run concurrently with ZOracleDA/DCOracle. Installation To replace ZOracleDA, untar into lib/python/Products and make, move ZOracleDA out of lib/python/Products, and rename lib/python/Products/DCO2 to lib/python/Products/ZOracleDA. Usage This release is intended for testing with ZOracleDA feature compatibility (including LOB support) and is also intended for general use. Platforms NT support has been tested, Microsoft Visual Studio project files are included; this has only received testing with Oracle 8.0 and Oracle 8.1 on Linux, Solaris, and Windows NT; a wider variety of platform experience is welcomed. Download The product is available at http://www.zope.org/Members/matt/dco2 From kromag@nsacom.net Tue Jun 12 20:30:38 2001 From: kromag@nsacom.net (kromag@nsacom.net) Date: Tue, 12 Jun 2001 12:30:38 -0700 (PDT) Subject: [DB-SIG] Taking exeptions to threads and Access... Message-ID: <200106121930.f5CJUcg26906@pop.nsacom.net> I can't for the life of me figure out what is causing this to barf... -----------begin script----------- import win32com.client import random import time import string import thread engine=win32com.client.Dispatch("DAO.DBEngine.35") db=engine.OpenDatabase("\windows\desktop\terror.mdb") ## Function write() writes a list to the database def write(inputtage): time=inputtage[0] data_string=inputtage[1] db.Execute("insert into data values(%f, '%s')" %(time, data_string)) return 'ok' if __name__=='__main__': tik_tok=time.time() surprize=random.choice(['Hellbilly', 'Crunchy Tack', 'Feeble']) the_madness=(tik_tok, surprize) none=0 thread.start_new_thread(write,(the_madness,)) -------------end script------------ It returns: -----------begin error------------- Unhandled exception in thread: Traceback (most recent call last): File "dbwrite.py", line 14, in write db.Execute("insert into data values(%f, '%s')" %(time, data_string)) AttributeError: 'None' object has no attribute 'Execute' >Exit code: 0 ----------end error--------------- I am using: Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32 on winders 95. 
It looks like to me that since I am passing an empty value in: thread.start_new_thread(write,(the_madness,)) that I somehow need to strip it out somewhere in the function... or not! Can anyone suggest anything? Thanks! d From kromag@nsacom.net Tue Jun 12 20:52:02 2001 From: kromag@nsacom.net (kromag@nsacom.net) Date: Tue, 12 Jun 2001 12:52:02 -0700 (PDT) Subject: [DB-SIG] Taking exeptions to threads and Access... Message-ID: <200106121952.f5CJq2g28437@pop.nsacom.net> I can't for the life of me figure out what is causing this to barf... -----------begin script----------- import win32com.client import random import time import string import thread engine=win32com.client.Dispatch("DAO.DBEngine.35") db=engine.OpenDatabase("\windows\desktop\terror.mdb") ## Function write() writes a list to the database def write(inputtage): time=inputtage[0] data_string=inputtage[1] db.Execute("insert into data values(%f, '%s')" %(time, data_string)) return 'ok' if __name__=='__main__': tik_tok=time.time() surprize=random.choice(['Hellbilly', 'Crunchy Tack', 'Feeble']) the_madness=(tik_tok, surprize) none=0 thread.start_new_thread(write,(the_madness,)) -------------end script------------ It returns: -----------begin error------------- Unhandled exception in thread: Traceback (most recent call last): File "dbwrite.py", line 14, in write db.Execute("insert into data values(%f, '%s')" %(time, data_string)) AttributeError: 'None' object has no attribute 'Execute' >Exit code: 0 ----------end error--------------- I am using: Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32 on winders 95. It looks like to me that since I am passing an empty value in: thread.start_new_thread(write,(the_madness,)) that I somehow need to strip it out somewhere in the function... or not! Can anyone suggest anything? Thanks! d From kromag@nsacom.net Tue Jun 12 21:48:00 2001 From: kromag@nsacom.net (kromag@nsacom.net) Date: Tue, 12 Jun 2001 13:48:00 -0700 (PDT) Subject: [DB-SIG] Taking exeptions to threads and Access... Message-ID: <200106122048.f5CKm0g10305@pop.nsacom.net> Anthony Tuininga said: > Yes, I was just trying to see if the value of the OpenDatabase() call was > None; one way is to "print db" as you have done; the other is to have the > statements > > if db is None: > raise "BAD THINGS HAVE HAPPENED" > > or something similar. How about "AAAIIIIGHH!"? > > I have not used OpenDatabase() using the COM stuff -- ODBC is irritating to > me and I prefer to use the Python DB API instead in any case. You and me both! > What database > are you trying to access? There are several modules available and it is > likely that your database is supported by one of them..... It is *choke* Access97 SR2. If there is a better module than win32com.client I will be glad to give it a whirl. Suggestions anyone? d From kromag@nsacom.net Tue Jun 12 21:51:17 2001 From: kromag@nsacom.net (kromag@nsacom.net) Date: Tue, 12 Jun 2001 13:51:17 -0700 (PDT) Subject: [DB-SIG] Taking exeptions to threads and Access... Message-ID: <200106122051.f5CKpHg26694@pop.nsacom.net> Anthony Tuininga said: > Yes, I was just trying to see if the value of the OpenDatabase() call was > None; one way is to "print db" as you have done; the other is to have the > statements > > if db is None: > raise "BAD THINGS HAVE HAPPENED" > > or something similar. How about "AAAIIIIGHH!"? > > I have not used OpenDatabase() using the COM stuff -- ODBC is irritating to > me and I prefer to use the Python DB API instead in any case. You and me both! 
> What database > are you trying to access? There are several modules available and it is > likely that your database is supported by one of them..... It is *choke* Access97 SR2. If there is a better module than win32com.client I will be glad to give it a whirl. Suggestions anyone? d From Benjamin.Schollnick@usa.xerox.com Tue Jun 12 19:56:38 2001 From: Benjamin.Schollnick@usa.xerox.com (Schollnick, Benjamin) Date: Tue, 12 Jun 2001 14:56:38 -0400 Subject: [DB-SIG] Taking exceptions to threads and Access... Message-ID: | It is *choke* Access97 SR2. If there is a better module than | win32com.client | I will be glad to give it a whirl. Suggestions anyone? I was using MxODBC, but had to stop due to licensing issues... Right now, I changed two lines of code, and moved to win32.ODBC .... My understanding is that it's a little old, but I haven't noticed any major differences between it, and MxODBC.... It's saved me a lot of extra's in DLLs, etc, and works correctly with Gordon's Installer.... Unlike MxODBC and the later Egenix's version(s) of MxODBC.... I would also be interested in hearing of a newer package, but win32.ODBC works fine with ODBC connections to Access 97.... - Benjamin From Benjamin.Schollnick@usa.xerox.com Tue Jun 12 20:39:57 2001 From: Benjamin.Schollnick@usa.xerox.com (Schollnick, Benjamin) Date: Tue, 12 Jun 2001 15:39:57 -0400 Subject: [DB-SIG] Taking exceptions to threads and Access... Message-ID: It's included in the ActiveState Python collection, which is available from Python.org's download page. - Benjamin -----Original Message----- From: kromag@nsacom.net [mailto:kromag@nsacom.net] Sent: Tuesday, June 12, 2001 5:44 PM To: Schollnick, Benjamin; Schollnick, Benjamin; 'kromag@nsacom.net' Cc: db-sig@python.org Subject: RE: [DB-SIG] Taking exceptions to threads and Access... "Schollnick, Benjamin" said: > > I was using MxODBC, but had to stop due to licensing issues... > Right now, I changed two lines of code, and moved to win32.ODBC .... My > understanding is that it's a little old, but I haven't noticed any major > differences between it, and MxODBC.... Hmmm. I haven't tried MxODBC either. > I would also be interested in hearing of a newer package, but win32.ODBC > works fine with ODBC connections to Access 97.... Sounds good. I was unable to find it on Python.org, vaults of parnassus or google. Anyone know of a site? Thanks! d > > - Benjamin > From zen@shangri-la.dropbear.id.au Sun Jun 17 14:47:42 2001 From: zen@shangri-la.dropbear.id.au (Stuart Bishop) Date: Sun, 17 Jun 2001 23:47:42 +1000 (EST) Subject: [DB-SIG] SQL PEP for discussion Message-ID: I've put togther a PEP for a standard wrapper for DB compliant drivers. I'd like to get peoples thoughts on it before I submit it anywhere. In particular, if the SIG thinks this is the correct approach to take before I do further work on it. I'd also like to find out if there are similar projects that have had work started on them, as they may make this work irrelevant. PEP: TBA Title: Standard Wrapper For SQL Databases Version: $Revision: 1.4 $ Author: zen@shangri-la.dropbear.id.au (Stuart Bishop) Discussions-To: db-sig@python.org Status: pre submission Draft Type: Standards Track Requires: 234,248,249 Created: 17-Jun-2001 Post-History: Never Abstract Python has had a solid Database API for SQL databases since 1996 [1]. 
This API has been intentially kept lean to make it simpler to develop and maintain drivers, as a richer feature set could be implemented by a higher level wrapper and maintained in a single place rather than in every API compliant driver. The goal of this PEP is to define and implement such a wrapper, and merge it into the standard Python distribution. Copyright This document has been placed in the public domain. Specification Use of this wrapper requires a DB API v2.0 compliant driver to be installed in sys.path. The wrapper may support DB API v1.0 drivers (to be determined). Module Interface (sql.py) connect(driver,dsn,user,password,host,database) driver -- The name of the DB API compliant driver module. dsn -- Datasource Name user -- username (optional) password - password (optional) host -- hostname or network address (optional) database -- database name (optional) Returns a Connection object Does the connect function need to accept arbitrary keyword arguments? Exceptions (unchanged from DB API 2.0) Exceptions thrown need to be subclasses of those defined in the sql module, to allow calling code to catch them without knowning which driver module they were thrown from. This is particularly of use for code that is connecting to multiple databases using different drivers. Connection Object close() commit() rollback() cursor() As per DB API 2.0. The rollback method will raise a NotSupportedError exception if the driver doesn't support transactions. quote(object) Returns a ANSI SQL quoted version of the given value as a string. For example: >>> print con.quote(42) 42 >>> print con.quote("Don't do that!") 'Don''t do that!' Note that this cannot currently be done for dates, which generally have a RDBMS dependant syntax. This would need to be resolved in the next version of the DB API spec, and is the reason why this is the method of the connection object as opposed to a function. A quote method is invaluable for generating logs of SQL commands or for dynamically generating SQL queries. execute(operation,[seq_of_parameters]) A Cursor is created and its execute method is called. Returns the newly created cursor. for row in con.execute('select actor,year from sketches'): [ ... ] Insert note abour cursor creation overheads and how to avoid here. Should cursor handling be hidden? It would be quite possible to have the connection object maintain a pool of cursors which are used to create iterators. The iterator would return the cursor to the pool when iteration is completed, or in its destructor. What should the default format be for bind variables? driver_connection() Return the unwrapped connection object as produced by the driver. This may be required for accessing RDBMS specific features. capabilities A dictionary describing the capabilities of the driver. Currently defined values are: apilevel String constant stating the supported DB API level of the driver. Currently only the strings '1.0' and '2.0' are allowed. threadsafety As per DB API 2.0. Irrelevant if we can trasparently enforce good threading. rollback 1 if the driver supports the rollback() method 0 if the driver does not support the rollback() method nextset 1 if the driver's cursors supports the nextset() method 0 if the nextset() method is not supported. Cursor Object The cursor object becomes an iterator after its execute method has been called. Rows are retrieved using the drivers fetchmany(arraysize) method. execute(operation,sequence_of_parameters) As per the executemany method in the DB API 2.0 spec. 
Is there any need for both execute and executemany in this API? Returns self, so the following code is valid for row in mycursor.execute('select actor,year from sketches'): [ ... ] What should the default format be for bind variables? callproc(procedure,[parameters]) arraysize description rowcount close() setinputsizes(size) setoutputsizes(size[,column] As per DB API 2.0. Perhaps descriptions should be trimmed to the basics of column name & datatype? driver_cursor() Return the unwrapped cursor object as produced by the driver. This may be required to access driver specific features. next() Return the Row object for the next row from the currently executing SQL statement. As per DB API 2.0 spec, except a StopIteration exception is raised when the result set is exhausted. __iter__() Returns self. nextset() I guess as per DB API 2.0 spec. Row Object When a Cursor is iterated over, it returns Row objects. [index_or_key] Retrieve a field from the Row. If index_or_key is an integer, the column of the field is referenced by number (with the first column index 0). If index_or_key is a string, the column is referenced by name. Note that referencing columns by name may cause problems if you are trying to write platform independant code and should be avoided, as different vendors capitalize their column names differently. Type Objects and Constructors As per DB API 2.0 spec. It would be nice if there was a more intelligent standard Date class in the Python core that we could leverage. It is probably worth putting this in another PEP that we would depend on. Rationale The module is called sql.py, to avoid any ambiguity with non-realational or non-SQL compliant database interfaces. This also nicely limits the scope of the project. RDBMS abstraction classes, such as Python dictionary -> RDBMS table mappings, object persistance, connection pools etc. are better suited to other modules (or submodules) and are _currently_ beyond the scope of this PEP. The core of the API is identical to the Python Database Interface v2.0 (PEP-249). This API is already familiar to Python programers and is a proven solid foundation. To this core I have added some helper functions and iterator support (PEP-234). Python previously defined a common relational database API that was implemented by all drivers, and application programmers accessed the drivers directly. This caused the following issues: It was difficult to write code that connected to multiple database vendor's databases. Each seperate driver used defined its own heirarchy of exceptions that needed to be handled, and similar problems occurred with identifying column datatypes etc. Platform independant code could not be written simply, due to the differing paramater styles used by bind variables. This also caused problems with publishing example code and tutorials. The API remained minimal, as any new features would need to be implemented by all driver maintainers. The DB-SIG felt most feature suggestions would be better implemented by higher level wrappers, such as that defined by this PEP. More than a few people didn't realise that Python _had_ a database API, as it required people to find the DB-SIG on python.org. The wrapper will ease writing RDBMS independant code: The wrapper will enforce thread safety. It is unknown at this stage if this will be done by passing calls to a single worker thread for non thread safe drivers, or by raising an exception if an attempt is made to use a driver in a unsafe manner. (Can we enforce thread safety? 
Thread ID's get reused, so it is possible for the following to happen: Thread ID 10 is created Thread ID 10 creates a cursor 'c' Thread ID stops running A new thread created, and happens to have thread ID 10 again. The new thread ID 10 attempts to use cursor 'c' Fixing this might require another PEP, and probably is impossible if we allow for threads being spawned from C code.) The driver name is passed to the connect method as a string, rather than importing the driver module, to allow the driver being used by an application to be defined in a configuration file or other resource file. Bound parameters are specified in a common format, and translated to the native format used by the driver. Some basic guidelines on writing RDBMS independant code will be provided in the documentation (this is not always obvious to developers who only work with one vendor's database system). Only one heirarchy of exceptions needs to be caught, as opposed to one heirarchy per driver being used. Language Comparison The design proposed in this PEP is the same as that used by Java, where a relational database API is shipped as part of the core language, but requires the installation of 3rd party drivers to be used. Perl has a similar arrangement of API and driver separation. Perl does not ship with PerlDBI or drivers as part of the core language, but it is well documented and trivial to add using the CPAN tools. PHP has a seperate API for each database vendor, although work is underway (completed?) to define a common abstraction layer similar to Java, Perl and Python. All of the drivers ship as part of the core language. Gadfly Gadfly is a RDBMS written in Python by (insert name of author when I'm online). If Gadfly was also included in the Python distribution, it would be extremly useful as a learning tool. It would also allow the creation of products that require a RDBMS back end, yet work with any standard Python installation. Gadfly has already been shipped as part of Digital Creation's Zope for similar reasons. Shipping a fully functional RDBMS with Python would also be useful for the nyer-nyer factor in language advocacy. Should Gadfly be included in the core Python distribution as a default RDBMS? Hmm... this is beyond the scope of the abstract and should be a seperate PEP (esp as it isn't dependant on this PEP). I'll leave it here for now as a discussion point. References [1] PEP-248 and PEP-249 -- Stuart Bishop From fog@mixadlive.com Sun Jun 17 15:37:56 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: 17 Jun 2001 16:37:56 +0200 Subject: [DB-SIG] SQL PEP for discussion In-Reply-To: References: Message-ID: <992788676.15197.1.camel@lola> personally i am quite against such a wrapper. here are some critics... On 17 Jun 2001 23:47:42 +1000, Stuart Bishop wrote: [snip] > > Python previously defined a common relational database API that > was implemented by all drivers, and application programmers accessed > the drivers directly. This caused the following issues: > > It was difficult to write code that connected to multiple > database vendor's databases. Each seperate driver used defined > its own heirarchy of exceptions that needed to be handled, and > similar problems occurred with identifying column datatypes etc. DBAPI specifies how column datatypes should be treated and identified. > Platform independant code could not be written simply, > due to the differing paramater styles used by bind variables. > This also caused problems with publishing example code and tutorials. 
> > The API remained minimal, as any new features would need to > be implemented by all driver maintainers. The DB-SIG felt > most feature suggestions would be better implemented by higher > level wrappers, such as that defined by this PEP. but this pep does not define an higher level or meny new features. 90% of the api is a re-definition of the stuff in the DBAPI documenti. > More than a few people didn't realise that Python _had_ a database > API, as it required people to find the DB-SIG on python.org. so lets make the DB-SIG and the dbapi document more visible. you won't sole the problem by adding yet another layer. > The wrapper will ease writing RDBMS independant code: > > The wrapper will enforce thread safety. It is unknown at this stage > if this will be done by passing calls to a single worker thread for > non thread safe drivers, or by raising an exception if an attempt > is made to use a driver in a unsafe manner. if you're awriting a multithreaded application, your code is only part of the whole. you need to think carefully about the db, the driver, etc. if you really *need* multithreading independant code is the last of your problems. > The driver name is passed to the connect method as a string, > rather than importing the driver module, to allow the driver being > used by an application to be defined in a configuration file or > other resource file. no need for that. look at the (preliminary) test suite in psycopg and how you can give it the name of the module to be loaded on the command line. it a three-liner... > Bound parameters are specified in a common format, and translated > to the native format used by the driver. this *is* usefull, but you don't need to completely reimplement the dbapi for that. just a couple of functions will do it. > Some basic guidelines on writing RDBMS independant code will > be provided in the documentation (this is not always obvious > to developers who only work with one vendor's database system). my main concern is that rdbms independant code is not that much important as you say. defferent db have different features and only the most basic application uses a common subset of sql. when you write an application, choosing the right db, is an integral part of the development process. the dbapi is there for, imho, the developer being able to switch from db to another without haveing to re-learn a completely new api, not for automagically porting applications froma db to anothr. porting an application always involves some work and changing a couple of sql statements or some function calls is not the worse part of the process. anyway, a wrapper with some utility functions is, imho, a very good idea. things *really* db independent like a function that transform any bound parameters to a particular format and such. i'll call it sqlutils.py, though. ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org All programmers are optimists. -- Frederick P. Brooks, Jr. From zen@shangri-la.dropbear.id.au Mon Jun 18 09:55:00 2001 From: zen@shangri-la.dropbear.id.au (Stuart Bishop) Date: Mon, 18 Jun 2001 18:55:00 +1000 (EST) Subject: [DB-SIG] SQL PEP for discussion In-Reply-To: <992788676.15197.1.camel@lola> Message-ID: > > It was difficult to write code that connected to multiple > > database vendor's databases. 
Each seperate driver used defined > > its own heirarchy of exceptions that needed to be handled, and > > similar problems occurred with identifying column datatypes etc. > > DBAPI specifies how column datatypes should be treated and identified. What I was getting at here was they are defined in each driver. The following code attempts to show some of the issues I have been considering try: con_a = drivera.connect(....) con_b = driverb.connect(....) con_c = driverc.connect(....) cur_a = con_a.cursor() cur_b = con_b.cursor() cur_c = con_c.cursor() cur_a.execute('select c1,c2,c3 from sometable') cur_b.execute('select c1,c2,c3 from sometable') # How can I check that the table being accessed by cur_a and cur_b # have the same datatypes? I can't say # cur_a.description[x] == cur_b.description[x]. for cur in (cur_a,cur_b): row = cur.fetchrow() while row is not None: # Note that the bind parameter placeholders may change if we use # a different DB vendor for driver C, and row may need to become # a mapping # cur_c.execute('insert into sometable values (:p1,:p2,:p3)',row) # Note that the exceptions thrown from the drivers do not have a common base # class we can test for. # except (drivera.Error,driverb.Error,driverc.Error),x: print >>log,'DB error - %s' % str(x) raise > but this pep does not define an higher level or meny new features. 90% > of the api is a re-definition of the stuff in the DBAPI documenti. I'm expecting suggestions and possibly implementations from the SIG. I just put in what I consider the minimum at this stage. > so lets make the DB-SIG and the dbapi document more visible. you won't > sole the problem by adding yet another layer. Yup. If this or a similar PEP doesn't go ahead, I'd personally like to see the DB API spec moved into the library reference. > if you're awriting a multithreaded application, your code is only part > of the whole. you need to think carefully about the db, the driver, etc. > if you really *need* multithreading independant code is the last of your > problems. Yes - I can now see situations where this could lead to deadlocks so this will need to be scrapped :-( > anyway, a wrapper with some utility functions is, imho, a very good > idea. things *really* db independent like a function that transform any > bound parameters to a particular format and such. i'll call it > sqlutils.py, though. One of the things that I forgot to put in the PEP was that this wrapper would effectivly replace DB API 3.0. As it stands, it would be able to add iterator support and a common bind parameter style to all DB API 2.0 and probably all DB API 1.0 compliant drivers, without the driver authors having to update their code. The v2.0 spec was finalized April 1999, and a number of drivers are still being developed. Assuming iterator support rolls out with Python 2.2, this spec will become obsolete IMHO (if anything can obviously make use of iterators, it is a relational database API). I've thrown this PEP into the ring as one approach to tackling these issues, by simultaneously updating the API and updating most if not all of the existing drivers. Another approach is as Federico suggests, and having a library of utility functions to work with the DB drivers. A third approach would be to define DB API 3.0 so drivers make use of a shared library of Exceptions, constants and Mixins. -- Stuart Bishop From mal@lemburg.com Mon Jun 18 10:47:50 2001 From: mal@lemburg.com (M.-A. 
Lemburg) Date: Mon, 18 Jun 2001 11:47:50 +0200 Subject: [DB-SIG] SQL PEP for discussion References: Message-ID: <3B2DCE46.C0F6D4A0@lemburg.com> Stuart Bishop wrote: > > > > It was difficult to write code that connected to multiple > > > database vendor's databases. Each seperate driver used defined > > > its own heirarchy of exceptions that needed to be handled, and > > > similar problems occurred with identifying column datatypes etc. > > > > DBAPI specifies how column datatypes should be treated and identified. > > What I was getting at here was they are defined in each driver. > > The following code attempts to show some of the issues I have been > considering > > try: > con_a = drivera.connect(....) > con_b = driverb.connect(....) > con_c = driverc.connect(....) > > cur_a = con_a.cursor() > cur_b = con_b.cursor() > cur_c = con_c.cursor() > > cur_a.execute('select c1,c2,c3 from sometable') > cur_b.execute('select c1,c2,c3 from sometable') > > # How can I check that the table being accessed by cur_a and cur_b > # have the same datatypes? I can't say > # cur_a.description[x] == cur_b.description[x]. True. You'd first have to map the description value to one of the standard DB API type objects and then compare these. > for cur in (cur_a,cur_b): > row = cur.fetchrow() > while row is not None: > > # Note that the bind parameter placeholders may change if we use > # a different DB vendor for driver C, and row may need to become > # a mapping > # > cur_c.execute('insert into sometable values (:p1,:p2,:p3)',row) Also true -- this case is normally handled with in an abstraction class around connection and cursor objects. > # Note that the exceptions thrown from the drivers do not have a common base > # class we can test for. > # > except (drivera.Error,driverb.Error,driverc.Error),x: > print >>log,'DB error - %s' % str(x) > raise Dito here: the abstraction class can make this possible by catching the exceptions and reraising them using a common base class. IMHO, the cursors or connection objects should provide access to the exceptions objects too: that way you would be able to catch DB errors in a database interface independent way (at the cost of polluting the cursor/connection attribute namespace). > > but this pep does not define an higher level or meny new features. 90% > > of the api is a re-definition of the stuff in the DBAPI documenti. > > I'm expecting suggestions and possibly implementations from the SIG. I > just put in what I consider the minimum at this stage. I think that rather than duplicating a the DB API layer on top of the existing DB interfaces, we should provide the DB interface writers with more standard tools. E.g. a helper which converts between the different parameter binding schemes would be nice (I think someone already wrote such a tool...). We could add these to a standard Python modules dbtools.py. > > so lets make the DB-SIG and the dbapi document more visible. you won't > > sole the problem by adding yet another layer. > > Yup. If this or a similar PEP doesn't go ahead, I'd personally like to > see the DB API spec moved into the library reference. The DB API is already available as PEP. I am not sure what Fred thinks about adding these informational PEPs to the std lib reference. > > if you're awriting a multithreaded application, your code is only part > > of the whole. you need to think carefully about the db, the driver, etc. > > if you really *need* multithreading independant code is the last of your > > problems. 
> > Yes - I can now see situations where this could lead to deadlocks so > this will need to be scrapped :-( > > > anyway, a wrapper with some utility functions is, imho, a very good > > idea. things *really* db independent like a function that transform any > > bound parameters to a particular format and such. i'll call it > > sqlutils.py, though. > > One of the things that I forgot to put in the PEP was that > this wrapper would effectivly replace DB API 3.0. As it stands, > it would be able to add iterator support and a common bind > parameter style to all DB API 2.0 and probably all DB API 1.0 > compliant drivers, without the driver authors having to update > their code. > > The v2.0 spec was finalized April 1999, and a number of drivers are still > being developed. Assuming iterator support rolls out with Python 2.2, > this spec will become obsolete IMHO (if anything can obviously make use > of iterators, it is a relational database API). I've thrown this PEP into the > ring as one approach to tackling these issues, by simultaneously updating the > API and updating most if not all of the existing drivers. > > Another approach is as Federico suggests, and having a library of > utility functions to work with the DB drivers. > > A third approach would be to define DB API 3.0 so drivers make use > of a shared library of Exceptions, constants and Mixins. That would be similar to the XML parser approach: you pass in drivers and the XML lib uses it only as backend, providing a consistent interface to the user. Not a bad idea, but in certain situations you really want to get full access to the underlying driver, e.g. to call driver specific APIs which only apply to the specific driver context. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Software: http://www.lemburg.com/python/ From djc@object-craft.com.au Tue Jun 19 04:28:54 2001 From: djc@object-craft.com.au (Dave Cole) Date: 19 Jun 2001 13:28:54 +1000 Subject: [DB-SIG] Sybase module 0.26 released Message-ID: What is it: The Sybase module provides a Python interface to the Sybase relational database system. The Sybase package supports almost all of the Python Database API, version 2.0 with extensions. The module works with Python versions 1.5.2 and later and Sybase versions 11.0.3 and later. It is based on the Sybase Client Library (ct_* API), and the Bulk-Library Client (blk_* API) interfaces. The 0.20 and later releases are a reimplementation of the module using a thin C wrapper on the Sybase-CT API, and a Python module to provide the DB-API functionality. It is still a work in progress, but should be good enough for most purposes. Changes for this release: - A collection of Sybase example programs was found and converted to use the sybasect module. This highlighted some bugs and many omissions. For the curious the example programs have been included in the release. array_bind.py diag_example.py mult_text.py bulkcopy.py dynamic_cur.py params.py cursor_sel.py dynamic_ins.py rpc.py cursor_upd.py example.py timeout.py On the whole I have avoided relying on using Sybase CT library callback functions. The timeout.py example program requires the use of a callback. Since callbacks cause the Python interpreter to be reentered, you cannot compile the module with multi-thread support. 
This is controlled via the WANT_THREADS #define in sybasect.h - The ntsetup.py distutils program was merged into the setup.py - The Buffer type was renamed to DataBuf to avoid type name clashes with the Python BufferType. - Bug was fixed in blk_bind() which was passing Python type object by value instead of by reference - oops. - All of the extension types in the sybasect module are now exported. - More work has been done on the documentation. There is very little outstanding programming work for the module. Most future work will be concentrated on the documentation. Where can you get it: http://www.object-craft.com.au/projects/sybase/ - Dave -- http://www.object-craft.com.au From djc@object-craft.com.au Tue Jun 26 11:33:45 2001 From: djc@object-craft.com.au (Dave Cole) Date: 26 Jun 2001 20:33:45 +1000 Subject: [DB-SIG] Sybase module 0.27 released Message-ID: What is it: The Sybase module provides a Python interface to the Sybase relational database system. The Sybase package supports almost all of the Python Database API, version 2.0 with extensions. The module works with Python versions 1.5.2 and later and Sybase versions 11.0.3 and later. It is based on the Sybase Client Library (ct_* API), and the Bulk-Library Client (blk_* API) interfaces. The 0.20 and later releases are a re-implementation of the module using a thin C wrapper on the Sybase-CT API, and a Python module to provide the DB-API functionality. It is still a work in progress, but should be good enough for most purposes. Changes for this release: - Sybase.py module no longer imports exceptions module. - Optional auto_commit argument has been added to Sybase.connect(). The default value is 0. - Optional delay_connect argument has been added to Sybase.connect(). The default value is 0. This allows you to manipulate the Sybase connection before connecting to the server. >>> import Sybase >>> db = Sybase.connect(server, user, passwd, delay_connect = 1) >>> db.set_property(Sybase.CS_HOSTNAME, 'secret') >>> db.connect() - Removed redundant argument from sybasect.ct_data_info() - Added pickle capability to NumericType - I somehow forgot to copy this over from the old 0.13 module. - Re-arranged sybasect.h to make it easier to follow - I hope. - Documentation updates. - Dave -- http://www.object-craft.com.au From djc@object-craft.com.au Wed Jun 27 04:05:27 2001 From: djc@object-craft.com.au (Dave Cole) Date: 27 Jun 2001 13:05:27 +1000 Subject: [DB-SIG] Sybase module 0.28 (Brown Paper Bag) released Message-ID: What is it: The Sybase module provides a Python interface to the Sybase relational database system. The Sybase package supports almost all of the Python Database API, version 2.0 with extensions. The module works with Python versions 1.5.2 and later and Sybase versions 11.0.3 and later. It is based on the Sybase Client Library (ct_* API), and the Bulk-Library Client (blk_* API) interfaces. The 0.20 and later releases are a re-implementation of the module using a thin C wrapper on the Sybase-CT API, and a Python module to provide the DB-API functionality. It is still a work in progress, but should be good enough for most purposes. Changes for this release: After shooting my mouth off about the cool things that you can do with bulkcopy I went back and tested my claims. I found that I had not implemented support for the bulkcopy optional argument to Sybase.connect()... 
Changes for this release: - The following claim I made earlier today on comp.lang.python is now true: If your source data is CSV format then you can use another one of the modules I wrote to load it into a format suitable for feeding the bulkcopy object in the Sybase module. The module was written specifically to handle the type of data produced by Access and Excel. http://www.object-craft.com.au/projects/csv/ >>> import Sybase, csv >>> >>> db = Sybase.connect('SYBASE', 'user', 'password', bulkcopy = 1, auto_commit = 1) >>> db.execute('create table #bogus (name varchar(40), num int)') >>> >>> p = csv.parser() >>> bcp = db.bulkcopy('#bogus') >>> for line in open('datafile').readlines(): >>> fields = p.parse(line) >>> if fields: >>> bcp.rowxfer(fields) >>> print 'Loaded', bcp.done(), 'rows' - Documentation updates. - Dave P.S. Hopefully there will be a period of more than one day before the next release... -- http://www.object-craft.com.au From nathan@geerbox.com Wed Jun 27 04:50:59 2001 From: nathan@geerbox.com (Nathan Clegg) Date: Tue, 26 Jun 2001 20:50:59 -0700 Subject: [DB-SIG] any 2.0-compliant packages for postgresql? Message-ID: I've tried PyGreSQL, psycopg, and PoPy. None of them appear to properly support parameter binding (specifically, they do not implement the details from footnote 5 in the DB API 2.0 specification). Are there any packages out there for postgresql that fully implement 2.0? Also, which of these (or others) work properly with python 2.1? I know PyGreSQL has not been updated since 1.52. Seems the change in how long ints are handled can adversely affect SQL handling. From fog@mixadlive.com Wed Jun 27 09:04:46 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: 27 Jun 2001 10:04:46 +0200 Subject: [DB-SIG] any 2.0-compliant packages for postgresql? In-Reply-To: References: Message-ID: <993629087.1302.1.camel@lola> psycopg fully implements dbapi-2.0 (apart from the missing Binary constructor, which will be added before the 1.0 release). On 26 Jun 2001 20:50:59 -0700, Nathan Clegg wrote: > I've tried PyGreSQL, psycopg, and PoPy. None of them appear to properly > support parameter binding (specifically do not implement the details from > footnote 5 in the DB API 2.0 specification). psycopg uses the 'pyformat' param style, so it forces you to use a *mapping* to pass in the attributes. if somebody knows of a way to specify parameters by *position* using pyformat, allowing the use of sequences, i will be glad to implement it. time for a question now... Q: psycopg correctly puts strings obtained by the Date and Time constructors inside quotes ('') as requested by the api. should it do the same (and maybe escape them) for normal strings? if that's the case don't we need a String constructor? how is the module supposed to know when a string is a real string and needs quoting, and when i am passing it just as a convenience but it really is a number? example... d = {'id':'768', 'name':'a number'} curs.execute("CREATE TABLE test (id int4, name text)") curs.execute("INSERT INTO test VALUES (%(id)s, %(name)s)", d) this will obviously produce the following (wrong) SQL: INSERT INTO test VALUES (768, a number) what i really wanted was 'a number', quoted. but how can the module understand that? > Are there any packages out > there for postgresql that fully implement 2.0? Also, which of these (or > others) work properly with python 2.1? I know PyGreSQL has not been updated > since 1.52. Seems the change in how long ints are handled can adversely > affect SQL handling.
moving from 1.5.x to 2.0 only needed a recompile, so i think that moving from 2.0 to 2.1 will be as easy as "./configure ... && make && make install". ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org Don't dream it. Be it. -- Dr. Frank'n'further From nathan@geerbox.net Wed Jun 27 15:25:49 2001 From: nathan@geerbox.net (Nathan Clegg) Date: Wed, 27 Jun 2001 07:25:49 -0700 Subject: [DB-SIG] any 2.0-compliant packages for postgresql? In-Reply-To: <993629087.1302.1.camel@lola> Message-ID: > Q: psycopg correctly put strings obtained by the Date and Time > constructors inside quotes ('') as requested by the api. it should do > the same (and maybe escape) normal strings? if that's the case don't we > need a String constructor? how is the module supposed to know when a > string is real string and need quoting and when i am passing it just as > convenience but it really is a number? example... This is the part I have issue with. The api spec does require that normal strings be escaped and quoted. In regards to passing numbers as strings for convenience, I would suggest doing away with it. If you mean it to be a number, send a number, and furthermore specify %(name)d instead of %(name)s. I appreciate conveniences, but I think this introduces the kind of ambiguity that python's specification set out to squash in the first place. Furthermore, all of the databases I have personally used have no problem casting strings to numbers where required. That is, if you pass postgresql (or oracle, or mysql...) a quoted string '768' to compare against a number field, it with cast it to a number for you and get you the desired result. The module doesn't really need to implement this convenience casting because the database engine already does. From fog@mixadlive.com Wed Jun 27 15:38:26 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: 27 Jun 2001 16:38:26 +0200 Subject: [DB-SIG] any 2.0-compliant packages for postgresql? In-Reply-To: References: Message-ID: <993652707.2541.6.camel@lola> On 27 Jun 2001 07:25:49 -0700, Nathan Clegg wrote: > > Q: psycopg correctly put strings obtained by the Date and Time > > constructors inside quotes ('') as requested by the api. it should do > > the same (and maybe escape) normal strings? if that's the case don't we > > need a String constructor? how is the module supposed to know when a > > string is real string and need quoting and when i am passing it just as > > convenience but it really is a number? example... > > This is the part I have issue with. The api spec does require that normal > strings be escaped and quoted. In regards to passing numbers as strings for > convenience, I would suggest doing away with it. If you mean it to be a > number, send a number, and furthermore specify %(name)d instead of %(name)s. > I appreciate conveniences, but I think this introduces the kind of ambiguity > that python's specification set out to squash in the first place. > Furthermore, all of the databases I have personally used have no problem > casting strings to numbers where required. That is, if you pass postgresql > (or oracle, or mysql...) a quoted string '768' to compare against a number > field, it with cast it to a number for you and get you the desired result. > The module doesn't really need to implement this convenience casting because > the database engine already does. 
so, if i try to do as follows: curs.execute("CREATE TABLE test (a text)") curs.execute("INSERT INTO test VALUES (%(a)s)", {'a':'some text...'}) the method call fails miserably because the generated SQL is: INSERT INTO test VALUES (some text...) without quotes. footnote 5 requires argument binding to provide quotes and escape sequences, but how can the module know whether the given string will be a string in the db too and requires quoting, or whether it is something different (let's say a user-defined type in postgresql) that does not require it? anyway, your answer at least means that psycopg isn't more broken than all other drivers... ;) ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org God is real. Unless declared integer. -- Anonymous FORTRAN programmer From nathan@geerbox.net Wed Jun 27 16:03:12 2001 From: nathan@geerbox.net (Nathan Clegg) Date: Wed, 27 Jun 2001 08:03:12 -0700 Subject: [DB-SIG] any 2.0-compliant packages for postgresql? In-Reply-To: <993652707.2541.6.camel@lola> Message-ID: Ahhhh...this is probably where our experience differs then. My experience is that bound parameters may only be used for literal values. You cannot use a parameter to specify the name of a table, column, type, or any other object. This is enforced, for example, by Oracle when a query with bound parameters is prepared by the engine. I don't think postgresql supports bound parameters internally, does it? So it's always the package, library, or module that is inserting the values directly, thus some of the limitations don't apply. In short, "real" bound parameters can't be used for the purposes you described. Since postgresql doesn't support "real" bound parameters, the situation is less obvious. -----Original Message----- From: db-sig-admin@python.org [mailto:db-sig-admin@python.org]On Behalf Of Federico Di Gregorio Sent: Wednesday, June 27, 2001 7:38 AM To: Nathan Clegg Cc: Python DB-SIG Mailing List Subject: RE: [DB-SIG] any 2.0-compliant packages for postgresql? On 27 Jun 2001 07:25:49 -0700, Nathan Clegg wrote: > > Q: psycopg correctly put strings obtained by the Date and Time > > constructors inside quotes ('') as requested by the api. it should do > > the same (and maybe escape) normal strings? if that's the case don't we > > need a String constructor? how is the module supposed to know when a > > string is real string and need quoting and when i am passing it just as > > convenience but it really is a number? example... > > This is the part I have issue with. The api spec does require that normal > strings be escaped and quoted. In regards to passing numbers as strings for > convenience, I would suggest doing away with it. If you mean it to be a > number, send a number, and furthermore specify %(name)d instead of %(name)s. > I appreciate conveniences, but I think this introduces the kind of ambiguity > that python's specification set out to squash in the first place. > Furthermore, all of the databases I have personally used have no problem > casting strings to numbers where required. That is, if you pass postgresql > (or oracle, or mysql...) a quoted string '768' to compare against a number > field, it with cast it to a number for you and get you the desired result. > The module doesn't really need to implement this convenience casting because > the database engine already does.
so, if i try to do as follows: curs.execute("CREATE TABLE test (a text)") curs.execute("INSERT INTO test VALUES (%(a)s)", {'a':'some text...'}) the method call fails miserably because the generated SQL is: INSERT INTO test VALUES (some text...) without quotes. footnote 5 requires argument binding to prodive quotes and escape sequences but how can the module know is the given string will be a string in the db too and requires quoting or it is something different (let's say a user-defined type in psogresql) that does not require it? anyway, your answer at least means that psycopg isn't more broken that all other drivers... ;) ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org God is real. Unless declared integer. -- Anonymous FORTRAN programmer _______________________________________________ DB-SIG maillist - DB-SIG@python.org http://mail.python.org/mailman/listinfo/db-sig From fog@mixadlive.com Wed Jun 27 16:30:56 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: 27 Jun 2001 17:30:56 +0200 Subject: [DB-SIG] any 2.0-compliant packages for postgresql? In-Reply-To: References: Message-ID: <993655857.2586.8.camel@lola> On 27 Jun 2001 08:03:12 -0700, Nathan Clegg wrote: > Ahhhh...this is probably where our experience differs then. My experience > is that bound paramaters may only be used for literal values. You cannot > use a paramater to specify the name of a table, column, type, or any other > object. This is enforced, for example, by Oracle when a query with bound i am *not* using the parameter to specify "the name of a table, column, type, or any other object". it specifies a very simple literal value (a string) to be inserted into the db. i am using the pyargs type of sustitution but if you prefer i can change my example as follows: curs.execute("INSERT INTO test VALUES (?)", ['some text']) that does not change the problem i outlined (see discussion below). ciao, federico > paramaters is prepared by the engine. I don't think postgresql supports > bound paramaters internally, does it? So it's always the package, library, > or module that is inserting the values directly, thus some of the > limitations don't apply. > > In short, "real" bound paramaters can't be used for the purposes you > described. Since postgresql doesn't support "real" bound paramaters, the > situation is less obvious. > > > > -----Original Message----- > From: db-sig-admin@python.org [mailto:db-sig-admin@python.org]On Behalf > Of Federico Di Gregorio > Sent: Wednesday, June 27, 2001 7:38 AM > To: Nathan Clegg > Cc: Python DB-SIG Mailing List > Subject: RE: [DB-SIG] any 2.0-compliant packages for postgresql? > > > On 27 Jun 2001 07:25:49 -0700, Nathan Clegg wrote: > > > Q: psycopg correctly put strings obtained by the Date and Time > > > constructors inside quotes ('') as requested by the api. it should do > > > the same (and maybe escape) normal strings? if that's the case don't we > > > need a String constructor? how is the module supposed to know when a > > > string is real string and need quoting and when i am passing it just as > > > convenience but it really is a number? example... > > > > This is the part I have issue with. The api spec does require that normal > > strings be escaped and quoted. In regards to passing numbers as strings > for > > convenience, I would suggest doing away with it. 
If you mean it to be a > > number, send a number, and furthermore specify %(name)d instead of > %(name)s. > > I appreciate conveniences, but I think this introduces the kind of > ambiguity > > that python's specification set out to squash in the first place. > > Furthermore, all of the databases I have personally used have no problem > > casting strings to numbers where required. That is, if you pass > postgresql > > (or oracle, or mysql...) a quoted string '768' to compare against a number > > field, it with cast it to a number for you and get you the desired result. > > The module doesn't really need to implement this convenience casting > because > > the database engine already does. > > > so, if i try to do as follows: > > curs.execute("CREATE TABLE test (a text)") > curs.execute("INSERT INTO test VALUES (%(a)s)", {'a':'some text...'}) > > the method call fails miserably because the generated SQL is: > > INSERT INTO test VALUES (some text...) > > without quotes. footnote 5 requires argument binding to prodive quotes > and escape sequences but how can the module know is the given string > will be a string in the db too and requires quoting or it is something > different (let's say a user-defined type in psogresql) that does not > require it? > > anyway, your answer at least means that psycopg isn't more broken that > all other drivers... ;) > > ciao, > federico > > -- > Federico Di Gregorio > MIXAD LIVE Chief of Research & Technology fog@mixadlive.com > Debian GNU/Linux Developer & Italian Press Contact fog@debian.org > God is real. Unless declared integer. -- Anonymous FORTRAN programmer > > > _______________________________________________ > DB-SIG maillist - DB-SIG@python.org > http://mail.python.org/mailman/listinfo/db-sig > -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org The devil speaks truth much oftener than he's deemed. He has an ignorant audience. -- Byron From nathan@geerbox.net Wed Jun 27 20:52:07 2001 From: nathan@geerbox.net (Nathan Clegg) Date: Wed, 27 Jun 2001 12:52:07 -0700 Subject: [DB-SIG] any 2.0-compliant packages for postgresql? In-Reply-To: <993655857.2586.8.camel@lola> Message-ID: Sorry that I misunderstood. Most get around this problem by setting the type of the paramater at preparation time. This is an optional step that is usually only done when the type is custom or a LOB. But is there any kind of value you wouldn't want to quote except for numbers? Is there a user-defined type that could not be placed in either of two categories: "number" and "not necessarily a number" ? -----Original Message----- From: db-sig-admin@python.org [mailto:db-sig-admin@python.org]On Behalf Of Federico Di Gregorio Sent: Wednesday, June 27, 2001 8:31 AM To: Nathan Clegg Cc: Python DB-SIG Mailing List Subject: RE: [DB-SIG] any 2.0-compliant packages for postgresql? On 27 Jun 2001 08:03:12 -0700, Nathan Clegg wrote: > Ahhhh...this is probably where our experience differs then. My experience > is that bound paramaters may only be used for literal values. You cannot > use a paramater to specify the name of a table, column, type, or any other > object. This is enforced, for example, by Oracle when a query with bound i am *not* using the parameter to specify "the name of a table, column, type, or any other object". it specifies a very simple literal value (a string) to be inserted into the db. 
i am using the pyargs type of sustitution but if you prefer i can change my example as follows: curs.execute("INSERT INTO test VALUES (?)", ['some text']) that does not change the problem i outlined (see discussion below). ciao, federico > paramaters is prepared by the engine. I don't think postgresql supports > bound paramaters internally, does it? So it's always the package, library, > or module that is inserting the values directly, thus some of the > limitations don't apply. > > In short, "real" bound paramaters can't be used for the purposes you > described. Since postgresql doesn't support "real" bound paramaters, the > situation is less obvious. > > > > -----Original Message----- > From: db-sig-admin@python.org [mailto:db-sig-admin@python.org]On Behalf > Of Federico Di Gregorio > Sent: Wednesday, June 27, 2001 7:38 AM > To: Nathan Clegg > Cc: Python DB-SIG Mailing List > Subject: RE: [DB-SIG] any 2.0-compliant packages for postgresql? > > > On 27 Jun 2001 07:25:49 -0700, Nathan Clegg wrote: > > > Q: psycopg correctly put strings obtained by the Date and Time > > > constructors inside quotes ('') as requested by the api. it should do > > > the same (and maybe escape) normal strings? if that's the case don't we > > > need a String constructor? how is the module supposed to know when a > > > string is real string and need quoting and when i am passing it just as > > > convenience but it really is a number? example... > > > > This is the part I have issue with. The api spec does require that normal > > strings be escaped and quoted. In regards to passing numbers as strings > for > > convenience, I would suggest doing away with it. If you mean it to be a > > number, send a number, and furthermore specify %(name)d instead of > %(name)s. > > I appreciate conveniences, but I think this introduces the kind of > ambiguity > > that python's specification set out to squash in the first place. > > Furthermore, all of the databases I have personally used have no problem > > casting strings to numbers where required. That is, if you pass > postgresql > > (or oracle, or mysql...) a quoted string '768' to compare against a number > > field, it with cast it to a number for you and get you the desired result. > > The module doesn't really need to implement this convenience casting > because > > the database engine already does. > > > so, if i try to do as follows: > > curs.execute("CREATE TABLE test (a text)") > curs.execute("INSERT INTO test VALUES (%(a)s)", {'a':'some text...'}) > > the method call fails miserably because the generated SQL is: > > INSERT INTO test VALUES (some text...) > > without quotes. footnote 5 requires argument binding to prodive quotes > and escape sequences but how can the module know is the given string > will be a string in the db too and requires quoting or it is something > different (let's say a user-defined type in psogresql) that does not > require it? > > anyway, your answer at least means that psycopg isn't more broken that > all other drivers... ;) > > ciao, > federico > > -- > Federico Di Gregorio > MIXAD LIVE Chief of Research & Technology fog@mixadlive.com > Debian GNU/Linux Developer & Italian Press Contact fog@debian.org > God is real. Unless declared integer. 
-- Anonymous FORTRAN programmer > > > _______________________________________________ > DB-SIG maillist - DB-SIG@python.org > http://mail.python.org/mailman/listinfo/db-sig > -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org The devil speaks truth much oftener than he's deemed. He has an ignorant audience. -- Byron _______________________________________________ DB-SIG maillist - DB-SIG@python.org http://mail.python.org/mailman/listinfo/db-sig From zen@shangri-la.dropbear.id.au Thu Jun 28 02:06:58 2001 From: zen@shangri-la.dropbear.id.au (Stuart Bishop) Date: Thu, 28 Jun 2001 11:06:58 +1000 (EST) Subject: [DB-SIG] Constructors for varchar & numeric types? Message-ID: I'm preparing a second draft of the pre-PEP I sent out to this SIG last week. Some feedback I received talked about datatype limitations in different vendors' systems, in particular the different limitations in VARCHAR field lengths. Is it a good idea to encode database meta information in the drivers (datatype lengths, numeric precisions etc)? If I remember correctly, there was a project last year to put this sort of information into a separate module. (I'll trawl the archives when I have time for the thread). If this information is available, I can provide VARCHAR, NUMBER, RAW objects etc. that will do type checking so paranoid programmers can ensure exceptions will be thrown by Python if limitations are exceeded (as opposed to hoping the RDBMS throws an exception rather than just truncating or rounding the value). Would people use such a feature? I know I personally have always relied on exceptions being thrown from the RDBMS... -- Stuart Bishop From zen@shangri-la.dropbear.id.au Thu Jun 28 02:44:17 2001 From: zen@shangri-la.dropbear.id.au (Stuart Bishop) Date: Thu, 28 Jun 2001 11:44:17 +1000 (EST) Subject: [DB-SIG] any 2.0-compliant packages for postgresql? In-Reply-To: <993655857.2586.8.camel@lola> Message-ID: [attributions removed since I'd probably get them mixed up :-)] > > > > Q: psycopg correctly put strings obtained by the Date and Time > > > > constructors inside quotes ('') as requested by the api. it should do > > > > the same (and maybe escape) normal strings? if that's the case don't we > > > > need a String constructor? how is the module supposed to know when a > > > > string is real string and need quoting and when i am passing it just as > > > > convenience but it really is a number? example... It knows it is a real string that needs quoting because it was passed a string. Failing to quote the string properly can lead to major security problems and threads on Bugtraq: curs.execute("INSERT INTO test VALUES ('%(a)s')", mydict) Now - if mydict was initialized from an untrusted source (e.g. parameters passed from a web form), a malicious user could initialise mydict['a'] to something like ''' ');delete * from test;insert into test values '0wn3d ''' Failing to escape string types in the driver requires programmers to continually call some sort of quote function (such as could be done automatically by a String class like you mention). Enforcing this would be an annoyance since (I would assume) in the majority of cases, if I have a string object I need to pass to an RDBMS it is to be inserted into a VARCHAR field. If a string should not be quoted as a string by the python driver, it should be passed as some other type. This is already the way Date and Binary types are handled.
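To make the risk concrete, here is a minimal sketch of the difference (not taken from Stuart's post): it assumes an open DB API cursor curs, a driver that escapes and quotes string parameters as footnote 5 requires, and a made-up table.

# Untrusted input, e.g. a CGI form field.
evil = "');delete from test;insert into test values ('0wn3d"

# Unsafe: the value is pasted into the SQL before the driver ever sees it,
# so the quote characters inside it break out of the literal and smuggle
# extra statements into the query.
curs.execute("INSERT INTO test VALUES ('%s')" % evil)

# Safer: the value is passed separately and the driver escapes and quotes
# it, so whatever it contains stays a single string literal.
curs.execute("INSERT INTO test VALUES (%(a)s)", {'a': evil})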
The only legitimate use I can think of for passing a non-string as a string to the driver is if you need to pass a number where its precsion exceeds Python's ability to represent. The databases I have experience with will cast this back into a number anyway so this is handled for you (and if there are databases that don't do this, we need Number class to handle extended precision or wait for Python to support extended precision floats). > > curs.execute("CREATE TABLE test (a text)") > > curs.execute("INSERT INTO test VALUES (%(a)s)", {'a':'some text...'}) > > > > the method call fails miserably because the generated SQL is: > > > > INSERT INTO test VALUES (some text...) > > > > without quotes. footnote 5 requires argument binding to prodive quotes > > and escape sequences but how can the module know is the given string > > will be a string in the db too and requires quoting or it is something > > different (let's say a user-defined type in psogresql) that does not > > require it? -- Stuart Bishop From fog@mixadlive.com Thu Jun 28 08:33:15 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: 28 Jun 2001 09:33:15 +0200 Subject: [DB-SIG] any 2.0-compliant packages for postgresql? In-Reply-To: References: Message-ID: <993713596.1100.1.camel@lola> you got the point. strings *should* be quoted. i'll add an explanation of why to the dbsig and send a patch to this list for approval. thank you for clearing up that, federico On 28 Jun 2001 11:44:17 +1000, Stuart Bishop wrote: > [attributions removed since I'd probably get them mixed up :-)] > > > > > > Q: psycopg correctly put strings obtained by the Date and Time > > > > > constructors inside quotes ('') as requested by the api. it should do > > > > > the same (and maybe escape) normal strings? if that's the case don't we > > > > > need a String constructor? how is the module supposed to know when a > > > > > string is real string and need quoting and when i am passing it just as > > > > > convenience but it really is a number? example... > > It knows it is a real string that needs quoting because it was passed > a string. Failing to quote the string properly can lead to major security > problems and threads on Bugtraq: > > curs.execute("INSERT INTO test VALUES '(%a)s')", mydict) > > Now - if mydict was initialized from an untrusted source (eg. parameters > passed from a web form), it malicious user could initialise mydict['a'] > to something like ''' > ');delete * from test;insert into test values '0wn3d > ''' > > Failing to escape string types in the driver requires programmers > to continually call some sort of quote function (such as could be > done automatically by a String class like you mention). Enforcing > this would be an annoyance since (I would assume) in the majority of > cases, if I have a string object I need to pass to a RDBMS it is to > be inserted into a VARCHAR field. If an string should not be quoted > as a string by the python driver, it should be passed as some other > type. This is already the way Date and Binary types are handled. > The only legitimate use I can think of for passing a non-string > as a string to the driver is if you need to pass a number where > its precsion exceeds Python's ability to represent. The databases > I have experience with will cast this back into a number anyway > so this is handled for you (and if there are databases that don't do > this, we need Number class to handle extended precision or wait > for Python to support extended precision floats). 
> > > > curs.execute("CREATE TABLE test (a text)") > > > curs.execute("INSERT INTO test VALUES (%(a)s)", {'a':'some text...'}) > > > > > > the method call fails miserably because the generated SQL is: > > > > > > INSERT INTO test VALUES (some text...) > > > > > > without quotes. footnote 5 requires argument binding to prodive quotes > > > and escape sequences but how can the module know is the given string > > > will be a string in the db too and requires quoting or it is something > > > different (let's say a user-defined type in psogresql) that does not > > > require it? > > -- > Stuart Bishop > > > _______________________________________________ > DB-SIG maillist - DB-SIG@python.org > http://mail.python.org/mailman/listinfo/db-sig -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org All programmers are optimists. -- Frederick P. Brooks, Jr. From andy@dustman.net Fri Jun 29 04:24:11 2001 From: andy@dustman.net (Andy Dustman) Date: Thu, 28 Jun 2001 23:24:11 -0400 (EDT) Subject: [DB-SIG] any 2.0-compliant packages for postgresql? In-Reply-To: <993629087.1302.1.camel@lola> Message-ID: On 27 Jun 2001, Federico Di Gregorio wrote: > Q: psycopg correctly put strings obtained by the Date and Time > constructors inside quotes ('') as requested by the api. it should do > the same (and maybe escape) normal strings? if that's the case don't we > need a String constructor? how is the module supposed to know when a > string is real string and need quoting and when i am passing it just as > convenience but it really is a number? example... > > d = {'id':'768', name:'a number'} > curs.execute("CREATE TABLE test (id int4, name text)") > curs.execute("INSERT INTO test VALUES (%(id)s, %(name))") > > this will obviously produce the following (wrong) SQL: > > INSERT INTO test VALUES (768, a number) > > what i really wanted was 'a number', quoted. but how can the module > understand that? Here's what I did for MySQLdb: All of the placeholders in the query should be %s. The module is responsible for converting all Python types/instances into a correct SQL literal string. Thus, for your example (corrected somewhat): d = {'id':768, name:'a number'} curs.execute("CREATE TABLE test (id int4, name text)") curs.execute("INSERT INTO test VALUES (%(id)s, %(name)s)", d) produces the query: INSERT INTO test VALUES (768, 'a number') The string converter uses MySQL's quoting function (mysql_real_escape_string()) to escape any special characters, namely single-quote ('), NUL (\0) and backslash (\). Of course, MySQLdb allows you to use sequences as well as mappings: curs.execute("INSERT INTO test VALUES (%s, %s)", (768,'a number')) -- Andy Dustman PGP: 0xC72F3F1D @ .net http://dustman.net/andy I'll give spammers one bite of the apple, but they'll have to guess which bite has the razor blade in it. From piotr.trawinski@weblab.pl Fri Jun 29 15:55:03 2001 From: piotr.trawinski@weblab.pl (Piotr Trawinski) Date: Fri, 29 Jun 2001 16:55:03 +0200 Subject: [DB-SIG] Simple db access Message-ID: <3B3C96C7.6E6539D0@weblab.pl> I am pretty much new to python. At the moment i am trying to figure out how to make a simple connection to a mysql db. The Python Database API Specification doesnt make it clear for me. 
Could you please give me an example or an equilvalent of this php code: mysql_pconnect ($host,$user,$passwd); mysql_select_db($name,); $query=mysql_query("select * from test"); while ($data=mysql_fetch_row($query)) { echo $data[0]; echo $data[1]; } I will greatly appreciate any help Piotr Trawinski From Billy G. Allie" Message-ID: <200106291634.f5TGYd617984@bajor.mug.org> > Andy Dustman wrote: > > On 27 Jun 2001, Federico Di Gregorio wrote: > > > > > Q: psycopg correctly put strings obtained by the Date and Time > > > constructors inside quotes ('') as requested by the api. it should do > > > the same (and maybe escape) normal strings? if that's the case don't we > > > need a String constructor? how is the module supposed to know when a > > > string is real string and need quoting and when i am passing it just as > > > convenience but it really is a number? example... > > > > > > d = {'id':'768', name:'a number'} > > > curs.execute("CREATE TABLE test (id int4, name text)") > > > curs.execute("INSERT INTO test VALUES (%(id)s, %(name))") > > > > > > this will obviously produce the following (wrong) SQL: > > > > > > INSERT INTO test VALUES (768, a number) > > > > > > what i really wanted was 'a number', quoted. but how can the module > > > understand that? > > > > Here's what I did for MySQLdb: All of the placeholders in the query should > > be %s. The module is responsible for converting all Python types/instances > > into a correct SQL literal string. Thus, for your example (corrected > > somewhat): > > > > d = {'id':768, name:'a number'} > > curs.execute("CREATE TABLE test (id int4, name text)") > > curs.execute("INSERT INTO test VALUES (%(id)s, %(name)s)", d) > > > > produces the query: > > > > INSERT INTO test VALUES (768, 'a number') > > > > The string converter uses MySQL's quoting function > > (mysql_real_escape_string()) to escape any special characters, namely > > single-quote ('), NUL (\0) and backslash (\). > > > > Of course, MySQLdb allows you to use sequences as well as mappings: > > > > curs.execute("INSERT INTO test VALUES (%s, %s)", (768,'a number')) I agree with Andy's statement. That is how I implemented execute in my PostgreSQL DB-API 2.0 module, PyPgSQL. You always use '%s' (or '%(name)s') as placeholder in the query string and PyPgSQL will apply the correct quoting based on the type of the parameter. BTW: PyPgSQL is available on SourceForge (http://www.sf.net/projects/pypgsql) I will be announcing a new release of PyPgSQL this weekend. ___________________________________________________________________________ ____ | Billy G. Allie | Domain....: Bill.Allie@mug.org | /| | 7436 Hartwell | MSN.......: B_G_Allie@email.msn.com |-/-|----- | Dearborn, MI 48126| |/ |LLIE | (313) 582-1540 | From andy@dustman.net Fri Jun 29 21:30:51 2001 From: andy@dustman.net (Andy Dustman) Date: Fri, 29 Jun 2001 16:30:51 -0400 (EDT) Subject: [DB-SIG] Simple db access In-Reply-To: <3B3C96C7.6E6539D0@weblab.pl> Message-ID: On Fri, 29 Jun 2001, Piotr Trawinski wrote: > I am pretty much new to python. At the moment i am trying to figure out > how to make a simple connection to a mysql db. The Python Database API > Specification doesnt make it clear for me. 
Could you please give me an > example or an equilvalent of this php code: > > mysql_pconnect ($host,$user,$passwd); > mysql_select_db($name,); > > > $query=mysql_query("select * from test"); > > while ($data=mysql_fetch_row($query)) { > echo $data[0]; > echo $data[1]; > } Example will be for MySQLdb, but is generally applicable to other DB API modules. import MySQLdb db=MySQLdb.connect(host=host, user=user, passwd=passwd, db=name) # This could also be done instead of supplying db above but is # non-standard # db.select_db(name) c=db.cursor() c.execute("select * from test") for row in c.fetchall(): # assumes result set is not huge print row[0], row[1] # alternate for potentially huge result sets # you would probably also use this non-standard connection: # db=MySQLdb.connect(cursorclass=MySQLdb.SSCursor,...) row = c.fetchone() while row: print row[0], row[1] row = c.fetchone() -- Andy Dustman PGP: 0xC72F3F1D @ .net http://dustman.net/andy I'll give spammers one bite of the apple, but they'll have to guess which bite has the razor blade in it. From fog@mixadlive.com Sat Jun 30 00:25:14 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: 30 Jun 2001 01:25:14 +0200 Subject: [DB-SIG] Re: any 2.0-compliant packages for postgresql? (Andy Dustman) In-Reply-To: <200106291634.f5TGYd617984@bajor.mug.org> References: <200106291634.f5TGYd617984@bajor.mug.org> Message-ID: <993857115.1370.0.camel@lola> On 29 Jun 2001 12:34:39 -0400, Billy G. Allie wrote: [snip] > I agree with Andy's statement. That is how I implemented execute in my > PostgreSQL DB-API 2.0 module, PyPgSQL. You always use '%s' (or '%(name)s') > as placeholder in the query string and PyPgSQL will apply the correct quoting > based on the type of the parameter. carefully looking at the dbapi and at the postings on this ml, this is, imo, wrong. the '' should be added by the driver not by the user. is this right? is this what the dbapi is trying to say in footnote 5? can someone of the original authors of the dbpai-2.0 please clarify? ciao, federico (going to add quoting to psycopg... :) -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org The number of the beast: vi vi vi. -- Delexa Jones From paul@dubois.ws Sat Jun 30 05:22:18 2001 From: paul@dubois.ws (Paul DuBois) Date: Fri, 29 Jun 2001 23:22:18 -0500 Subject: [DB-SIG] Re: any 2.0-compliant packages for postgresql? (Andy Dustman) In-Reply-To: <993857115.1370.0.camel@lola> References: <200106291634.f5TGYd617984@bajor.mug.org> <993857115.1370.0.camel@lola> Message-ID: At 1:25 AM +0200 6/30/01, Federico Di Gregorio wrote: >On 29 Jun 2001 12:34:39 -0400, Billy G. Allie wrote: >[snip] >> I agree with Andy's statement. That is how I implemented execute in my >> PostgreSQL DB-API 2.0 module, PyPgSQL. You always use '%s' (or '%(name)s') >> as placeholder in the query string and PyPgSQL will apply the >>correct quoting >> based on the type of the parameter. > >carefully looking at the dbapi and at the postings on this ml, this is, >imo, wrong. the '' should be added by the driver not by the user. is >this right? is this what the dbapi is trying to say in footnote 5? Looking back at the original postings, it appears that the '' you refer to are simply there to set off %s and %(name)s in the paragraph. They're not present in the code in question. The behavior of the code is just what you think it should be. > >can someone of the original authors of the dbpai-2.0 please clarify? 
> >ciao, >federico (going to add quoting to psycopg... :) > >-- >Federico Di Gregorio >MIXAD LIVE Chief of Research & Technology fog@mixadlive.com >Debian GNU/Linux Developer & Italian Press Contact fog@debian.org > The number of the beast: vi vi vi. -- Delexa Jones From fog@mixadlive.com Sat Jun 30 12:36:38 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: 30 Jun 2001 13:36:38 +0200 Subject: [DB-SIG] Re: any 2.0-compliant packages for postgresql? (Andy Dustman) In-Reply-To: References: <200106291634.f5TGYd617984@bajor.mug.org> <993857115.1370.0.camel@lola> Message-ID: <993900998.3150.6.camel@lola> On 29 Jun 2001 23:22:18 -0500, Paul DuBois wrote: > At 1:25 AM +0200 6/30/01, Federico Di Gregorio wrote: > >On 29 Jun 2001 12:34:39 -0400, Billy G. Allie wrote: > >[snip] > >> I agree with Andy's statement. That is how I implemented execute in my > >> PostgreSQL DB-API 2.0 module, PyPgSQL. You always use '%s' (or '%(name)s') > >> as placeholder in the query string and PyPgSQL will apply the > >>correct quoting > >> based on the type of the parameter. > > > >carefully looking at the dbapi and at the postings on this ml, this is, > >imo, wrong. the '' should be added by the driver not by the user. is > >this right? is this what the dbapi is trying to say in footnote 5? > > Looking back at the original postings, it appears that the '' you refer > to are simply there to set off %s and %(name)s in the paragraph. They're > not present in the code in question. The behavior of the code is just > what you think it should be. oops! my fault then. we all agree. back to hacking. -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org All programmers are optimists. -- Frederick P. Brooks, Jr.
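As a closing illustration of what the quoting under discussion amounts to, here is a rough, hypothetical sketch (not the actual code of psycopg, PyPgSQL or MySQLdb) of how a pyformat-style driver can turn Python values into SQL literals before substituting them into the query. Real drivers use the database's own escaping routine, as Andy notes MySQLdb does with mysql_real_escape_string(), and handle many more types.

def _quote(value):
    # Turn one Python value into an SQL literal string (hypothetical helper).
    if value is None:
        return 'NULL'
    if type(value) == type(''):
        # Double any embedded single quotes, then wrap the value in quotes.
        return "'" + value.replace("'", "''") + "'"
    # Numbers (and anything else) pass through unquoted.
    return str(value)

def render(query, params):
    # With the pyformat paramstyle (%(name)s placeholders), quoting every
    # parameter and then doing a single %-substitution yields the final SQL.
    quoted = {}
    for key in params.keys():
        quoted[key] = _quote(params[key])
    return query % quoted

print render("INSERT INTO test VALUES (%(id)s, %(name)s)",
             {'id': 768, 'name': "it's a number"})
# prints: INSERT INTO test VALUES (768, 'it''s a number')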