From pshafaee@hostway.com Wed Apr 4 03:57:38 2001
From: pshafaee@hostway.com (John Shafaee)
Date: Tue, 03 Apr 2001 21:57:38 -0500
Subject: [DB-SIG] Use of record sets
Message-ID: <3ACA8DA2.6049193C@hostway.com>

Hello!

My name is John Shafaee and I am a Python developer (sorry for opening with the traditional AA-like greeting). I am very excited to see the continuing development of the Python DB API.

My friends and I have been working with several Python modules for connecting to MySQL, PostgreSQL, and Interbase RDBMSs over the past two years. To help generalize DB access code, we decided to put together a DBFactory module for creating connections to multiple DBs. The module is at a higher level than the Python DB API specification, but there is some overlap.

I noticed that the DB API interface still uses sequences for working with query results. Although this is sufficient, sequences are indexed by integers and are therefore awkward to program with. For example, as the following code shows, using sequence indexes makes the code less readable and more error-prone.

curs.execute( "SELECT id, first_name, last_name FROM some_table WHERE id > 10" )
result = curs.fetchone()
print "Found id '%s', first name '%s' and last name '%s'" % ( result[0], result[1], result[2] )

The DBFactory module returns query results as a sequence of Records. Records are Python UserDict objects that index field data by the field (column) name. This is very convenient when interpolating results into a string. The above example code would look like the following if written in terms of lists of Records:

print "Found id '%s', first name '%s' and last name '%s'" % ( result['id'], result['first_name'], result['last_name'] )

I was wondering if this is something that you have considered.
If not, I was wondering what the proper channels would be for opening this suggestion to the rest of the Python DB API community.

Looking forward to your response.

Best,
John Shafaee
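The Record idea above can be sketched on top of any DB API cursor, since cursor.description exposes one (name, type_code, ...) tuple per result column. A minimal sketch, using Python 3 syntax and the stdlib sqlite3 module in place of a real RDBMS; fetch_records is a hypothetical helper, not part of DBFactory:

```python
import sqlite3

def fetch_records(cursor):
    """Return the remaining rows as dicts keyed by column name.

    cursor.description holds a (name, type_code, ...) tuple per
    result column, so the names can be zipped with each row tuple.
    """
    names = [d[0] for d in cursor.description]
    return [dict(zip(names, row)) for row in cursor.fetchall()]

# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
curs = conn.cursor()
curs.execute("CREATE TABLE some_table (id INTEGER, first_name TEXT, last_name TEXT)")
curs.execute("INSERT INTO some_table VALUES (11, 'John', 'Shafaee')")
curs.execute("SELECT id, first_name, last_name FROM some_table WHERE id > 10")
result = fetch_records(curs)[0]
print("Found id '%s', first name '%s' and last name '%s'"
      % (result['id'], result['first_name'], result['last_name']))
```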
From daniel.dittmar@sap.com Wed Apr 4 17:11:37 2001
From: daniel.dittmar@sap.com (Dittmar, Daniel)
Date: Wed, 4 Apr 2001 18:11:37 +0200
Subject: [DB-SIG] Use of record sets
Message-ID:

> result = curs.fetchone()
> print "Found id '%s', first name '%s' and last name '%s'" % ( result[0], result[1], result[2] )

vs.

> print "Found id '%s', first name '%s' and last name '%s'" % ( result['id'], result['first_name'], result['last_name'] )

How about

id, first_name, last_name = curs.fetchone()
print "Found id '%s', first name '%s' and last name '%s'" % ( id, first_name, last_name )

or better still

for id, first_name, last_name in curs:
    print "Found id '%s', first name '%s' and last name '%s'" % ( id, first_name, last_name )

This is of course not directly an argument against using strings as a column index (DBI has them, JDBC has them).

There is one annoying property of SQL in this context: identifiers are converted to uppercase unless quoted. Should the UserDict mimic this behaviour by trying both the actual string and the uppercase version?

And how about returning row objects? That would allow .dot notation (if the column names are valid Python identifiers):

print "Found id '%s', first name '%s' and last name '%s'" % ( result.id, result.first_name, result.last_name )

My personal preference: 'for id, first_name, last_name in curs:'

Daniel

--
Daniel Dittmar
daniel.dittmar@sap.com
SAP DB, SAP Labs Berlin
http://www.sapdb.org/

From pshafaee@hostway.com Wed Apr 4 17:33:30 2001
From: pshafaee@hostway.com (John Shafaee)
Date: Wed, 04 Apr 2001 11:33:30 -0500
Subject: [DB-SIG] Use of record sets
Message-ID: <3ACB4CDA.C80E5F56@hostway.net>

Yes, I agree with the style that you have listed; I too use the same technique. I have found that using dictionaries has the added advantage of storing your DB results in their own namespace. I use this technique to generate HTML pages by replacing dynamic tags.
The tags in the HTML page reference column values, and the Record class provides the namespace that the generator uses to look up and replace the tags. I can also pack into the dictionary additional fields that are not returned by the query. Most of this can be done inline and as needed. However, using dictionaries makes good use of Python's OO framework. Just a thought.

I have added a few comments below, based on my experience with the Record class.

"Dittmar, Daniel" wrote:

> > result = curs.fetchone()
> > print "Found id '%s', first name '%s' and last name '%s'" % ( result[0], result[1], result[2] )
>
> vs.
>
> > print "Found id '%s', first name '%s' and last name '%s'" % ( result['id'], result['first_name'], result['last_name'] )
>
> How about
>
> id, first_name, last_name = curs.fetchone()
> print "Found id '%s', first name '%s' and last name '%s'" % ( id, first_name, last_name )
>
> or better still
>
> for id, first_name, last_name in curs:
>     print "Found id '%s', first name '%s' and last name '%s'" % ( id, first_name, last_name )
>
> This is of course not directly an argument against using strings as a column
> index (DBI has them, JDBC has them).
>
> There is one annoying property of SQL in this context: identifiers are
> converted to uppercase unless quoted. Should the UserDict mimic this
> behaviour by trying both the actual string and the uppercase version?

This is highly dependent on the native module used to connect to the DB. For example, I have found that MySQL preserves column names, including upper- and lower-case letters, whereas PostgreSQL disallows upper-case letters in column names. I am not exactly sure what the best policy would be, but I personally favor good documentation of the code's side effects over any logic that attempts to make a best guess.

> And how about returning row objects.
> That would allow .dot notation (if the column names are valid Python identifiers):
> print "Found id '%s', first name '%s' and last name '%s'" % ( result.id,
> result.first_name, result.last_name )

Yes, this is a problem if you attempt to bind the row data to the working namespace. I have not had a need to do so, and as a result I have been lucky enough to avoid such problems. Good observation. Any possible solution?

> My personal preference: 'for id, first_name, last_name in curs:'
>
> Daniel
>
> --
> Daniel Dittmar
> daniel.dittmar@sap.com
> SAP DB, SAP Labs Berlin
> http://www.sapdb.org/

From jon+python-db-sig@unequivocal.co.uk Wed Apr 4 18:10:18 2001
From: jon+python-db-sig@unequivocal.co.uk (Jon Ribbens)
Date: Wed, 4 Apr 2001 18:10:18 +0100
Subject: [DB-SIG] Use of record sets
In-Reply-To: <3ACA8DA2.6049193C@hostway.com>; from pshafaee@hostway.com on Tue, Apr 03, 2001 at 09:57:38PM -0500
References: <3ACA8DA2.6049193C@hostway.com>
Message-ID: <20010404181018.A27576@snowy.squish.net>

John Shafaee wrote:
> I noticed that the DB API interface is still using sequences for working
> with query results. Although this is sufficient, sequences are indexed
> by integers and are therefore difficult to program with. For example, as
> shown in the following code, using sequence indexes makes the code less
> readable and more error-prone.

MySQLdb allows you to specify the DictCursor class as the cursor type, and it then provides results as mappings instead of sequences. Maybe the DB API should standardise on this.

From daniel.dittmar@sap.com Wed Apr 4 19:43:48 2001
From: daniel.dittmar@sap.com (Dittmar, Daniel)
Date: Wed, 4 Apr 2001 20:43:48 +0200
Subject: [DB-SIG] Use of record sets
Message-ID:

> There is one annoying property of SQL in this context: identifiers are
> converted to uppercase unless quoted. Should the UserDict mimic this
> behaviour by trying both the actual string and the uppercase version?
> This is highly dependent on the native module used to connect to the DB.
> For example, I have found that MySQL preserves column names, including
> upper- and lower-case letters. However, PostgreSQL disallows the use of
> upper-case letters for column names. I am not exactly sure what the best
> policy would be, but I personally favor good documentation of the code's
> side effects over any logic that attempts to make a best guess.

The rule should probably be that the strings you used in the SQL SELECT list can be used as keys. When the database converts identifiers to upper case, the driver should:

- try for an exact match (because you can have lower-case identifiers if properly quoted)
- convert the string to upper case and try that

> > print "Found id '%s', first name '%s' and last name '%s'" % ( result.id,
> > result.first_name, result.last_name )
>
> Yes, this is a problem if you attempt to bind the row data to the
> working namespace. I have not had a need to do so and as a result
> have been lucky to avoid such problems. Good observation. Any
> possible solution?

If the column name isn't a valid Python identifier, then you can always revert to getattr() or dictionary access.

Daniel

--
Daniel Dittmar
daniel.dittmar@sap.com
SAP DB, SAP Labs Berlin
http://www.sapdb.org/

From dieter@sz-sb.de Thu Apr 5 08:25:34 2001
From: dieter@sz-sb.de (Dr. Dieter Maurer)
Date: Thu, 5 Apr 2001 09:25:34 +0200 (CEST)
Subject: [DB-SIG] Use of record sets
Message-ID: <15052.7429.996614.54139@dieter.sz-sb.de>

Daniel Dittmar:
> ... case variation problematic ...

There is another problem with mappings that describe a "select" row:

* "select" columns do not need to have a name
* several columns may have the same name

Thus, there should be a way to access the row representation both as a mapping and as a sequence.
Dieter

From bkc@murkworks.com Thu Apr 5 13:24:51 2001
From: bkc@murkworks.com (Brad Clements)
Date: Thu, 5 Apr 2001 08:24:51 -0400
Subject: [DB-SIG] Use of record sets
In-Reply-To: <3ACA8DA2.6049193C@hostway.com>
Message-ID: <3ACC2BC3.25647.367ED993@localhost>

On 3 Apr 2001, at 21:57, John Shafaee wrote:

> The DBFactory module returns query results as a sequence of Records.
> Records are Python UserDict objects that index field data by the
> field (column) name. This is very convenient when interpolating results in a
> string. The above example code would look like the following if written in
> terms of lists of Records:
>
> print "Found id '%s', first name '%s' and last name
> '%s'" % ( result['id'], result['first_name'], result['last_name'] )
>
> I was wondering if this is something that you have considered. If not, I
> was wondering what were the proper channels that I would need to go through
> to make this suggestion open to the rest of the Python DB API community.

Look for the module SQLDict.py, which does exactly this (and much more).

Much of my Zope/Python work is done with either SQLDict or at the base db-api level. I use the db-api level when I need maximum performance.

I think dict-based access (UserDicts, whatever) should be a bolt-on above the db-api, not a replacement for it.

Brad Clements, bkc@murkworks.com (315)268-1000
http://www.murkworks.com (315)268-9812 Fax
netmeeting: ils://ils.murkworks.com AOL-IM: BKClements

From pshafaee@hostway.com Thu Apr 5 18:33:24 2001
From: pshafaee@hostway.com (John Shafaee)
Date: Thu, 05 Apr 2001 12:33:24 -0500
Subject: [DB-SIG] Records as an addition to the api
Message-ID: <3ACCAC64.40B9A472@hostway.net>

I agree. Records (my terminology for mapping names to returned field data) should be a "bolt-on" to the lower-level db api. They are certainly convenient and very useful in certain programming paradigms.

In response to the difficulties with mappings, I do agree that there are some design and implementation details that still need to be worked out; however, I don't see any of them as insurmountable. Most of the issues can be resolved at the API level, and some cases may require special handling for the particular RDBMS.

For example, as pointed out earlier, when performing a SELECT * type query there is the possibility of multiple returned fields with the same field name as a result of a join operation. However, at the SQL level each joined column (I am referring to the WHERE clause) must be qualified with a table name; otherwise the operation is considered ambiguous. Since the table names are provided, all we need to do is use the fully qualified field name for the returned results.

In my Record implementation I do this exact logic when the resulting field names are not unique.  Otherwise, I store the results with the field's common name.

Here is a code sample to better illustrate the idea.

db = DBFactory.getMySQLDataAccess( DB_CONF_FILE )
db.execute( """select * from Transaction, Domain where Transaction.Id = Domain.Id """ )
results = db.getRecords()
print results[0]

Results in:
<Record>
 <Field name="Domain.Cycle" value="3"/>
 <Field name="Transaction.Id" value="1"/>
 [many other fields here, but I have truncated them]
 <Field name="Domain.Id" value="1"/>
</Record>
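The qualification rule behind this Record output can be expressed as a small pure function. Note that the DB API's cursor.description does not include table names, so the (table, column) pairs assumed here would have to come from the driver; record_from_row is a hypothetical name, not part of DBFactory:

```python
def record_from_row(columns, row):
    """Build a dict from one result row, qualifying colliding names.

    columns -- a (table, name) pair per result column
    row     -- the value tuple, same length as columns
    Only names that occur more than once get the 'Table.name' form.
    """
    bare_names = [name for _table, name in columns]
    record = {}
    for (table, name), value in zip(columns, row):
        if bare_names.count(name) > 1:
            record["%s.%s" % (table, name)] = value   # e.g. 'Domain.Id'
        else:
            record[name] = value                      # unique common name
    return record

cols = [("Transaction", "Id"), ("Domain", "Id"), ("Domain", "Cycle")]
rec = record_from_row(cols, (1, 1, 3))
```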

It would really be convenient to have dictionary-based lookups as a standard feature of (an addition to) the API. It would also make the API that much more powerful, as it would provide a number of ways to access and work with SQL results.

Best,
John Shafaee
Subject: Re: [DB-SIG] Use of record sets
Date: Thu, 5 Apr 2001 08:24:51 -0400
From: "Brad Clements" <bkc@murkworks.com>
Organization: MurkWorks, Incorporated.
To: db-sig@python.org

On 3 Apr 2001, at 21:57, John Shafaee wrote:

> The DBFactory module returns query results as a sequence of Records.
> Records are Python UserDict objects that index field data by the
> field(column) name. This is very convenient when interpolating results in a
> string. The above example code would look like the following if written in
> terms of lists of Records:
>
> print "Found id '%s', first name '%s' and last name
> '%s'"%( result['id'], result['first_name'], result['last_name'] )
>
> I was wondering if this is something that you have considered. If not, I
> was wondering what were the proper channels that I would need to go through
> to make this suggestion open to the rest of the Python DB API community.

Look for the module SQLDict.py which does exactly this (and much more)

Much of my Zope/Python work is done with either SQLDict, or at the base db-api level. I
use the db-api level when I need the maximum performance.

I think dict-base (userdicts, whatever) should be a bolt-on above db-api.. Not a
replacement for it.

Brad Clements,                bkc@murkworks.com   (315)268-1000
http://www.murkworks.com                          (315)268-9812 Fax
netmeeting: ils://ils.murkworks.com               AOL-IM: BKClements

From scortum@yahoo.com Fri Apr 6 06:26:49 2001
From: scortum@yahoo.com (rho handsome)
Date: Thu, 5 Apr 2001 22:26:49 -0700 (PDT)
Subject: [DB-SIG] How to detect wether a database exist ?
Message-ID: <20010406052649.57265.qmail@web13808.mail.yahoo.com>

Hello guys. I am rather new at programming in Python and I hope to get some pointers from anyone out there who can help. I am using the MySQLdb bindings and there is one problem I face. I am writing an address book program. How do I check, when the program starts, whether a certain database exists on localhost, and create it if it doesn't?

__________________________________________________
Do You Yahoo!?
Get email at your own domain with Yahoo! Mail. http://personal.mail.yahoo.com/

From paul@dubois.ws Sat Apr 7 00:58:51 2001
From: paul@dubois.ws (Paul DuBois)
Date: Fri, 6 Apr 2001 18:58:51 -0500
Subject: [DB-SIG] How to detect wether a database exist ?
In-Reply-To: <20010406052649.57265.qmail@web13808.mail.yahoo.com>
References: <20010406052649.57265.qmail@web13808.mail.yahoo.com>
Message-ID:

At 10:26 PM -0700 4/5/01, rho handsome wrote:
> How do I check, when the program starts, whether a certain database
> exists on localhost, and create it if it doesn't?

SHOW DATABASES lists all the databases. SHOW DATABASES LIKE 'pattern' shows databases with names that match the given SQL pattern. So run one of those queries and inspect the output.
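Paul's recipe can be wrapped in a small helper. The SQL is MySQL-specific, and since no live server is assumed here, the demo drives the helper through a stub cursor; with MySQLdb you would pass a real cursor instead:

```python
def ensure_database(cursor, name):
    """Create database `name` if SHOW DATABASES LIKE does not find it."""
    cursor.execute("SHOW DATABASES LIKE %s", (name,))
    if cursor.fetchone() is None:
        # Identifiers cannot be bound as parameters, so the name is
        # interpolated; validate it before doing this in real code.
        cursor.execute("CREATE DATABASE `%s`" % name)
        return True    # we had to create it
    return False       # it already existed

# Stub standing in for a MySQLdb cursor in this sketch.
class StubCursor:
    def __init__(self, existing):
        self.existing = list(existing)
        self.statements = []
        self._found = None
    def execute(self, sql, params=()):
        self.statements.append(sql % tuple(params) if params else sql)
        if sql.startswith("SHOW"):
            self._found = (params[0],) if params[0] in self.existing else None
    def fetchone(self):
        return self._found

curs = StubCursor(existing=["mysql"])
created = ensure_database(curs, "addressbook")
```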
From andy@dustman.net Sat Apr 7 04:41:46 2001
From: andy@dustman.net (Andy Dustman)
Date: Fri, 6 Apr 2001 23:41:46 -0400 (EDT)
Subject: [DB-SIG] Use of record sets
In-Reply-To: <3ACC2BC3.25647.367ED993@localhost>
Message-ID:

On Thu, 5 Apr 2001, Brad Clements wrote:

> On 3 Apr 2001, at 21:57, John Shafaee wrote:
>
> > The DBFactory module returns query results as a sequence of Records.
> > Records are Python UserDict objects that index field data by the
> > field (column) name. This is very convenient when interpolating results in a
> > string. The above example code would look like the following if written in
> > terms of lists of Records:
> >
> > print "Found id '%s', first name '%s' and last name
> > '%s'" % ( result['id'], result['first_name'], result['last_name'] )
> >
> > I was wondering if this is something that you have considered. If not, I
> > was wondering what were the proper channels that I would need to go through
> > to make this suggestion open to the rest of the Python DB API community.
>
> Look for the module SQLDict.py which does exactly this (and much more)

Actually, SQLDict doesn't return dictionaries; it gives a dictionary interface to your SQL database. It also has a facility for returning records as real classes:

print "Found id '%s', first name '%s' and last name '%s'" %\
    ( result.id, result.first_name, result.last_name )

print result  # better still, define __str__ to do this

> Much of my Zope/Python work is done with either SQLDict, or at the base
> db-api level. I use the db-api level when I need the maximum performance.

I hardly ever hear of anyone using SQLDict...

Also, as someone else pointed out, MySQLdb can return dictionaries instead of tuples if you use a DictCursor. This resulted from user clamoring, since the older MySQLmodule supported fetchDictxxx() methods. I suppose derivative cursor classes could be added to other databases as well.
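MySQLdb's DictCursor is driver-specific, but the "derivative cursor class" idea generalizes: sqlite3, for instance, reaches the same effect through a connection-level row factory. A sketch in Python 3 syntax (dict_factory is our own helper, not part of sqlite3):

```python
import sqlite3

def dict_factory(cursor, row):
    """Row factory: return each fetched row as a name-keyed dict."""
    return {d[0]: value for d, value in zip(cursor.description, row)}

conn = sqlite3.connect(":memory:")
conn.row_factory = dict_factory    # every cursor from conn now yields dicts
curs = conn.cursor()
curs.execute("CREATE TABLE t (id INTEGER, first_name TEXT)")
curs.execute("INSERT INTO t VALUES (1, 'John')")
curs.execute("SELECT id, first_name FROM t")
row = curs.fetchone()
```

sqlite3 also ships a built-in sqlite3.Row factory, which additionally keeps integer indexing alongside name lookup.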
And just today I was wrestling a bit with _mysql (the low-level portion of MySQLdb), since Zope's ZMySQLDA uses that. Zope's DA interface is a database API in itself, and making it play with _mysql is as easy as or easier (and probably a tad faster) than using MySQLdb.

--
Andy Dustman  PGP: 0xC72F3F1D  @ .net  http://dustman.net/andy
"Normally with carbonara you use eggs, but I used lobster brains instead."
    -- Masahiko Kobe (Iron Chef Italian): 30-year-old Giant Lobster Battle

From gstein@lyra.org Sat Apr 7 17:01:33 2001
From: gstein@lyra.org (Greg Stein)
Date: Sat, 7 Apr 2001 09:01:33 -0700
Subject: [DB-SIG] Use of record sets
In-Reply-To: ; from daniel.dittmar@sap.com on Wed, Apr 04, 2001 at 06:11:37PM +0200
References:
Message-ID: <20010407090133.D31832@lyra.org>

Euh... there is already a module to handle result sets as tuples, dictionaries, or attributed objects. It was written over five years ago :-)

http://www.lyra.org/greg/python/

Right around the middle of the page: dtuple.py

Cheers,
-g

On Wed, Apr 04, 2001 at 06:11:37PM +0200, Dittmar, Daniel wrote:
> > result = curs.fetchone()
> > print "Found id '%s', first name '%s' and last name '%s'" % ( result[0], result[1], result[2] )
>
> vs.
>
> > print "Found id '%s', first name '%s' and last name '%s'" % ( result['id'], result['first_name'], result['last_name'] )
>
> How about
>
> id, first_name, last_name = curs.fetchone()
> print "Found id '%s', first name '%s' and last name '%s'" % ( id, first_name, last_name )
>
> or better still
>
> for id, first_name, last_name in curs:
>     print "Found id '%s', first name '%s' and last name '%s'" % ( id, first_name, last_name )
>
> This is of course not directly an argument against using strings as a column
> index (DBI has them, JDBC has them).
>
> There is one annoying property of SQL in this context: identifiers are
> converted to uppercase unless quoted. Should the UserDict mimic this
> behaviour by trying both the actual string and the uppercase version?
> And how about returning row objects. That would allow .dot notation (if the
> column names are valid Python identifiers):
> print "Found id '%s', first name '%s' and last name '%s'" % ( result.id,
> result.first_name, result.last_name )
>
> My personal preference: 'for id, first_name, last_name in curs:'
>
> Daniel
>
> --
> Daniel Dittmar
> daniel.dittmar@sap.com
> SAP DB, SAP Labs Berlin
> http://www.sapdb.org/
>
> _______________________________________________
> DB-SIG maillist - DB-SIG@python.org
> http://mail.python.org/mailman/listinfo/db-sig

--
Greg Stein, http://www.lyra.org/

From jon+python-db-sig@unequivocal.co.uk Sat Apr 7 18:04:09 2001
From: jon+python-db-sig@unequivocal.co.uk (Jon Ribbens)
Date: Sat, 7 Apr 2001 18:04:09 +0100
Subject: [DB-SIG] How to detect wether a database exist ?
In-Reply-To: <20010406052649.57265.qmail@web13808.mail.yahoo.com>; from scortum@yahoo.com on Thu, Apr 05, 2001 at 10:26:49PM -0700
References: <20010406052649.57265.qmail@web13808.mail.yahoo.com>
Message-ID: <20010407180409.B10561@snowy.squish.net>

rho handsome wrote:
> How do I check, when the program starts, whether a certain database
> exists on localhost, and create it if it doesn't?

CREATE TABLE IF NOT EXISTS blah ( blah )

From jno@glasnet.ru Mon Apr 9 07:06:06 2001
From: jno@glasnet.ru (Eugene V. Dvurechenski)
Date: Mon, 9 Apr 2001 10:06:06 +0400
Subject: [DB-SIG] DCOracle and Python 2.0
Message-ID: <20010409100606.F24229@glas.net>

What is the proper version of DCOracle to use with Python 2.0? DCO 1.3.1 does not want to compile in this environment. The platforms are Linux, Solaris, and HPUX.

--
SY,
jno (PRIVATE PERSON) [ http://www.glasnet.ru/~jno ]
a TeleRoss techie    [ http://www.aviation.ru/ ]
If God meant man to fly, He'd have given him more money.
From andreas@digicool.com Mon Apr 9 12:29:39 2001
From: andreas@digicool.com (Andreas Jung)
Date: Mon, 9 Apr 2001 07:29:39 -0400
Subject: [DB-SIG] DCOracle and Python 2.0
References: <20010409100606.F24229@glas.net>
Message-ID: <00aa01c0c0ef$69151540$9865fea9@SUXLAP>

The latest version of DCOracle is 1.3.2 (or .1). What are your compilation problems? Usually the Setup file or the Makefile needs some adjustments.

You can also take a look at Matt Kromer's DCO2. This is the new implementation of the Oracle 8 API, based on the Python database API. You can find it on www.zope.org. It comes with support for DistUtils and should hopefully install out of the box.

Andreas Jung
Digital Creations

----- Original Message -----
From: "Eugene V. Dvurechenski"
Sent: Monday, April 09, 2001 2:06 AM
Subject: [DB-SIG] DCOracle and Python 2.0

> What is the proper version of DCOracle to use with Python 2.0?
> DCO 1.3.1 does not want to compile in this environment.
> The platforms are Linux, Solaris, and HPUX.
>
> --
> SY,
> jno (PRIVATE PERSON) [ http://www.glasnet.ru/~jno ]
> a TeleRoss techie    [ http://www.aviation.ru/ ]
> If God meant man to fly, He'd have given him more money.

From pshafaee@hostway.com Mon Apr 9 15:54:54 2001
From: pshafaee@hostway.com (John Shafaee)
Date: Mon, 09 Apr 2001 09:54:54 -0500
Subject: [DB-SIG] Use of record sets
References: <20010407090133.D31832@lyra.org>
Message-ID: <3AD1CD3E.56539AAB@hostway.net>

This may be a stupid question, but is the dtuple.py spec part of the Python DB API? I do not have access to the older SIGs, but looking over the current API interface docs I did not see anything mentioned about such a class.

Actually, the concept of viewing DB results as records has been around for much longer than Python itself, no argument there.
I am certain that at this time there are a number of Python implementations out there, all equally good and useful; just in the recent DB-SIG posts two different implementations were mentioned (Record and SQLDict).

However, I think there is a need to standardize on this concept (if it has not already been done and/or I just missed it). I believe that the most appropriate place for this concept would be the Python DB API. It would be great if all implementations of the DB API provided a facility for treating results as record sets. As it stands, while there are many different implementations that convert fetchall() results into record sets, there isn't a standard one that we can expect when working with a Python DB module.

Best,
John S.

Greg Stein wrote:

> Euh... there is already a module to handle result sets as tuples,
> dictionaries, or attributed objects. It was written over five years ago :-)
>
> http://www.lyra.org/greg/python/
>
> Right around the middle of the page: dtuple.py
>
> Cheers,
> -g
>
> On Wed, Apr 04, 2001 at 06:11:37PM +0200, Dittmar, Daniel wrote:
> > > result = curs.fetchone()
> > > print "Found id '%s', first name '%s' and last name '%s'" % ( result[0], result[1], result[2] )
> >
> > vs.
> >
> > > print "Found id '%s', first name '%s' and last name '%s'" % ( result['id'], result['first_name'], result['last_name'] )
> >
> > How about
> >
> > id, first_name, last_name = curs.fetchone()
> > print "Found id '%s', first name '%s' and last name '%s'" % ( id, first_name, last_name )
> >
> > or better still
> >
> > for id, first_name, last_name in curs:
> >     print "Found id '%s', first name '%s' and last name '%s'" % ( id, first_name, last_name )
> >
> > This is of course not directly an argument against using strings as a column
> > index (DBI has them, JDBC has them).
> >
> > There is one annoying property of SQL in this context: identifiers are
> > converted to uppercase unless quoted.
> > Should the UserDict mimic this behaviour by trying both the actual
> > string and the uppercase version?
> >
> > And how about returning row objects. That would allow .dot notation
> > (if the column names are valid Python identifiers):
> > print "Found id '%s', first name '%s' and last name '%s'" % ( result.id,
> > result.first_name, result.last_name )
> >
> > My personal preference: 'for id, first_name, last_name in curs:'
> >
> > Daniel
> >
> > --
> > Daniel Dittmar
> > daniel.dittmar@sap.com
> > SAP DB, SAP Labs Berlin
> > http://www.sapdb.org/
>
> --
> Greg Stein, http://www.lyra.org/

From mal@lemburg.com Mon Apr 9 16:45:52 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 09 Apr 2001 17:45:52 +0200
Subject: [DB-SIG] Use of record sets
References: <20010407090133.D31832@lyra.org> <3AD1CD3E.56539AAB@hostway.net>
Message-ID: <3AD1D930.5A677A84@lemburg.com>

John Shafaee wrote:
>
> This may be a stupid question, but is the dtuple.py spec part of the Python
> DB API? I do not have access to the older SIGs, but looking over the current
> API interface docs I did not see anything mentioned about such a class.
>
> Actually, the concept of viewing DB results as records has been around for
> much longer than Python itself, no argument there. I am certain that at this
> time there are a number of Python implementations out there, all equally good
> and useful. Just in the past DB-SIG posts there were two different
> implementations mentioned (Record and SQLDict).
>
> However, I think there is a need to standardize on this concept (if it has
> not already been done and/or I just missed it). I believe that the most
> appropriate place for this concept would be the Python DB API. It would be
> great if all implementations of the DB API provided a facility for treating
> results as record sets.
> > So, while there are many different implementations of converting fetchall() results > into record sets, there isn't a standard one that we can expect when working > with a Python DB module. dtuple.py works with all DB API compliant database interfaces. There is really no need to add any such mechanism to the DB API spec. itself, since writing wrappers which enable these features in one of the possible ways is easy and can be done by the application programmer rather than the interface author. Note that mapping columns to column names is not necessarily a 1-1 mapping. Some DBs may not even support querying column names in result sets. > Best, > John S. > > Greg Stein wrote: > > > Euh... there is already a module to handle result sets as tuple, > > dictionaries, or attributed objects. It was written over five years ago :-) > > > > http://www.lyra.org/greg/python/ > > > > Right around the middle of the page: dtuple.py > > > > Cheers, > > -g > > > > On Wed, Apr 04, 2001 at 06:11:37PM +0200, Dittmar, Daniel wrote: > > > > result = curs.fetchone() > > > > print "Found id '%s', first name '%s' and last name '%s'"%( result[0], > > > result[1], result[2] ) > > > > > > vs. > > > > > > > print "Found id '%s', first name '%s' and last name '%s'"%( result['id'], > > > result['first_name'], > > > > result['last_name'] ) > > > > > > How about > > > > > > id, first_name, last_name = curs.fetchone () > > > print "Found id '%s', first name '%s' and last name '%s'"%( id, first_name, > > > last_name) > > > > > > or better still > > > > > > for id, first_name, last_name in curs: > > > print "Found id '%s', first name '%s' and last name '%s'"%( id, > > > first_name, last_name) > > > > > > This is of course not directly an argument against using strings as a column > > > index (DBI has them, JDBC has them). > > > > > > There is one annoying property of SQL in this context: identifiers are > > > converted to uppercase unless quoted.
Should the UserDict mimic this > > > behaviour by trying both the actual string and the uppercase version? > > > > > > And how about returning row objects. That would allow .dot notation (if the > > > column names are valid Python identifier): > > > print "Found id '%s', first name '%s' and last name '%s'"%( result.id, > > > result.first_name, result.last_name) > > > > > > My personal preference: 'for id, first_name, last_name in curs:' > > > > > > Daniel > > > > > > -- > > > Daniel Dittmar > > > daniel.dittmar@sap.com > > > SAP DB, SAP Labs Berlin > > > http://www.sapdb.org/ > > > > > > _______________________________________________ > > > DB-SIG maillist - DB-SIG@python.org > > > http://mail.python.org/mailman/listinfo/db-sig > > > > -- > > Greg Stein, http://www.lyra.org/ > > _______________________________________________ > DB-SIG maillist - DB-SIG@python.org > http://mail.python.org/mailman/listinfo/db-sig -- Marc-Andre Lemburg ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Pages: http://www.lemburg.com/python/ From fog@mixadlive.com Mon Apr 9 16:57:27 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: Mon, 9 Apr 2001 17:57:27 +0200 Subject: [DB-SIG] Use of record sets In-Reply-To: <3AD1D930.5A677A84@lemburg.com>; from mal@lemburg.com on Mon, Apr 09, 2001 at 05:45:52PM +0200 References: <20010407090133.D31832@lyra.org> <3AD1CD3E.56539AAB@hostway.net> <3AD1D930.5A677A84@lemburg.com> Message-ID: <20010409175727.B3380@mixadlive.com> Scavenging the mail folder uncovered M.-A. Lemburg's letter: > dtuple.py works with all DB API compliant database interfaces. > There is really no need to add any such mechanism to the DB API > spec. itself, since writing wrappers which enable this features > in one of the possible ways is easy and can be done by the application > programmer rather than the interface author. i agree. 
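Such an application-level wrapper is indeed only a few lines. The sketch below is purely illustrative — it is not dtuple.py, and the record_rows helper name is invented here; Python's bundled sqlite3 module stands in for any DB API driver that populates cursor.description:

```python
import sqlite3

def record_rows(cursor):
    # In DB API 2.0, each cursor.description entry is a 7-item sequence
    # whose first item is the column name.
    names = [d[0] for d in cursor.description]
    return [dict(zip(names, row)) for row in cursor.fetchall()]

conn = sqlite3.connect(":memory:")
curs = conn.cursor()
curs.execute("CREATE TABLE some_table (id INTEGER, first_name TEXT, last_name TEXT)")
curs.execute("INSERT INTO some_table VALUES (11, 'John', 'Shafaee')")

curs.execute("SELECT id, first_name, last_name FROM some_table WHERE id > 10")
for result in record_rows(curs):
    print("Found id '%s', first name '%s' and last name '%s'" % (
        result['id'], result['first_name'], result['last_name']))
```

As noted above, some backends cannot report column names at all, and names need not be unique across a join, so a wrapper of this sort belongs in the application rather than in the spec.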
this (not the fact that i agree ;-) but the fact that there exists a module that works with all DB API compliant drivers) closes the argument. but, while we are at it, what is the decision procedure for this sig? i mean, the other argument, the one on typecasting ended with some proposals and nothing done? what should we do? should i write a summary and then submit it as an official addition to the db api? and *how* do you/we accept stuff like this one? is there a voting procedure? ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org Don't dream it. Be it. -- Dr. Frank'n'further From mal@lemburg.com Mon Apr 9 17:12:47 2001 From: mal@lemburg.com (M.-A. Lemburg) Date: Mon, 09 Apr 2001 18:12:47 +0200 Subject: [DB-SIG] Extending the DB API (Use of record sets) References: <20010407090133.D31832@lyra.org> <3AD1CD3E.56539AAB@hostway.net> <3AD1D930.5A677A84@lemburg.com> <20010409175727.B3380@mixadlive.com> Message-ID: <3AD1DF7F.4AE59DFB@lemburg.com> Federico Di Gregorio wrote: > > Scavenging the mail folder uncovered M.-A. Lemburg's letter: > > > dtuple.py works with all DB API compliant database interfaces. > > There is really no need to add any such mechanism to the DB API > > spec. itself, since writing wrappers which enable these features > > in one of the possible ways is easy and can be done by the application > > programmer rather than the interface author. > > i agree. this (not the fact that i agree ;-) but the fact that there exists a > module that works with all DB API compliant drivers) closes the argument. > > but, while we are at it, what is the decision procedure for this sig? > i mean, the other argument, the one on typecasting ended with some > proposals and nothing done? what should we do? should i write a summary > and then submit it as an official addition to the db api? and *how* > do you/we accept stuff like this one? is there a voting procedure?
I'd suggest writing up these DB API extensions as PEP sub-sections which are then added to the standard document as guidelines for DB API interface authors. Things like type casts are nice to have, but should not be forced onto interface authors as yet another feature to support. If the underlying database has support for these things, then they may consider implementing these "optional standard extensions", but they should not be forced to do so. About the decision procedure: last time around, we simply voted whether or not a specific feature request was worthwhile adding or not. This worked out quite well, so why not proceed in the same direction for the proposed extensions ?! -- Marc-Andre Lemburg ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Pages: http://www.lemburg.com/python/ From gstein@lyra.org Mon Apr 9 19:47:46 2001 From: gstein@lyra.org (Greg Stein) Date: Mon, 9 Apr 2001 11:47:46 -0700 Subject: [DB-SIG] Extending the DB API In-Reply-To: <3AD1DF7F.4AE59DFB@lemburg.com>; from mal@lemburg.com on Mon, Apr 09, 2001 at 06:12:47PM +0200 References: <20010407090133.D31832@lyra.org> <3AD1CD3E.56539AAB@hostway.net> <3AD1D930.5A677A84@lemburg.com> <20010409175727.B3380@mixadlive.com> <3AD1DF7F.4AE59DFB@lemburg.com> Message-ID: <20010409114746.Y31832@lyra.org> On Mon, Apr 09, 2001 at 06:12:47PM +0200, M.-A. Lemburg wrote: > Federico Di Gregorio wrote: >... > > i agree. this (not the fact that i agree ;-) but the fact that there exists a > > module that works with all DB API compliant drivers) closes the argument. DBAPI 1.0 and 2.0. That module is *old*. Predates the 1.0 spec, actually. > > but, while we are at it, what is the decision procedure for this sig? > > i mean, the other argument, the one on typecasting ended with some > > proposals and nothing done? what should we do? should i write a summary > > and then submit it as an official addition to the db api?
and *how* > > do you/we accept stuff like this one? there is a voting procedure? > > I'd suggest to write up these DB API extensions as PEP-sub sections > which are then added to the standard document as guide lines for > DB API interface authors. This would be a good way to do it. An "informational" PEP which we could refer to from the DB pages. > Things like type casts are nice to have, but should not be forced [ I believe he was referring to the types-sig ] > onto interface authors as yet another feature to support. If the > underlying database has support for these things, then they may > consider implementing these "optional standard extensions", but > they should not be forced to do so. Exactly. The DBAPI is all about getting access to the DB in a reasonably uniform manner. Every database is different, though, so we allow the module author leeway to deal with that. Once the access is available to the Python programmer, then higher-level modules can unify them (if required). > About the decision procedure: last time around, we simply voted > whether or not a specific feature request was worthwhile adding > or not. This worked out quite well, so why not proceed in the > same direction for the proposed extensions ?! Yup. We were able to reach a consensus without too much problem. It helped that only a few people were actively involved :-). I think the PEP idea is nice since it allows us to organize and record the ideas a bit better. Cheers, -g -- Greg Stein, http://www.lyra.org/ From kjcole@gri.gallaudet.edu Wed Apr 11 00:53:13 2001 From: kjcole@gri.gallaudet.edu (Kevin Cole) Date: Tue, 10 Apr 2001 19:53:13 -0400 (EDT) Subject: [DB-SIG] Are there examples of "cursor" use? Message-ID: Hi, I'm working with PostgreSQL and Python. According to www.python.org, D'Arcy J.M. 
Cain's pgsqldb.py implements the DB-API and after looking at AMK's DB-API documentation, I've tried stuff like: import pgsqldb (OK) db = pgsqldb.connect("my_database") (OK) cursor = db.cursor() (Not OK) That didn't work so I tried several other things and eventually tried: cursor = pgsqldb._cursor("my_database") (OK) but then stuff like cursor.execute("select * from my_table") doesn't seem to work either. So, is there any documentation for SQL novices or an example that uses a cursor? -- Kevin Cole | E-mail: kjcole@gri.gallaudet.edu Gallaudet Research Institute | WWW: http://gri.gallaudet.edu/~kjcole/ Hall Memorial Bldg S-419 | Voice: (202) 651-5135 Washington, D.C. 20002-3695 | FAX: (202) 651-5746 From kromag@nsacom.net Wed Apr 11 08:23:50 2001 From: kromag@nsacom.net (kromag@nsacom.net) Date: Wed, 11 Apr 2001 00:23:50 -0700 (PDT) Subject: [DB-SIG] Newbie: PyGreSQL and Insertion of random numbers Message-ID: <200104110723.f3B7Nnk12724@pop.nsacom.net> I am attempting to insert random numbers into a PostGreSQL database using _pg and random. I am unsure as to syntax when adding variables. I have attempted the following: import _pg import random db.connect('test', 'localhost') db.query("insert into pork values(`random.randint(1,1000)`+ 'stuff' +'1-2-1999')") which returns a parse error at the second opening parenthesis. (So a query doesn't act like a string! I learned something!) I have also attempted: db.query("insert into pork values('`random.randint(1,1000)`', 'stuff', '1-2-1999')") which, of course, inserts random.randint() as a literal string. What am I doing wrong?
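The backticks are the problem: to Python the query is just an ordinary string, so nothing inside it is ever evaluated, and the database sees the literal backtick characters. The DB API way is to pass the values separately and let the driver quote them. A hedged sketch of the idea, using sqlite3 as a stand-in DB API module (its placeholder is '?', whereas PostgreSQL drivers typically use the '%s'/'%d' style shown in the replies):

```python
import random
import sqlite3  # stand-in: any DB API 2.0 compliant module works the same way

db = sqlite3.connect(":memory:")
curs = db.cursor()
curs.execute("CREATE TABLE pork (n INTEGER, s TEXT, d TEXT)")

# Don't splice Python expressions into the SQL text; pass them as
# parameters and let the driver do the quoting.
n = random.randint(1, 1000)
curs.execute("INSERT INTO pork VALUES (?, ?, ?)", (n, 'stuff', '1-2-1999'))
db.commit()

curs.execute("SELECT n, s, d FROM pork")
row = curs.fetchone()
print(row[1])  # -> stuff
```

The same connect() -> cursor() -> execute() shape also answers the earlier cursor question: if db.cursor() fails, the connection itself was never established correctly.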
From fog@mixadlive.com Wed Apr 11 07:55:41 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: Wed, 11 Apr 2001 08:55:41 +0200 Subject: [DB-SIG] Newbie: PyGreSQL and Insertion of random numbers In-Reply-To: <200104110723.f3B7Nnk12724@pop.nsacom.net>; from kromag@nsacom.net on Wed, Apr 11, 2001 at 12:23:50AM -0700 References: <200104110723.f3B7Nnk12724@pop.nsacom.net> Message-ID: <20010411085541.A710@initd.org> Scavenging the mail folder uncovered kromag@nsacom.net's letter: > db.query("insert into pork values('`random.randint(1,1000)`', 'stuff', '1-2-1999')") > > which, of course, inserts random.randint() as a literal string. > > What am I doing wrong? first, you are not using DB API calls while writing to the DB API SIG ;) anyway, with *any* DB API compliant python module you can do as follows: import random import module # put your module name here, not "module" o = module.connect("test", host="localhost") # some modules support only/also o = module.connect("dbname=test host=localhost") c = o.cursor() c.execute("INSERT INTO pork VALUES (%d, 'stuff', '1-2-1999')", random.randint(1,1000)) -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org Qu'est ce que la folie? Juste un sentiment de liberté si fort qu'on en oublie ce qui nous rattache au monde... -- J. de Loctra From mal@lemburg.com Wed Apr 11 08:30:44 2001 From: mal@lemburg.com (M.-A. Lemburg) Date: Wed, 11 Apr 2001 09:30:44 +0200 Subject: [DB-SIG] Are there examples of "cursor" use? References: Message-ID: <3AD40824.78FF2A7D@lemburg.com> Kevin Cole wrote: > > Hi, > > I'm working with PostgreSQL and Python. According to www.python.org, > D'Arcy J.M. Cain's pgsqldb.py implements the DB-API and after looking at > AMK's DB-API documentation, I've tried stuff like: > > import pgsqldb (OK) > db = pgsqldb.connect("my_database") (OK) > cursor = db.cursor() (Not OK) This is how it is supposed to work.
What error message do you get ? (Could be that the connection is not properly set up...) > That didn't work so I tried several other things and eventually tried: > > cursor = pgsqldb._cursor("my_database") (OK) > > but then stuff like cursor.execute("select * from my_table") doesn't > seem to work either. > > So, is there any documentation for SQL novices or example that uses > cursor? There are several Python books out there which have examples on how to use the DB API compatible packages. There are some online docs as well: see the database topic guide on www.python.org. -- Marc-Andre Lemburg ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Pages: http://www.lemburg.com/python/ From darcy@druid.net Wed Apr 11 11:57:03 2001 From: darcy@druid.net (D'Arcy J.M. Cain) Date: Wed, 11 Apr 2001 06:57:03 -0400 (EDT) Subject: [DB-SIG] Are there examples of "cursor" use? In-Reply-To: "from Kevin Cole at Apr 10, 2001 07:53:13 pm" Message-ID: <20010411105703.C693D1A7D@druid.net> Thus spake Kevin Cole > I'm working with PostgreSQL and Python. According to www.python.org, > D'Arcy J.M. Cain's pgsqldb.py implements the DB-API and after looking at > AMK's DB-API documentation, I've tried stuff like: > > import pgsqldb (OK) >>> import pgsqldb Traceback (most recent call last): File "<stdin>", line 1, in ? ImportError: No module named pgsqldb Did you actually mean "import pgdb"? > db = pgsqldb.connect("my_database") (OK) > cursor = db.cursor() (Not OK) What error do you get? -- D'Arcy J.M. Cain | Democracy is three wolves http://www.druid.net/darcy/ | and a sheep voting on +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner. From kjcole@gri.gallaudet.edu Wed Apr 11 17:22:33 2001 From: kjcole@gri.gallaudet.edu (Kevin Cole) Date: Wed, 11 Apr 2001 12:22:33 -0400 (EDT) Subject: [DB-SIG] Are there examples of "cursor" use? In-Reply-To: <20010411105703.C693D1A7D@druid.net> Message-ID: Nope.
I actually meant "import pgsqldb" as written. I'm on a Red Hat 6.2 Linux system, with postgresql-python 7.0.2. The comments at the top of pgsqldb.py indicate it was written by you, and I've created a log of the attempts I made (with perhaps a lot more than anyone needs). Rather than include it here, and possibly overload someone's email, I've put it on the web. Have a look at: http://gri.gallaudet.edu/~kjcole/pgsqldb.log.html On Wed, 11 Apr 2001, D'Arcy J.M. Cain wrote: > >>> import pgsqldb > Traceback (most recent call last): > File "<stdin>", line 1, in ? > ImportError: No module named pgsqldb > > Did you actually mean "import pgdb"? > > > db = pgsqldb.connect("my_database") (OK) > > cursor = db.cursor() (Not OK) > > What error do you get? > Thus spake Kevin Cole > > I'm working with PostgreSQL and Python. According to www.python.org, > > D'Arcy J.M. Cain's pgsqldb.py implements the DB-API and after looking at > > AMK's DB-API documentation, I've tried stuff like: > > > > import pgsqldb (OK) -- Kevin Cole, RHCE, Linux Admin | E-mail: kjcole@gri.gallaudet.edu Gallaudet Research Institute | WWW: http://gri.gallaudet.edu/~kjcole/ Hall Memorial Bldg S-419 | Voice: (202) 651-5135 Washington, D.C. 20002-3695 | FAX: (202) 651-5746 From darcy@druid.net Thu Apr 12 11:01:32 2001 From: darcy@druid.net (D'Arcy J.M. Cain) Date: Thu, 12 Apr 2001 06:01:32 -0400 (EDT) Subject: [DB-SIG] Are there examples of "cursor" use? In-Reply-To: "from Kevin Cole at Apr 11, 2001 12:22:33 pm" Message-ID: <20010412100132.662621A7D@druid.net> Thus spake Kevin Cole > Nope. I actually meant "import pgsqldb" as written. I'm on a Red Hat > 6.2 Linux system, with postgresql-python 7.0.2. The comments at the top > of pgsqldb.py indicate it was written by you, and I've created a log of > the attempts I made (with perhaps a lot more than anyone needs). > Rather than include it here, and possibly overload someone's email, I've > put it on the web.
Have a look at: > > http://gri.gallaudet.edu/~kjcole/pgsqldb.log.html I looked but I still don't know where that pgsqldb file came from. The test version, which comes with PostGreSQL, is called pgdb.py and that string doesn't appear anywhere in the distribution. That may have been a very early attempt that was aborted when someone donated the current version. Are you building PostGreSQL from source? If so, run configure with the --with-python option. It will just build it with the latest version. -- D'Arcy J.M. Cain | Democracy is three wolves http://www.druid.net/darcy/ | and a sheep voting on +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner. From kjcole@gri.gallaudet.edu Thu Apr 12 20:58:30 2001 From: kjcole@gri.gallaudet.edu (Kevin Cole) Date: Thu, 12 Apr 2001 15:58:30 -0400 (EDT) Subject: [DB-SIG] postgresql-7.0.3.tar.gz != postgresql-7.0.3-2.src.rpm In-Reply-To: <20010412100132.662621A7D@druid.net> Message-ID: Hello all, I'm cc'ing the pgsql-bugs address, since I'm starting to think it's their problem. It appears that the folks at www.postgresql.org don't know about pgdb.py either. Or, rather, not consistently. Apparently, the 7.0.3-2 source tarball doesn't equal the 7.0.3-2 source RPM. I just downloaded the latest PostgreSQL source RPM (7.0.3-2) from www.postgresql.org and rebuilt everything. (From that source, it built 10 binary RPMs: the client, server, development stuff, perl, python, tcl and tk, test, odbc, and jdbc packages.) When finished, I ended up with exactly what I had before: pgsqldb.py et al. No pgdb.py. On Thu, 12 Apr 2001, D'Arcy J.M. Cain wrote: > Thus spake Kevin Cole > > Nope. I actually meant "import pgsqldb" as written. I'm on a Red Hat > > 6.2 Linux system, with postgresql-python 7.0.2. The comments at the top > > of pgsqldb.py indicate it was written by you, and I've created a log of > > the attempts I made (with perhaps a lot more than anyone needs).
> > Rather than include it here, and possibly overload someone's email, I've > > put it on the web. Have a look at: > > > > http://gri.gallaudet.edu/~kjcole/pgsqldb.log.html > > I looked but I still don't know where that pgsqldb file came from. The > test version, which comes with PostGreSQL, is called pgdb.py and that > string doesn't appear anywhere in the distribution. That may have been a > very early attempt that was aborted when someone donated the current > version. > > Are you building PostGreSQL from source? If so, run configure with the > --with-python option. It will just build it with the latest version. -- Kevin Cole, RHCE, Linux Admin | E-mail: kjcole@gri.gallaudet.edu Gallaudet Research Institute | WWW: http://gri.gallaudet.edu/~kjcole/ Hall Memorial Bldg S-419 | Voice: (202) 651-5135 Washington, D.C. 20002-3695 | FAX: (202) 651-5746 From darcy@druid.net Fri Apr 13 01:41:15 2001 From: darcy@druid.net (D'Arcy J.M. Cain) Date: Thu, 12 Apr 2001 20:41:15 -0400 (EDT) Subject: [DB-SIG] Re: postgresql-7.0.3.tar.gz != postgresql-7.0.3-2.src.rpm In-Reply-To: "from Kevin Cole at Apr 12, 2001 03:58:30 pm" Message-ID: <20010413004115.659C31A7F@druid.net> Thus spake Kevin Cole > I'm cc'ing the pgsql-bugs address, since I'm starting to think it's > their problem. > > It appears that the folks at www.postgresql.org don't know about pgdb.py > either. Or, rather, not consistently. Apparently, the 7.0.3-2 source > tarball doesn't equal the 7.0.3-2 source RPM. > > I just downloaded the latest PostgreSQL source RPM (7.0.3-2) from > www.postgresql.org and rebuilt everything. (From that source, it built > 10 binary RPM's: the client, server, development stuff, perl, python, > tcl and tk, test, odbc, and jdbc packages.) When finished, I ended up > with exactly what I had before: pgsqldb.py et al. No pgdb.py. I'm not sure what's in the RPM (I run NetBSD) but the PostgreSQL sources for PyGreSQL are the horse's mouth. 
The official development tree moved into PyGreSQL about a month ago. -- D'Arcy J.M. Cain | Democracy is three wolves http://www.druid.net/darcy/ | and a sheep voting on +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner. From kromag@nsacom.net Thu Apr 19 06:33:24 2001 From: kromag@nsacom.net (kromag@nsacom.net) Date: Wed, 18 Apr 2001 22:33:24 -0700 (PDT) Subject: [DB-SIG] Where is mxDateTime.h Hiding? Message-ID: <200104190533.f3J5XOR16897@pop.nsacom.net> Hi folks, I am having some difficulties in getting mxDateTime.h to install properly and psycopg to compile on my Slackware Linux system. I have Python2.1b2, PostGreSQL 7.0.3 and egenix-base.2.0.0 installed. I attempt configure with the following: doug:~/psycopg-0.5.1$ ./configure --with-mxdatetime-includes=/usr/local/lib/python2.1/site-packages/mx/DateTime/mxDateTime --with-postgres-libraries=/usr/local/pgsql/lib --with-postgres-includes=/usr/local/pgsql/include --enable-devel=yes which gives me: ----stuff removed----- checking for main in -lcrypt... yes Warning: next test can fail because of a missing libcrypt checking for PQconnectStart in -lpq...
yes updating cache ./config.cache creating ./config.status creating Setup creating config.h creating Makefile.pre creating Makefile which seems dandy until: doug:~/psycopg-0.5.1$ make gcc -fpic -g -O2 -Wall -Wstrict-prototypes -Wall -I/usr/local/include/python2.0 -I/usr/local/lib/python2.0/config -DHAVE_CONFIG_H=1 -DHAVE_LIBCRYPT=1 -I/usr/local/pgsql/include -I/usr/local/lib/python2.1/site-packages/mx/DateTime/mxDateTime -DVERSION="0.5.1" -DDEBUG -D_REENTRANT -D_GNU_SOURCE -c ./module.c ./module.c:29: mxDateTime.h: No such file or directory In file included from ./module.c:33: module.h:33: mxDateTime.h: No such file or directory make: *** [module.o] Error 1 I have looked for mxDateTime.h and found it, not in the python libs where I expected it to be, but in: /home/doug/egenix-mx-base-2.0.0./mx/DateTime/mxDateTime/mxDateTime.h Where do I need to move mxDateTime.h (and any other gotchaStupid.h files) to make the compilation work? Is there a way to specify it in the setup.py/cfg files? I can post more of the ./configure output if needed. Thanks! d From fog@mixadlive.com Thu Apr 19 08:22:47 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: Thu, 19 Apr 2001 09:22:47 +0200 Subject: [DB-SIG] Where is mxDateTime.h Hiding? In-Reply-To: <200104190533.f3J5XOR16897@pop.nsacom.net>; from kromag@nsacom.net on Wed, Apr 18, 2001 at 10:33:24PM -0700 References: <200104190533.f3J5XOR16897@pop.nsacom.net> Message-ID: <20010419092247.B22007@initd.org> Scavenging the mail folder uncovered kromag@nsacom.net's letter: > I have looked for mxDateTime.h and found it, not in the python libs where I > expected it to be, but in: > > /home/doug/egenix-mx-base-2.0.0./mx/DateTime/mxDateTime/mxDateTime.h > > Where do I need to move mxDateTime.h (and any other gotchaStupid.h files) to > make the compilation work? Is there a way to specify it in the setup.py/cfg > files?
the simple and fast solution is to cd to psycopg source directory and then: cp /home/doug/egenix-mx-base-2.0.0./mx/DateTime/mxDateTime/mx*.h . ./configure --with-mxdatetime-includes=. ... then make. this is how we build the debian binaries. ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org Qu'est ce que la folie? Juste un sentiment de liberté si fort qu'on en oublie ce qui nous rattache au monde... -- J. de Loctra From mal@lemburg.com Thu Apr 19 08:42:56 2001 From: mal@lemburg.com (M.-A. Lemburg) Date: Thu, 19 Apr 2001 09:42:56 +0200 Subject: [DB-SIG] Where is mxDateTime.h Hiding? References: <200104190533.f3J5XOR16897@pop.nsacom.net> Message-ID: <3ADE9700.C98E7B2D@lemburg.com> kromag@nsacom.net wrote: > > Hi folks, > > I am having some difficulties in getting mxDateTime.h to install properly and > psycopg to compile on my slackware linux system. I have Python2.1b2, > PostGreSQL 7.0.3 and egenix-base.2.0.0 installed. > > I attempt configure with the following: > > doug:~/psycopg-0.5.1$ ./configure --with-mxdatetime- > includes=/usr/local/lib/python2.1/site-packages/mx/DateTime/mxDateTime --with- > postgres-libraries=/usr/local/pgsql/lib --with-postgres- > includes=/usr/local/pgsql/include --enable-devel=yes > > which gives me: > > ----stuff removed----- > > checking for main in -lcrypt... yes > Warning: next test can fail because of a missing libcrypt > checking for PQconnectStart in -lpq... 
yes > updating cache ./config.cache > creating ./config.status > creating Setup > creating config.h > creating Makefile.pre > creating Makefile > > which seems dandy until: > > doug:~/psycopg-0.5.1$ make > gcc -fpic -g -O2 -Wall -Wstrict-prototypes -Wall -I/usr/local/include/python2.0 -I/usr/local/lib/python2.0/config -DHAVE_CONFIG_H=1 -DHAVE_LIBCRYPT=1 -I/usr/local/pgsql/include -I/usr/local/lib/python2.1/site-packages/mx/DateTime/mxDateTime -DVERSION="0.5.1" -DDEBUG -D_REENTRANT -D_GNU_SOURCE -c ./module.c > ./module.c:29: mxDateTime.h: No such file or directory > In file included from ./module.c:33: > module.h:33: mxDateTime.h: No such file or directory > make: *** [module.o] Error 1 > > I have looked for mxDateTime.h and found it, not in the python libs where I > expected it to be, but in: > > /home/doug/egenix-mx-base-2.0.0./mx/DateTime/mxDateTime/mxDateTime.h > > Where do I need to move mxDateTime.h (and any other gotchaStupid.h files) to > make the compilation work? Is there a way to specify it in the setup.py/cfg > files? > > I can post more of the ./configure output if needed. Interesting. Previously, the .h file lived in the same dir as the C extension itself. Since distutils only installs Python scripts and the compiled C extensions (not the header files) by default, the header file is no longer installed in the target dir. I think I'll have to issue a new patch release of egenix-mx-base which solves this problem... in any case, moving the .h file into site-packages/mx/DateTime/mxDateTime will solve the problem for you. -- Marc-Andre Lemburg ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Pages: http://www.lemburg.com/python/ From fog@mixadlive.com Thu Apr 19 09:03:05 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: Thu, 19 Apr 2001 10:03:05 +0200 Subject: [DB-SIG] (Going off-topic...) Where is mxDateTime.h Hiding?
In-Reply-To: <3ADE9700.C98E7B2D@lemburg.com>; from mal@lemburg.com on Thu, Apr 19, 2001 at 09:42:56AM +0200 References: <200104190533.f3J5XOR16897@pop.nsacom.net> <3ADE9700.C98E7B2D@lemburg.com> Message-ID: <20010419100305.D22007@initd.org> Scavenging the mail folder uncovered M.-A. Lemburg's letter: [snip] > Interesting. Previously, the .h file lived in the same dir as > the C extension itself. Since distutils only installs Python > scripts and the compiled C extensions (not the header files) > per default, the header file is no longer installed in the > target dir. > > I think I'll have to issue a new patch release of egenix-mx-base > which solves this problem... in any case, moving the .h file should header files live in the python lib dir? imho (but i am a FHS fan, so take this with a grain of salt) they should stay in /usr/include. in this case under /usr/include/mx/mxDateTime. ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org The number of the beast: vi vi vi. -- Delexa Jones From mal@lemburg.com Thu Apr 19 09:09:33 2001 From: mal@lemburg.com (M.-A. Lemburg) Date: Thu, 19 Apr 2001 10:09:33 +0200 Subject: [DB-SIG] (Going off-topic...) Where is mxDateTime.h Hiding? References: <200104190533.f3J5XOR16897@pop.nsacom.net> <3ADE9700.C98E7B2D@lemburg.com> <20010419100305.D22007@initd.org> Message-ID: <3ADE9D3C.F795A837@lemburg.com> Federico Di Gregorio wrote: > > Scavenging the mail folder uncovered M.-A. Lemburg's letter: > [snip] > > Interesting. Previously, the .h file lived in the same dir as > > the C extension itself.
Since distutils only installs Python > > scripts and the compiled C extensions (not the header files) > > per default, the header file is no longer installed in the > > target dir. > > > > I think I'll have to issue a new patch release of egenix-mx-base > > which solves this problem... in any case, moving the .h file > > should header files live in the python lib dir? imho (but i am a > FHS fan, so take this with a grain of salt) they should stay in > /usr/include. in this case under /usr/include/mx/mxDateTime. Good question: for historic reasons I will probably put them into the site-packages subdir again (that's where it was hiding before ;). Seriously, I'm not sure whether there is a standard for storing header files of extensions in distutils... looking at the distutils code there is a "header" option I could use which would then install the header files in: /usr/local/include/python21/egenix-mx-base/ Don't know if that's desirable though, since there's no way for other tools to find it by querying the mx.DateTime package dirs for it... -- Marc-Andre Lemburg ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Pages: http://www.lemburg.com/python/ From kromag@nsacom.net Fri Apr 20 06:36:27 2001 From: kromag@nsacom.net (kromag@nsacom.net) Date: Thu, 19 Apr 2001 22:36:27 -0700 (PDT) Subject: [DB-SIG] mxDateTime stress continues... Message-ID: <200104200536.f3K5aRm29364@pop.nsacom.net> I removed all traces of python from my system and (re)built python2.1/egenix/psycopg. Everything appeared to build and install fine. However, I still get the following error when attempting to import psycopg: Python 2.1 (#1, Apr 19 2001, 22:45:02) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> import psychopg Traceback (most recent call last): File "<stdin>", line 1, in ? ImportError: No module named psychopg >>> Aw Darn.
Apparently mxDateTime is not where it belongs, or I don't have the gumption to properly manually add it to my python path. Any clues for the clueless? I really want to get this module running, as the examples in the psycopg/examples directory seem really straightforward. *sigh* Python may be easy, but getting stuff to compile on linux is still getting stuff to compile on linux! :-) Thanks for all your help so far! d From kromag@nsacom.net Fri Apr 20 06:39:19 2001 From: kromag@nsacom.net (kromag@nsacom.net) Date: Thu, 19 Apr 2001 22:39:19 -0700 (PDT) Subject: [DB-SIG] mxDateTime stress- oops Message-ID: <200104200539.f3K5dJm28961@pop.nsacom.net> Sorry folks, I pasted in the wrong error in that earlier post. The error should have read: Python 2.1 (#1, Apr 19 2001, 22:45:02) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> import psycopg Traceback (most recent call last): File "<stdin>", line 1, in ? ImportError: No module named mxDateTime Sorry for any confusion. d From fog@mixadlive.com Fri Apr 20 07:48:25 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: Fri, 20 Apr 2001 08:48:25 +0200 Subject: [DB-SIG] mxDateTime stress- oops In-Reply-To: <200104200539.f3K5dJm28961@pop.nsacom.net>; from kromag@nsacom.net on Thu, Apr 19, 2001 at 10:39:19PM -0700 References: <200104200539.f3K5dJm28961@pop.nsacom.net> Message-ID: <20010420084825.A5219@mixadlive.com> Scavenging the mail folder uncovered kromag@nsacom.net's letter: > Sorry folks, I pasted in the wrong error in that earlier post. The error > should have read: > > Python 2.1 (#1, Apr 19 2001, 22:45:02) > [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 > Type "copyright", "credits" or "license" for more information. > >>> import psycopg > Traceback (most recent call last): > File "<stdin>", line 1, in ?
> ImportError: No module named mxDateTime

can you send me the complete configure output, and the paths where you installed python/mxdatetime/psycopg? maybe that's our fault... ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org Those who do not study Lisp are doomed to reimplement it. Poorly. -- from Karl M. Hegbloom .signature

From fog@mixadlive.com Fri Apr 20 07:51:52 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: Fri, 20 Apr 2001 08:51:52 +0200 Subject: [DB-SIG] Re: [Psycopg] mxDateTime still eludes me.... In-Reply-To: <200104200657.f3K6v7m30936@pop.nsacom.net>; from kromag@nsacom.net on Thu, Apr 19, 2001 at 11:57:07PM -0700 References: <200104200657.f3K6v7m30936@pop.nsacom.net> Message-ID: <20010420085152.B5219@mixadlive.com>

Scavenging the mail folder uncovered kromag@nsacom.net's letter:
> kromag@nsacom.net said:
> /usr/local/lib/python2.1/site-packages/mx/DateTime/mxDateTime.so
> to /usr/local/lib/python2.1/site-packages
>
> (the latter in conformance with Johann Sand's earlier post to PsycoPG)

yes, this is a known problem of psycopg, it can't find mxDateTime outside the top level python library path. will be resolved in next release, i hope... ciao, federico

> and was rewarded with success when I attempted to run the example files!
> Thanks for all the help and suggestions!

you're welcome. -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org The number of the beast: vi vi vi. -- Delexa Jones

From mal@lemburg.com Fri Apr 20 09:06:46 2001 From: mal@lemburg.com (M.-A. Lemburg) Date: Fri, 20 Apr 2001 10:06:46 +0200 Subject: [DB-SIG] Re: [Psycopg] mxDateTime still eludes me....
References: <200104200657.f3K6v7m30936@pop.nsacom.net> <20010420085152.B5219@mixadlive.com> Message-ID: <3ADFEE16.8A600E93@lemburg.com> Federico Di Gregorio wrote: > > Scavenging the mail folder uncovered kromag@nsacom.net's letter: > > kromag@nsacom.net said: > > > /usr/local/lib/python2.1/site-packages/mx/DateTime/mxDateTime.so > > to /usr/local/lib/python2.1/site-packages > > > > (the latter in conformance with Johann Sand's earlier post to PsycoPG) > > yes, this is a known problem of psycopg, it can't find mxDateTime outside > the top level python library path. will be resolved in next release, i > hope... Note that if you use the import API mxDateTime_ImportModuleAndAPI() from mxDateTime.h it will find the package all by itself -- you should never import the C extension mxDateTime directly, please always use import mx.DateTime. The package exports a Python C object which you can use to access the low-level C API, all of which is defined in mxDateTime.h. Background: I believe that trying to import the .so directly is the source of the problems which "kromag" is seeing here. -- Marc-Andre Lemburg ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Pages: http://www.lemburg.com/python/ From fog@mixadlive.com Fri Apr 20 09:23:26 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: Fri, 20 Apr 2001 10:23:26 +0200 Subject: [DB-SIG] Re: [Psycopg] mxDateTime still eludes me.... In-Reply-To: <3ADFEE16.8A600E93@lemburg.com>; from mal@lemburg.com on Fri, Apr 20, 2001 at 10:06:46AM +0200 References: <200104200657.f3K6v7m30936@pop.nsacom.net> <20010420085152.B5219@mixadlive.com> <3ADFEE16.8A600E93@lemburg.com> Message-ID: <20010420102326.D5219@mixadlive.com> Scavenging the mail folder uncovered M.-A. Lemburg's letter: > > yes, this is a known problem of psycopg, it can't find mxDateTime outside > > the top level python library path. will be resolved in next release, i > > hope...
> Note that if you use the import API mxDateTime_ImportModuleAndAPI()
> from mxDateTime.h it will find the package all by itself -- you should
> never import the C extension mxDateTime directly, please always use
> import mx.DateTime.

currently i am still using 1.3.0 (from python i do an import DateTime, without the 'mx') and yes, we tried to call mxDateTime_ImportModuleAndAPI() but, apparently, this does not work when the psycopg module is loaded inside Zope. so we use a slightly modified version of mxDateTime_ImportModuleAndAPI() (attached.) i will be more than happy to use the Right (TM) function, if i understand why it does not work inside Zope... the problem is MXDATETIME_LOCATION, calculated (the wrong way, else we should not have had any problems) at configure time.

    msys = PyImport_ImportModule("sys");
    if (!msys) {
        return;
    }
    v = PyObject_GetAttrString(msys, "path");
    datetime_path = PyString_FromString(MXDATETIME_LOCATION);
    if (!datetime_path) {
        return;
    }
    if (PyList_Insert(v, 0, datetime_path) == -1) {
        Py_DECREF(datetime_path);
        return;
    }
    mod = PyImport_ImportModule(MXDATETIME_API_MODULE);
    if (mod == NULL) {
        return;
    }
    api_obj = PyObject_GetAttrString(mod, MXDATETIME_MODULE"API");
    if (api_obj == NULL) {
        Py_DECREF(mod);
        return;
    }
    Py_DECREF(mod);
    Dprintf("initpsycopg(): API object found\n");
    api = PyCObject_AsVoidPtr(api_obj);
    if (api == NULL) {
        Py_DECREF(mod);
        Py_DECREF(v);
        return;
    }
    memcpy(&mxDateTime, api, sizeof(mxDateTime));
    mxDateTimeP = &mxDateTime;

-- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org Don't dream it. Be it. -- Dr. Frank'n'further

From mal@lemburg.com Fri Apr 20 09:50:16 2001 From: mal@lemburg.com (M.-A. Lemburg) Date: Fri, 20 Apr 2001 10:50:16 +0200 Subject: [DB-SIG] Re: [Psycopg] mxDateTime still eludes me....
References: <200104200657.f3K6v7m30936@pop.nsacom.net> <20010420085152.B5219@mixadlive.com> <3ADFEE16.8A600E93@lemburg.com> <20010420102326.D5219@mixadlive.com> Message-ID: <3ADFF848.B553C713@lemburg.com> Federico Di Gregorio wrote: > > Scavenging the mail folder uncovered M.-A. Lemburg's letter: > > > yes, this is a know problem of psycopg, it can't find mxDateTime outside > > > the top level python library path. will be resolved in next release, i > > > hope... > > > > Note that if you use the import API mxDateTime_ImportModuleAndAPI() > > from mxDateTime.h it will find the package all by itself -- you should > > never import the C extension mxDateTime directly, please always use > > import mx.DateTime. > > currently i am still using 1.3.0 (from python i do an import DateTime, > without the 'mx') and yes, we tried to call mxDateTime_ImportModuleAndAPI() > but, apparently, this does not work when the psycopg module is loaded > inside Zope. so we use a slightly modified version of > mxDateTime_ImportModuleAndAPI() (attached.) i will be more than happy to > use the Right (TM) function, if i understand why it does not work inside > Zope... Funny, I added the mx package to avoid problems people had with using mxDateTime together with Zope... I'd suggest moving to the new version which does work with Zope and other packages which have their own DateTime module or package. > the problem is MXDATETIME_LOCATION, calculated (the wrong way, else > we should net have had any problems) at configure time. For egenix-mx-base you don't need this sys.path tweaking at all: the mx package is the top-level package to look for. 
> msys = PyImport_ImportModule("sys");
> if (!msys) {
>     return;
> }
> v = PyObject_GetAttrString(msys, "path");
> datetime_path = PyString_FromString(MXDATETIME_LOCATION);
> if (!datetime_path) {
>     return;
> }
> if (PyList_Insert(v, 0, datetime_path) == -1) {
>     Py_DECREF(datetime_path);
>     return;
> }
> mod = PyImport_ImportModule(MXDATETIME_API_MODULE);
> if (mod == NULL) {
>     return;
> }
> api_obj = PyObject_GetAttrString(mod, MXDATETIME_MODULE"API");
> if (api_obj == NULL) {
>     Py_DECREF(mod);
>     return;
> }
> Py_DECREF(mod);
> Dprintf("initpsycopg(): API object found\n");
> api = PyCObject_AsVoidPtr(api_obj);
> if (api == NULL) {
>     Py_DECREF(mod);
>     Py_DECREF(v);
>     return;
> }
> memcpy(&mxDateTime, api, sizeof(mxDateTime));
> mxDateTimeP = &mxDateTime;

Here is the new 2.0.0 mxDateTime version of the above API:

static int
mxDateTime_ImportModuleAndAPI(void)
{
    PyObject *mod = 0, *v = 0;
    char *apimodule = MXDATETIME_API_MODULE;
    char *apiname = MXDATETIME_MODULE"API";
    void *api;

    DPRINTF("Importing the %s C API...\n",apimodule);
    mod = PyImport_ImportModule(apimodule);
    if (mod == NULL) {
        /* Fallback solution to remain backward compatible */
        PyErr_Clear();
        apimodule = "DateTime";
        DPRINTF(" package not found, trying %s instead\n",apimodule);
        mod = PyImport_ImportModule(apimodule);
        if (mod == NULL)
            goto onError;
    }
    DPRINTF(" %s package found\n",apimodule);
    v = PyObject_GetAttrString(mod,apiname);
    if (v == NULL)
        goto onError;
    Py_DECREF(mod);
    DPRINTF(" API object %s found\n",apiname);
    api = PyCObject_AsVoidPtr(v);
    if (api == NULL)
        goto onError;
    Py_DECREF(v);
    memcpy(&mxDateTime,api,sizeof(mxDateTime));
    DPRINTF(" API object loaded and initialized.\n");
    return 0;

 onError:
    DPRINTF(" not found.\n");
    Py_XDECREF(mod);
    Py_XDECREF(v);
    return -1;
}

> --
> Federico Di Gregorio
> MIXAD LIVE Chief of Research & Technology fog@mixadlive.com
> Debian GNU/Linux Developer & Italian Press Contact fog@debian.org
> Don't dream it. Be it. -- Dr.
Frank'n'further -- Marc-Andre Lemburg ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Pages: http://www.lemburg.com/python/ From fog@mixadlive.com Fri Apr 20 09:57:22 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: Fri, 20 Apr 2001 10:57:22 +0200 Subject: [DB-SIG] Re: [Psycopg] mxDateTime still eludes me.... In-Reply-To: <3ADFF848.B553C713@lemburg.com>; from mal@lemburg.com on Fri, Apr 20, 2001 at 10:50:16AM +0200 References: <200104200657.f3K6v7m30936@pop.nsacom.net> <20010420085152.B5219@mixadlive.com> <3ADFEE16.8A600E93@lemburg.com> <20010420102326.D5219@mixadlive.com> <3ADFF848.B553C713@lemburg.com> Message-ID: <20010420105722.F5219@mixadlive.com> thank you very much. we'll move to the new mx packages asap. thanx again, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org Best friends are often failed lovers. -- Me From djc@object-craft.com.au Fri Apr 20 13:23:33 2001 From: djc@object-craft.com.au (Dave Cole) Date: 20 Apr 2001 22:23:33 +1000 Subject: [DB-SIG] Sybase module 0.13 (Chad Lester release) released Message-ID: What is it: The Sybase module provides a Python interface to the Sybase relational database system. The Sybase package supports almost all of the Python Database API, version 2.0 with extensions. The module works with Python versions 1.5.2 and later and Sybase versions 11.0.3 and later. It is based on the Sybase Client Library (ct_* API), and the Bulk-Library Client (blk_* API) interfaces. 
Changes:
- A refcount leak in the exception code was found and fixed by Chad Lester

Where can you get it: http://www.object-craft.com.au/projects/sybase/ - Dave -- http://www.object-craft.com.au

From pythonzope2000@yahoo.com Tue Apr 24 06:00:04 2001 From: pythonzope2000@yahoo.com (Manoj Kothari) Date: Mon, 23 Apr 2001 22:00:04 -0700 (PDT) Subject: [DB-SIG] problem in importing database module Message-ID: <20010424050004.48943.qmail@web14104.mail.yahoo.com>

Hello everybody, I tried to connect to an Access database through a python program using the odbc.pyd and idb.dll modules in the win32 package. If the program is run it gives the error: SystemError: _PyImport_FixupExtension: module Products.TutorialPoll.odbc not loaded. I am importing odbc.pyd and idb.dll in my program. I have also set my pythonpath to the folder containing the above modules. I would be very thankful if anybody could help me out with this problem. Thanking You, Manoj B Kothari __________________________________________________ Do You Yahoo!? Yahoo! Auctions - buy the things you want at great prices http://auctions.yahoo.com/

From Sharriff.Aina@med-iq.de Wed Apr 25 15:35:36 2001 From: Sharriff.Aina@med-iq.de (Sharriff.Aina@med-iq.de) Date: Wed, 25 Apr 2001 15:35:36 +0100 Subject: [DB-SIG] RE: problem in importing database module (Manoj Kothari) Message-ID:

I had the same problem as you for a couple of days, I think installing "Activestate Python" (www.activestate.com) would solve your problem. This distribution has all the Win32 extensions (odbc, etc.) and it installs them in the right paths.
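Both import problems in this thread come down to whether the module's file is actually visible from sys.path. A small, version-neutral diagnostic sketch (the helper name is my own, not part of any DB module) that reports which search-path directories contain a given file or package directory:

```python
# Check which sys.path directories contain a given file or package
# directory -- a quick diagnostic for "ImportError: No module named ..."
import os
import sys

def find_on_path(relpath):
    """Return the sys.path directories that contain relpath."""
    hits = []
    for d in sys.path:
        if d and os.path.exists(os.path.join(d, relpath)):
            hits.append(d)
    return hits

# e.g. the package directory that "import mx.DateTime" needs:
print(find_on_path(os.path.join("mx", "DateTime")))
# and a stdlib file that is normally always found:
print(find_on_path("os.py"))
```

If the first call prints an empty list, the interpreter simply cannot see the package, and no amount of rebuilding will fix it until the directory is on the path.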
Cheers Sharriff Aina

Sent by: db-sig-request@python.org To: db-sig@python.org Cc: db-sig-admin@python.org Subject: DB-SIG digest, Vol 1 #403 - 1 msg 24.04.01 17:01 Please reply to db-sig

Today's Topics: 1. problem in importing database module (Manoj Kothari)
From chris@cogdon.org Fri Apr 27 01:21:11 2001 From: chris@cogdon.org (Chris Cogdon) Date: Thu, 26 Apr 2001 17:21:11 -0700 (PDT) Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0 Message-ID:

[This is all IMHO, of course :)]

The Problem ----------- One of the main purposes of the DB-API was to set a 'standard' for API writers to write to, so that users of the API could switch from one database to another without rewriting large chunks of their code. However, DB-API 2.0 currently encourages inconsistent behaviour through the 'vague' requirements in the 'paramstyle' module-level variable. I.e., we do not place any requirements on which paramstyle should be coded for, only that the paramstyle chosen needs to be specified in that variable. This means that if I'm currently using database BAR, whose Python API uses paramstyle='format', and I switch to database BAZ whose Python API uses paramstyle='pyformat', then I have to rewrite huge chunks of my code. Alternatively, I could simply write a 'wrapper' around the existing api, which converts paramstyles from what my original code used, to what the new API expects. Yes, that'll work just fine, but... isn't that what the API is supposed to be doing for me?

The Proposal ------------ The API be changed (2.1?) to include the following. There is a module-level variable, valid_paramstyles, that contains the list of all paramstyles the API code can support. There is a new method on the connection object, set_paramstyle, that indicates which paramstyle the calling code is going to use. If the calling code sets a paramstyle that is not supported, an exception is raised. The default paramstyle is that specified in the existing 'paramstyle' variable. It is not modified by calls to object.set_paramstyle. There is a new module function, also called set_paramstyle, that sets the 'default' paramstyle that is used when connection objects are created.
In this case, the module.paramstyle IS modified. (Alternately, we could let code change the variable module.paramstyle directly. Connection objects would throw an exception if the default is not supported.)

The Benefit ----------- I can write my database application using a single paramstyle (my favourite is 'format'). All I then need to do is call set_paramstyle to tell the DB-API to use the paramstyle I want. DB-API writers would by default support the paramstyle easiest for the database C interface (assuming easy = most efficient), and then call a limited number of conversion functions for supporting other methods. I perceive the DB-API module change to be made in a /single/ location, whereas changing paramstyles in a database application would need to be done all over the place, or a wrapper written.

Backward Compatibility ---------------------- The change above I perceive as being totally backward compatible. Existing code that changes its own calling habits based on 'paramstyle' will still work as expected. Comments, anyone? ("`-/")_.-'"``-._ Chris Cogdon . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' ((,.-' ((,/ fL

From djc@object-craft.com.au Fri Apr 27 01:53:31 2001 From: djc@object-craft.com.au (Dave Cole) Date: 27 Apr 2001 10:53:31 +1000 Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0 In-Reply-To: Chris Cogdon's message of "Thu, 26 Apr 2001 17:21:11 -0700 (PDT)" References: Message-ID: >>>>> "Chris" == Chris Cogdon writes: Chris> There is a module-level variable, valid_paramstyles, that Chris> contains the list of all paramstyles the API code can support. Chris> There is a new method on the connection object, set_paramstyle, Chris> that indicates which paramstyle the calling code is going to Chris> use. If the calling code sets a paramstyle that is not Chris> supported, an exception is raised. The default paramstyle is Chris> that specified in the existing 'paramstyle' variable.
It is not Chris> modified by calls to object.set_paramstyle. Chris> There is a new module function, also called set_paramstyle, Chris> that sets the 'default' paramstyle that is used when connection Chris> objects are created. In this case, the module.paramstyle IS Chris> modified. (Alternately, we could let code change the variable Chris> module.paramstyle directly. Connection objects would throw an Chris> exception if the default is not supported.) Sybase only supports the ? place holder. I am not sure how feasible this is, but maybe a (simple) SQL parser could be used to support any placeholder style which would then be transformed into the native style to interface with the database API. - Dave -- http://www.object-craft.com.au From chris@cogdon.org Fri Apr 27 01:48:59 2001 From: chris@cogdon.org (Chris Cogdon) Date: Thu, 26 Apr 2001 17:48:59 -0700 (PDT) Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0 In-Reply-To: Message-ID: On 27 Apr 2001, Dave Cole wrote: > Sybase only supports the ? place holder. > > I am not sure how feasible this is, but maybe a (simple) SQL parser > could be used to support any placeholder style which would then be > transformed into the native style to interface with the database API. That would work for me too. Frankly, I dont see why the API should not support /all/ styles. The API writer codes the style the backend database most easily supports, then helper functions (supplied by the python-core libraries, for example) turn all the other styles into that style for you. The /user/ of the DB-API should not have to care what the 'native' style is, since there is a deterministic conversion between all styles (I think). However it is done, I should not have to rewrite code just because I slot in a different API. ("`-/")_.-'"``-._ Chris Cogdon . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' ((,.-' ((,/ fL From mal@lemburg.com Fri Apr 27 08:34:18 2001 From: mal@lemburg.com (M.-A. 
Lemburg) Date: Fri, 27 Apr 2001 09:34:18 +0200 Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0 References: Message-ID: <3AE920FA.4E849807@lemburg.com> > [Making the DB API interface handle multiple paramstyles] > > > Sybase only supports the ? place holder. > > > > I am not sure how feasible this is, but maybe a (simple) SQL parser > > could be used to support any placeholder style which would then be > > transformed into the native style to interface with the database API. > > That would work for me too. Frankly, I dont see why the API should not > support /all/ styles. The API writer codes the style the backend database > most easily supports, then helper functions (supplied by the python-core > libraries, for example) turn all the other styles into that style for you. > > The /user/ of the DB-API should not have to care what the 'native' style > is, since there is a deterministic conversion between all styles (I > think). > > However it is done, I should not have to rewrite code just because I slot > in a different API. That's easy for you to say ;-) However, your small benefit will cost a lot of module writers a lot of work, and sometimes it may not even be possible to get this right. To support all the different styles, the module writer would have to parse the SQL statement to figure out how to map the paramstyle used in there to the native one. Writing an SQL parser is not necessarily possible though, since there are so many different SQL variations in use out there. I'm not even talking about quoting rules here ;-) If you really care about being able to change database interfaces, then you'll have to write an abstraction for your interface anyway, so why put the burden on the DB API writer here? (This only makes it harder for interface writers to come up with code which can be considered DB API compatible.)
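The quoting-rules worry is easy to demonstrate: a conversion routine that ignores SQL string literals corrupts queries. A deliberately naive sketch (the function name is illustrative, not from any real driver):

```python
# Deliberately naive qmark -> format conversion: it ignores SQL
# quoting rules, so a literal '?' inside a string constant is
# mangled right along with the real placeholder.
def naive_qmark_to_format(sql):
    return sql.replace("?", "%s")

sql = "SELECT * FROM faq WHERE question = ? AND title LIKE 'Why?'"
print(naive_qmark_to_format(sql))
# SELECT * FROM faq WHERE question = %s AND title LIKE 'Why%s'
# -- the first substitution is right; the quoted 'Why?' is corrupted
```

Any correct converter therefore has to understand at least the database's string-literal syntax, which is exactly the parsing burden being objected to here.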
-- Marc-Andre Lemburg ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Pages: http://www.lemburg.com/python/ From harri.pasanen@trema.com Fri Apr 27 08:44:23 2001 From: harri.pasanen@trema.com (Harri Pasanen) Date: Fri, 27 Apr 2001 09:44:23 +0200 Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0 References: <3AE920FA.4E849807@lemburg.com> Message-ID: <3AE92357.DFCB3270@trema.com> "M.-A. Lemburg" wrote: > > > [Making the DB API interface handle multiple paramstyles] > > > > > Sybase only supports the ? place holder. > > > > > > I am not sure how feasible this is, but maybe a (simple) SQL parser > > > could be used to support any placeholder style which would then be > > > transformed into the native style to interface with the database API. > > > > That would work for me too. Frankly, I dont see why the API should not > > support /all/ styles. The API writer codes the style the backend database > > most easily supports, then helper functions (supplied by the python-core > > libraries, for example) turn all the other styles into that style for you. > > > > The /user/ of the DB-API should not have to care what the 'native' style > > is, since there is a deterministic conversion between all styles (I > > think). > > > > However it is done, I should not have to rewrite code just because I slot > > in a different API. > > That's easy for you to say ;-) However, your small benefit will cost a > lot of module writers a lot of work and sometimes it may even > not be possible getting this right. > > To support all the different styles, the module writer would have > to parse the SQL statement to figure out how to map the paramstyle > used in there to the native one. Writing an SQL parser is not > necessarily possible though, since there are so many different > SQL variations in use out there. 
I'm not even talking about quoting > rules here ;-) > > If you really care about being able to change database interfaces, > then you'll have to write an abstraction for your interface anyway, > so why put the burden on the DB API writer here ? (This only makes > it harder for interface writers to come up with code which can > be considered DB API compatible.) > How about specifying a single 'python meta' -style. So the module writers would support the database native format, and the Python DB-Api format, from which a conversion to native format would exist. I spent maybe 2 minutes thinking about it, so maybe this is not feasible, just a thought... -Harri From chris@cogdon.org Fri Apr 27 08:54:40 2001 From: chris@cogdon.org (Chris Cogdon) Date: Fri, 27 Apr 2001 00:54:40 -0700 (PDT) Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0 In-Reply-To: <3AE920FA.4E849807@lemburg.com> Message-ID: On Fri, 27 Apr 2001, M.-A. Lemburg wrote: > That's easy for you to say ;-) However, your small benefit will cost a > lot of module writers a lot of work and sometimes it may even > not be possible getting this right. > > To support all the different styles, the module writer would have > to parse the SQL statement to figure out how to map the paramstyle > used in there to the native one. Writing an SQL parser is not > necessarily possible though, since there are so many different > SQL variations in use out there. I'm not even talking about quoting > rules here ;-) > > If you really care about being able to change database interfaces, > then you'll have to write an abstraction for your interface anyway, > so why put the burden on the DB API writer here ? (This only makes > it harder for interface writers to come up with code which can > be considered DB API compatible.) Okay... that could be a fair statement... lets analyse it: The difference between format and pyformat is really python specific. I dont know of any C DB-API's that accept python style formatting... 
in at least that case, and many other DBs where 'format' is used, python is used to turn the query and all the arguments into a single string, escaping each of the argument types appropriately. Just looking at format and pyformat, I'd hate to have to write a wrapper function just because some Python API writer chose pyformat rather than format.... I /like/ the ability to specify my arguments positionally, rather than having to wrap them up in a dictionary. A wrapper in this case is very difficult, too, since I'd have to parse the query for the %s's, then turn them into %(somevarN). Not trivial. It would be a /lot/ easier for the API writer to have two pieces of code that turn a format, and a pyformat, request into the appropriate chunks to pass to the C API. Of course, this is making a lot of assumptions about what the C API looks like. In both Mysql, and Postgresql's case, the API takes a single string. So... what's stopping Python API's for those two Databases from supporting /all/ the paramstyles?, since they all have to be interpreted one way or the other. I can certainly see wanting to place a restriction on the paramstyle supported if the C API actually supported one or the other of the given types. But for the above two databases, there's no excuse. The python DB team could very well supply a library to assist API writers in this regard. ("`-/")_.-'"``-._ Chris Cogdon . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' ((,.-' ((,/ fL From chris@onca.catsden.net Fri Apr 27 09:35:32 2001 From: chris@onca.catsden.net (chris@onca.catsden.net) Date: Fri, 27 Apr 2001 01:35:32 -0700 (PDT) Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0 In-Reply-To: Message-ID: On Fri, 27 Apr 2001, Chris Cogdon wrote: > I can certainly see wanting to place a restriction on the paramstyle > supported if the C API actually supported one or the other of the given > types. But for the above two databases, there's no excuse. 
The python DB > team could very well supply a library to assist API writers in this > regard. Just to prove my point a bit, I've written an (lightly tested) helper utility that will convert a query to a single string in any one of 5 different paramstyles http://onca.catsden.net/~chris/dbhelper.py For DB backends that just take simple strings, why cant the API use something like this to support /all/ the possible paramstyles? This way, I can switch between mysql and postgresql (which, presumably could be easily modified to take all 5 paramstyles), and sybase (which only takes qmark??). by coding for qmark. I dont have to change a thing to switch between these three :) ("`-/")_.-'"``-._ Ch'marr, a.k.a. . . `; -._ )-;-,_`) Chris Cogdon (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' FC1.3: FFH3cmA+>++C++D++H++M++P++R++T+++WZ++Sm++ ((,.-' ((,/ fL RLCT acl+++d++e+f+++h++i++++jp-sm++ From fog@mixadlive.com Fri Apr 27 09:59:08 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: Fri, 27 Apr 2001 10:59:08 +0200 Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0 In-Reply-To: ; from chris@cogdon.org on Fri, Apr 27, 2001 at 12:54:40AM -0700 References: <3AE920FA.4E849807@lemburg.com> Message-ID: <20010427105908.E759@mixadlive.com> Scavenging the mail folder uncovered Chris Cogdon's letter: > The difference between format and pyformat is really python specific. I > dont know of any C DB-API's that accept python style formatting... in at > least that case, and many other DBs where 'format' is used, python is used > to turn the query and all the arguments into a single string, escaping > each of the argument types appropriately. > > Just looking at format and pyformat, I'd hate to have to write a wrapper > function just because some Python API writer chose pyformat rather than > format.... I /like/ the ability to specify my arguments positionally, > rather than having to wrap them up in a dictionary. i don't understand. 
the following code (taken from psycopg) implements *both* format and pyformat:

    query = PyString_Format(str, dict_or_tuple)

that works both with %(name)s style and a dict or %s style and a tuple. > A wrapper in this case is very difficult, too, since I'd have to parse the > query for the %s's, then turn them into %(somevarN). Not trivial. It would > be a /lot/ easier for the API writer to have two pieces of code that turn > a format, and a pyformat, request into the appropriate chunks to pass to > the C API. sometimes it is much easier to format a full query string and simply pass it to the C library, as we did. > Of course, this is making a lot of assumptions about what the C API looks > like. In both Mysql, and Postgresql's case, the API takes a single string. > So... what's stopping Python API's for those two Databases from supporting > /all/ the paramstyles?, since they all have to be interpreted one way or > the other. lack of time of the authors? we implemented the two param styles most used, others will follow when somebody will request them or when i'll have some free time... :) > I can certainly see wanting to place a restriction on the paramstyle > supported if the C API actually supported one or the other of the given > types. But for the above two databases, there's no excuse. The python DB > team could very well supply a library to assist API writers in this > regard. a well-written piece of C code that takes the args in python form (the ones you give to PyArg_ParseTuple) and parses any of the four formats, spitting out the query string, will be much welcome...
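A Python sketch of the kind of conversion routine being asked for here (my own sketch, not psycopg's code, and C would be the eventual target): it walks the statement once, tracking string literals so that a quoted '?' survives, and handles only the qmark -> format direction.

```python
# Quote-aware qmark -> format placeholder conversion (sketch).
# Tracks single- and double-quoted literals; doubled quotes ('')
# inside a literal are NOT handled by this minimal version.
def qmark_to_format(sql):
    out = []
    in_quote = None                 # current quote character, or None
    for c in sql:
        if in_quote:
            out.append(c)
            if c == in_quote:       # closing quote of the literal
                in_quote = None
        elif c in ("'", '"'):       # opening quote of a literal
            in_quote = c
            out.append(c)
        elif c == "?":              # placeholder outside any literal
            out.append("%s")
        else:
            out.append(c)
    return "".join(out)

print(qmark_to_format("SELECT * FROM t WHERE a = ? AND b LIKE 'x?'"))
# SELECT * FROM t WHERE a = %s AND b LIKE 'x?'
```

The resulting string can then go through a single `%`-formatting step with a tuple of escaped values, i.e. the one-code-path behaviour described for psycopg above.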
ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org Generated by Signify v1.07 [http://www.debian.org/] -- brought to you by One Line Spam From chris@cogdon.org Fri Apr 27 09:47:16 2001 From: chris@cogdon.org (Chris Cogdon) Date: Fri, 27 Apr 2001 01:47:16 -0700 (PDT) Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0 In-Reply-To: <20010427105908.E759@mixadlive.com> Message-ID: On Fri, 27 Apr 2001, Federico Di Gregorio wrote: > i don't understand. the following code (taken from psycopg) > implements *both* format and pyformat: > > query = PyString_Format(str, dict_or_tuple) > > that works both with %(name)s style and a dict or %s style and a tuple. That's excellent; unfortunately the DB API as it stands has no way of advertising the fact that it could support both paramstyles. > sometimes it is much easier to format a full query string and simply > pass it to the C library, as we did. Agreed. I've only seen the wrappers for MySQL and PostgreSQL, and that's what they seem to do. If the C API can support this, then you can easily turn /any/ paramstyle into this string, and I can't see much SQL parsing being required (see my other post for the reference to a quick and dirty function I wrote to do this) > lack of time of the authors? we implemented the two param styles most > used, others will follow when somebody will request them or when i'll > have some free time... :) The issue I wanted to raise first was the lack of support for even /reporting/ that it can be done in the API spec, which at the moment assumes that the API writer will only ever write one calling method. > a well-written piece of C code that takes the args in python form > (the ones you give to PyArg_ParseTuple) and parses any of the four > formats, spitting out the query string, will be much welcome...
If you hadn't said 'well written', and 'C', I'd say 'Done' :) ("`-/")_.-'"``-._ Chris Cogdon . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' ((,.-' ((,/ fL From fog@mixadlive.com Fri Apr 27 10:07:00 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: Fri, 27 Apr 2001 11:07:00 +0200 Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0 In-Reply-To: ; from chris@onca.catsden.net on Fri, Apr 27, 2001 at 01:35:32AM -0700 References: Message-ID: <20010427110700.F759@mixadlive.com> Scavenging the mail folder uncovered chris@onca.catsden.net's letter: > Just to prove my point a bit, I've written an (lightly tested) helper > utility that will convert a query to a single string in any one of 5 > different paramstyles > > http://onca.catsden.net/~chris/dbhelper.py > > For DB backends that just take simple strings, why cant the API use > something like this to support /all/ the possible paramstyles? This way, I > can switch between mysql and postgresql (which, presumably could be easily > modified to take all 5 paramstyles), and sybase (which only takes > qmark??). by coding for qmark. I dont have to change a thing to switch > between these three :) code it in C and i will be happy to include it in psycopg. :) ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org Those who do not study Lisp are doomed to reimplement it. Poorly. -- from Karl M. 
Hegbloom .signature From fog@mixadlive.com Fri Apr 27 10:16:11 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: Fri, 27 Apr 2001 11:16:11 +0200 Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0 In-Reply-To: ; from chris@cogdon.org on Fri, Apr 27, 2001 at 01:47:16AM -0700 References: <20010427105908.E759@mixadlive.com> Message-ID: <20010427111611.G759@mixadlive.com> Scavenging the mail folder uncovered Chris Cogdon's letter: > On Fri, 27 Apr 2001, Federico Di Gregorio wrote: > > > i don't understand. the following code (taken from psycopg) > > implements *both* format and pyformat: > > > > query = PyString_Format(str, dict_or_tuple) > > > > that wotks both with %(name)s stype and a dict or %s stype and a tuple. > > Thats excellent, unfortunately the DB API as it stands has no way of > advertising the fact that it could support both paramstyles. agreed. [snip] > The issue I wanted to raise first was the lack of support for even > /reporting/ that it can be done in the API spec, which at the moment > assumes that the API writer will only ever write one calling method. agreed again. this will go in the proposed extensions, when we'll have a proposed extensions document. :) ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org God is real. Unless declared integer. -- Anonymous FORTRAN programmer From fog@mixadlive.com Fri Apr 27 10:37:48 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: Fri, 27 Apr 2001 11:37:48 +0200 Subject: [DB-SIG] new pep about proposed extensions Message-ID: <20010427113748.I759@mixadlive.com> i'd like to start writing a new pep, titled "Python Database API Extensions". i would like to put in it a summary of all the discussion and proposed extensions of the last two months. 
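The reporting extension Chris asks for — and which Federico agrees belongs in a proposed-extensions document — was never standardized; as a purely hypothetical sketch, a module could expose an optional `paramstyles` sequence alongside the required `paramstyle` string (both attribute name and semantics here are assumptions, not part of DB-API 2.0):

```python
# Hypothetical capability-reporting extension: in addition to the
# mandatory 'paramstyle' module global, advertise every style the
# module accepts in an optional 'paramstyles' tuple.
paramstyle = 'pyformat'               # required by the spec: the default style
paramstyles = ('format', 'pyformat')  # proposed: all supported styles

def pick_style(preferred):
    """Return the caller's preferred style if supported, else the default."""
    return preferred if preferred in paramstyles else paramstyle

assert pick_style('format') == 'format'
assert pick_style('qmark') == 'pyformat'
```

A portability layer could then query `paramstyles` instead of assuming a single calling convention per module.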
imho, the best way to proceed will be to have two classes of extensions: proposed extensions - discussion going on, everybody implements them as he likes because there is no consensus suggested extensions - discussion ended and we reached some kind of consensus. developers of modules are not required to implement the extension, but if they choose to, it is strongly suggested they follow the given api. extensions are just that and are in no way part of the "real" db-api. what do you think? can i go on and write the pep? ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org Don't dream it. Be it. -- Dr. Frank'n'further From fog@mixadlive.com Fri Apr 27 10:45:07 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: Fri, 27 Apr 2001 11:45:07 +0200 Subject: [DB-SIG] patch to the dbapi document in pep format Message-ID: <20010427114507.A3115@mixadlive.com> --CE+1k2dSO48ffgeK Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Hi, I finally had time to check out from the sourceforge cvs and move my patch from html to pep. the patch is just a bunch of little clarifications summarized from the discussion that began about 2 months ago. i hope there will be no problems in including it. attached are both full-text and diff. ciao, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org 99.99999999999999999999% still isn't 100% but sometimes suffice.
-- Me --CE+1k2dSO48ffgeK Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="pep-0249.txt" PEP: 249 Title: Python Database API Specification v2.0 Version: $Revision: 1.3 $ Author: db-sig@python.org (Python Database SIG) Editor: mal@lemburg.com (Marc-Andre Lemburg) Status: Draft Type: Informational Replaces: 248 Introduction This API has been defined to encourage similarity between the Python modules that are used to access databases. By doing this, we hope to achieve a consistency leading to more easily understood modules, code that is generally more portable across databases, and a broader reach of database connectivity from Python. The interface specification consists of several sections: * Module Interface * Connection Objects * Cursor Objects * DBI Helper Objects * Type Objects and Constructors * Implementation Hints * Major Changes from 1.0 to 2.0 Comments and questions about this specification may be directed to the SIG for Database Interfacing with Python (db-sig@python.org). For more information on database interfacing with Python and available packages see the Database Topic Guide at http://www.python.org/topics/database/. This document describes the Python Database API Specification 2.0. The previous version, 1.0, is still available as a reference in PEP 248. Package writers are encouraged to use this version of the specification as the basis for new interfaces. Module Interface Access to the database is made available through connection objects. The module must provide the following constructor for these: connect(parameters...) Constructor for creating a connection to the database. Returns a Connection Object. It takes a number of parameters which are database dependent. [1] These module globals must be defined: apilevel String constant stating the supported DB API level. Currently only the strings '1.0' and '2.0' are allowed. If not given, a DB-API 1.0 level interface should be assumed.
threadsafety Integer constant stating the level of thread safety the interface supports. Possible values are: 0 Threads may not share the module. 1 Threads may share the module, but not connections. 2 Threads may share the module and connections. 3 Threads may share the module, connections and cursors. Sharing in the above context means that two threads may use a resource without wrapping it using a mutex semaphore to implement resource locking. Note that you cannot always make external resources thread safe by managing access using a mutex: the resource may rely on global variables or other external sources that are beyond your control. paramstyle String constant stating the type of parameter marker formatting expected by the interface. Possible values are [2]: 'qmark' Question mark style, e.g. '...WHERE name=?' 'numeric' Numeric, positional style, e.g. '...WHERE name=:1' 'named' Named style, e.g. '...WHERE name=:name' 'format' ANSI C printf format codes, e.g. '...WHERE name=%s' 'pyformat' Python extended format codes, e.g. '...WHERE name=%(name)s' The module should make all error information available through these exceptions or subclasses thereof: Warning Exception raised for important warnings like data truncations while inserting, etc. It must be a subclass of the Python StandardError (defined in the module exceptions). Error Exception that is the base class of all other error exceptions. You can use this to catch all errors with one single 'except' statement. Warnings are not considered errors and thus should not use this class as base. It must be a subclass of the Python StandardError (defined in the module exceptions). InterfaceError Exception raised for errors that are related to the database interface rather than the database itself. It must be a subclass of Error. DatabaseError Exception raised for errors that are related to the database. It must be a subclass of Error. 
DataError Exception raised for errors that are due to problems with the processed data like division by zero, numeric value out of range, etc. It must be a subclass of DatabaseError. OperationalError Exception raised for errors that are related to the database's operation and not necessarily under the control of the programmer, e.g. an unexpected disconnect occurs, the data source name is not found, a transaction could not be processed, a memory allocation error occurred during processing, etc. It must be a subclass of DatabaseError. IntegrityError Exception raised when the relational integrity of the database is affected, e.g. a foreign key check fails. It must be a subclass of DatabaseError. InternalError Exception raised when the database encounters an internal error, e.g. the cursor is not valid anymore, the transaction is out of sync, etc. It must be a subclass of DatabaseError. ProgrammingError Exception raised for programming errors, e.g. table not found or already exists, syntax error in the SQL statement, wrong number of parameters specified, etc. It must be a subclass of DatabaseError. NotSupportedError Exception raised in case a method or database API was used which is not supported by the database, e.g. requesting a .rollback() on a connection that does not support transaction or has transactions turned off. It must be a subclass of DatabaseError. This is the exception inheritance layout: StandardError |__Warning |__Error |__InterfaceError |__DatabaseError |__DataError |__OperationalError |__IntegrityError |__InternalError |__ProgrammingError |__NotSupportedError Note: The values of these exceptions are not defined. They should give the user a fairly good idea of what went wrong, though. Connection Objects Connection Objects should respond to the following methods: close() Close the connection now (rather than whenever __del__ is called). 
The connection will be unusable from this point forward; an Error (or subclass) exception will be raised if any operation is attempted with the connection. The same applies to all cursor objects trying to use the connection. Note that closing a connection without committing the changes first will cause an implicit rollback to be performed. commit() Commit any pending transaction to the database. Note that if the database supports an auto-commit feature, this must be initially off. An interface method may be provided to turn it back on. Database modules that do not support transactions should implement this method with void functionality. rollback() This method is optional since not all databases provide transaction support. [3] In case a database does provide transactions, this method causes the database to roll back to the start of any pending transaction. Closing a connection without committing the changes first will cause an implicit rollback to be performed. cursor() Return a new Cursor Object using the connection. If the database does not provide a direct cursor concept, the module will have to emulate cursors using other means to the extent needed by this specification. [4] Cursor Objects These objects represent a database cursor, which is used to manage the context of a fetch operation. Cursors created from the same connection are not isolated, i.e., any changes done to the database by a cursor are immediately visible to the other cursors. Cursors created from different connections may or may not be isolated, depending on how the transaction support is implemented (see also the connection's rollback() and commit() methods.) Cursor Objects should respond to the following methods and attributes: description This read-only attribute is a sequence of 7-item sequences. Each of these sequences contains information describing one result column: (name, type_code, display_size, internal_size, precision, scale, null_ok).
The first two items (name and type_code) are mandatory, the other five are optional and must be set to None if meaningful values are not provided. This attribute will be None for operations that do not return rows or if the cursor has not had an operation invoked via the executeXXX() method yet. The type_code can be interpreted by comparing it to the Type Objects specified in the section below. rowcount This read-only attribute specifies the number of rows that the last executeXXX() produced (for DQL statements like 'select') or affected (for DML statements like 'update' or 'insert'). The attribute is -1 in case no executeXXX() has been performed on the cursor or the rowcount of the last operation is not determinable by the interface. [7] callproc(procname[,parameters]) (This method is optional since not all databases provide stored procedures. [3]) Call a stored database procedure with the given name. The sequence of parameters must contain one entry for each argument that the procedure expects. The result of the call is returned as a modified copy of the input sequence. Input parameters are left untouched; output and input/output parameters are replaced with possibly new values. The procedure may also provide a result set as output. This must then be made available through the standard fetchXXX() methods. close() Close the cursor now (rather than whenever __del__ is called). The cursor will be unusable from this point forward; an Error (or subclass) exception will be raised if any operation is attempted with the cursor. execute(operation[,parameters]) Prepare and execute a database operation (query or command). Parameters may be provided as sequence or mapping and will be bound to variables in the operation. Variables are specified in a database-specific notation (see the module's paramstyle attribute for details). [5] A reference to the operation will be retained by the cursor.
If the same operation object is passed in again, then the cursor can optimize its behavior. This is most effective for algorithms where the same operation is used, but different parameters are bound to it (many times). For maximum efficiency when reusing an operation, it is best to use the setinputsizes() method to specify the parameter types and sizes ahead of time. It is legal for a parameter to not match the predefined information; the implementation should compensate, possibly with a loss of efficiency. The parameters may also be specified as a list of tuples to e.g. insert multiple rows in a single operation, but this kind of usage is deprecated: executemany() should be used instead. Return values are not defined. executemany(operation,seq_of_parameters) Prepare a database operation (query or command) and then execute it against all parameter sequences or mappings found in the sequence seq_of_parameters. Modules are free to implement this method using multiple calls to the execute() method or by using array operations to have the database process the sequence as a whole in one call. The same comments as for execute() also apply accordingly to this method. Return values are not defined. fetchone() Fetch the next row of a query result set, returning a single sequence, or None when no more data is available. [6] An Error (or subclass) exception is raised if the previous call to executeXXX() did not produce any result set or no call was issued yet. fetchmany([size=cursor.arraysize]) Fetch the next set of rows of a query result, returning a sequence of sequences (e.g. a list of tuples). An empty sequence is returned when no more rows are available. The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor's arraysize determines the number of rows to be fetched. The method should try to fetch as many rows as indicated by the size parameter.
If this is not possible due to the specified number of rows not being available, fewer rows may be returned. An Error (or subclass) exception is raised if the previous call to executeXXX() did not produce any result set or no call was issued yet. Note there are performance considerations involved with the size parameter. For optimal performance, it is usually best to use the arraysize attribute. If the size parameter is used, then it is best for it to retain the same value from one fetchmany() call to the next. fetchall() Fetch all (remaining) rows of a query result, returning them as a sequence of sequences (e.g. a list of tuples). Note that the cursor's arraysize attribute can affect the performance of this operation. An Error (or subclass) exception is raised if the previous call to executeXXX() did not produce any result set or no call was issued yet. nextset() (This method is optional since not all databases support multiple result sets. [3]) This method will make the cursor skip to the next available set, discarding any remaining rows from the current set. If there are no more sets, the method returns None. Otherwise, it returns a true value and subsequent calls to the fetch methods will return rows from the next result set. An Error (or subclass) exception is raised if the previous call to executeXXX() did not produce any result set or no call was issued yet. arraysize This read/write attribute specifies the number of rows to fetch at a time with fetchmany(). It defaults to 1 meaning to fetch a single row at a time. Implementations must observe this value with respect to the fetchmany() method, but are free to interact with the database a single row at a time. It may also be used in the implementation of executemany(). setinputsizes(sizes) This can be used before a call to executeXXX() to predefine memory areas for the operation's parameters. sizes is specified as a sequence -- one item for each input parameter. 
The item should be a Type Object that corresponds to the input that will be used, or it should be an integer specifying the maximum length of a string parameter. If the item is None, then no predefined memory area will be reserved for that column (this is useful to avoid predefined areas for large inputs). This method would be used before the executeXXX() method is invoked. Implementations are free to have this method do nothing and users are free to not use it. setoutputsize(size[,column]) Set a column buffer size for fetches of large columns (e.g. LONGs, BLOBs, etc.). The column is specified as an index into the result sequence. Not specifying the column will set the default size for all large columns in the cursor. This method would be used before the executeXXX() method is invoked. Implementations are free to have this method do nothing and users are free to not use it. Type Objects and Constructors Many databases need to have the input in a particular format for binding to an operation's input parameters. For example, if an input is destined for a DATE column, then it must be bound to the database in a particular string format. Similar problems exist for "Row ID" columns or large binary items (e.g. blobs or RAW columns). This presents problems for Python since the parameters to the executeXXX() method are untyped. When the database module sees a Python string object, it doesn't know if it should be bound as a simple CHAR column, as a raw BINARY item, or as a DATE. To overcome this problem, a module must provide the constructors defined below to create objects that can hold special values. When passed to the cursor methods, the module can then detect the proper type of the input parameter and bind it accordingly. A Cursor Object's description attribute returns information about each of the result columns of a query. The type_code must compare equal to one of Type Objects defined below. Type Objects may be equal to more than one type code (e.g. 
DATETIME could be equal to the type codes for date, time and timestamp columns; see the Implementation Hints below for details). The module exports the following constructors and singletons: Date(year,month,day) This function constructs an object holding a date value. Time(hour,minute,second) This function constructs an object holding a time value. Timestamp(year,month,day,hour,minute,second) This function constructs an object holding a time stamp value. DateFromTicks(ticks) This function constructs an object holding a date value from the given ticks value (number of seconds since the epoch; see the documentation of the standard Python time module for details). TimeFromTicks(ticks) This function constructs an object holding a time value from the given ticks value (number of seconds since the epoch; see the documentation of the standard Python time module for details). TimestampFromTicks(ticks) This function constructs an object holding a time stamp value from the given ticks value (number of seconds since the epoch; see the documentation of the standard Python time module for details). Binary(string) This function constructs an object capable of holding a binary (long) string value. STRING This type object is used to describe columns in a database that are string-based (e.g. CHAR). BINARY This type object is used to describe (long) binary columns in a database (e.g. LONG, RAW, BLOBs). NUMBER This type object is used to describe numeric columns in a database. DATETIME This type object is used to describe date/time columns in a database. ROWID This type object is used to describe the "Row ID" column in a database. SQL NULL values are represented by the Python None singleton on input and output. Note: Usage of Unix ticks for database interfacing can cause troubles because of the limited date range they cover. Implementation Hints * The preferred object types for the date/time objects are those defined in the mxDateTime package. 
It provides all necessary constructors and methods both at Python and C level. * The preferred object types for Binary objects are the buffer types available in standard Python starting with version 1.5.2. Please see the Python documentation for details. For information about the C interface have a look at Include/bufferobject.h and Objects/bufferobject.c in the Python source distribution. * Here is a sample implementation of the Unix ticks based constructors for date/time delegating work to the generic constructors: import time def DateFromTicks(ticks): return apply(Date,time.localtime(ticks)[:3]) def TimeFromTicks(ticks): return apply(Time,time.localtime(ticks)[3:6]) def TimestampFromTicks(ticks): return apply(Timestamp,time.localtime(ticks)[:6]) * This Python class allows implementing the above type objects even though the description type code field yields multiple values for one type object: class DBAPITypeObject: def __init__(self,*values): self.values = values def __cmp__(self,other): if other in self.values: return 0 if other < self.values: return 1 else: return -1 The resulting type object compares equal to all values passed to the constructor. * Here is a snippet of Python code that implements the exception hierarchy defined above: import exceptions class Error(exceptions.StandardError): pass class Warning(exceptions.StandardError): pass class InterfaceError(Error): pass class DatabaseError(Error): pass class InternalError(DatabaseError): pass class OperationalError(DatabaseError): pass class ProgrammingError(DatabaseError): pass class IntegrityError(DatabaseError): pass class DataError(DatabaseError): pass class NotSupportedError(DatabaseError): pass In C you can use the PyErr_NewException(fullname, base, NULL) API to create the exception objects. Major Changes from Version 1.0 to Version 2.0 The Python Database API 2.0 introduces a few major changes compared to the 1.0 version.
Because some of these changes will cause existing DB API 1.0 based scripts to break, the major version number was adjusted to reflect this change. These are the most important changes from 1.0 to 2.0: * The need for a separate dbi module was dropped and the functionality merged into the module interface itself. * New constructors and Type Objects were added for date/time values, and the RAW Type Object was renamed to BINARY. The resulting set should cover all basic data types commonly found in modern SQL databases. * New constants (apilevel, threadsafety, paramstyle) and methods (executemany, nextset) were added to provide better database bindings. * The semantics of .callproc() needed to call stored procedures are now clearly defined. * The definition of the .execute() return value changed. Previously, the return value was based on the SQL statement type (which was hard to implement right) -- it is undefined now; use the more flexible .rowcount attribute instead. Modules are free to return the old style return values, but these are no longer mandated by the specification and should be considered database interface dependent. * Class based exceptions were incorporated into the specification. Module implementors are free to extend the exception layout defined in this specification by subclassing the defined exception classes. Open Issues Although the version 2.0 specification clarifies a lot of questions that were left open in the 1.0 version, there are still some remaining issues: * Define a useful return value for .nextset() for the case where a new result set is available. * Create a fixed point numeric type for use as loss-less monetary and decimal interchange format.
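The cursor semantics above (execute, executemany, fetchmany, and the empty-sequence end condition) can be exercised against any conforming module; here is a minimal sketch using the standard library's sqlite3, a qmark-style DB-API 2.0 implementation that postdates this spec text:

```python
import sqlite3

# An in-memory database is enough to demonstrate the cursor protocol.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER, name TEXT)")

# executemany: one prepared operation run against many parameter tuples.
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c"), (4, "d"), (5, "e")])
conn.commit()

# fetchmany returns at most 'size' rows; an empty list signals the end.
cur.execute("SELECT id FROM t ORDER BY id")
ids = []
while True:
    batch = cur.fetchmany(2)
    if not batch:
        break
    ids.extend(row[0] for row in batch)

assert ids == [1, 2, 3, 4, 5]
conn.close()
```

Swapping in any other DB-API 2.0 module only requires changing the connect() call and, as this thread complains, the parameter markers.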
Footnotes [1] As a guideline the connection constructor parameters should be implemented as keyword parameters for more intuitive use and follow this order of parameters: dsn Data source name as string user User name as string (optional) password Password as string (optional) host Hostname (optional) database Database name (optional) E.g. a connect could look like this: connect(dsn='myhost:MYDB',user='guido',password='234$') [2] Module implementors should prefer 'numeric', 'named' or 'pyformat' over the other formats because these offer more clarity and flexibility. [3] If the database does not support the functionality required by the method, the interface should throw an exception in case the method is used. The preferred approach is to not implement the method and thus have Python generate an AttributeError in case the method is requested. This allows the programmer to check for database capabilities using the standard hasattr() function. For some dynamically configured interfaces it may not be appropriate to require dynamically making the method available. These interfaces should then raise a NotSupportedError to indicate the non-ability to perform the roll back when the method is invoked. [4] a database interface may choose to support named cursors by allowing a string argument to the method. This feature is not part of the specification, since it complicates semantics of the .fetchXXX() methods. [5] The module will use the __getitem__ method of the parameters object to map either positions (integers) or names (strings) to parameter values. This allows for both sequences and mappings to be used as input. The term "bound" refers to the process of binding an input value to a database execution buffer. In practical terms, this means that the input value is directly used as a value in the operation. The client should not be required to "escape" the value so that it can be used -- the value should be equal to the actual database value. 
[6] Note that the interface may implement row fetching using arrays and other optimizations. It is not guaranteed that a call to this method will only move the associated cursor forward by one row. [7] The rowcount attribute may be coded in a way that updates its value dynamically. This can be useful for databases that return usable rowcount values only after the first call to a .fetchXXX() method. Acknowledgements Many thanks go to Andrew Kuchling who converted the Python Database API Specification 2.0 from the original HTML format into the PEP format. Copyright This document has been placed in the Public Domain. Local Variables: mode: indented-text indent-tabs-mode: nil End: --CE+1k2dSO48ffgeK Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="pep-0249.txt.diff" --- pep-0249.txt.old Fri Apr 27 11:20:02 2001 +++ pep-0249.txt Fri Apr 27 11:24:49 2001 @@ -1,6 +1,6 @@ PEP: 249 Title: Python Database API Specification v2.0 -Version: $Revision: 1.2 $ +Version: $Revision: 1.3 $ Author: db-sig@python.org (Python Database SIG) Editor: mal@lemburg.com (Marc-Andre Lemburg) Status: Draft @@ -198,7 +198,10 @@ forward; an Error (or subclass) exception will be raised if any operation is attempted with the connection. The same applies to all cursor objects trying to use the - connection. + connection. Note that closing a connection without + committing the changes first will cause an implicit + rollback to be performed. + commit() @@ -232,7 +235,13 @@ Cursor Objects These objects represent a database cursor, which is used to - manage the context of a fetch operation. + manage the context of a fetch operation. Cursors created from + the same connection are not isolated, i.e., any changes + done to the database by a cursor are immediately visible by the + other cursors. 
Cursors created from different connections can + or can not be isolated, depending on how the transaction support + is implemented (see also the connection's rollback() and commit() + methods.) Cursor Objects should respond to the following methods and attributes: @@ -243,7 +252,11 @@ sequences. Each of these sequences contains information describing one result column: (name, type_code, display_size, internal_size, precision, scale, - null_ok). This attribute will be None for operations that + null_ok). The first two items (name and type_code) are + mandatory, the other five are optional and must be set to + None if meaningfull values are not provided. + + This attribute will be None for operations that do not return rows or if the cursor has not had an operation invoked via the executeXXX() method yet. --CE+1k2dSO48ffgeK-- From mal@lemburg.com Fri Apr 27 11:09:22 2001 From: mal@lemburg.com (M.-A. Lemburg) Date: Fri, 27 Apr 2001 12:09:22 +0200 Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0 References: Message-ID: <3AE94552.B621B008@lemburg.com> Chris Cogdon wrote: > > On Fri, 27 Apr 2001, M.-A. Lemburg wrote: > > > That's easy for you to say ;-) However, your small benefit will cost a > > lot of module writers a lot of work and sometimes it may even > > not be possible getting this right. > > > > To support all the different styles, the module writer would have > > to parse the SQL statement to figure out how to map the paramstyle > > used in there to the native one. Writing an SQL parser is not > > necessarily possible though, since there are so many different > > SQL variations in use out there. I'm not even talking about quoting > > rules here ;-) > > > > If you really care about being able to change database interfaces, > > then you'll have to write an abstraction for your interface anyway, > > so why put the burden on the DB API writer here ? 
(This only makes > > it harder for interface writers to come up with code which can > > be considered DB API compatible.) > > Okay... that could be a fair statement... lets analyse it: > > The difference between format and pyformat is really python specific. I > dont know of any C DB-API's that accept python style formatting... in at > least that case, and many other DBs where 'format' is used, python is used > to turn the query and all the arguments into a single string, escaping > each of the argument types appropriately. > > Just looking at format and pyformat, I'd hate to have to write a wrapper > function just because some Python API writer chose pyformat rather than > format.... I /like/ the ability to specify my arguments positionally, > rather than having to wrap them up in a dictionary. > > A wrapper in this case is very difficult, too, since I'd have to parse the > query for the %s's, then turn them into %(somevarN). Not trivial. It would > be a /lot/ easier for the API writer to have two pieces of code that turn > a format, and a pyformat, request into the appropriate chunks to pass to > the C API. > > Of course, this is making a lot of assumptions about what the C API looks > like. In both Mysql, and Postgresql's case, the API takes a single string. > So... what's stopping Python API's for those two Databases from supporting > /all/ the paramstyles?, since they all have to be interpreted one way or > the other. You are missing the point here: when talking about changing the DB API specs you enforce a standard on all DB API conform modules, not only the ones which could easily implement the change. This is unacceptable. Note that the DB API spec opens up the possible parameter format to *simplify* the task of the module writer -- she can choose whichever parameter format suits her best. 
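Chris's claim that marker conversion is mechanical for string-based backends can be illustrated with a toy converter (a sketch only; his actual dbhelper.py is not reproduced here, and the function name is invented). It also makes MAL's objection concrete: the rewrite is purely textual, so a '?' inside a string literal would be mangled, which is why a real converter must understand the database's quoting rules.

```python
def qmark_to_pyformat(query, params):
    """Naively rewrite qmark markers ('?') as pyformat markers
    ('%(pN)s') and build the matching parameter mapping.
    Does NOT handle '?' occurring inside SQL string literals."""
    out, names = [], {}
    for i, piece in enumerate(query.split("?")):
        out.append(piece)
        if i < len(params):           # one marker precedes each split gap
            name = "p%d" % i
            names[name] = params[i]
            out.append("%%(%s)s" % name)
    return "".join(out), names

q, p = qmark_to_pyformat("SELECT * FROM t WHERE a = ? AND b = ?", (1, 2))
assert q == "SELECT * FROM t WHERE a = %(p0)s AND b = %(p1)s"
assert p == {"p0": 1, "p1": 2}
```

The output pair can be handed straight to a pyformat-style execute(); the reverse direction (pyformat to qmark) is the genuinely hard one, as the thread notes, because it requires parsing the format specifiers out of the query.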
The proper solution for your wish would be to define a "standard
extension" to the DB API (we've had this discussion already) which is
then appended to the spec for module writers to use as a guideline in
case they want to add this functionality to their modules.

Please write up such a "standard extension" section and post it to the
list.

> I can certainly see wanting to place a restriction on the paramstyle
> supported if the C API actually supported one or the other of the given
> types. But for the above two databases, there's no excuse. The Python DB
> team could very well supply a library to assist API writers in this
> regard.

You mean: a library for parsing parameter markers and finding the right
way to quote data ? I doubt that this will be possible since databases
usually have their own view of how to escape data in SQL statements.

Also note that some DBs have restrictions on the overall size of the SQL
statement, which could render this approach useless for e.g. BLOBs.

--
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting: http://www.egenix.com/
Python Software: http://www.lemburg.com/python/

From mal@lemburg.com Fri Apr 27 12:17:12 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 27 Apr 2001 13:17:12 +0200
Subject: [DB-SIG] new pep about proposed extensions
References: <20010427113748.I759@mixadlive.com>
Message-ID: <3AE95538.BE5A470E@lemburg.com>

Federico Di Gregorio wrote:
>
> i'd like to start writing a new pep, titled "Python Database API
> Extensions". i would like to put in it a summary of all the discussion
> and proposed extensions of the last two months. imho, the best way
> to proceed will be to have two classes of extensions:
>
> proposed extensions - discussion going on, everybody implements them
>                       as he likes because there is no consensus
>
> suggested extensions - discussion ended and we reached some kind of

Make that "Experimental Extensions" and "Standard Extensions".
> consensus. developers of modules are not requested
> to implement the extension, but if they choose to,
> it is strongly suggested they follow the given api.
>
> extensions are just that and are in no way part of the "real" db-api.
>
> what do you think about it? can i go on and write the pep?

Good idea. I will then add a reference to the new PEP in the DB API spec.

--
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting: http://www.egenix.com/
Python Software: http://www.lemburg.com/python/

From mal@lemburg.com Fri Apr 27 12:20:43 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 27 Apr 2001 13:20:43 +0200
Subject: [DB-SIG] patch to the dbapi document in pep format
References: <20010427114507.A3115@mixadlive.com>
Message-ID: <3AE9560B.4ADC48B6@lemburg.com>

Federico Di Gregorio wrote:
>
> Hi,
>
> I finally had time to checkout from the sourceforge cvs and
> move my patch from html to pep. the patch is just a bunch of little
> clarifications summarized from the discussion that began about 2
> months ago. i hope there will be no problems in including it.
>
> attached are both full-text and diff.

Checked in. Thanks,

--
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting: http://www.egenix.com/
Python Software: http://www.lemburg.com/python/

From mal@lemburg.com Fri Apr 27 12:24:53 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 27 Apr 2001 13:24:53 +0200
Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0
References: <3AE920FA.4E849807@lemburg.com> <3AE92357.DFCB3270@trema.com>
Message-ID: <3AE95705.38D5CF25@lemburg.com>

Harri Pasanen wrote:
> [Making the DB API interface handle multiple paramstyles]
>
> How about specifying a single 'python meta' -style. So the module
> writers would support the database native format, and the Python DB-Api
> format, from which a conversion to native format would exist.
I spent > maybe 2 minutes thinking about it, so maybe this is not feasible, just a > thought... This is not necessarily efficient, e.g. MySQL and Postgresql are just fine with wrapping data up in the SQL statement itself, but e.g. many ODBC drivers pass in data using column binding and only parse the SQL once (parsing and checking for correctness takes a long time...) ! I'd rather leave it at the current open state of affairs. People who want to use a generic interface can use wrappers or Chris' dbhelper.py module if they want to. -- Marc-Andre Lemburg ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Software: http://www.lemburg.com/python/ From gary.aviv@intec-telecom-systems.com Fri Apr 27 19:19:25 2001 From: gary.aviv@intec-telecom-systems.com (Gary Aviv) Date: Fri, 27 Apr 2001 14:19:25 -0400 Subject: [DB-SIG] What is avilable for Oracle? Message-ID: I am new to database programming with Python. I have been searching for a native Oracle module. There seem to be at least two: DCOracle and cx_Oracle. I notice that DCOracle is not API specification 2.0 compliant. I built DCOracle successfuly but I had several problems using it which is the subject of another note. I was wondering what the SIG's recommendation is for which of these, or any others, one should use. Gary Aviv e-mail: gary.aviv@intec-telecom-systems.com From gstein@lyra.org Fri Apr 27 21:22:52 2001 From: gstein@lyra.org (Greg Stein) Date: Fri, 27 Apr 2001 13:22:52 -0700 Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0 In-Reply-To: <3AE95705.38D5CF25@lemburg.com>; from mal@lemburg.com on Fri, Apr 27, 2001 at 01:24:53PM +0200 References: <3AE920FA.4E849807@lemburg.com> <3AE92357.DFCB3270@trema.com> <3AE95705.38D5CF25@lemburg.com> Message-ID: <20010427132252.Q1374@lyra.org> On Fri, Apr 27, 2001 at 01:24:53PM +0200, M.-A. 
Lemburg wrote:
> Harri Pasanen wrote:
> > [Making the DB API interface handle multiple paramstyles]
> >
> > How about specifying a single 'python meta' -style. So the module
> > writers would support the database native format, and the Python DB-Api
> > format, from which a conversion to native format would exist. I spent
> > maybe 2 minutes thinking about it, so maybe this is not feasible, just a
> > thought...
>
> This is not necessarily efficient, e.g. MySQL and Postgresql are
> just fine with wrapping data up in the SQL statement itself,
> but e.g. many ODBC drivers pass in data using column binding and
> only parse the SQL once (parsing and checking for correctness
> takes a long time...) !
>
> I'd rather leave it at the current open state of affairs. People who
> want to use a generic interface can use wrappers or Chris' dbhelper.py
> module if they want to.

Normally, I like to jump in on these discussions and chatter on about the
API, its view of the world, and how it helps authors and users, and ...

But damn. Marc-Andre has said everything exactly as I would. I've got
nothing more to add! Dang :-)

[ hmm. maybe I've hounded M-A enough over the past five years, that I've
brainwashed him? :-) ]

I support the suggestion that a DBAPI module can *optionally*: advertise
multiple paramstyles, and allow switching the paramstyle "mode". As always,
there should be a way for a module author to ignore this option as too much
hassle for themselves (and punt to an external system). So if PEP-249 is
revamped to make the "add'l paramstyles" optional, then I'd be +1.

The param styles were chosen with the explicit intent that the DBAPI module
would not even touch the strings. The "?" and ":1" and ":name" forms can all
be passed directly to certain SQL engines, along with input bindings.
"Lesser" databases require munging everything into a single SQL string,
which means they have more flexibility on the param style.
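For reference, the same logical query under each of the paramstyles being discussed, plus a toy sketch of the string "munging" a lesser backend has to perform for qmark style. The quoting rules below are illustrative only -- real modules must follow their database's own escaping rules, and a real implementation must also not split on '?' inside string literals:

```python
# The same logical query under each DB API paramstyle:
#   qmark:    SELECT * FROM t WHERE id = ?
#   numeric:  SELECT * FROM t WHERE id = :1
#   named:    SELECT * FROM t WHERE id = :name
#   format:   SELECT * FROM t WHERE id = %s
#   pyformat: SELECT * FROM t WHERE id = %(name)s

def quote(value):
    """Toy literal quoting -- each database has its own escaping rules,
    so this is a sketch, not something to use in production."""
    if value is None:
        return "NULL"
    if isinstance(value, str):
        return "'%s'" % value.replace("'", "''")
    return str(value)

def munge_qmark(query, params):
    """Inline qmark parameters into the SQL string, the way a backend
    without native input bindings has to.  Naively splits on '?', so
    question marks inside string literals would break it."""
    parts = query.split("?")
    if len(parts) - 1 != len(params):
        raise ValueError("parameter count mismatch")
    out = parts[0]
    for part, value in zip(parts[1:], params):
        out += quote(value) + part
    return out

print(munge_qmark("SELECT * FROM t WHERE id = ? AND name = ?", (1, "O'Brien")))
# SELECT * FROM t WHERE id = 1 AND name = 'O''Brien'
```

This also shows why quoting is database-specific: the doubled-quote escape used here is only one of several conventions in the wild.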
Cheers,
-g

--
Greg Stein, http://www.lyra.org/

From gstein@lyra.org Fri Apr 27 21:33:39 2001
From: gstein@lyra.org (Greg Stein)
Date: Fri, 27 Apr 2001 13:33:39 -0700
Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0
In-Reply-To: <20010427132252.Q1374@lyra.org>; from gstein@lyra.org on Fri, Apr 27, 2001 at 01:22:52PM -0700
References: <3AE920FA.4E849807@lemburg.com> <3AE92357.DFCB3270@trema.com> <3AE95705.38D5CF25@lemburg.com> <20010427132252.Q1374@lyra.org>
Message-ID: <20010427133339.S1374@lyra.org>

On Fri, Apr 27, 2001 at 01:22:52PM -0700, Greg Stein wrote:
>...
> I support the suggestion that a DBAPI module can *optionally*: advertise
> multiple paramstyles, and allow switching the paramstyle "mode". As always,
> there should be a way for a module author to ignore this option as too much
> hassle for themselves (and punt to an external system). So if PEP-249 is
> revamped to make the "add'l paramstyles" optional, then I'd be +1.

Oops. I just realized that PEP-249 is the spec itself, not the suggested
change. Ah well.. you know what I meant :-)

btw, why was the spec changed from its nice HTML format to the plain old
text format? Was that mainly to get it into the PEP area so that more people
could change it? so that we could track changes? etc?

Cheers,
-g

--
Greg Stein, http://www.lyra.org/

From chris@onca.catsden.net Fri Apr 27 21:58:56 2001
From: chris@onca.catsden.net (chris@onca.catsden.net)
Date: Fri, 27 Apr 2001 13:58:56 -0700 (PDT)
Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0
In-Reply-To: <20010427132252.Q1374@lyra.org>
Message-ID: 

Okay, if I understand correctly, there's more support for adding in
'optional' support for 'optionally' supporting more than one paramstyle.
Ie, in the next version of the DB-API, we could add:

----
In addition to the above 'required' interface, the Python API implementer
may choose to offer the following interface:

- include interface changes from my earlier post here -
----

Then, the /user/ of the API would have to check to see if the facilities
are available (ie, the presence of valid_paramstyles and set_paramstyle)
before attempting to use them.

My suggestion was to /require/ of an API implementor, if they want to make
their API compliant with DB-API 2.1 (or whatever we call it), to supply the
above interfaces, even if the only thing she does is to have a single entry
inside valid_paramstyles, and 'set_paramstyle' always throws an exception
if any other paramstyle is chosen.

I can see pros and cons on both sides:

Extension is Optional
---------------------
Pro: Very easy for the API implementor
Con: API user needs to write extra code to check
Pro: No problems caused if a 2.0 API is given to a 2.1 aware database app

Extension is Required, even if Minimally Implemented
----------------------------------------------------
Con: Many APIs need to be modified to be 2.1 compliant (but the change is
     trivial)
Pro: Less code change in the applications, unless you consider the code
     change they'd go through to utilise the extension.

Given the above, I'd say that I too would support an 'optional'
implementation. Although, we do need to specify the requirements if any API
writer wants to implement this facility. Ie, "This is optional, but if you
do it, you must do it like this".

("`-/")_.-'"``-._ Ch'marr, a.k.a. . . `; -._ )-;-,_`) Chris Cogdon (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' FC1.3: FFH3cmA+>++C++D++H++M++P++R++T+++WZ++Sm++ ((,.-' ((,/ fL RLCT acl+++d++e+f+++h++i++++jp-sm++

From Andreas Jung" Message-ID: <009f01c0cf7e$c1522930$0901000a@SUXLAP>

Depending on your Oracle version you should give DCO2 a try (available
from www.zope.org). DCO2 should be mostly compliant.
At least it is built on top of OCI 8 and supports Oracle 8i including
LOB support.

Andreas Jung
Digital Creations

----- Original Message -----
From: "Gary Aviv" 
To: 
Sent: Friday, April 27, 2001 2:19 PM
Subject: [DB-SIG] What is available for Oracle?

> I am new to database programming with Python.
> I have been searching for a native Oracle module. There seem
> to be at least two:
>
> DCOracle and cx_Oracle.
>
> I notice that DCOracle is not API specification 2.0 compliant.
>
> I built DCOracle successfully but I had several problems using it which
> is the subject of another note.
>
> I was wondering what the SIG's recommendation is for which of these, or
> any others, one should use.
>
> Gary Aviv
> e-mail: gary.aviv@intec-telecom-systems.com
>
> _______________________________________________
> DB-SIG maillist - DB-SIG@python.org
> http://mail.python.org/mailman/listinfo/db-sig

From harri.pasanen@trema.com Sat Apr 28 10:24:05 2001
From: harri.pasanen@trema.com (Harri Pasanen)
Date: Sat, 28 Apr 2001 11:24:05 +0200
Subject: [DB-SIG] What is available for Oracle?
References: <009f01c0cf7e$c1522930$0901000a@SUXLAP>
Message-ID: <3AEA8C35.502ACFEA@trema.com>

Does the same apply to the Sybase module? Is DC's version API compliant?

-Harri

Andreas Jung wrote:
>
> Depending on your Oracle version you should give DCO2 a try (available
> from www.zope.org). DCO2 should be mostly compliant. At least
> it is built on top of OCI 8 and supports Oracle 8i including LOB support.
>
> Andreas Jung
> Digital Creations
> ----- Original Message -----
> From: "Gary Aviv" 
> To: 
> Sent: Friday, April 27, 2001 2:19 PM
> Subject: [DB-SIG] What is available for Oracle?
>
> > I am new to database programming with Python.
> > I have been searching for a native Oracle module. There seem
> > to be at least two:
> >
> > DCOracle and cx_Oracle.
> >
> > I notice that DCOracle is not API specification 2.0 compliant.
> >
> > I built DCOracle successfully but I had several problems using it which
> > is the subject of another note.
> >
> > I was wondering what the SIG's recommendation is for which of these, or
> > any others, one should use.
> >
> > Gary Aviv
> > e-mail: gary.aviv@intec-telecom-systems.com
> >
> > _______________________________________________
> > DB-SIG maillist - DB-SIG@python.org
> > http://mail.python.org/mailman/listinfo/db-sig
>
> _______________________________________________
> DB-SIG maillist - DB-SIG@python.org
> http://mail.python.org/mailman/listinfo/db-sig

From mal@lemburg.com Sat Apr 28 18:24:31 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Sat, 28 Apr 2001 19:24:31 +0200
Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0
References: <3AE920FA.4E849807@lemburg.com> <3AE92357.DFCB3270@trema.com> <3AE95705.38D5CF25@lemburg.com> <20010427132252.Q1374@lyra.org> <20010427133339.S1374@lyra.org>
Message-ID: <3AEAFCCF.96450C22@lemburg.com>

Greg Stein wrote:
>
> btw, why was the spec changed from its nice HTML format to the plain old
> text format? Was that mainly to get it into the PEP area so that more people
> could change it? so that we could track changes? etc?

The latter... PEPs can be tracked using CVS, diffs, etc.; this has
many advantages over HTML. Another advantage is making the spec
more visible, since PEPs sort of define the directions of development.

--
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting: http://www.egenix.com/
Python Software: http://www.lemburg.com/python/

From dieter@sz-sb.de Mon Apr 30 07:57:44 2001
From: dieter@sz-sb.de (Dr.
Dieter Maurer)
Date: Mon, 30 Apr 2001 08:57:44 +0200 (CEST)
Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0
In-Reply-To: <3AEAFCCF.96450C22@lemburg.com>
References: <3AE920FA.4E849807@lemburg.com> <3AE92357.DFCB3270@trema.com> <3AE95705.38D5CF25@lemburg.com> <20010427132252.Q1374@lyra.org> <20010427133339.S1374@lyra.org> <3AEAFCCF.96450C22@lemburg.com>
Message-ID: <15085.3086.546714.57892@dieter.sz-sb.de>

M.-A. Lemburg writes:
> Greg Stein wrote:
> >
> > btw, why was the spec changed from its nice HTML format to the plain old
> > text format? Was that mainly to get it into the PEP area so that more people
> > could change it? so that we could track changes? etc?
>
> The latter... PEPs can be tracked using CVS, diffs, etc.; this has
> many advantages over HTML. Another advantage is making the spec
> more visible, since PEPs sort of define the directions of development.

You might consider a "Structured Text" view on PEPs.

It would provide the advantages of text with respect to change tracking
but nevertheless could be viewed comfortably with an HTML browser.

The current Structured Text version has some problems with
rendering, but many might be overcome with StructuredTextNG.

Dieter

From mal@lemburg.com Mon Apr 30 08:56:15 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 30 Apr 2001 09:56:15 +0200
Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0
References: <3AE920FA.4E849807@lemburg.com> <3AE92357.DFCB3270@trema.com> <3AE95705.38D5CF25@lemburg.com> <20010427132252.Q1374@lyra.org> <20010427133339.S1374@lyra.org> <3AEAFCCF.96450C22@lemburg.com> <15085.3086.546714.57892@dieter.sz-sb.de>
Message-ID: <3AED1A9F.92A54842@lemburg.com>

"Dr. Dieter Maurer" wrote:
>
> M.-A. Lemburg writes:
> > Greg Stein wrote:
> > >
> > > btw, why was the spec changed from its nice HTML format to the plain old
> > > text format? Was that mainly to get it into the PEP area so that more people
> > > could change it? so that we could track changes? etc?
> >
> > The latter...
> > PEPs can be tracked using CVS, diffs, etc.; this has
> > many advantages over HTML. Another advantage is making the spec
> > more visible, since PEPs sort of define the directions of development.
>
> You might consider a "Structured Text" view on PEPs.
>
> It would provide the advantages of text with respect to change tracking
> but nevertheless could be viewed comfortably with an HTML browser.
>
> The current Structured Text version has some problems with
> rendering, but many might be overcome with StructuredTextNG.

Would the "Structured Text" view be different from what we already have
on SourceForge (http://python.sourceforge.net) ? The tool for generating
HTML out of PEPs was written by Fredrik Lundh and does a pretty clean
job, IMHO.

Thanks,

--
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting: http://www.egenix.com/
Python Software: http://www.lemburg.com/python/

From mal@lemburg.com Mon Apr 30 09:22:16 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 30 Apr 2001 10:22:16 +0200
Subject: [DB-SIG] PEP249 - Proposed change to DB-API 2.0
References: <3AE920FA.4E849807@lemburg.com> <3AE92357.DFCB3270@trema.com> <3AE95705.38D5CF25@lemburg.com> <20010427132252.Q1374@lyra.org>
Message-ID: <3AED20B8.2F4DAA1E@lemburg.com>

Greg Stein wrote:
>
> On Fri, Apr 27, 2001 at 01:24:53PM +0200, M.-A. Lemburg wrote:
> > Harri Pasanen wrote:
> > > [Making the DB API interface handle multiple paramstyles]
> > >
> > > How about specifying a single 'python meta' -style. So the module
> > > writers would support the database native format, and the Python DB-Api
> > > format, from which a conversion to native format would exist. I spent
> > > maybe 2 minutes thinking about it, so maybe this is not feasible, just a
> > > thought...
> >
> > This is not necessarily efficient, e.g. MySQL and Postgresql are
> > just fine with wrapping data up in the SQL statement itself, but e.g.
> > many ODBC drivers pass in data using column binding and
> > only parse the SQL once (parsing and checking for correctness
> > takes a long time...) !
> >
> > I'd rather leave it at the current open state of affairs. People who
> > want to use a generic interface can use wrappers or Chris' dbhelper.py
> > module if they want to.
>
> Normally, I like to jump in on these discussions and chatter on about the
> API, its view of the world, and how it helps authors and users, and ...
>
> But damn. Marc-Andre has said everything exactly as I would. I've got
> nothing more to add! Dang :-)

Hehe :-)

> [ hmm. maybe I've hounded M-A enough over the past five years, that I've
> brainwashed him? :-) ]

Probably... :-)

> I support the suggestion that a DBAPI module can *optionally*: advertise
> multiple paramstyles, and allow switching the paramstyle "mode". As always,
> there should be a way for a module author to ignore this option as too much
> hassle for themselves (and punt to an external system). So if PEP-249 is
> revamped to make the "add'l paramstyles" optional, then I'd be +1.

Same here. I'd suggest using cursor and connection methods for setting or
querying the current paramstyle in use:

.get_paramstyle()
    Returns one of the defined paramstyle strings currently in effect.

.get_paramstyles()
    Returns a list of possible paramstyles.

.set_paramstyle(paramstyle)
    Sets the paramstyle for the next statement executed on the cursor
    to paramstyle.

The module level .paramstyle should be set to the default paramstyle which
the module expects without calling .set_paramstyle(). As for most cursor
level flags, the connection should offer the same APIs and then inherit the
setting to all newly created cursors on that connection.

> The param styles were chosen with the explicit intent that the DBAPI module
> would not even touch the strings. The "?" and ":1" and ":name" forms can all
> be passed directly to certain SQL engines, along with input bindings.
> "Lesser" databases require munging everything into a single SQL string,
> which means they have more flexibility on the param style.

Right.

--
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting: http://www.egenix.com/
Python Software: http://www.lemburg.com/python/

From seyan@cgenerator.com Mon Apr 30 11:12:46 2001
From: seyan@cgenerator.com (Kang Se Yan)
Date: Mon, 30 Apr 2001 18:12:46 +0800
Subject: [DB-SIG] Connection of MS SQL Residing in Microsoft NT with Python Residing in Linux
Message-ID: 

Dear Everybody,

I have a problem: how can Python running on a Linux platform communicate
with an MS SQL Server residing on Windows NT? Thank you for your help.

Best regards,
Se Yan

From mal@lemburg.com Mon Apr 30 11:24:51 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 30 Apr 2001 12:24:51 +0200
Subject: [DB-SIG] Connection of MS SQL Residing in Microsoft NT with Python Residing in Linux
References: 
Message-ID: <3AED3D73.3B002DD7@lemburg.com>

Kang Se Yan wrote:
>
> Dear Everybody,
>
> I have a problem: how can Python running on a Linux platform communicate
> with an MS SQL Server residing on Windows NT? Thank you for your help.

There are several ways to solve this problem:

1. use a proxy which then executes the query on the WinNT box and
   sends back the results to the Linux box

   * pyxmlmssql can be used for this
   * mxODBC + EasySoft ODBC-ODBC bridge too

2.
use an interface which speaks the MS SQL Server wire protocol natively

   * you could try one of the Sybase interfaces (they once were
     protocol compatible with MS SQL Server)
   * mxODBC + one of the ODBC driver kits for MS SQL Server

--
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting: http://www.egenix.com/
Python Software: http://www.lemburg.com/python/

From Anthony Baxter Mon Apr 30 15:21:38 2001
From: Anthony Baxter (Anthony Baxter)
Date: Tue, 01 May 2001 00:21:38 +1000
Subject: [DB-SIG] Connection of MS SQL Residing in Microsoft NT with Python Residing in Linux
In-Reply-To: Message from "Kang Se Yan" of "Mon, 30 Apr 2001 18:12:46 +0800."
Message-ID: <200104301421.AAA01282@mbuna.arbhome.com.au>

I was doing this successfully for quite some time using a couple of
different Sybase adaptors. Fortunately we got to turn off the MS SQL
servers recently so I don't have to do that any more. :)

I think there are howtos on www.zope.org about it...

Anthony

>>> "Kang Se Yan" wrote
> Dear Everybody,
>
> I have a problem: how can Python running on a Linux platform communicate
> with an MS SQL Server residing on Windows NT? Thank you for your help.
>
> Best regards,
> Se Yan
>
> _______________________________________________
> DB-SIG maillist - DB-SIG@python.org
> http://mail.python.org/mailman/listinfo/db-sig

--
Anthony Baxter
It's never too late to have a happy childhood.
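Tying the thread together: the optional paramstyle-switching extension that Marc-Andre and Chris sketched could look roughly like the following. The class name and defaults are hypothetical; the method names follow Marc-Andre's post, and the actual marker conversion is deliberately left to the module author:

```python
class ParamstyleMixin:
    """Sketch of the optional paramstyle extension discussed in this
    thread (hypothetical code, not part of DB API 2.0).  A module
    author would mix this into connection and cursor classes."""

    default_paramstyle = "qmark"
    supported_paramstyles = ("qmark",)  # a module author extends this

    def __init__(self):
        self._paramstyle = self.default_paramstyle

    def get_paramstyle(self):
        """Return the paramstyle string currently in effect."""
        return self._paramstyle

    def get_paramstyles(self):
        """Return the list of paramstyles this module can accept."""
        return list(self.supported_paramstyles)

    def set_paramstyle(self, paramstyle):
        """Select the paramstyle for subsequently executed statements.

        A minimal implementation may support only a single entry and
        raise for everything else, as suggested above."""
        if paramstyle not in self.supported_paramstyles:
            raise ValueError("unsupported paramstyle: %r" % paramstyle)
        self._paramstyle = paramstyle
```

A user would then probe for the extension (e.g. with hasattr) before relying on it, which matches the "optional, but do it like this" position that emerged in the discussion.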