From djc@object-craft.com.au Fri Aug 3 02:47:04 2001
From: djc@object-craft.com.au (Dave Cole)
Date: 03 Aug 2001 11:47:04 +1000
Subject: [DB-SIG] Sybase module 0.31 released
In-Reply-To: Dave Cole's message of "18 Jul 2001 13:09:33 +1000"
Message-ID:

What is it:

The Sybase module provides a Python interface to the Sybase relational
database system. It supports all of the Python Database API, version
2.0 with extensions.

Commercial support is available for this module. Please refer to our
website for details:

    http://www.object-craft.com.au/support.html

For those people who do not have LaTeX installed there is a PDF
document. There is also a PostScript document which prints out a handy
A5 booklet on an A4 duplex printer.

The module and documentation are all available here:

    http://www.object-craft.com.au/projects/sybase/

Thanks
------

Timothy Docker reported and fixed two bugs, one of which was quite
nasty. Derek Harland supplied a patch to allow mxDateTime.DateTime
objects to be supplied as parameters to cursor.execute().

Changes for this release
------------------------

1) The Connection.execute() method was incorrectly returning DATETIME
   column values. This was due to a bug in the internal DataBuf
   implementation, which always returned the DATETIME value at index 0
   in the buffer regardless of the fact that the buffer contained 16
   column values. Note that the Cursor.execute() method was not
   affected since it only allocates buffers for one row. Thanks to
   Timothy Docker for fixing this.

2) The internal Sybase.py context initialisation was being performed
   before the DB-API exceptions were defined. If the initialisation
   failed, the module tried to raise an exception which had not been
   defined. Oops. Thanks to Timothy Docker for fixing this.

3) The internal DateTime object no longer allows you to modify the
   cracked datetime attributes. Previously you may have been fooled
   into thinking that modifying these attributes would change the
   internal value...
   Thanks to Derek Harland for pointing this out.

4) You can now pass an mxDateTime.DateTime object as a parameter to
   Cursor.execute(). Thanks to Derek Harland for implementing this
   feature.

- Dave

--
http://www.object-craft.com.au

From bkline@rksystems.com Sat Aug 4 20:09:42 2001
From: bkline@rksystems.com (Bob Kline)
Date: Sat, 4 Aug 2001 15:09:42 -0400 (EDT)
Subject: [DB-SIG] Cursor.executemany() and Cursor.nextset()
Message-ID:

I'm a little bit puzzled about how the specs intend for the executemany
and nextset methods of the Cursor class to interact with each other.

I can easily picture the standard use cases for the nextset() method.
The user invokes a stored procedure. The procedure returns more than
one result set. The nextset() method allows the user to move through
the result sets. Happens all the time.

I can even imagine the justification for providing executemany() in the
case of data manipulation queries (INSERT, UPDATE, DELETE), though it's
not entirely clear to me that such batching isn't more appropriately
handled by client code. The problem is that the specification for
executemany() does not restrict its use to actions which do not return
result sets. So, if I were implementing the API, what should my code do
if executemany() is invoked with multiple parameter sets on a stored
procedure which returns multiple result sets? The choices I can imagine
include:

1. Ignore the result sets; throw an exception for fetchXXX() and
   nextset() calls.

2. Ignore the result sets silently.

3. Detect the condition and throw an exception from executemany().

4. Perform invocations of the command for each of the parameter sets,
   without waiting for the fetchXXX() or nextset() calls, gathering and
   caching the data for all result sets.

5. Suspend invocation of subsequent iterations of the command as soon
   as it is detected that a result set has been generated, holding off
   until the results are fetched and nextset() is invoked.
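For concreteness, option 3 could be sketched as below. Everything here
is hypothetical (the class, the InterfaceError choice, and the `_run`
helper standing in for the real driver call); it only illustrates the
indeterminacy problem, since some parameter sets may already have been
executed by the time the exception is raised.

```python
class InterfaceError(Exception):
    """Raised when the API is used in an unsupported way."""

class Cursor:
    """Minimal sketch of a cursor whose executemany() refuses commands
    that produce result sets (option 3 above).  Not any real driver."""

    def _run(self, operation, parameters):
        # Stand-in for the real driver call; pretend that SELECTs
        # (like stored procedures might) hand back a result set.
        return operation.lstrip().upper().startswith('SELECT')

    def executemany(self, operation, seq_of_parameters):
        for i, parameters in enumerate(seq_of_parameters):
            if self._run(operation, parameters):
                # Earlier invocations have already run -- exactly the
                # partial-execution problem discussed in the text.
                raise InterfaceError(
                    'executemany() produced a result set at '
                    'parameter set %d' % i)
```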
None of these options seems very satisfactory, to put it politely. In
the absence of more specific language in the spec barring the use of
executemany() for commands which produce result sets, it seems at best
dishonest to just ignore the result sets, silently or otherwise, which
would eliminate the first two approaches.

The third option, though the least of the evils listed, is excessively
crude, both in light of the absence of a warning in the spec that such
an exception might be thrown, and because of the indeterminate nature
of the point in time at which the condition would be detected, leaving
the possibility that some, but not all, of the invocations might be
performed.

The fourth option could be prohibitively expensive, and neither the
user nor the API implementation has any reliable way of specifying (in
the user's case) or detecting (in the driver's case) when such
pre-fetching of result sets would be required, so it would have to be
employed in all cases, killing the performance for the common case of a
sequential walk through the first few rows of a very large result set.

The last option is clearly unworkable, as there is no way for the
driver to tell when fetchXXX() or nextset() calls are to be expected.

Have I missed any obvious solutions? It seems to me that the API is
either overly ambitious or insufficiently specified (or both) in this
area. I strongly recommend that the use of executemany for commands
producing result sets be deprecated, and eventually forbidden (assuming
the method is retained at all). I'll be grateful for any insight into
solving this problem which I may have missed.

[Sorry if this topic has been covered before. I didn't see a search
interface for the archives, and though I did a brute force review of
the past few months, I didn't see any discussion which solved the
problem. Is there a FAQ document for this SIG?]
--
Bob Kline
mailto:bkline@rksystems.com
http://www.rksystems.com

From zen@shangri-la.dropbear.id.au Sun Aug 5 17:19:09 2001
From: zen@shangri-la.dropbear.id.au (Stuart Bishop)
Date: Mon, 6 Aug 2001 02:19:09 +1000 (EST)
Subject: [DB-SIG] DB API 3.0 strawman
Message-ID:

Here is an update to the DB API update I posted to this list a few
weeks ago. It has moved its focus away from being a wrapper for 1.0 and
2.0 compliant drivers, and more towards a new DB API standard (although
this wrapper functionality is still there). I'm after feedback,
particularly on whether this is the direction to head and whether it
should proceed to an official PEP.

PEP: TBA
Title: DB API 3.0
Version: $Revision: 1.9 $
Author: zen@shangri-la.dropbear.id.au (Stuart Bishop)
Discussions-To: db-sig@python.org
Status: strawman
Type: Standards Track
Requires: 234, 248, 249, Date PEP
Created: 18-May-2001
Post-History: Never

Abstract

    Python has had a solid Database API for SQL databases since 1996
    [1]. This revision introduces iterator support and attempts to fix
    known issues with the interface. This version provides a module to
    be integrated into the core Python distribution, which provides the
    interface to 3rd party RDBMS drivers. This module can use DB API
    1.0 and 2.0 compliant modules as the drivers, although it is
    expected that these drivers will be updated to support this API
    more fully.

Copyright

    This document has been placed in the public domain, except for
    those sections attributed to other sources.

Specification

    Use of this wrapper requires a DB API v2.0 compliant driver to be
    installed in sys.path. The wrapper may support DB API v1.0 drivers
    (to be determined).

Module Interface (sql.py)

    connect(driver,dsn,user,password,host,database)

        driver -- The name of the DB API compliant driver module.
        dsn -- Data Source Name
        user -- username (optional)
        password -- password (optional)
        host -- hostname or network address (optional)
        database -- database name (optional)

        Returns a Connection object.
        This will be a wrapper object for DB API 1.0 or DB API 2.0
        compliant drivers. DB API 3.0 compliant drivers will have a
        Connection instance returned unmodified.

        Does the connect function need to accept arbitrary keyword
        arguments?

Exceptions (unchanged from DB API 2.0)

    Exceptions thrown need to be subclasses of those defined in the sql
    module, to allow calling code to catch them without knowing which
    driver module they were thrown from. This is particularly of use
    for code that is connecting to multiple databases using different
    drivers.

Connection Object

    close()
    commit()
    cursor()

        As per DB API 2.0.

    rollback(savepoint=None)

        Rollback the current transaction entirely, or to the given
        savepoint if a savepoint is passed. The savepoint object must
        have been returned by the savepoint method during the current
        transaction.

        The rollback method will raise a NotSupportedError exception if
        the driver doesn't support transactions.

    quote(object)

        Returns an ANSI SQL quoted version of the given value as a
        string. For example:

        >>> print con.quote(42)
        42
        >>> print con.quote("Don't do that!")
        'Don''t do that!'

        Note that quoted dates often have an RDBMS dependent syntax
        (eg. "TO_DATE('01-Jan-2001 12:01','DD-MMM-YYYY H24:MI')" for
        Oracle). I need to track down the official ANSI SQL 1992 or
        1999 compliant syntax for specifying date, time, and datetime
        datatypes as strings (if it exists).

        The quote method is invaluable for generating logs of SQL
        commands or for dynamically generating SQL queries.

        This method may be made a module level function as opposed to a
        Connection method if it can be shown that string quoting will
        always be RDBMS independent.

    execute(operation,[seq_of_parameters])

        A Cursor is created and its execute method is called. Returns
        the newly created cursor.

        for row in con.execute('select actor,year from sketches'):
            [ ... ]

        Insert note about cursor creation overheads and how to avoid
        them here.

        What should the default format be for bind variables?
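As a sketch, the proposed execute() convenience method and the quote()
method above reduce to very little code. The cursor() method is assumed
to exist as in DB API 2.0, and the type handling in quote() covers only
the two cases shown in the examples; real drivers must also handle
dates, binary data, and so on.

```python
class Connection:
    """Sketch of the proposed convenience methods; not a full
    implementation of the strawman Connection object."""

    def execute(self, operation, seq_of_parameters=None):
        # Create a cursor, run the statement, and hand the cursor
        # back so the caller can iterate over it directly.
        cursor = self.cursor()
        if seq_of_parameters is None:
            cursor.execute(operation)
        else:
            cursor.execute(operation, seq_of_parameters)
        return cursor

    def quote(self, value):
        # ANSI-style quoting for the cases shown in the examples:
        # double any embedded single quotes in strings.
        if isinstance(value, str):
            return "'%s'" % value.replace("'", "''")
        return str(value)
```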
    driver_connection()

        Returns the unwrapped connection object if the driver is a DB
        API 1.0 or DB API 2.0 compliant driver. This is required for
        accessing RDBMS specific features. Returns self if the
        connection object is unwrapped.

    capabilities

        A dictionary describing the capabilities of the driver.
        Currently defined values are:

        apilevel

            String constant stating the supported DB API level of the
            driver. Currently only the strings '1.0', '2.0' and '3.0'
            are allowed.

        threadsafety

            As per DB API 2.0.

        rollback

            1 if the driver supports the rollback() method.
            0 if the driver does not support the rollback() method.

        nextset

            1 if the driver's cursors support the nextset() method.
            0 if the nextset() method is not supported.

        default_transaction_level

            The default transaction isolation level for this driver.

    set_transaction_level(level)

        Set the transaction isolation level to the given level, or a
        more restrictive isolation level. A NotSupportedError exception
        is raised if the given or a more restrictive isolation level
        cannot be set.

        Returns the transaction isolation level actually set.

    autocommit(flag)

        Do we want an autocommit method? It would need to default to
        false for backwards compatibility, and remembering to turn
        autocommit on is as easy as explicitly calling commit().

    savepoint()

        Sets a savepoint in the current transaction, or throws a
        NotSupportedError exception. Returns an object which may be
        passed to the rollback method to rollback the transaction to
        this point.

Cursor Object

    The cursor object becomes an iterator after its execute method has
    been called. Rows are retrieved using the driver's
    fetchmany(arraysize) method.

    execute(operation,sequence_of_parameters)

        As per the executemany method in the DB API 2.0 spec. Is there
        any need for both execute and executemany in this API?

        Setting arraysize to 1 will call the driver's fetch() method
        rather than fetchmany().

        Returns self, so the following code is valid:

        for row in mycursor.execute('select actor,year from sketches'):
            [ ...
]

        What should the default format be for bind variables?

    callproc(procedure,[parameters])
    arraysize
    description
    rowcount
    close()
    setinputsizes(size)
    setoutputsizes(size[,column])

        As per DB API 2.0.

    driver_cursor()

        Return the unwrapped cursor object as produced by the driver.
        This may be required to access driver specific features.

    next()

        Return the Row object for the next row from the currently
        executing SQL statement. As per DB API 2.0 spec, except a
        StopIteration exception is raised when the result set is
        exhausted.

    __iter__()

        Returns self.

    nextset()

        I guess as per DB API 2.0 spec.

    warnings

        List of Warning exceptions generated so far by the currently
        executing statement. This list is cleared automatically by the
        execute and callproc methods.

    clear_warnings()

        Erase the list of warnings. Same as 'del cursor.warnings[:]'.
        Is this method required?

    raise_warnings

        If set to 0, Warning exceptions are not raised and instead only
        appended to the warnings list.

        If set to 1, Warning exceptions are raised as they occur.

        Defaults to 1.

Row Object

    When a Cursor is iterated over, it returns Row objects. dtuple.py
    (http://www.lyra.org/greg/python/) provides such an implementation
    already.

    [index_or_key]

        Retrieve a field from the Row. If index_or_key is an integer,
        the column of the field is referenced by number (with the first
        column at index 0). If index_or_key is a string, the column is
        referenced by its lowercased name (lowercased to avoid problems
        with the differing methods vendors use to capitalize their
        column names).

Type Objects and Constructors

    As per DB API 2.0 spec.

    Do we need Date, Time and DateTime classes, or just DateTime?

    Need to author the 'Date PEP'. It would be nice if there were a
    more intelligent standard Date class in the Python core that we
    could leverage. If we don't, people will start using the sql module
    for the more intelligent DateTime object it would need to provide
    even if they aren't using databases.

ConnectionPool Object

    A connection pool, to be documented.
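The Row object described above might be sketched as follows. The
constructor signature (taking a DB API 2.0 cursor.description and a
tuple of values) is an assumption, not part of the proposal.

```python
class Row:
    """Sketch of the proposed Row object: indexable by column
    position or by lowercased column name."""

    def __init__(self, description, values):
        # description is a DB API 2.0 cursor.description sequence;
        # element 0 of each entry is the column name.
        self._values = tuple(values)
        self._index = {}
        for i, column in enumerate(description):
            self._index[column[0].lower()] = i

    def __getitem__(self, index_or_key):
        if isinstance(index_or_key, str):
            # Lowercase the key to sidestep the differing ways
            # vendors capitalize column names.
            return self._values[self._index[index_or_key.lower()]]
        return self._values[index_or_key]
```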
Transaction Isolation Levels

    Insert description of dirty reads, non-repeatable reads and phantom
    reads, probably stolen from the JDBC 3.0 documentation.

    The following symbolic constants are defined to describe the four
    transaction isolation levels defined by SQL99. Note that they are
    in numeric order so comparison operators can be safely used to test
    for restrictiveness. Note that these definitions are taken from the
    JDBC API 3.0 documentation by Sun.

    TRANSACTION_NONE

        No transaction isolation is guaranteed.

    TRANSACTION_READ_UNCOMMITTED

        Allows transactions to see uncommitted changes to the data.
        This means that dirty reads, nonrepeatable reads, and phantom
        reads are possible.

    TRANSACTION_READ_COMMITTED

        Means that any changes made inside a transaction are not
        visible outside the transaction until the transaction is
        committed. This prevents dirty reads, but nonrepeatable reads
        and phantom reads are still possible.

    TRANSACTION_REPEATABLE_READ

        Disallows dirty reads and nonrepeatable reads. Phantom reads
        are still possible.

    TRANSACTION_SERIALIZABLE

        Specifies that dirty reads, nonrepeatable reads, and phantom
        reads are prevented.

Test Suite

    A common test suite will be part of the implementation, to allow
    driver authors and driver evaluators to test and exercise the
    systems. Two possibilities to start with are the test suites in
    psycopg by Federico Di Gregorio and mxODBC by eGenix.com Software.
    Example DDL for various systems will need to be provided.

Rationale

    The module is called sql.py, to avoid any ambiguity with
    non-relational or non-SQL compliant database interfaces. This also
    nicely limits the scope of the project. Other suggestions are
    'sqlutil' and 'rdbms', since 'sql' may refer to the language
    itself.

    Previous versions of the DB API have been intentionally kept lean
    to make it simpler to develop and maintain drivers, as a richer
    feature set could be implemented by a higher level wrapper and
    maintained in a single place rather than in every API compliant
    driver.
    As this revision provides a single place to maintain code, these
    features can now be provided without placing a burden on the driver
    authors.

    Existing DB API 1.0 and 2.0 drivers can be used to power the
    backend of this module. This means that there will be a full
    complement of drivers available from day 1 of this module's release
    without placing a burden on driver developers and maintainers.

    The core of the API is identical to the Python Database API v2.0
    (PEP-249). This API is already familiar to Python programmers and
    is a proven, solid foundation.

    Python previously defined a common relational database API that was
    implemented by all drivers, and application programmers accessed
    the drivers directly. This caused the following issues:

        It was difficult to write code that connected to multiple
        database vendors' databases. Each separate driver defined its
        own hierarchy of exceptions that needed to be handled, its own
        datetime class and its own set of datatype constants.

        Platform independent code could not be written simply, due to
        the differing parameter styles used by bind variables. This
        also caused problems with publishing example code and
        tutorials.

        The API remained minimal, as any new features would need to be
        implemented by all driver maintainers. The DB-SIG felt most
        feature suggestions would be better implemented by higher level
        wrappers, such as that defined by this PEP.

Language Comparison

    The design proposed in this PEP is the same as that used by Java,
    where a relational database API is shipped as part of the core
    language, but requires the installation of 3rd party drivers to be
    used.

    Perl has a similar arrangement of API and driver separation. Perl
    does not ship with PerlDBI or drivers as part of the core language,
    but it is well documented and trivial to add using the CPAN tools.

    PHP has a separate API for each database vendor, although work is
    underway (completed?) to define a common abstraction layer similar
    to Java, Perl and Python.
    All of the drivers ship as part of the core language.

References

    [1] PEP-248 and PEP-249

--
Stuart Bishop

From fog@mixadlive.com Mon Aug 6 09:36:52 2001
From: fog@mixadlive.com (Federico Di Gregorio)
Date: 06 Aug 2001 10:36:52 +0200
Subject: [DB-SIG] DB API 3.0 strawman
In-Reply-To:
References:
Message-ID: <997087017.1484.22.camel@lola>

hi,

i don't like all the complexity you added to the dbapi, so i will be a
little bit harsh in my comments. also, i think much of the stuff here
can safely go into a wrapper module, so my advice is: don't call this
pep dbapi-3.0.

On 06 Aug 2001 02:19:09 +1000, Stuart Bishop wrote:

> Module Interface (sql.py)

why are you calling this pep dbapi if it documents a different thing?

> rollback(savepoint=None)

what dbs support savepoints? if it is just one, let's treat this as a
db-specific extension, no need to pollute the simplicity of dbapi.

> quote(object)

nice. i'd put it at module level.

> execute(operation,[seq_of_parameters])

why add a new function just to avoid two method calls?

> capabilities

i'd like to see a list called `formats', listing all the supported
formats. almost all the drivers support more than one bound args format
and currently there is no way to use them.

> set_transaction_level(level)

i agree, but doing this right will be difficult. do we really want to
put such a burden on driver developers?

> autocommit(flag)
>
>     Do we want an autocommit method? It would need to default
>     to false for backwards compatibility, and remembering
>     to turn autocommit on is as easy as explicitly calling
>     commit().

ok. (but autocommit is just a specific case of transaction level.)

> savepoint()
>
>     Sets a savepoint in the current transaction, or throws a
>     NotSupportedError exception. Returns an object which may
>     be passed to the rollback method to rollback the transaction
>     to this point.

see above.

> next()
>
>     Return the Row object for the next row from the currently
>     executing SQL statement.
>     As per DB API 2.0 spec, except a StopIteration exception
>     is raised when the result set is exhausted.

nice addition to dbapi.

> __iter__()
>
>     Returns self.

why?

> nextset()
>
>     I guess as per DB API 2.0 spec.
>
> warnings
>
>     List of Warning exceptions generated so far by the currently
>     executing statement. This list is cleared automatically by the
>     execute and callproc methods.

this is good for me.

> clear_warnings()
>
>     Erase the list of warnings. Same as 'del cursor.warnings[:]'
>     Is this method required?

no. you can just do cursor.warnings = None

> Row Object
>
>     When a Cursor is iterated over, it returns Row objects.
>     dtuple.py (http://www.lyra.org/greg/python/) provides such an
>     implementation already.
>
>     [index_or_key]
>
>     Retrieve a field from the Row. If index_or_key is an integer,
>     the column of the field is referenced by number (with the first
>     column at index 0). If index_or_key is a string, the column is
>     referenced by its lowercased name (lowercased to avoid problems
>     with the differing methods vendors use to capitalize their
>     column names).

i like this concept. this is really missing from the dbapi.

> Type Objects and Constructors
>
>     As per DB API 2.0 spec.
>
>     Do we need Date, Time and DateTime classes, or just DateTime?

i think that every dbapi compliant driver should provide Date and Time,
usually wrapping the mxDateTime objects.

>     Need to author the 'Date PEP'. It would be nice if there was a
>     more intelligent standard Date class in the Python core that
>     we could leverage. If we don't, people will start using the
>     sql module for the more intelligent DateTime object it would
>     need to provide even if they aren't using databases.
>
> ConnectionPool Object
>
>     A connection pool, to be documented.

better done at user-level (i say that *even* if psycopg supports
connection pools natively. for us it was a design choice but it should
not be forced on developers of drivers, imho.)
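The next()/__iter__() behaviour discussed above amounts to a small
wrapper over the driver's fetchmany(). A sketch, assuming a DB API 2.0
style driver cursor underneath (modern Python spells the method
__next__ rather than next):

```python
class IterCursor:
    """Sketch of the proposed cursor iteration: rows are pulled from
    the driver in fetchmany(arraysize) batches, and StopIteration is
    raised when the result set is exhausted."""

    def __init__(self, driver_cursor, arraysize=10):
        self._cursor = driver_cursor
        self.arraysize = arraysize
        self._batch = []

    def __iter__(self):
        # The cursor is its own iterator, as the strawman specifies.
        return self

    def __next__(self):
        if not self._batch:
            # Refill the buffer from the underlying driver cursor.
            self._batch = list(self._cursor.fetchmany(self.arraysize))
            if not self._batch:
                raise StopIteration
        return self._batch.pop(0)
```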
> Rationale
>
>     The module is called sql.py, to avoid any ambiguity with
>     non-relational or non-SQL compliant database interfaces. This
>     also nicely limits the scope of the project. Other suggestions
>     are 'sqlutil' and 'rdbms', since 'sql' may refer to the language
>     itself.
>
>     Previous versions of the DB API have been intentionally kept lean
>     to make it simpler to develop and maintain drivers, as a richer
>     feature set could be implemented by a higher level wrapper and
>     maintained in a single place rather than in every API compliant
>     driver. As this revision provides a single place to maintain
>     code, these features can now be provided without placing a burden
>     on the driver authors.

mmm... so why are you proposing this to be dbapi-3.0? imo, we should
separate the work into 3 parts:

1/ dbapi, always lean and easy to implement, to have a large base of
   compliant drivers.

2/ sql.py, based on dbapi, to provide richer functionality.

3/ dbapi extensions (the document i am writing, sorry for being so
   late), that describes common dbapi extensions (usually features not
   provided by every db and so not pertinent to 1 or 2.)

>     Existing DB API 1.0 and 2.0 drivers can be used to power the
>     backend of this module. This means that there will be a full
>     complement of drivers available from day 1 of this module's
>     release without placing a burden on driver developers and
>     maintainers.

completely agreed.

>     It was difficult to write code that connected to multiple
>     database vendors' databases. Each separate driver defined
>     its own hierarchy of exceptions that needed to be handled,
>     its own datetime class and its own set of datatype constants.

but the hierarchy of exceptions is documented in the dbapi pep. let's
fix the drivers, first.

>     Platform independent code could not be written simply,
>     due to the differing parameter styles used by bind variables.
>     This also caused problems with publishing example code and
>     tutorials.
this is true, but i think it was a design decision to allow for
different format styles. anyway, a single format supported (using a
wrapper) by all drivers seems good to me.

>     The API remained minimal, as any new features would need to
>     be implemented by all driver maintainers. The DB-SIG felt
>     most feature suggestions would be better implemented by higher
>     level wrappers, such as that defined by this PEP.

i mostly agree with you, so:

1/ find a name for this pep (*not* dbapi, it isn't)

2/ if everybody agrees, start implementing it.

ciao,
federico

--
Federico Di Gregorio
MIXAD LIVE Chief of Research & Technology       fog@mixadlive.com
Debian GNU/Linux Developer & Italian Press Contact  fog@debian.org
Best friends are often failed lovers. -- Me

From mal@lemburg.com Mon Aug 6 09:40:29 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 06 Aug 2001 10:40:29 +0200
Subject: [DB-SIG] DB API 3.0 strawman
References:
Message-ID: <3B6E57FD.9E2D30CB@lemburg.com>

Stuart Bishop wrote:
>
> Here is an update to the DB API update I posted to this
> list a few weeks ago. It has moved its focus away from
> being a wrapper for 1.0 and 2.0 compliant drivers,
> and more into a new DB API standard (although this wrapper
> functionality is still there).

I don't think you should name this "DB API" -- you are proposing a
Perl-like generic database interface which uses third party drivers to
actually connect to databases. The DB API specifies how this connection
is supposed to be exposed, while your interface merely wraps the
existing interfaces to further simplify database access from within
Python.

Note that I am not sure that this is the right approach, though. Python
has had great success with its DB API spec and the dozens of different
database interfaces (which I think results from the fact that the DB
API spec has tried to use the KISS principle all along).
Your proposed layer doesn't really introduce any significant new and
needed features; it also misses the point that cross-database
applications have more problems with the database specific SQL dialects
than with the connection and transaction mechanisms which provide
access to them.

Anyway, here are some additional comments...

> I'm after feedback, particularly if this is the direction to
> head and if it should proceed to an official PEP.
>
> PEP: TBA
> Title: DB API 3.0
> Version: $Revision: 1.9 $
> Author: zen@shangri-la.dropbear.id.au (Stuart Bishop)
> Discussions-To: db-sig@python.org
> Status: strawman
> Type: Standards Track
> Requires: 234, 248, 249, Date PEP
> Created: 18-May-2001
> Post-History: Never
>
> Abstract
>
>     Python has had a solid Database API for SQL databases since 1996
>     [1]. This revision introduces iterator support and attempts to
>     fix known issues with the interface. This version provides a
>     module to be integrated into the core Python distribution, which
>     provides the interface to 3rd party RDBMS drivers. This module
>     can use DB API 1.0 and 2.0 compliant modules as the drivers,
>     although it is expected that these drivers will be updated to
>     support this API more fully.
>
> Copyright
>
>     This document has been placed in the public domain, except for
>     those sections attributed to other sources.
>
> Specification
>
>     Use of this wrapper requires a DB API v2.0 compliant driver to
>     be installed in sys.path. The wrapper may support DB API v1.0
>     drivers (to be determined).
>
> Module Interface (sql.py)
>
>     connect(driver,dsn,user,password,host,database)
>
>         driver -- The name of the DB API compliant driver module.
>         dsn -- Data Source Name
>         user -- username (optional)
>         password -- password (optional)
>         host -- hostname or network address (optional)
>         database -- database name (optional)
>
>         Returns a Connection object. This will be a wrapper object
>         for DB API 1.0 or DB API 2.0 compliant drivers.
>         DB API 3.0 compliant drivers will have a Connection instance
>         returned unmodified.
>
>         Does the connect function need to accept arbitrary keyword
>         arguments?

For ODBC at least, the above few arguments don't suffice. The ODBC DSN
string can contain many different and driver specific additional
settings which may sometimes even be needed in order to successfully
connect. I'd rather suggest making all options except dsn optional. The
dsn string can then contain the user and password information as well.

> Exceptions (unchanged from DB API 2.0)
>
>     Exceptions thrown need to be subclasses of those defined in the
>     sql module, to allow calling code to catch them without knowing
>     which driver module they were thrown from. This is particularly
>     of use for code that is connecting to multiple databases using
>     different drivers.

You are placing a requirement on all database modules here -- this
won't work out. Better to catch the driver specific exceptions and
reraise them using the base classes you define in sql.py.

> Connection Object
>
>     close()
>     commit()
>     cursor()
>
>         As per DB API 2.0.
>
>     rollback(savepoint=None)
>
>         Rollback the current transaction entirely, or to the given
>         savepoint if a savepoint is passed. The savepoint object
>         must have been returned by the savepoint method during the
>         current transaction.
>
>         The rollback method will raise a NotSupportedError exception
>         if the driver doesn't support transactions.

I don't think that the concept of a savepoint is widely supported.

>     quote(object)
>
>         Returns an ANSI SQL quoted version of the given value as a
>         string. For example:
>
>         >>> print con.quote(42)
>         42
>         >>> print con.quote("Don't do that!")
>         'Don''t do that!'
>
>         Note that quoted dates often have an RDBMS dependent syntax
>         (eg. "TO_DATE('01-Jan-2001 12:01','DD-MMM-YYYY H24:MI')" for
>         Oracle). I need to track down the official ANSI SQL 1992
>         or 1999 compliant syntax for specifying date, time, and
>         datetime datatypes as strings (if it exists).
>         The quote method is invaluable for generating logs of SQL
>         commands or for dynamically generating SQL queries.
>
>         This method may be made a module level function as opposed
>         to a Connection method if it can be shown that string quoting
>         will always be RDBMS independent.

What purpose does this method have? The DB API style way of passing
arguments which need to be SQL escaped is by binding parameters -- not
explicitly in the SQL command string.

>     execute(operation,[seq_of_parameters])
>
>         A Cursor is created and its execute method is called.
>         Returns the newly created cursor.
>
>         for row in con.execute('select actor,year from sketches'):
>             [ ... ]
>
>         Insert note about cursor creation overheads and how to
>         avoid them here.
>
>         What should the default format be for bind variables?

Please don't reintroduce this DB API 1.0 feature. Cursors can easily be
created explicitly and the .execute method called on these objects.

>     driver_connection()
>
>         Returns the unwrapped connection object if the driver
>         is a DB API 1.0 or DB API 2.0 compliant driver. This is
>         required for accessing RDBMS specific features.
>         Returns self if the connection object is unwrapped.
>
>     capabilities
>
>         A dictionary describing the capabilities of the driver.
>         Currently defined values are:
>
>         apilevel
>
>             String constant stating the supported DB API level
>             of the driver. Currently only the strings '1.0',
>             '2.0' and '3.0' are allowed.
>
>         threadsafety
>
>             As per DB API 2.0.
>
>         rollback
>
>             1 if the driver supports the rollback() method.
>             0 if the driver does not support the rollback() method.
>
>         nextset
>
>             1 if the driver's cursors support the nextset() method.
>             0 if the nextset() method is not supported.
>
>         default_transaction_level
>
>             The default transaction isolation level for this
>             driver.

Good idea -- but an application will have to successfully connect to a
data source first in order to query these parameters... this may not be
ideal.
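The catch-and-reraise approach suggested above for exceptions might be
sketched like this. The attribute-based lookup of the driver's
exception classes is an assumption (DB API 2.0 drivers do expose their
exceptions as module attributes, but only one class of the hierarchy is
translated here for brevity).

```python
class Error(Exception):
    """Base class of the sql module's exceptions, as in DB API 2.0."""

class OperationalError(Error):
    pass

def call_driver(driver_module, func, *args):
    """Run func, translating the driver's OperationalError into the
    wrapper's own class so callers never need to know which driver
    raised it.  A full version would cover the whole DB API
    exception hierarchy."""
    try:
        return func(*args)
    except driver_module.OperationalError as exc:
        raise OperationalError(str(exc)) from exc
```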
>     set_transaction_level(level)
>
>         Set the transaction isolation level to the given level,
>         or a more restrictive isolation level. A NotSupportedError
>         exception is raised if the given or a more restrictive
>         isolation level cannot be set.
>
>         Returns the transaction isolation level actually set.

I don't think we need this in the generic interface -- transactions are
not even widely supported, and transaction isolation level is something
which is even less standardized among databases.

>     autocommit(flag)
>
>         Do we want an autocommit method? It would need to default
>         to false for backwards compatibility, and remembering
>         to turn autocommit on is as easy as explicitly calling
>         commit().

We don't :-)

>     savepoint()
>
>         Sets a savepoint in the current transaction, or throws a
>         NotSupportedError exception. Returns an object which may
>         be passed to the rollback method to rollback the transaction
>         to this point.

See above: Not needed.

> Cursor Object
>
>     The cursor object becomes an iterator after its execute
>     method has been called. Rows are retrieved using the driver's
>     fetchmany(arraysize) method.
>
>     execute(operation,sequence_of_parameters)
>
>         As per the executemany method in the DB API 2.0 spec.
>         Is there any need for both execute and executemany in this
>         API?

Yes, because sequence_of_strings (.execute()) is a subset of
sequence_of_sequences (.executemany()); we need two APIs to tell the
difference.

>         Setting arraysize to 1 will call the driver's fetch() method
>         rather than fetchmany().
>
>         Returns self, so the following code is valid:
>
>         for row in mycursor.execute('select actor,year from sketches'):
>             [ ... ]
>
>         What should the default format be for bind variables?

A layer can easily map the different formats to one and also fiddle the
parameters right if necessary. For simplicity, I'd suggest using
positional markers such as '?'.
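Such a mapping layer might look like this minimal sketch, rewriting '?'
markers into a driver's paramstyle. Only the DB API 2.0 'format' and
'numeric' target styles are shown, and a real version would also have
to skip markers that appear inside string literals; the function name
is hypothetical.

```python
def convert_qmark(operation, paramstyle):
    """Rewrite '?' positional markers into the given DB API 2.0
    paramstyle ('format' -> %s, 'numeric' -> :1, :2, ...); any other
    style passes the markers through unchanged."""
    out = []
    n = 0
    for ch in operation:
        if ch == '?':
            n += 1
            if paramstyle == 'format':
                out.append('%s')
            elif paramstyle == 'numeric':
                out.append(':%d' % n)
            else:
                # qmark drivers need no translation.
                out.append(ch)
        else:
            out.append(ch)
    return ''.join(out)
```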
> callproc(procedure,[parameters])
> arraysize
> description
> rowcount
> close()
> setinputsizes(size)
> setoutputsizes(size[,column])
>
> As per DB API 2.0
>
> driver_cursor()
>
> Return the unwrapped cursor object as produced by the driver.
> This may be required to access driver specific features.
>
> next()
>
> Return the Row object for the next row from the currently
> executing SQL statement. As per DB API 2.0 spec, except
> a StopIteration exception is raised when the result set
> is exhausted.
>
> __iter__()
>
> Returns self.
>
> nextset()
>
> I guess as per DB API 2.0 spec.
>
> warnings
>
> List of Warning exceptions generated so far by the currently
> executing statement. This list is cleared automatically by the
> execute and callproc methods.
>
> clear_warnings()
>
> Erase the list of warnings. Same as 'del cursor.warnings[:]'
> Is this method required?

Not needed.

> raise_warnings
>
> If set to 0, Warning exceptions are not raised and instead
> only appended to the warnings list.
>
> If set to 1, Warning exceptions are raised as they occur.
>
> Defaults to 1

Not sure about this one. Warnings tend to be important during
.execute() and .fetch(), but not at connection time. Testing
.warnings should do the trick in most cases.

> Row Object
>
> When a Cursor is iterated over, it returns Row objects.
> dtuple.py (http://www.lyra.org/greg/python/) provides such an
> implementation already.
>
> [index_or_key]
>
> Retrieve a field from the Row. If index_or_key is an integer,
> the column of the field is referenced by number (with the first
> column index 0). If index_or_key is a string, the column is
> referenced by its lowercased name (lowercased to avoid problems
> with the differing methods vendors use to capitalize their column
> names).

How is the interface supposed to find the right column name then?

> Type Objects and Constructors
>
> As per DB API 2.0 spec.
>
> Do we need Date, Time and DateTime classes, or just DateTime?

Yes, because SQL defines them.
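For reference, DB API 2.0 already names three separate constructors (Date, Time, Timestamp), and its appendix sketches a ticks-based fallback along these lines. This is only a simplification built on the standard time module; real drivers typically substitute a richer type such as mxDateTime.

```python
import time

# Ticks-based date/time constructors in the spirit of the DB API 2.0
# appendix. The epoch date used to anchor Time() is arbitrary; only
# the time-of-day part of that value is meaningful.

def Date(year, month, day):
    return time.mktime((year, month, day, 0, 0, 0, 0, 0, -1))

def Time(hour, minute, second):
    return time.mktime((2001, 1, 1, hour, minute, second, 0, 0, -1))

def Timestamp(year, month, day, hour, minute, second):
    return time.mktime((year, month, day, hour, minute, second, 0, 0, -1))
```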
> Need to author the 'Date PEP'. It would be nice if there was a
> more intelligent standard Date class in the Python core that
> we could leverage. If we don't, people will start using the
> sql module for the more intelligent DateTime object it would
> need to provide even if they aren't using databases.

Many database interfaces are already actively using mxDateTime -- why
reinvent the wheel? Note that the DB API spec defines the date/time
interfaces needed by database modules. You can basically plug in any
compatible implementation (such as mxDateTime or the examples provided
in DB API 2.0).

> ConnectionPool Object
>
> A connection pool, to be documented.

Not sure whether this is a good idea... connection pooling tends to
have uncontrollable side-effects.

> Transaction Isolation Levels
>
> Insert description of dirty reads, non-repeatable reads and
> phantom reads, probably stolen from the JDBC 3.0 documentation.
>
> The following symbolic constants are defined to describe the
> four transaction isolation levels defined by SQL99. Note that
> they are in numeric order so comparison operators can be
> safely used to test for restrictiveness. Note that these definitions
> are taken from the JDBC API 3.0 documentation by Sun.
>
> TRANSACTION_NONE
>
> No transaction isolation is guaranteed.
>
> TRANSACTION_READ_UNCOMMITTED
>
> Allows transactions to see uncommitted changes to the data.
> This means that dirty reads, nonrepeatable reads, and
> phantom reads are possible.
>
> TRANSACTION_READ_COMMITTED
>
> Means that any changes made inside a transaction are not visible
> outside the transaction until the transaction is committed. This
> prevents dirty reads, but nonrepeatable reads and phantom
> reads are still possible.
>
> TRANSACTION_REPEATABLE_READ
>
> Disallows dirty reads and nonrepeatable reads. Phantom reads are
> still possible.
>
> TRANSACTION_SERIALIZABLE
>
> Specifies that dirty reads, nonrepeatable reads, and phantom
> reads are prevented.
Not needed; see above.

> Test Suite
>
> A common test suite will be part of the implementation, to
> allow driver authors and driver evaluators to test and exercise
> the systems. Two possibilities to start with are the test suites
> in psycopg by Federico Di Gregorio and mxODBC by eGenix.com Software.
> Example DDL for various systems will need to be provided.

Note that test.py in mxODBC is mainly meant to test the database
backends for various data types and other SQL incompatibilities (the
testing of the mxODBC interface is a nice side-effect, though ;-).

> Rationale
>
> The module is called sql.py, to avoid any ambiguity with non-relational
> or non-SQL compliant database interfaces. This also nicely limits
> the scope of the project. Other suggestions are 'sqlutil' and 'rdbms',
> since 'sql' may refer to the language itself.

You may want to have a look at a small project I started (but which
didn't really get off the ground) called dbinfo.py on my Python Pages.
It tries to help with writing cross-database applications by providing
abstract query interfaces... I think that this may be more in sync
with what your real intention is here. Perhaps you could use it as a
base for your sql.py?!

> Previous versions of the DB API have been intentionally kept lean to
> make it simpler to develop and maintain drivers, as a richer feature
> set could be implemented by a higher level wrapper and maintained in
> a single place rather than in every API compliant driver. As this
> revision provides a single place to maintain code, these features
> can now be provided without placing a burden on the driver authors.
>
> Existing DB API 1.0 and 2.0 drivers can be used to power the backend
> of this module. This means that there will be a full complement of
> drivers available from day 1 of this module's release without placing
> a burden on driver developers and maintainers.
>
> The core of the API is identical to the Python Database API v2.0
> (PEP-249).
> This API is already familiar to Python programmers and
> is a proven solid foundation.
>
> Python previously defined a common relational database API that
> was implemented by all drivers, and application programmers accessed
> the drivers directly. This caused the following issues:
>
> It was difficult to write code that connected to multiple
> database vendors' databases. Each separate driver defined
> its own hierarchy of exceptions that needed to be handled,

Note that a simple way to solve this problem is to require the
database module to expose the used exception structure as attributes
of the connection objects. You can then write:

    c = connection.cursor()
    try:
        c.execute(...)
    except connection.Error:
        pass

> their own datetime class and their own set of datatype constants.
>
> Platform independent code could not be written simply,
> due to the differing parameter styles used by bind variables.
> This also caused problems with publishing example code and tutorials.
>
> The API remained minimal, as any new features would need to
> be implemented by all driver maintainers. The DB-SIG felt
> most feature suggestions would be better implemented by higher
> level wrappers, such as that defined by this PEP.
>
> Language Comparison
>
> The design proposed in this PEP is the same as that used by Java,
> where a relational database API is shipped as part of the core language,
> but requires the installation of 3rd party drivers to be used.
>
> Perl has a similar arrangement of API and driver separation. Perl
> does not ship with PerlDBI or drivers as part of the core language,
> but it is well documented and trivial to add using the CPAN tools.
>
> PHP has a separate API for each database vendor, although work is
> underway (completed?) to define a common abstraction layer
> similar to Java, Perl and Python. All of the drivers ship as
> part of the core language.
>
> References
>
> [1] PEP-248 and PEP-249
>
> --
> Stuart Bishop
>
> _______________________________________________
> DB-SIG maillist - DB-SIG@python.org
> http://mail.python.org/mailman/listinfo/db-sig

--
Marc-Andre Lemburg
CEO eGenix.com Software GmbH
______________________________________________________________________
Consulting & Company: http://www.egenix.com/
Python Software: http://www.lemburg.com/python/

From daniel.dittmar@sap.com Mon Aug 6 13:20:00 2001
From: daniel.dittmar@sap.com (Dittmar, Daniel)
Date: Mon, 6 Aug 2001 14:20:00 +0200
Subject: [DB-SIG] Cursor.executemany() and Cursor.nextset()
Message-ID: 

> I can even imagine the justification for providing
> executemany() in the
> case of data manipulation queries (INSERT, UPDATE, DELETE),
> though it's
> not entirely clear to me that such batching isn't more appropriately
> handled by client code.

Some drivers can implement this more efficiently than a simple loop
could.

> So, if I were implementing the API, what should my code do if
> executemany() is invoked with multiple parameter sets on a stored
> procedure which returns multiple result sets? The choices I

There was a discussion about this topic a few weeks/months ago, with
the result being something like this:

- the semantics of executemany() with regard to SELECT/CALL are very
  database specific. Writers of drivers are reluctant to hide specific
  features of the underlying database.

- one possible solution was to allow multiple result sets, but not to
  require them. Thus databases which allow batch SELECTs could create
  a single result set; other databases would create one for each
  parameter set. Portable client code would have to check for multiple
  result sets.

> It seems to me that the API is either overly ambitious or
> insufficiently
> specified (or both) in this area.

It is insufficiently specified, but for a reason. This should probably
be stated more clearly in the specification.
> I strongly recommend that the use of executemany for commands
> producing
> result sets be deprecated, and eventually forbidden (assuming
> the method
> is retained at all).

Depending on the database, there may be no way to check whether a
statement returns a result set short of executing it.

Daniel

--
Daniel Dittmar daniel.dittmar@sap.com SAP DB, SAP Labs Berlin
http://www.sapdb.org/

From bkline@rksystems.com Mon Aug 6 19:23:37 2001
From: bkline@rksystems.com (Bob Kline)
Date: Mon, 6 Aug 2001 14:23:37 -0400 (EDT)
Subject: [DB-SIG] Cursor.executemany() and Cursor.nextset()
In-Reply-To: 
Message-ID: 

On Mon, 6 Aug 2001, Dittmar, Daniel wrote:

Hi, Daniel. Thanks for your reply.

> > So, if I were implementing the API, what should my code do if
> > executemany() is invoked with multiple parameter sets on a stored
> > procedure which returns multiple result sets? The choices I
>
> - one possible solution was to allow multiple result sets, but not
> to require them. Thus databases which allow batch SELECTs could
> create a single result set, other databases would create one for
> each parameter set. Portable client code would have to check for
> multiple result sets.

This does not address the problem I raised in my previous message.
Perhaps an example will make it clearer what this problem is.
Furthermore, the problem exists regardless of the number of result
sets returned by the stored procedure.

Imagine a stored procedure p which takes two parameters and returns a
single result set. Client code invokes the stored procedure using the
executemany() method on a cursor object:

    c = conn.cursor()
    c.executemany("EXEC p ?,?", [[40,75], [100,125]])

Now, what does the driver code do? It uses the first set of parameter
values to invoke the stored procedure once. So far, its behavior seems
reasonable. The result set produced by this invocation is available
for retrieval with fetchXXX() calls. What about the second set of
parameter values?
If the implementation of executemany() proceeds on to invoke the stored procedure a second time, the result set for the first invocation will have been discarded, unless the driver fetches all the data and saves it internally. This breaks the standard DBMS model of deferring retrieval of a recordset's data until the client asks for it. It means that some operations which would be possible using that model (for example, submit a query whose result set would exhaust the resources available to the driver, and incrementally fetch the rows as the client code requests them) would become impossible. Surely we don't need a database API specification today which discards the benefits of the client/server paradigm. Suppose, then, that the driver decides to stop after the first submission of the requested action to the back end, waiting until the client has retrieved all of the data from that invocation. If the client never requests all of the rows in the result set for the first invocation, then the second invocation will never take place, nor will any of the DML operations expected for that invocation have happened. Note that it is not possible in the general case for the driver to detect whether the 'operation' argument to the executemany() method represents DQL, DML, or a combination. In fact, it cannot reliably predict what a subsequent invocation of a stored procedure will produce with other parameter sets, based on what it has seen happen for the first invocation of that same procedure. The fact that a stored procedure invocation can produce multiple result sets just complicates the problem further. Therefore it seems apparent to me that while executemany() may be safely useful for DML the API should clearly state that use of this method for operations which produce result sets constitutes undefined behavior, and may result in an exception being raised by the driver. 
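If the spec went the way proposed above, a driver's executemany() could guard itself roughly as follows. This is a hypothetical sketch: execute_one and its boolean return value are invented stand-ins for driver internals, not part of any real driver.

```python
# Hypothetical driver-side guard implementing the proposed rule:
# executemany() raises as soon as any invocation of the operation
# produces a result set.

class InterfaceError(Exception):
    pass

def executemany(execute_one, operation, seq_of_parameters):
    # Invoke the operation once per parameter set, refusing to
    # continue once an invocation yields a result set.
    for parameters in seq_of_parameters:
        if execute_one(operation, parameters):
            raise InterfaceError("executemany() is undefined for "
                                 "operations that produce result sets")
```

With a DML-only callback this runs every parameter set silently; with one that reports a result set, the first invocation raises.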
> > It seems to me that the API is either overly ambitious or
> > insufficiently specified (or both) in this area.
>
> It is insufficiently specified, but for a reason.

I'd be interested in knowing what that reason was.

> > I strongly recommend that the use of executemany for commands
> > producing result sets be deprecated, and eventually forbidden
> > (assuming the method is retained at all).
>
> Depending on the database, there may be no possibility to check
> whether a statement returns a result set short of executing it.

In the general case, this is probably true for all but the most
trivial database products. However, at the point when the driver
recognizes that one of the invocations of the operation requested by a
call to executemany() has produced a result set, it should be free to
raise an exception, and the programmer should have been warned by the
API's specification that such an exception is possible (indeed,
likely). His data may be left in a logically inconsistent state if his
stored procedure mixes DQL and DML, but that wouldn't be surprising
for usage which the API should describe as "undefined behavior." If
you disagree, please tell me exactly what the driver implementation
should do in the case I described above.

--
Bob Kline
mailto:bkline@rksystems.com
http://www.rksystems.com

From andy@dustman.net Tue Aug 7 00:20:51 2001
From: andy@dustman.net (Andy Dustman)
Date: Mon, 6 Aug 2001 19:20:51 -0400 (EDT)
Subject: [DB-SIG] Cursor.executemany() and Cursor.nextset()
In-Reply-To: 
Message-ID: 

On Mon, 6 Aug 2001, Bob Kline wrote:

> Imagine a stored procedure p which takes two parameters and returns a
> single result set. Client code invokes the stored procedure using the
> executemany() method on a cursor object:
>
> c = conn.cursor()
> c.executemany("EXEC p ?,?", [[40,75], [100,125]])
>
> Now, what does the driver code do?

It breaks. executemany() is mostly for doing multi-row INSERTs, i.e.
    c.executemany("INSERT INTO foo (spam,eggs) VALUES (%s,%s)", \
                  [(40,75), (100,125)]) # params for MySQLdb

Despite having two rows of data, only one INSERT (of two rows) is
performed. (Or at least, this is the general idea. Likely, not all
database client libraries are capable of this; they may perform two
separate INSERTs instead.) Your EXEC statement does not have a
repetitive set of parameters, so its behavior is undefined. Most
likely your client library would not know what to do with this either,
since it is being passed two sequences.

So... don't use executemany() in this case. INSERT is the only SQL
statement I have ever used with executemany(), AFAIK.

--
Andy Dustman PGP: 0xC72F3F1D
@ .net http://dustman.net/andy
I'll give spammers one bite of the apple, but they'll have to guess
which bite has the razor blade in it.

From andy@dustman.net Tue Aug 7 00:47:27 2001
From: andy@dustman.net (Andy Dustman)
Date: Mon, 6 Aug 2001 19:47:27 -0400 (EDT)
Subject: [DB-SIG] DB API 3.0 strawman
In-Reply-To: 
Message-ID: 

On Mon, 6 Aug 2001, Stuart Bishop wrote:

> Connection Object
...
> quote(object)
>
> Returns an ANSI SQL quoted version of the given value as a
> string. For example:
>
> >>> print con.quote(42)
> 42
> >>> print con.quote("Don't do that!")
> 'Don''t do that!'
>
> Note that quoted dates often have a RDBMS dependent syntax
> (eg. "TO_DATE('01-Jan-2001 12:01','DD-MMM-YYYY H24:MI')" for
> Oracle). I need to track down the official ANSI SQL 1992
> or 1999 compliant syntax for specifying date/time/and datetime
> datatypes as strings (if it exists).
>
> The quote method is invaluable for generating logs of SQL
> commands or for dynamically generating SQL queries.
>
> This method may be made a module level function as opposed
> to a Connection method if it can be shown that string quoting
> will always be RDBMS independent.

MySQLdb-1.0.0 will expose a conn.literal() method. It seems that your
quote() does the same thing.
I rejected quote() as a name because literal() seems to better reflect
the result, in my mind, at least. I could live with quote() as a name,
though. (I've never needed such a method for my own code, but it was
requested, and adding it allowed me to simplify some of the cursor
code a bit.)

It's also a connection object method, rather than a module function,
because MySQL makes use of character sets for its quoting mechanism,
and these are maintained on a per-connection basis. Actually, there is
a module function version of it as well, which more or less assumes
ASCII. (The method calls mysql_real_escape_string(), the function
calls mysql_escape_string(). In fact, the function is really an
unbound method, in a sense...)

> Row Object
>
> When a Cursor is iterated over, it returns Row objects.
> dtuple.py (http://www.lyra.org/greg/python/) provides such an
> implementation already.
>
> [index_or_key]
>
> Retrieve a field from the Row. If index_or_key is an integer,
> the column of the field is referenced by number (with the first
> column index 0). If index_or_key is a string, the column is
> referenced by its lowercased name (lowercased to avoid problems
> with the differing methods vendors use to capitalize their column
> names).

An interesting idea. MySQLdb has different Cursor classes, one of
which is DictCursor, which returns dictionaries instead of tuples. If
I understand the above correctly, Row objects would act as tuples AND
dictionaries.

However, I think column names should not be molested. Most SQL
implementations are case-insensitive, AFAIK, but MySQL is not. FOO and
foo are two different columns (though it is a dumb idea to do this).
Perhaps each driver should supply its own Row class, and some
implementations could choose to use case-insensitive keys.

(Lack of comments does not mean I agree on other points...)
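To make the tuple-AND-dictionary behaviour concrete, here is one possible Row sketch, with case-insensitive keys as in the strawman (a driver preferring MySQL's case-sensitive names would simply drop the .lower() calls). The class is invented for illustration; dtuple.py, which the strawman references, is a fuller implementation.

```python
class Row:
    # Field access by position (integer) or by column name (string);
    # names are lowercased, per the strawman's proposal.
    def __init__(self, column_names, values):
        self._values = tuple(values)
        self._index = {}
        for position, name in enumerate(column_names):
            self._index[name.lower()] = position

    def __getitem__(self, index_or_key):
        if isinstance(index_or_key, str):
            return self._values[self._index[index_or_key.lower()]]
        return self._values[index_or_key]

row = Row(['Actor', 'Year'], ('John Cleese', 1969))
row[0]        # -> 'John Cleese'
row['actor']  # -> 'John Cleese'
row['YEAR']   # -> 1969
```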
--
Andy Dustman PGP: 0xC72F3F1D
@ .net http://dustman.net/andy
I'll give spammers one bite of the apple, but they'll have to guess
which bite has the razor blade in it.

From bkline@rksystems.com Tue Aug 7 01:38:28 2001
From: bkline@rksystems.com (Bob Kline)
Date: Mon, 6 Aug 2001 20:38:28 -0400 (EDT)
Subject: [DB-SIG] Cursor.executemany() and Cursor.nextset()
In-Reply-To: 
Message-ID: 

On Mon, 6 Aug 2001, Andy Dustman wrote:

> On Mon, 6 Aug 2001, Bob Kline wrote:
>
> > Imagine a stored procedure p which takes two parameters and returns a
> > single result set. Client code invokes the stored procedure using the
> > executemany() method on a cursor object:
> >
> > c = conn.cursor()
> > c.executemany("EXEC p ?,?", [[40,75], [100,125]])
> >
> > Now, what does the driver code do?
>
> It breaks. executemany() is mostly for doing multi-row INSERTs, i.e.
>
> c.executemany("INSERT INTO foo (spam,eggs) VALUES (%s,%s)", \
> [(40,75), (100,125)]) # params for MySQLdb
>
> Despite having two rows of data, only one INSERT (of two rows) is
> performed. (Or at least, this is the general idea. Likely, not all
> database client libraries are capable of this; they may perform two
> separate INSERTs instead.) Your EXEC statement does not have a
> repetitive set of parameters, so its behavior is undefined. Most
> likely your client library would not know what to do with this
> either, since it is being passed two sequences.

I certainly don't expect my client library to have anything to do with
interpreting the Python code. That's the task of the driver
implementation, to translate this statement into code which the client
library for the DBMS understands. In this case the API spec says that,
absent an appropriate array operation, the driver is supposed to
invoke the action identified in the first argument to executemany once
for each of the sets of parameters supplied in the second argument.
What I am trying to say (and so are you, it appears) is that such
behavior doesn't make sense if the SQL behind that action contains
SELECT statements. What's more, the API should say so, since - as you
point out - we know that the driver code will break under such a
condition.

> So... don't use executemany() in this case. INSERT is the only SQL
> statement I have ever used with executemany(), AFAIK.

This is good advice, and sounds like an informal way of presenting
what I would like the API to say.

What am I missing here? Has the spec for this API been handed to the
SIG by some testy deity who would be offended by suggestions for
improvement? What's so terrible about having the language of the spec
amended to identify those conditions for which use of this method
constitutes undefined behavior, subject to having an exception raised?

Is there a formal process that I should be following for submitting
proposed amendments to the spec? If so, I'll be happy to use it. Just
let me know.

--
Bob Kline
mailto:bkline@rksystems.com
http://www.rksystems.com

From nikinakof@gmx.net Tue Aug 7 09:04:56 2001
From: nikinakof@gmx.net (nikinakof)
Date: Tue, 7 Aug 2001 11:04:56 +0300
Subject: [DB-SIG] python + mysql on slackware 8.0
Message-ID: <20010807110456.2d93be6a.nikinakof@gmx.net>

Can anyone help me configure python + mysql on Slackware 8.0? What can
I do?

--
Nihat Ciddi
nikinakof@gmx.net
http://nikinakof.cjb.net

From daniel.dittmar@sap.com Tue Aug 7 09:19:16 2001
From: daniel.dittmar@sap.com (Dittmar, Daniel)
Date: Tue, 7 Aug 2001 10:19:16 +0200
Subject: [DB-SIG] Cursor.executemany() and Cursor.nextset()
Message-ID: 

> What I am trying to say (and so are you, it appears) is that such
> behavior doesn't make sense if the SQL behind that action contains
> SELECT statements.

At least in SAP DB, array SELECTs are possible and amount to the UNION
of all the single SELECTs.
I would prefer the spec to say that the behaviour is undefined rather
than requiring the driver to raise an exception. Mostly because I
don't want to look at the SQL command at all.

Daniel

--
Daniel Dittmar daniel.dittmar@sap.com SAP DB, SAP Labs Berlin
http://www.sapdb.org/

From eric@nekhem.com Tue Aug 7 10:18:42 2001
From: eric@nekhem.com (Eric Bianchi)
Date: Tue, 7 Aug 2001 12:18:42 +0300
Subject: [DB-SIG] DB API 3.0 strawman
In-Reply-To: <20010807091253.B470@neo>; from eric@.nekhem.com on Tue, Aug 07, 2001 at 09:12:53AM +0200
References: <20010807091253.B470@neo>
Message-ID: <20010807121842.A2207@arioch.nekhem.fr>

> Stuart Bishop wrote:
> >
> > Here is an update to the DB API update I posted to this
> > list a few weeks ago. It has moved its focus away from
> > being a wrapper for 1.0 and 2.0 compliant drivers,
> > and more into a new DB API standard (although this wrapper
> > functionality is still there).

So basically, what is in the DB API 3.0 and what is in sql.py? Do the
developers keep the old DB API 2.0 specifications or must we develop
this API in our drivers? Who is going to write sql.py? How can we help
write this wrapper?

> > I'm after feedback, particularly if this is the direction to
> > head and if it should proceed to an official PEP.
> >
> > PEP: TBA
> > Title: DB API 3.0
> > Version: $Revision: 1.9 $
> > Author: zen@shangri-la.dropbear.id.au (Stuart Bishop)
> > Discussions-To: db-sig@python.org
> > Status: strawman
> > Type: Standards Track
> > Requires: 234,248,249, Date PEP
> > Created: 18-May-2001
> > Post-History: Never
> >
> > Abstract
> >
> > Python has had a solid Database API for SQL databases since 1996 [1].
> > This revision introduces iterator support and attempts to fix
> > known issues with the interface. This version provides a module
> > to be integrated into the core Python distribution, which provides
> > the interface to 3rd party RDBMS drivers.
> > This module can use
> > DB API 1.0 and 2.0 compliant modules as the drivers, although it
> > is expected that these drivers will be updated to support this
> > API more fully.
> >
> > Copyright
> >
> > This document has been placed in the public domain, except for
> > those sections attributed to other sources.
> >
> > Specification
> >
> > Use of this wrapper requires a DB API v2.0 compliant driver to
> > be installed in sys.path. The wrapper may support DB API v1.0
> > drivers (to be determined).
> >
> > Module Interface (sql.py)
> >
> > connect(driver,dsn,user,password,host,database)
> >
> > driver -- The name of the DB API compliant driver module.
> > dsn -- Datasource Name
> > user -- username (optional)
> > password -- password (optional)
> > host -- hostname or network address (optional)
> > database -- database name (optional)
> >
> > Returns a Connection object. This will be a wrapper object
> > for DB API 1.0 or DB API 2.0 compliant drivers. DB API 3.0
> > compliant drivers will have a Connection instance returned
> > unmodified.
> >
> > Does the connect function need to accept arbitrary keyword
> > arguments?

Well, I thought as a wrapper, we should use a connection string like
the one specified in JDBC, which is "standard". Something like

    c = connect(driver, url)

where url = "dbapi://name:surname@host:port/dbname". This connect
function would be part of sql.py.

> > Exceptions (unchanged from DB API 2.0)
> >
> > Exceptions thrown need to be subclasses of
> > those defined in the sql module, to allow calling code
> > to catch them without knowing which driver module
> > they were thrown from. This is particularly of use
> > for code that is connecting to multiple databases
> > using different drivers.
> >
> > Connection Object
> >
> > close()
> > commit()
> > cursor()
> >
> > As per DB API 2.0.
> >
> > rollback(savepoint=None)
> >
> > Rollback the current transaction entirely, or to the given
> > savepoint if a savepoint is passed.
> > The savepoint object
> > must have been returned by the savepoint method during the
> > current transaction.
> >
> > The rollback method will raise a NotSupportedError exception
> > if the driver doesn't support transactions.
>
> I don't think that the concept of a savepoint is widely supported.

True! Although it is a nice concept.

> > quote(object)
> >
> > Returns an ANSI SQL quoted version of the given value as a
> > string. For example:
> >
> > >>> print con.quote(42)
> > 42
> > >>> print con.quote("Don't do that!")
> > 'Don''t do that!'
> >
> > Note that quoted dates often have a RDBMS dependent syntax
> > (eg. "TO_DATE('01-Jan-2001 12:01','DD-MMM-YYYY H24:MI')" for
> > Oracle). I need to track down the official ANSI SQL 1992
> > or 1999 compliant syntax for specifying date/time/and datetime
> > datatypes as strings (if it exists).
> >
> > The quote method is invaluable for generating logs of SQL
> > commands or for dynamically generating SQL queries.
> >
> > This method may be made a module level function as opposed
> > to a Connection method if it can be shown that string quoting
> > will always be RDBMS independent.

Well, I have the same opinion as Federico Di Gregorio. For security
reasons, the quote function should be part of the driver itself.

> > execute(operation,[seq_of_parameters])
> >
> > A Cursor is created and its execute method is called.
> > Returns the newly created cursor.
> >
> > for row in con.execute('select actor,year from sketches'):
> > [ ... ]
> >
> > Insert note about cursor creation overheads and how to
> > avoid them here.
> >
> > What should the default format be for bind variables?
>
> Please don't reintroduce this DB API 1.0 feature. Cursors can
> easily be created explicitly and the .execute method called on
> these objects.

I strongly agree with Lemburg.

> > driver_connection()
> >
> > Returns the unwrapped connection object if the driver
> > is a DB API 1.0 or DB API 2.0 compliant driver.
> > This is
> > required for accessing RDBMS specific features.
> > Returns self if the connection object is unwrapped.

What is the purpose of this function exactly?

> > capabilities
> >
> > A dictionary describing the capabilities of the driver.
> > Currently defined values are:
> >
> > apilevel
> >
> > String constant stating the supported DB API level
> > of the driver. Currently only the strings '1.0',
> > '2.0' and '3.0' are allowed.
> >
> > threadsafety
> >
> > As per DB API 2.0.
> >
> > rollback
> >
> > 1 if the driver supports the rollback() method
> > 0 if the driver does not support the rollback() method
> >
> > nextset
> >
> > 1 if the driver's cursors support the nextset() method
> > 0 if the nextset() method is not supported.
> >
> > default_transaction_level
> >
> > The default transaction isolation level for this
> > driver.

Maybe we should add a formats variable which is the list of types
supported by the driver.

> > set_transaction_level(level)
> >
> > Set the transaction isolation level to the given level,
> > or a more restrictive isolation level. A NotSupported
> > exception is raised if the given or a more restrictive
> > isolation level cannot be set.
> >
> > Returns the transaction isolation level actually set.
>
> > autocommit(flag)
> >
> > Do we want an autocommit method? It would need to default
> > to false for backwards compatibility, and remembering
> > to turn autocommit on is as easy as explicitly calling
> > commit().

I don't think we need that either.

> > savepoint()
> >
> > Sets a savepoint in the current transaction, or throws a
> > NotSupportedError exception. Returns an object which may
> > be passed to the rollback method to roll the transaction
> > back to this point.
>
> > Cursor Object
> >
> > The cursor object becomes an iterator after its execute
> > method has been called. Rows are retrieved using the driver's
> > fetchmany(arraysize) method.
> >
> > execute(operation,sequence_of_parameters)
> >
> > As per the executemany method in the DB API 2.0 spec.
> > Is there any need for both execute and executemany in this
> > API?
>
> Yes, because sequence_of_strings (.execute()) is a subset of
> sequence_of_sequences (.executemany()); we need two APIs to tell
> the difference.

> > Setting arraysize to 1 will call the driver's fetch() method
> > rather than fetchmany().
> >
> > Returns self, so the following code is valid
> >
> > for row in mycursor.execute('select actor,year from sketches'):
> > [ ... ]
> >
> > What should the default format be for bind variables?
>
> > callproc(procedure,[parameters])
> > arraysize
> > description
> > rowcount
> > close()
> > setinputsizes(size)
> > setoutputsizes(size[,column])
> >
> > As per DB API 2.0
> >
> > driver_cursor()
> >
> > Return the unwrapped cursor object as produced by the driver.
> > This may be required to access driver specific features.
> >
> > next()
> >
> > Return the Row object for the next row from the currently
> > executing SQL statement. As per DB API 2.0 spec, except
> > a StopIteration exception is raised when the result set
> > is exhausted.
> >
> > __iter__()
> >
> > Returns self.

What is that?

> > nextset()
> >
> > I guess as per DB API 2.0 spec.
> >
> > warnings
> >
> > List of Warning exceptions generated so far by the currently
> > executing statement. This list is cleared automatically by the
> > execute and callproc methods.
> >
> > clear_warnings()
> >
> > Erase the list of warnings. Same as 'del cursor.warnings[:]'
> > Is this method required?
>
> Not needed.

OK, not needed.

> > raise_warnings
> >
> > If set to 0, Warning exceptions are not raised and instead
> > only appended to the warnings list.
> >
> > If set to 1, Warning exceptions are raised as they occur.
> >
> > Defaults to 1
>
> > Row Object
> >
> > When a Cursor is iterated over, it returns Row objects.
> > dtuple.py (http://www.lyra.org/greg/python/) provides such an > > implementation already. > > > > [index_or_key] > > > > Retrieve a field from the Row. If index_or_key is an integer, > > the column of the field is referenced by number (with the first > > column index 0). If index_or_key is a string, the column is > > referenced by its lowercased name (lowercased to avoid problems > > with the differing methods vendors use to capitalize their column > > names). > > Type Objects and Constructors > > > > As per DB API 2.0 spec. > > > > Do we need Date, Time and DateTime classes, or just DateTime? > > > > Need to author the 'Date PEP'. It would be nice if there was a > > more intelligent standard Date class in the Python core that > > we could leverage. If we don't, people will start using the > > sql module for the more intelligent DateTime object it would > > need to provide even if they aren't using databases. > > > ConnectionPool Object > > > > A connection pool, to be documented. > This object could be included in sql.py. > > > Transaction Isolation Levels > > > > Insert description of dirty reads, non repeatable reads and > > phantom reads, probably stolen from JDBC 3.0 documentation. > > > > The following symbolic constants are defined to describe the > > four transaction isolation levels defined by SQL99. Note that > > they are in numeric order so comparison operators can be > > safely used to test for restrictiveness. Note that these definitions > > are taken from the JDBC API 3.0 documentation by Sun. > > > > TRANSACTION_NONE > > > > No transaction isolation is guaranteed. > > > > TRANSACTION_READ_UNCOMMITTED > > > > Allows transactions to see uncommitted changes to the data. > > This means that dirty reads, nonrepeatable reads, and > > phantom reads are possible. > > > > TRANSACTION_READ_COMMITTED > > > > Means that any changes made inside a transaction are not visible > > outside the transaction until the transaction is committed. 
This > > prevents dirty reads, but nonrepeatable reads and phantom > > reads are still possible. > > > > TRANSACTION_REPEATABLE_READ > > > > Disallows dirty reads and nonrepeatable reads. Phantom reads are > > still possible. > > > > TRANSACTION_SERIALIZABLE > > > > Specifies that dirty reads, nonrepeatable reads, and phantom > > reads are prevented. Is it really useful to have so many transaction levels? > > > Test Suite > > > > A common test suite will be part of the implementation, to > > allow driver authors and driver evaluators to test and exercise > > the systems. Two possibilities to start with are the test suites > > in psycopg by Federico Di Gregorio and mxODBC by eGenix.com Software. > > Example DDL for various systems will need to be provided. It could be in sql.test. Is it possible to post a test suite file on this mailing list so every driver developer can have a look? > > > Rationale > > > > The module is called sql.py, to avoid any ambiguity with non-relational > > or non-SQL compliant database interfaces. This also nicely limits > > the scope of the project. Other suggestions are 'sqlutil' and 'rdbms', > > since 'sql' may refer to the language itself. > > > Previous versions of the DB API have been intentionally kept lean to > > make it simpler to develop and maintain drivers, as a richer feature > > set could be implemented by a higher level wrapper and maintained in > > a single place rather than in every API compliant driver. As this > > revision provides a single place to maintain code, these features > > can now be provided without placing a burden on the driver authors. > > > > Existing DB API 1.0 and 2.0 drivers can be used to power the backend > > of this module. This means that there will be a full complement of > > drivers available from day 1 of this module's release without placing > > a burden on driver developers and maintainers. Do we really need to make a huge wrapper? It is gonna spoil each driver's specific functions. 
I think sql.py should have a connect function and that's all. > > > > The core of the API is identical to the Python Database API v2.0 > > (PEP-249). This API is already familiar to Python programmers and > > is a proven solid foundation. > > > > Python previously defined a common relational database API that > > was implemented by all drivers, and application programmers accessed > > the drivers directly. This caused the following issues: > > > > It was difficult to write code that connected to multiple > > database vendor's databases. Each separate driver defined > > its own hierarchy of exceptions that needed to be handled, > > Note that a simple way to solve this problem is to require the > database module to expose the used exception structure as attributes > of the connection objects. > > > their own datetime class and their own set of datatype constants. > > > > Platform independent code could not be written simply, > > due to the differing parameter styles used by bind variables. > > This also caused problems with publishing example code and tutorials. > > > > The API remained minimal, as any new features would need to > > be implemented by all driver maintainers. The DB-SIG felt > > most feature suggestions would be better implemented by higher > > level wrappers, such as that defined by this PEP. > > > > Language Comparison > > > > The design proposed in this PEP is the same as that used by Java, > > where a relational database API is shipped as part of the core language, > > but requires the installation of 3rd party drivers to be used. > > > > Perl has a similar arrangement of API and driver separation. Perl > > does not ship with PerlDBI or drivers as part of the core language, > > but it is well documented and trivial to add using the CPAN tools. > > > > PHP has a separate API for each database vendor, although work is > > underway (completed?) to define a common abstraction layer > > similar to Java, Perl and Python. 
All of the drivers ship as > > part of the core language. > > > > References > > > > [1] PEP-248 and PEP-249 > > > > -- > > Stuart Bishop > > > > _______________________________________________ > > DB-SIG maillist - DB-SIG@python.org > > http://mail.python.org/mailman/listinfo/db-sig -- Thierry Michel Eric Bianchi From mal@lemburg.com Tue Aug 7 09:21:38 2001 From: mal@lemburg.com (M.-A. Lemburg) Date: Tue, 07 Aug 2001 10:21:38 +0200 Subject: [DB-SIG] Cursor.executemany() and Cursor.nextset() References: Message-ID: <3B6FA512.BA8002E8@lemburg.com> [.executemany() and stored procedures] > > > So... don't use executemany() in this case. INSERT is the only SQL > > statement I have ever used with executemany(), AFAIK. > > This is good advice, and sounds like an informal way of presenting what > I would like the API to say. What am I missing here? Has the spec for > this API been handed to the SIG by some testy deity who would be > offended by suggestions for improvement? What's so terrible about > having the language of the spec amended to identify those conditions for > which use of this method constitutes undefined behavior, subject to > having an exception raised? > > Is there a formal process that I should be following for submitting > proposed amendments to the spec? If so, I'll be happy to use it. Just > let me know. If you only want to clarify certain bits in the DB API spec 2.0, then a patch against the PEP version of it submitted to SourceForge would be nice. New features will require a new version of the spec, e.g. 2.1, but I'm not sure whether we've already got enough requirements for such a step. 
-- Marc-Andre Lemburg CEO eGenix.com Software GmbH ______________________________________________________________________ Consulting & Company: http://www.egenix.com/ Python Software: http://www.lemburg.com/python/ From bkline@rksystems.com Tue Aug 7 13:08:39 2001 From: bkline@rksystems.com (Bob Kline) Date: Tue, 7 Aug 2001 08:08:39 -0400 (EDT) Subject: [DB-SIG] Cursor.executemany() and Cursor.nextset() In-Reply-To: Message-ID: On Tue, 7 Aug 2001, Dittmar, Daniel wrote: > > What I am trying to say (and so are you, it appears) is that such > > behavior doesn't make sense if the SQL behind that action contains > > SELECT statements. > > At least in SAP DB, array SELECTS are possible and amount to the > UNION of all the single SELECTs. > > I would prefer the spec to say that the behaviour is undefined > rather than requiring the driver to raise an exception. Please go back and re-read the message to which you are responding. I did not ask that the spec require the driver to raise an exception. I said: "Therefore it seems apparent to me that while executemany() may be safely useful for DML the API should clearly state that use of this method for operations which produce result sets constitutes undefined behavior, and may result in an exception being raised by the driver." > Mostly because I don't want to look at the SQL command at all. Precisely. If you don't parse the first argument to the executemany() method (and, for most data sources, even if you do), you will not be able to distinguish consistently between cases which could be safely mapped to the semantic equivalent of an array SELECT and those which cannot, for all possible data sources. The fact that there may be a DBMS with such restricted capabilities (for example, no stored procedures or user-defined functions) for which it might be possible to make such a distinction does not mean that the API specification can safely ignore the general case. 
-- Bob Kline mailto:bkline@rksystems.com http://www.rksystems.com From bkline@rksystems.com Tue Aug 7 13:40:50 2001 From: bkline@rksystems.com (Bob Kline) Date: Tue, 7 Aug 2001 08:40:50 -0400 (EDT) Subject: [DB-SIG] Cursor.executemany() and Cursor.nextset() In-Reply-To: <3B6FA512.BA8002E8@lemburg.com> Message-ID: On Tue, 7 Aug 2001, M.-A. Lemburg wrote: > > Is there a formal process that I should be following for submitting > > proposed amendments to the spec? If so, I'll be happy to use it. Just > > let me know. > > If you only want to clarify certain bits in the DB API spec 2.0, > then a patch against the PEP version of it submitted to SourceForge > would be nice. Thanks, I'll be glad to do that. I notice that although the file names in the text content of the hyperlinks to the PEPs at [1] have the extension ".txt" the actual targets of the hyperlinks are in fact HTML pages. I assume that it would be preferable to submit patches against the plain text version (and the PEP on PEPs[2] confirms this assumption). Unfortunately, the link to the SourceForge CVS documents [3] is broken. Could you supply a replacement link that is working? Thanks. [1] http://python.sourceforge.net/peps/ [2] http://python.sourceforge.net/peps/pep-0001.html [3] http://cvs.sourceforge.net/cgi-bin/cvsweb.cgi/python/nondist/peps/?cvsroot=python -- Bob Kline mailto:bkline@rksystems.com http://www.rksystems.com From bmoore@dhh.state.la.us Tue Aug 7 16:28:23 2001 From: bmoore@dhh.state.la.us (Bill Moore) Date: Tue, 07 Aug 2001 10:28:23 -0500 Subject: [DB-SIG] Clipper database? Message-ID: Hi. I'm new to Python and this list. Has anyone written an interface to dBASE or Clipper tables? I would like to be able to view field names and browse the contents of the data file. Optionally, it would be nice to be able to actually lock a record and modify its contents. Thanks. 
Bill Moore From fe@phgroup.com Tue Aug 7 16:35:10 2001 From: fe@phgroup.com (Frederic Vander Elst) Date: Tue, 07 Aug 2001 16:35:10 +0100 Subject: [DB-SIG] Clipper database? References: Message-ID: <3B700AAE.A20FB678@phgroup.com> Hi, enclosed the self-documenting code. Readonly I'm afraid, you'll have to use some ODBC (mxODBC) for writing. Original comments at top: > A module for reading dbf files. Tested ONLY on linux. > > Author : Michal Spalinski (mspal@fuw.edu.pl) > Copying policy: unlimited (do what you want with it) > Warranty : none whatsoever > I've hacked it a bit and added some error-checking. If you improve it greatly, please let me/ the list have the update ! -freddie to reply, try adding a v as the second letter of my uname. Bill Moore wrote: > Hi. I'm new to Python and this list. Has anyone written an interface to dBASE or Clipper tables? > > I would like to be able to view field names and browse the contents of the data file. Optionally, it would be nice to be able to actually lock a record and modify its contents. > > Thanks. > > Bill Moore > > _______________________________________________ > DB-SIG maillist - DB-SIG@python.org > http://mail.python.org/mailman/listinfo/db-sig 
Content-Disposition: inline; filename="dbf.py"

# -*- tab-width: 4 -*-
#
# A module for reading dbf files. Tested ONLY on linux.
#
# Author        : Michal Spalinski (mspal@fuw.edu.pl)
# Copying policy: unlimited (do what you want with it)
# Warranty      : none whatsoever
#

"""
This is a module for reading dbf files. The code fragment

    import dbf
    db = dbf.dbf('mydata.dbf')

creates a dbf object db associated with an existing dbf file
'mydata.dbf'. Then:

-- db.fields returns a list of tuples describing the fields
-- db.nrecs returns the number of records
-- db[n] returns a tuple containing the contents of record number n
   (0 <= n < nrecs). All field contents are represented as strings ...
-- db.status() prints some data about the dbf file

So, for example, to list the first two fields of all the records in
mydata.dbf one could try:

    for r in db:
        print r[0], r[1]

Good luck!
"""

from struct import unpack
from string import lower, strip

def int_o_strip(x):
    if strip(x) == "":
        return 0
    else:
        try:
            return int(strip(x))
        except ValueError:
            return float(strip(x))

def float_o_strip(x):
    if strip(x) == "":
        return 0
    else:
        return float(strip(x))

class dbf:
    def __init__(self, fname):
        self.f = open(fname, 'rb')
        head = self.f.read(32)
        if (head[0] != '\003') and (head[0] != '\203'):
            raise TypeError, 'Not a Dbase III+ file!'
        (self.nrecs, self.hlen, self.rlen) = unpack('4xihh20x', head)
        fdalen = (self.hlen - 33)/32
        # read the field descriptor array
        fda = []
        for k in range(fdalen):
            fda.append(self.f.read(32))
        # interpret the field descriptors
        self.fields = []
        for fd in fda:
            bytes = unpack('12c4xBB14x', fd)
            field = ''
            for i in range(11):
                if bytes[i] == '\000':
                    break
                field = field + bytes[i]
            type = bytes[11]
            length = bytes[12]
            dec = bytes[13]
            self.fields.append((field, type, length, dec))
        self.hook = {}
        for i in range(len(self.fields)):
            field, type, length, dec = self.fields[i]
            pos = i
            if type == 'N':
                if dec == 0:
                    func = int_o_strip
                else:
                    func = float_o_strip
            else:
                func = str
            self.hook[lower(field)] = (pos, func, type, length, dec)

    # record numbers go from 0 to self.nrecs-1
    def _get(self, recno):
        offs = self.hlen + recno*self.rlen
        self.f.seek(offs, 0)
        return self.f.read(self.rlen)

    def __getitem__(self, recno):
        if recno < 0 or recno >= self.nrecs:
            raise IndexError, 'No such record.'
        else:
            raw = self._get(recno)
            res, pos = [], 0
            for field in self.fields:
                end = pos + field[2]
                item = raw[pos+1:end+1]
                pos = end
                res.append(item)
            # return tuple(res)
            return row(tuple(res), self.hook)

    def status(self):
        print ''
        print 'Header length     :', self.hlen
        print 'Record length     :', self.rlen
        print 'Number of records :', self.nrecs
        print ''
        print '%-12s %-12s %-12s %-12s' % ('Field','Type','Length','Decimal')
        print '%-12s %-12s %-12s %-12s' % ('-----','----','------','-------')
        for k in self.fields:
            print '%-12s %-12s %-12s %-12s' % k
        print ''

class row:
    def __init__(self, ptuple=None, hook=None):
        if ptuple:
            self.data = ptuple
        else:
            raise TypeError, 'empty row not allowed!'
        if hook:
            self.hook = hook
        else:
            self.hook = None

    def __repr__(self):
        return `self.data`

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        return self.data[i]

    def __setitem__(self, i, item):
        raise TypeError, 'immutable row so far'

    def __delitem__(self, i):
        raise TypeError, 'immutable row so far'

    def __getattr__(self, aname):
        if not self.hook:
            raise TypeError, 'need a hook'
        if not self.hook.has_key(lower(aname)):
            raise ValueError, ('"%s": no such field !' % aname)
        pos, func, type, len, dec = self.hook[lower(aname)]
        return func(self.data[pos])

From mal@lemburg.com Tue Aug 7 16:56:52 2001 From: mal@lemburg.com (M.-A. Lemburg) Date: Tue, 07 Aug 2001 17:56:52 +0200 Subject: [DB-SIG] Clipper database? References: Message-ID: <3B700FC4.7BC39D56@lemburg.com> Bill Moore wrote: > > Hi. I'm new to Python and this list. Has anyone written an interface to dBASE or Clipper tables? > There is a .dbf file interface somewhere out there (search on python.org or Parnassus). I think it only supports reading the files though. > I would like to be able to view field names and browse the contents of the data file. Optionally, it would be nice to be able to actually lock a record and modify its contents. > That should be possible by using mxODBC after having installed the MSDAC kit from Microsoft (assuming you are using Windows). On Unix things are not as nice, but I think that the unixODBC project has written some ODBC drivers for file based databases which you could access using mxODBC as well. 
-- Marc-Andre Lemburg CEO eGenix.com Software GmbH ______________________________________________________________________ Consulting & Company: http://www.egenix.com/ Python Software: http://www.lemburg.com/python/ From jmdeschamps@cvm.qc.ca Tue Aug 7 19:22:34 2001 From: jmdeschamps@cvm.qc.ca (jmdeschamps) Date: Tue, 7 Aug 2001 14:22:34 -0400 Subject: [DB-SIG] (no subject) Message-ID: From sgriffitts@fwsymphony.org Tue Aug 7 21:23:23 2001 From: sgriffitts@fwsymphony.org (Scott Griffitts) Date: Tue, 07 Aug 2001 15:23:23 -0500 Subject: [DB-SIG] Newbie DSN-less ODBC help Message-ID: Hello. I am able to connect to an Access db using: import dbi, odbc DSN = 'sysdsn1' s = odbc.odbc(DSN) but I would like to make a DSN-less connection. Could someone please explain how to do that for a db located at I:\\test1\test2.mdb Thanks, Scott From mal@lemburg.com Wed Aug 8 08:46:01 2001 From: mal@lemburg.com (M.-A. Lemburg) Date: Wed, 08 Aug 2001 09:46:01 +0200 Subject: [DB-SIG] Newbie DSN-less ODBC help References: Message-ID: <3B70EE39.72ED40A@lemburg.com> Scott Griffitts wrote: > > Hello. > > I am able to connect to an Access db using: > > import dbi, odbc > DSN = 'sysdsn1' > s = odbc.odbc(DSN) > > but I would like to make a DSN-less connection. Could someone please explain how to do that for a db located at > I:\\test1\test2.mdb Check the mxODBC docs for a section on file DSNs. You basically use the name FILEDSN instead of DSN to connect to a file. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH ______________________________________________________________________ Consulting & Company: http://www.egenix.com/ Python Software: http://www.lemburg.com/python/ From matt@zope.com Wed Aug 8 15:59:47 2001 From: matt@zope.com (Matthew T. Kromer) Date: Wed, 08 Aug 2001 10:59:47 -0400 Subject: [DB-SIG] DCOracle2 Beta 5 Message-ID: <3B7153E3.5873D11B@zope.com> DCORACLE2 DCOracle2 is Zope Corporation's second generation of its python bindings for Oracle. 
DCOracle2 is written to use the OCI 8 API, not the OCI 7 API that DCOracle uses, giving DCOracle2 compatibility with more advanced features in newer generations of the Oracle software. RELEASE DCOracle2 Beta 5 is now released, available at http://www.zope.org/Members/matt/dco2 in ZIP and tar.gz formats. Beta 5 is a unified source/binary distribution; the 'install.py' script will attempt to copy the proper binary in place for execution on your system. CONTENTS o DCOracle2 source o DCOracle2 binaries for Oracle 8i (Windows, Solaris, Linux) o ZOracleDA (the Zope Database Adapter) o DCOracleStorage (an Oracle Storage for Zope) INSTALLATION After unpacking, either run 'install.py' to copy an appropriate binary file from the binaries directory to the DCOracle2 directory, or type 'make' to attempt to make a new binary (unix only). For use as ZOracleDA, copy the DCO2 directory to {ZOPE ROOT}/lib/python/Products/ZOracleDA (renaming the directory from DCO2 to ZOracleDA). AVAILABILITY DCOracle2 is available from http://www.zope.org/Members/matt/dco2 immediately. BUG REPORTING Please use the issue tracker at http://www.zope.org/Members/matt/dco2/Tracker to report any issues with DCOracle2. From pierre@saiph.com Wed Aug 8 16:52:47 2001 From: pierre@saiph.com (pierre@saiph.com) Date: Wed, 8 Aug 2001 17:52:47 +0200 (CEST) Subject: [DB-SIG] (no subject) In-Reply-To: References: Message-ID: <15217.24655.90611.922625@petru.saiph.fr> jmdeschamps writes: > > > _______________________________________________ > DB-SIG maillist - DB-SIG@python.org > http://mail.python.org/mailman/listinfo/db-sig No subject, no message. Bravo. Next steps in nothingness would be: No signature No sending of the message. But I guess that's still a little bit too radical. Maybe within 10 years or so, with a lot of concentration? 
-- Pierre Imbaud 12 Rue des Bosquets 91480 Quincy Sous Sénart France Tel: 01 69 00 94 57 Fax 09 47 From bkline@rksystems.com Wed Aug 8 22:22:23 2001 From: bkline@rksystems.com (Bob Kline) Date: Wed, 8 Aug 2001 17:22:23 -0400 (EDT) Subject: [DB-SIG] ODBC driver source Message-ID: I understand that the non-commercial ODBC driver listed on the DB-SIG page for DB-API 2.0 compliant drivers does not have a maintainer currently. What I do not understand is how to find the source code for it. Any clues? -- Bob Kline mailto:bkline@rksystems.com http://www.rksystems.com From thierry@nekhem.com Thu Aug 9 11:43:55 2001 From: thierry@nekhem.com (Thierry MICHEL) Date: Thu, 9 Aug 2001 11:43:55 +0100 Subject: [DB-SIG] Description cursor attribute Message-ID: <20010809114355.A1611@nekhem.com> Hi all, I added a lot of features to PoPy 2.0.x to be *really* DB API 2.0 compliant, but I have some problems with the PostgreSQL C interface. Could someone give me more information about the different items of the description cursor attribute? With PostgreSQL, I have problems getting: * precision * scale * null_ok <- very important!!! I can't find C API functions which allow me to get this kind of information. Thanks for your help. -- Thierry MICHEL Nekhem Technologies France thierry@nekhem.com thierry@nekhem.fr thierry@gnue.org Agroparc Batiment A 200, rue Michel de Montaigne 84911 AVIGNON Cedex 09 Tel: +33(0)490840154 "Sainthood in the Church of Emacs requires living a life of purity, but in the Church of Emacs, this does not require celibacy." Richard Stallman. From stanton@livsform.com Thu Aug 9 22:36:24 2001 From: stanton@livsform.com (Stanton Schell) Date: Thu, 09 Aug 2001 17:36:24 -0400 Subject: [DB-SIG] Persistent Database Connections Message-ID: Is there any good documentation on how to create and use persistent db connections (in imported modules, etc.)? I am using MySQL (3.23.36) and Py2.1.1. All the documentation I have found on persistence only deals with persistent data. 
I am fairly in the dark (newbie), so even a couple lines explaining how to easily pass a connection from one area of my program to another will help... Thanks! -stanton -- stanton.schell | stanton@livsform.com | o. 202.328.1967 livsform | http://www.livsform.com/ | c. 703.989.6591 f. 202.318.8870 From andy@dustman.net Mon Aug 13 15:37:18 2001 From: andy@dustman.net (Andy Dustman) Date: Mon, 13 Aug 2001 10:37:18 -0400 (EDT) Subject: [DB-SIG] Persistent Database Connections In-Reply-To: Message-ID: On Thu, 9 Aug 2001, Stanton Schell wrote: > Is there any good documentation on how to create and use persistent db > connections (in imported modules, etc.)? I am using MySQL (3.23.36) and > Py2.1.1. All the documentation I have found on persistence only deals with > persistent data. I am fairly in the dark (newbie), so even a couple lines > explaining how to easily pass a connection from one area of my program to > another will help... For MySQLdb (and in general for DB API databases): db = MySQLdb.connect(...) db is a database Connection object which may be passed around as needed. To perform queries, you need a Cursor object: c = db.cursor() Queries are performed with execute(): c.execute(query_string, optional_sequence_of_parameters) Results, if any, can be returned by the various fetchXXX() methods: row = c.fetchone() rows = c.fetchmany(n) # n optional all_rows = c.fetchall() A row is usually a tuple of column values, or None if no more are available. Methods returning a sequence of rows return an empty sequence if no more are available. db.commit() # or rollback should be obvious, though not usually needed or useful for MySQL. -- Andy Dustman PGP: 0xC72F3F1D @ .net http://dustman.net/andy I'll give spammers one bite of the apple, but they'll have to guess which bite has the razor blade in it. 
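Andy's walkthrough above applies to any DB API 2.0 module, not just MySQLdb. Here is a hedged end-to-end sketch using the standard library's sqlite3 module (itself a DB API 2.0 driver) in place of MySQLdb, with an invented table and data; note that sqlite3 uses the qmark (?) paramstyle while MySQLdb uses the format (%s) paramstyle:

```python
import sqlite3

# sqlite3 stands in for MySQLdb here; the calls are the same DB API 2.0 ones.
db = sqlite3.connect(":memory:")   # the Connection object you pass around
c = db.cursor()                    # queries go through a Cursor object

c.execute("CREATE TABLE users (id INTEGER, name TEXT)")
c.execute("INSERT INTO users VALUES (?, ?)", (1, "alice"))
c.execute("INSERT INTO users VALUES (?, ?)", (2, "bob"))
db.commit()                        # make the inserts permanent

c.execute("SELECT id, name FROM users ORDER BY id")
row = c.fetchone()                 # one row as a tuple, or None when exhausted
rest = c.fetchall()                # remaining rows as a list (empty if none)
```

Passing the single `db` connection object between modules, as Andy describes, is all "persistence" requires here: each module calls `db.cursor()` to get its own cursor.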
From ryanw@inktomi.com Tue Aug 14 19:41:17 2001 From: ryanw@inktomi.com (Ryan Weisenberger) Date: Tue, 14 Aug 2001 11:41:17 -0700 Subject: [DB-SIG] BLOBs Message-ID: <4.3.2.7.2.20010814113930.00c20310@inkt-3.inktomi.com> I'm trying to insert and retrieve BLOBs from a SQL Server db using mxODBC. I'm assuming I need some function equivalent to addSlashes() to escape the binary stream before I insert it in the db. Does anyone know of an easy way to do this in Python? Is there such a function? Thanks, Ryan _________________________ Ryan Weisenberger Software Developer ryanw@inktomi.com (650) 653-4595 _________________________ From mal@lemburg.com Tue Aug 14 20:59:27 2001 From: mal@lemburg.com (M.-A. Lemburg) Date: Tue, 14 Aug 2001 21:59:27 +0200 Subject: [DB-SIG] BLOBs References: <4.3.2.7.2.20010814113930.00c20310@inkt-3.inktomi.com> Message-ID: <3B79831F.C33AEB1A@lemburg.com> Ryan Weisenberger wrote: > > I'm trying to insert and retrieve BLOBs from a SQL Server db using > mxODBC. I'm assuming I need some function equivalent to addSlashes() to > escape the binary stream before I insert it in the db. Does anyone know of > an easy way to do this in Python? Is there such a function? You normally don't need to do that as long as you pass the BLOB as bound parameter: cursor.execute('INSERT INTO ... VALUES (?,?)', (id, blob)) mxODBC will then pass the BLOB data to the ODBC driver and the driver will apply any escaping which might be necessary (if at all). 
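The bound-parameter point can be demonstrated with any DB API module using the qmark paramstyle. Below is a minimal sketch with the standard library's sqlite3 standing in for mxODBC (table and data invented): binary data containing quotes and NUL bytes round-trips with no addSlashes()-style escaping, because the value travels to the driver separately from the SQL text.

```python
import sqlite3

# Deliberately awkward payload: quotes and an embedded NUL byte.
blob = b"binary 'with quotes' and \x00 bytes"

db = sqlite3.connect(":memory:")
cur = db.cursor()
cur.execute("CREATE TABLE files (id INTEGER, data BLOB)")

# No manual escaping: the driver handles the binary value itself.
cur.execute("INSERT INTO files VALUES (?, ?)", (1, blob))
cur.execute("SELECT data FROM files WHERE id = ?", (1,))
stored = cur.fetchone()[0]
```

Whether a given ODBC driver behind mxODBC does this correctly is driver-specific, as the follow-up about the MySQL ODBC driver later in this thread shows.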
-- Marc-Andre Lemburg CEO eGenix.com Software GmbH ______________________________________________________________________ Consulting & Company: http://www.egenix.com/ Python Software: http://www.lemburg.com/python/ From ryanw@inktomi.com Tue Aug 14 21:15:05 2001 From: ryanw@inktomi.com (Ryan Weisenberger) Date: Tue, 14 Aug 2001 13:15:05 -0700 Subject: [DB-SIG] BLOBs In-Reply-To: <3B79831F.C33AEB1A@lemburg.com> References: <4.3.2.7.2.20010814113930.00c20310@inkt-3.inktomi.com> Message-ID: <4.3.2.7.2.20010814131443.00b57600@inkt-3.inktomi.com> That did the trick! Thanks for all the help. - Ryan At 09:59 PM 8/14/01 +0200, M.-A. Lemburg wrote: >Ryan Weisenberger wrote: > > > > I'm trying to insert and retrieve BLOBs from a SQL Server db using > > mxODBC. I'm assuming I need some function equivalent to addSlashes() to > > escape the binary stream before I insert it in the db. Does anyone know of > > an easy way to do this in Python? Is there such a function? > >You normally don't need to do that as long as you pass the BLOB >as bound parameter: > >cursor.execute('INSERT INTO ... VALUES (?,?)', (id, blob)) > >mxODBC will then pass the BLOB data to the ODBC driver and the >driver will apply any escaping which might be necessary (if at all). > >-- >Marc-Andre Lemburg >CEO eGenix.com Software GmbH >______________________________________________________________________ >Consulting & Company: http://www.egenix.com/ >Python Software: http://www.lemburg.com/python/ _________________________ Ryan Weisenberger Software Developer ryanw@inktomi.com (650) 653-4595 _________________________ From merk@pacificnet.net Wed Aug 15 11:11:53 2001 From: merk@pacificnet.net (Ian) Date: Wed, 15 Aug 2001 03:11:53 -0700 Subject: [DB-SIG] problem with SELECT Message-ID: <5.1.0.14.0.20010815030757.01777758@popserver2.pacificnet.net> Hey all, I'm trying to write a script that updates a database. I don't seem to have any problems inserting data into the tables. 
But I'm trying to get the ID for the records and it's not working. I tried using cursor.fetchone and all I get returned is None, and I also tried using cursor.execute and I just get an empty tuple. The SQL statement works fine when I go into postgres and run it there. I've tried doing 'SELECT NEXTVAL("id_sequence")' and I also tried something simple like 'SELECT * FROM table' both of the above work fine in psql, but give me the problems I described. Does anyone have any suggestions? From fog@mixadlive.com Wed Aug 15 11:17:47 2001 From: fog@mixadlive.com (Federico Di Gregorio) Date: 15 Aug 2001 12:17:47 +0200 Subject: [DB-SIG] problem with SELECT In-Reply-To: <5.1.0.14.0.20010815030757.01777758@popserver2.pacificnet.net> References: <5.1.0.14.0.20010815030757.01777758@popserver2.pacificnet.net> Message-ID: <997870667.28398.7.camel@lola> On 15 Aug 2001 03:11:53 -0700, Ian wrote: > Hey all, > > I'm trying to write a script that updates a database. I don't seem to have > any problems inserting data into the tables. > > But I'm trying to get the ID for the records and it's not working. > > I tried using cursor.fetchone and all I get returned is None, and I also > tried using cursor.execute and I just get an empty tuple. > > The SQL statement works fine when I go into postgres and run it there. > > I've tried doing 'SELECT NEXTVAL("id_sequence")' and I also tried something > > simple like 'SELECT * FROM table' you should use *both* .execute() and .fetchone(). if 'c' is a cursor, you can then do: c.execute('SELECT NEXTVAL("id_sequence")') id = c.fetchone()[0] note that you should wrap the code in a try/except block, to catch possible errors. hope this helps, federico -- Federico Di Gregorio MIXAD LIVE Chief of Research & Technology fog@mixadlive.com Debian GNU/Linux Developer & Italian Press Contact fog@debian.org All programmers are optimists. -- Frederick P. Brooks, Jr. 
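Federico's execute-then-fetchone pattern, written out as a runnable sketch. The standard library's sqlite3 stands in for the PostgreSQL driver, and a plain SELECT replaces NEXTVAL() since SQLite has no sequences; also note that in PostgreSQL the sequence name should be in single quotes, NEXTVAL('id_sequence'), because double quotes denote an identifier rather than a string.

```python
import sqlite3

# sqlite3 stands in for a PostgreSQL DB API driver; table and value invented.
db = sqlite3.connect(":memory:")
c = db.cursor()
c.execute("CREATE TABLE t (id INTEGER)")
c.execute("INSERT INTO t VALUES (42)")

# The pattern: execute(), then fetchone(), wrapped in try/except.
try:
    c.execute("SELECT id FROM t")   # with PostgreSQL: SELECT NEXTVAL('id_sequence')
    new_id = c.fetchone()[0]        # fetchone() returns a tuple; take column 0
except sqlite3.Error as e:          # each driver exposes its own Error hierarchy
    new_id = None
```

Calling fetchone() without a preceding execute() on the same cursor is exactly the situation that returns None or an empty result, which matches the symptoms described above.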
From merk@pacificnet.net Wed Aug 15 11:41:59 2001 From: merk@pacificnet.net (Ian) Date: Wed, 15 Aug 2001 03:41:59 -0700 Subject: [DB-SIG] problem with SELECT In-Reply-To: <997870667.28398.7.camel@lola> References: <5.1.0.14.0.20010815030757.01777758@popserver2.pacificnet.net> <5.1.0.14.0.20010815030757.01777758@popserver2.pacificnet.net> Message-ID: <5.1.0.14.0.20010815033630.02863e98@popserver2.pacificnet.net> At 12:17 PM 8/15/2001 +0200, you wrote: >On 15 Aug 2001 03:11:53 -0700, Ian wrote: > > Hey all, > > > > I'm trying to write a script that updates a database. I don't seem to have > > any problems inserting data into the tables. > > > > But I'm trying to get the ID for the records and it's not working. > > > > I tried using cursor.fetchone and all I get returned is None, and I also > > tried using cursor.execute and I just get an empty tuple. > > > > The SQL statement works fine when I go into postgres and run it there. > > > > I've tried doing 'SELECT NEXTVAL("id_sequence")' and I also tried > something > > simple like 'SELECT * FROM table' > >you should use *both* .execute() and .fetchone(). if 'c' is a cursor, >you can then do: > > c.execute('SELECT NEXTVAL("id_sequence")') > id = c.fetchone()[0] > >note that you should wrap the code in a try/except block, to catch >possible errors. > >hope this helps, >federico > >-- >Federico Di Gregorio Thanks, That did it :) Ian From mal@lemburg.com Wed Aug 15 11:38:26 2001 From: mal@lemburg.com (M.-A. Lemburg) Date: Wed, 15 Aug 2001 12:38:26 +0200 Subject: [DB-SIG] BLOBs References: Message-ID: <3B7A5122.5E5227FC@lemburg.com> moored@reed.edu wrote: > > On Tue, 14 Aug 2001, M.-A. Lemburg wrote: > > > You normally don't need to do that as long as you pass the BLOB > > as bound parameter: > > > > cursor.execute('INSERT INTO ... VALUES (?,?)', (id, blob)) > > > > mxODBC will then pass the BLOB data to the ODBC driver and the > > driver will apply any escaping which might be necessary (if at all). 
> > I have found that the mysql driver doesn't escape single quotes (') > and that I have to do a string replace "\'" -> "\\\'" before my > sql server could deal with a random datastream. I recommend that the > first user test his driver to make sure the bound variable thingy does > all that is claimed. A good test for this should be running the test.py script included in mxODBC (in the mx.ODBC.Misc subpackage). It tests the different capabilities of the driver. It should also catch and display the above problem you see with the MyODBC driver. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH ______________________________________________________________________ Consulting & Company: http://www.egenix.com/ Python Software: http://www.lemburg.com/python/ From ian@mulvany.net Wed Aug 22 19:46:05 2001 From: ian@mulvany.net (ian@mulvany.net) Date: 22 Aug 2001 11:46:05 -0700 Subject: [DB-SIG] installing MySQLdb.py Message-ID: <20010822184605.2223.cpmta@c001.snv.cp.net> Hi, I am new to using DB modules in Python. 
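Returning to the quoting issue in the mxODBC thread above: driver-side parameter binding can be exercised directly. A hedged sketch with the stdlib sqlite3 driver standing in (mxODBC and MyODBC themselves are not assumed here, and the table name is invented): with qmark-style bound parameters the value never passes through string pasting, so embedded single quotes need no manual escaping.

```python
import sqlite3

# Checking driver-side quoting through bound parameters, with stdlib sqlite3
# standing in for an ODBC driver; the table name is invented. The value is
# never spliced into the SQL text, so its single quotes need no escaping.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE msgs (id INTEGER, data TEXT)")

tricky = "it's a 'quoted' datastream"      # breaks naive string pasting
cur.execute("INSERT INTO msgs VALUES (?, ?)", (1, tricky))

cur.execute("SELECT data FROM msgs WHERE id = ?", (1,))
roundtrip = cur.fetchone()[0]
print(roundtrip == tricky)   # -> True
```

A driver that fails this round trip is doing its own string interpolation instead of true binding — the symptom reported with MyODBC above.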
I am trying to install the MySQLdb.py module and am having the difficulties shown below. If anyone could explain the problem and suggest lines of attack I would be really grateful. Many thanks, -Ian ---- this is the version and system info: [root@linux MySQL-python-0.3.5]# python Python 2.0 (#1, Apr 11 2001, 19:18:08) [GCC 2.96 20000731 (Linux-Mandrake 8.0 2.96-0.48mdk)] on linux-i386 this is the result I have been staring at after attempting to install MySQLdb.py: [root@linux MySQL-python-0.3.5]# python setup.py build running build running build_py not copying MySQLdb.py (output up-to-date) not copying CompatMysqldb.py (output up-to-date) not copying _mysql_const/__init__.py (output up-to-date) not copying _mysql_const/CR.py (output up-to-date) not copying _mysql_const/FIELD_TYPE.py (output up-to-date) not copying _mysql_const/ER.py (output up-to-date) not copying _mysql_const/FLAG.py (output up-to-date) not copying _mysql_const/REFRESH.py (output up-to-date) not copying _mysql_const/CLIENT.py (output up-to-date) running build_ext Traceback (most recent call last): File "setup.py", line 102, in ? 
extra_objects=extra_objects, File "/usr/lib/python2.0/distutils/core.py", line 138, in setup dist.run_commands() File "/usr/lib/python2.0/distutils/dist.py", line 829, in run_commands self.run_command(cmd) File "/usr/lib/python2.0/distutils/dist.py", line 849, in run_command cmd_obj.run() File "/usr/lib/python2.0/distutils/command/build.py", line 106, in run self.run_command(cmd_name) File "/usr/lib/python2.0/distutils/cmd.py", line 328, in run_command self.distribution.run_command(command) File "/usr/lib/python2.0/distutils/dist.py", line 849, in run_command cmd_obj.run() File "/usr/lib/python2.0/distutils/command/build_ext.py", line 200, in run customize_compiler(self.compiler) File "/usr/lib/python2.0/distutils/sysconfig.py", line 106, in customize_compiler (cc, opt, ccshared, ldshared, so_ext) = \ File "/usr/lib/python2.0/distutils/sysconfig.py", line 368, in get_config_vars func() File "/usr/lib/python2.0/distutils/sysconfig.py", line 280, in _init_posix raise DistutilsPlatformError, my_msg distutils.errors.DistutilsPlatformError: invalid Python installation: unable to open /usr/lib/python2.0/config/Makefile (No such file or directory) |Ian Mulvany ;0| www.mulvany.net| From dieter@sz-sb.de Thu Aug 23 07:58:34 2001 From: dieter@sz-sb.de (Dr. Dieter Maurer) Date: Thu, 23 Aug 2001 08:58:34 +0200 (CEST) Subject: [DB-SIG] installing MySQLdb.py In-Reply-To: <20010822184605.2223.cpmta@c001.snv.cp.net> References: <20010822184605.2223.cpmta@c001.snv.cp.net> Message-ID: <15236.43385.852753.359186@dieter.sz-sb.de> ian@mulvany.net writes: > I am new to using DB modules in Python. > I am trying to install in MySQLdb.py module > and am having the difficulties shown below, You need to install the Python development package. 
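Dieter's diagnosis matches the last line of the traceback: distutils wants the interpreter's config/Makefile, which is shipped by the distribution's Python development package (python-devel on Red Hat/Mandrake, python-dev on Debian — exact package names are an assumption and vary by distribution). The interpreter itself can report where it expects that file, shown here with the modern sysconfig API rather than the distutils one in the traceback:

```python
import os
import sysconfig

# The traceback above fails because /usr/lib/python2.0/config/Makefile is
# missing -- that file belongs to the Python development package (python-devel
# on Red Hat/Mandrake, python-dev on Debian; names are an assumption).
# Ask the interpreter where it expects its build Makefile to live:
makefile = sysconfig.get_makefile_filename()
print(makefile)
print("present" if os.path.exists(makefile) else
      "missing -- install your distribution's Python development package")
```

If the second line prints "missing", installing the development package and re-running setup.py build should get past the error.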
Dieter From ian@mulvany.net Mon Aug 27 20:44:27 2001 From: ian@mulvany.net (ian@mulvany.net) Date: 27 Aug 2001 12:44:27 -0700 Subject: [DB-SIG] gcc -lc flag Message-ID: <20010827194427.12702.cpmta@c001.snv.cp.net> Hello, When I try to build MySQLdb.py the build falls over at the following point: $ python setup.py build gcc -shared build/temp.linux-i686-2.0/_mysqlmodule.o -L/usr/lib/mysql -lmysqlclient -lz -o build/lib.linux-i686-2.0/_mysql.so /usr/bin/ld: cannot find -lz collect2: ld returned 1 exit status error: command 'gcc' failed with exit status 1 If I omit the '-lz' flag I can build and install but when I attempt to import MySQLdb I get the following: # python setup.py install Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.0/site-packages/MySQLdb.py", line 19, in ? import _mysql ImportError: /usr/lib/python2.0/site-packages/_mysql.so: undefined symbol: uncompress It's obvious that the -lz flag has something to do with the problem, but I am unfamiliar with the flag. Could anyone explain what the flag does and how I can get gcc to accept it, or how I can get around the problem at all? many thanks, -Ian |Ian Mulvany ;0| www.mulvany.net| From matt@zope.com Mon Aug 27 20:58:35 2001 From: matt@zope.com (Matthew T. 
Kromer) Date: Mon, 27 Aug 2001 15:58:35 -0400 Subject: [DB-SIG] gcc -lc flag References: <20010827194427.12702.cpmta@c001.snv.cp.net> Message-ID: <3B8AA66B.20609@zope.com> ian@mulvany.net wrote: >Hello, > >When I try to build MySQLdb.py the >build falls over at the following point: > >$ python setup.py build > >gcc -shared build/temp.linux-i686-2.0/_mysqlmodule.o -L/usr/lib/mysql -lmysqlclient -lz -o build/lib.linux-i686-2.0/_mysql.so >/usr/bin/ld: cannot find -lz >collect2: ld returned 1 exit status >error: command 'gcc' failed with exit status 1 > >If I omit the '-lz' flag I can build and install >but when I attempt to import MySQLdb I get the following: > ># python setup.py install > >Traceback (most recent call last): > File "", line 1, in ? > File "/usr/lib/python2.0/site-packages/MySQLdb.py", line 19, in ? > import _mysql >ImportError: /usr/lib/python2.0/site-packages/_mysql.so: undefined symbol: uncompress > >it's obvious that the -lz flag has something to do with >the problem, but I am unfamiliar with the flag. > >Could anyone explain what the flag does and how I can get gcc to accept it or how I can get around the problem at all? > >many thanks, > >-Ian > Ian, the -lz flag specifies that zlib is to be linked into the result. If you don't have the zlib RPM installed (I can't tell which distribution you are using) you can build zlib yourself. You should be able to pick up zlib from www.zlib.org. Compile and install and you should be able to link MySQLdb. From mcm@initd.net Tue Aug 28 12:46:06 2001 From: mcm@initd.net (Michele Comitini) Date: Tue, 28 Aug 2001 13:46:06 +0200 Subject: [DB-SIG] psycopg 0.96b1 Message-ID: <20010828134606.E2445@developers.mixadlive.com> A new version of psycopg with support for Binary objects and SQL binary cursors is available at http://www.initd.org/Software/psycopg. 
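To make Matthew's -lz explanation from the thread above concrete: uncompress() — the undefined symbol in the import error — is a zlib entry point, and -lz tells the linker to pull in that library (the MySQL client library uses it for its compressed protocol). Python's own zlib module wraps the same C library, so a short round trip shows what -lz links in; this is an illustration, separate from the MySQLdb build itself.

```python
import zlib

# uncompress() -- the undefined symbol in the import error above -- is a
# zlib function; -lz tells the linker to pull in that library (the MySQL
# client library uses it for its compressed protocol). Python's zlib module
# wraps the same C library:
payload = b"binary stream " * 64
packed = zlib.compress(payload)
restored = zlib.decompress(packed)
print(len(payload), len(packed))    # repetitive data compresses well
print(restored == payload)          # -> True
```

If the zlib headers and library are installed (the zlib RPM or a local build, as Matthew suggests), the -lz link step succeeds and the symbol resolves at import time.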
Psycopg is a DB-API 2.0 compliant module for the PostgreSQL RDBMS (http://www.postgresql.org); it is distributed under the GPL licence. This version adds support for using the bytea data type available in postgresql 7.1.x. Support for Binary objects is new so you may want to test it before putting it into a production environment. You can find more information on psycopg at the above URL. Remember to send any bug reports or comments about psycopg specific problems to the email addresses reported there and not on this list. Thank you. Best regards, Michele Comitini Mixad Live srl http://www.mixadlive.com initd.org http://www.initd.org From eric@nekhem.com Thu Aug 30 10:09:41 2001 From: eric@nekhem.com (Eric Bianchi) Date: Thu, 30 Aug 2001 12:09:41 +0300 Subject: [DB-SIG] Re: psycopg 0.96b1 Message-ID: <20010830120941.B949@arioch.nekhem.fr> Hi, Concerning the binary objects support in PostgreSQL 7.1, it seems to me that bytea is a long text type and can't be considered as a binary object. Binary objects are handled by low level functions like lo_create, lo_open, ... and their type is oid. According to PostgreSQL's documentation, the bytea type must be considered as a text type. PoPy handles unquoted long text, which is different from binary objects. Best Regards. 
-- Eric Bianchi From mcm@initd.net Thu Aug 30 20:49:49 2001 From: mcm@initd.net (Michele Comitini) Date: Thu, 30 Aug 2001 21:49:49 +0200 Subject: [DB-SIG] Re: psycopg 0.96b1 References: <20010830120941.B949@arioch.nekhem.fr> Message-ID: <20010830214948.F4387@developers.mixadlive.com> Hello Eric, There are two simple ways to store DB-API 2.0 Binary objects in postgresql >= 7.1. One is to store them in the text data type, which requires escaping nulls on the way in and unescaping them on the way out. The other way is using the bytea data type, which is like text but is null safe, exactly as a C char * or a python String object. Using byteas allows retrieving binary data directly from the db using BINARY cursors; moreover, unlike the lo_xxx functions, bytea fields can be accessed like any other fields, and psycopg converts them to standard buffer objects with no fuss. Without the tedious record size limit which affected postgresql < 7.1, and with variable sized text and bytea fields capable of growing up to at least 2Gb, I see little reason to use the lo_xxx functions besides backward compatibility with postgresql < 7.1. For anyone interested, a simple python example of how to store and retrieve binary files in postgresql, with and without the usage of an SQL BINARY cursor, is available in the examples dir of the psycopg source distribution. Of course you can see how to access and put data in bytea fields in the driver C source code. You can get the psycopg driver at http://www.initd.org/Software/psycopg, and remember that, as I wrote earlier, the code for binary objects is quite new, so please test it before using it in a production environment. 
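Michele's null-safety point can be illustrated with any DB-API driver that binds binary parameters. A sketch with the stdlib sqlite3 module standing in for psycopg's bytea support (the table and file names are invented): data containing NUL bytes, quotes and backslashes round-trips byte for byte through a bound parameter, with no escaping in user code.

```python
import sqlite3

# Sketch of null-safe binary storage through DB-API bound parameters, with
# the stdlib sqlite3 module standing in for psycopg/bytea (the table name is
# invented). The payload deliberately contains NUL bytes, quotes and
# backslashes -- everything that breaks naive string escaping.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE files (name TEXT, data BLOB)")

raw = bytes(range(256))
cur.execute("INSERT INTO files VALUES (?, ?)", ("t.bin", sqlite3.Binary(raw)))

cur.execute("SELECT data FROM files WHERE name = ?", ("t.bin",))
fetched = bytes(cur.fetchone()[0])
print(fetched == raw)   # -> True
```

This is the same usage pattern psycopg's examples directory demonstrates for bytea, minus the PostgreSQL-specific BINARY cursor.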
Thank you, Michele Comitini init.d http://www.initd.org On Thu, Aug 30, 2001 at 12:09:41PM +0300, Eric Bianchi wrote: > Hi, > > Concerning the binary objects support in Postgresql 7.1, It seems to me that bytea is a long text type and can't be considered as a binary object. Binary objects are handled by low level functions like lo_create, lo_open, ... and their type is oid. > > According to Postgresql's documentation, bytea type must be considered as a text type. > > PoPy handles not quoted long text which is different than binary objects. > > Best Regards. > > -- > Eric Bianchi From bga@mug.org Fri Aug 31 01:34:10 2001 From: bga@mug.org (Billy G. Allie) Date: Thu, 30 Aug 2001 20:34:10 -0400 (EDT) Subject: [DB-SIG] pyPgSQL - Version 1.5.1 is released. Message-ID: <200108310034.f7V0YAU24621@bajor.mug.org> pyPgSQL v1.5.1 has been released. It is a bug fix release to version 1.5. It corrects problems with the PgLargeObject.read() method and improves the stability of pyPgSQL in the MS Windows environment. It is available at http://sourceforge.net/projects/pypgsql. pyPgSQL is a package of two (2) modules that provide a Python DB-API 2.0 compliant interface to PostgreSQL databases. The first module, libpq, exports the PostgreSQL C API to Python. This module is written in C and can be compiled into Python or can be dynamically loaded on demand. The second module, PgSQL, provides the DB-API 2.0 compliant interface and support for various PostgreSQL data types, such as INT8, NUMERIC, MONEY, BOOL, ARRAYS, etc. 
This module is written in Python and works with PostgreSQL 6.5.2 or later and Python 2.0 or later. Note: It is highly recommended that you use PostgreSQL 7.1 or later and Python 2.1 or later. PostgreSQL is a sophisticated Object-Relational DBMS, supporting almost all SQL constructs, including sub-selects, transactions, and user-defined types and functions. It is the most advanced open-source database available anywhere. More information about PostgreSQL can be found at the PostgreSQL home page at http://www.postgresql.org. Python is an interpreted, interactive, object-oriented programming language. It combines remarkable power with very clear syntax. It has modules, classes, exceptions, very high level dynamic data types, and dynamic typing. There are interfaces to many system calls and libraries, as well as to various windowing systems (X11, Motif, Tk, Mac, MFC). New builtin modules are easily written in C or C++. Python is also usable as an extension language for applications that need a programmable interface. Python is copyrighted but freely usable and distributable, even for commercial use. More information about Python can be found on the Python home page at http://www.python.org. --------------------------------------------------------------------------- ChangeLog: =========================================================================== Changes since pyPgSQL Version 1.5 ================================= Compiled the code with options to perform extra syntactic, semantic, and lint-like checks and corrected any problems that were found. Changes to pglargeobject.c -------------------------- * Why did I introduce a new variable (save) and execute another call to lo_tell() when it wasn't needed? (I have to stop working on code late at night. :-) * Fixed another bug in PgLo_read(). This one would only rear its ugly head when read() was called with no arguments, and then, only sometimes. 
(I REALLY need to stop working on code late at night [or is that early in the morning] :-). Changes to pgnotify.c --------------------- * Changed calls to PyObject_DEL() to Py_XDECREF() when deleting objects embedded in my extension objects. This corrects a problem experienced on MS Windows. Changes to pgresult.c --------------------- * Part of the logic for building the cursor.description attribute retrieves an OID from the database that is usually zero (0). This causes a query to the database to check to see if the OID represents a large object. An OID of zero can never be a large object, so the code was changed so that when the OID is zero, the check to see if it is a large object is skipped. Changes to setup.py ------------------- * Add include_dirs and lib_dirs for building on cygwin. This change should allow pyPgSQL to build 'out of the box' on MS Windows using the cygwin environment. Changes since pyPgSQL Version 1.4 ================================= The code for PgConnection, PgLargeObject, PgNotify, and PgResult was moved from libpqmodule.c into separate source files. This was done to make it easier to work on the individual pieces of the code. The following source code files were added to Version 1.5 of pyPgSQL: libpqmodule.h - This include file brings together all the various header files needed to build libpqmodule into one place. pgconnection.[ch] - Implements the PgConnection class. pglargeobject.[ch] - Implements the PgLargeObject class. pgnotify.[ch] - Implements the PgNotify class. pgresult.[ch] - Implements the PgResult class. Also, any constant, read-only attributes in PgConnection, PgLargeObject, PgNotify, PgResult, and PgVersion are now stored as a Python object instead of as a native C type. This was done so that when the attribute is referenced in Python, it does not have to be converted to a Python object. 
Changes to PgSQL.py ------------------- * Change the code to rollback an open transaction, if any, when the last cursor of a connection is closed. [Bug #454653] * Added code that will, if weak references are not available, remove any cursors that are only referenced from the connection.cursors list. This is in effect garbage collection for deleted cursors. The garbage collection of orphaned cursors will occur at the following points: 1. After a new cursor is created. 2. When a cursor is closed. 3. When the connection.autocommit value is changed. * Changed cursor.isClosed to cursor.closed. Why? closed is used in other objects (file and PgLargeObject, for example) and I thought I would follow the trend (i.e. for no good reason). * Change from the use of hash() to generate the portal name to the use of id(). * Added code to trap a failure to connect to a database and delete the Connection object on failure. This resolves the problem reported by Adam Buraczewski. [Bug #449743] * Fixed a bug in fetchmany() that could return more than the requested rows if PostgreSQL Portals aren't used in the query. Changes to libpqmodule.c ------------------------ * Moved code for PgLargeObject, PgNotify, PgConnection, and PgResult to their own files. This was done to make maintenance easier. Changes to pgconnection.c ------------------------- * Changed how ibuf is defined in pgFixEsc to ensure that the memory pointed to by ibuf is allocated from the stack, and thus is writable memory. The way it was defined before could result in the memory being allocated in a read-only segment. [Bug #450330] Changes to pgresult.c --------------------- * Changed the return type of result.cmdTuple from a Python String object to an Integer object. * The code now uses pgversion.post70 as a Python Integer. * Constant, read-only attributes are now stored as Python objects. This way they do not have to be created each time they are referenced. 
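The weak-reference bookkeeping described in the PgSQL.py notes above can be sketched in a few lines of Python. The class names below are invented for illustration and are not pyPgSQL's real API: a connection holds only weak references to its cursors, pruning dead ones on access, which is in effect the garbage collection of orphaned cursors the changelog describes.

```python
import weakref

# Invented illustration of the PgSQL.py bookkeeping above: a connection that
# holds only weak references to its cursors, pruning dead ones on access --
# in effect garbage collection for cursors the user has dropped.
class Cursor:
    def __init__(self, conn):
        self.connection = conn

class Connection:
    def __init__(self):
        self._cursor_refs = []          # weakrefs, so we never keep a cursor alive

    def cursor(self):
        cur = Cursor(self)
        self._cursor_refs.append(weakref.ref(cur))
        return cur

    @property
    def cursors(self):
        # prune references whose cursor has been garbage collected
        self._cursor_refs = [r for r in self._cursor_refs if r() is not None]
        return [r() for r in self._cursor_refs]

conn = Connection()
c1 = conn.cursor()
c2 = conn.cursor()
del c2                                  # CPython collects it immediately
print(len(conn.cursors))                # -> 1
```

The fallback path in the changelog (for Pythons without weakref) does the same pruning by checking reference counts on a list of hard references instead.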
Changes to pgversion.c ---------------------- * Change code to use Py_BuildValue() for creating Python objects from C values (where possible). * Change attributes that were read-only constants from a C type to a Python Object. This way they don't need to be converted to a Python Object each time they are referenced. * General code clean up. Removed some definitions that are in the new libpqmodule.h header file.
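The fetchmany() fix mentioned in the PgSQL.py changelog turns on the DB-API contract that fetchmany(n) returns at most n rows, with the remainder left queued for later fetch calls. A quick illustration with the stdlib sqlite3 driver (not pyPgSQL itself; the table name is invented):

```python
import sqlite3

# DB-API contract behind the fetchmany() fix above: fetchmany(n) returns at
# most n rows, and the remainder stays queued for later fetch calls.
# (stdlib sqlite3 here, not pyPgSQL.)
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (n INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

cur.execute("SELECT n FROM t ORDER BY n")
first = cur.fetchmany(4)
rest = cur.fetchall()
print(len(first), len(rest))   # -> 4 6
```

The pyPgSQL bug was returning more than the requested rows when PostgreSQL portals were not in use — i.e. violating the "at most n" half of this contract.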