From cito at online.de Tue Aug 1 23:34:22 2006
From: cito at online.de (Christoph Zwerschke)
Date: Tue, 01 Aug 2006 23:34:22 +0200
Subject: [DB-SIG] date/time handling
Message-ID: <44CFC8DE.1080406@online.de>

Hello all,

trying to improve date/time handling in PyGreSQL we stumbled over the following questions:

1) Should mx.DateTime still be preferred over stdlib datetime (if both are available)? Or should we handle it the other way around meanwhile?

2) We want to provide a possibility to explicitly choose the preferred date/time type, i.e. mx.DateTime or stdlib datetime or even Ticks or Python tuples, or strings.

The mx.odbc module does this via an attribute "datetimeformat" of the connection object that can be set to constant values named DATETIME_DATETIMEFORMAT or STRING_DATETIMEFORMAT (unfortunately, the former means mx.DateTime in this context, and there is no value defined for stdlib datetime).

Do you think this is a good idea? Maybe we can agree on a preliminary standard that can be proposed for a future DB-API 3 (btw, is there any work in progress to create such a new version)?

-- Christoph

From patriciap.gu at gmail.com Wed Aug 2 05:23:51 2006
From: patriciap.gu at gmail.com (Patricia)
Date: Wed, 2 Aug 2006 03:23:51 +0000 (UTC)
Subject: [DB-SIG] Mysql query
Message-ID:

Hi all,

I'm having a hard time trying to create this query. I have two tables:
log - columns: id, name and date
info - columns: id, tag, value

i.e.:

log
id | name                 | date
-------------------------------------
1  | machine_name         | 07/31/06
2  | another_machine_name | 08/01/06

info
id | tag     | value
--------------------------------
1  | release | release_name
1  | arch    | 32-bit
1  | mode    | amode
2  | release | release_name
2  | arch    | 64-bit
2  | mode    | amode

Basically, the info table contains detailed information of each machine in the log table. The same machine can appear more than once, but with a different id number, that's why I use group by in the following query:

select lo.name, inf.value, lo.date from log as lo
join info as inf on lo.id=inf.id where (inf.tag='release'
and inf.tag='arch') AND lo.id in (select max(id)
from log group by name) order by name

Everything worked well, but now I've been asked to display this data based on "mode" (located in the info table). I don't have to display mode, but I have to use it as a condition since there are more than two modes. I'm not sure how to add this condition to the query, though. I tried adding this to the query above but, of course, it didn't work:

AND inf.id in (select id from info where value LIKE %s)

I'd appreciate any help

Thanks,
Patricia

From andy47 at halfcooked.com Thu Aug 3 00:08:19 2006
From: andy47 at halfcooked.com (Andy Todd)
Date: Thu, 03 Aug 2006 08:08:19 +1000
Subject: [DB-SIG] date/time handling
In-Reply-To: <44CFC8DE.1080406@online.de>
References: <44CFC8DE.1080406@online.de>
Message-ID: <44D12253.8090502@halfcooked.com>

Christoph Zwerschke wrote:
> Hello all,
>
> trying to improve date/time handling in PyGreSQL we stumbled over the
> following questions:
>
> 1) Should mx.DateTime still be preferred over stdlib datetime (if both
> are available)? Or should we handle it the other way around meanwhile?
>
> 2) We want to provide a possibility to explicitly choose the preferred
> date/time type, i.e. mx.DateTime or stdlib datetime or even Ticks or
> Python tuples, or strings.
> > The mx.odbc module does this via an attribute "datetimeformat" of the > connection object that can be set to constant values named > DATETIME_DATETIMEFORMAT or STRING_DATETIMEFORMAT (unfortunately, the > former means mx.DateTime in this context, and there is no value defined > for stdlib datetime). > > Do you think this is a good idea? Maybe we can agree on a preliminary > standard that can be proposed for a future DB-API 3 (btw, is there any > work in progress to create such a new version)? > > -- Christoph > _______________________________________________ > DB-SIG maillist - DB-SIG at python.org > http://mail.python.org/mailman/listinfo/db-sig I don't know about the mechanism, but my preference for any DB-API module would be stdlib first and then mx.DateTime. This has been previously discussed on the list and I believe the conclusion was to change to this approach in a future version of the DB-API. Regards, Andy -- -------------------------------------------------------------------------------- From the desk of Andrew J Todd esq - http://www.halfcooked.com/ From andy47 at halfcooked.com Thu Aug 3 00:25:38 2006 From: andy47 at halfcooked.com (Andy Todd) Date: Thu, 03 Aug 2006 08:25:38 +1000 Subject: [DB-SIG] Mysql query In-Reply-To: References: Message-ID: <44D12662.4060100@halfcooked.com> Patricia wrote: > Hi all, > > I'm having a hard time trying to create this query. > I have two tables: > log - columns: id, name and date > info - columns: id, tag, value > > i.e.: > log > id | name | date > ------------------------------------- > 1 | machine_name | 07/31/06 > 2 | another_machine_name | 08/01/06 > > info > id | tag | value > -------------------------------- > 1 | release | release_name > 1 | arch | 32-bit > 1 | mode | amode > 2 | release | release_name > 2 | arch | 64-bit > 2 | mode | amode > > Basically, the info table contains detailed information of > each machine in the log table. The same machine can appear > more than once, but with a different id number, that's why > I use group by in the following query: > > select lo.name, inf.value, lo.date from log as lo > join info as inf on lo.id=inf.id where (inf.tag='release' > and inf.tag='arch') AND lo.id in (select max(id) > from log group by name) order by name > > Everything worked well, but now I've been asked to display > this data based on "mode" (located in the info table). > I don't have to display mode, but I have to use it as > a condition since there are more than two modes. > I'm not sure how to add this condition to the query, though. > I tried adding this to the query above but, of course, > it didn't work: > > AND inf.id in (select id from info where value LIKE %s) > > I'd appreciate any help > Thanks, > Patricia > > > > _______________________________________________ > DB-SIG maillist - DB-SIG at python.org > http://mail.python.org/mailman/listinfo/db-sig Patricia, This isn't a Python or MySQL problem, it's an issue with your schema design. You need to change your schema. 
You could change your info table to something less generic like;

id | release      | arch   | mode
----------------------------------
1  | release_name | 32-bit | amode
2  | release_name | 64-bit | amode
3  | release_name | 64-bit | amode

This will make your initial query a lot simpler, something like;

select lo.name, inf.release, inf.arch, inf.mode, lo.date
from log as lo join info as inf on lo.id=inf.id
where lo.id in (select max(id) from log group by name)
order by name

You could make it even simpler by putting a 'latest' flag on your log table, e.g.

id | name                 | date     | latest
----------------------------------------------
1  | machine_name         | 07/31/06 | Y
2  | another_machine_name | 08/01/06 | N
3  | another_machine_name | 08/02/06 | Y

then your query could look something like;

select lo.name, inf.release, inf.arch, inf.mode, lo.date
from log as lo join info as inf on lo.id=inf.id
where lo.latest = 'Y'
order by name

Having a generic table with key value pairs like your current info table just leads to some really horrible SQL as you are currently finding. Unless you truly don't know what key-value pairs you are going to store it's always best to design your table to store the specific values you are going to store.

Regards,
Andy
--
--------------------------------------------------------------------------------
From the desk of Andrew J Todd esq - http://www.halfcooked.com/

From mal at egenix.com Thu Aug 3 11:36:32 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 03 Aug 2006 11:36:32 +0200
Subject: [DB-SIG] date/time handling
In-Reply-To: <44CFC8DE.1080406@online.de>
References: <44CFC8DE.1080406@online.de>
Message-ID: <44D1C3A0.1040806@egenix.com>

Christoph Zwerschke wrote:
> Hello all,
>
> trying to improve date/time handling in PyGreSQL we stumbled over the
> following questions:
>
> 1) Should mx.DateTime still be preferred over stdlib datetime (if both
> are available)? Or should we handle it the other way around meanwhile?

Module authors are free to choose and support whatever date/time package they feel fits the job. The DB API has historically preferred mxDateTime as it was the only package available to handle date/time. Nowadays, Python's builtin datetime package is available for interface purposes as well.

Supporting both certainly isn't all that hard, so why not go for that setup ?

Furthermore, there are applications that don't need or require any special date/time packages. Those would much rather see the data stored as ticks or tuples. This is the reason why mxODBC tries to support all of them and leads up to the next question...

> 2) We want to provide a possibility to explicitly choose the preferred
> date/time type, i.e. mx.DateTime or stdlib datetime or even Ticks or
> Python tuples, or strings.
>
> The mx.odbc module does this via an attribute "datetimeformat" of the
> connection object that can be set to constant values named
> DATETIME_DATETIMEFORMAT or STRING_DATETIMEFORMAT (unfortunately, the
> former means mx.DateTime in this context, and there is no value defined
> for stdlib datetime).

It's not an ideal approach. At the time I just needed some way to configure the format.

What we need for DB API 3 is a way to let the user configure the type mapping in a more generic way and for both the input and output types.

Furthermore, you will want to be able to setup this mapping per connection and cursor (with cursors inheriting the setting from the connection).

> Do you think this is a good idea?
Maybe we can agree on a preliminary > standard that can be proposed for a future DB-API 3 (btw, is there any > work in progress to create such a new version)? There have been a few attempts at it, but nothing has come out of those yet. I would like to work on a new version in the coming months. My main aim is to: * turn some of the optional extensions we already have in the DB API 2 into standard APIs and * as new feature, add some way to do the type mapping -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Aug 03 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From miroslav.siket at gmail.com Thu Aug 3 12:21:21 2006 From: miroslav.siket at gmail.com (Miroslav Siket) Date: Thu, 3 Aug 2006 12:21:21 +0200 Subject: [DB-SIG] cx_Oracle 4.2 In-Reply-To: <703ae56b0607211046m551a1c8al3e2af641906fb9eb@mail.gmail.com> References: <703ae56b0607211046m551a1c8al3e2af641906fb9eb@mail.gmail.com> Message-ID: <878b9ee0608030321s644f3541p6e209043b424342f@mail.gmail.com> Hello all, I was wondering if there is a way to do bulk operations with cx_Oracle - like bulk inserts, updates,... Thanks, Miro On 21/07/06, Anthony Tuininga wrote: > > What is cx_Oracle? > > cx_Oracle is a Python extension module that allows access to Oracle and > conforms to the Python database API 2.0 specifications with a few > exceptions. > > > Where do I get it? > > http://starship.python.net/crew/atuining > > > What's new? > > 1) Added support for parsing an Oracle statement as requested by Patrick > Blackwill. > 2) Added support for BFILEs at the request of Matthew Cahn. > 3) Added support for binding decimal.Decimal objects to cursors. > 4) Added support for reading from NCLOBs as requested by Chris Dunscombe. > 5) Added connection attributes encoding and nencoding which return the > IANA > character set name for the character set and national character set in > use > by the client. > 6) Rework module initialization to use the techniques recommended by the > Python documentation as one user was experiencing random segfaults due > to the use of the module dictionary after the initialization was > complete. > 7) Removed support for the OPT_Threading attribute. Use the threaded > keyword > when creating connections and session pools instead. > 8) Removed support for the OPT_NumbersAsStrings attribute. Use the > numbersAsStrings attribute on cursors instead. > 9) Use type long rather than type int in order to support long integers on > 64-bit machines as reported by Uwe Hoffmann. > 10) Add cursor attribute "bindarraysize" which is defaulted to 1 and is > used > to determine the size of the arrays created for bind variables. > 11) Added repr() methods to provide something a little more useful than > the > standard type name and memory address. > 12) Added keyword argument support to the functions that imply such in the > documentation as requested by Harald Armin Massa. > 13) Treat an empty dictionary passed through to cursor.execute() as > keyword > arguments the same as if no keyword arguments were specified at all, > as > requested by Fabien Grumelard. > 14) Fixed memory leak when a LOB read would fail. 
> 15) Set the LDFLAGS value in the environment rather than directly in the > setup.py file in order to satisfy those who wish to enable the use of > debugging symbols. > 16) Use __DATE__ and __TIME__ to determine the date and time of the build > rather than passing it through directly. > 17) Use Oracle types and add casts to reduce warnings as requested by > Amaury > Forgeot d'Arc. > 18) Fixed typo in error message. > _______________________________________________ > DB-SIG maillist - DB-SIG at python.org > http://mail.python.org/mailman/listinfo/db-sig > -- Miroslav Siket CERN, IT/FIO, CH-1211, Suisse e-mail:Miroslav.Siket at cern.ch Web: http://cern.ch/mirsi phone: +41 22 76 73068 fax: +41 22 7669859 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/db-sig/attachments/20060803/84dbf06a/attachment.html From anthony.tuininga at gmail.com Thu Aug 3 15:47:13 2006 From: anthony.tuininga at gmail.com (Anthony Tuininga) Date: Thu, 3 Aug 2006 07:47:13 -0600 Subject: [DB-SIG] cx_Oracle 4.2 In-Reply-To: <878b9ee0608030321s644f3541p6e209043b424342f@mail.gmail.com> References: <703ae56b0607211046m551a1c8al3e2af641906fb9eb@mail.gmail.com> <878b9ee0608030321s644f3541p6e209043b424342f@mail.gmail.com> Message-ID: <703ae56b0608030647m2a073f44q29962a7082a4b3c8@mail.gmail.com> Certainly. See the method cursor.executemany(). On 8/3/06, Miroslav Siket wrote: > Hello all, > > I was wondering if there is a way to do bulk operations with cx_Oracle - > like bulk inserts, updates,... > > Thanks, > Miro > > > On 21/07/06, Anthony Tuininga wrote: > > > What is cx_Oracle? > > cx_Oracle is a Python extension module that allows access to Oracle and > conforms to the Python database API 2.0 specifications with a few > exceptions. > > > Where do I get it? > > http://starship.python.net/crew/atuining > > > What's new? > > 1) Added support for parsing an Oracle statement as requested by Patrick > Blackwill. > 2) Added support for BFILEs at the request of Matthew Cahn. > 3) Added support for binding decimal.Decimal objects to cursors. > 4) Added support for reading from NCLOBs as requested by Chris Dunscombe. > 5) Added connection attributes encoding and nencoding which return the IANA > character set name for the character set and national character set in > use > by the client. > 6) Rework module initialization to use the techniques recommended by the > Python documentation as one user was experiencing random segfaults due > to the use of the module dictionary after the initialization was > complete. > 7) Removed support for the OPT_Threading attribute. Use the threaded > keyword > when creating connections and session pools instead. > 8) Removed support for the OPT_NumbersAsStrings attribute. Use the > numbersAsStrings attribute on cursors instead. > 9) Use type long rather than type int in order to support long integers on > 64-bit machines as reported by Uwe Hoffmann. > 10) Add cursor attribute "bindarraysize" which is defaulted to 1 and is used > to determine the size of the arrays created for bind variables. > 11) Added repr() methods to provide something a little more useful than the > standard type name and memory address. > 12) Added keyword argument support to the functions that imply such in the > documentation as requested by Harald Armin Massa. > 13) Treat an empty dictionary passed through to cursor.execute() as keyword > arguments the same as if no keyword arguments were specified at all, as > requested by Fabien Grumelard. 
> 14) Fixed memory leak when a LOB read would fail. > 15) Set the LDFLAGS value in the environment rather than directly in the > setup.py file in order to satisfy those who wish to enable the use of > debugging symbols. > 16) Use __DATE__ and __TIME__ to determine the date and time of the build > rather than passing it through directly. > 17) Use Oracle types and add casts to reduce warnings as requested by Amaury > Forgeot d'Arc. > 18) Fixed typo in error message. > _______________________________________________ > DB-SIG maillist - DB-SIG at python.org > http://mail.python.org/mailman/listinfo/db-sig > > > > -- > Miroslav Siket CERN, IT/FIO, > CH-1211, Suisse > e-mail:Miroslav.Siket at cern.ch Web: http://cern.ch/mirsi > phone: +41 22 76 73068 fax: +41 22 7669859 From andy47 at halfcooked.com Fri Aug 4 10:17:27 2006 From: andy47 at halfcooked.com (Andy Todd) Date: Fri, 04 Aug 2006 18:17:27 +1000 Subject: [DB-SIG] Mysql query In-Reply-To: <18f27cbe0608031710s22f635bjf339e5695240d5ed@mail.gmail.com> References: <44D12662.4060100@halfcooked.com> <18f27cbe0608031710s22f635bjf339e5695240d5ed@mail.gmail.com> Message-ID: <44D30297.1010206@halfcooked.com> Patricia G. wrote: > Hi Andy, > > > Having a generic table with key value pairs like your current info > table > just leads to some really horrible SQL as you are currently finding. > > > > I agree with you . > > Unless you truly don't know what key-value pairs you are going to store > it's always best to design your table to store the specific values > > > In the beginning I wanted to design something similar to what you > suggest, but the problem is that it is hard to determine the values that > will be stored into the database. There will be times when I will not > get 'release' or 'arch' information. > > Thanks, > Patricia > Thats what NULL is for. Regards, Andy -- -------------------------------------------------------------------------------- From the desk of Andrew J Todd esq - http://www.halfcooked.com/ From cito at online.de Sat Aug 5 01:28:42 2006 From: cito at online.de (Christoph Zwerschke) Date: Sat, 05 Aug 2006 01:28:42 +0200 Subject: [DB-SIG] date/time handling In-Reply-To: <44D1C3A0.1040806@egenix.com> References: <44CFC8DE.1080406@online.de> <44D1C3A0.1040806@egenix.com> Message-ID: <44D3D82A.3070607@online.de> M.-A. Lemburg wrote: > Module authors are free to choose and support whatever date/time > package they feel fits the job. The DB API has historically > preferred mxDateTime as it was the only package available to > handle date/time. Nowadays, Python's builtin datetime package > is available for interface purposes as well. > > Supporting both certainly isn't all that hard, so why not go > for that setup ? Yes, of course I will continue to support both, but now prefer stdlib datetime if available (before, I did it the other way around). > Furthermore, there are applications that don't need or require > any special date/time packages. Those would much rather see > the data stored as ticks or tuples. Exactly. You may also want to use the raw string format as delivered by the database. Actually PyGreSQL did this until now, it used datetime types only for input. One reason was that PostgreSQL returns the date in a format (DMY, MDY etc.) depending on a parameter setting that you would need to monitor otherwise. > What we need for DB API 3 is a way to let the user configure the > type mapping in a more generic way and for both the input and > output types. 
Concerning input format: Since DB-API 2 defines constructors for Date, Time, Timestamp, the input format should be actually transparent and irrelevant to the user if you use these constructors, right? So no real need to define it. Conversely, if the user defines the date/time input format, he can simply use the probably more versatile constructors of the respective format, and the DB-API does not need to define date/time constructors. So maybe DB-API 3 should provide a possibility to choose an input type (default: stdlib datetime), and get rid of the date/time constructors (default: use the constructors in the stdlib datetime module). > Furthermore, you will want to be able to setup this mapping > per connection and cursor (with cursors inheriting > the setting from the connection and the connection). That would be flexible. Should this become an attribute/property of the connection/cursor or should it be a keyword argument of the constructor? > There have been a few attempts at it, but nothing has come out > of those yet. > > I would like to work on a new version in the coming months. > My main aim is to: > > * turn some of the optional extensions we already have in > the DB API 2 into standard APIs and > > * as new feature, add some way to do the type mapping That would be great. Thanks for your engagement in all of this. -- Christoph From cito at online.de Sat Aug 5 01:36:34 2006 From: cito at online.de (Christoph Zwerschke) Date: Sat, 05 Aug 2006 01:36:34 +0200 Subject: [DB-SIG] date/time handling In-Reply-To: <44D12253.8090502@halfcooked.com> References: <44CFC8DE.1080406@online.de> <44D12253.8090502@halfcooked.com> Message-ID: <44D3DA02.50105@online.de> Andy Todd wrote: > I don't know about the mechanism, but my preference for any DB-API module would be stdlib first and then mx.DateTime. Ok, I think I will do that. Otherwise, as soon as an administrator installs mx.DateTime, some non-related DB-API 2 applications may suddenly break. Of course you will then get a problem if the administrator upgrades from Python 2.2 to 2.3, but you expect some things to break anyway in such a case (and he should have done that a long time ago anyway ;-) Still, I'm looking for a way to explicitely tell the module to use mx or stdlib dates, or other date formats. -- Chris From python at venix.com Sat Aug 5 02:22:19 2006 From: python at venix.com (Python) Date: Fri, 04 Aug 2006 20:22:19 -0400 Subject: [DB-SIG] date/time handling In-Reply-To: <44D12253.8090502@halfcooked.com> References: <44CFC8DE.1080406@online.de> <44D12253.8090502@halfcooked.com> Message-ID: <1154737339.28304.740.camel@www.venix.com> On Thu, 2006-08-03 at 08:08 +1000, Andy Todd wrote: > I don't know about the mechanism, but my preference for any DB-API > module would be stdlib first and then mx.DateTime. > Would it make sense to reverse the order? All newer Python releases will have a stdlib.datetime. If mx.DateTime is present, the sysadmin went to extra effort to put it there. Can we infer that it's there because it is preferred? To me that suggests importing mx.DateTime first and then falling back on the stdlib. > This has been previously discussed on the list and I believe the > conclusion was to change to this approach in a future version of the > DB-API. 
>
> Regards,
> Andy

--
Lloyd Kvam
Venix Corp

From cito at online.de Sat Aug 5 12:04:43 2006
From: cito at online.de (Christoph Zwerschke)
Date: Sat, 05 Aug 2006 12:04:43 +0200
Subject: [DB-SIG] date/time handling
In-Reply-To: <1154737339.28304.740.camel@www.venix.com>
References: <44CFC8DE.1080406@online.de> <44D12253.8090502@halfcooked.com> <1154737339.28304.740.camel@www.venix.com>
Message-ID: <44D46D3B.9060900@online.de>

Python at venix.com wrote:
> If mx.DateTime is present, the sysadmin
> went to extra effort to put it there. Can we infer that it's there
> because it is preferred? To me that suggests importing mx.DateTime
> first and then falling back on the stdlib.

As far as I understand, mx.DateTime is part of the mx base package, containing also other things such as mxTextTools. So the intent of the admin could have been to provide these things only. I don't like the idea that the admin can break or change applications (he might not even be aware of) simply by installing the mx.base package.

What we really need is a way for DB-API apps to explicitly declare what datetime module shall be used. ("Explicit is better than implicit." and "In the face of ambiguity, refuse the temptation to guess.")

-- Chris

From davidrushby at yahoo.com Sat Aug 5 15:22:55 2006
From: davidrushby at yahoo.com (David Rushby)
Date: Sat, 5 Aug 2006 06:22:55 -0700 (PDT)
Subject: [DB-SIG] date/time handling
In-Reply-To: <44D46D3B.9060900@online.de>
Message-ID: <20060805132255.23366.qmail@web30003.mail.mud.yahoo.com>

--- Christoph Zwerschke wrote:
> I don't like the idea that the admin can break or
> change applications (he might not even be aware of) simply
> by installing the mx.base package.

I strongly agree with this. Implicitly changing the representation of certain types of data on the basis of what packages happen to be installed on the system is insane.

Consider the act of transferring a Python database app between Linux distributions. If one distribution doesn't include mx in its default set of packages, but the other does, the app's behavior would change without any indication from the sysadmin that such a change is desired.

> What we really need is a way for DB-API apps to explicitly
> declare what datetime module shall be used.

Some database modules already have a system of input and output conversion callbacks that applies to all types, not just dates, times, and timestamps. One example is KInterbasDB:
http://kinterbasdb.sourceforge.net/dist_docs/usage.html#adv_param_conv_dynamic_type_translation

I would be disappointed to see the DB API standard revised to specify a mechanism for date/time/timestamp representation selection, but fall short of generality. That would seem to me like a wasted opportunity. On the other hand, there are enough engine-specific differences in relational database type systems that creating a fully general standard might be impossible (or so ugly as to be unusable). Consider the unusual extensibility of the PostgreSQL type system, SQLite's weaker-than-normal typing, and debatable points such as the handling of textual blob data (suck it into a string in memory or present it as a stream?). I fear that a standard that tried to address all of those issues for all database engines would be a monstrosity.
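For illustration, one shape such an explicit declaration could take is a per-connection setting that maps the backend's broken-down timestamp onto whichever representation the application asked for. This is a hypothetical sketch only, with invented names; it is not the API of PyGreSQL, KInterbasDB or any other shipped driver.

import datetime
import time

# Invented constants; an application would pick exactly one of these.
DATETIME_STDLIB = "stdlib"   # return datetime.datetime instances
DATETIME_TICKS = "ticks"     # return POSIX timestamps (floats)
DATETIME_STRING = "string"   # return the backend's raw string untouched

def _to_stdlib(raw, parts):
    return datetime.datetime(*parts)

def _to_ticks(raw, parts):
    year, month, day, hour, minute, second = parts
    return time.mktime((year, month, day, hour, minute, second, 0, 0, -1))

def _to_string(raw, parts):
    return raw

_converters = {
    DATETIME_STDLIB: _to_stdlib,
    DATETIME_TICKS: _to_ticks,
    DATETIME_STRING: _to_string,
}

class ConnectionWrapper:
    """Wraps a real DB-API connection; the date/time format is chosen explicitly."""

    def __init__(self, real_connection, datetimeformat=DATETIME_STDLIB):
        self._conn = real_connection
        self.datetimeformat = datetimeformat

    def convert_timestamp(self, raw, parts):
        # 'raw' is the string as delivered by the backend, 'parts' the
        # already parsed (year, month, day, hour, minute, second) tuple.
        return _converters[self.datetimeformat](raw, parts)

The point is only that the choice is made by the application, in code, rather than inferred from which packages happen to be importable; an mx.DateTime converter could be registered the same way where that package is wanted.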
From mfrasca at zonnet.nl Sun Aug 6 11:37:39 2006
From: mfrasca at zonnet.nl (Mario Frasca)
Date: Sun, 6 Aug 2006 11:37:39 +0200
Subject: [DB-SIG] date/time handling
In-Reply-To: <44D1C3A0.1040806@egenix.com>
References: <44CFC8DE.1080406@online.de> <44D1C3A0.1040806@egenix.com>
Message-ID: <20060806093739.GA28803@kruiskruid.demon.nl>

hi list readers,

I've been discussing the topic with Chris on the pygres list, but I agree with him that this list is more indicated.

conversion to different formats: ticks (unity?, start time?), strings (too many formats are possible), tuples (what does each item mean? how many items?), stdlib datetime, mx.DateTime, other datetime library... right, we can't possibly be exhaustive, and this is just a particular case of data conversion... actually, I think that this problem is a bit different than specifying translations for other types.

the problem lies, as I see it, in the fact that the dbengine will return datetime values in different string formats depending on the setting of some datestyle within the engine, and that this format may, in principle, be changed even between different fetchone calls on the same query.

so on one hand I agree that one would need to think about a general solution to the problem of translating data from and to the database not only for datetime data, but on the other hand I see an extra complication in this specific case, which produces the impression that this is an extremely complex problem even in the general case.

I don't really want to think about the general case, in this particular one, what I would like to do is this: distinguish the three types: date, datetime/timestamp, time/timedelta. ... what about offering ticks (seconds from the start of unix time or difference in seconds between dates) and tuples [datetime: (year, month, day, hour, minute, seconds.decimals), date: (year, month, day), timedelta: (days, seconds.decimals)]?

well, in the solution I'm thinking about type information takes a crucial role. data *coming* from the database keeps type information with itself, so that one can translate it in the desired way (a STRING "2003-05-12" is not the same as a DATE "2003-05-12" and the two things get potentially translated in a different way (I'm having problems on this point with the sqlite engine, which has no internal datetime types)).

but then also data *going* to the database must be clearly typed. reserving a 6 elements tuple to represent a timestamp or a two elements tuple for a timedelta may sound reasonable to us now, but could sound as nonsense to someone working with complex numbers or some strange mathematical entities, as well as with dates...

you could think about a totally different approach, where data going into a specifically typed field gets translated in its own way, so a float going to a timestamp gets a different treatment than a float going to a number or to a date. leave alone the fact that datetime data in a query is not necessarily associated to a table field. I think that the strong dynamic typing in Python makes the first approach natural and the second undesirable (actually a nightmare I think).

what I propose is that we only offer clearly typed data translation from and to the database (support of two or more datetime modules) or no translation at all (crude string representation).

I would also split the translation of data from the database to a datetime module in two parts.
one where the interface module produces a tuple of float or integer values using the data from the database and a second which translates these tuples into the correct datetime type. this second part would be caracteristic of the datetime module and could possibly be shared among the different interface modules. if a module wants to represent datetime data as tuples, I don't see the problem, as long as there is a distinct type for this. curious about the reactions from the list... regards, Mario -- ... hinc sequitur, unamquamque rem naturalem tantum iuris ex natura habere, quantum potentiae habet ad existendum et operandum ... -- Baruch de Spinoza, TRACTATUS POLITICUS From fumanchu at amor.org Sun Aug 6 23:01:27 2006 From: fumanchu at amor.org (Robert Brewer) Date: Sun, 6 Aug 2006 14:01:27 -0700 Subject: [DB-SIG] date/time handling References: <44CFC8DE.1080406@online.de> <44D1C3A0.1040806@egenix.com> <20060806093739.GA28803@kruiskruid.demon.nl> Message-ID: <435DF58A933BA74397B42CDEB8145A86224C1C@ex9.hostedexchange.local> Mario Frasca wrote: > ...the problem lies, as I see it, in the fact that the > dbengine will return datetime values in different string > formats depending on the setting of some datestyle within > the engine, and that this format may, in principle, > be changed even between different fetchone calls > on the same query. In principle, perhaps, but make the common case simple and the uncommon case possible. That is, don't penalize the 99% of consumers who will never see that datestyle change, ever. One way to accomplish that would be a module-level flag for datestyle for the common case, and a sync_datestyle() function that the 1% can call as often as they need to. > well, in the solution I'm thinking about type information > takes a crucial role. data *coming* from the database > keeps type information with itself, so that one can > translate it in the desired way (a STRING "2003-05-12" > is not the same as a DATE "2003-05-12" Right. But converting to a Python type as early as possible saves a lot of headaches (not to mention code, memory and CPU cycles if you only do the conversion once). > and the two things get potentially translated in a > different way (I'm having problems on this point with the > sqlite engine, which has no internal datetime types)). Me too. :) > but then also data *going* to the database must be > clearly typed. Yes. The value-adaptation layer must be aware of the target column types. > you could think about a totally different approach, > where data going into a specifically typed field > gets translated in its own way, so a float going > to a timestamp gets a different treatment than a > float going to a number or to a date. > ... > I think that the strong dynamic typing in Python > makes [that] undesirable (actually a nightmare I think). Not really. In practice, there is a smallish set of Python types (int, long, str, unicode, float, date, datetime, timedelta, decimal, fixedpoint, list, tuple, dict, None, and bool), and a small set of database types which map *almost* one-to-one. In Dejavu [1], there are only 21 outbound adapters (that cover all 15 python types I listed), and only 15 inbound adapters; certain databases require a few of them to be overridden (e.g. for date formatting), but rarely are any new adapters added (usually for nonstandard database types, like CURRENCY for MS Access). So although it appears to be a nightmare M x N problem, in practice it's much closer to just M + N adapters. 
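As an illustration of the M + N idea, here is a deliberately tiny sketch with invented names and a reduced type set; it is not Dejavu's actual code.

import datetime

def _quote_text(value):
    return "'%s'" % value.replace("'", "''")

# One outbound adapter per Python type (the "M" side): Python value -> SQL literal.
outbound_adapters = {
    int: str,
    float: repr,
    str: _quote_text,
    bool: lambda value: "1" if value else "0",
    datetime.date: lambda value: "'%s'" % value.isoformat(),
    type(None): lambda value: "NULL",
}

# One inbound adapter per database type code (the "N" side): column text -> Python value.
inbound_adapters = {
    "INTEGER": int,
    "FLOAT": float,
    "VARCHAR": lambda text: text,
    "DATE": lambda text: datetime.date(*[int(part) for part in text.split("-")]),
}

def to_sql_literal(value):
    return outbound_adapters[type(value)](value)

def from_db_value(type_code, text):
    return inbound_adapters[type_code](text)

A backend that stores dates as "YYYYMMDD" text only needs its two date adapters overridden; every other pairing falls out of the existing M + N entries instead of requiring an M x N table.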
Some people will have a need for Python types outside of the 15 I listed, or for database types nobody else has used. IMO it's proper for those people to have the burden of writing custom adapters for their own needs. > I would also split the translation of data from the database > to a datetime module in two parts. one where the interface > module produces a tuple of float or integer values using > the data from the database and a second which translates > these tuples into the correct datetime type. this second > part would be caracteristic of the datetime module and > could possibly be shared among the different interface modules. YAGNI. I think at the end of that process, you'll find you've done a lot of separation of one-liners into two just for the sake of architectural beauty. Robert Brewer System Architect Amor Ministries fumanchu at amor.org [1] http://projects.amor.org/dejavu/browser/trunk/storage/geniusql.py -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/db-sig/attachments/20060806/b10840b3/attachment.html From mfrasca at zonnet.nl Mon Aug 7 08:48:25 2006 From: mfrasca at zonnet.nl (Mario Frasca) Date: Mon, 7 Aug 2006 08:48:25 +0200 Subject: [DB-SIG] date/time handling In-Reply-To: <435DF58A933BA74397B42CDEB8145A86224C1C@ex9.hostedexchange.local> References: <44CFC8DE.1080406@online.de> <44D1C3A0.1040806@egenix.com> <20060806093739.GA28803@kruiskruid.demon.nl> <435DF58A933BA74397B42CDEB8145A86224C1C@ex9.hostedexchange.local> Message-ID: <20060807064825.GA8359@kruiskruid.demon.nl> On 2006-0806 14:01:27, Robert Brewer wrote: > > but then also data *going* to the database must be > > clearly typed. > > Yes. The value-adaptation layer must be aware of the target column types. sorry if I skip the rest, but I feel this is a crucial point. I think that the value adaptation layer should not be aware of the target column types, it should, according to me, base every choice on local information, that is: the type of the incoming data, be it *from* the database (and the type information is present in most engines) or *to* the database (then the type information should be agreed on, that is what we are doing here). if the user sends a string, it will not be translated and the user is responsible for it being a valid string representation of the typed value to be stored in the column. MF -- "S'il y a un Dieu, l'ath?isme doit lui sembler une moindre injure que la religion." -- Edmond et Jules de Goncourt From blais at furius.ca Mon Aug 7 09:20:51 2006 From: blais at furius.ca (Martin Blais) Date: Mon, 7 Aug 2006 03:20:51 -0400 Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method. Message-ID: <8393fff0608070020n424b2172i5138b7c5c0032563@mail.gmail.com> Hi I want to propose a few improvements on the DBAPI 2.0 Cursor.execute() method interface. You can find the details of my proposed changes here: http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.html And a pure-Python preliminary implementation of these changes that can sit on an existing DBAPI 2.0 implementation (tested with psycopg2): http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.py (If this gets accepted IMO this preliminary hack should instead be implemented at the lower-level of the specific DBAPI implementations for performance reasons...) Comments appreciated. cheers, P.S. I'm not a regular of this list, so please forgive me if this specific problem has already been addressed since the release of DBAPI 2.0. 
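For context, the situation the proposal is aimed at (composing a statement in which an identifier such as a table or column name is dynamic) currently takes two separate steps under DB-API 2.0, since parameter binding covers values only. A rough sketch of that status quo follows; it assumes any DB-API module whose paramstyle is 'format' (psycopg2, for example), and the table/column names are invented.

def fetch_by_id(connection, table_name, record_id):
    # Step 1: the table name cannot be a bound parameter, so it is
    # validated by hand and spliced in with ordinary string formatting.
    if not table_name.replace("_", "").isalnum():
        raise ValueError("unexpected table name: %r" % (table_name,))
    statement = "SELECT name, address FROM %s WHERE id = %%s" % table_name

    # Step 2: the value goes through the driver, which escapes it.
    cursor = connection.cursor()
    cursor.execute(statement, (record_id,))
    return cursor.fetchall()

The proposal linked above folds both steps into a single execute() call; see the dbapiext document for the exact syntax.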
From mal at egenix.com Mon Aug 7 13:17:37 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Mon, 07 Aug 2006 13:17:37 +0200 Subject: [DB-SIG] parameter and column binding (date/time handling) In-Reply-To: <44D3DA02.50105@online.de> References: <44CFC8DE.1080406@online.de> <44D12253.8090502@halfcooked.com> <44D3DA02.50105@online.de> Message-ID: <44D72151.7060502@egenix.com> Christoph Zwerschke wrote: > Andy Todd wrote: > > I don't know about the mechanism, but my preference for any DB-API > module would be stdlib first and then mx.DateTime. > > Ok, I think I will do that. Otherwise, as soon as an administrator > installs mx.DateTime, some non-related DB-API 2 applications may > suddenly break. Database modules should never choose a type mapping based on import testing. If a module has a default of using mxDateTime and the package can't be found, it should raise an import error instead of silently reverting to Python's datetime or another type. > Of course you will then get a problem if the > administrator upgrades from Python 2.2 to 2.3, but you expect some > things to break anyway in such a case (and he should have done that a > long time ago anyway ;-) > > Still, I'm looking for a way to explicitely tell the module to use mx or > stdlib dates, or other date formats. I think we'd need something that does the following: Parameter binding: * allows mapping based on: - parameter position in the statement - parameter type (the database type code, if the database can provide this information) * allows assigning a binding function that converts the Python object to whatever the database requires (these will have to be database specific) These binding functions will then have to implement e.g. "bind the Python object as VARCHAR". Result set types: * allows mapping based on: - column position - column type (the database type code) * allows assigning a binding function that converts the database type to a Python object (these will have to be database specific) These binding functions will have to implement e.g. "convert the database type to a Python float". Given that mapping on both position and type is hard to implement using dictionaries and that setting the binding functions is usually only done once per query, I think that using a callback to implement these mappings would be the most flexible way. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Aug 07 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From fumanchu at amor.org Mon Aug 7 19:54:32 2006 From: fumanchu at amor.org (Robert Brewer) Date: Mon, 7 Aug 2006 10:54:32 -0700 Subject: [DB-SIG] date/time handling References: <44CFC8DE.1080406@online.de> <44D1C3A0.1040806@egenix.com> <20060806093739.GA28803@kruiskruid.demon.nl> <435DF58A933BA74397B42CDEB8145A86224C1C@ex9.hostedexchange.local> <20060807064825.GA8359@kruiskruid.demon.nl> Message-ID: <435DF58A933BA74397B42CDEB8145A86224C23@ex9.hostedexchange.local> Mario Frasca wrote: On 2006-0806 14:01:27, Robert Brewer wrote: > > > but then also data *going* to the database must be > > > clearly typed. > > > > Yes. The value-adaptation layer must be aware of the target column types. 
> > I think that the value adaptation layer should not be aware > of the target column types, it should, according to me, > base every choice on local information, that is: the type > of the incoming data, be it *from* the database (and the > type information is present in most engines) or *to* the > database (then the type information should be agreed on, > that is what we are doing here). if the user sends a string, > it will not be translated and the user is responsible for it > being a valid string representation of the typed value > to be stored in the column. Then you are adding zero value in the DB-API layer over what every Python DB wrapper already does. That works great until you write integration code for multiple legacy databases, where some dates are stored in ADO DBDATETIME, some are TEXT in "YYYYMMDD" format, and some are SQLite "typeless". Providing the DB-API adaptation layer with both the Python type and the database type can relieve the vast majority of that burden from the user. It works in Dejavu, and exists there in direct response to the limitations of the approach you describe (which Dejavu did previously). Robert Brewer System Architect Amor Ministries fumanchu at amor.org -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/db-sig/attachments/20060807/f4931d39/attachment.htm From farcepest at gmail.com Wed Aug 9 04:27:24 2006 From: farcepest at gmail.com (Andy Dustman) Date: Tue, 8 Aug 2006 22:27:24 -0400 Subject: [DB-SIG] Question about MySQL insert function In-Reply-To: <25ef693e0607301536o20b04e18td10da3e828454a35@mail.gmail.com> References: <25ef693e0607301536o20b04e18td10da3e828454a35@mail.gmail.com> Message-ID: <9826f3800608081927v4fd25f7aufc2962d59e09f38f@mail.gmail.com> On 7/30/06, Chris Curvey wrote: > Just wondering...do you need to commit() ? He does. Autocommit is off by default, as specified by PEP-249. -- The Pythonic Principle: Python works the way it does because if it didn't, it wouldn't be Python. From paul at snake.net Fri Aug 11 01:48:51 2006 From: paul at snake.net (Paul DuBois) Date: Thu, 10 Aug 2006 18:48:51 -0500 Subject: [DB-SIG] MySQLdb: DeprecationWarning: begin() is non-standard and will be removed in 1.3 Message-ID: I'm seeing this from my scripts that use MySQLdb now: ./transaction.py:63: DeprecationWarning: begin() is non-standard and will be removed in 1.3 conn.begin () So, what's the standard thing to do? Just omit the begin() calls? (Given that MySQLdb starts with auto-commit disabled, presumably that means only commit() and rollback() are needed?) From chris at cogdon.org Fri Aug 11 02:26:11 2006 From: chris at cogdon.org (Chris Cogdon) Date: Thu, 10 Aug 2006 17:26:11 -0700 Subject: [DB-SIG] MySQLdb: DeprecationWarning: begin() is non-standard and will be removed in 1.3 In-Reply-To: References: Message-ID: <82AF2498-D9BC-4ACB-9B28-D8E211568C72@cogdon.org> On Aug 10, 2006, at 4:48 PM, Paul DuBois wrote: > I'm seeing this from my scripts that use MySQLdb now: > > ./transaction.py:63: DeprecationWarning: begin() is non-standard > and will be > removed in 1.3 > conn.begin () > > So, what's the standard thing to do? Just omit the begin() calls? > (Given > that MySQLdb starts with auto-commit disabled, presumably that > means only > commit() and rollback() are needed?) Yes, that's right. Just commit and rollback are needed. -- ("`-/")_.-'"``-._ Chris Cogdon . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' 
((,.-' ((,/ fL From mal at egenix.com Fri Aug 11 11:36:33 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 11 Aug 2006 11:36:33 +0200 Subject: [DB-SIG] parameter and column binding (date/time handling) In-Reply-To: References: Message-ID: <44DC4FA1.50109@egenix.com> Vernon Cole wrote: > BAH! This is the 21st century! Who cares about the column POSITION? > Bind on the column NAME. > You guys have gotta quit thinking in SQL. It > is a crappy language. Think in Python. > A comparison like: > if DataBaseRow.admitTime.date() == datetime.datetime.today().date() > ought to work (assuming the table has a field named "admitTime"). Note that column position is the only safe way to address both a input parameter and an output column across databases: Input parameters usually don't correspond to a column and if they do the information is often enough not available from the database. Output columns can correspond to a table column, but don't have to (think "select count(*) from mytable where x=?"). Even when they do, the database may apply case conversion to the column name, so it's not clear whether the column names gets returned as-is, in uppercase or in lowercase or even whether the database makes this information available at all. As a result, you can't make any assumptions on column names in the DB API spec. You can in higher level wrappers crafted to support a few selected databases, but the DB API is focusing on providing a standard for a very broad range of backends. We should really have a FAQ for these things. > -- > Vernon Cole > > > > -----Original Message----- > From: db-sig-bounces at python.org [mailto:db-sig-bounces at python.org] On > Behalf Of M.-A. Lemburg > Sent: Monday, August 07, 2006 4:18 AM > To: Christoph Zwerschke > Cc: db-sig at python.org > Subject: Re: [DB-SIG] parameter and column binding (date/time handling) > > Christoph Zwerschke wrote: >> Andy Todd wrote: >> > I don't know about the mechanism, but my preference for any DB-API >> module would be stdlib first and then mx.DateTime. >> >> Ok, I think I will do that. Otherwise, as soon as an administrator >> installs mx.DateTime, some non-related DB-API 2 applications may >> suddenly break. > > Database modules should never choose a type mapping based > on import testing. > > If a module has a default of using mxDateTime and the package > can't be found, it should raise an import error instead of > silently reverting to Python's datetime or another type. > >> Of course you will then get a problem if the >> administrator upgrades from Python 2.2 to 2.3, but you expect some >> things to break anyway in such a case (and he should have done that a >> long time ago anyway ;-) >> >> Still, I'm looking for a way to explicitely tell the module to use mx > or >> stdlib dates, or other date formats. > > I think we'd need something that does the following: > > Parameter binding: > > * allows mapping based on: > - parameter position in the statement > - parameter type (the database type code, if the database can > provide this information) > > * allows assigning a binding function that converts the Python > object to whatever the database requires (these will have to > be database specific) > > These binding functions will then have to implement e.g. > "bind the Python object as VARCHAR". 
> > Result set types: > > * allows mapping based on: > - column position > - column type (the database type code) > > * allows assigning a binding function that converts the database > type to a Python object (these will have to be database specific) > > These binding functions will have to implement e.g. > "convert the database type to a Python float". > > Given that mapping on both position and type is hard to > implement using dictionaries and that setting the binding > functions is usually only done once per query, I think that > using a callback to implement these mappings would be the > most flexible way. > -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Aug 11 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From haraldarminmassa at gmail.com Sun Aug 13 10:56:46 2006 From: haraldarminmassa at gmail.com (Harald Armin Massa) Date: Sun, 13 Aug 2006 10:56:46 +0200 Subject: [DB-SIG] cx_Oracle - any way to change the user of a connection? Message-ID: <7be3f35d0608130156m79d1a1b6g87979493fe9d0ef5@mail.gmail.com> I want to have a connections which stays activated: cn=cx_Oracle.connect("dataset", "lowprivuser", "password") and change the user of that connection. Reason: I want to have a web application, which connects ONE TIME (per process/thread) to Oracle, and switches the privileges on the fly. I looked into "SET ROLE "; but as much as I understand thats pureley additional, so: cn=cx_Oracle.connect("dataset", "lowprivuser", "password") # gives the privs of "lowprivuser" cs=cn.cursor() cs.execute ("SET ROLE USERFISCH identified by secret2006") then gives the privs of "lowprivuser" plus the privs of USERFISCH, and an additional cs.execute ("SET ROLE USERFOO identified by othersecret") adds the privs of "USERFOO", WITHOUT loosing those of USERFISCH. Is there any way to do this? Best wishes, Harald -- GHUM Harald Massa persuadere et programmare Harald Armin Massa Reinsburgstra?e 202b 70197 Stuttgart 0173/9409607 - Let's set so double the killer delete select all. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/db-sig/attachments/20060813/c5a25b2b/attachment.html From travislspencer at gmail.com Mon Aug 14 16:08:07 2006 From: travislspencer at gmail.com (Travis Spencer) Date: Mon, 14 Aug 2006 07:08:07 -0700 Subject: [DB-SIG] UML diagram of Python DB API 2.0 specification In-Reply-To: References: Message-ID: On 1/14/06, Travis Spencer wrote: > You will find the UML diagram in the following formats at each respective URL: They've moved. They are now at http://web.cecs.pdx.edu/~tspencer/stash/projects/perftrack/umldiagram/. -- Regards, Travis Spencer From ccurvey at gmail.com Mon Aug 14 16:23:30 2006 From: ccurvey at gmail.com (Chris Curvey) Date: Mon, 14 Aug 2006 10:23:30 -0400 Subject: [DB-SIG] pyodbc on osx? Message-ID: <25ef693e0608140723k7e6f9ccfl929aa25331d3abfd@mail.gmail.com> I really like the pyodbc package for getting to MS-SQL on Windows. Has anyone been able to make this work with ODBC drivers on OSX? (Or is there another open source solution that works?) -Chris -- The short answer is "Yes." The long answer is "No." 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/db-sig/attachments/20060814/1974e651/attachment.htm

From Chris.Clark at ingres.com Mon Aug 14 19:42:27 2006
From: Chris.Clark at ingres.com (Chris Clark)
Date: Mon, 14 Aug 2006 10:42:27 -0700
Subject: [DB-SIG] pyodbc on osx?
In-Reply-To: <25ef693e0608140723k7e6f9ccfl929aa25331d3abfd@mail.gmail.com>
References: <25ef693e0608140723k7e6f9ccfl929aa25331d3abfd@mail.gmail.com>
Message-ID: <44E0B603.6040001@ingres.com>

Chris Curvey wrote:
> I really like the pyodbc package for getting to MS-SQL on Windows.
> Has anyone been able to make this work with ODBC drivers on OSX? (Or
> is there another open source solution that works?)

I've not tried it personally, I get the impression pyodbc is Windows focused at the moment, you might try emailing Michael directly and ask him. He is very approachable and good at responding to mail.

Asking the obvious question first ;-) I'm assuming that you don't have a Python driver for the database you need on OSX, hence the ODBC requirement?

One alternative is the Ingres DBI driver (GPL license), one of the less well known features is that it is ODBC based. We've successfully built with UnixODBC but this isn't the classic deployment option. Supporting other ODBC drivers isn't a primary feature but is a stretch goal.

If you want to pull down the latest release from http://www.ingres.com/products/Prod_Download_Python_DBI.html and apply a little patch and play with it, I'd love to get some feedback/patches.

=== ingres!main!generic!common!pydbi setup.py rev 25 ====
136c136
< libraries=["iiodbc.1","m", "c"]
---
> libraries=["odbc","m", "c"]
171c171
< include_dirs=[ii_system_files,"hdr/"],
---
> include_dirs=["/disk3/unixODBC-2.2.11/include/","hdr/"],
173c173
< library_dirs=[ii_system_lib],
---
> library_dirs=["/usr/lib"],

=== ingres!main!generic!common!pydbi!dbi ingresdbi.c rev 24 ====
5432c5432
< case SQL_BIT_VARYING:
---
> // case SQL_BIT_VARYING:

NOTE hard coded paths in code as this is a quick patch! Modify accordingly

You can just use a dsn to connect (or use the not yet well documented connection string parameter "connectstr=").

import ingresdbi
dc=ingresdbi.connect(dsn='localdb')
#dc=ingresdbi.connect(connectstr='DSN=localdb')
c=dc.cursor()
c.execute("select * from mytable")
print c.description
print c.fetchall()

IngresDBI has not been tested under OSX but it has been tested on a bunch of other platforms so I can't imagine any platform issues (famous last words...), you may find some Ingres'isms in there but if you wanted to #ifdef them that would be a neat patch that I would like to see.

Finally, other options for odbc and Python (under Unix like systems) include mxODBC and the SQL Relay.

Hope this helps get you started, good luck!

Chris

From sam at superduper.net Tue Aug 15 08:39:28 2006
From: sam at superduper.net (Sam Clegg)
Date: Tue, 15 Aug 2006 07:39:28 +0100
Subject: [DB-SIG] pyodbc on osx?
In-Reply-To: <44E0B603.6040001@ingres.com>
References: <25ef693e0608140723k7e6f9ccfl929aa25331d3abfd@mail.gmail.com> <44E0B603.6040001@ingres.com>
Message-ID: <20060815063928.GA5558@localhost.localdomain>

On Mon, Aug 14, 2006 at 10:42:27AM -0700, Chris Clark wrote:
> Chris Curvey wrote:
> > I really like the pyodbc package for getting to MS-SQL on Windows.
> > Has anyone been able to make this work with ODBC drivers on OSX? (Or
> > is there another open source solution that works?)
> > I've not tried it personally, I get the impression pyodbc is Windows > focused at the moment, you might could try emailing Michael directly and > ask him. he is very approachable and good at responding to mail. > > Asking the obvious question first ;-) I'm assuming that you don't have > a python driver for the database you need on OSX, hence the ODBC > requirement? > > One alternative is the Ingres DBI driver (GPL license), one of the less > well known features is that it is ODBC based. We've successfully built > with UnixODBC but this isn't the classic deployment option. Supporting > other ODBC drivers isn't a primary feature but is a stretch goal. Indeed, I have a patched version of pyodbc (http://pyodbc.sourceforge.net) that I build and run on linux (debian/unstable). If your interested I can send you my patch, its very small. I'm also planning on packaging pyodbc for debian so more people can start using it with unixodbc. -- sam clegg :: sam at superduper.net :: http://superduper.net/ :: PGP : D91EE369 $superduper: .signature,v 1.13 2003/06/17 10:29:24 sam Exp $ -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: Digital signature Url : http://mail.python.org/pipermail/db-sig/attachments/20060815/a1ec65f2/attachment.pgp From charlie at egenix.com Tue Aug 15 15:01:19 2006 From: charlie at egenix.com (Charlie Clark) Date: Tue, 15 Aug 2006 15:01:19 +0200 Subject: [DB-SIG] re pyodbc on osx? Message-ID: > I really like the pyodbc package for getting to MS-SQL on Windows. Has > anyone been able to make this work with ODBC drivers on OSX? (Or is > there > another open source solution that works?) Our product is not open source but mxODBC works fine with the Actual Software drivers for MS SQL. Tested on Mac OS X Tiger with Python 2.3.5 and 2.4.3 and MS SQL 2005. Charlie -- Charlie Clark eGenix.com Professional Python Services directly from the Source From brunson at brunson.com Wed Aug 16 06:09:12 2006 From: brunson at brunson.com (Eric Brunson) Date: Tue, 15 Aug 2006 22:09:12 -0600 Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method. In-Reply-To: <8393fff0608070020n424b2172i5138b7c5c0032563@mail.gmail.com> References: <8393fff0608070020n424b2172i5138b7c5c0032563@mail.gmail.com> Message-ID: <44E29A68.3070207@brunson.com> Martin Blais wrote: > Hi > > I want to propose a few improvements on the DBAPI 2.0 Cursor.execute() > method interface. You can find the details of my proposed changes > here: > http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.html > > And a pure-Python preliminary implementation of these changes that can > sit on an existing DBAPI 2.0 implementation (tested with psycopg2): > http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.py > > (If this gets accepted IMO this preliminary hack should instead be > implemented at the lower-level of the specific DBAPI implementations > for performance reasons...) > > I've only skimmed through the document once. The changes seem quite pythonic, I'd be interested in hearing other opinions. From haraldarminmassa at gmail.com Wed Aug 16 08:58:54 2006 From: haraldarminmassa at gmail.com (Harald Armin Massa) Date: Wed, 16 Aug 2006 08:58:54 +0200 Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method. 
In-Reply-To: <44E29A68.3070207@brunson.com>
References: <8393fff0608070020n424b2172i5138b7c5c0032563@mail.gmail.com> <44E29A68.3070207@brunson.com>
Message-ID: <7be3f35d0608152358t562ddc10o569ed19dd2a5bf96@mail.gmail.com>

I skimmed them, and in my eyes these changes involve a bit too much magic to be helpful across the full range of database accesses.

"Since columns is a sequence, and sequences in SQL are always joined by ,, "

That is clearly wrong: with the SQL engine I use (PostgreSQL) there is the data type "ARRAY", so sequences are stored as arrays.

"SELECT name, address FROM %s WHERE id = %S"

As far as I understand it: big S means escaping, small s means "do not escape"? For me this bears too much risk for too little gain. Especially since dynamically exchanging the names of tables and columns in SQL queries is a totally different beast than changing parameters. "Different beast" as in:

"usually has to happen in two steps"

is quite incorrect concerning the "usually", in my experience. The dynamic exchanging of table names within statements is the ABSOLUTE minority, 1 out of 40 or fewer statements. Especially since queries querying only one table are the absolute minority, and the dynamic exchange of 3 tables is undebuggable :)

So from my point of view, -1

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
- Let's set so double the killer delete select all.

From mal at egenix.com Wed Aug 16 12:54:24 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 16 Aug 2006 12:54:24 +0200
Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method.
In-Reply-To: <7be3f35d0608152358t562ddc10o569ed19dd2a5bf96@mail.gmail.com>
References: <8393fff0608070020n424b2172i5138b7c5c0032563@mail.gmail.com> <44E29A68.3070207@brunson.com> <7be3f35d0608152358t562ddc10o569ed19dd2a5bf96@mail.gmail.com>
Message-ID: <44E2F960.1020704@egenix.com>

Harald Armin Massa wrote:
> I skimmed them, and in my eyes these changes involve a bit too much
> magic to be helpful across the full range of database accesses.
>
> "Since columns is a sequence, and sequences in SQL are always joined by
> ,, "
>
> That is clearly wrong: with the SQL engine I use (PostgreSQL) there is
> the data type "ARRAY", so sequences are stored as arrays.
>
> "SELECT name, address FROM %s WHERE id = %S"
>
> As far as I understand it: big S means escaping, small s means "do not
> escape"? For me this bears too much risk for too little gain. Especially
> since dynamically exchanging the names of tables and columns in SQL
> queries is a totally different beast than changing parameters.
> "Different beast" as in:
>
> "usually has to happen in two steps"
>
> is quite incorrect concerning the "usually", in my experience. The
> dynamic exchanging of table names within statements is the ABSOLUTE
> minority, 1 out of 40 or fewer statements. Especially since queries
> querying only one table are the absolute minority, and the dynamic
> exchange of 3 tables is undebuggable :)
>
> So from my point of view, -1

Same here.

Note that we have already discussed deprecating the pyformat binding format due to the many problems with it, e.g. users often mistakenly applying Python string formatting to the statement string.
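To make that failure mode concrete, here is a minimal sketch (assuming a driver whose paramstyle is pyformat or format, e.g. psycopg2; the DSN, the "users" table and the variable names are made up for illustration):

import psycopg2  # any DB-API module that uses %s placeholders behaves similarly

conn = psycopg2.connect("dbname=test")   # placeholder DSN
cur = conn.cursor()
user_id = "42"

# Mistaken use: Python's % operator splices the value into the SQL text,
# bypassing the driver's escaping (and opening the door to SQL injection).
cur.execute("SELECT name FROM users WHERE id = %s" % user_id)

# Intended use: statement and parameters are passed separately, and the
# driver binds and escapes the value itself.
cur.execute("SELECT name FROM users WHERE id = %s", (user_id,))

The two calls look almost identical, which is exactly why the pyformat/format placeholders invite this mistake.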
--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source (#1, Aug 16 2006)
>>> Python/Zope Consulting and Support ... http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From ianb at colorstudy.com Wed Aug 16 20:29:30 2006
From: ianb at colorstudy.com (Ian Bicking)
Date: Wed, 16 Aug 2006 13:29:30 -0500
Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method.
In-Reply-To: <44E29A68.3070207@brunson.com>
References: <8393fff0608070020n424b2172i5138b7c5c0032563@mail.gmail.com> <44E29A68.3070207@brunson.com>
Message-ID: <44E3640A.9020302@colorstudy.com>

> Martin Blais wrote:
>> I want to propose a few improvements on the DBAPI 2.0 Cursor.execute()
>> method interface. You can find the details of my proposed changes
>> here:
>> http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.html
>>
>> And a pure-Python preliminary implementation of these changes that can
>> sit on an existing DBAPI 2.0 implementation (tested with psycopg2):
>> http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.py
>>
>> (If this gets accepted IMO this preliminary hack should instead be
>> implemented at the lower-level of the specific DBAPI implementations
>> for performance reasons...)

If it can be implemented as a wrapper it doesn't seem too important to me. It also doesn't seem very performance-sensitive.

I would like to see some improvements in DB-API, but these are in the area of better metadata about connections and normalizing some of the stickier areas (like paramstyle), and maybe some more normalization of connection instantiation. I think the norm should be that DB-API connections get wrapped if you are concerned about convenience.

All that said, I think there's also room for more standards, but they can build on the DB-API instead of extending it.

--
Ian Bicking | ianb at colorstudy.com | http://blog.ianbicking.org

From mal at egenix.com Wed Aug 16 21:10:10 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 16 Aug 2006 21:10:10 +0200
Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method.
In-Reply-To: <44E3640A.9020302@colorstudy.com>
References: <8393fff0608070020n424b2172i5138b7c5c0032563@mail.gmail.com> <44E29A68.3070207@brunson.com> <44E3640A.9020302@colorstudy.com>
Message-ID: <44E36D92.4050703@egenix.com>

Ian Bicking wrote:
>> Martin Blais wrote:
>>> I want to propose a few improvements on the DBAPI 2.0 Cursor.execute()
>>> method interface. You can find the details of my proposed changes
>>> here:
>>> http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.html
>>>
>>> And a pure-Python preliminary implementation of these changes that can
>>> sit on an existing DBAPI 2.0 implementation (tested with psycopg2):
>>> http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.py
>>>
>>> (If this gets accepted IMO this preliminary hack should instead be
>>> implemented at the lower-level of the specific DBAPI implementations
>>> for performance reasons...)
>
> If it can be implemented as a wrapper it doesn't seem too important to
> me. It also doesn't seem very performance-sensitive.
>
> I would like to see some improvements in DB-API, but these are in the
> area of better metadata about connections and normalizing some of the
> stickier areas (like paramstyle), and maybe some more normalization of
> connection instantiation. I think the norm should be that DB-API
> connections get wrapped if you are concerned about convenience.

+1

> All that said, I think there's also room for more standards, but they
> can build on the DB-API instead of extending it.

+1

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source (#1, Aug 16 2006)
>>> Python/Zope Consulting and Support ... http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/
________________________________________________________________________

::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! ::::

From pf_moore at yahoo.co.uk Thu Aug 17 21:58:03 2006
From: pf_moore at yahoo.co.uk (Paul Moore)
Date: Thu, 17 Aug 2006 20:58:03 +0100
Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method.
References: <8393fff0608070020n424b2172i5138b7c5c0032563@mail.gmail.com>
Message-ID: 

"Martin Blais" writes:
> I want to propose a few improvements on the DBAPI 2.0 Cursor.execute()
> method interface. You can find the details of my proposed changes
> here:
> http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.html

The model of query execution you are assuming is nothing like that used by Oracle (in cx_Oracle in particular). You can certainly build up bits of a query string using Python string formatting - this is nothing to do with the DB API, but on the other hand, it is also *extremely* uncommon in my experience.

However, you assume that the "second stage", of adding variable bindings supplied in the cursor.execute call, is also a string formatting exercise (just with automatic escaping). This is most certainly not the case in Oracle - the query is sent to the DB engine as given, with variable placeholders intact, and the variable bindings are sent independently. This is a crucial optimisation for Oracle - with the code

c.execute("select * from emp where id = :id", {"id": 100})
c.execute("select * from emp where id = :id", {"id": 200})

the DB engine only sees a SINGLE query, run twice with different bindings. The query plan can be cached and optimised on this basis. If the ID was interpolated by Python, Oracle would see 2 different queries, and would need to re-parse and re-optimise for each.

So, your proposal for unifying the 2 steps you see does not make sense in the context of (cx_)Oracle - the steps are utterly different.

Sorry for going on at such length, but I get twitchy every time I see people assume that parameter binding is simply a client-side string interpolation exercise. That approach is the reason that huge numbers of poorly written Visual Basic programs exist, which destroy performance on Oracle databases. (It's also the cause of many SQL injection attacks, but I don't want to make too much of that, as I'd be getting perilously close to spreading FUD without providing more detail than anyone would be able to stand :-)) I'd hate to see Python end up falling into the same trap.

We now return you to your normal schedule :-)

Paul.

--
The most effective way to get information from usenet is not to ask a question; it is to post incorrect information.
-- Aahz's Law

From nicolas at garden-paris.com Fri Aug 18 12:00:14 2006
From: nicolas at garden-paris.com (Nicolas Grilly)
Date: Fri, 18 Aug 2006 12:00:14 +0200
Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method.
In-Reply-To: 
References: <8393fff0608070020n424b2172i5138b7c5c0032563@mail.gmail.com>
Message-ID: <554cfd4f0608180300h5cbfe03dnee747abcd36b7f29@mail.gmail.com>

On 8/17/06, Paul Moore wrote:
> However, you assume that the "second stage", of adding variable
> bindings supplied in the cursor.execute call, is also a string
> formatting exercise (just with automatic escaping). This is most
> certainly not the case in Oracle - the query is sent to the DB engine
> as given, with variable placeholders intact, and the variable bindings
> are sent independently.

Paul, I totally agree with you. What you've described about parameter binding in Oracle is true for most databases too. Using parameter binding is critical for performance (query plan caching) and security (SQL injection).

> "Martin Blais" writes:
> > I want to propose a few improvements on the DBAPI 2.0 Cursor.execute()
> > method interface. You can find the details of my proposed changes
> > here:
> > http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.html

Martin, I think your improvements are too specific to have a place in DB-API. You can build a little layer above DB-API, with specific Connection and Cursor classes implementing your desired behaviors. DB-API needs to stay simple to foster development of drivers for every database.

Cheers,

Nicolas Grilly

From ob at tenios.de Fri Aug 25 16:58:01 2006
From: ob at tenios.de (Oliver Benicke)
Date: Fri, 25 Aug 2006 16:58:01 +0200
Subject: [DB-SIG] Problems with cx_Oracle and debian
Message-ID: <44EF0FF9.70008@tenios.de>

Hi,

I have installed cx_Oracle 4.2-1 on Debian, but when I try to import the module with 'import cx_Oracle' I get an error:

Traceback (most recent call last):
  File "./oracle.py", line 2, in ?
    import cx_Oracle
ImportError: /lib/tls/libc.so.6: version `GLIBC_2.4' not found (required by /root/cx_Oracle.so)

I've installed cx_Oracle with:

alien -d --keep-version cx_Oracle
dpkg -i cx-oracle

Does cx_Oracle work with Debian? Does anyone have any suggestions?

From andy47 at halfcooked.com Sat Aug 26 08:59:19 2006
From: andy47 at halfcooked.com (Andrew Todd)
Date: Sat, 26 Aug 2006 16:59:19 +1000
Subject: [DB-SIG] Problems with cx_Oracle and debian
In-Reply-To: <44EF0FF9.70008@tenios.de>
References: <44EF0FF9.70008@tenios.de>
Message-ID: <44EFF147.4090202@halfcooked.com>

Oliver Benicke wrote:
> Hi,
>
> I have installed cx_Oracle 4.2-1 on Debian, but when I try to import the
> module with 'import cx_Oracle' I get an error:
>
> Traceback (most recent call last):
>   File "./oracle.py", line 2, in ?
>     import cx_Oracle
> ImportError: /lib/tls/libc.so.6: version `GLIBC_2.4' not found (required
> by /root/cx_Oracle.so)
>
> I've installed cx_Oracle with:
> alien -d --keep-version cx_Oracle
> dpkg -i cx-oracle
>
> Does cx_Oracle work with Debian? Does anyone have any suggestions?
>

I've used cx_Oracle on Debian before without a problem, although I've always installed it from source [1]. I suspect that the problem is with trying to use an RPM on a Debian system. From the error messages it looks like the necessary libraries aren't in the same locations in the two different distributions.
[1] http://www.halfcooked.com/mt/archives/001014.html

Regards,
Andy
--
--------------------------------------------------------------------------------
From the desk of Andrew J Todd esq - http://www.halfcooked.com/

From andy47 at halfcooked.com Sat Aug 26 12:26:29 2006
From: andy47 at halfcooked.com (Andy Todd)
Date: Sat, 26 Aug 2006 20:26:29 +1000
Subject: [DB-SIG] cx_Oracle - any way to change the user of a connection?
In-Reply-To: <7be3f35d0608130156m79d1a1b6g87979493fe9d0ef5@mail.gmail.com>
References: <7be3f35d0608130156m79d1a1b6g87979493fe9d0ef5@mail.gmail.com>
Message-ID: <44F021D5.50008@halfcooked.com>

Harald Armin Massa wrote:
> I want to have a connection which stays activated:
>
> cn = cx_Oracle.connect("dataset", "lowprivuser", "password")
>
> and change the user of that connection.
>
> Reason: I want to have a web application, which connects ONE TIME (per
> process/thread) to Oracle, and switches the privileges on the fly.
>
> I looked into "SET ROLE "; but as much as I understand
> that's purely additional, so:
>
> cn = cx_Oracle.connect("dataset", "lowprivuser", "password")
> # gives the privs of "lowprivuser"
> cs = cn.cursor()
>
> cs.execute("SET ROLE USERFISCH identified by secret2006")
>
> then gives the privs of "lowprivuser" plus the privs of USERFISCH,
> and an additional
>
> cs.execute("SET ROLE USERFOO identified by othersecret")
>
> adds the privs of "USERFOO", WITHOUT losing those of USERFISCH.
>
> Is there any way to do this?
>
> Best wishes,
>
> Harald
>

Technically no. The definition of a connection to an Oracle database is the user you connect as and the database you connect to. Change one or the other and you have a different connection. It's not like, for instance, MySQL where you can change the 'database' you are connected to by updating an attribute on the connection.

Roles are a whole different kettle of fish and can indeed grant you rights and privileges that the user you connect as doesn't have, but you will still be that user. If it is any consolation it's an Oracle thing and nothing to do with the implementation of cx_Oracle.

About the best thing that you can do is create a fresh connection with the same name:

>>> db1 = cx_Oracle.connect('username/password@database')
>>> cursor1 = db1.cursor()
>>> cursor1.execute('')
...
>>> db1 = cx_Oracle.connect('another_user/password@database')
>>> cursor2 = db1.cursor()
...

Of course, any objects you create using the original definition of db1 will maintain that connection. In this example cursor1 will continue to use the privileges of 'username' whilst cursor2 has the privileges of 'another_user'. I can't help but think that this would get very confusing very fast.

I'm not entirely sure what you are trying to achieve in your application; the overheads for creating a fresh connection are fairly light (on the client side at least ;-) but it's generally bad form to change users on the fly like that in the Oracle world. The usual approach - which you've already got 50% of the way towards - is to connect as a user with low or no privileges and then 'assume' a specific role that has been granted the rights you require to perform the operations necessary for that particular session.
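For what it's worth, a minimal sketch of that usual approach on top of cx_Oracle might look like the one below. The connect string, role names, passwords and table are the made-up values from this thread, and the helper function is purely illustrative - a sketch of the idea, not a tested recipe:

import cx_Oracle

# One long-lived, low-privilege connection per process/thread.
conn = cx_Oracle.connect("lowprivuser/password@dataset")

def query_with_role(set_role_sql, statement, binds=None):
    # Enable the role needed for this piece of work, then run the
    # statement on the same low-privilege connection.
    cur = conn.cursor()
    cur.execute(set_role_sql)
    cur.execute(statement, binds or {})
    return cur.fetchall()

rows = query_with_role("SET ROLE USERFISCH IDENTIFIED BY secret2006",
                       "SELECT * FROM some_table WHERE owner = :who",
                       {"who": "HARALD"})

Whether roles accumulate or replace each other across successive SET ROLE calls is an Oracle question rather than a cx_Oracle one, so it is worth checking against your own instance.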
Regards,
Andy
--
--------------------------------------------------------------------------------
From the desk of Andrew J Todd esq - http://www.halfcooked.com/

From haraldarminmassa at gmail.com Sat Aug 26 12:37:01 2006
From: haraldarminmassa at gmail.com (Harald Armin Massa)
Date: Sat, 26 Aug 2006 12:37:01 +0200
Subject: [DB-SIG] cx_Oracle - any way to change the user of a connection?
In-Reply-To: <44F021D5.50008@halfcooked.com>
References: <7be3f35d0608130156m79d1a1b6g87979493fe9d0ef5@mail.gmail.com> <44F021D5.50008@halfcooked.com>
Message-ID: <7be3f35d0608260337i516afc34hdb25c3b6169f0f14@mail.gmail.com>

Andy,

thanks for taking the time to explain it fully. Especially your explanation of that Oracle way of the world is very helpful.

> Technically no. The definition of a connection to an Oracle database is
> the user you connect as and the database you connect to. Change one or
> the other and you have a different connection.

> If it is any consolation it's an Oracle thing and nothing to do with the
> implementation of cx_Oracle.

Yes, I suspected this already. It's just usually more helpful to ask Pythonistas :)

> I can't help but think that this would get very confusing very fast.

Confusing at least :)

> I'm not entirely sure what you are trying to achieve in your
> application, the overheads for creating a fresh connection are fairly
> light (on the client side at least ;-) but it's generally bad form to
> change users on the fly like that in the Oracle world.
>

Probably because I came from an earlier Oracle client ... creating a fresh connection across a WLAN took up to 6 seconds, which is TOO much in a web application.

I have to be connected as the real user because I work with a given database structure, and the user is put into some history information.

Thank you very much! I am going to build fresh connections; if that is not quick enough I will cache them on a per-user basis.

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
- Let's set so double the killer delete select all.

From dieter at handshake.de Sun Aug 27 19:05:47 2006
From: dieter at handshake.de (Dieter Maurer)
Date: Sun, 27 Aug 2006 19:05:47 +0200
Subject: [DB-SIG] Problems with cx_Oracle and debian
In-Reply-To: <44EF0FF9.70008@tenios.de>
References: <44EF0FF9.70008@tenios.de>
Message-ID: <17649.53483.21558.809514@gargle.gargle.HOWL>

Oliver Benicke wrote at 2006-8-25 16:58 +0200:
>I have installed cx_Oracle 4.2-1 on Debian, but when I try to import the
>module with 'import cx_Oracle' I get an error:
>
>Traceback (most recent call last):
>  File "./oracle.py", line 2, in ?
>    import cx_Oracle
>ImportError: /lib/tls/libc.so.6: version `GLIBC_2.4' not found (required
>by /root/cx_Oracle.so)

Looks like "cx_Oracle.so" was linked against a special "libc.so.6" version ("GLIBC_2.4") and your system does not have this version installed.

I would rebuild "cx_Oracle.so". This should get rid of the problem (but you may need some development packages for the build).

--
Dieter
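Once cx_Oracle has been rebuilt from source (the usual route being python setup.py build and python setup.py install with ORACLE_HOME set), a quick smoke test along these lines confirms that the fresh module imports against the local libc and can reach a database. The credentials and TNS alias below are placeholders, not values from this thread:

import cx_Oracle

# The DB-API module attributes are a cheap first check that the import worked.
print cx_Oracle.apilevel      # '2.0'
print cx_Oracle.paramstyle    # 'named'

# Placeholder credentials and TNS alias - replace with your own.
conn = cx_Oracle.connect("scott/tiger@orcl")
cur = conn.cursor()
cur.execute("select sysdate from dual")
print cur.fetchone()
conn.close()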