[DB-SIG] DB API 2.0 Anal Compliance Unit Test for review

Henrik Ekelund henrik.ekelund@vpd.se
Tue, 18 Feb 2003 17:48:45 +0100


> -----Original Message-----
> From: Stuart Bishop [mailto:zen@shangri-la.dropbear.id.au]
> Sent: den 17 februari 2003 23:39
> To: db-sig@python.org
> Subject: Re: [DB-SIG] DB API 2.0 Anal Compliance Unit Test for review
> 
> 
> On Tuesday, February 18, 2003, at 06:38  AM, Henrik Ekelund wrote:
> 
> > The connect function in the adodb module takes one and only 
> one string
> > as parameter. The DB-API spec says that "It takes a number of 
> > parameters
> > that are database dependent." Doesn't that allow the use of only one
> > parameter? I had to modify the _connect method in order to 
> make it work
> > with adodbapi, two lines inserted after line 98 (of 
> dbapi20.py,v 1.4):
> >         except TypeError:
> >             return self.driver.connect( self.connect_args )
> 
> I take it you have something like:
> 	connect_args = 'somehidiousstring'
> 
> The following is correct for having a single argument to connect:
> 	connect_args = ('somehidiousstring',)
> 
> The self.driver.connect(*connect_args) then works, passing a single 
> argument
> to your connect method. At least if you are running a recent enough 
> version
> of Python to support that syntax.
> 

Yes, it now works fine. Thanks.
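For anyone following along, the difference between the two forms can be shown with a tiny sketch (the connect function and its argument here are stand-ins, not any real driver):

```python
def connect(dsn):
    # Stand-in for a driver's connect() that takes a single string.
    return "connected to %s" % dsn

# A one-element tuple; the trailing comma is what makes it a tuple
# rather than a plain parenthesized string.
connect_args = ('somehidiousstring',)

# dbapi20 calls self.driver.connect(*self.connect_args); the *
# unpacks the tuple into positional arguments, so a one-element
# tuple becomes a single-argument call.
result = connect(*connect_args)
```

With `connect_args = 'somehidiousstring'` (no tuple), the `*` would instead unpack the string into one argument per character, which is why the tuple form is required.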

> > * I have some tests that Null values are mapped to None. 
> This belongs 
> > in
> > a DB-API 2.0 test, doesn't it? If you agree I can try to 
> add a version 
> > of
> > my tests to dbapi20.
> 
> Sounds good. If a RDBMS doesn't support NULL, the stub can 
> always turn 
> the
> test off. Send me your code, or I'll just knock up a simple
> 'insert NULL, select it, make sure it is returned as None' test which
> should be good enough.
> 

My existing code needs other table definitions, so I think your idea 
is better.
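A minimal version of the proposed 'insert NULL, select it, make sure it is returned as None' test might look like this (sqlite3 stands in here for whichever DB-API driver is under test; table name and column are made up):

```python
import sqlite3

# Insert a NULL, select it back, and check that the driver maps
# SQL NULL to Python's None, as DB API 2.0 requires.
con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute('create table booze (name varchar(20))')
cur.execute('insert into booze values (NULL)')
cur.execute('select name from booze')
row = cur.fetchone()
assert row[0] is None
con.close()
```

In dbapi20 itself the connection would come from self._connect() and the DDL from the stub's booze table, but the shape of the test is the same.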

> > * There is currently only one data type tested as 
> parameter, varchar.
> > What about datetime, numeric and binary? Maybe _paraminsert or 
> > _testexecute
> > could iterate over the datatypes? If not all drivers supports all 
> > datatypes
> > this could be solved by letting the subclass return the 
> datatypes or 
> > have a
> > property of the featues it supports (i.e supportsBinaryParameter=1,
> > supports??=1)
> > Again, if you think this is a good idea maybe I can help modify 
> > dbapi20.py.
> >
> > * Should a complete dbapi20 testsuite test the features of 
> callproc, 
> > nextset
> > and
> > setoutputtest? And drivers that don't support these 
> features overrides 
> > them
> > with pass? I believe it is possible to make *almost* 
> portable tests of
> > stored procedures too. Again, if you think it's a good idea I may 
> > help..
> 
> The best I could come up with for callproc was:
>      lower_func = 'lower'
> 	def test_callproc(self):
> 		con = self._connect()
> 		try:
> 			cur = con.cursor()
> 			if self.lower_func and hasattr(cur,'callproc'):
> 			    r = cur.callproc(self.lower_func,('FOO',))
> 			    self.assertEqual(r,('FOO',))
> 			    r = cur.fetchall()
> 			    self.assertEqual(len(r),1)
> 			    self.assertEqual(len(r[0]),1)
> 			    self.assertEqual(r[0][0],'foo')
> 		finally:
> 			con.close()
> 
> I left it out of the last release as I wasn't sure if this was
> x-platform enough, but I think it can go in. I'm sure people
> will let me know if even this isn't generic enough or I've
> misinterpreted the behavior of cursor.callproc
> 

You were right that it is not completely general. In SQL Server the 
built-in LOWER function is not treated as a stored procedure, so I had 
to create a wrapper procedure around it.
I also had to change the first assertEqual statement from
	self.assertEqual(r,('FOO',))
to
	self.assertEqual(str(r[0]),'foo')
Lowercase 'foo', because I thought that lower_func should return lowercase 
in the in/out parameter and not as a result set. The asserts 
below then failed because I did not return a result set, so I added 
that to the wrapper too:
                create procedure templower
                    @theData varchar(50) OUT
                as
                    set @theData = lower(@theData)
                    select @theData

But now I see that you intended lower_func to work like this:
                create procedure templower
                    @theData varchar(50)
                as
                    select lower(@theData)
I also changed the test so it checks element zero of the 
returned sequence, because adodb returns a list rather than a tuple. 
Finally I added the str() function because adodb returns unicode 
rather than ascii. (This was also required a few lines below.)

So, should this test be included or not? A good thing is that it 
clarifies how callproc should work; a bad thing is probably that it 
requires quite a bit of tweaking from everyone to make it work.
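The adjusted assertions can be sketched like this (the r and rows values below are hard-coded stand-ins for what cur.callproc() and cur.fetchall() would return from a real driver):

```python
# adodbapi returns a list rather than a tuple from callproc, and
# unicode rather than byte strings, so the robust form compares
# element zero after coercing with str() instead of comparing the
# whole sequence against a tuple literal.
r = ['FOO']                      # stand-in for cur.callproc(lower_func, ('FOO',))
assert str(r[0]) == 'FOO'        # element-wise, not r == ('FOO',)

rows = [(u'foo',)]               # stand-in for cur.fetchall()
assert len(rows) == 1
assert len(rows[0]) == 1
assert str(rows[0][0]) == 'foo'  # str() tolerates unicode results
```

This keeps the test driver-neutral at the cost of being slightly looser than an exact type check.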

> I've never used 'multiple result sets', so am unsure as to what SQL
> throws out such a beasty. I guess issuing a couple of selects through
> a single call to cursor.executemany() would be the more cross platform
> way of doing this, but I'm not sure. Send code if you have an answer.

Here is a test, but it is a bit platform specific as it requires 
a stored procedure. I don't think it is much work to modify, though; 
it should be enough to change the SQL statements in the helper 
methods. I guess it suffers from the same advantages and disadvantages 
as the callproc test.

    def help_nextset_setUp(self,cur):
        '''Should create a procedure called deleteme
        that returns two result sets: first the
        number of rows in booze, then "name from booze"'''
        sql="""
            create procedure deleteme as
            begin
                select count(*) from booze
                select name from booze
            end
        """
        cur.execute(sql)

    def help_nextset_tearDown(self,cur):
        'If cleaning up is needed after nextSetTest'
        try:
            cur.execute("drop procedure deleteme")
        except:
            pass

    def test_nextset(self):
        con = self._connect()
        try:
            cur = con.cursor()

            for sql in self._populate():
                cur.execute(sql)

            self.help_nextset_setUp(cur)

            cur.callproc('deleteme')
            numberofrows = cur.fetchone()
            assert numberofrows[0] == len(self.samples)
            assert cur.nextset()
            names=cur.fetchall()
            assert len(names) == len(self.samples)
            s = cur.nextset()
            assert s is None, 'No more result sets, should return None'

        finally:
            try:
                self.help_nextset_tearDown(cur)
            finally:
                con.close()

> I don't see how setoutputsize can be tested. As an implementation is
> free to ignore it, there is no way to test that it is working 
> correctly.

Me neither, when I looked closer.

> Also the DDL, insert and select statements that would be 
> required would
> all be platform specific (?), so it becomes simpler for the 
> stub writer
> to simply implement their own test. I avoid BLOBS so havn't 
> worked with
> them, so you may have a better understanding of the issues 
> involved and
> have an answer.

Yes, it may be too much work to try to test everything. 
But I think that if there is ever a 3.0, it would be good to 
have detailed tests of the API. Tests serve well as 
documentation of the API, and I don't think it matters much 
if they have to be tweaked a bit for the different databases. 
On the contrary, it shows how the vendors implement things 
differently.

/Henrik Ekelund

