A little stricter type checking

Peter Otten __peter__ at web.de
Mon Sep 6 13:54:45 CEST 2004

Andrew Dalke wrote:

> Peter Otten wrote:
>> There are at least three more idioms that could (sometimes) be replaced
>> by type-checking:
> Though from what I can tell the OP wanted "real" type
> checking, as found in Java and C.  The original text was
> ] On the other hand in languages like C, java etc...
> ] where types are strict we have the guarantee that
> ] the variables will always be the same type without
> ] any change.
>> # "casting"
> Wouldn't that be type conversion, and not a check?

In a way it is both. Consider

def f(s):
    s = str(s)

interface Stringifiable:
    def __str__(self) -> str:

def f(Stringifiable s):
   s = str(s)

In the latter form the compiler can guarantee not that the call succeeds,
but that the interface is there at least, while in the former both steps
(check and creation/conversion) are blurred in the single str() call.
Don't let yourself be distracted from that by the fact that in Python str()
hardly ever fails.
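To make the hypothetical `interface Stringifiable` concrete, here is a sketch of how the two steps can be separated in Python itself, using typing.Protocol (a much later addition to the language, used here only for illustration):

```python
from typing import Protocol, runtime_checkable

# Sketch of the hypothetical `interface Stringifiable` declaration above.
@runtime_checkable
class Stringifiable(Protocol):
    def __str__(self) -> str: ...

def f(s):
    # the check, now a separate, explicit step...
    if not isinstance(s, Stringifiable):
        raise TypeError("object lacks the __str__ interface")
    # ...distinct from the creation/conversion
    return str(s)

print(f(42))   # '42'
```

Since every object inherits __str__ from object, the check passes for practically everything, which is just another way of saying that str() hardly ever fails.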

>> # check for an "interface"
>> try:
>>     lower = s.lower
>> except AttributeError:
>>     raise TypeError
>> else:
>>     lower()
> Mmm, there's a problem with this checking.  Consider
> class Spam:
>    lower = 8

Of course it is not a full replacement of a type declaration, but often used
to achieve similar goals. I guess that it would be immediately dismissed by
people experienced only in statically typed languages and most likely
replaced with something like

def f(str s):
    s = s.lower()

thus (sometimes) unnecessarily excluding every object with a string-like
"duck-type", i. e. a sufficient subset of the str interface for the above
function to work.
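A sketch of that exclusion, with ShoutingGreeting as an invented class that is not a str but offers the one method the function actually needs:

```python
def f(s):
    return s.lower()

class ShoutingGreeting:
    # not a str, but a sufficient subset of the str interface for f()
    def lower(self):
        return "hello"

print(f("HELLO"))               # 'hello'
print(f(ShoutingGreeting()))    # 'hello' -- a `str s` declaration would reject this
```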

> Though the odds are low.
>> # check for an "interface", LBYL-style
>> if not hasattr(foo, "bar"):
>>     raise TypeError
> Aren't both of these the "look before you leap" style?
> Well, except for checking for the callable.

I thought that "look before you leap" is doing something twice, once to see
if it is possible and then for real. Let's concentrate on attribute access
for the sake of the example:

# good style, IMHO
try:
    lower = s.lower
except AttributeError:
    # handle failure
# use lower, don't access s.lower anymore.

# LBYL, bad IMHO
try:
    s.lower
except AttributeError:
    # handle failure
# use lower, accessing it via attribute access again, e. g.
lower = s.lower # an unexpected failure lurks here

# LBYL, too, but not as easily discernible
# and therefore seen more often in the wild
if hasattr(s, "lower"):
    lower = s.lower # an unexpected failure lurks here
else:
    # handle failure

And now, just for fun, a class that exposes the difference:

>>> class DestructiveRead(object):
...     def __init__(self):
...             self.lower = 42
...     def __getattribute__(self, name):
...             result = object.__getattribute__(self, name)
...             delattr(self, name)
...             return result
>>> print DestructiveRead().lower # just do it
42
>>> d = DestructiveRead()
>>> if hasattr(d, "lower"): # try to play it safe
...     print d.lower
Traceback (most recent call last):
  File "<stdin>", line 2, in ?
  File "<stdin>", line 5, in __getattribute__
AttributeError: 'DestructiveRead' object has no attribute 'lower'

This would of course be pointless if there weren't (sometimes subtle)
real-world variants of the above.
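Such a variant need not be as artificial as DestructiveRead. A sketch, with QueueReader as a made-up class whose attribute is backed by a one-shot store:

```python
class QueueReader:
    # made-up example: every successful read consumes the stored value
    def __init__(self):
        self._items = [42]
    @property
    def item(self):
        try:
            return self._items.pop()
        except IndexError:
            raise AttributeError("item")   # makes hasattr() report False

r = QueueReader()
print(hasattr(r, "item"))   # True -- but this very check consumed the value!
try:
    print(r.item)
except AttributeError:
    print("gone")           # the unexpected failure lurking in the LBYL style
```

Binding the value once inside a try/except, EAFP-style, sidesteps the problem entirely.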

>> These are more difficult to hunt and I'm lazy.
>> However, I don't think you can tell from the structure alone whether an
>> explicit type declaration would be a superior solution in these cases.
> I do at times use tests like you showed, in order
> to get a more meaningful error message.  I don't
> think they are considered type checks -- perhaps
> protocol checks might be a better term?

Maybe - I think I'm a bit fuzzy about the terms type, interface and
protocol. In a way I tried to make that fuzziness explicit in my previous
post: In a statically typed language you may be tempted to ensure the
availability of a certain protocol by specifying a type that does fulfill
it, either because templates are not available or too tedious, and you
cannot make an interface (abstract base class) for every single method, no? 

In a dynamically typed language the same conceptual restrictions may be
spread over various constructs in the code (not only isinstance()), and the
translation from one system into the other is far from straightforward.
>> Should Python be extended to allow type declarations, I expect them to
>> appear in places where they reduce the usefulness of the code while
>> claiming to make it safe...
> As was mentioned in the @PEP (as I recall) one of the
> interesting possibilities is to use type checks with
> support for the adapter PEP so that inputs parameters
> can get converted to the right type, rather like your
> first point.
> @require(PILImage)
> def draw(image):
>     ...

Why not rely on PILImage's __init__() or __new__() for that:

def draw(image):
    image = PILImage(image)

Since the introduction of __new__() this can be almost a noop if the image
is already a PILImage instance.
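A sketch of that near-noop, with Image as a stand-in for PILImage invented for this example:

```python
class Image:
    def __new__(cls, data):
        if isinstance(data, Image):
            return data                # already an Image: hand it back unchanged
        return super().__new__(cls)
    def __init__(self, data):
        if isinstance(data, Image):
            return                     # don't re-initialize an existing instance
        self.data = data

def draw(image):
    image = Image(image)               # convert-or-pass-through at the boundary
    return image

a = Image("pixels")
assert draw(a) is a                    # no copy, no conversion
assert draw("raw").data == "raw"       # plain data gets wrapped
```

Note that __init__() still runs on the instance returned by __new__(), hence the second isinstance() guard.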

> As new image data types are added, they can be passed to
> draw() without need for an explicit convert step, so
> long as the adapter is registered.

I had a quick look at PEP 246 - Object Adaptation, and I'm not sure yet
whether the pythonic way to write e. g.

class EggsSpamAndHam (Spam,KnightsWhoSayNi):
    def ham(self): print "ham!"
    def __conform__(self,protocol):
        if protocol is Ham:
            # implements Ham's ham, but does not have a word
            return self
        if protocol is KnightsWhoSayNi:
            # we are no longer the Knights who say Ni!
            raise adaptForceFailException
        if protocol is Eggs:
            # Knows how to create the eggs!
            return Eggs()

would rather be along these lines:

class Eggs:
    def asEggs(self): 
        return self

class EggsSpamAndHam(Spam): # no need to inherit from KnightsWhoSayNi
    def ham(self): print "ham!"
    def asEggs(self):
        return Eggs()

That and a class-independent adaptation registry should achieve the same
functionality with much simpler code. The real limits to protocol
complexity are human understanding rather than technical feasibility. 
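A minimal sketch of such a class-independent registry, all names invented for illustration: a plain dict mapping (class, protocol) pairs to adapter callables.

```python
_registry = {}

def register_adapter(cls, protocol, adapter):
    _registry[(cls, protocol)] = adapter

def adapt(obj, protocol):
    if isinstance(obj, protocol):
        return obj                          # already conforms
    for cls in type(obj).__mro__:           # honour inheritance
        adapter = _registry.get((cls, protocol))
        if adapter is not None:
            return adapter(obj)
    raise TypeError("cannot adapt %r to %s" % (obj, protocol.__name__))

class Eggs:
    pass

class Ham:
    pass

# Ham doesn't inherit from Eggs, but it knows how to create the eggs!
register_adapter(Ham, Eggs, lambda ham: Eggs())

assert isinstance(adapt(Ham(), Eggs), Eggs)
```

No __conform__ protocol on the classes themselves, and no forced inheritance from protocols the class wants to disavow.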

Of course I may change my mind as soon as I see a compelling real-world
example.
