[Python-Dev] proposal for interfaces

John Williams jrw@pobox.com
Sun, 29 Sep 2002 15:26:54 -0500

Esteban U.C. Castro wrote:
 > I like it a lot! Anyway, if it can be implemented in python as is,
 > what is the point of the PEP? Making the 'interface' root class and/or
 > InterfaceError builtins, maybe?

Well, aside from the ego boost of having my very own PEP, I think the
nature of interfaces is such that they're vastly more useful if lots of
people use them.  Putting something in the standard distribution almost
guarantees that.

 > I have some comments which I thought I would bounce. I'll organize
 > these attending to the activities they relate to. Don't hesitate to
 > tell me if I'm saying something stupid. :)

You have some very good points; I'll address them individually below,
but let me get it out of the way that most of my programming background
is with compiled languages with strong type checking.  As much as I love
Python, sometimes I really miss the rigor of more strongly-typed
languages, so rather than trying to make something very Pythonic, I've
tried to go the opposite direction in order to complement what's there.

 > Define an interface
 > ===================
 > In your example, it seems that
 >   class Foo(interface):
 >     def default_foo(self, a, b):
 >       "Docstring for foo method."
 >       print "Defaults can be handy."
 > [I added the a, b arguments to illustrate one point below]
 > Does the following:
 >  - Defines the requirement that all objects that implement Foo have a
 >    foo function
 >  - Defines that foo should accept arguments of the form (a, b) maybe?
 >  - Sets a doc string for the foo method.
 >  - Sets a default implementation for the method.

Yes, exactly.

 > Some questions on this:
 >  - Can an interface only define method names (and maybe argument
 >    formats)?  I think it would be handy to let it expose attributes.

Actually I wanted to have attributes, too (and operators, since they're
just methods).  This brings up one of the uses I had in mind for the
prefixes in front of the method names.  To add a property, you could do
something like this:

  doc_myProperty = "docstring for myproperty"
  readonly_myOtherProperty = "docstring for a read-only property"
  writeonly_myLastProperty = "docstring for write-only property"

This would require implementations to either have properties named
"myProperty", "myOtherProperty" and "myLastProperty", or define methods
named __get__myProperty, __set__myProperty, __get__myOtherProperty, and
__set__myLastProperty.
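
Just to make the idea concrete, here's a rough sketch in today's Python
of how an interface metaclass might scan for these prefixes.  The
`collect_properties` helper and the exact prefix set are hypothetical,
not part of the proposal's spec:

```python
# Hypothetical sketch: collect property declarations from prefixed
# class attributes, roughly as the proposed interface machinery might.
PREFIXES = ("doc_", "readonly_", "writeonly_")

def collect_properties(cls):
    """Return {name: (kind, docstring)} for each prefixed attribute."""
    props = {}
    for attr, value in vars(cls).items():
        for prefix in PREFIXES:
            if attr.startswith(prefix):
                kind = prefix.rstrip("_")  # "doc", "readonly", "writeonly"
                props[attr[len(prefix):]] = (kind, value)
    return props

class Foo:
    doc_myProperty = "docstring for myProperty"
    readonly_myOtherProperty = "docstring for a read-only property"
```

Here `collect_properties(Foo)` would map "myProperty" to a read-write
declaration and "myOtherProperty" to a read-only one, which is all the
information the binding step would need.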

 >  - Is method names (and maybe format of arguments) the only thing you
 >    can 'promise' in the interface? In other words, is that the only
 >    type of guarantee that code that works against the interface can
 >    get? I think a __check__(self, obj) special method in interfaces
 >    would be a simple way to boost their flexibility.

Something I didn't include in the original message was the
design-by-contract feature, which would allow pre- and postconditions to
be specified for any method, like this:

  def check_foo(self, a, b):
    "docstring for foo"
    if not (some precondition for foo):
      raise ExceptionOfYourChoice
    if not (some other precondition for foo):
      raise DifferentExceptionOfYourChoice
    return lambda result: (postconditions of foo) # optional

I'd even like to allow multiple declarations per method, so you could
have a check_foo (which is always called, at least when __debug__ is
true), and default_foo, which is called only in classes that don't give
their own implementation of foo.
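
The check/default mechanics can be approximated in plain Python with a
decorator.  This is only a sketch of the semantics described above; the
`contract` helper is my invention, but the convention that the checker
raises on bad input and optionally returns a postcondition callable is
straight from the proposal:

```python
import functools

def contract(checker):
    """Hypothetical sketch: run `checker` before the method; if it
    returns a callable, use it to validate the result (the proposal's
    optional postcondition lambda)."""
    def decorate(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            post = checker(self, *args, **kwargs)  # raises on bad input
            result = method(self, *args, **kwargs)
            if post is not None and not post(result):
                raise AssertionError("postcondition failed")
            return result
        return wrapper
    return decorate

def check_sqrt(self, x):
    if x < 0:
        raise ValueError("x must be non-negative")
    return lambda result: abs(result * result - x) < 1e-9

class Math:
    @contract(check_sqrt)
    def sqrt(self, x):
        return x ** 0.5
```

In the real proposal the interface machinery, not a decorator, would
pair check_foo with each implementation of foo, and could skip the
wrapping entirely when __debug__ is false.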

 >  - For the uses you have given to the prefix_methodname notation so far,
 >    I don't think it's really needed. Isn't the following sufficient?
 >  class Foo(interface):
 >     def foo(self, a, b):
 >         "foo docstring"
 >         # nothing here; no default definition
 >     def bar(self, a, b):
 >         pass # no docstring, _empty definition_

Not IMHO, since I'd want methods with no implementation to raise
NotImplementedError instead of silently returning None.  One thing that
*could* be done to make the simplest case (just a docstring?)
easier (and make the syntax look more Pythonic) would be this:

  def foo(): "Docstring for foo"
    # Never called, so arguments aren't needed, but they would be nice
    # for documentation purposes.
  def __default__bar(self): (default implementation of bar)
  def __check__baz(self): (error checking for baz)

This has the side effect of making the __ before the prefixes necessary,
so you can still define methods that start with "default", "check",
etc.  I suppose the aesthetics of the __ are a matter of taste, though
it does at least make the "magic" nature of the prefixes stand out
better.

 > This has the side effect that a method with no default definition and no
 > doc string is a SyntaxError. Is this too bad?
 > - It would maybe be hard to figure out what such a method is supposed
 >   to do, so you _should_ provide a docstring.
 > - If you're in a hurry, an empty docstring will do the trick. While in
 >   'quick and dirty mode' you probably won't be using interfaces a lot,
 >   anyway.


 > Defaults look indeed useful, but the really crucial aspect to lay down
 > in an interface definition is what does it guarantee on the objects
 > that implement it. If amalgamating this with default defs would
 > otherwise obscure it (there's another issue I'm addressing below), I
 > think defaults belong more properly to a class that implements the
 > interface, not to the interface definition itself.

I got the idea of defaults from Haskell, where it's common to see an
interface define methods with mutually recursive default definitions,
kind of like (to use a somewhat silly Python example) defining __eq__,
__ne__, __cmp__, etc. all in terms of one another and expecting
implementations to define enough of the methods that everything works.
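
Python already has a small-scale precedent in functools.total_ordering.
Here's a minimal hand-rolled version of the idea, with __ne__ defined in
terms of __eq__ so implementors only supply one of the pair (the mixin
class is my own illustration, not part of the proposal):

```python
# A mixin supplying a default __ne__ in terms of __eq__, in the spirit
# of Haskell-style mutually recursive default definitions.
class EqMixin:
    def __ne__(self, other):
        result = self.__eq__(other)
        if result is NotImplemented:
            return result
        return not result

class Point(EqMixin):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)
```

An interface's default_ methods would play the same role as the mixin's
methods here, except that the binding machinery would supply them.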

 > Check whether an object implements an interface
 > ===============================================
 > From your examples, I get it that an object implements an interface
 > iff it is an instance of a class that implements that interface.  So I
 > guess any checking of the requirements expressed by the interface is
 > done at the time when you bind() a class to an interface.  This
 > paradigm perfectly fits static and strongly typed languages, but it
 > falls a bit short of the flexibility of python IMO. You can do very
 > funny things to classes and objects at runtime, which can break any
 > assumptions based on object class.

Very good point.  The dynamic nature of Python is what makes it possible
to implement interfaces this way.  It would be a shame (and a little
ironic) if interfaces didn't play nice with dynamic code.

 > In your example:
 >   def foo_proc(foo_arg):
 >     foo_proxy = Foo(foo_arg)
 >     ...
 >     x = foo_proxy.foo(a, b)
 > [added a, b again]
 > imagine foo_proc only really cares that foo_arg is an object that
 > has a foo() method that takes (a, b) arguments (this is all Foo
 > guarantees).

Here's where my compiled language bias comes in.  If you only care that
foo_arg has a certain method, you don't want to use interfaces at all.
Using the interface doesn't just imply that foo_arg has a method named
foo, but also that the method satisfies the requirements laid out in the
interface definition.

 >  * Will the Foo() call check this, or will it just check that some class
 >    in foo_arg's bases is bound to the Foo interface?

Both, in a way.  The call to Foo() checks that foo_arg's class has been
declared to implement Foo, no more and no less.  Checking that the class
implements Foo *correctly* is a more multifaceted problem.  Some parts
(like making sure the right method names exist) could happen when the
interface is bound to the class, but other parts, like checking method
pre- and postconditions would have to happen at every method call.
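
The bind-time half of that check is easy to sketch in plain Python; the
`bind_check` helper below is hypothetical, standing in for whatever the
bind call would actually do:

```python
# Hypothetical sketch of the bind-time check: verify the class defines
# every method the interface names before accepting the binding.
# (Pre/postcondition checking would still happen at each method call.)
def bind_check(interface_methods, cls):
    missing = [name for name in interface_methods
               if not callable(getattr(cls, name, None))]
    if missing:
        raise TypeError("%s does not implement: %s"
                        % (cls.__name__, ", ".join(missing)))

class Stack:
    def push(self, item): ...
    def pop(self): ...
```

Binding Stack to an interface requiring push and pop would succeed;
requiring a peek method as well would fail at bind time, not at call
time.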

 >    In the second case, if someone has been fiddling with foo_arg or some
 >    of its base classes, foo_arg.foo() may no longer exist or it may have
 >    a different signature.

You can't really stop people from shooting themselves in the foot.
Making methods disappear is black magic in my book.

 >  * Why should Foo() _always_ fail for objects that _do_ meet the
 >   requirements expressed by the interface Foo but have _not_ declared
 >   that they implement the interface?
 > Making such check optional allows implicit (not declared) interface
 > satisfaction for those who want it. This should extend the
 > applicability of interfaces.

One of the major points of my design is to make the interface mechanism
very formal and explicit.  Keeping things implicit and informal is what
Python is already good at so I don't want to go there.  Also, having the
separate "bind" call means that it's really not necessary for a class to
declare that it implements the interface, only that some declaration of
that fact exists somewhere.

 > Declare that an object implements an interface or part of it
 > ============================================================
 >   Foo.bind(SomeClass)
 > Problems:
 > * I agree the declaration had better be included in the class
 >   definition, at least as an option.

I agree, but as an option.

The motivation for having interface bindings separate from the class
(and interface) definitions is mainly so that it's not necessary to
modify the source code for your classes to make them implement an
interface, so you can do things like add a new interface to a builtin or
legacy class without having access to the source code.  This is not
nearly as big a deal as with a language like C++ where you often can't
get to the source code in any useful way, but there are other reasons to
avoid touching the source, like avoiding the need to keep track of
patches to 3rd-party code.
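
As a plain-Python sketch of what "the declaration exists somewhere"
could mean, bindings might live in a module-level registry.  The `bind`
and `implements` helpers here are my own stand-ins for the proposal's
machinery:

```python
# Hypothetical sketch: interface bindings kept in an external registry,
# so a class can be declared to implement an interface without touching
# its source code.
_bindings = set()

def bind(interface_name, cls):
    _bindings.add((interface_name, cls))

def implements(interface_name, obj):
    # A binding on any base class covers the subclasses too.
    return any((interface_name, c) in _bindings
               for c in type(obj).__mro__)

class LegacyList(list):
    pass

bind("Iterable", list)  # third-party code declares the binding
```

Because the lookup walks the MRO, binding the builtin list type is
enough to make LegacyList instances count as Iterable as well.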

OTOH, I agree that the level of control the "bind" call gives you is
usually not useful.  I left out alternatives for the sake of brevity,
but it would be easy enough to add them in the interest of keeping the
most common case as simple as possible.

 > * Declarative better than procedural, for this purpose.

Whether this is declarative or procedural is mostly a matter of
perspective, IMHO.  In my imagination, calls to bind occur only at the
module level, and almost always immediately after the class or interface
definition, so the "flavor" is declarative.

 > * Only classes (not instances) can declare that they implement an
 >   interface.

Maybe there should be something like a "bindinstance" method as well.
I'm sure there's a way to do it, but I don't consider it a very high
priority.

 > * Indexing notation to 'resolve' methods a bit counterintuitive.
 > What about:

Good ideas.  Your method has a lot of advantages, but it would be hard
(and messy) to make it do everything you can do with separate method
calls, and one thing I'm *very* reluctant to do is have two ways of
doing everything with only subtle or stylistic differences between them.
The best thing to do here may be to allow your style for simple cases
(the 90% that would have required only a single "bind" call), but
use the method-based syntax for anything more elaborate, like method
renaming and unbinding subclasses.

 > Restrict
 > ========
 > This is, to make sure that an object is only accessed in the ways
 > defined in the interface (via the proxy).
 > This should be optional too, but your syntax does this nicely; you
 > can call Foo() as an assertion of sorts and ignore the result.

I think it would be very misleading to require that an object support a
formal interface but then expect it to also support methods not
specified in the interface.  If this is what you want, the right thing
to do is derive a new interface from the old one that has the extra
functionality you need.
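
A restricting proxy is straightforward to sketch in today's Python.
The Proxy class below is hypothetical; the real Foo(obj) call would
build something like it from the interface's declared names:

```python
# Hypothetical sketch of a restricting proxy: expose only the names the
# interface declares, delegating everything else to AttributeError.
class Proxy:
    def __init__(self, obj, allowed):
        object.__setattr__(self, "_obj", obj)
        object.__setattr__(self, "_allowed", frozenset(allowed))
    def __getattr__(self, name):
        if name not in self._allowed:
            raise AttributeError("%s is not part of the interface" % name)
        return getattr(self._obj, name)

restricted = Proxy([1, 2, 3], ["append", "pop"])
```

Through `restricted` you can append and pop, but other list methods
like sort are invisible, which is exactly the "accessed only in the
ways defined in the interface" behavior.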

 > Note that the __implements__ method resolution magic would require
 > that you get a proxy, though.

Unfortunately, yes.  Of course there'd be nothing stopping you from
calling the class's methods without going through the interface at all
(provided you know the right names for them), but in that case you
really just want to check that the object is an instance of a class, not
that it implements an interface.

OTOH, if you think the main problem with the proxy approach is just that
it's verbose, perhaps I can interest you in this idea (or some variation
thereof).  Here's the Python iterator protocol implemented with interfaces
instead of magic method names:

  class Iterable(interface):
    def iter():
      "Return an object implmenting the Iterator interface."

  class Iterator(Iterable):
    def next():
      "Return the next item or raise StopIteration."

  def iter(object):
    return Iterable(object).iter()

If you want to be even lazier, let Foo.foo(x) be a synonym for
Foo(x).foo(), so you can define the "iter" function like this:

  iter = Iterable.iter

Here's how you might add the iterator protocol to builtin lists, strings,
and tuples (if it wasn't already there, of course):

  # Define iteration semantics.
  class SequenceIterator(object):
    def __init__(self, seq):
      self.seq = seq
      self.index = 0
    def next(self):
      try:
        self.index += 1
        return self.seq[self.index - 1]
      except IndexError:
        raise StopIteration

  # Bind the interface to this implementation.
  # Add the Iterable interface to existing classes that don't have an
  # "iter" method.
  for t in list, tuple, str:
    Iterable[t].iter = SequenceIterator

Here are a few variations on the loop body, since you don't like the
indexing notation:
    Iterable.bind(t, {"iter": SequenceIterator})
    binding = Iterable.bind(t)
    binding.iter = SequenceIterator
    Iterable.bindmethod("iter", SequenceIterator)
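
For comparison, here is the same SequenceIterator restated with today's
magic-method protocol so it actually runs as-is; under the proposal the
interface binding, rather than the __iter__/__next__ names, would hook
it into iteration:

```python
# Plain-Python analogue of the SequenceIterator sketch above, using the
# existing iterator protocol instead of an interface binding.
class SequenceIterator:
    def __init__(self, seq):
        self.seq = seq
        self.index = 0
    def __iter__(self):
        return self
    def __next__(self):
        try:
            self.index += 1
            return self.seq[self.index - 1]
        except IndexError:
            raise StopIteration
```

So `list(SequenceIterator((10, 20, 30)))` walks the tuple exactly the
way the interface-bound version would.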

Whew!  Ok, I guess I'm done now.  Thanks for your comments!