[Python-Dev] Sneaky 'super' instances
Moore, Paul
Paul.Moore@atosorigin.com
Thu, 12 Jun 2003 11:16:39 +0100
From: Phillip J. Eby [mailto:pje@telecommunity.com]
> PEP 246 has a chicken-and-egg problem, because it's only useful if
> it's being used. adapt() by itself isn't at all useful if you don't
> have components or protocols that support it. And nobody wants to
> write components or protocols to support it, if nobody is using
> adapt(). If I understand Alex and Clark's intentions in writing the
> PEP, it was to get adapt() to have a ubiquitous presence, so that
> authors could assume their users would be able to use it.
Um, I don't think that's quite right. My reading of PEP 246 suggests
that the authors were fully aware of the problem of requiring
widespread support before the technique becomes useful. Specifically,
requirement (e) in the PEP considers precisely the case where neither
the "protocol" nor the "object" needs to know about the adapt()
mechanism. Sadly, the reference implementation provided does not
support requirement (e).
As I understand PEP 246, its basic idea is that an application (such
as a pydoc rewrite, to focus on your specific example) can be written
around concrete "protocol" classes which encapsulate all of the
assumptions being made. These protocol classes can have __adapt__
methods that define how the built-in and standard library types do or
do not conform to the protocol. This gives an application which works,
unmodified, for core Python.
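Roughly like this, I imagine - a much-simplified sketch, where
everything except adapt(), __conform__ and __adapt__ is a name I've
made up, and where the real PEP 246 adapt() does rather more checking:

    class ContainerDoc:
        """Adapter presenting a built-in container for documentation."""
        def __init__(self, obj):
            self.obj = obj
        def doc_summary(self):
            return "%s with %d items" % (type(self.obj).__name__, len(self.obj))

    class DocTarget:
        """Protocol class owned by the application (the pydoc rewrite, say)."""
        def __adapt__(cls, obj):
            # The protocol itself knows how the built-in types fit, so
            # the application works for core Python without touching
            # those types.
            if isinstance(obj, (list, tuple, dict)):
                return ContainerDoc(obj)
            return None
        __adapt__ = classmethod(__adapt__)

    def adapt(obj, protocol):
        # Simplified PEP 246 lookup: ask the object first (__conform__),
        # then the protocol (__adapt__), then give up.
        conform = getattr(obj, '__conform__', None)
        if conform is not None:
            result = conform(protocol)
            if result is not None:
                return result
        adapter = getattr(protocol, '__adapt__', None)
        if adapter is not None:
            result = adapter(obj)
            if result is not None:
                return result
        raise TypeError("can't adapt %r to %s" % (obj, protocol))

    print(adapt([1, 2, 3], DocTarget).doc_summary())   # "list with 3 items"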
Now, user-defined classes can be written to "support" the application,
by having __conform__ methods that declare whether, and how, they
match the application's protocols.
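Carrying on the sketch above, that would look something like this
(again, only __conform__ comes from the PEP; the rest is invented):

    class WidgetDoc:
        """Adapter presenting a MyWidget for documentation."""
        def __init__(self, widget):
            self.widget = widget
        def doc_summary(self):
            return "widget labelled %r" % self.widget.label

    class MyWidget:
        """A user-defined class written with the application in mind."""
        def __init__(self, label):
            self.label = label
        def __conform__(self, protocol):
            # The class declares how it matches the application's
            # protocol; anything it doesn't recognise, it declines.
            if protocol is DocTarget:
                return WidgetDoc(self)
            return None

    print(adapt(MyWidget("spam"), DocTarget).doc_summary())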
And for existing user-defined classes (or new ones where you don't
want to bake knowledge of this application into the class), the
"registry" approach works.
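As far as I can tell, the registry would just be a lookup keyed on
(class, protocol), consulted when the other two hooks fail - something
along these lines (my names again, not the PEP's):

    _registry = {}

    def register_adapter(klass, protocol, factory):
        """Record that factory adapts instances of klass to protocol."""
        _registry[(klass, protocol)] = factory

    def adapt_via_registry(obj, protocol):
        # The extra lookup step adapt() would need to grow; neither the
        # object's class nor the protocol has to know anything about it.
        factory = _registry.get((type(obj), protocol))
        if factory is not None:
            return factory(obj)
        return None

    # e.g. teach the application about a type we can't (or don't want
    # to) edit:
    register_adapter(set, DocTarget, ContainerDoc)
    print(adapt_via_registry(set([1, 2]), DocTarget).doc_summary())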
Actually, this analysis makes me wonder a bit about the PEP. The
__adapt__ method seems reasonable for encapsulating knowledge about
built-in types, although in practice it may not work as well as it
would seem at first glance. There's an implication that protocols
don't need to be "special", but can be ordinary classes - the
canonical example would be adapt(obj, file) to "make" obj a file-
like object, if possible. But "real" classes won't have __adapt__
methods...
The __conform__ method seems unlikely to be useful in practice -
it's the usual issue of having to change the class to support a new
protocol.
The registry idea might work (but it's not implemented). However, it
would seem to depend on someone, somewhere, doing the registering. I
can see things like::
    import numarray
    import register_numarray_adapters_to_pydoc
    import pygame
    import register_pygame_adapters_to_pydoc
[And this assumes that the registration happens as import-time magic
in __init__.py, which is arguably bad practice, so maybe you need an
explicit register() call for each of these.]
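i.e. something like this, keeping the made-up module names from above:

    import numarray
    import register_numarray_adapters_to_pydoc

    # The application asks for the registration explicitly, rather
    # than relying on it happening as a side effect of the import:
    register_numarray_adapters_to_pydoc.register()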
... anyway - enough theoretical analysis of PEP 246. Hopefully, I
have demonstrated the point that it needs "real world" use to expose
any issues.
> Two years after the original PEP, I saw a way to make use of it in
> a framework I was working on, and then noticed the chicken-and-egg
> problem.
I think the PEP's intent was that you can write the necessary adapters
for types you know of, and then your users can add adapters (or ask
you to add them) as required.
> But, one thing that had changed since then was that Twisted and
> Zope both had significant "interface" packages. I saw a way to
> unite them, using PEP 246 as a mechanism, and how to make a toolkit
> that would allow developers to use adapt() in their own programs,
> without having to wait for anybody else to adopt adapt(). I'm hoping
> this will end the chicken-and-egg problem for PEP 246, as well as
> increase interoperability between frameworks.
I don't believe that there should be a chicken-and-egg problem
(if there is, PEP 246 is doomed to failure IMHO). But if you can
do something like this and eliminate what you see as the problem,
that's great - provided you don't do it at the cost of a *new*
chicken-and-egg problem, in which case you've achieved nothing...
> There is no need for language-level support; it's just a library.
> And, PyProtocols, my PEP 246 implementation and extension, is
> available for download now.
If PyProtocols is a PEP 246 implementation, then that's something I'm
interested in. I assume that it has no chicken-and-egg issue (as I
don't see one in PEP 246 itself). Off to download and investigate...
> See, I could probably write the documentation tool faster if I used
> PEAK, my application framework. But that would make it unlikely to
> ever be usable for the Python standard library, because that would
> be like distributing Zope in the standard library.
That's a tough call. Only you can answer that, but if you work
from the basis that pydoc is your "competition", the question is
whether you can convince existing satisfied users of pydoc that your
replacement is "better". In other words, are you willing to "sell"
your replacement to sceptical users? If not, then don't waste your
time making your development job harder.
> In effect, my question is, "Should I expend the extra effort to
> develop the documentation tool in such a way that it could easily
> be distributed with the standard library?" And, since the tool
> would have to depend on PyProtocols, this means that (in effect)
> PyProtocols would have to be accepted for the standard library.
If PyProtocols is just PEP 246, then you're just another voice (and
one who's got useful code to contribute) arguing for PEP 246. If it's
more, you'll have to explain *what* there is that's extra, and *why*
it's necessary. And you're effectively arguing against PEP 246, as
being insufficient for its stated purpose, so you'll have to justify
that, too.
> So, what I want to know is:
> * Do other people find pydoc inadequate?
No real sign of that. I don't. But my use is minimal, and I suspect
that's true of many people. If pydoc broke on something subtle
(like super()) then I'd view it as a minor irritation and move on.
(Particularly as pydoc is a command-line tool, rather than a
programming library.)
> * Does it seem likely that PyProtocols would be considered as an
> addition to the standard library (and by implication, used to
> document the interfaces of "standard" Python objects)?
a) Less likely than PEP 246 itself would be (assuming PyProtocols is
"more" than PEP 246). And PEP 246 is struggling for lack of real
use cases (no, I don't believe it's a chicken-and-egg issue; more
that people are happy implementing their own solutions, either
because they don't know of PEP 246 or because it has problems that
haven't been teased out by real use yet).
b) I'm not sure what you're implying by documenting interfaces. PEP
246 is clearly *not* about standardising (documenting) interfaces,
but about adapting to more fluid "real life" situations - package
XXX needs a file-like object but only cares about the readline,
seek and tell methods, while package YYY needs a file-like object
with read and write methods (and doesn't care about seekability).
It sounds like PyProtocols carries more baggage (a point against
it).
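To illustrate (the protocol names here are my invention), each package
would define and adapt to its own narrow protocol:

    class ReadlineSeekTell:
        """The file-like subset package XXX actually cares about."""

    class ReadWrite:
        """The file-like subset package YYY actually cares about."""

    # Each package asks only for what it needs:
    #     f = adapt(obj, ReadlineSeekTell)   # inside package XXX
    #     f = adapt(obj, ReadWrite)          # inside package YYY
    # so an object (or a registered adapter for it) can satisfy one
    # package without having to pretend to satisfy the other.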
> * Is there anything specific that anybody would want in an
> overhauled pydoc, that I should know about?
Doesn't look like it. Guido expressed an interest in "less implicit
assumptions" but that sounds to me more like an implementation issue
than a user-level requirement.
Sorry, that went on a bit. Hope it helped.
Paul.