[Python-Dev] PEP 246: lossless and stateless

Guido van Rossum gvanrossum at gmail.com
Fri Jan 14 07:20:40 CET 2005


[Guido]
> >This may solve the current raging argument, but IMO it would make the
> >optional signature declaration less useful, because there's no way to
> >accept other kinds of adapters. I'd be happier if def f(X: Y) implied X
> >= adapt(X, Y).

[Phillip]
> The problem is that type declarations really want more guarantees about
> object identity and state than an unrestricted adapt() can provide,

I'm not so sure. When I hear "guarantee" I think of compile-time
checking, and I thought that was a no-no.

> including sane behavior when objects are passed into the same or different
> functions repeatedly.  See this short post by Paul Moore:
> 
> http://mail.python.org/pipermail/python-dev/2005-January/051020.html

Hm. Maybe that post points out that adapters that add state are bad,
period. I have to say that the example of adapting a string to a file
using StringIO() is questionable. Another possible adaptation from a
string to a file would be open(), and in fact I know a couple of
existing APIs in the Python core (and elsewhere) that take either a
string or a file, and interpret the string as a filename. Operations
that are customarily done with either string data or a file typically
use two different function/method names, for example pickle.load and
pickle.loads.
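
To make the ambiguity concrete (just a sketch; the data is made up):

    from StringIO import StringIO
    import pickle

    s = "spam\neggs\n"
    f1 = StringIO(s)     # adaptation A: the string is the file's *contents*
    # f2 = open(s)       # adaptation B: the string is a *filename*

    # The stdlib picks two names instead of guessing which one you meant:
    data = pickle.dumps([1, 2, 3])
    obj = pickle.loads(data)            # operate on the data directly
    obj = pickle.load(StringIO(data))   # same operation, stream spelling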

But I'd be just as happy if an API taking either a string or a file
(stream) were declared as taking the union of IString and IStream;
adapting to a union isn't that hard to define (I think someone gave an
example somewhere already).
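
For instance, something along these lines would do (a sketch only --
'adapt' here is the PEP 246 function, and the protocol objects are
whatever we end up standardizing on):

    def adapt_to_union(obj, *protocols):
        # Return the first successful adaptation; PEP 246's adapt()
        # signals failure by raising TypeError.
        for proto in protocols:
            try:
                return adapt(obj, proto)
            except TypeError:
                pass
        raise TypeError("object supports none of %r" % (protocols,))

A declaration like def parse(src: IStringOrStream) could then simply
mean src = adapt_to_union(src, IString, IStream).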

OK, so what am I saying here (rambling really): my gut tells me that I
still like argument declarations to imply adapt(), but that adapters
should be written to be stateless. (I'm not so sure I care about
lossless.)
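
By "stateless" I mean an adapter that holds nothing but a reference to
the adaptee, so every operation goes straight back to the original
object. Roughly (made-up class and method names, just to illustrate
the distinction):

    class DictAsRecord:
        # "Stateless": no data of its own; adapting the same dict twice
        # gives two interchangeable adapters.
        def __init__(self, d):
            self._d = d
        def get_field(self, name):
            return self._d[name]

    class StringAsStream:
        # "Stateful": the read position lives in the adapter, so a second
        # adaptation of the same string silently starts over at zero.
        def __init__(self, s):
            self._s = s
            self._pos = 0
        def read(self, n):
            chunk = self._s[self._pos:self._pos + n]
            self._pos += n
            return chunk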

Are there real-life uses of stateful adapters that would be thrown out
by this requirement?

> Even if you're *very* careful, your seemingly safe setup can be blown just
> by one routine passing its argument to another routine, possibly causing an
> adapter to be adapted.  This is a serious pitfall because today when you
> 'adapt' you can also access the "original" object -- you have to first
> *have* it, in order to *adapt* it.

How often is this used, though? I can imagine all sorts of problems if
you mix access to the original object and to the adapter.

> But type declarations using adapt()
> prevent you from ever *seeing* the original object within a function.  So,
> it's *really* unsafe in a way that explicitly calling 'adapt()' is
> not.  You might be passing an adapter to another function, and then that
> function's signature might adapt it again, or perhaps just fail because you
> have to adapt from the original object.

Real-life example, please?

I can see plenty of cases where this could happen with explicit
adaptation too, for example f1 takes an argument and adapts it, then
calls f2 with the adapted value, which calls f3, which adapts it to
something else. Where is f3 going to get the original object?
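
Spelled out (a sketch, with invented protocol names and the PEP 246
adapt()):

    def f1(obj):
        a = adapt(obj, IFoo)   # f1 still has the original in hand here
        return f2(a)

    def f2(a):
        return f3(a)           # only the adapter travels downstream

    def f3(a):
        b = adapt(a, IBar)     # f3 only ever sees the IFoo adapter;
        return b               # the original object is out of reach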

I wonder if a number of these cases are isomorphic to the hypothetical
adaptation from a float to an int using the int() constructor -- no
matter how we end up defining adaptation, that should *not* happen,
and neither should adaptation from numbers to strings using str(), or
from strings to numbers using int() or float().

But the solution IMO is not to weigh down adapt(), but to agree, as a
user community, not to create such "bad" adapters, period. OTOH there
may be specific cases where the conventions of a particular
application or domain make stateful or otherwise naughty adapters
useful, and everybody understands the consequences and limitations.
Sort of the way that NumPy defines slices as views on the original
data, even though lists define slices as copies of the original data;
you have to know what you are doing with the NumPy slices but the
NumPy users don't seem to have a problem with that. (I think.)
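
Roughly, for anyone who hasn't seen it (numpy spelling):

    import numpy

    a = numpy.arange(5)
    v = a[1:4]             # a view: writes go through to 'a'
    v[0] = 99              # a is now [0, 99, 2, 3, 4]

    lst = [0, 1, 2, 3, 4]
    c = lst[1:4]           # a copy: 'lst' is untouched
    c[0] = 99              # lst is still [0, 1, 2, 3, 4]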

> Clark's proposal isn't going to solve this issue for PEP 246, alas.  In
> order to guarantee safety of adaptive type declarations, the implementation
> strategy *must* be able to guarantee that 1) adapters do not have state of
> their own, and 2) adapting an already-adapted object re-adapts the original
> rather than creating a new adapter.  This is what the monkey-typing PEP and
> prototype implementation are intended to address.

Guarantees again. I think it's hard to provide these, and it feels
unpythonic. (2) feels weird too -- almost as if it were to require
that float(int(3.14)) should return 3.14. That ain't gonna happen.
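
Concretely:

    >>> int(3.14)          # the "adaptation" silently dropped the .14
    3
    >>> float(int(3.14))   # so there's nothing left to restore
    3.0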

> (This doesn't mean that explicit adapt() still isn't a useful thing, it
> just means that using it for type declarations is a bad idea in ways that
> we didn't realize until after the "great debate".)

Or maybe we shouldn't try to guarantee so much and instead define
simple, "Pythonic" semantics and live with the warts, just as we do
with mutable defaults and a whole slew of other cases where Python
makes a choice rooted in what is easy to explain and implement (for
example allowing non-Liskovian subclasses). Adherence to a particular
theory about programming is not very Pythonic; doing something that
superficially resembles what other languages are doing but actually
uses a much more dynamic mechanism is (for example storing instance
variables in a dict, or defining assignment as name binding rather
than value copying).

My 0.02 MB,

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

