[Python-ideas] iterable.__unpack__ method

Alex Stewart foogod at gmail.com
Wed Feb 27 21:58:47 CET 2013


On Tue, Feb 26, 2013 at 4:44 PM, Terry Reedy <tjreedy at udel.edu> wrote:

> On 2/25/2013 5:25 PM, Alex Stewart wrote:
>
> Negative putdowns are off-topic and not persuasive.


None of my statements were intended as put-downs, and if they came across
that way I apologize.  It is possible that my frustration showed through on
occasion and that my language was harsher than it should have been.


> I started with 1.3 in March 1997 and first posted a month later.


For what it's worth (not that I think it's terribly important, really),
while I have not generally participated much in online discussions over the
years, I actually started using Python about the same time you did, and
have watched it evolve over a similar timespan.

>> Then how, prior to the development of the iterator protocol, did one
>> define an object which was accessible sequentially but not randomly in
>> the Python language?
>
> As I said, by using the original fake-getitem iterator protocol, which
> still works, instead of the newer iter-next iterator protocol.
>

[large snip]

Ok, yes, it was technically possible to do with ugly hacks, but I don't
consider that to be really the same as "supported by the language".  The
only way you could do it was by making an object "fake" a random-access
interface that likely did the wrong thing if it was actually accessed
randomly.  That is not a separation of concepts; that is a twisting of one
concept, in an incompatible way, into the semblance of another, as a
workaround for the fact that the two concepts are not actually distinct in
the language design.  Once again, you're confusing "how it's thought of in
the particular programmer's mind" with "how the language is actually
designed", which are not the same thing.
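
To make that concrete, here is a minimal sketch (the class name and details
are mine, not from this thread) of the kind of "fake" __getitem__-based
source being described.  Iteration works, because for-loops and unpacking
fall back to calling obj[0], obj[1], ... until IndexError is raised, but the
index argument has to be ignored, so genuinely random access silently
returns the wrong item:

    # A sequential-only source wedged into the old __getitem__ protocol.
    class SequentialOnly:
        def __init__(self, items):
            self._remaining = list(items)

        def __getitem__(self, index):
            # 'index' is ignored entirely; we just hand out the next item,
            # which is why this object misbehaves if indexed out of order.
            if not self._remaining:
                raise IndexError
            return self._remaining.pop(0)

    s = SequentialOnly("abc")
    print(list(s))    # ['a', 'b', 'c'] -- the old protocol drives iteration
    s2 = SequentialOnly("abc")
    print(s2[2])      # 'a' -- "random" access quietly returns the wrong item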

> Back to unpack and informing the source as to the number n of items
> expected to be requested.
>
> Case 1. The desired response of the source to n is generic.
>

If I'm reading you right, I believe this is basically equivalent to "the
consumer decides what behavior is most useful".  In this case, yes, the
correct place to do this is on the consumer side, which is not what
__unpack__ is intended to be for anyway (fairly obviously, since it is not
a consumer-side change).

(For what it's worth, you seem to do a decent job here of arguing against
your own proposals to change the unpack syntax, though.  Not sure if that
was intentional.)


> Case 2. The desired response is specific to a class or even each instance.
>

[...]

> Solution: write a method, call it .unpack(n), that returns an iterator that
> will produce the objects specified in the table. This can be done today
> with no change to Python. It can be done whether or not there is a
> .__iter__ method to produce a generic default iterator for the object. And,
> of course, xxx.unpack can have whatever signature is appropriate to xxx. It
> seems to me that this procedure can handle any special collection or
> structure breakup need.
>

This solution works fine, in the very restrictive case where the programmer
knows exactly what type of object they've been given to work with, is aware
that it provides this interface, and knows that it's important in that
particular case to use it for that particular type of object.  If the
consumer wants to be able to use both this type of object and other ones
that don't provide that interface, then their code suddenly becomes a lot
more complicated (having to check the type, or methods of the object, and
conditionally do different things).  Even if they decide to do that, since
there is no established standard (or even convention) for this sort of
thing, they then run the risk of being given an object which has an
"unpack" method with a completely different signature, or worse yet an
object which defines "unpack" that does some completely different
operation, unrelated to this ad-hoc protocol.  Any time somebody produces a
new "smart unpackable" object that doesn't work quite the same as the
others (either deliberately, or just because the programmer didn't know
that other people were already doing the same thing in a different way), it
is quite likely that all of the existing consumer code everywhere will have
to be rewritten to support it, or (much more likely) it will be supported
haphazardly in some places and not others, leading to inconsistent behavior
or outright bugs.
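
To illustrate, here is a rough sketch (the class, the unpack(n) signature,
and the helper function are all hypothetical, not any established
convention) of what the ad-hoc approach and the conditional consumer code
it forces end up looking like:

    # Hypothetical ad-hoc producer exposing an unpack(n) method as suggested
    # above.  Nothing in the language or stdlib standardizes this.
    class Record:
        def __init__(self, fields):
            self._fields = list(fields)

        def unpack(self, n):
            # Return exactly n items, padding with None if we run short.
            padded = self._fields + [None] * max(0, n - len(self._fields))
            return iter(padded[:n])

    # Every consumer now has to probe for the ad-hoc method and fall back
    # to ordinary iteration for everything else.
    def take_three(source):
        unpack = getattr(source, "unpack", None)
        if callable(unpack):
            a, b, c = unpack(3)    # and hope the signature is what we expect
        else:
            a, b, c = source       # plain unpacking for ordinary iterables
        return a, b, c

    print(take_three(Record(["x", "y"])))   # ('x', 'y', None)
    print(take_three(["x", "y", "z"]))      # ('x', 'y', 'z')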

Even ignoring all of this, it still isn't possible to write an object which
has advanced unpacking behavior that works with any existing unpacking
consumers, such as libraries or other code not under the object-designer's
control.

In short, yes, it solves things, in a very limited, incompatible, painful,
and largely useless way.

> Comment 2: we are not disagreeing that people might want to do custom
> count-dependent disassembly or that they should be able to do so. It can
> already be done.
>

I disagree that it can already be done in the manner I'm describing, as
I've explained.  There is, frankly, no existing mechanism which allows an
unpacking-producer to do this sort of thing in a standard, consistent, and
interchangeable way, and there is also no way at all to do it in a way that
is compatible with existing consumer code.


> Areas of disagreement:
>
> 1. consumer-source interdependency: you seem to think there is something
> special about the consumer assigning items to multiple targets in one
> statement, as opposed to doing anything else, including doing the multiple
> assignments in multiple statements.


This is not a matter of opinion.  It is a *fact* that there is a difference
in this case.  The difference is, quite simply, that through the use of the
unpacking construct, the programmer has given the Python interpreter
additional information (which the interpreter is not providing to the
producing object).  The disagreement appears to be that you believe for
some reason it's a good thing to silently discard this information so that
nobody can make use of it, whereas I believe it would be beneficial to make
it available for those who want to use it.
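
As a small illustration (mine, not from the original mails): the producer
in an unpacking statement only ever sees the ordinary iteration calls, so
the target count never reaches it.

    class LoggingSource:
        def __iter__(self):
            # This is the only hook the unpacking statement ever invokes;
            # no count of targets is passed along.
            print("iter() called -- target count not provided")
            return iter([1, 2, 3])

    a, b, c = LoggingSource()   # three targets, but the object never sees "3"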

Again, I feel compelled to point out that as far as I can tell, your entire
objection on this count boils down to "unpacking must always be the same
thing as iteration because that's the way I've always thought about it".
Unpacking *currently* uses iteration under the covers because it is a
convenient interface that already exists in the language, but there is
absolutely no reason why unpacking *must* inherently be defined as the same
thing as iteration.  You talk about it as if this is a foregone conclusion,
but as far as I can tell it's only foregone because you've already
arbitrarily decided it to be one way and just won't listen to anybody
suggesting anything else.

Alternatively, if you just can't manage to get past this "unpacking must
mean iteration" thing, then don't look at this as a change to unpacking.
Look at it instead as an extension to the iterator protocol, which allows
an iteration consumer to tell the iteration producer more about the number
of items it wants to obtain.  Heck, I actually wouldn't be opposed to
making this a general feature of iter(), if it could be done in a
backwards-compatible way.
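
For the record, here is one possible shape such an extension could take.
This is a purely speculative sketch; the __unpack__ name, its signature,
and the helper function are illustrative only, and nothing like them exists
in Python today.  The point is simply that a consumer expecting n items
could pass n through to a willing producer and fall back to plain iteration
otherwise:

    def unpack_n(source, n):
        hook = getattr(type(source), "__unpack__", None)
        if hook is not None:
            return tuple(hook(source, n))   # producer tailors its output to n
        it = iter(source)                   # fall back to today's behavior
        return tuple(next(it) for _ in range(n))

    class Stretchy:
        def __init__(self, items):
            self._items = list(items)

        def __unpack__(self, n):
            # Tailor the result to the requested count, padding with None.
            out = self._items[:n]
            return out + [None] * (n - len(out))

    a, b, c, d = unpack_n(Stretchy([1, 2]), 4)   # (1, 2, None, None)
    x, y = unpack_n([10, 20, 30], 2)             # (10, 20) from a plain list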

> 2. general usefulness: you want .unpack to be standardized and made a
> special method. I think it is inherently variable enough and the need
> rare enough to not justify that.
>

I think I already spoke to this above.  Put simply, if it is not
standardized and utilized by the corresponding language constructs, it is
essentially useless as a general-purpose solution and only works in very
limited cases.  Your alternative solutions just aren't solutions to the
more general problems.

--Alex