while (a=b()) ...

Tim Peters tim_one at email.msn.com
Mon May 17 01:43:03 EDT 1999


[Hrvoje Niksic]
> ...
> Anyway, the __getitem__ stuff beginning with 0 instead of using something
> like __next__ (or whatever) strikes me as ugly just as much as the
> exception games.  But that is just me.

No it's not <wink>.  "for" was designed to iterate over a particular notion
of sequence, that had to support __getitem__ anyway.  All of Python's
builtin sequence types support random-access indexing, and the "for"
protocol reflects that probably more than it should.  A __next__ protocol
would be more general; maybe in Python2.  In the meantime, you can write
__next__ methods if you like, and get them invoked via inheriting from:

class NextBase:

    def __getitem__(self, i):
        result = self.__next__()  # ignore the index
        if result == 42:          # a value Hrvoje never uses <wink>
            raise IndexError
        else:
            return result

A more general concept of iteration would really like something akin to
Sather's iterators or Icon's generators to build on, though.
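That base class still works in modern Python: when a class defines __getitem__ but no __iter__, "for" falls back to calling __getitem__ with 0, 1, 2, ... until IndexError is raised. A sketch of a subclass whose __next__ gets invoked this way (CountToFive and its 5-element cutoff are my invention, not from the post):

```python
class NextBase:
    def __getitem__(self, i):
        result = self.__next__()  # ignore the index
        if result == 42:          # sentinel value, as in the post
            raise IndexError
        else:
            return result

class CountToFive(NextBase):
    """Yields 1..5, then returns the 42 sentinel to stop the loop."""
    def __init__(self):
        self.n = 0

    def __next__(self):
        self.n += 1
        return self.n if self.n <= 5 else 42

print(list(CountToFive()))  # [1, 2, 3, 4, 5]
```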

>> "try" is cheap;

> How is it implemented?  I assume not with setjmp/longjmp.  :-)

No, no set/long_jmp's in the entire source tree.  As to how it's
implemented, the source is more of your friend than I will be in what's left
of tonight <wink>.  Use dis.dis to disassemble a function, then look for the
opcodes in ceval.c.
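For instance (the try/except body below is mine), disassembling a tiny function shows the bytecode the compiler emits for "try"; the exact opcode names vary by version (older CPython uses SETUP_* opcodes, while 3.11+ moved to zero-cost exception tables), and each one has a handler you can find in ceval.c:

```python
import dis

def f():
    try:
        return 1
    except ValueError:
        return 2

# Print the opcodes generated for the try block; look up their
# handlers in Python/ceval.c in the CPython source tree.
dis.dis(f)
```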

>> If you mean that
>>
>>     for i in xrange(N): pass
>>
>> is slower than
>>
>>     for i in range(N): pass

> Yes, and for reasonably small N's (*less* than 10000), which are the
> most frequent.

For small N they're both pretty fast <0.1 wink>.

> And it's a shame, because range() is one of the few things I truly
> dislike in Python.  Thinking of:
>
> for i in range(1000):
>   for j in range(1000):
>     ...
>
> makes me want to cry.  *Not* crocodile tears.  :-)

Except it usually doesn't really matter!  The loop guts are executed a
million times, and the time burden of creating 1001 lists at C speed is
typically small compared to that.  Perhaps because it's not visible, the
*real* overhead here is in doing millions of
increfs+decrefs+storage-shuffling on the loop-index integer objects (which
is why neither switching to xrange nor "while i < 1000:" yields a speedup
worth getting excited about -- worse, the "while" spelling is typically
slower on all platforms).
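In today's Python, range() is itself lazy (xrange is gone), but the relative costs can still be measured the same way; a timing sketch using timeit (the function names are mine):

```python
import timeit

def for_loop():
    for i in range(1000):
        pass

def while_loop():
    # Same iteration count, but the index bookkeeping runs as
    # interpreted bytecode rather than inside the for-loop machinery.
    i = 0
    while i < 1000:
        i += 1

t_for = timeit.timeit(for_loop, number=1000)
t_while = timeit.timeit(while_loop, number=1000)
print(f"for/range: {t_for:.3f}s  while: {t_while:.3f}s")
```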

if-it-were-easy-to-speed-dramatically-it-would-already-have-
    been-done-ly y'rs  - tim
