xrange question

John Flynn transpicio at yahoo.com.au
Mon May 7 07:28:03 CEST 2001


"Tim Peters" <tim.one at home.com> wrote in message
news:mailman.989174168.2949.python-list at python.org...

[...]

> Given the way the interpreter is currently written, it's simply easier and
> faster to stick with native C longs when possible.  This is becoming harder
> to live with over time, though, as 2**31 is no longer "essentially infinity"
> for all practical problems.  So Python is slowly eradicating the user-visible
> differences between its bounded and unbounded integral types, and someday I
> expect neither range() nor xrange() will care about native C integer sizes.

That's sensible. I think implicit conversion is probably as important as the
size issue. (Not that either one is such a big deal.)

E.g. it seems that both 'range' and 'xrange' accept longs as bounds, but they
convert their bounds to ints:

>>> seq = range(1L, 10L)
>>> type(seq.start)
<type 'int'>

... yet operations on ranges of ints don't produce longs when necessary ...

>>> import operator
>>> def fact(n):
...     return reduce(operator.__mul__, range(1L, n + 1L))
...
>>> fact(6)
720
>>> fact(100)
Traceback (most recent call last):
  File "<interactive input>", line 1, in ?
  File "<interactive input>", line 2, in fact
OverflowError: integer multiplication
>>>

Again, this is not a big deal - but since Python removes just about every
other need to explicitly specify data types, it would also make sense (to me
at this stage) to implicitly convert integral types based on what's being
done with them. (But perhaps this is more trouble than it's worth?)
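In the meantime, the usual workaround is to force long arithmetic yourself by
seeding reduce with an explicit initial value — written 1L under the 2.x
interpreter discussed above, so every partial product is promoted to an
unbounded long before it can overflow. A sketch (shown with a plain 1 as the
seed and a functools import so it also runs on later interpreters, where
reduce is no longer a builtin):

```python
from functools import reduce  # a builtin in 2.x; moved to functools later
import operator

def fact(n):
    # Seeding reduce with an explicit initial value sidesteps the overflow:
    # under the 2.x interpreter above this seed would be written 1L, so the
    # accumulation is done in unbounded long arithmetic from the start.
    return reduce(operator.mul, range(1, n + 1), 1)
```

With the long seed, fact(100) yields the full 158-digit result instead of
raising OverflowError.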

hoping-i-haven't-opened-an-old-can-of-worms-ly y'rs - JF.





