PEP 238 (revised)

Bengt Richter bokr at accessone.com
Sat Jul 28 02:29:57 EDT 2001


On Fri, 27 Jul 2001 12:27:44 -0700, Chris Barker <chrishbarker at home.net> wrote:
[...]
>so if I want to use the result as an index, I'll need to use:
>
>int(x/y)
>
>kind of like I now use:
>
>int(floor(x/y))
>
>What I don't get, is why // couldn't return an integer always? It will
>always have an integral value. I suppose one problem is that the range
>of integers that a float (C double) can handle is larger than a 32 bit
>integer can hold. This could be solved in the future with int/long
>unification, what will be the behaviour then?
>
I think it has to do with being consistent with the idea of using the
representation type as an indicator of exactness.

Using that idea, float is a visible type name that signals inexact, and int and
long are visible type names that signal exact.

Floor can't turn inexact to exact, so it must return inexact (i.e., float)
if it gets an inexact (i.e., float) input.

int(x) will effectively be an exactness coercion because of the
representation-type linkage.
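
To make that linkage concrete, here is a little sketch (it assumes a Python new
enough to have // at all, with the PEP 238 behaviour, so treat it as
illustration rather than gospel):

    x, y = 7.0, 2.0

    print(7 // 2)       # 3    -- exact in, exact out (int)
    print(x // y)       # 3.0  -- inexact in, inexact out (float)
    print(int(x // y))  # 3    -- int() is the explicit exactness coercion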

If Dr Who takes us forward a bit, and we have no floats or ints, but just
exact and approximate numbers, with exact being integers or rationals, and
approximate being approximate[1], then you would have to translate your statement
to "What I don't get, is why // couldn't return an exact number always?"

[1] Approximate numbers will undoubtedly be implemented as floats, but hiding
that will let people think about aspects of approximation instead of the hardware
below. And if someone came up with a new approximate number representation for
special situations, e.g., an alternate rational with bounded numerator/denominator
values, it could be implemented without changing the language. The point is that
you need to know how good the approximation is for your purposes, not what
representation tricks the implementers have come up with for the latest version
of Python.
Take us back, Dr Who.
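
Purely as a thought experiment (none of this is in any PEP, and the names are
invented), such an alternate representation could hide behind the same
"approximate" face:

    from fractions import Fraction  # borrowed from later Pythons, only for the sketch

    class BoundedRational(object):
        """Approximate number: nearest fraction with denominator <= limit."""
        def __init__(self, value, limit=1000):
            self.value = Fraction(value).limit_denominator(limit)
        def __repr__(self):
            return "approx %s" % self.value

    print(BoundedRational("0.1"))          # approx 1/10
    print(BoundedRational("0.333333333"))  # approx 1/3

The user only sees an approximate number; whether floats or bounded rationals
sit underneath is the implementation's business.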

If the current numeric literal formats persist, as I think they must, then the
old float formats should not automatically become formats for approximate
numbers.

All the numbers you can reasonably enter as literals can be represented
exactly, so why not break the 1.0 -> float-hence-inexact chain and let 1.0 and 1
signify the same thing? Driving the internal representation with a decimal point
is arm's-length C programming, not Python, IMHO.
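
As a sketch of what breaking that chain could mean (again leaning on a rational
type as the stand-in for "exact"; this is not the PEP's actual proposal):

    from fractions import Fraction   # stand-in for an "exact" type, from later Pythons

    print(Fraction("1.0") == Fraction("1"))  # True -- the same exact number
    print(Fraction("1.0"))                   # 1    -- the decimal point adds nothing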

Evaluating some expressions will result in approximations, but not all of them.
Inputs can be exact all the time, including 0.1 and 1/3, after number unification
(0.1 just becomes 1/10).
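
With a rational type standing in for the unified exact numbers (purely a
stand-in; nothing like it is built in today), that could look like:

    from fractions import Fraction

    tenth = Fraction("0.1")   # exactly 1/10, no binary rounding
    third = Fraction(1, 3)    # exactly 1/3

    print(tenth + third)      # 13/30 -- the result is still exact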


