On Mon, Mar 10, 2014 at 3:36 PM, Wolfgang Maier <wolfgang.maier@biologie.uni-freiburg.de> wrote:
Guido van Rossum <guido@...> writes:

>
> On Mon, Mar 10, 2014 at 2:51 PM, Chris Angelico <rosuav@gmail.com> wrote:
> On Tue, Mar 11, 2014 at 7:57 AM, Ethan Furman <ethan@stoneleaf.us> wrote:
>
> > I think he's saying make a new type similarly to complex, only instead of
> > two floats to make a complex number, have a (long) int and a decimal float
> > to make this new type.  The long int portion would have infinite precision,
> > the float portion would have, say, 16 digits (or whatever).
> That's plausible as a representation, and looks tempting, but basic
> arithmetic operations become more complicated. Addition and
> subtraction just need to worry about carries, but multiplication forks
> out into four multiplications (int*int, int*frac, frac*int,
> frac*frac), and division becomes similarly complicated. Would it
> really be beneficial?
>
> It looks neither plausible nor tempting to me at all, and I hope that's
> not what he meant. It can represent numbers of any magnitude that have lots
> of zeros following the decimal point followed by up to 16 digits of
> precision, but not numbers that have e.g. lots of ones instead of those
> zeros -- the float portion would be used up for the first 16 ones.
> E.g. 111111111111111111111111111111.000000000000000000000000000000123456789
> would be representable exactly but not
> 111111111111111111111111111111.111111111111111111111111111111123456789
>
> What makes numbers in the vicinity of integers special?
>


I'm afraid that is exactly what I'm proposing. I don't see, though, how this
is different from the current behavior of, let's say, Decimal.
Assuming default context with prec=28 you currently get:

>>> +Decimal('0.000000000000000000000000000000123456789')
Decimal('1.23456789E-31')

But with a precision-consuming "1" (one is enough, actually):

>>> +Decimal('0.100000000000000000000000000000123456789')
Decimal('0.1000000000000000000000000000')
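For reference, both results above can be reproduced with the stdlib decimal module under the default 28-digit context:

```python
from decimal import Decimal, getcontext

getcontext().prec = 28  # the default precision

# Significance is counted from the first non-zero digit, so this tiny
# value keeps all 9 of its significant digits ...
a = +Decimal('0.000000000000000000000000000000123456789')

# ... while a leading 1 starts the 28-digit budget at the first digit
# after the point, and the trailing 123456789 is rounded away.
b = +Decimal('0.100000000000000000000000000000123456789')

print(a)  # 1.23456789E-31
print(b)  # 0.1000000000000000000000000000
```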

It is very different. Decimal with prec=28 counts the number of digits from the first non-zero digit, and the number of digits it gives you after the decimal point depends on how many digits there are before it. That is a sane perspective on precision (the total number of significant digits).

But in your proposal the number of digits you get after the point depends on how close the value is to the nearest integer (in the direction of zero), not how many significant digits you have in total. That's why my examples started with lots of ones, not zeros.
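A minimal sketch of that asymmetry, modelling the proposal as an exact integer part plus a fractional Decimal capped at 16 significant digits (the `hybrid` helper here is purely hypothetical, not anyone's actual design):

```python
from decimal import Decimal, localcontext

def hybrid(int_part, frac_part):
    # Hypothetical model of the proposed type: an arbitrary-precision
    # integer part plus a fraction limited to 16 significant digits.
    with localcontext() as ctx:
        ctx.prec = 16
        return int(int_part), +Decimal(frac_part)

# Fraction with lots of zeros after the point: all 9 significant
# digits survive, since Decimal counts from the first non-zero digit.
exact = hybrid('111111111111111111111111111111',
               '0.000000000000000000000000000000123456789')

# Fraction with lots of ones: the leading ones consume the 16-digit
# budget and the trailing 123456789 is rounded away.
lossy = hybrid('111111111111111111111111111111',
               '0.111111111111111111111111111111123456789')

print(exact[1])  # 1.23456789E-31
print(lossy[1])  # 0.1111111111111111
```

So how many fractional digits the type preserves depends on the digits between the decimal point and the interesting tail, not on the total number of significant digits.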

--
--Guido van Rossum (python.org/~guido)