Chris Angelico
On Tue, Mar 11, 2014 at 7:57 AM, Ethan Furman wrote:
I think he's saying: make a new type, similar to complex, but instead of two floats making up a complex number, it would pair a (long) int with a decimal float. The int portion would have infinite precision; the float portion would have, say, 16 digits (or whatever).
That's plausible as a representation, and looks tempting, but basic arithmetic operations become more complicated. Addition and subtraction just need to worry about carries, but multiplication forks out into four multiplications (int*int, int*frac, frac*int, frac*frac), and division becomes similarly complicated. Would it really be beneficial?
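To make the cost concrete, here is a minimal sketch of such a hybrid type, assuming non-negative values and a fractional part kept as a Decimal in [0, 1); the class name and normalization scheme are invented for illustration, not anyone's proposal. Addition needs only a carry between the two parts, while multiplication expands into the four partial products mentioned above:

```python
from decimal import Decimal, getcontext

getcontext().prec = 16  # ~16 digits for the fractional part, per the quote

class IntDec:
    """Hypothetical hybrid number: exact (long) int part + Decimal fraction.

    Sketch only: assumes non-negative values and a fraction in [0, 1).
    """
    def __init__(self, whole, frac=Decimal(0)):
        carry = int(frac)           # normalize: push whole units into int part
        self.whole = whole + carry  # arbitrary-precision int
        self.frac = frac - carry    # Decimal in [0, 1)

    def __add__(self, other):
        # addition/subtraction only need a carry between the two parts
        return IntDec(self.whole + other.whole, self.frac + other.frac)

    def __mul__(self, other):
        # four partial products: int*int, int*frac, frac*int, frac*frac
        ii = self.whole * other.whole
        mixed = (self.whole * other.frac
                 + other.whole * self.frac
                 + self.frac * other.frac)
        return IntDec(ii, mixed)

    def __repr__(self):
        return "IntDec(%d, %s)" % (self.whole, self.frac)

a = IntDec(123456789, Decimal("0.5"))   # 123456789.5
b = IntDec(2, Decimal("0.25"))          # 2.25
print(a + b)   # → IntDec(123456791, 0.75)
print(a * b)   # → IntDec(277777776, 0.375)
```

Note the hidden cost even in this toy: the int*frac products can themselves be huge, so their integer portions must be carried back into the exact part, and division would need an analogous (and messier) renormalization.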
That's right, it would complicate calculations exactly as you point out. That's why I said it might be too slow for most users even when implemented optimally; but then, I'm not sure performance should be the very first thing to discuss. Best, Wolfgang