On Sat, Mar 8, 2014 at 1:59 AM, Stephen J. Turnbull email@example.com wrote:
Seriously, as one data point, I don't think having more "human" representations encourages me to think of floating point results as the product of arithmetic on real numbers. I don't think anybody who knows how tricky "floating point" arithmetic can be is going to be fooled by the "pretty eyes" of a number represented as "2.0" rather than "1.99999999999999743591".
Fair enough. I just remember reading, back in my really REALLY early days with GW-BASIC, an explanation of why 3.2# (the hash made it double-precision) came out as whatever-it-did. It went into a full explanation of the nature of binary floating point, and the issue was forced to your attention because just about _any_ value that wasn't a neat multiple of a (negative) power of two would do that.
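The same thing is easy to demonstrate in modern Python (a sketch only; GW-BASIC's # suffix isn't reproducible here, but CPython floats are the same IEEE 754 doubles):

```python
from decimal import Decimal

# 3.2 has no finite binary expansion, so the stored double is merely
# the nearest representable value, not 3.2 itself.
print(Decimal(3.2))   # exact decimal value of the double nearest 3.2
print((3.2).hex())    # its binary (hex) representation

# By contrast, a neat sum of negative powers of two is stored exactly:
print(Decimal(0.75))  # 1/2 + 1/4 -- exactly representable
```

The first line prints dozens of digits because it shows the double's exact value; only values like 0.75 that are multiples of a negative power of two come out clean.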
You can lead a programmer to docs, but you can't make him understand.