PEP proposal for round(x,n) enhancement

Tim Peters tim_one at email.msn.com
Tue Sep 18 21:14:12 CEST 2001


[Tim]
> >>> float("%.3g" % 1234)
> 1230.0
> >>>
>
> That is, you're not required to print the result, and it's trivial to
> convert it back to a float (if that's what *you* want).

[Chris Barker]
> I did think of this, and it would be one easy way to write a SigFig()
> function. Do you trust the implementation to do it consistently and
> accurately?

More so than any other method -- both 754 and C99 give tight bounds on the
required accuracy of float<->string conversions, but no standard covers
round() or log() accuracy.

> And is it efficient?

That depends on your libc string<->float conversion routines, as Python
defers to the platform C for these operations.  Likewise the speed of log()
and log10() functions depends on the platform C libraries.
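Those tight bounds have a concrete payoff that's easy to check from Python (the sketch is mine, not from the thread): 17 significant decimal digits always round-trip an IEEE-754 double through a string and back.

```python
def roundtrips(x):
    # With correctly-rounded conversions (as 754 and C99 require),
    # 17 significant decimal digits are always enough to recover
    # the exact double on the way back in.
    return float("%.17g" % x) == x

# Even values with no exact binary representation, like 0.1,
# come back bit-for-bit identical.
```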

> and how hard would it be to make a NumPy Ufunc out of it.

Sorry, no idea.

>> while round() can introduce multiple rounding errors

> I'm curious why that would be...

Look at the implementation -- this wasn't a philosophical point <wink>.

>> (and errors you can't analyze without studying the implementation --

> True, but this is the case with all floating point functions.

Not good ones!  A good libm (KC Ng's fdlibm is a fine example) promises
worst-case error strictly less than 1 ULP (as compared to the
infinitely-precise true result).  round() doesn't, and it would be harder to
achieve that than I expect you realize.
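One surprise doesn't even require reading the implementation: the double you pass to round() is often not the decimal you typed, so the result can defy decimal intuition before round()'s own internal scaling errors enter the picture. A small illustration (mine, using the decimal module, which postdates this thread):

```python
from decimal import Decimal

# The literal 2.675 is stored as the nearest double, which is
# slightly *below* 2.675:
exact = Decimal(2.675)   # 2.67499999999999982236...
# So rounding to 2 places yields 2.67, not the 2.68 that decimal
# intuition expects -- and that's before any error introduced by
# round()'s own scale-round-unscale arithmetic.
result = round(2.675, 2)
```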

> Most of us would be perfectly happy with a good round() function,

"good" doesn't mean anything to me without quantification (as I partially
quantified "good libm" just above -- a more careful characterization would
include statements about error distribution and monotonicity).

> and would rather the weird details were worked out by folks who know
> more than me about FP. While I may know what int(x + 0.5) does, I might
> not have been aware of the biased rounding issue.

Sure, but some apps require biased rounding in halfway cases, while others
can't tolerate it.  A library routine can't make an intelligent decision
about this for you.  If you never thought about it and never got into
trouble as a result, then your apps (so far, and so far as you know) don't
care.
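The bias is easy to demonstrate (Python 3 shown; note that the round() of this thread's era rounded halves away from zero, whereas Python 3 rounds halves to even):

```python
# int(x + 0.5) resolves every exact halfway case upward, so the
# errors all push in one direction (biased rounding).
halves = [0.5, 1.5, 2.5, 3.5]
biased = [int(x + 0.5) for x in halves]   # [1, 2, 3, 4]

# Round-half-to-even alternates, so the halfway errors tend to
# cancel over many operations: [0, 2, 2, 4]
unbiased = [round(x) for x in halves]
```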

> Having a nice round() and SigFig function would in no way preclude you
> from using your own hand written version.

Sure.

>> when you wrote "to the degree it is possible" above, you weren't
>> describing Python's builtin round().

> Too bad. Does it have a major flaw we should address?

Different people judge "major" differently; ditto a single person in
different app contexts.  Worst-case error < 1 ULP is current best practice.
Would violating that be major to your apps?  It would be to some, but
probably not most.

> No it's not, but using a print-formatting statement to round a number
> seems like kind of a roundabout way to do it... and do you trust it any
> more than you do round()? or Christopher Smith's proposed function?

Yes and yes, for reasons explained at the top.  Efficient correctly-rounding
fp base conversion runs into thousands of lines of delicate code (see, e.g.,
David Gay's routines for this on Netlib).  That's because the "close to, but
not exactly at, halfway" cases are difficult to always get right without use
of extended precision, and avoiding strings doesn't sidestep that essential
difficulty (it's an inherent part of the problem space).
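A sketch of why those nearly-halfway cases are hard (example mine; the decimal module shown postdates this thread):

```python
from decimal import Decimal

# 0.35 looks like an exact halfway case for 1-significant-digit
# rounding, but the double actually stored is a hair below it:
stored = Decimal(0.35)        # 0.34999999999999997779...
# A correctly-rounding conversion must notice that tiny shortfall
# and print 0.3, not 0.4 -- detecting which side of halfway such
# values fall on is what demands all that delicate code.
one_digit = "%.1g" % 0.35
```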

> By the way, I'd love to have anyone who really knows the ins and outs
> of FP to comment on his proposed function.

Can't be done without knowing error bounds on the platform logarithm
function.  It's almost certainly "good enough" for most people most of the
time, though.  The nastiest part is being unable to quantify when and where
it's not good enough.
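For concreteness, here's a hypothetical log10-based significant-figure function in the spirit of the one under discussion (this is my sketch, not Christopher Smith's actual code); the floor-of-log10 step is exactly where platform logarithm accuracy sneaks in, since an off-by-one-ULP log10 near a power of ten can shift the computed exponent.

```python
import math

def sigfig_via_log(x, n):
    # Hypothetical: round x to n significant decimal digits using
    # log10 to find the decimal exponent.  Accuracy near decade
    # boundaries depends entirely on the platform log10().
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x)))   # decimal exponent of x
    p = 10.0 ** (e - n + 1)              # weight of the last kept digit
    return round(x / p) * p
```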

>> BTW, I'd much rather have a round() function that specified the
>> number of significant bits.

> That would be handy, and I did suggest that the proposed SigFig()
> function allow you to specify the base. Base 10 and 2 would be the most
> common, of course, but people might have a reason to use another base as
> well.

OTOH, the more you want, the less likely you'll find someone willing to
devote their spare time to implementing it.  If you're *willing* to settle
for "close enough, probably, most of the time", write it in Python yourself
and declare victory.  The float->string->float method is the easiest to code
in Python and the most likely to be most accurate across platforms.  There
will be x-platform differences in exactly-half-way cases, though (glibc uses
round-to-nearest/even then, Microsoft "add a half and chop", and oddly
enough Python's builtin round() acts more like Microsoft here).
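Such a pure-Python version might look like this (the name and signature are mine, not anything proposed in the thread):

```python
def sigfig(x, n):
    # Round x to n significant decimal digits by bouncing it
    # through the platform's float->string->float conversions,
    # which carry the tightest accuracy guarantees available.
    return float("%.*g" % (n, x))

# e.g. sigfig(1234, 3) gives 1230.0, matching the example at the top.
```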




