# [Tutor] My function correct_num()

Dennis Lee Bieber wlfraed at ix.netcom.com
Thu Mar 30 21:47:25 EDT 2023

```
On Thu, 30 Mar 2023 14:58:12 +0200, Goran Ikac <goranikac65 at gmail.com>
declaimed the following:

>Hi, I wish a nice day to every pythonist out there!
>I'm a newbie, still learning the basics. For an exercise, I wrote a
>function correct_num() to avoid ridiculous presentation of some numeric
>calculations in Python (e.g. print(.1 + .1 + .1) outputs
>0.30000000000000004):
>

	Since 0.1 has no exact finite representation in binary, it is stored
as a repeating binary fraction, rounded to fit. Your "ridiculous
presentation" is Python attempting to display as much accuracy as it can
for the value actually stored.

You can format the output easily, without invoking some long
function...

>>> x = .1 + .1 + .1
>>> print("%s" % x)
0.30000000000000004
>>> print("%f" % x)
0.300000
>>> print("%4.2f" % x)
0.30
>>>
>>> print("%4.2e" % x)
3.00e-01
>>>

{There are other ways to format output but the old "string interpolation"
just feels natural to me -- similar to C's printf() family.}

<SNIP>

>Now, I'm happy with the function's work, but I don't know what number of
>decimal digits to leave alone, e.g. what number of decimal digits are
>certainly OK. I've decided it to be six decimal digits:
>        if dec_digits < 7:
>            return num                  # return the same number.
>but it was by pure intuition. Does anybody know what is the right number of
>decimal digits to leave as they were returned by Python numeric
>calculations?

	SINGLE precision (32-bit) floating point is commonly considered to be
good for 7 significant digits (significant meaning the digits both before
AND after the decimal point).

>>> print("%13.6e" % x)
3.000000e-01
>>> print("%13.6e" % -x)
-3.000000e-01
>>>

Note that I've set the field width to allow for a negative sign, the
decimal point, and the 4-character exponent.
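	Python has no native single-precision float, but you can see the
~7-digit limit by round-tripping a value through a C "float" with the
standard struct module; a sketch:

import struct

def to_float32(x):
    # Nearest single-precision value to the double x
    return struct.unpack("f", struct.pack("f", x))[0]

print("%.17g" % 0.1)              # the double Python actually stores
print("%.17g" % to_float32(0.1))  # nearest float32; diverges after ~8 digits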

DOUBLE precision (64-bit) floating point is typically considered good
for 15 significant digits (Python uses double precision).

>>> print("%21.14e" % x)
3.00000000000000e-01
>>> print("%21.14e" % -x)
-3.00000000000000e-01
>>> print("%22.15e" % x)
3.000000000000000e-01
>>> print("%23.16e" % x)
3.0000000000000004e-01
>>> print("%23.16e" % -x)
-3.0000000000000004e-01
>>>

	Python's default output (repr) uses the shortest decimal string that
round-trips to the same double, which can require up to 17 significant
digits.
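	A quick check that the short form repr() picks still names the exact
same double:

x = .1 + .1 + .1
print(repr(x))              # 0.30000000000000004 -- shortest round-trip form
print("%.16e" % x)          # the 17 significant digits spelled out
print(float(repr(x)) == x)  # True: the short string recovers the exact double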

```