[Tutor] How does the interpreter determine how many decimal places to display for a float?

boB Stepp robertvstepp at gmail.com
Sat May 15 14:45:17 EDT 2021


On Sat, May 15, 2021 at 3:38 AM Peter Otten <__peter__ at web.de> wrote:
>
> On 15/05/2021 05:29, boB Stepp wrote:
>
> > How does >= Python 3.1 determine the "shortest of these"?
>
> This is an interesting question...

More interesting than I suspected based on what you have uncovered.
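
To make the question concrete: since 3.1, repr() is documented to
produce the shortest decimal string that converts back to exactly the
same float.  Here is a brute-force sketch of that idea in pure Python
(a toy function of my own devising; dtoa.c arrives at the same answer
far more efficiently):

def shortest_roundtrip(x):
    """Shortest %g-style string s with float(s) == x.

    Brute-force illustration only.  17 significant digits always
    suffice to round-trip an IEEE 754 double.
    """
    for ndigits in range(1, 18):
        s = format(x, f".{ndigits}g")
        if float(s) == x:   # does this string round-trip?
            return s
    return repr(x)          # only reached for NaN (NaN != NaN)

>>> shortest_roundtrip(0.1)
'0.1'
>>> shortest_roundtrip(1/3)
'0.3333333333333333'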

> A little digging in the Python source turns up
>
> https://github.com/python/cpython/blob/main/Python/dtoa.c

How did you ever think to look here?  "dtoa" = ?  "Double to ASCII",
presumably?  I would never have guessed that this is where the answer
lives.
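
Incidentally, you can watch what dtoa.c computes without reading any
C, straight from the interpreter (both of these are stock standard
library features, nothing hypothetical):

>>> from decimal import Decimal
>>> Decimal(0.1)     # the exact value the double actually stores
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> (0.1).hex()      # the same stored value, in binary form
'0x1.999999999999ap-4'
>>> repr(0.1)        # the shortest string mapping back to that value
'0.1'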

I have not studied C.  Looking over the referenced source code, why
do C programmers use such cryptic acronyms for their names?  I looked
up what the C standard says about naming, and there is no prohibition
on descriptive names.  The naming in this file is reminiscent of my
distant FORTRAN days.  Ugh!

> Does that answer your question? Well, it didn't help me, so I looked for
> the original code and found what seems to be the corresponding paper
>
> David M. Gay:
> Correctly Rounded Binary-Decimal and Decimal-Binary Conversion
> https://ampl.com/REFS/rounding.pdf
>
> which in turn leads to
>
> Guy L. Steele Jr., Jon L White:
> How to Print Floating-Point Numbers Accurately
>
> https://lists.nongnu.org/archive/html/gcl-devel/2012-10/pdfkieTlklRzN.pdf
>
>
> Hope that helps ;)

Looks like some light after-dinner reading.  ~(:>))
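
In the meantime, one takeaway from those papers is easy to verify
from the interpreter: 17 significant digits are always enough to
round-trip a double, and repr() just finds the shortest digit count
that still works:

>>> x = 1/3
>>> format(x, ".17g")   # 17 digits always round-trip
'0.33333333333333331'
>>> repr(x)             # the shortest form that still round-trips
'0.3333333333333333'
>>> float(repr(x)) == x
True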

Thanks!
boB Stepp

