[Python-Dev] Caching float(0.0)
tim.peters at gmail.com
Wed Oct 4 06:53:55 CEST 2006
[skip at pobox.com]
> Can you give a simple example where the difference between the two is apparent
> to the Python programmer?
BTW, I don't recall the details and don't care enough to reconstruct
them, but when Python's front end was first changed to recognize
"negative literals", it treated +0.0 and -0.0 the same, and we did get
bug reports as a result.
A bit more detail, because some background is needed to understand
even that much. Python's grammar doesn't have negative numeric literals;
e.g., according to the grammar, -1 and -1.1
are applications of the unary minus operator to the positive numeric
literals 1 and 1.1. And for years Python generated code accordingly:
LOAD_CONST followed by the unary minus opcode.
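That grammar point can be checked directly with the `ast` module: the
parse tree for `-1.1` is a unary minus (`USub`) applied to the positive
literal `1.1`, not a negative literal.

```python
import ast

# Parse "-1.1" as an expression; the tree shows unary minus applied
# to the positive constant 1.1 -- there is no negative literal node.
tree = ast.parse("-1.1", mode="eval")
node = tree.body
print(type(node).__name__)      # UnaryOp
print(type(node.op).__name__)   # USub
print(node.operand.value)       # 1.1
```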
Someone (Fred, I think) introduced a front-end optimization to
collapse that to plain LOAD_CONST, doing the negation at compile time.
The code object contains a vector of compile-time constants, and the
optimized code initially didn't distinguish between +0.0 and -0.0. As
a result, if the first float 0.0 in a code block "looked positive",
/all/ float zeroes in the code block were in effect treated as
positive; and similarly if the first float zero was -0.0, all float
zeroes were in effect treated as negative.
That did break code. IIRC, it was fixed by special-casing the snot
out of "-0.0", leaving that single case as a LOAD_CONST followed by
the unary minus opcode.
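A constant cache can avoid the conflation by including the float's sign
in its lookup key instead of relying on == alone. This is only an
illustrative sketch (the `const_key` helper is made up for the example,
not CPython's actual constant-folding code):

```python
import math

def const_key(value):
    # Keyed by == alone, 0.0 and -0.0 would collide, since they compare
    # equal. Adding the sign to the key keeps the two zeroes in separate
    # slots. (Illustrative only; not CPython's real implementation.)
    if isinstance(value, float):
        return (float, value, math.copysign(1.0, value))
    return (type(value), value)

consts = {}
for literal in (0.0, -0.0, 0.0, -0.0):
    consts.setdefault(const_key(literal), literal)

print(len(consts))                          # 2
print([repr(v) for v in consts.values()])   # ['0.0', '-0.0']
```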