Fully agreed on the sentiment that we shouldn't treat compile-time literals differently from runtime operations. It has no precedent in Python and adds a significant mental burden to keep track of. I can only imagine the deluge of StackOverflow threads from surprised users if this were to be done.
That said, I also really like the idea of better Python support for symbolic and decimal math.
How about this as a compromise:
`from __feature__ import decimal_math, fraction_math`
With the two interpreter directives above (which must appear at the top of the module, either before or after `__future__` imports, but before anything else), any float literals inside that module would be automatically coerced to `Decimal`, and any division operations would be coerced to `Fraction`. You could also specify just one of the two directives if you wanted.
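To make the proposed semantics concrete, here's a rough sketch of what the coercion could look like, done today as an AST transform over a single expression. This is purely illustrative: the names (`DecimalLiterals`, `FractionDivision`, `run_with_features`) are hypothetical, and a real `__feature__` implementation would live in the compiler rather than rewriting trees at runtime.

```python
import ast
from decimal import Decimal
from fractions import Fraction


class DecimalLiterals(ast.NodeTransformer):
    """Rewrite each float literal into Decimal('<literal>')."""

    def visit_Constant(self, node):
        if isinstance(node.value, float):
            # Use the literal's repr so Decimal sees the exact source text.
            return ast.Call(
                func=ast.Name(id="Decimal", ctx=ast.Load()),
                args=[ast.Constant(value=repr(node.value))],
                keywords=[],
            )
        return node


class FractionDivision(ast.NodeTransformer):
    """Rewrite each `a / b` into Fraction(a, b)."""

    def visit_BinOp(self, node):
        self.generic_visit(node)  # transform nested expressions first
        if isinstance(node.op, ast.Div):
            return ast.Call(
                func=ast.Name(id="Fraction", ctx=ast.Load()),
                args=[node.left, node.right],
                keywords=[],
            )
        return node


def run_with_features(source):
    """Evaluate an expression as if both directives were in effect."""
    tree = ast.parse(source, mode="eval")
    tree = DecimalLiterals().visit(tree)
    tree = FractionDivision().visit(tree)
    ast.fix_missing_locations(tree)
    return eval(compile(tree, "<feature-demo>", "eval"),
                {"Decimal": Decimal, "Fraction": Fraction})


print(run_with_features("0.1 + 0.2"))  # Decimal('0.3'), not 0.30000000000000004
print(run_with_features("1 / 3"))      # Fraction(1, 3), exact rather than 0.333...
```

The `0.1 + 0.2` case is exactly the "math that behaves the way they expect" scenario: the transformed expression yields `Decimal('0.3')` where plain floats give `0.30000000000000004`.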
Upsides:
- No change at all to existing code
- By specifically opting into this behavior, you would be accepting the reduced performance and memory efficiency
- Very simple to explain to people using Python for maths or science (not an insignificant userbase) who don't understand or care about the merits and downsides of the various data types and just want to do math that behaves the way they expect
Downsides:
- the complexity of adding this new `__feature__` interpreter directive, although it *should* be possible to reuse the existing `__future__` machinery for it
- having to maintain these new 'features' into the future
- you can't choose to mark a single float literal as a decimal or a single division operation as a fraction; it's all-or-nothing within a given module
I don't know. It was just an idea off the top of my head. On second thought, maybe it's needlessly contrived.
Cheers everyone