On Mon, Mar 10, 2014 at 04:00:38PM +0000, Oscar Benjamin wrote:
On 10 March 2014 15:31, Steven D'Aprano wrote:
On Mon, Mar 10, 2014 at 02:05:22PM +0000, Oscar Benjamin wrote:
Where this would get complicated is for people who also use the Decimal type. They'd need to keep track of which objects were of which type and so decimal literals might seem more annoying than useful.
Hmmm. I don't think this should necessarily be complicated, or at least no more complicated than dealing with any other mixed numeric types. If you don't want to mix them, don't mix them :-)
Exactly. It just means that you wouldn't want to use the literals in code that actually uses the decimal module. Consider:
If you're writing a function that accepts whatever random numeric type the caller passes in, you have to expect that you cannot control the behaviour precisely. (You can't control rounding or precision for floats; Fractions are exact; the proposed decimal64 will have a fixed precision and fixed rounding; decimal.Decimal you can control.) You might still write type-agnostic code, but the result you get may depend on the input data.
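[Editorial illustration, not part of the original thread: the point above can be seen by passing different numeric types through the same type-agnostic function; the function name `third` is made up for the example.]

```python
from decimal import Decimal, localcontext
from fractions import Fraction

def third(x):
    # Type-agnostic: the result's rounding behaviour depends
    # entirely on what type the caller passed in.
    return x / 3

print(third(1.0))             # float: fixed binary precision
print(third(Fraction(1)))     # Fraction(1, 3): exact
with localcontext() as ctx:
    ctx.prec = 6              # Decimal: precision under caller's control
    print(third(Decimal(1)))  # 0.333333
```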
if some_condition:
    x = 1d
else:
    x = Decimal(some_string)
Mixing different types within a single calculation/function, as shown above, sounds like a recipe for chaos to me. I wouldn't write it like that, any more than I would write:

if some_condition:
    x = 1.1
else:
    x = Decimal("2.2")
# ...
y = x / 3
So now x / 3 rounds differently depending on whether x is a decimal64 or a Decimal. I probably don't want that.
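[Editorial illustration, not part of the original thread: no decimal64 type exists in Python, but the effect Oscar describes can be sketched with the decimal module by dividing under two different contexts; prec=16 approximates IEEE 754 decimal64, and the names `wide` and `d64` are made up for the example.]

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN

x = Decimal(1)

# A context like the decimal module's default: 28 significant digits.
wide = Context(prec=28, rounding=ROUND_HALF_EVEN)
# A context approximating a fixed-precision decimal64: 16 digits.
d64 = Context(prec=16, rounding=ROUND_HALF_EVEN)

# The "same" expression x / 3 rounds differently under each context.
print(wide.divide(x, 3))  # 0.3333333333333333333333333333
print(d64.divide(x, 3))   # 0.3333333333333333
```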
Then don't do it! :-)
The solution: coerce to Decimal.
Or write both branches as a Decimal in the first place.
But then why did I bother with the Decimal literal anyway? Decimal(1d) is hardly better than Decimal('1').
Presumably you didn't use a decimal64 literal if you cared about controlling the rounding or precision, because that would be silly. (You might, on the other hand, use decimal64 literals everywhere in your function, if its rounding and precision was sufficient for your needs.) However, the caller of your function may pass you a decimal64 instance.

I don't think there's much advantage to passing integer values with the d suffix, but passing fractional values is a different story:

some_function(1.1)

versus

some_function(1.1d)

I think this is a proposal I could get behind: leave the decimal module (mostly) as-is, perhaps with a few backwards-compatible tweaks, while adding an independent decimal64 or decimal128 built-in type and literal with fixed rounding and precision. The new literal type would satisfy the use-case Mark Harris is worried about, apart from having to add a "d" suffix to literals[1], while the decimal module still allows for fine control of the context. The built-in decimal type should convert floats using their repr, which ought to make Guido happy too :-)

[1] Shifting to decimal floats by default is a major backwards-incompatible change.

-- 
Steven
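[Editorial illustration, not part of the original thread: the "convert floats using their repr" point can be demonstrated with today's decimal module, which instead converts floats exactly.]

```python
from decimal import Decimal

x = 1.1

# Today's Decimal converts the float's exact binary value,
# exposing its representation error:
print(Decimal(x))
# 1.100000000000000088817841970012523233890533447265625

# Converting via repr() gives the value the user most likely meant,
# which is what the proposed built-in decimal type would do:
print(Decimal(repr(x)))
# 1.1
```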