On Tuesday, August 27, 2019, 11:12:51 AM PDT, Chris Angelico
On Wed, Aug 28, 2019 at 3:10 AM Andrew Barnert via Python-ideas
wrote:

Before I get into this, let me ask you a question. What does the j suffix give us? You can write complex numbers without it just fine:
c = complex(1, 2)
And you can even write a j function trivially:
def j(x): return complex(0, x)

1 + j(2)
But would anyone ever write that when they can write it like this:
1 + 2j
I don’t think so. What does the j suffix give us? The two extra keystrokes are trivial. The visual noise of the parens is a bigger deal. But the real issue is that the suffix matches the way we conceptually think of complex numbers, and the way we write them in other contexts. (Well, the way electrical engineers write them; most of the rest of us use i rather than j… but still, having to use j instead of i is less of an impediment to reading 1+2j than having to use function syntax like 1+i(2).)
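For concreteness, the function-call workaround really does build the same value as the suffix; a quick check:

```python
def j(x):
    return complex(0, x)

# The suffix form and the function form produce the same complex number.
result = 1 + j(2)
print(result)  # (1+2j)
```

The point is not correctness, it's that nobody would write `1 + j(2)` when `1 + 2j` is available.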
And the exact same thing is true in 3D or CUDA code that uses a lot of float32 values. Or code that uses a lot of Decimal values. In those cases, I actually have to go through a string for implementation reasons (because otherwise Python would force me to go through a float64 and distort the values), but conceptually, there are no strings involved when I write this:
array([f('0.2'), f('0.3'), f('0.1')])
… and it would be a lot more readable if I could write it the same way I do in other programming languages:
array([0.2f, 0.3f, 0.1f])
Again, it’s not about saving 4 keystrokes per number, and the visual noise of the parens is an issue but not the main one (and quotes are barely any noise by comparison); it’s the fact that these numeric values should look like numeric values instead of looking like strings.
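The "go through a float64 and distort the values" point is easy to see with Decimal, which is in the stdlib: constructing from a float inherits the float's binary rounding error, while constructing from the source text does not.

```python
from decimal import Decimal

# Construct from the source text: exact.
exact = Decimal('0.1')

# Construct by going through a float64 first: picks up binary rounding error.
via_float = Decimal(0.1)

print(exact)      # 0.1
print(via_float)  # 0.1000000000000000055511151231257827021181583404541015625
```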
> If your conclusion here were "and that's why Python needs a proper
> syntax for Decimal literals", then I would be inclined to agree with
> you - a Decimal literal would be lossless (as it can entirely encode
> whatever was in the source file), and you could then create the
> float32 values from those.

I think builtin Decimal literals are a non-starter. The type isn't even builtin. You surely wouldn't want to incur the cost of importing it in every Python session. And implementing some kind of lazy import mechanism in the middle of the json module is one thing, but in the middle of the compiler? So how _could_ you implement them? (While we're at it, what would that do to MicroPython and… one of the browser Pythons, I forget which… that have 100% syntax compatibility with Python but leave out much of the stdlib, including decimal? Sure, nobody ever promised they could do that, but it's a happy accident that they can, and do we want to break that capriciously?)

Maybe you could come up with some kind of DecimalLiteral object that doesn't actually act like a number, but can be converted to all of the different numeric types as needed (so, e.g., if you `__add__` or `__radd__` one to a `float` it converts to a `float`, etc.). That works great in languages like Swift and Haskell, but I don't think there's a feasible design for a dynamically-typed language.

So, even if Decimal literals really were the only thing we needed, a way to register user-defined literals may be the best way to get them. But they're not the only thing. You didn't even attempt to answer the comparison with complex that you quoted. The problem that `j` solves is not that there's no way to create complex values losslessly out of floats, but that there's no way to create them _readably_, in a way that's consistent with the way you read and write them in every other context. Which is exactly the problem that `f` solves.
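(To make the DecimalLiteral idea above concrete: here is a minimal toy sketch, with entirely hypothetical names, that converts lazily on `__add__`/`__radd__`. It also shows exactly where it falls down without static types: the literal on its own is not a number until every dunder is spelled out.)

```python
from decimal import Decimal

class DecimalLiteral:
    """Hypothetical placeholder that keeps the literal's source text and
    only commits to a numeric type when combined with one."""
    def __init__(self, text):
        self.text = text

    def __add__(self, other):
        if isinstance(other, float):
            return float(self.text) + other   # collapse to float
        return Decimal(self.text) + other     # otherwise stay exact

    __radd__ = __add__

lit = DecimalLiteral('1.5')
print(lit + 0.25)             # 1.75 (a float)
print(lit + Decimal('0.25'))  # 1.75 (a Decimal)
# But without static typing, `lit` alone is unusable as a number:
# math.sqrt(lit), lit < 1, etc. all fail until every operation is defined.
```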
Adding a Decimal literal would not help that at all: letting me write `f(1.23d)` instead of `f('1.23')` does not let me write `1.23f`.

Also, I think you're the one who brought up performance earlier? On my laptop, `%timeit np.float32('1.23')` runs in 671 ns, while `%timeit np.float32(d)` with a pre-constructed `d = Decimal('1.23')` takes 2.56 µs, so adding a Decimal literal instead of custom literals actually encourages _slower_ code, not faster.

Also, as the OP has pointed out repeatedly and nobody has yet answered: if I want to write `f(1.23d)` or `f('1.23')`, I have to pollute the global namespace with a function named `f` (a very commonly-used name); if I want to write `1.23f`, I don't, since the converter gets stored in some out-of-the-way place like `__user_literals_registry__['f']` rather than `f`. That seems like a serious benefit to me.

> But you haven't made the case for generic string prefixes or any sort
> of "arbitrary literal" that would let you import something that
> registers something to make your float32 literals.
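To make the registry idea concrete, here is a toy sketch (the registry name and everything below is hypothetical, not a proposed API): the converter lives in a mapping keyed by suffix, so no ordinary name like `f` is taken.

```python
from decimal import Decimal
from fractions import Fraction

# Hypothetical registry: suffix -> converter. In a real design this would
# live somewhere out of the way, not in the user's namespace.
_literal_registry = {}

def register_literal(suffix, converter):
    _literal_registry[suffix] = converter

def evaluate_literal(token):
    """What the compiler might do with a token like '1.23d':
    strip the registered suffix and hand the source text to the converter."""
    for suffix, convert in _literal_registry.items():
        if token.endswith(suffix):
            return convert(token[:-len(suffix)])
    raise ValueError(f'no literal suffix registered for {token!r}')

register_literal('d', Decimal)
register_literal('F', Fraction)

print(evaluate_literal('1.23d'))  # 1.23 (an exact Decimal)
print(evaluate_literal('1/3F'))   # 1/3 (a Fraction)
```

Note that the converters receive the source text, so both literals stay lossless, which the float64 route cannot guarantee.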
Sure I did; you just cut off the rest of the email that had other cases. And ignored most of what you quoted about the float32 case. And ignored the previous emails by both me and the OP that had other cases. Or can you explain to me how a builtin Decimal literal could solve the problem of Windows paths?

Here are a few more: numeric types that can't be losslessly converted to and from Decimal, like Fraction. Something more similar to complex (e.g., `quat = 1.0x + 0.0y + 0.1z + 1.0w`). What would Decimal literals do for me there?

I think your reluctance and the OP's excitement here both come from the same source: any feature that gives you a more convenient way to write and read something is good, because it lets you write things in a way that's consistent with your actual domain, and also bad, because it lets you write things in a way that's not readable to people who aren't steeped in your domain. Those are _always_ both true, so just arguing from first principles is pointless. The question is whether, for this specific feature, there are good uses where the benefit outweighs the cost. And I think there are.

In fact, if you're already convinced that we need Decimal literals, then unless you can come up with a more feasible way to add them as builtins, Decimal on its own seems like a sufficient use case for the feature.
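For the quaternion case, the best you can do today is the same function-call workaround as j(). A toy sketch with hypothetical names, just to show how far it is from `1.0x + 0.0y + 0.1z + 1.0w`:

```python
class Quat:
    """Toy quaternion w + x*i + y*j + z*k (illustration only)."""
    def __init__(self, w=0.0, x=0.0, y=0.0, z=0.0):
        self.w, self.x, self.y, self.z = w, x, y, z

    def __add__(self, other):
        return Quat(self.w + other.w, self.x + other.x,
                    self.y + other.y, self.z + other.z)

# The function-call workaround: note it claims four one-letter global names,
# which is exactly the namespace-pollution problem described above.
def x(v): return Quat(x=v)
def y(v): return Quat(y=v)
def z(v): return Quat(z=v)
def w(v): return Quat(w=v)

q = x(1.0) + y(0.0) + z(0.1) + w(1.0)
print((q.w, q.x, q.y, q.z))  # (1.0, 1.0, 0.0, 0.1)
```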