On Thu, 29 Aug 2019 at 01:18, Andrew Barnert wrote:
Also, it's worth noting that the benefits of *user-defined* literals are *not* the same as the benefits of things like 0.2f, or 3.14d, or even re/^hello.*/. Those things may well be useful. But the benefit you gain from *user-defined* literals is that of letting the end user make the design decisions, rather than the language designer. And that's a subtly different thing.
That’s a good point, but I think you’re missing something big here.
Think about it this way: assuming f and frac and dec and re and sql and so on are useful, our options are:

1) people don't get a useful feature
2) we add user-defined affixes
3) we add all of these as builtin affixes
While #3 isn't theoretically impossible, it's wildly implausible, and probably a bad idea to boot, so the realistic choice is between 1 and 2.
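For concreteness, everything those affixes would buy is already expressible today as ordinary constructor calls. A minimal sketch of the status quo (the affix spellings like 0.1d or a frac suffix are hypothetical proposals, not real syntax):

```python
# What a hypothetical 0.1d, 1/3 frac, or re/^hello.*/ literal would be
# shorthand for, using today's explicit constructor calls:
from decimal import Decimal
from fractions import Fraction
import re

d = Decimal("0.1") + Decimal("0.2")    # exact decimal arithmetic
f = Fraction(1, 3) + Fraction(2, 3)    # exact rational arithmetic
pat = re.compile(r"^hello.*")          # a "regex literal"

print(d)                               # 0.3
print(f)                               # 1
print(bool(pat.match("hello world")))  # True
```

The question in the thread is purely whether the literal spelling is worth adding; the underlying behaviour is already a few characters away.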
That's a completely different point. Built-in affixes are defined by the language; user-defined affixes are defined by the user (obviously!). That includes all aspects of design - both how a given affix works, and whether it's justified to have an affix at all for a given use case.

The argument is identical to that of user-defined operators vs built-in operators. If you can use this argument to justify user-defined affixes, it applies equally to user-defined operators - something that has been asked for far more often, with much more widespread precedent in other languages, and been rejected every time.

Regarding your cases #1, #2, and #3, this is the fundamental point of language design - you have to choose whether a feature is worthwhile (in the face of people saying "well *I* would find it useful"), and whether to provide a general mechanism or make a judgement on which (if any) use cases warrant a special-case language builtin. If you assume everything should be handled by general mechanisms, you end up at the Lisp/Haskell end of the spectrum. If you decide that the language defines the limits, you are at the C end.

Traditionally, Python has been a lot closer to the "language defined" end of the scale than the "general mechanisms" end. You can argue whether that's good or bad, or even whether things should change because people have different expectations nowadays, but it's a fairly pervasive design principle, and should be treated as such.

This actually goes back to the OP's point:
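To make the operator parallel concrete: Python already lets a class redefine the meaning of the *language-defined* operator symbols, but gives users no way to mint new symbols - exactly the "language defines the limits" stance. A minimal sketch (the `Pipe` class is an illustrative example, not anything proposed in the thread):

```python
# A class may give its own meaning to an existing operator symbol...
class Pipe:
    def __init__(self, value):
        self.value = value

    def __or__(self, func):
        # Reuse the built-in "|" symbol as a function-application pipe.
        return Pipe(func(self.value))

result = Pipe(3) | (lambda x: x + 1) | (lambda x: x * 2)
print(result.value)  # 8

# ...but a brand-new symbol is simply not in the grammar:
#   3 |> double    # SyntaxError - users cannot define new operators
```

The set of operator symbols is fixed by the language designer; only their per-type behaviour is delegated to users. User-defined affixes would cross that same line.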
we can get to that point later when there is a general understanding that this is worth considering
The biggest roadblock to a "general understanding that this is worth considering" is precisely that Python has traditionally avoided (over-)general mechanisms for things like this - the obvious other example, as I mentioned above, being user-defined operators.

I've been very careful *not* to use the term "Pythonic" here, as it's too easy for that to become a way of saying "my opinion is more correct than yours" without real justification. But the real stumbling block for proposals like this tends to be far less about the technical issues, and far *more* about "does this fit into the philosophy of Python as a language, that has made it as successful as it is?" My instinct is that it doesn't fit well with Python's general philosophy.

Paul