IDEA: Allow override of all assignments of a built in type to be a different type
One of the lovely things about Python is that we have the capability to avoid issues such as the vagaries of the floating point type with libraries such as decimal and fractions. This is wondrous to me, but it comes with an issue that I suspect is limiting its usage: you have to litter your code with constructors of those types, e.g.

```
a = 0.1
b = 0.2
c = 0.3
a + b == c # This doesn't work but it is easy to forget why
```

vs:

```
from decimal import * # I know why this is bad but it comes straight from the examples
a = Decimal("0.1") # Needs to be a string to avoid being temporarily a float
b = Decimal("0.2") # Ditto
c = Decimal("0.3") # Ditto
a + b == c # Magic, this works!
```

While this works it is not very friendly to users, and there will often be cases where people forget to quote the values, etc. There are similar issues with using the fractions library, and I am sure with others. I suspect that this is why the decimal library, and some others, is not used as widely as it possibly should be.

Wouldn't it be possible to have something along the lines of:

```
from decimal import TreatFloatsAsDecimal

@TreatFloatsAsDecimal
a = 0.1 # These are all now decimals
b = 0.2
c = 0.3
a + b == c # This now works
```

Less obviously, this sort of approach could also apply to making all integers into Fractions, or other base types into something else.

I do know that this goes against "explicit is better than implicit", so an alternative might be to have a flexible base modifier, something like:

```
from decimal import DecimalBase as D
a = 0D0.1 # These are all now decimals
b = 0D0.2
c = 0D0.3
a + b == c # This now works
```

Since anything like this would require overriding at least part of the base interpreter, I do not think that this is suitable for an external PyPI library; at the very least some hooks in the interpreter would be required to make it possible.

Any thoughts, feedback, interest, expressions of horror? Or is this already there and I have missed it?

Steve Barnes
On Thu, 5 Mar 2020 at 09:28, Steve Barnes <GadgetSteve@live.co.uk> wrote:
Wouldn’t it be possible to have something along the lines of:
```
from decimal import TreatFloatsAsDecimal

@TreatFloatsAsDecimal
a = 0.1 # These are all now decimals
b = 0.2
c = 0.3
a + b == c # This now works
```
I'm not at all clear how you imagine this would work. How would the scope of that TreatFloatsAsDecimal "decorator" be determined? How would the Decimal constructor get access to the original string representation of 0.1, rather than the float value?
I do know that this goes against the explicit is better than implicit so an alternative might be to have a flexible base modifier, something like:
```
from decimal import DecimalBase as D
a = 0D0.1 # These are all now decimals
b = 0D0.2
c = 0D0.3
a + b == c # This now works
```
You can already do

```
from decimal import Decimal as D
a = D("0.1") # These are all now decimals
b = D("0.2")
c = D("0.3")
a + b == c # This now works
```

so the main differences are (1) a few keystrokes, and (2) a conceptual model closer to what you intend ("these are decimal numbers" rather than "these are strings that we parse to get decimals"). I mention (2) because it's often ignored, and yet it's probably the most important reason people keep coming back to this.

But user-defined literals have been discussed many times in the past, and never really got anywhere as an idea. I don't have links right now, but a search of the mailing list archives should turn up some. You should probably research those and explain how your proposal here addresses the objections that have been raised previously. Personally, I have a vague attachment to the idea in theory, but in practice I don't think I'd ever use it (at least not for the use cases you mention, and I can't think of any others that I would use it for), so it would be a non-trivial increase in the complexity of the language for little obvious benefit.

The other thing that often acts as a good argument for a proposal is to point out some reasonably substantial bodies of existing, real-world code that would be improved by the proposal. This is again where this idea often falls down - I don't actually know of any substantial code base that even uses Decimal or Fraction types, much less uses literals sufficiently frequently that a dedicated syntax would help. (I'm not saying that Decimal and/or Fraction are useless - I use them myself, but mostly in the REPL or in ad hoc code experiments, not in actual applications.)

Paul
Comments in-line (I wish Outlook would behave sensibly)

On 05 March 2020 09:52, Paul Moore <p.f.moore@gmail.com> wrote:

On Thu, 5 Mar 2020 at 09:28, Steve Barnes <GadgetSteve@live.co.uk> wrote:
Wouldn’t it be possible to have something along the lines of:
```
from decimal import TreatFloatsAsDecimal

@TreatFloatsAsDecimal
a = 0.1 # These are all now decimals
b = 0.2
c = 0.3
a + b == c # This now works
```
I'm not at all clear how you imagine this would work. How would the scope of that TreatFloatsAsDecimal "decorator" be determined? How would the Decimal constructor get access to the original string representation of 0.1, rather than the float value?

[Steve Barnes] To my mind a decorator would have function scope, and there might be a file-scope "magic". The interpreter would have to handle passing the raw value (always a string at that stage) to the Decimal constructor rather than to the float constructor.
I do know that this goes against the explicit is better than implicit so an alternative might be to have a flexible base modifier, something like:
```
from decimal import DecimalBase as D
a = 0D0.1 # These are all now decimals
b = 0D0.2
c = 0D0.3
a + b == c # This now works
```
You can already do

```
from decimal import Decimal as D
a = D("0.1") # These are all now decimals
b = D("0.2")
c = D("0.3")
a + b == c # This now works
```

so the main differences are (1) a few keystrokes, and (2) a conceptual model closer to what you intend ("these are decimal numbers" rather than "these are strings that we parse to get decimals"). I mention (2) because it's often ignored, and yet it's probably the most important reason people keep coming back to this.

[Steve Barnes] (2) is an important point and, as you say, all too often forgotten - I don't mind a few keystrokes, but the fact that Decimal(0.2) != Decimal("0.2") is the source of more than a few bugs, I am sure.

But user-defined literals have been discussed many times in the past, and never really got anywhere as an idea. I don't have links right now, but a search of the mailing list archives should turn up some. You should probably research those and explain how your proposal here addresses the objections that have been raised previously. Personally, I have a vague attachment to the idea in theory, but in practice I don't think I'd ever use it (at least not for the use cases you mention, and I can't think of any others that I would use it for), so it would be a non-trivial increase in the complexity of the language for little obvious benefit.

The other thing that often acts as a good argument for a proposal is to point out some reasonably substantial bodies of existing, real-world code that would be improved by the proposal. This is again where this idea often falls down - I don't actually know of any substantial code base that even uses Decimal or Fraction types, much less uses literals sufficiently frequently that a dedicated syntax would help. (I'm not saying that Decimal and/or Fraction are useless - I use them myself, but mostly in the REPL or in ad hoc code experiments, not in actual applications.)

[Steve Barnes] To an extent, the lack of substantial code bases that make use of decimal could well be at least partly because of these issues. There is probably a lot of code out there with either bugs or awkward workarounds that could be addressed by such a feature - just about any financial calculation, for example; I have seen quite a lot of such code that works in pennies or cents and then scales the results.

Paul
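For illustration (not part of the original exchange), a quick REPL sketch of the pitfall being discussed - constructing from a float captures the binary approximation, while constructing from a string preserves the intended decimal value:

```
>>> from decimal import Decimal
>>> Decimal(0.2)            # the float 0.2 is already a binary approximation
Decimal('0.200000000000000011102230246251565404236316680908203125')
>>> Decimal("0.2")          # the string preserves the intended value
Decimal('0.2')
>>> Decimal(0.2) == Decimal("0.2")
False
```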
On Mar 5, 2020, at 02:11, Steve Barnes <GadgetSteve@live.co.uk> wrote:
Comments in-line (I wish Outlook would behave sensibly)
On 05 March 2020 09:52, Paul Moore <p.f.moore@gmail.com> wrote:
On Thu, 5 Mar 2020 at 09:28, Steve Barnes <GadgetSteve@live.co.uk> wrote:

Wouldn't it be possible to have something along the lines of:

```
from decimal import TreatFloatsAsDecimal

@TreatFloatsAsDecimal
a = 0.1 # These are all now decimals
b = 0.2
c = 0.3
a + b == c # This now works
```
I'm not at all clear how you imagine this would work. How would the scope of that TreatFloatsAsDecimal "decorator" be determined? How would the Decimal constructor get access to the original string representation of 0.1, rather than the float value?

[Steve Barnes] To my mind a decorator would have function scope, and there might be a file-scope "magic". The interpreter would have to handle passing the raw value (always a string at that stage) to the Decimal constructor rather than to the float constructor.
But it’s not a string at that stage. The compiler compiles float literals into float constant values attached to the code object. You need to change it to either put a Decimal value in the constants, or put the string in the constants and emit a call to Decimal in the bytecode. Either way, there’s nothing left for the interpreter to do; you’ve done it all at compile time.
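For illustration (not from the original message), CPython makes this easy to see - by the time the module-level code runs, the literal is already a float constant stored on the code object:

```
>>> code = compile("a = 0.1", "<example>", "exec")
>>> code.co_consts          # 0.1 is already a float here; no source string survives
(0.1, None)
>>> import dis
>>> dis.dis(code)           # LOAD_CONST simply pushes that pre-built float
```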
On 5/03/20 11:09 pm, Steve Barnes wrote:
just about any financial calculations for example - I have seen quite a lot of such code that works in pennies or cents and then scales the results for example.
But how much real-world financial code uses hard-coded amounts of money, rather than reading them as input, or getting them from a database or config file? -- Greg
On Thu, Mar 5, 2020 at 5:29 AM Steve Barnes <GadgetSteve@live.co.uk> wrote:
One of the lovely things about Python is that we have the capability to avoid issues such as the vagaries of the floating point type with libraries such as decimal and fractions. This is wonderous to me but comes with an issue that I suspect is limiting its usage. That issue is that you have to litter your code with constructors of those types, e.g.
```
a = 0.1
b = 0.2
c = 0.3
a + b == c # This doesn’t work but it is easy to forget why
```
vs:
```
from decimal import * # I know why this is bad but it comes straight from the examples
a = Decimal("0.1") # Needs to be a string to avoid being temporarily a float
b = Decimal("0.2") # Ditto
c = Decimal("0.3") # Ditto
a + b == c # Magic this works!
```
While this works it is not very friendly to users and there will often be cases where people forget to quote the values, etc. and there are similar issues with using the fractions library and I am sure with others. I suspect that this is why the decimals library, and some others, is not used as widely as it possibly should be.
Wouldn’t it be possible to have something along the lines of:
```
from decimal import TreatFloatsAsDecimal
@TreatFloatsAsDecimal
a = 0.1 # These are all now decimals
b = 0.2
c = 0.3
a + b == c # This now works
```
It is possible, and quite straightforward, to do this as an import hook or a custom encoding. I will try to do so and document it later today, and post it on this list.
Less obviously this sort of approach could also apply to making all integers into Fractions or other base types into something else.
Also a bit tricky since you don't want to end up with for i in range(Fraction(4)) ... However, have a look at https://aroberge.github.io/ideas/docs/html/fractional_math_tok.html André Roberge
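As a side illustration (not from the original message), the exactness Fraction buys you, and the same string-versus-float constructor caveat that applies to Decimal:

```
>>> from fractions import Fraction
>>> Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
True
>>> Fraction("0.1")         # the string form gives the exact rational
Fraction(1, 10)
>>> Fraction(0.1)           # the float form captures the binary approximation
Fraction(3602879701896397, 36028797018963968)
```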
I do know that this goes against the explicit is better than implicit so an alternative might be to have a flexible base modifier, something like:
```
from decimal import DecimalBase as D
a = 0D0.1 # These are all now decimals
b = 0D0.2
c = 0D0.3
a + b == c # This now works
```
Since anything like this would require overriding at least part of the base interpreter I do not think that this is a suitable one for an external PyPi library or at least some hooks in the interpreter would be required to make it possible.
Any thoughts, feedback, interest, expressions of horror? Or is this already there and I have missed it?
Steve Barnes
Fascinating material André. I am sure that some people would love the idea of a fractional or decimal iterator (as long as they don't try to use it as an index).

Steve
On Thu, Mar 5, 2020 at 5:55 AM André Roberge <andre.roberge@gmail.com> wrote:
On Thu, Mar 5, 2020 at 5:29 AM Steve Barnes <GadgetSteve@live.co.uk> wrote:
One of the lovely things about Python is that we have the capability to avoid issues such as the vagaries of the floating point type with libraries such as decimal and fractions. This is wonderous to me but comes with an issue that I suspect is limiting its usage. That issue is that you have to litter your code with constructors of those types, e.g.
SNIP
Wouldn’t it be possible to have something along the lines of:
```
from decimal import TreatFloatsAsDecimal
@TreatFloatsAsDecimal
a = 0.1 # These are all now decimals
b = 0.2
c = 0.3
a + b == c # This now works
```
It is possible, and quite straightforward, to do this as an import hook or a custom encoding. I will try to do so and document it later today, and post it on this list.
Quick **first draft**:
```
>>> from ideas.examples import decimal_math
>>> hook = decimal_math.add_hook()
>>> from ideas import console
>>> console.start()
Configuration values for the console:
    source_init from ideas.examples.decimal_math
    transform_source from ideas.examples.decimal_math
--------------------------------------------------
Ideas Console version 0.0.15. [Python version: 3.7.3]
~>> 0.1 + 0.2 == 0.3
True
~>> 0.1 * 10 == 1
True
~>> 0.1
Decimal('0.1')
~>> 0.1 + 0.100
Decimal('0.200')
```

Documentation at https://aroberge.github.io/ideas/docs/html/decimal_math.html

(This took longer to write than the actual code.)

André Roberge
I do know that this goes against the explicit is better than implicit so an alternative might be to have a flexible base modifier, something like:
```
from decimal import DecimalBase as D
a = 0D0.1 # These are all now decimals
b = 0D0.2
c = 0D0.3
a + b == c # This now works
```
Since anything like this would require overriding at least part of the base interpreter I do not think that this is a suitable one for an external PyPi library or at least some hooks in the interpreter would be required to make it possible.
Any thoughts, feedback, interest, expressions of horror? Or is this already there and I have missed it?
Steve Barnes
I think we need to be careful here. This whole idea is predicated on an assumption that Decimals are inherently "better" or more accurate than binary floats. They are not. The only real advantage is that they fit most people's mental model better. But that's actually a bit dangerous, as it makes the issues subtler.

I'm not convinced it's even better for money - if you operate in the lowest unit (cents in the US) then there isn't anything decimal about it, and different accounting systems require different rounding rules, and even different fractional representations, such as the NY stock exchange: https://www.nytimes.com/1971/02/28/archives/stocks-why-fractions-eighths-a-p... Which is actually better suited to binary math :-)

In short, this assumption that making it easier to use Decimal is an obvious good should be questioned. In many cases on this list, it's said that we'd do things differently if we were starting from scratch. I'm not sure that's the case here.

Where the Python Decimal type IS advantageous is with controlling the precision and the like. But in that case, users need to know what they are doing, so using the Decimal constructor explicitly is a good thing.

That being said: maybe a decimal literal would help a bit - 1.1D is currently a syntax error. It seems something like that is the obvious choice. Though I imagine it's been rejected already for a reason.

Final note: in a recent discussion about the JSON module, there was reluctance to have it fully represent Decimal. If it's too much overhead for the JSON module, it's surely too much overhead for the core interpreter!

-CHB

On Thu, Mar 5, 2020 at 8:05 AM André Roberge <andre.roberge@gmail.com> wrote:
On Thu, Mar 5, 2020 at 5:55 AM André Roberge <andre.roberge@gmail.com> wrote:
On Thu, Mar 5, 2020 at 5:29 AM Steve Barnes <GadgetSteve@live.co.uk> wrote:
One of the lovely things about Python is that we have the capability to avoid issues such as the vagaries of the floating point type with libraries such as decimal and fractions. This is wonderous to me but comes with an issue that I suspect is limiting its usage. That issue is that you have to litter your code with constructors of those types, e.g.
SNIP
Wouldn’t it be possible to have something along the lines of:
```
from decimal import TreatFloatsAsDecimal
@TreatFloatsAsDecimal
a = 0.1 # These are all now decimals
b = 0.2
c = 0.3
a + b == c # This now works
```
It is possible, and quite straightforward, to do this as an import hook or a custom encoding. I will try to do so and document it later today, and post it on this list.
Quick **first draft**:
```
>>> from ideas.examples import decimal_math
>>> hook = decimal_math.add_hook()
>>> from ideas import console
>>> console.start()
Configuration values for the console:
    source_init from ideas.examples.decimal_math
    transform_source from ideas.examples.decimal_math
--------------------------------------------------
Ideas Console version 0.0.15. [Python version: 3.7.3]
~>> 0.1 + 0.2 == 0.3
True
~>> 0.1 * 10 == 1
True
~>> 0.1
Decimal('0.1')
~>> 0.1 + 0.100
Decimal('0.200')
```
Documentation at https://aroberge.github.io/ideas/docs/html/decimal_math.html (This took longer to write than the actual code.)
André Roberge
I do know that this goes against the explicit is better than implicit so an alternative might be to have a flexible base modifier, something like:
```
from decimal import DecimalBase as D
a = 0D0.1 # These are all now decimals
b = 0D0.2
c = 0D0.3
a + b == c # This now works
```
Since anything like this would require overriding at least part of the base interpreter I do not think that this is a suitable one for an external PyPi library or at least some hooks in the interpreter would be required to make it possible.
Any thoughts, feedback, interest, expressions of horror? Or is this already there and I have missed it?
Steve Barnes
-- Christopher Barker, PhD Python Language Consulting - Teaching - Scientific Software Development - Desktop GUI and Web Development - wxPython, numpy, scipy, Cython
I've written my share of mathematical and physics software by now, and arbitrary-precision rational numbers are hardly ever a good solution to the problems you might be having. Paul Moore has mentioned how few keystrokes this syntax would actually save; add to this the fact that there are probably very few uses of arbitrary-precision rational numbers in real-world Python code, and this looks like a lot of added complexity for little benefit.
On Thu, Mar 5, 2020 at 8:27 PM Steve Barnes <GadgetSteve@live.co.uk> wrote:
One of the lovely things about Python is that we have the capability to avoid issues such as the vagaries of the floating point type with libraries such as decimal and fractions. This is wonderous to me but comes with an issue that I suspect is limiting its usage. That issue is that you have to litter your code with constructors of those types, e.g.
```
from decimal import * # I know why this is bad but it comes straight from the examples
a = Decimal("0.1") # Needs to be a string to avoid being temporarily a float
b = Decimal("0.2") # Ditto
c = Decimal("0.3") # Ditto
a + b == c # Magic this works!
```
As an aside, this complaint comes up in basically every language, and it's a consequence of the literal 0.1 not actually meaning one tenth. Nobody would bat an eyelid if you show that, say:

```
x = 0.3333333 # one third
y = 0.6666666 # two thirds
x + y != 1.0
```

because they've obviously been rounded (those are NOT complete representations of those fractions). It's only with fifths and tenths that people are surprised, and only because they don't understand that computers work in binary :)

That said, though, there are a number of good reasons for wanting to change the interpretation of literals. But by the time you get to executing the module code, it's too late - the literals get parsed and compiled into the code object, and they're already the values they are going to ultimately be.

There are two approaches that would reasonably plausibly work for this. It's possible to redefine literal parsing using a future directive ("from __future__ import unicode_literals"), but those have to be defined entirely by the language. To do it in custom code, you'd need instead to run some code before your module is parsed - for example, an import hook. This would have to NOT be done by default - you'd have to do it on a per-module basis - because any change like this would break a lot of things. (Even if you keep it just to your module, using Decimal can create bizarre situations - such as where the average of two numbers is not between them.)

Ultimately, I think the best solution in current Python is to just import Decimal as D and then use the shorter name.

Hmm, is there a PEP regarding Decimal literals? I couldn't find one, although there is PEP 240 regarding rational literals. Maybe it's time to write up a rejected PEP explaining exactly what the problems are with Decimal literals. From memory, the problems are (a) it'd effectively require the gigantic decimal module to be imported by default, and (b) contexts don't work with literals. But since Decimal literals are such an obvious solution to most of the above problems, I think it'd be good to have a document saying why it won't help.

ChrisA
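A small illustration (not from the original message) of the bizarre situation Chris alludes to - with a sufficiently small precision, the rounded sum can push the computed average of two numbers outside the interval between them:

```
>>> from decimal import Decimal, localcontext
>>> with localcontext() as ctx:
...     ctx.prec = 3                      # deliberately low precision to show the effect
...     a, b = Decimal("0.516"), Decimal("0.518")
...     avg = (a + b) / 2                 # a + b rounds to 1.03, so avg comes out as 0.515
...
>>> avg
Decimal('0.515')
>>> a <= avg <= b                         # the "average" is smaller than both inputs
False
```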
On 05 March 2020 11:42, Chris Angelico <rosuav@gmail.com> wrote:

On Thu, Mar 5, 2020 at 8:27 PM Steve Barnes <GadgetSteve@live.co.uk> wrote:
One of the lovely things about Python is that we have the capability to avoid issues such as the vagaries of the floating point type with libraries such as decimal and fractions. This is wonderous to me but comes with an issue that I suspect is limiting its usage. That issue is that you have to litter your code with constructors of those types, e.g.
```
from decimal import * # I know why this is bad but it comes straight from the examples
a = Decimal("0.1") # Needs to be a string to avoid being temporarily a float
b = Decimal("0.2") # Ditto
c = Decimal("0.3") # Ditto
a + b == c # Magic this works!
```
SNIP

[Steve Barnes] I think that a part of the problem is that because Decimal silently accepts both float and string literal inputs, things get surprising, e.g.:

```
In [5]: D(0.3) == D('0.3')
Out[5]: False

In [6]: D(0.5) == D('0.5')
Out[6]: True

In [7]: D(0.1) * 10 == 1
Out[7]: False

In [8]: D(0.5) * 2 == 1
Out[8]: True
```

Maybe a way forwards (if we have no widely applicable way of preventing the actual call to the initialiser being interpreted as a float before it is seen) would be to:

1. Deprecate the use of the Decimal initialiser with float inputs so that decimal.Decimal(0.1), etc., issues a warning
2. Possibly have a from __future__ import NoDecimalFloat or similar
3. Eventually outlaw float as an input to the initialiser.

While this would "break" code which happens to work as expected, e.g. decimal.Decimal(0.5), at least we would have consistent behaviour and fail early rather than potentially working but giving incorrect or inconsistent results.

While a rejected PEP would give somewhere to point to for explanations, I am afraid that all too many Python users never read any of the PEPs, or only PEP 8.

Steve
On Thu, Mar 05, 2020 at 12:39:38PM +0000, Steve Barnes wrote:
Hmm, is there a PEP regarding Decimal literals?
No.
I couldn't find one, although there is PEP 240 regarding rational literals. Maybe it's time to write up a rejected PEP explaining exactly what the problems are with Decimal literals.
The proposal hasn't been rejected, it just faded away for lack of somebody to write the PEP and offer to do the work. As I recall, a number of senior core developers were tentatively interested in the idea, at least in principle. Nick Coghlan was one, if memory serves me right. I don't recall any major objections, although that part might be confirmation bias :-)
From memory, the problems are (a) it'd effectively require the gigantic decimal module to be imported by default, and (b) contexts don't work with literals.
Neither of those are problems. They are only problems if you expect the (hypothetical) builtin decimal type to be the exact decimal.Decimal type, but that is overkill for the use-cases for a builtin decimal.

All the context-related functionality would be dropped: builtins.decimal would implement only a fixed width with a single rounding mode. They would be effectively like float, only base 10.

For those who need the extra functionality of decimal.Decimal, the module would still exist. But builtins.decimal would be aimed at the simpler use-case of numerically unsophisticated users who wouldn't know a rounding mode or trap if it bit them, but do know that 0.1 + 0.2 should equal 0.3 :-)

There are a couple of standards for fixed-width decimals; from memory we were considering either 64 bit or 128 bit decimals.
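For reference (an illustration, not part of Steven's message): the fixed-width formats alluded to are presumably IEEE 754 decimal64 (16 significant digits) and decimal128 (34 significant digits). A rough sketch of how today's decimal module can approximate that behaviour by pinning precision and rounding for a block of code:

```
from decimal import Decimal, localcontext, ROUND_HALF_EVEN

# Roughly emulate a fixed-width decimal: a fixed number of significant digits
# (34, the decimal128 figure) and a single rounding mode for this block.
with localcontext() as ctx:
    ctx.prec = 34
    ctx.rounding = ROUND_HALF_EVEN
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```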
I think that a part of the problem is that because Decimal silently accepts both float and string literal inputs things get surprising e.g.:
In [5]: D(0.3) == D('0.3') Out[5]: False
That's only surprising to those who don't read the docs :-)
1. Deprecate the use of the Decimal initialiser with float inputs so that decimal.Decimal(0.1), etc., issues a warning [...] 3. Eventually outlaw float as an input to the initialiser.
Please no. We started there, and relaxed that restriction because it was more annoying than helpful. Going backwards to Python 2.5 or thereabouts is, well, going backwards:

```
>>> decimal.Decimal(0.5)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.5/decimal.py", line 648, in __new__
    "First convert the float to a string")
TypeError: Cannot convert float to Decimal.  First convert the float to a string
```

Numerically unsophisticated users are not the only users of Decimal, and frankly, the unsophisticated users aren't going to be any less surprised by Decimal(0.1) raising an exception than they are surprised by any of the other floating-point oddities that affect both decimal and float.
While this would "break" code which happen to work as expected, e.g. decimal.Decimal(0.5) but at least we would have consistent behaviour and fail early rather than potentially working but giving incorrect or inconsistent results.
It's not giving incorrect results. It is giving correct results. The problem is not Decimal, but that people think that 0.1 means one tenth when it actually means 3602879701896397 ÷ 36028797018963968 :-) Viewing Decimal(0.1) is an excellent way to discover what value 0.1 actually has, as opposed to the value you think it has. -- Steven
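A quick REPL check (not part of the original message) of both halves of that claim:

```
>>> from decimal import Decimal
>>> (0.1).as_integer_ratio()      # the exact value the float literal 0.1 actually holds
(3602879701896397, 36028797018963968)
>>> Decimal(0.1)                  # the same value, shown exactly in decimal notation
Decimal('0.1000000000000000055511151231257827021181583404541015625')
```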
On Thu, Mar 5, 2020 at 7:02 AM Steven D'Aprano <steve@pearwood.info> wrote:
On Thu, Mar 05, 2020 at 12:39:38PM +0000, Steve Barnes wrote:
Hmm, is there a PEP regarding Decimal literals?
No.
I couldn't find one, although there is PEP 240 regarding rational literals. Maybe it's time to write up a rejected PEP explaining exactly what the problems are with Decimal literals.
The proposal hasn't been rejected, it just faded away for lack of somebody to write the PEP and offer to do the work.
As I recall, a number of senior core developers were tentatively interested in the idea, at least in principle. Nick Coghlan was one, if memory serves me right. I don't recall any major objections, although that part might be confirmation bias :-)
My recollection on this idea (as I think I brought it up once as well) is: how do you handle decimal contexts? And having a global setting in the decimal module for the default case influence syntax would be unique/weird.

-Brett
From memory, the problems are (a) it'd effectively require the gigantic decimal module to be imported by default, and (b) contexts don't work with literals.
Neither of those are problems. They are only problems if you expect the (hypothetical) builtin decimal type to be the exact decimal.Decimal type, but that is overkill for the use-cases for a builtin decimals.
All the context related functionality would be dropped: builtin.decimal would implement only a fixed width with a single rounding mode. They would be effectively like float, only base 10.
For those who need the extra functionality of decimal.Decimal, the module would still exist. But builtins.decimal would be aimed at the simpler use-case of numerically unsophisticated users who wouldn't know a rounding mode or trap if it bit them but do know that 0.1 + 0.2 should equal 0.3 :-)
There are a couple of standards for fixed-width decimals, by memory we were considering either 64 bit or 128 bit decimals.
I think that a part of the problem is that because Decimal silently accepts both float and string literal inputs things get surprising e.g.:
In [5]: D(0.3) == D('0.3') Out[5]: False
That's only surprising to those who don't read the docs :-)
1. Deprecate the use of the Decimal initialiser with float inputs so that decimal.Decimal(0.1), etc., issues a warning [...] 3. Eventually outlaw float as an input to the initialiser.
Please no. We started there, and relaxed that restriction because it was more annoying than helpful. Going backwards to Python 2.5 or thereabouts is, well, going backwards:
>>> decimal.Decimal(0.5) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.5/decimal.py", line 648, in __new__ "First convert the float to a string") TypeError: Cannot convert float to Decimal. First convert the float to a string
Numerically unsophisticated users are not the only users of Decimal, and frankly, the unsophisticated users aren't going to be any less surprised by Decimal(0.1) raising an exception than they are surprised by any of the other floating-point oddities that affect both decimal and float.
While this would "break" code which happen to work as expected, e.g. decimal.Decimal(0.5) but at least we would have consistent behaviour and fail early rather than potentially working but giving incorrect or inconsistent results.
It's not giving incorrect results. It is giving correct results. The problem is not Decimal, but that people think that 0.1 means one tenth when it actually means 3602879701896397 ÷ 36028797018963968 :-)
Viewing Decimal(0.1) is an excellent way to discover what value 0.1 actually has, as opposed to the value you think it has.
-- Steven
On Thu, Mar 5, 2020 at 4:27 AM Steve Barnes <GadgetSteve@live.co.uk> wrote:
One of the lovely things about Python is that we have the capability to avoid issues such as the vagaries of the floating point type with libraries such as decimal and fractions. This is wonderous to me but comes with an issue that I suspect is limiting its usage. That issue is that you have to litter your code with constructors of those types, e.g.
```
a = 0.1
b = 0.2
c = 0.3
a + b == c # This doesn’t work but it is easy to forget why
```
vs:
```
from decimal import * # I know why this is bad but it comes straight from the examples
a = Decimal("0.1") # Needs to be a string to avoid being temporarily a float
b = Decimal("0.2") # Ditto
c = Decimal("0.3") # Ditto
a + b == c # Magic this works!
```
While this works it is not very friendly to users and there will often be cases where people forget to quote the values, etc. and there are similar issues with using the fractions library and I am sure with others. I suspect that this is why the decimals library, and some others, is not used as widely as it possibly should be.
Wouldn’t it be possible to have something along the lines of:
```
from decimal import TreatFloatsAsDecimal
@TreatFloatsAsDecimal
a = 0.1 # These are all now decimals
b = 0.2
c = 0.3
a + b == c # This now works
```
If you were going to do this you would probably need something like a context manager to control the scope, such as:

```
with TreatFloatsAsDecimal:
    a = 0.1 # decimal
    b = 0.2 # decimal
    c = 0.3 # decimal

d = 0.3 # float
```

I have been interested in something like this in the context of numpy arrays, but I am not sure that would even be possible under how the Python language works, and if it was, I figured it was too complicated to be worthwhile.
On Thu, Mar 5, 2020 at 11:09 AM Todd <toddrjen@gmail.com> wrote:
On Thu, Mar 5, 2020 at 4:27 AM Steve Barnes <GadgetSteve@live.co.uk> wrote:
SNIP
Wouldn’t it be possible to have something along the lines of:
```
from decimal import TreatFloatsAsDecimal
@TreatFloatsAsDecimal
a = 0.1 # These are all now decimals
b = 0.2
c = 0.3
a + b == c # This now works
```
If you were going to do this you would probably need something like a context manager to control the scope, such as:
with TreatFloatsAsDecimal: a = 0.1 # decimal b = 0.2 # decimal c = 0.3 # decimal
d = 0.3 # float
I have been interested in something like this in the context of numpy arrays, but I am not sure even that would even be possible under how the python language works, and if it was I figured it was too complicated to be worthwhile.
I like this example; I will refer to it below.

I really think that anyone proposing a new syntax construct would really benefit from being able to "play" with it, being able to write and run such currently invalid Python syntax. As I alluded to in previous emails (and as quite a few others did for years on this list), once you have a working example of an import hook (or custom encoding), in some cases it can actually be quite straightforward to write the code needed to transform a proposed syntax into valid Python code.

So, in addition to my previous example, I will give two more, copied verbatim from my terminal. Note: these two examples have not been included in the ideas repository (nor in the PyPI version). Also, I show code in a console, but it could be in a script run via an import statement.

Example 1:

```
>>> from ideas.examples import decimal_math_d
>>> hook = decimal_math_d.add_hook()
>>> from ideas import console
>>> console.start()
Configuration values for the console:
    source_init from ideas.examples.decimal_math_d
    transform_source from ideas.examples.decimal_math_d
--------------------------------------------------
Ideas Console version 0.0.15. [Python version: 3.7.3]
~>> a = 0.1
~>> a
0.1
~>> b = 0.1 D
~>> b
Decimal('0.1')
~>> a + b
Traceback (most recent call last):
  File "IdeasConsole", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'float' and 'decimal.Decimal'
~>> 1 + b
Decimal('1.1')
```

The actual code to do the required transformation is very short and, I think, quite readable:

```
def transform_source(source, **kwargs):
    """Simple transformation: replaces any explicit float
    followed by ``D`` by a Decimal.
    """
    tokens = token_utils.tokenize(source)
    for first, second in zip(tokens, tokens[1:]):
        if first.is_number() and "." in first.string and second == "D":
            first.string = f"Decimal('{first.string}')"
            second.string = ""
    return token_utils.untokenize(tokens)
```

The second example (a fake context) was a bit trickier to write, but not that difficult:

```
>>> from ideas.examples import decimal_math_with
>>> hook = decimal_math_with.add_hook()
>>> from ideas import console
>>> console.start()
Configuration values for the console:
    source_init from ideas.examples.decimal_math_with
    transform_source from ideas.examples.decimal_math_with
--------------------------------------------------
Ideas Console version 0.0.16. [Python version: 3.7.3]
~>> a = 1.0
~>> with float_as_Decimal:
...     b = 0.1
...     c = 0.2
...     d = 0.3
...
~>> e = 0.4
~>> b + c == d
True
~>> b
Decimal('0.1')
~>> a, e
(1.0, 0.4)
~>> b, c, d
(Decimal('0.1'), Decimal('0.2'), Decimal('0.3'))
```

Caveat: doing source transformations prior to execution like I do is definitely not as robust as having code parsed by a well-designed grammar-based parser.

Cheers,
André
On Mar 5, 2020, at 01:27, Steve Barnes <GadgetSteve@live.co.uk> wrote:

One of the lovely things about Python is that we have the capability to avoid issues such as the vagaries of the floating point type with libraries such as decimal and fractions. This is wonderous to me but comes with an issue that I suspect is limiting its usage. That issue is that you have to litter your code with constructors of those types, e.g.
```
a = 0.1
b = 0.2
c = 0.3
a + b == c # This doesn’t work but it is easy to forget why
```
vs:
```
from decimal import * # I know why this is bad but it comes straight from the examples
a = Decimal("0.1") # Needs to be a string to avoid being temporarily a float
b = Decimal("0.2") # Ditto
c = Decimal("0.3") # Ditto
a + b == c # Magic this works!
```
While this works it is not very friendly to users and there will often be cases where people forget to quote the values, etc. and there are similar issues with using the fractions library and I am sure with others. I suspect that this is why the decimals library, and some others, is not used as widely as it possibly should be.
Wouldn’t it be possible to have something along the lines of:
```
from decimal import TreatFloatsAsDecimal
@TreatFloatsAsDecimal
a = 0.1 # These are all now decimals
b = 0.2
c = 0.3
a + b == c # This now works
```
I don’t think this could work, because assignment doesn’t have anything magic you can hook in an obvious way. Also, you can’t turn all float _values_ into decimals without losing all the benefits of Decimal. What you want is to turn all float _literals_ into decimal literals.

Which is definitely doable. See https://github.com/abarnert/floatliteralhack for some inspiration, and maybe https://github.com/abarnert/userliteralhack. But of course if this is built into the compiler rather than being an import hook, it wouldn’t have to be hacky like those examples.

What you want is some kind of syntax that marks a context - presumably either a block or a scope, so a magic with statement or a decorator might be sufficient, or at least sufficient for a proof of concept:

```
with TreatFloatsAsDecimals:
    assert 1.0 + 2.0 == 3.0
```

This would be a real context manager, but also be magical, in the same way that __future__ is a real module but also magical.

However, I think adding user-defined suffixes, or maybe just one new hardcoded suffix D, is a better way to handle this:

```
assert 1.0D + 2.0D == 3.0D
assert 1.0 + 2.0 != 3.0
```
I do know that this goes against the explicit is better than implicit so an alternative might be to have a flexible base modifier, something like:
```
from decimal import DecimalBase as D
a = 0D0.1 # These are all now decimals
b = 0D0.2
c = 0D0.3
a + b == c # This now works
```

That’s an interesting variant on suffixes or prefixes, because it leverages an existing prefix that at least some number literals already handle. I suspect it has the same implementation issues - the real questions are (1) how to get that D into some scope that the compiler can look up at compile time or emit code to defer to at runtime, and (2) which of those two it should do. Look over any of the past threads on user-defined affixes for lots of argument.
Since anything like this would require overriding at least part of the base interpreter I do not think that this is a suitable one for an external PyPi library or at least some hooks in the interpreter would be required to make it possible.
Python already has an import hook mechanism, and mechanisms to run each stage of the compilation process manually (except tokens to AST, but it has tokenize and untokenize that can be used to fake that), and a way to easily create a pre-hooked REPL, which together are all you need to write this as an external library. In fact, this seems almost tailor made as a perfect first use case for André’s ideas import-hook-helper library: https://github.com/aroberge/ideas
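To make the compile-time flavour of this concrete, here is a minimal sketch (an illustration only, not code from the thread) that rewrites float constants into Decimal calls at the AST level. One caveat: it uses repr() of the already-parsed float, which recovers short literals like 0.1 cleanly but does not see the exact characters typed in the source the way a tokenizer-level hook such as the ideas examples above does:

```
import ast
from decimal import Decimal

class FloatsToDecimal(ast.NodeTransformer):
    """Rewrite every float constant into a Decimal('<repr of the float>') call."""
    def visit_Constant(self, node):
        if isinstance(node.value, float):
            call = ast.Call(
                func=ast.Name(id="Decimal", ctx=ast.Load()),
                args=[ast.Constant(value=repr(node.value))],
                keywords=[],
            )
            return ast.copy_location(call, node)
        return node

source = "result = 0.1 + 0.2 == 0.3"
tree = ast.fix_missing_locations(FloatsToDecimal().visit(ast.parse(source)))
namespace = {"Decimal": Decimal}
exec(compile(tree, "<example>", "exec"), namespace)
print(namespace["result"])  # True
```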
participants (11)
- Andrew Barnert
- André Roberge
- Bernardo Sulzbach
- Brett Cannon
- Chris Angelico
- Christopher Barker
- Greg Ewing
- Paul Moore
- Steve Barnes
- Steven D'Aprano
- Todd