Greetings, Python3.3 Decimal Library v0.3 is Released here:
https://code.google.com/p/pythondecimallibrary/
pdeclib.py is the decimal library, pilib.py is the PI library.
pdeclib.py provides scientific and transcendental functions for the C-accelerated decimal module written by Stefan Krah. The library is open source, GPLv3, and comprises two .py files.
My idea for Python is really two things: 1) make decimal floating point the default floating-point type in Python 4.x, and 2) make these functions (pdeclib.py) or their equivalent available in Python 4.x by default.
Thank you for your consideration.
Hello Mark,
Le 03/03/2014 16:41, Mark H. Harris a écrit :
Greetings, Python3.3 Decimal Library v0.3 is Released here:
https://code.google.com/p/pythondecimallibrary/
pdeclib.py is the decimal library, pilib.py is the PI library.
pdeclib.py provides scientific and transcendental functions for the C-accelerated decimal module written by Stefan Krah. The library is open source, GPLv3, and comprises two .py files.
My idea for Python is really two things: 1) make decimal floating point the default floating-point type in Python 4.x, and 2) make these functions (pdeclib.py) or their equivalent available in Python 4.x by default.
If you want to contribute those functions to Python, the first required step would be to license them under terms compatible with the contributor agreement: http://www.python.org/psf/contrib/contrib-form/
(i.e. under the Academic Free License v. 2.1 or the Apache License, Version 2.0)
Regards
Antoine.
On Monday, March 3, 2014 9:58:02 AM UTC-6, Antoine Pitrou wrote:
Hello Mark,
Le 03/03/2014 16:41, Mark H. Harris a écrit :
Greetings, Python3.3 Decimal Library v0.3 is Released here:
https://code.google.com/p/pythondecimallibrary/
pdeclib.py is the decimal library, pilib.py is the PI library.
pdeclib.py provides scientific and transcendental functions for the C-accelerated decimal module written by Stefan Krah. The library is open source, GPLv3, and comprises two .py files.
My idea for Python is really two things: 1) make decimal floating point the default floating-point type in Python 4.x, and 2) make these functions (pdeclib.py) or their equivalent available in Python 4.x by default.
If you want to contribute those functions to Python, the first required step would be to license them under terms compatible with the contributor agreement: http://www.python.org/psf/contrib/contrib-form/
(i.e. under the Academic Free License v. 2.1 or the Apache License, Version 2.0)
Antoine, thank you for the response. I have e-signed the Academic Free License, and it is filed.
Mark H Harris marcus
On 3 March 2014 15:41, Mark H. Harris harrismh777@gmail.com wrote:
Greetings, Python3.3 Decimal Library v0.3 is Released here:
https://code.google.com/p/pythondecimallibrary/
pdeclib.py is the decimal library, pilib.py is the PI library.
pdeclib.py provides scientific and transcendental functions for the C-accelerated decimal module written by Stefan Krah. The library is open source, GPLv3, and comprises two .py files.
My idea for Python is really two things: 1) make decimal floating point the default floating-point type in Python 4.x,
This is an interesting suggestion. It's hard to judge how much code would be broken by such an explicit change. A preliminary, less controversial step would be just to introduce decimal literals, e.g. 1.23d.
and 2) make these functions (pdeclib.py) or their equivalent available in Python 4.x by default.
If there was a desire to add these then there's no reason why they would have to wait until the (hypothetical) release of Python 4.x. However I doubt that they would be accepted without having first spent some time maturing on PyPI and certainly not without unit tests.
Oscar
On Monday, March 3, 2014 10:52:01 AM UTC-6, Oscar Benjamin wrote:
This is an interesting suggestion. It's hard to judge how much code would be broken by such an explicit change. A preliminary, less controversial step would be just to introduce decimal literals, e.g. 1.23d.
If there was a desire to add these then there's no reason why they would have to wait until the (hypothetical) release of Python 4.x. However I doubt that they would be accepted without having first spent some time maturing on PyPI and certainly not without unit tests.
Oscar, sounds right. I would like to see it just the other way round. We have a very speedy decimal module, it's 2014, and we just don't need any more of this:

>>> from decimal import *
>>> Decimal(.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
I'm from back in that day. You know, about 1982. We've embedded floats and doubles in everything from the processor to Python and beyond. Memory is cheap, technology has advanced (heck, Python is one of the coolest, most sophisticated languages on the planet); it's time to use decimal floating point. I'm thinking we ought to have decimal by default, and the binary literal 1.23b as the option.
As far as how much code breaks; who knows, but it can't be any worse than moving from 2.7.x to 3.3.4. I mean, everyone thought that was going to break the world, and people have adapted just fine. Some folks are staying put on 2.7.6 (and that's fine) and some folks like me are out there on the bloody edge downloading 3.3.5 !
Thanks for the discussion Oscar. marcus
Mark H. Harris, 03.03.2014 18:10:
We have a very speedy decimal module, it's 2014, and we just don't need any more of this:

>>> from decimal import *
>>> Decimal(.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
I assume you are aware that the correct way to do this is

>>> Decimal('0.1')
Decimal('0.1')
i.e., if you want exact precision, use exact literals.
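For anyone following along, the contrast Stefan describes is easy to show in a couple of lines (variable names here are just illustrative):

```python
from decimal import Decimal

# Constructing from a binary float carries the float's rounding error along:
from_float = Decimal(0.1)

# Constructing from the literal text gives the exact decimal value:
from_text = Decimal('0.1')

print(from_float)  # 0.1000000000000000055511151231257827021181583404541015625
print(from_text)   # 0.1
```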
That being said, did you read through the previous discussions where people suggested to add decimal literals to Python?
Stefan
On Mon, Mar 03, 2014 at 09:41:40AM -0600, Mark H. Harris wrote:
My idea for Python is really two things: 1) make decimal floating point the default floating-point type in Python 4.x, and
I think it is premature to be talking about what goes into Python 4.x, which is why I refer to it as "Python 4000". There's no concrete plans for a Python 4 yet, or even whether there will be a Python 4, what the last Python 3.x version will be, or what sort of changes will be considered. But I would expect that any such Python 4 will probably be at least four years away, although given the extended life of 2.7 possibly more like eight.
(Given the stress of the 2->3 migration, I think *nobody* is exactly in a hurry for yet another backwards incompatible version. Perhaps we should be delaying such things until Python 5000.)
2) make these functions (pdeclib.py) or their equivalent available in Python 4.x by default.
If it's worth having a decimal maths library, it's probably worth having it in 3.5 or 3.6.
On Mon, Mar 3, 2014 at 11:54 AM, Steven D'Aprano steve@pearwood.info wrote:
If it's worth having a decimal maths library, it's probably worth having it in 3.5 or 3.6.
Agreed.
I'm -1 on a decimal-specific math library, though. What I would rather see is a type-agnostic math library, that does what (IIRC) the new statistics module does: take in numbers of any type, coerce only as is strictly necessary, calculate, and return an answer of the same type as the input with the goal of correctness over performance. If all input is float (or complex), calculation could be delegated to the existing math (or cmath) libraries, and performance wouldn't be too much worse than using those libraries directly.
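A rough sketch of the kind of type-based dispatch Zachary describes might look like the following (generic_sqrt is a hypothetical name, not a proposed stdlib API):

```python
import cmath
import math
from decimal import Decimal

def generic_sqrt(x):
    """Return sqrt(x), staying in (roughly) the type of x.

    A hypothetical illustration of type-agnostic dispatch: Decimal input
    keeps Decimal precision, complex goes to cmath, and everything else
    (int, float, Fraction) falls back to the float implementation.
    """
    if isinstance(x, Decimal):
        return x.sqrt()        # honours the current decimal context precision
    if isinstance(x, complex):
        return cmath.sqrt(x)
    return math.sqrt(x)
```

Delegating float and complex input straight to math/cmath is exactly the performance escape hatch mentioned above: those paths cost little more than calling the libraries directly.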
On 3 March 2014 18:23, Zachary Ware zachary.ware+pyideas@gmail.com wrote:
I'm -1 on a decimal-specific math library, though. What I would rather see is a type-agnostic math library, that does what (IIRC) the new statistics module does: take in numbers of any type, coerce only as is strictly necessary, calculate, and return an answer of the same type as the input with the goal of correctness over performance.
Given that incorrect results aren't acceptable in any situation, fair enough, but what I'd like to see would be library functions that provided good performance for decimal first. If they can do so while remaining type-agnostic, then fine, but I'd favour decimal performance over the ability to calculate the sine of a rational... Ultimately, I'd like to feel that I pay a one-off cost for working with decimals (the cost of a software implementation rather than a hardware one) but I *don't* have to pay further because the algorithms used for decimals are suboptimal compared to the floating point ones. I'd say the same about rationals, but the set of functions involved is different (and basically wouldn't involve most of what's in the normal math library)
Having said this, I can't actually think of a real-life domain where math-library functions (by which I assume we basically mean transcendental functions?) would be used and yet there would be a genuine need for decimal arithmetic rather than floating point. So although I'm +1 for a decimal math library in theory, I'm unsure of its practical value.
Paul
Taking a look at the documentation for cdecimal itself (now part of Python 3.3) at http://www.bytereef.org/mpdecimal/benchmarks.html, it looks like for basic add/mul operations that don't exceed the precision of floating point, FP is more than twice as fast as optimized decimals. Of course, where the precision needed is more than FP handles at all, it's simply a choice between calculating a quantity and not doing so.
This suggests to me a rather strong argument against making decimals the *default* numeric datatype. However, it *does* still remain overly cumbersome to create decimals, and a literal notation for them would be very welcomed.
It is far from clear to me that decimals are generically the *right* answer to rounding issues (i.e. even if we don't care about the benchmark question). Yes, it's easier for humans who use base 10 to understand numeric results when using them. But the hardware on CPUs is oriented around binary floating point representations, and pretty much every other programming language uses BFP. It's a nice property to have 'a+b' in Python produce the same answer as 'a+b' in C, Ruby, Perl, Mathematica, Haskell, Java, etc.--I'm not saying that's the only nice property one might want, but it is one of them since data is exchanged between tools and libraries written in different languages (or even, e.g. Python often uses libraries written in C).
I do recognize that in Python 4000, we *could* have default literals be decimal, and require some other literal notation for binary floating point, and that *would* be available for calculations meant to be compatible with outside tools. However, I haven't seen a case presented for why decimals are generically better as a default.
Of possible note, I happen to work in a research lab where we worry a lot about bitwise identity of calculations, not merely about numeric stability. I know that is a somewhat unusual requirement, even within scientific computing. But it is one that exists, and it makes me think of (bitwise) compatibility with calculations in other programming languages.
On Mon, Mar 3, 2014 at 12:12 PM, Paul Moore p.f.moore@gmail.com wrote:
On 3 March 2014 18:23, Zachary Ware zachary.ware+pyideas@gmail.com wrote:
I'm -1 on a decimal-specific math library, though. What I would rather see is a type-agnostic math library, that does what (IIRC) the new statistics module does: take in numbers of any type, coerce only as is strictly necessary, calculate, and return an answer of the same type as the input with the goal of correctness over performance.
Given that incorrect results aren't acceptable in any situation, fair enough, but what I'd like to see would be library functions that provided good performance for decimal first. If they can do so while remaining type-agnostic, then fine, but I'd favour decimal performance over the ability to calculate the sine of a rational... Ultimately, I'd like to feel that I pay a one-off cost for working with decimals (the cost of a software implementation rather than a hardware one) but I *don't* have to pay further because the algorithms used for decimals are suboptimal compared to the floating point ones. I'd say the same about rationals, but the set of functions involved is different (and basically wouldn't involve most of what's in the normal math library)
Having said this, I can't actually think of a real-life domain where math-library functions (by which I assume we basically mean transcendental functions?) would be used and yet there would be a genuine need for decimal arithmetic rather than floating point. So although I'm +1 for a decimal math library in theory, I'm unsure of its practical value.
Paul
On Monday, March 3, 2014 2:44:12 PM UTC-6, David Mertz wrote:
However, I haven't seen a case presented for why decimals are generically
better as a default.
hi David, here is one, right out of the python3.3 docs,
Decimal "is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school." – excerpt from the decimal arithmetic specification.
Business apps require precision (banking, sales, marketing, finance, & on and on).
One big issue that is going to confront everyone sooner rather than later is cryptography. Fast bignum support, fast factoring, and fast transcendentals are going to become more important as firms and individuals strike out on their own with crypto; not too far-fetched, really. We have got to come up with this on our own, because we cannot trust others to get it right for us. NSA and GCHQ have made that clear. PGP was one thing, but we have got to invent something better.
But, really the real reason is the first paragraph. marcus
On Mon, Mar 3, 2014 at 2:32 PM, Mark H. Harris harrismh777@gmail.com wrote:
Business apps require precision (banking, sales, marketing, finance, & on and on).
I definitely agree that decimal numbers are much better for financial data than binary floating point. That's certainly a very legitimate domain. On the other hand, there are a great many other domains that are very different from this, i.e. scientific areas. For these, decimal is no better, and in many cases worse. At the very least, throwing in a 2x+ slowdown of calculations is notably worse for scientific computing.
In other words, I'm glad we have decimals in Python now. I'd be more glad if they were expressed more easily (e.g. as literals), but for more domains than not, for backward and cross-language compatibility, and for speed, binary floating point remains better.
One big issue that is going to confront everyone sooner rather than later is cryptography. Fast bignum support, fast factoring, and fast transcendentals are going to become more important as firms and individuals strike out on their own with crypto; not too far-fetched, really.
Sounds far-fetched to me to imagine that amateur cryptography will ever be of value over well-audited, well-studied, cryptanalyzed techniques.
But bracketing that, exceedingly few cryptographic techniques are likely to use either binary or decimal floating point operations. This is strictly the domain of integer math, so mentioning it seems like random hand waving unrelated to the topic of default or convenient representation of fractional numeric data.
Yours, David...
Mark H. Harris wrote:
One big issue that is going to confront everyone sooner rather than later is cryptography. Fast bignum support, fast factoring, and fast transcendentals are going to become more important
I don't see any reason that these have to be done in decimal, though. All the bignum arithmetic used in cryptography is integer, which is exact in any base. Also, the user never sees the resulting numbers as numbers. So the primary requirement is speed, which argues for binary rather than decimal, at least on current hardware.
On Tue, 04 Mar 2014 11:54:08 +1300 Greg Ewing greg.ewing@canterbury.ac.nz wrote:
Mark H. Harris wrote:
One big issue that is going to confront everyone sooner rather than later is cryptography. Fast bignum support, fast factoring, and fast transcendentals are going to become more important
I don't see any reason that these have to be done in decimal, though. All the bignum arithmetic used in cryptography is integer, which is exact in any base. Also, the user never sees the resulting numbers as numbers. So the primary requirement is speed, which argues for binary rather than decimal, at least on current hardware.
For the record, int doesn't have a sqrt() method while Decimal has, so if you wanna take the exact square root of a large integer, you'd better convert it to a Decimal.
Regards
Antoine.
On Mon, Mar 3, 2014 at 3:01 PM, Antoine Pitrou solipsis@pitrou.net wrote:
For the record, int doesn't have a sqrt() method while Decimal has, so if you wanna take the exact square root of a large integer, you'd better convert it to a Decimal.
Well, actually, if you want to take the square root of a large integer, most times you'll need an irrational number as a value. Unfortunately, neither floats, decimals, nor fractions are able to do that (nor any finite representation that is numeric; you can only use symbolic ones).
On the other hand, now that you mention it, a floor_sqrt() function that operated quickly on ints would be nice to have. But that's a different thread.
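Such a floor_sqrt() is straightforward to sketch with pure-integer Newton iteration (as of 3.3 the stdlib had nothing like it; the name and code here are illustrative):

```python
def floor_sqrt(n):
    """Largest integer r with r*r <= n, via integer Newton iteration."""
    if n < 0:
        raise ValueError("floor_sqrt of negative number")
    if n == 0:
        return 0
    # Initial guess: a power of two guaranteed to be >= sqrt(n).
    x = 1 << ((n.bit_length() + 1) // 2)
    while True:
        y = (x + n // x) // 2   # Newton step, all in exact integer arithmetic
        if y >= x:
            return x
        x = y
```

Because it never touches floats, it stays exact for arbitrarily large integers, unlike n ** 0.5.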
On Mon, 3 Mar 2014 15:07:57 -0800 David Mertz mertz@gnosis.cx wrote:
On Mon, Mar 3, 2014 at 3:01 PM, Antoine Pitrou solipsis@pitrou.net wrote:
For the record, int doesn't have a sqrt() method while Decimal has, so if you wanna take the exact square root of a large integer, you'd better convert it to a Decimal.
Well, actually, if you want to take the square root of a large integer, most times you'll need an irrational number as a value.
Well, unless you know by construction that your integer is a perfect square.
Regards
Antoine.
On Mon, Mar 3, 2014 at 3:10 PM, Antoine Pitrou solipsis@pitrou.net wrote:
On Mon, 3 Mar 2014 15:07:57 -0800 David Mertz mertz@gnosis.cx wrote:
On Mon, Mar 3, 2014 at 3:01 PM, Antoine Pitrou solipsis@pitrou.net
wrote:
For the record, int doesn't have a sqrt() method while Decimal has, so if you wanna take the exact square root of a large integer, you'd better convert it to a Decimal.
Well, actually, if you want to take the square root of a large integer, most times you'll need an irrational number as a value.
Well, unless you know by construction that your integer is a perfect square.
Umm... if you construct your integer as a perfect square, wouldn't it be easier just to store the number it is a perfect square of than to work on optimizing the integer sqrt() function?
It does make me wonder--although this is definitely not actually python-ideas--whether there is any technique to determine if a number is a perfect square that takes less work than finding its integral root. Maybe so, I don't know very much number theory.
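There is at least a cheap pre-filter: squares can only land on certain residues modulo a small base, so most non-squares are rejected without computing any root. A sketch (the float seed limits this version to inputs below roughly 2**52):

```python
def is_perfect_square(n):
    """Perfect-square test with a cheap mod-64 pre-filter.

    Squares mod 64 can only take 12 of the 64 residues, so about 81%
    of random non-squares are rejected here without computing a root.
    The float seed below limits this sketch to n < ~2**52; larger
    inputs would need an exact integer square root instead.
    """
    if n < 0:
        return False
    if n % 64 not in {0, 1, 4, 9, 16, 17, 25, 33, 36, 41, 49, 57}:
        return False
    r = int(n ** 0.5)                                   # float estimate...
    return any(c * c == n for c in (r - 1, r, r + 1))   # ...verified exactly
```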
On Mar 4, 2014, at 1:18, Greg Ewing greg.ewing@canterbury.ac.nz wrote:
Antoine Pitrou wrote:
Well, unless you know by construction that your integer is a perfect square.
If you've constructed it that way, you probably already know its square root.
Usually, but not always.
For example, there are algorithms to generate squares of Pythagorean triples without generating the triples themselves. Of course there are simpler and more efficient algorithms to just generate the triples (can't get much simpler than Euclid's formula...), but there might be some reason you're using one of the square algorithms. And then, to test it, you'd need to verify that a**2 + b**2 == c**2 and that all three of them are perfect squares.
Andrew Barnert wrote:
On Mar 4, 2014, at 1:18, Greg Ewing greg.ewing@canterbury.ac.nz wrote:
If you've constructed it that way, you probably already know its square root.
Usually, but not always.
In any case, the fact remains that any algorithm for calculating the square root of a perfect square will work just as well in any base. There's nothing special about decimal in that regard.
On Wed, 05 Mar 2014 11:17:17 +1300 Greg Ewing greg.ewing@canterbury.ac.nz wrote:
Andrew Barnert wrote:
On Mar 4, 2014, at 1:18, Greg Ewing greg.ewing@canterbury.ac.nz wrote:
If you've constructed it that way, you probably already know its square root.
Usually, but not always.
In any case, the fact remains that any algorithm for calculating the square root of a perfect square will work just as well in any base. There's nothing special about decimal in that regard.
The point was not decimal vs. binary, but arbitrarily high precision vs. fixed low precision. (i.e. if you have a 1000-digit integer, just calling N ** 0.5 will give you a very low-precision result compared to the integer's precision).
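Concretely: with the context precision raised, Decimal keeps digits that N ** 0.5 must discard (the precision of 50 is chosen only for illustration):

```python
from decimal import Decimal, getcontext

n = 10 ** 40 + 1                 # a 41-digit integer

# The float route collapses to ~17 significant digits; the +1 vanishes:
print(n ** 0.5)                  # 1e+20

# Decimal keeps as many significant digits as the context allows,
# so the contribution of the +1 (about 5E-21) survives:
getcontext().prec = 50
root = Decimal(n).sqrt()
print(root)
```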
Regards
Antoine.
On Tue, Mar 4, 2014 at 7:44 AM, David Mertz mertz@gnosis.cx wrote:
Taking a look at the documentation for cdecimal itself (now part of Python 3.3) at http://www.bytereef.org/mpdecimal/benchmarks.html, it looks like for basic add/mul operations that don't exceed the precision of floating point, FP is more than twice as fast as optimized decimals. Of course, where the precision needed is more than FP handles at all, it's simply a choice between calculating a quantity and not doing so.
This suggests to me a rather strong argument against making decimals the *default* numeric datatype. However, it *does* still remain overly cumbersome to create decimals, and a literal notation for them would be very welcomed.
You could probably make the same performance argument against making Unicode the default string datatype. But a stronger argument is that the default string should be the one that does the right thing with text. As of Python 3, that's the case. And the default integer type handles arbitrary sized integers (although Py2 went most of the way there by having automatic promotion). It's reasonable to suggest that the default non-integer numeric type should also simply do the right thing.
It's a trade-off, though, and for most people, float is sufficient. Is it worth the cost of going decimal everywhere? I want to first see a decimal literal notation (e.g. 1.0 == float("1.0") and 1.0d == Decimal("1.0"), as has been suggested a few times), and then have a taggable float marker (1.0f == float("1.0")); then there can be consideration of a __future__ directive to change the default, which would let people try it out and see how much performance suffers. Maybe it'd be worth it for the accuracy. Maybe we'd lose less time processing than we save answering the "why does this do weird things sometimes" questions.
ChrisA
On Tue, Mar 04, 2014 at 10:42:57AM +1100, Chris Angelico wrote:
You could probably make the same performance argument against making Unicode the default string datatype.
I don't think so -- for ASCII strings the performance cost of Unicode is significantly less than the performance hit for Decimal:
[steve@ando ~]$ python3.3 -m timeit -s "s = 'abcdef'*1000" "s.upper()"
100000 loops, best of 3: 8.76 usec per loop
[steve@ando ~]$ python3.3 -m timeit -s "s = b'abcdef'*1000" "s.upper()"
100000 loops, best of 3: 7.05 usec per loop

[steve@ando ~]$ python3.3 -m timeit -s "x = 123.4567" "x**6"
1000000 loops, best of 3: 0.344 usec per loop
[steve@ando ~]$ python3.3 -m timeit -s "from decimal import Decimal" \
                -s "x = Decimal('123.4567')" "x**6"
1000000 loops, best of 3: 1.41 usec per loop
That's a factor of 1.2 times slower for Unicode versus 4.1 for Decimal. I think that's *fast enough* for all but the most heavy numeric needs, but it's not something we can ignore.
But a stronger argument is that the default string should be the one that does the right thing with text. As of Python 3, that's the case. And the default integer type handles arbitrary sized integers (although Py2 went most of the way there by having automatic promotion). It's reasonable to suggest that the default non-integer numeric type should also simply do the right thing.
Define "the right thing" for numbers.
It's a trade-off, though, and for most people, float is sufficient.
That's a tricky one. For people doing quote-unquote "serious" numeric work, they'll mostly want to stick to binary floats, even if that means missing out on all the extra IEEE-754 goodies that the decimal module has but floats don't. The momentum of 40+ years of almost entirely binary floating point maths does not shift to decimal overnight.
But for everyone else, binary floats are sufficient except when they aren't. Decimal, of course, won't solve all your floating-point difficulties -- it's easy to demonstrate that nearly all the common pitfalls of FP maths also occur with Decimal, with the exception of inexact conversion from decimal strings to numbers. But that one issue alone is a major cause of confusion.
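That caveat is easy to demonstrate: the string-conversion pitfall disappears, but ordinary rounding in arithmetic does not:

```python
from decimal import Decimal

# The conversion pitfall is gone -- decimal strings are exact:
print(Decimal('0.1') + Decimal('0.2'))   # 0.3

# ...but arithmetic still rounds to the context precision (28 significant
# digits by default), so the usual floating-point surprises remain:
print(Decimal(1) / Decimal(3) * 3)       # 0.9999999999999999999999999999
```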
My personal feeling is that for Python 4000 I'd vote for the default floating point format to be decimal, with binary floats available with a b suffix.
But since that could be a decade away, it's quite premature to spend too much time on this.
On 2014-03-04 01:55, Steven D'Aprano wrote:
On Tue, Mar 04, 2014 at 10:42:57AM +1100, Chris Angelico wrote:
You could probably make the same performance argument against making Unicode the default string datatype.
I don't think so -- for ASCII strings the performance cost of Unicode is significantly less than the performance hit for Decimal:
[steve@ando ~]$ python3.3 -m timeit -s "s = 'abcdef'*1000" "s.upper()"
100000 loops, best of 3: 8.76 usec per loop
[steve@ando ~]$ python3.3 -m timeit -s "s = b'abcdef'*1000" "s.upper()"
100000 loops, best of 3: 7.05 usec per loop

[steve@ando ~]$ python3.3 -m timeit -s "x = 123.4567" "x**6"
1000000 loops, best of 3: 0.344 usec per loop
[steve@ando ~]$ python3.3 -m timeit -s "from decimal import Decimal" \
                -s "x = Decimal('123.4567')" "x**6"
1000000 loops, best of 3: 1.41 usec per loop
That's a factor of 1.2 times slower for Unicode versus 4.1 for Decimal. I think that's *fast enough* for all but the most heavy numeric needs, but it's not something we can ignore.
But a stronger argument is that the default string should be the one that does the right thing with text. As of Python 3, that's the case. And the default integer type handles arbitrary sized integers (although Py2 went most of the way there by having automatic promotion). It's reasonable to suggest that the default non-integer numeric type should also simply do the right thing.
Define "the right thing" for numbers.
It's a trade-off, though, and for most people, float is sufficient.
That's a tricky one. For people doing quote-unquote "serious" numeric work, they'll mostly want to stick to binary floats, even if that means missing out on all the extra IEEE-754 goodies that the decimal module has but floats don't. The momentum of 40+ years of almost entirely binary floating point maths does not shift to decimal overnight.
[snip] Won't people doing quote-unquote "serious" numeric work be using numpy?
On Tue, Mar 4, 2014 at 12:55 PM, Steven D'Aprano steve@pearwood.info wrote:
On Tue, Mar 04, 2014 at 10:42:57AM +1100, Chris Angelico wrote:
You could probably make the same performance argument against making Unicode the default string datatype.
I don't think so -- for ASCII strings the performance cost of Unicode is significantly less than the performance hit for Decimal:
[steve@ando ~]$ python3.3 -m timeit -s "s = 'abcdef'*1000" "s.upper()"
100000 loops, best of 3: 8.76 usec per loop
[steve@ando ~]$ python3.3 -m timeit -s "s = b'abcdef'*1000" "s.upper()"
100000 loops, best of 3: 7.05 usec per loop

[steve@ando ~]$ python3.3 -m timeit -s "x = 123.4567" "x**6"
1000000 loops, best of 3: 0.344 usec per loop
[steve@ando ~]$ python3.3 -m timeit -s "from decimal import Decimal" \
                -s "x = Decimal('123.4567')" "x**6"
1000000 loops, best of 3: 1.41 usec per loop
That's a factor of 1.2 times slower for Unicode versus 4.1 for Decimal. I think that's *fast enough* for all but the most heavy numeric needs, but it's not something we can ignore.
There is a difference of degree, yes, but Unicode-strings-as-default has had a few versions to settle in, so the figures mightn't be perfectly fair. But there's still a difference. My point is that Python should be choosing what's right over what's fast, so there's a parallel there.
It's reasonable to suggest that the default non-integer numeric type should also simply do the right thing.
Define "the right thing" for numbers.
Yeah, and that's the issue. In this case, since computers don't have infinite computational power, "the right thing" is going to be fairly vague, but I'd define it heuristically as "what the average programmer is most likely to expect". IEEE 754 defines operations on infinity in a way that makes them do exactly what you'd expect. If that's not possible, nan.
>>> inf = float("inf")
>>> inf + 5
inf
>>> 5 - inf
-inf
>>> 5 / inf
0.0
>>> inf - inf
nan
A default decimal type would add to the "doing exactly what you expect" operations the obvious one of constructing an object from a series of decimal digits. If you say "0.1", you get the real number 1/10, not 3602879701896397/36028797018963968.
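That fraction can be checked directly; Fraction exposes the exact rational value behind each representation:

```python
from decimal import Decimal
from fractions import Fraction

# The binary float literal 0.1 is really this rational number:
print(Fraction(0.1))              # 3602879701896397/36028797018963968

# A Decimal built from the literal text carries exactly 1/10,
# which is what a decimal-by-default literal would give you:
print(Fraction(Decimal('0.1')))   # 1/10
```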
My personal feeling is that for Python 4000 I'd vote for the default floating point format to be decimal, with binary floats available with a b suffix.
Quite possibly. But changing defaults is a hugely backward-incompatible change, while adding a decimal literal syntax isn't. I'd be in favour of adding decimal literals and using performance and usefulness data from that to guide any discussions about Py4K changing the default.
And learn from Py3K and keep both the b and d suffixes supported in the new version :)
ChrisA
On Mon, Mar 3, 2014, at 15:12, Paul Moore wrote:
Having said this, I can't actually think of a real-life domain where math-library functions (by which I assume we basically mean transcendental functions?) would be used and yet there would be a genuine need for decimal arithmetic rather than floating point. So although I'm +1 for a decimal math library in theory, I'm unsure of its practical value.
If we count non-integer powers (though usually they're still rational), then finance.
I suspect though there's a better case for an arbitrary-precision library and no reason that library shouldn't be (also) able to work in decimal.
On 3/3/2014 12:54 PM, Steven D'Aprano wrote:
I think it is premature to be talking about what goes into Python 4.x, which is why I refer to it as "Python 4000". There's no concrete plans for a Python 4 yet, or even whether there will be a Python 4, what the last Python 3.x version will be,
Given that Guido does not want double-digit minor version numbers, 3.9, at the latest, will be followed by 4.0.
or what sort of changes will be considered.
There will be several deprecated items removed, such as http://docs.python.org/2/library/unittest.html#deprecated-aliases I think it would be worth making a consolidated list.
I think a few modules that have replacements will be considered for removal. Optparse (if argparse is an adequate replacement)? Asyncore (if the new async catches on)? Changing the meaning of a core feature like float literals is a different matter.
But I would expect that any such Python 4 will probably be at least four years away, although given the extended life of 2.7 possibly more like eight.
3.5, ..., 3.9, 4.0 is 6 releases, which should be about 10 years.
(Given the stress of the 2->3 migration, I think *nobody* is exactly in a hurry for yet another backwards incompatible version. Perhaps we should be delaying such things until Python 5000.)
On Monday, March 3, 2014 9:41:40 AM UTC-6, Mark H. Harris wrote:
Python3.3 Decimal Library v0.3 is Released here:
hi Oscar,
I released my pdeclib module on PyPI this afternoon (it only took four hours), but I'll never forget how to do that again! So if anyone is interested in the pdeclib module, it may be installed with pip install pdeclib
The tarball will install both pdeclib.py (decimal library) & pilib.py (PI Library).
The library is beta obviously, and is licensed for contribution on PyPI.
Cheers,
marcus
On 4 March 2014 01:37, Mark H. Harris harrismh777@gmail.com wrote:
I released my pdeclib module on PyPI this afternoon (it only took four hours), but I'll never forget how to do that again! So if anyone is interested in the pdeclib module, it may be installed with pip install pdeclib
Regardless of anything else that comes out of this thread, thanks for doing that. Paul