Floating point contexts in Python core
On 9 October 2012 02:07, Guido van Rossum <guido@python.org> wrote:
On Mon, Oct 8, 2012 at 5:32 PM, Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
On 9 October 2012 01:11, Guido van Rossum <guido@python.org> wrote:
On Mon, Oct 8, 2012 at 5:02 PM, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
So the question that really needs to be answered, I think, is not "Why is NaN == NaN false?", but "Why doesn't NaN == anything raise an exception, when it would make so much more sense to do so?"
Because == raising an exception is really unpleasant. We had this in Python 2 for unicode/str comparisons and it was very awkward.
Nobody arguing against the status quo seems to care at all about numerical algorithms though. I propose that you go find some numerical mathematicians and ask them.
The main purpose of quiet NaNs is to propagate through computation ruining everything they touch. In a programming language like C that lacks exceptions this is important as it allows you to avoid checking all the time for invalid values, whilst still being able to know if the end result of your computation was ever affected by an invalid numerical operation. The reasons for NaNs to compare unequal are no doubt related to this purpose.
It is of course arguable whether the same reasoning applies to a language like Python that has a very good system of exceptions but I agree with Guido that raising an exception on == would be unfortunate. How many people would forget that they needed to catch those exceptions? How awkward could your code be if you did remember to catch all those exceptions? In an exception handling language it's important to know that there are some operations that you can trust.
If we want to do *anything* I think we should first introduce a floating point context similar to the Decimal context. Then we can talk.
The other thread has gone on for ages now and isn't going anywhere. Guido's suggestion here is much more interesting (to me) so I want to start a new thread on this subject.

Python's default handling of floating point operations is IEEE-754 compliant which in my opinion is the obvious and right thing to do. However, Python is a much more versatile language than some of the other languages for which IEEE-754 was designed. Python offers the possibility of a very rich approach to the control and verification of the accuracy of numeric operations on both a function by function and code block by code block basis. This kind of functionality is already implemented in the decimal module [1] as well as numpy [2], gmpy [3], sympy [4] and no doubt other numerical modules that I'm not aware of.

It would be a real blessing to numerical Python programmers if either/both of the following were to occur:

1) Support for calculation contexts with floats
2) A generic kind of calculation context manager that was recognised widely by the builtin/stdlib types and also by third party numerical packages.

Oscar

References:
[1] http://docs.python.org/library/decimal.html#context-objects
[2] http://docs.scipy.org/doc/numpy/reference/generated/numpy.seterr.html#numpy....
[3] https://gmpy2.readthedocs.org/en/latest/mpfr.html
[4] http://docs.sympy.org/dev/modules/mpmath/contexts.html
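As a point of reference for (2), the decimal module already provides exactly this kind of per-block control; a minimal sketch (standard library only, nothing hypothetical) of switching division-by-zero handling for Decimal:

from decimal import Decimal, DivisionByZero, localcontext

# By default the DivisionByZero trap is enabled, so Decimal(1) / Decimal(0)
# raises decimal.DivisionByZero. Inside a local context with the trap
# disabled it returns Infinity instead.
with localcontext() as ctx:
    ctx.traps[DivisionByZero] = False
    x = Decimal(1) / Decimal(0)

print(x)   # Infinity

The proposal is essentially to have something of this shape that float (and third-party numerical types) would respect as well.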
On Oct 10, 2012, at 6:42 PM, Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
If we want to do *anything* I think we should first introduce a floating point context similar to the Decimal context. Then we can talk.
The other thread has gone on for ages now and isn't going anywhere. Guido's suggestion here is much more interesting (to me) so I want to start a new thread on this subject. Python's default handling of floating point operations is IEEE-754 compliant which in my opinion is the obvious and right thing to do.
I gave this idea +float('inf') in the other thread and have been thinking about it since. I am now toying with the idea of unifying float and decimal in Python. IEEE combined their two FP standards into one recently, so we have a precedent for this. We can start by extending decimal to support radix 2 and once that code is mature enough and has accelerated code for platform formats (single, double, long double), we can replace Python float with the new fully platform independent IEEE 754 compliant implementation. We can even supply a legacy context to support some current warts.
On 11/10/12 09:56, Alexander Belopolsky wrote:
I gave this idea +float('inf') in the other thread and have been thinking about it since. I am now toying with the idea of unifying float and decimal in Python. IEEE combined their two FP standards into one recently, so we have a precedent for this.
We can start by extending decimal to support radix 2 and once that code is mature enough and has accelerated code for platform formats (single, double, long double),
I don't want to be greedy, but supporting minifloats would be a real boon to beginners trying to learn how floats work.
we can replace Python float with the new fully platform independent IEEE 754 compliant implementation. We can even supply a legacy context to support some current warts.
This all sounds very exciting, but also like a huge amount of work. -- Steven
This all sounds very exciting, but also like a huge amount of work.
Indeed. But that's what we're here for. Anyway, as an indication of the amount of work, you might want to look at the fpectl module -- the module itself is tiny, but its introduction required a huge amount of changes to every place where CPython uses a double. I don't know if anybody uses it, though it's still in the Py3k codebase. -- --Guido van Rossum (python.org/~guido)
On Wed, Oct 10, 2012 at 9:44 PM, Guido van Rossum <guido@python.org> wrote:
Anyway, as an indication of the amount of work, you might want to look at the fpectl module -- the module itself is tiny, but its introduction required a huge amount of changes to every place where CPython uses a double.
I would start from the other end. I would look at decimal.py first. It is a little over 6,400 lines of code and I think most of it can be reused to implement a base 2 (or probably better, base 16) float. A multi-precision binary float can coexist with the existing float until the code matures and accelerators are written for major platforms. At the same time we can make incremental improvements to the builtin float until it can be replaced by a multi-precision float in some well-defined context.
On Oct 11, 2012, at 4:45 AM, Serhiy Storchaka <storchaka@gmail.com> wrote:
With base 16 floats you can't emulate x86 native 53-bit mantissa floats.
I realized that as soon as I hit send. :-( I also realized that it does not matter for the Python implementation because decimal stores the mantissa as an int rather than a list of digits.
Alexander Belopolsky wrote:
I gave this idea +float('inf') in the other thread and have been thinking about it since. I am now toying with the idea of unifying float and decimal in Python.
Are you sure there would be any point in this? People who specifically *want* base-2 floats are probably quite happy with the current float type, and wouldn't appreciate having it slowed down, even by a small amount.

It might make sense for them to share whatever parts of the fp context apply to both, and they might have a common base type, but they should probably remain distinct types with separate implementations.

-- Greg
On 11 October 2012 06:45, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Alexander Belopolsky wrote:
I gave this idea +float('inf') in the other thread and have been thinking about it since. I am now toying with the idea of unifying float and decimal in Python.
Are you sure there would be any point in this? People who specifically *want* base-2 floats are probably quite happy with the current float type, and wouldn't appreciate having it slowed down, even by a small amount.
It might make sense for them to share whatever parts of the fp context apply to both, and they might have a common base type, but they should probably remain distinct types with separate implementations.
This is what I was pitching at. It would be great if a single floating point context could be used to control the behaviour of float, decimal, ndarray etc. simultaneously. Something that would have made my life easier yesterday would have been a way to enter a debugger at the point when the first NaN is created during execution. Something like:

python -m pdb --error-nan broken_script.py

Or perhaps:

PYTHONRUNFIRST='import errornan' python broken_script.py

With numpy you can already do:

export PYTHONRUNFIRST='import numpy; numpy.seterr(all="raise")'

(Except that PYTHONRUNFIRST isn't implemented yet: http://bugs.python.org/issue14803)

Oscar
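A rough sketch of what such an 'errornan' module could look like today for numpy code -- numpy.seterr and numpy.seterrcall are real APIs, but the module name and the idea of dropping straight into pdb are just an illustration of the debugging workflow Oscar describes, not an existing tool:

# errornan.py (hypothetical): drop into the debugger at the first
# invalid operation or zero division in numpy code.
import pdb
import numpy as np

def _break_on_fpe(err, flag):
    # numpy calls this for any error category set to 'call' below.
    print("floating point error:", err)
    pdb.set_trace()

np.seterrcall(_break_on_fpe)
np.seterr(invalid='call', divide='call')

Importing this before the script runs (which is what PYTHONRUNFIRST would automate) would stop execution at the first offending operation.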
I think you're mistaking my suggestion. I meant to recommend that there should be a way to control the behavior (e.g. whether to silently return NaN/Inf or raise an exception) of floating point operations, using the capabilities of the hardware as exposed through C, using Python's existing float type. I did not for a second consider reimplementing IEEE 754 from scratch. Therein lies insanity. That's also why I recommended you look at the fpectl module.

--
--Guido van Rossum (python.org/~guido)
On 11 October 2012 15:54, Guido van Rossum <guido@python.org> wrote:
I think you're mistaking my suggestion. I meant to recommend that there should be a way to control the behavior (e.g. whether to silently return NaN/Inf or raise an exception) of floating point operations, using the capabilities of the hardware as exposed through C, using Python's existing float type. I did not for a second consider reimplementing IEEE 754 from scratch. Therein lies insanity.
That's also why I recommended you look at the fpectl module.
I would like to have precisely the functionality you are suggesting and I don't want to reimplement anything (I assume this message is intended for me since it was addressed to me). I don't know enough about the implementation details to agree on the hardware capabilities part.

From a quick glance at the fpectl module I see that it has problems with portability:
http://docs.python.org/library/fpectl.html#fpectl-limitations

"Setting up a given processor to trap IEEE-754 floating point errors currently requires custom code on a per-architecture basis. You may have to modify fpectl to control your particular hardware."

This presumably explains why I don't have the module in my Windows build or on the Linux machines in the HPC cluster I use. Are these problems that can be overcome? If it is necessary to have this hardware-specific accelerator for floating point exceptions then is it reasonable to expect implementations other than CPython to be able to match the semantics of floating point contexts without a significant degradation in performance?

I was expecting the implementation to be some checks in straightforward C code for invalid values. I would expect this to cause a small degradation in performance (the kind that you wouldn't notice unless you went out of your way to measure it). Python already does this by checking for a zero value on every division. As far as I can tell from the numpy codebase this is how it works there. This function seems to be responsible for the integer division by zero result in numpy:
https://github.com/numpy/numpy/blob/master/numpy/core/src/scalarmathmodule.c...
>>> import numpy as np
>>> np.seterr()
{'over': 'warn', 'divide': 'warn', 'invalid': 'warn', 'under': 'ignore'}
>>> np.int32(1) / np.int32(0)
__main__:1: RuntimeWarning: divide by zero encountered in long_scalars
0
>>> np.seterr(divide='ignore')
{'over': 'warn', 'divide': 'warn', 'invalid': 'warn', 'under': 'ignore'}
>>> np.int32(1) / np.int32(0)
0
>>> np.seterr(divide='raise')
{'over': 'warn', 'divide': 'ignore', 'invalid': 'warn', 'under': 'ignore'}
>>> np.int32(1) / np.int32(0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
FloatingPointError: divide by zero encountered in long_scalars
This works perfectly well in numpy, and also in decimal, so I see no reason why it couldn't work for float/int. But what would be even better is if you could control all of them with a single context manager. Typically I don't care whether the error occurred as a result of operations on ints/floats/ndarrays/decimals; I just know that I got a NaN from somewhere and I need to debug it.

Oscar
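A minimal sketch of what a single cross-type context manager could look like if it were layered on top of the pieces that exist today (numpy's seterr and decimal's localcontext); the name strict_numerics is hypothetical, and builtin float is deliberately untouched since it has no context to hook into yet:

from contextlib import contextmanager
import decimal
import numpy as np

@contextmanager
def strict_numerics():
    # Raise for zero division, invalid operations and overflow on numpy
    # types and on Decimal inside the block, then restore the previous
    # settings on the way out.
    old = np.seterr(divide='raise', invalid='raise', over='raise')
    try:
        with decimal.localcontext() as ctx:
            ctx.traps[decimal.DivisionByZero] = True
            ctx.traps[decimal.InvalidOperation] = True
            yield
    finally:
        np.seterr(**old)

with strict_numerics():
    np.float64(1.0) / np.float64(0.0)   # raises FloatingPointError here

Builtin float and int would need the kind of context support discussed in this thread before they could participate.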
On Thu, 11 Oct 2012 18:45:50 +1300 Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Alexander Belopolsky wrote:
I gave this idea +float('inf') in the other thread and have been thinking about it since. I am now toying with the idea of unifying float and decimal in Python.
Are you sure there would be any point in this? People who specifically *want* base-2 floats are probably quite happy with the current float type, and wouldn't appreciate having it slowed down, even by a small amount.
Indeed, I don't see the point either. Decimal's strength over float is to be able to represent *decimal* numbers of arbitrary precision, which is useful because most common human activities use base-10 numbers. I don't see how adding a new binary float type would help any use case.

Regards,

Antoine.

--
Software development and contracting: http://pro.pitrou.net
On 11/10/12 16:45, Greg Ewing wrote:
Alexander Belopolsky wrote:
I gave this idea +float('inf') in the other thread and have been thinking about it since. I am now toying with the idea of unifying float and decimal in Python.
Are you sure there would be any point in this? People who specifically *want* base-2 floats are probably quite happy with the current float type, and wouldn't appreciate having it slowed down, even by a small amount.
I would gladly give up a small amount of speed for better control over floats, such as whether 1/0.0 raised an exception or returned infinity. If I wanted fast code, I'd be using C. I'm happy with *fast enough*.

For example, 1/0.0 in a continued fraction is generally harmless, provided it returns infinity. If it raises an exception, you have to write slow, ugly code to evaluate continued fractions robustly. I wouldn't expect 1/0.0 -> infinity to become the default, but I'd like a runtime switch to turn it on and off as needed.

-- Steven
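To illustrate the continued fraction point, here is a minimal sketch using numpy scalars (only because they already return inf rather than raising); with builtin floats the same function dies with ZeroDivisionError whenever an intermediate value happens to be exactly zero:

import numpy as np

np.seterr(divide='ignore')   # here inf on division by zero is exactly what we want

def continued_fraction(coeffs):
    # Evaluate a0 + 1/(a1 + 1/(a2 + ...)) from the innermost term outwards.
    # If an intermediate value is exactly zero, 1/0 gives inf and the level
    # above collapses back to its own coefficient (a + 1/inf == a), so no
    # special-case code is needed.
    x = np.float64(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        x = a + 1.0 / x
    return x

print(continued_fraction([3, 7, 15, 1, 292]))   # 3.14159265...
print(continued_fraction([1, 2, -1, 1]))        # hits 1/0 internally, returns 1.0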
On 11.10.2012 13:35, Steven D'Aprano wrote:
I would gladly give up a small amount of speed for better control over floats, such as whether 1/0.0 raised an exception or returned infinity.
If I wanted fast code, I'd be using C. I'm happy with *fast enough*.
For example, 1/0.0 in a continued fraction is generally harmless, provided it returns infinity. If it raises an exception, you have to write slow, ugly code to evaluate continued fractions robustly. I wouldn't expect 1/0.0 -> infinity to become the default, but I'd like a runtime switch to turn it on and off as needed.
For those who use Python for numerical or scientific computing or computer graphics this is a real issue. First: The standard way of dealing with 1/0.0 in this context, since the days of FORTRAN, is to return an inf. Consequently, that is what NumPy does, as does Matlab, R and most C programs and libraries. Now compare:
>>> 1.0/0.0
Traceback (most recent call last):
  File "<pyshell#0>", line 1, in <module>
    1.0/0.0
ZeroDivisionError: float division by zero

With this:
>>> import numpy as np
>>> np.float64(1.0)/np.float64(0.0)
inf
Thus, the NumPy float64 scalar behaves differently from the Python float scalar! In less than trivial expressions, we can have a combination of Python floats and ints and NumPy types (arrays or scalars). What this means is that the behavior is undefined. You might get an inf, or you might get an exception. Who can tell? The issue also affects integers:
>>> 1/0
Traceback (most recent call last):
  File "<pyshell#5>", line 1, in <module>
    1/0
ZeroDivisionError: integer division or modulo by zero

whereas:
>>> np.int64(1)/np.int64(0)
0
>>> np.int32(1)/np.int32(0)
0
And with arrays:
>>> np.ones(10, dtype=np.int)/np.zeros(10, dtype=np.int)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
>>> np.ones(10, dtype=np.float64)/np.zeros(10, dtype=np.float64)
array([ inf,  inf,  inf,  inf,  inf,  inf,  inf,  inf,  inf,  inf])
I think for the sake of us who actually need computation -- believe it or not, Python is rapidly becoming the language of choice for numerical computing -- it would be very nice if this was controllable. Not just the behavior of floats, but also the behavior of ints.

A global switch in the sys module would make life a lot easier. Even better would be a context manager that allows us to set up a "numerical" context for local expressions using a with statement. That would not have a lasting effect, but just affect the context. Preferably it should not even propagate across function calls. Something like this:

def foobar():
    1/0.0   # raise an exception
    1/0     # raise an exception

with sys.numerical:
    1/0.0   # return inf
    1/0     # return 0
    foobar()

(NumPy actually prints divide by zero warnings on their first occurrence, but I removed them above for clarity.)

Sturla Molden
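There is no sys.numerical today, but for numpy types the per-block half of what Sturla sketches already exists as numpy.errstate; a small illustration, noting that (unlike Sturla's preference) the setting does propagate into functions called inside the block, and that builtin int/float are unaffected:

import numpy as np

def foobar():
    return np.float64(1.0) / np.float64(0.0)

with np.errstate(divide='ignore'):
    print(foobar())          # inf, silently: the state propagates into foobar

with np.errstate(divide='raise'):
    try:
        foobar()             # FloatingPointError, again because it propagates
    except FloatingPointError as e:
        print("trapped:", e)

print(foobar())              # back to the default: warns once and returns inf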
On Thu, Oct 11, 2012 at 11:31 PM, Sturla Molden <sturla@molden.no> wrote:
A global switch in the sys module would make life a lot easier. Even better would be a context manager that allows us to set up a "numerical" context for local expressions using a with statement. That would not have a lasting effect, but just affect the context. Preferably it should not even propagate across function calls. Something like this:
def foobar():
    1/0.0   # raise an exception
    1/0     # raise an exception

with sys.numerical:
    1/0.0   # return inf
    1/0     # return 0
    foobar()
Not propagating across function calls strikes me as messy, but I see why you'd want it. Would this be better as a __future__ directive? There's already the concept that they apply to a module but not to what that module calls. ChrisA
On 11 October 2012 14:18, Chris Angelico <rosuav@gmail.com> wrote:
On Thu, Oct 11, 2012 at 11:31 PM, Sturla Molden <sturla@molden.no> wrote:
A global switch in the sys module would make life a lot easier. Even better would be a context manager that allows us to set up a "numerical" context for local expressions using a with statement. That would not have a lasting effect, but just affect the context. Preferably it should not even propagate across function calls. Something like this:
def foobar():
    1/0.0   # raise an exception
    1/0     # raise an exception

with sys.numerical:
    1/0.0   # return inf
    1/0     # return 0
    foobar()
Not propagating across function calls strikes me as messy, but I see why you'd want it. Would this be better as a __future__ directive? There's already the concept that they apply to a module but not to what that module calls.
__future__ directives are for situations in which the default behaviour will be changed in the future but you want to get the new behaviour now. The proposal is to always have widely supported, convenient ways to switch between different handling modes for numerical operations. The default Python behaviour would be unchanged by this. Oscar
On Fri, Oct 12, 2012 at 12:36 AM, Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
On 11 October 2012 14:18, Chris Angelico <rosuav@gmail.com> wrote:
Not propagating across function calls strikes me as messy, but I see why you'd want it. Would this be better as a __future__ directive? There's already the concept that they apply to a module but not to what that module calls.
__future__ directives are for situations in which the default behaviour will be changed in the future but you want to get the new behaviour now. The proposal is to always have widely supported, convenient ways to switch between different handling modes for numerical operations. The default Python behaviour would be unchanged by this.
Sure, it's not perfect for __future__ either, but it does seem odd for a function invocation to suddenly change semantics. This change "feels" to me more like a try/catch block - it's a change to this code that causes different behaviour around error conditions. That ought to continue into a called function. ChrisA
On 11 October 2012 15:55, Serhiy Storchaka <storchaka@gmail.com> wrote:
On 11.10.12 15:31, Sturla Molden wrote:
>>> np.int64(1)/np.int64(0)
0
>>> np.int32(1)/np.int32(0)
0
There must be some rationale for such behavior.
I don't know what the rationale for that is but it is at least controllable in numpy:
>>> import numpy as np
>>> np.seterr(all='raise')  # Exceptions instead of mostly useless values
{'over': 'raise', 'divide': 'raise', 'invalid': 'raise', 'under': 'raise'}
>>> np.int32(1) / np.int32(0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
FloatingPointError: divide by zero encountered in long_scalars
>>> np.float32(1e20) * np.float32(1e20)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
FloatingPointError: overflow encountered in float_scalars
>>> np.float32('inf')
inf
>>> np.float32('inf') / np.float32('inf')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
FloatingPointError: invalid value encountered in float_scalars
Oscar
On 11 October 2012 17:05, Stephen J. Turnbull <stephen@xemacs.org> wrote:
Steven D'Aprano writes:
I would gladly give up a small amount of speed for better control over floats, such as whether 1/0.0 raised an exception or returned infinity.
Isn't that what the fpectl module is supposed to buy, albeit much less pleasantly than Decimal contexts do?
But the fpectl module IIUC wouldn't work for 1 / 0. Since Python has managed to unify integer/float division now it would be a shame to introduce any new reasons to bring in superfluous .0s again:

with context(zero_division='infinity'):
    x = 1 / 0.0   # float('inf')
    y = 1 / 0     # I'd like to see float('inf') here as well

I've spent 4 hours this week in computer labs with students using Python 2.7 as an introduction to scientific programming. A significant portion of that time was spent explaining the int/float division problem. They all get the issue now but not all of them understand that it is specifically about division: many are putting .0s everywhere. I expect it to be easier when we use Python 3 and I can simply explain that there are two types of division with two different operators.

Oscar
On Thu, Oct 11, 2012 at 11:42 AM, Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
On 11 October 2012 17:05, Stephen J. Turnbull <stephen@xemacs.org> wrote:
Steven D'Aprano writes:
I would gladly give up a small amount of speed for better control over floats, such as whether 1/0.0 raised an exception or returned infinity.
Isn't that what the fpectl module is supposed to buy, albeit much less pleasantly than Decimal contexts do?
But the fpectl module IIUC wouldn't work for 1 / 0. Since Python has managed to unify integer/float division now it would be a shame to introduce any new reasons to bring in superfluous .0s again:
with context(zero_division='infinity'):
    x = 1 / 0.0   # float('inf')
    y = 1 / 0     # I'd like to see float('inf') here as well
I've spent 4 hours this week in computer labs with students using Python 2.7 as an introduction to scientific programming. A significant portion of that time was spent explaining the int/float division problem. They all get the issue now but not all of them understand that it is specifically about division: many are putting .0s everywhere. I expect it to be easier when we use Python 3 and I can simply explain that there are two types of division with two different operators.
You could have just told them to "from __future__ import division" -- --Guido van Rossum (python.org/~guido)
On 11 October 2012 19:46, Guido van Rossum <guido@python.org> wrote:
On Thu, Oct 11, 2012 at 11:42 AM, Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
I've spent 4 hours this week in computer labs with students using Python 2.7 as an introduction to scientific programming. A significant portion of that time was spent explaining the int/float division problem. They all get the issue now but not all of them understand that it is specifically about division: many are putting .0s everywhere. I expect it to be easier when we use Python 3 and I can simply explain that there are two types of division with two different operators.
You could have just told them to "from __future__ import division"
I know, but the reason for choosing Python is the low barrier to getting started with procedural programming. When they're having trouble understanding the difference between the Python shell and the OS shell, I'd like to avoid introducing the concept that the interpreter can change its calculation modes dynamically and forget those changes when you restart it. It's also unfortunate for the students to know that some of the things they're seeing on day one will change in the next version (you can't just tell people to import things from the "future" without some kind of explanation).

I used the opportunity to get them thinking a little bit about types by running type(x) and explaining that different types of objects behave differently. I would rather explain that using genuinely incompatible types like strings and numbers than ints and floats, though.

Oscar
Oscar Benjamin writes:
But the fpectl module IIUC wouldn't work for 1 / 0.
No, and it shouldn't.
Since Python has managed to unify integer/float division now it would be a shame to introduce any new reasons to bring in superfluous .0s again:
With all due respect to the designers, unification of integer/float division is a compromise, even a mathematical kludge. I'm not complaining, it happens to work well for most applications, even for me (at least where I need a computer to do the calculations :-). Practicality beats purity.
with context(zero_division='infinity'):
    x = 1 / 0.0   # float('inf')
    y = 1 / 0     # I'd like to see float('inf') here as well
I'd hate that. Zero simply isn't a unit in any ring of integers; if I want to handle divide-by-zero specially (rather than consider it a programming error in preceding code) a LBYL non-zero divisor test or a try handler for divide-by-zero is appropriate. And in the case of z = -1 / 0.0 should it be float('inf') (complex) or -float('inf') (real)? (Obviously it should be the latter, as most scientific programming is done using real algorithms. But one could argue that just as integer is corrupted to float in the interests of continuity in division results, float should be corrupted to complex in the interest of a larger domain for roots and trigonometric functions.)
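For what it's worth, IEEE 754 itself gives the "real" answer here: the sign of the resulting infinity follows the signs of the operands, including the sign of zero. A quick illustration with numpy scalars, which follow the standard (divide-by-zero warnings omitted):

>>> import numpy as np
>>> np.float64(-1.0) / np.float64(0.0)
-inf
>>> np.float64(-1.0) / np.float64(-0.0)
inf
>>> np.float64(1.0) / np.float64(-0.0)
-inf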
I've spent 4 hours this week in computer labs with students using Python 2.7 as an introduction to scientific programming. A significant portion of that time was spent explaining the int/float division problem. They all get the issue now but not all of them understand that it is specifically about division: many are putting .0s everywhere.
A perfectly rational approach for them, which may appeal to their senses of beauty in mathematics -- I personally would always write 1.0/0.0, not 1/0.0 -- and more mathematically correct than what you try to teach them. I really don't understand why you have a problem with it. Your problem seems to be that Python shouldn't have integers, except as an internal optimization for a subset of floating point operations. Then "1" could always be an abbreviation for "1.0"!
I expect it to be easier when we use Python 3 and I can simply explain that there are two types of division with two different operators.
Well, it's been more than 40 years since I studied this stuff in America, but what they taught 10-year-olds then was that there are two ways to view division: in integers with result and remainder, and as a fraction. And they used the same operator! Not to mention that the algorithm for reducing fractions depends on integer division. It's a shame students forget so quickly. :-)
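Python 3 does expose both views of division directly, which is the distinction Oscar plans to teach; a small illustration, with fractions.Fraction standing in for the exact "fraction" view:

>>> 7 // 3, 7 % 3           # integer division: quotient and remainder
(2, 1)
>>> divmod(7, 3)
(2, 1)
>>> 7 / 3                   # true division: a single binary float
2.3333333333333335
>>> from fractions import Fraction
>>> Fraction(7, 3)          # the exact value; reduction relies on integer division (gcd)
Fraction(7, 3)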
On 12/10/12 03:05, Stephen J. Turnbull wrote:
Steven D'Aprano writes:
I would gladly give up a small amount of speed for better control over floats, such as whether 1/0.0 raised an exception or returned infinity.
Isn't that what the fpectl module is supposed to buy, albeit much less pleasantly than Decimal contexts do?
I can't test it, because I don't have that module installed, but I would think not. Reading the docs:

http://docs.python.org/library/fpectl.html

I would say that fpectl exists to turn on floating point exceptions where Python currently returns an inf or NaN, not to turn on special values where Python currently raises an exception, e.g. 1/0.0.

Because it depends on a build-time option, using it is even less convenient than most other non-standard libraries. It only has a single exception type for any of Division by Zero, Overflow and Invalid, and doesn't appear to trap Underflow or Inexact at all. It's not just less pleasant than Decimal contexts, but much less powerful as well.

-- Steven
Steven D'Aprano writes:
On 12/10/12 03:05, Stephen J. Turnbull wrote:
Steven D'Aprano writes:
I would gladly give up a small amount of speed for better control over floats, such as whether 1/0.0 raised an exception or returned infinity.
Isn't that what the fpectl module is supposed to buy, albeit much less pleasantly than Decimal contexts do?
I can't test it, because I don't have that module installed, but I would think not.
Reading the docs:
http://docs.python.org/library/fpectl.html
I would say that fpectl exists to turn on floating point exceptions where Python currently returns an inf or NaN, not to turn on special values where Python currently raises an exception, e.g. 1/0.0.
OK. But if Python does that, it must be checking the value of the operand as well as the type. Surely that could be delegated to the hardware easily by commenting out one line. (Of course that would need to be a build-time option, and requires care in initialization.)
Because it depends on a build-time option, using it is even less convenient that most other non-standard libraries.
That is neither here nor there. I think the people who would use such facilities are a very small minority; imposing a slight extra burden on them is not a huge cost to Python. Eg, I'm perfectly happy with Python's current behavior because I only write toy examples/classroom demos in pure Python. If I were going to try to write statistical code in Python (vaguely plausible but not likely :-), I'd surely use SciPy.
It only has a single exception type for any of Division by Zero, Overflow and Invalid, and doesn't appear to trap Underflow or Inexact at all. It's not just less pleasant than Decimal contexts, but much less powerful as well.
Now you're really picking nits. Nobody said fpectl is perfect for all uses, just that you could get *better* control over floats. If you're going to insist that nothing less than Decimal contexts will do, you're right for you -- but that's not what you said.
On Thu, Oct 11, 2012 at 6:35 AM, Steven D'Aprano <steve@pearwood.info> wrote:
On 11/10/12 16:45, Greg Ewing wrote:
Are you sure there would be any point in this? People who specifically *want* base-2 floats are probably quite happy with the current float type, and wouldn't appreciate having it slowed down, even by a small amount.
I would gladly give up a small amount of speed for better control over floats, such as whether 1/0.0 raised an exception or returned infinity.
Umm, you would be giving up a *lot* of speed. Native floating point happens right in the processor, so if you want special behavior, you'd have to take the floating point out of the CPU and into "user space". mark
participants (11)

- Alexander Belopolsky
- Antoine Pitrou
- Chris Angelico
- Greg Ewing
- Guido van Rossum
- Mark Adam
- Oscar Benjamin
- Serhiy Storchaka
- Stephen J. Turnbull
- Steven D'Aprano
- Sturla Molden