pyc files, constant folding and borderline portability issues
Hello,

There are a couple of ancillary portability concerns due to optimizations which store system-dependent results of operations between constants in pyc files:

- Issue #5057: code like '\U00012345'[0] is optimized away and its result stored as a constant in the pyc file, but the result should be different in UCS-2 and UCS-4 builds.
- Issue #5593: code like 1e16+2.9999 is optimized away and its result stored as a constant (again), but the result can vary slightly depending on the internal FPU precision.

These problems have probably been there for a long time and almost no one seems to complain, but I thought I'd report them here just in case.

Regards

Antoine.
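For readers who want to check what their own interpreter does with the expressions Antoine mentions, a minimal inspection (illustrative only; whether and where the folding happens depends on the Python version and build):

    import dis

    # Compile the expression the way module code is compiled; on builds that
    # fold it, the computed value shows up among the code object's constants.
    code = compile("x = 1e16 + 2.9999", "<example>", "exec")
    dis.dis(code)
    print(code.co_consts)  # a pre-computed float here means the result was baked in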
On Sun, Mar 29, 2009 at 9:42 AM, Antoine Pitrou <solipsis@pitrou.net> wrote:
There are a couple of ancillary portability concerns due to optimizations which store system-dependent results of operations between constants in pyc files:
- Issue #5057: code like '\U00012345'[0] is optimized away and its result stored as a constant in the pyc file, but the result should be different in UCS-2 and UCS-4 builds.
Why would anyone write such code (disregarding the \U escape problem)? So why do we bother optimizing this?
- Issue #5593: code like 1e16+2.9999 is optimized away and its result stored as a constant (again), but the result can vary slightly depending on the internal FPU precision.
I would just not bother constant folding involving FP, or only if the values involved have an exact representation in IEEE binary FP format.
These problems have probably been there for a long time and almost no one seems to complain, but I thought I'd report them here just in case.
I would expect that constant folding isn't nearly as effective in Python as in other (less dynamic) languages because it doesn't do anything for NAMED constants. E.g.

    MINUTE = 60
    def half_hour(): return MINUTE*30

This should be folded to "return 1800" but isn't, because the compiler doesn't know that MINUTE is a constant.

Has anyone ever profiled the effectiveness of constant folding on real-world code? The only kind of constant folding that I expect to be making a difference is things like unary operators, since e.g. "x = -2" is technically an expression involving a unary minus.

ISTM that historically, almost every time we attempted some new form of constant folding, we introduced a bug.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)
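A quick way to see what Guido describes, using the dis module (the opcode names shown below match 2.x-era interpreters and differ slightly in newer versions):

    import dis

    MINUTE = 60

    def half_hour():
        return MINUTE * 30

    dis.dis(half_hour)
    # Typical output (2.x-era opcode names):
    #   LOAD_GLOBAL     0 (MINUTE)
    #   LOAD_CONST      1 (30)
    #   BINARY_MULTIPLY
    #   RETURN_VALUE
    # MINUTE is looked up by name at call time, so the multiplication cannot be folded.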
On Mar 29, 2009 at 05:36PM, Guido van Rossum <guido@python.org> wrote:
- Issue #5593: code like 1e16+2.9999 is optimized away and its result stored as a constant (again), but the result can vary slightly depending on the internal FPU precision.
I would just not bother constant folding involving FP, or only if the values involved have an exact representation in IEEE binary FP format.
The Language Reference says nothing about the effects of code optimizations. I think that's a very good thing, because it leaves us room to do some work here with constant folding. If someone wants to preserve precision with floats, they can always use a temporary variable, as in many other languages.
These problems have probably been there for a long time and almost no one seems to complain, but I thought I'd report them here just in case.
I would expect that constant folding isn't nearly as effective in Python as in other (less dynamic) languages because it doesn't do anything for NAMED constants. E.g.
MINUTE = 60
def half_hour(): return MINUTE*30
This should be folded to "return 1800" but isn't, because the compiler doesn't know that MINUTE is a constant.
I completely agree. We can't say anything about MINUTE at the time half_hour will be executed. The code here must never be changed.
Has anyone ever profiled the effectiveness of constant folding on real-world code? The only kind of constant folding that I expect to be making a difference is things like unary operators, since e.g. "x = -2" is technically an expression involving a unary minus.
At this time with Python 2.6.1 we have these results:

    def f(): return 1 + 2 * 3 + 4j
    dis(f)
      1           0 LOAD_CONST               1 (1)
                  3 LOAD_CONST               5 (6)
                  6 BINARY_ADD
                  7 LOAD_CONST               4 (4j)
                 10 BINARY_ADD
                 11 RETURN_VALUE

    def f(): return ['a', ('b', 'c')] * (1 + 2 * 3)
    dis(f)
      1           0 LOAD_CONST               1 ('a')
                  3 LOAD_CONST               7 (('b', 'c'))
                  6 BUILD_LIST               2
                  9 LOAD_CONST               4 (1)
                 12 LOAD_CONST               8 (6)
                 15 BINARY_ADD
                 16 BINARY_MULTIPLY
                 17 RETURN_VALUE

With proper constant folding code, both functions can be reduced to a single LOAD_CONST and a RETURN_VALUE (or even to a single instruction with an advanced peephole optimizer). I'll show it at PyCon in Florence next month.
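Cesare's output above can be reproduced with something along these lines (constant indices and opcode names vary between interpreter versions):

    import dis

    def f():
        return 1 + 2 * 3 + 4j

    def g():
        return ['a', ('b', 'c')] * (1 + 2 * 3)

    dis.dis(f)  # on 2.6.1 the peepholer folds 2 * 3 to 6 but leaves both additions
    dis.dis(g)  # the list must still be built at run time (see Antoine's reply below)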
ISTM that historically, almost every time we attempted some new form of constant folding, we introduced a bug.
Python comes with a very rich battery of tests, which helped me a lot in my work of changing the AST, compiler, peephole optimizer, and VM. If they aren't enough, we can expand them with more test cases. But, again, the Language Reference says nothing about optimizations.

Cheers,
Cesare
Cesare Di Mauro <cesare.dimauro@a-tono.com> writes:
def f(): return ['a', ('b', 'c')] * (1 + 2 * 3) [...]
With proper constant folding code, both functions can be reduced to a single LOAD_CONST and a RETURN_VALUE (or, definitely, by a single instruction at all with an advanced peephole optimizer).
Lists are mutable; you can't optimize the creation of list literals by storing them as singleton constants.

Regards

Antoine.
On Mon, Apr 6, 2009 16:43, Antoine Pitrou wrote:
Cesare Di Mauro <cesare.dimauro@a-tono.com> writes:
def f(): return ['a', ('b', 'c')] * (1 + 2 * 3) [...]
With proper constant folding code, both functions can be reduced to a single LOAD_CONST and a RETURN_VALUE (or, definitely, by a single instruction at all with an advanced peephole optimizer).
Lists are mutable; you can't optimize the creation of list literals by storing them as singleton constants.
Regards
Antoine.
You are right, I mistyped the example.

    def f(): return ('a', ('b', 'c')) * (1 + 2 * 3)

generates a single instruction (depending on the threshold used to limit folding of sequences), whereas

    def f(): return ['a', ('b', 'c')] * (1 + 2 * 3)

needs three. Sorry for the mistake.

Cheers,
Cesare
Cesare> At this time with Python 2.6.1 we have these results:
Cesare> def f(): return 1 + 2 * 3 + 4j
...
Cesare> def f(): return ['a', ('b', 'c')] * (1 + 2 * 3)

Guido can certainly correct me if I'm wrong, but I believe the main point of his message was that you aren't going to encounter a lot of code in Python which is amenable to traditional constant folding. For the most part, constants will be assigned to symbolic "constants" which, unlike C preprocessor macros, aren't really constants at all. Consequently, the opportunity for constant folding is minimal and probably introduces more opportunities for bugs than performance improvements.

Skip
On Mon, Apr 6, 2009 18:57, skip@pobox.com wrote:
Cesare> At this time with Python 2.6.1 we have these results:
Cesare> def f(): return 1 + 2 * 3 + 4j
...
Cesare> def f(): return ['a', ('b', 'c')] * (1 + 2 * 3)
Guido can certainly correct me if I'm wrong, but I believe the main point of his message was that you aren't going to encounter a lot of code in Python which is amenable to traditional constant folding. For the most part, constants will be assigned to symbolic "constants" which, unlike C preprocessor macros, aren't really constants at all. Consequently, the opportunity for constant folding is minimal and probably introduces more opportunities for bugs than performance improvements.
Skip
I can understand Guido's concern, but you worked on constant folding as well, and you know that there's room for optimization here. peephole.c has some code for unary, binary, and tuple/list folding, and it works fine. Why maintain useless and dangerous code otherwise?

I know that bugs can come out of doing such optimizations, but Python has a good battery of tests that can help find them. Obviously tests can't give us 100% assurance that everything works as expected, but they are a very good starting point.

Bugs can happen with every change to the code base, but the code base changes...

Cesare
On Mon, Apr 6, 2009 at 7:28 AM, Cesare Di Mauro <cesare.dimauro@a-tono.com> wrote:
The Language Reference says nothing about the effects of code optimizations. I think it's a very good thing, because we can do some work here with constant folding.
Unfortunately the language reference is not the only thing we have to worry about. Unlike languages like C++, where compiler writers have the moral right to modify the compiler as long as they stay within the weasel-words of the standard, in Python, users' expectations carry value. Since the language is inherently not that fast, users are not all that focused on performance (if they were, they wouldn't be using Python). Unsurprising behavior OTOH is valued tremendously. -- --Guido van Rossum (home page: http://www.python.org/~guido/)
On Tue, 7 Apr 2009 07:27:29 am Guido van Rossum wrote:
Unfortunately the language reference is not the only thing we have to worry about. Unlike languages like C++, where compiler writers have the moral right to modify the compiler as long as they stay within the weasel-words of the standard, in Python, users' expectations carry value. Since the language is inherently not that fast, users are not all that focused on performance (if they were, they wouldn't be using Python). Unsurprising behavior OTOH is valued tremendously.
Speaking as a user, Python's slowness is *not* a feature. Anything reasonable which can increase performance is a Good Thing.

One of the better aspects of Python programming is that (in general) you can write code in the most natural way possible, with the least amount of scaffolding getting in the way. I'm with Raymond: I think it would be sad if "exp = long(mant * 2.0 ** 53)" did the exponentiation in the inner loop. Pre-computing that value outside the loop counts as scaffolding, and gets in the way of readability and beauty.

On the other hand, I'm with Guido when he wrote "it is certainly not right to choose speed over correctness". This is especially a problem for floating point optimizations, and I urge Cesare to be conservative in any f.p. optimizations he introduces, including constant folding.

So... +1 on the general principle of constant folding, -0.5 on any such optimizations which change the semantics of a f.p. operation. The only reason it's -0.5 rather than -1 is that (presumably) anyone who cares about floating point correctness already knows to never trust the compiler.

-- 
Steven D'Aprano
On Mon, Apr 6, 2009 at 5:10 PM, Steven D'Aprano <steve@pearwood.info> wrote:
On Tue, 7 Apr 2009 07:27:29 am Guido van Rossum wrote:
Unfortunately the language reference is not the only thing we have to worry about. Unlike languages like C++, where compiler writers have the moral right to modify the compiler as long as they stay within the weasel-words of the standard, in Python, users' expectations carry value. Since the language is inherently not that fast, users are not all that focused on performance (if they were, they wouldn't be using Python). Unsurprising behavior OTOH is valued tremendously.
Speaking as a user, Python's slowness is *not* a feature. Anything reasonable which can increase performance is a Good Thing.
One of the better aspects of Python programming is that (in general) you can write code in the most natural way possible, with the least amount of scaffolding getting in the way. I'm with Raymond: I think it would be sad if "exp = long(mant * 2.0 ** 53)" did the exponentiation in the inner-loop. Pre-computing that value outside the loop counts as scaffolding, and gets in the way of readability and beauty.
On the other hand, I'm with Guido when he wrote "it is certainly not right to choose speed over correctness". This is especially a problem for floating point optimizations, and I urge Cesare to be conservative in any f.p. optimizations he introduces, including constant folding.
So... +1 on the general principle of constant folding, -0.5 on any such optimizations which change the semantics of a f.p. operation. The only reason it's -0.5 rather than -1 is that (presumably) anyone who cares about floating point correctness already knows to never trust the compiler.
Unfortunately, historically well-meaning attempts at adding constant-folding have more than once introduced obscure bugs that were hard to reproduce and only discovered one or two releases later. This has little to do with caring about float correctness. It's more about the difficulty of debugging Heisenbugs. For all these reasons we should be super risk-averse in this area.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)
On Apr 07, 2009 at 02:10AM, Steven D'Aprano <steve@pearwood.info> wrote:
On the other hand, I'm with Guido when he wrote "it is certainly not right to choose speed over correctness". This is especially a problem for floating point optimizations, and I urge Cesare to be conservative in any f.p. optimizations he introduces, including constant folding.
The principle that I followed in doing constant folding was: "do what Python would do without constant folding enabled".

So if Python will generate

    LOAD_CONST 1
    LOAD_CONST 2
    BINARY_ADD

the constant folding code will simply replace them with a single

    LOAD_CONST 3

When working with this kind of optimization, the temptation is to apply it in every situation possible. For example, in other languages this

    a = b * 2 * 3

will be replaced by

    a = b * 6

In Python I can't do that, because b can be an object which has overloaded the * operator, so it *must* be called two times, once for 2 and once for 3. That's the way I chose to implement constant folding.

The only difference at this time concerns invalid operations, which will raise exceptions at compile time, not at run time. So if you write:

    a = 1 / 0

an exception will be raised at compile time. I decided to let the exception be raised immediately, because I think that it's better to detect an error at compile time than at execution time. However, this can lead to incompatibilities with existing code, so in the final implementation I will add a flag to struct compiling (in ast.c) so that this behaviour can be controlled programmatically (enabling the exception raising or not). I already introduced a flag in struct compiling to control constant folding, which can be completely disabled if desired.
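A small, purely illustrative example of the constraint Cesare describes (the class is made up for the demonstration): folding b * 2 * 3 into b * 6 would change how many times the overloaded operator runs.

    class Tracker:
        """Toy value whose __mul__ counts how often it is invoked."""
        calls = 0

        def __init__(self, value):
            self.value = value

        def __mul__(self, other):
            Tracker.calls += 1
            return Tracker(self.value * other)

    b = Tracker(5)
    result = b * 2 * 3        # evaluated as (b * 2) * 3
    print(result.value)       # 30
    print(Tracker.calls)      # 2 -- rewriting to b * 6 would make this 1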
So... +1 on the general principle of constant folding, -0.5 on any such optimizations which change the semantics of a f.p. operation. The only reason it's -0.5 rather than -1 is that (presumably) anyone who cares about floating point correctness already knows to never trust the compiler.
As Raymond stated, there's no loss of precision in working with constant folding on float data. That's because there will be a rounding and a store of the computed value each time a result is calculated. Other languages will use FPU registers to hold results as long as possible, keeping the full 80-bit extended precision (1 sign bit, 15-bit exponent, 64-bit mantissa). That's not the case for Python.

Cesare
2009/4/7 Cesare Di Mauro <cesare.dimauro@a-tono.com>:
The principle that I followed in doing constant folding was: "do what Python would do without constant folding enabled".

So if Python will generate

    LOAD_CONST 1
    LOAD_CONST 2
    BINARY_ADD

the constant folding code will simply replace them with a single

    LOAD_CONST 3

When working with this kind of optimization, the temptation is to apply it in every situation possible. For example, in other languages this

    a = b * 2 * 3

will be replaced by

    a = b * 6

In Python I can't do that, because b can be an object which has overloaded the * operator, so it *must* be called two times, once for 2 and once for 3.

That's the way I chose to implement constant folding.
That sounds sufficiently "super risk-averse" to me, so I'm in favour of constant folding being implemented with this attitude :-) Paul.
Cesare> The only difference at this time concerns invalid operations,
Cesare> which will raise exceptions at compile time, not at run time.
Cesare> So if you write:
Cesare> a = 1 / 0
Cesare> an exception will be raised at compile time.

I think I have to call *bzzzzt* here. This is a common technique used during debugging. Insert a 1/0 to force an exception (possibly causing the running program to drop into pdb). I think you have to leave that in.

Skip
On April 7, 2009 at 17:19:25, <skip@pobox.com> wrote:
Cesare> The only difference at this time concerns invalid operations,
Cesare> which will raise exceptions at compile time, not at run time.
Cesare> So if you write:
Cesare> a = 1 / 0
Cesare> an exception will be raised at compile time.
I think I have to call *bzzzzt* here. This is a common technique used during debugging. Insert a 1/0 to force an exception (possibly causing the running program to drop into pdb). I think you have to leave that in.
Skip
Many tests rely on this, and I have changed them from something like:

    try: 1 / 0
    except: ....

to

    try: a = 1; a / 0
    except: ....

But I know that it's a major source of incompatibilities, and in the final code I'll enable it only if the user demands it (through a flag).

Cesare
Well, I'm sorry Cesare, but this is unacceptable. As Skip points out, there is plenty of code that relies on this.

Also, consider what "problem" you are trying to solve here. What is the benefit to the user of moving this error to compile time? I cannot see any.

--Guido

On Tue, Apr 7, 2009 at 8:19 AM, Cesare Di Mauro <cesare.dimauro@a-tono.com> wrote:
On April 7, 2009 at 17:19:25, <skip@pobox.com> wrote:
Cesare> The only difference at this time concerns invalid operations,
Cesare> which will raise exceptions at compile time, not at run time.
Cesare> So if you write:
Cesare> a = 1 / 0
Cesare> an exception will be raised at compile time.
I think I have to call *bzzzzt* here. This is a common technique used during debugging. Insert a 1/0 to force an exception (possibly causing the running program to drop into pdb). I think you have to leave that in.
Skip
Many tests rely on this, and I have changed them from something like:

    try: 1 / 0
    except: ....

to

    try: a = 1; a / 0
    except: ....

But I know that it's a major source of incompatibilities, and in the final code I'll enable it only if the user demands it (through a flag).
Cesare
-- --Guido van Rossum (home page: http://www.python.org/~guido/)
On Tue, Apr 7, 2009 06:25PM, Guido van Rossum wrote:
Well I'm sorry Cesare but this is unacceptable. As Skip points out there is plenty of code that relies on this.
Guido, as I already said, in the final code the normal Python behaviour will be kept, and the stricter one will be enabled solely due to a flag set by the user.
Also, consider what "problem" you are trying to solve here. What is the benefit to the user of moving this error to compile time? I cannot see any.
--Guido
In my experience it's better to discover a bug at compile time rather than at run time.

Cesare
On Tue, Apr 7, 2009 at 9:46 AM, Cesare Di Mauro <cesare.dimauro@a-tono.com> wrote:
On Tue, Apr 7, 2009 06:25PM, Guido van Rossum wrote:
Well, I'm sorry Cesare, but this is unacceptable. As Skip points out, there is plenty of code that relies on this.
Guido, as I already said, in the final code the normal Python behaviour will be kept, and the stricter one will be enabled solely due to a flag set by the user.
Ok.
Also, consider what "problem" you are trying to solve here. What is the benefit to the user of moving this error to compile time? I cannot see any.
--Guido
In my experience it's better to discover a bug at compile time rather than at run time.
That's my point though, which you seem to be ignoring: if the user explicitly writes "1/0" it is not likely to be a bug. That's very different than "1/x" where x happens to take on zero at runtime -- *that* is likely a bug, but a constant folder can't detect that (at least not for Python).

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)
On Tue, Apr 7, 2009 07:22PM, Guido van Rossum wrote:
In my experience it's better to discover a bug at compile time rather than at run time.
That's my point though, which you seem to be ignoring: if the user explicitly writes "1/0" it is not likely to be a bug. That's very different than "1/x" where x happens to take on zero at runtime -- *that* is likely a bug, but a constant folder can't detect that (at least not for Python).
-- --Guido van Rossum (home page: http://www.python.org/~guido/)
I agree. My only concern was about a user mistyping something that could lead to an error interceptable by a stricter constant folder. But I admit that it's a rarer case compared to an explicit exception raise such as the one you showed.

Cesare
Cesare Di Mauro wrote:
On Tue, Apr 7, 2009 07:22PM, Guido van Rossum wrote:
In my experience it's better to discover a bug at compile time rather than at run time.

That's my point though, which you seem to be ignoring: if the user explicitly writes "1/0" it is not likely to be a bug. That's very different than "1/x" where x happens to take on zero at runtime -- *that* is likely a bug, but a constant folder can't detect that (at least not for Python).
-- --Guido van Rossum (home page: http://www.python.org/~guido/)
I agree. My only concern was about a user mistyping something that could lead to an error interceptable by a stricter constant folder.
But I admit that it's a rarer case compared to an explicit exception raise such as the one you showed.
I would guess that it is so rare as to not be worth bothering about.
On 07/04/2009, at 7:27 AM, Guido van Rossum wrote:
On Mon, Apr 6, 2009 at 7:28 AM, Cesare Di Mauro <cesare.dimauro@a-tono.com> wrote:
The Language Reference says nothing about the effects of code optimizations. I think it's a very good thing, because we can do some work here with constant folding.
Unfortunately the language reference is not the only thing we have to worry about. Unlike languages like C++, where compiler writers have the moral right to modify the compiler as long as they stay within the weasel-words of the standard, in Python, users' expectations carry value. Since the language is inherently not that fast, users are not all that focused on performance (if they were, they wouldn't be using Python). Unsurprising behavior OTOH is valued tremendously.
Rather than trying to get the optimizer to guess, why not have a "const" keyword and make it explicit? The result would be a symbol that essentially only exists at compile time - references to the symbol would be replaced by the computed value while compiling.

Okay, maybe that would suck a bit (no symbolic debug output). Yeah, I know... take it to python-wild-and-ill-considered-ideas@python.org.
[Antoine]
- Issue #5593: code like 1e16+2.9999 is optimized away and its result stored as a constant (again), but the result can vary slightly depending on the internal FPU precision.

[Guido]

I would just not bother constant folding involving FP, or only if the values involved have an exact representation in IEEE binary FP format.
+1 for removing constant folding for floats (besides conversion of -<literal>). There are just too many things to worry about: FPU rounding mode and precision, floating-point signals and flags, effect of compiler flags, and the potential benefit seems small. Mark
+1 for removing constant folding for floats (besides conversion of -<literal>). There are just too many things to worry about: FPU rounding mode and precision, floating-point signals and flags, effect of compiler flags, and the potential benefit seems small.
If you're talking about the existing peephole optimization that has been in place for years, I think it would be better to leave it as-is. It's better to have the compiler do the work than to have a programmer thinking he/she needs to do it by hand (reducing readability by introducing magic numbers). The code for the lsum() recipe is more readable with a line like:

    exp = long(mant * 2.0 ** 53)

than with

    exp = long(mant * 9007199254740992.0)

It would be a shame if code written like the former suddenly started doing the exponentiation in the inner loop, or if the code got rewritten by hand as shown.

The list of "things to worry about" seems like the normal list of issues associated with doing anything in floating point. Python is already FPU-challenged in that it offers nearly zero control over the FPU or direct access to signals and flags. Every step of a floating point calculation in Python gets written out to a PyFloat object and is squeezed back into a C double (potentially introducing double rounding if extended precision had been used by the FPU). Disabling the peepholer doesn't change this situation.

Raymond
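For what it's worth, this particular fold is exact, which is easy to confirm interactively (a throwaway check, not from the original post):

    # 2**53 is exactly representable as an IEEE-754 double, so both spellings
    # Raymond compares denote precisely the same float value.
    assert 2.0 ** 53 == 9007199254740992.0
    assert 2.0 ** 53 == float(2 ** 53)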
On Mon, Apr 6, 2009 at 9:05 PM, Raymond Hettinger <python@rcn.com> wrote:
The code for the lsum() recipe is more readable with a line like:
exp = long(mant * 2.0 ** 53)
than with
exp = long(mant * 9007199254740992.0)
It would be a shame if code written like the former suddenly started doing the exponentiation in the inner loop or if the code got rewritten by hand as shown.
Well, I'd say that the obvious solution here is to compute the constant 2.0**53 just once, somewhere outside the inner loop. In any case, that value would probably be better written as 2.0**DBL_MANT_DIG (or something similar).

As Antoine reported, the constant-folding caused quite a confusing bug report (issue #5593): the problem (when we eventually tracked it down) was that the folded constant was in a .pyc file, and so wasn't updated when the compiler flags changed.

Mark
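Python exposes the equivalent of DBL_MANT_DIG as sys.float_info.mant_dig (available since 2.6), so the hoisted version Mark suggests might look roughly like this sketch (the helper name is invented; the recipe itself used long() on Python 2):

    import sys

    # Computed once, outside any inner loop: 2.0 ** 53 on IEEE-754 doubles.
    SCALE = 2.0 ** sys.float_info.mant_dig

    def scaled_exponent(mant):
        # Hypothetical stand-in for the inner-loop line from the lsum() recipe.
        return int(mant * SCALE)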
On Mon, Apr 6, 2009 at 1:22 PM, Mark Dickinson <dickinsm@gmail.com> wrote:
On Mon, Apr 6, 2009 at 9:05 PM, Raymond Hettinger <python@rcn.com> wrote:
The code for the lsum() recipe is more readable with a line like:
exp = long(mant * 2.0 ** 53)
than with
exp = long(mant * 9007199254740992.0)
It would be a shame if code written like the former suddenly started doing the exponentiation in the inner loop or if the code got rewritten by hand as shown.
Do you have any evidence that people write lots of inner loops with constant expressions? In real-world code these just don't exist that much. The case of constant folding in Python is *much* weaker than in C because Python doesn't have real compile-time constants, so named "constants" are variables to the compiler.
Well, I'd say that the obvious solution here is to compute the constant 2.0**53 just once, somewhere outside the inner loop. In any case, that value would probably be better written as 2.0**DBL_MANT_DIG (or something similar).
So true.
As Antoine reported, the constant-folding caused quite a confusing bug report (issue #5593): the problem (when we eventually tracked it down) was that the folded constant was in a .pyc file, and so wasn't updated when the compiler flags changed.
Right. Over the years the peephole optimizer and constant folding have been a constant (though small) source of bugs. I'm not sure that there is much real-world value in it, and it is certainly not right to choose speed over correctness. -- --Guido van Rossum (home page: http://www.python.org/~guido/)
On Mon, Apr 6, 2009 at 2:22 PM, Mark Dickinson <dickinsm@gmail.com> wrote:
Well, I'd say that the obvious solution here is to compute the constant 2.0**53 just once, somewhere outside the inner loop. In any case, that value would probably be better written as 2.0**DBL_MANT_DIG (or something similar).
As Antoine reported, the constant-folding caused quite a confusing bug report (issue #5593): the problem (when we eventually tracked it down) was that the folded constant was in a .pyc file, and so wasn't updated when the compiler flags changed.
Another way of looking at this is that we have a ./configure option which affects .pyc output. Therefore, we should add a flag to the magic number, causing it to be regenerated as needed. Whether that's better or worse than removing constant folding I haven't decided.

I have such low expectations of floating point that I'm not surprised by bugs like this. I'm more surprised that people expect consistent, deterministic results...

-- 
Adam Olsen, aka Rhamphoryncus
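For reference, the bytecode magic number Adam mentions is exposed in today's Pythons as importlib.util.MAGIC_NUMBER (in the 2.x era it was imp.get_magic()); bumping it invalidates existing .pyc files:

    import importlib.util

    # Every .pyc header starts with this value; when it changes, previously
    # compiled files no longer match and are regenerated from source.
    print(importlib.util.MAGIC_NUMBER)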
participants (11): Adam Olsen, Andrew McNamara, Antoine Pitrou, Cesare Di Mauro, Guido van Rossum, Mark Dickinson, Paul Moore, Raymond Hettinger, skip@pobox.com, Steven D'Aprano, Terry Reedy