<br><br><div class="gmail_quote">On Mon, Oct 17, 2011 at 6:12 AM, Bengt Richter <span dir="ltr"><<a href="mailto:bokr@oz.net">bokr@oz.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div><div></div><div class="h5">On 10/17/2011 12:10 AM Armin Rigo wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>> Hi,
>>
>> On Sun, Oct 16, 2011 at 23:41, David Cournapeau <cournape@gmail.com> wrote:
>>> Interesting to know. But then, wouldn't this limit the speed gains to
>>> be expected from the JIT?
>>
>> Yes, to some extent. It cannot give you the last bit of performance
>> improvement you could expect from arithmetic optimizations, but (as
>> usual) you already get the several-times improvements of e.g. removing
>> the boxing and unboxing of float objects. Personally I'm wary of
>> going down that path, because it means that the results we get could
>> suddenly change their least significant digit(s) when the JIT kicks
>> in. At the least, there are multiple tests in the standard Python test
>> suite that would fail because of that.
>>
>>> And I am not sure I understand how you can "not go there" if you want
>>> to vectorize code to use SIMD instruction sets?
>>
>> I'll leave fijal to answer this question in detail :-) I suppose that
>> the goal is first to use SIMD when explicitly requested in the RPython
>> source, in the numpy code that operates on matrices, and not to do the
>> harder job of automatically unrolling and SIMD-ing loops containing
>> Python float operations. But even the latter could be done without
>> giving up on the idea that all Python operations should behave in a
>> bit-exact way (e.g. by using SIMD on 64-bit floats, not on 32-bit
>> floats).
>>
>> A bientôt,
>>
>> Armin.
>
> I'm wondering how you handle high-level loop optimizations vs
> floating-point order-sensitive calculations. E.g., if a source loop
> has z[i] = a*b*c, might you hoist b*c without considering that
> assert a*b*c == a*(b*c) might fail, as in
>
> >>> a=b=1e-200; c=1e200
> >>> assert a*b*c == a*(b*c)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> AssertionError
> >>> a*b*c, a*(b*c)
> (0.0, 1e-200)
>
> Regards,
> Bengt Richter

No, you would never hoist b * c, because b * c isn't an operation in that loop. The only ops that exist are:

t1 = a * b
t2 = t1 * c
z[i] = t2
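
Here's one way to see that concretely (just a sketch at the CPython bytecode level, using a stand-in statement with the placeholder names z, i, a, b, c; the trace the JIT records has the same left-to-right shape as the ops above):

>>> import dis
>>> dis.dis(compile("z[i] = a*b*c", "<demo>", "exec"))

The disassembly contains exactly two multiplications: the first computes a * b, the second multiplies that intermediate by c. A standalone b * c never exists, so there is nothing for a hoisting pass to pull out of the loop.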

Even if we did do arithmetic reassociation (which we don't, yet), you can't do it on floats.
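
And it doesn't even take underflow like in the example above; a small interpreter-session sketch (nothing PyPy-specific) shows that reassociating a plain addition already flips the last bit:

>>> (0.1 + 0.2) + 0.3
0.6000000000000001
>>> 0.1 + (0.2 + 0.3)
0.6

That's exactly the kind of least-significant-digit change Armin is talking about above.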
"The people's good is the highest law." -- Cicero<br><br>
</div>