
Hi,

running into the following traceback when translating. Any ideas? Is my machine too old to have certain features?

Thanks!
Best regards,
           Wim Lavrijsen

[translation:ERROR] Error:
[translation:ERROR]  Traceback (most recent call last):
[translation:ERROR]    File "translate.py", line 306, in main
[translation:ERROR]     drv.proceed(goals)
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/translator/driver.py", line 809, in proceed
[translation:ERROR]     return self._execute(goals, task_skip = self._maybe_skip())
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/translator/tool/taskengine.py", line 116, in _execute
[translation:ERROR]     res = self._do(goal, taskcallable, *args, **kwds)
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/translator/driver.py", line 286, in _do
[translation:ERROR]     res = func()
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/translator/driver.py", line 397, in task_pyjitpl_lltype
[translation:ERROR]     backend_name=self.config.translation.jit_backend, inline=True)
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/jit/metainterp/warmspot.py", line 50, in apply_jit
[translation:ERROR]     warmrunnerdesc.finish()
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/jit/metainterp/warmspot.py", line 210, in finish
[translation:ERROR]     self.annhelper.finish()
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/rpython/annlowlevel.py", line 240, in finish
[translation:ERROR]     self.finish_annotate()
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/rpython/annlowlevel.py", line 259, in finish_annotate
[translation:ERROR]     ann.complete_helpers(self.policy)
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/annotation/annrpython.py", line 176, in complete_helpers
[translation:ERROR]     self.complete()
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/annotation/annrpython.py", line 250, in complete
[translation:ERROR]     self.processblock(graph, block)
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/annotation/annrpython.py", line 448, in processblock
[translation:ERROR]     self.flowin(graph, block)
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/annotation/annrpython.py", line 508, in flowin
[translation:ERROR]     self.consider_op(block.operations[i])
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/annotation/annrpython.py", line 710, in consider_op
[translation:ERROR]     raise_nicer_exception(op, str(graph))
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/annotation/annrpython.py", line 707, in consider_op
[translation:ERROR]     resultcell = consider_meth(*argcells)
[translation:ERROR]    File "<4237-codegen /home/wlav/pypydev/pypy/pypy/annotation/annrpython.py:745>", line 3, in consider_op_getattr
[translation:ERROR]     return arg.getattr(*args)
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/annotation/unaryop.py", line 606, in getattr
[translation:ERROR]     attrdef = ins.classdef.find_attribute(attr)
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/annotation/classdef.py", line 217, in find_attribute
[translation:ERROR]     return self.locate_attribute(attr).attrs[attr]
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/annotation/classdef.py", line 212, in locate_attribute
[translation:ERROR]     self.generalize_attr(attr)
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/annotation/classdef.py", line 307, in generalize_attr
[translation:ERROR]     self._generalize_attr(attr, s_value)
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/annotation/classdef.py", line 297, in _generalize_attr
[translation:ERROR]     newattr.mutated(self)
[translation:ERROR]    File "/home/wlav/pypydev/pypy/pypy/annotation/classdef.py", line 137, in mutated
[translation:ERROR]     raise NoSuchAttrError(homedef, self.name)
[translation:ERROR]  NoSuchAttrError': (<ClassDef 'pypy.jit.backend.x86.regloc.AssemblerLocation'>, 'is_xmm')
[translation:ERROR]      .. v0 = getattr(loc_0, ('is_xmm'))
[translation:ERROR]      .. '(pypy.jit.backend.x86.assembler:671)Assembler386._assemble_bootstrap_direct_call'
[translation:ERROR] Processing block:
[translation:ERROR]  block@150 is a <class 'pypy.objspace.flow.flowcontext.SpamBlock'>
[translation:ERROR]  in (pypy.jit.backend.x86.assembler:671)Assembler386._assemble_bootstrap_direct_call
[translation:ERROR]  containing the following operations:
[translation:ERROR]        v0 = getattr(loc_0, ('is_xmm'))
[translation:ERROR]        v1 = is_true(v0)
[translation:ERROR]  --end--

--
WLavrijsen@lbl.gov    --    +1 (510) 486 6411    --    www.lavrijsen.net
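The NoSuchAttrError above means the RPython annotator never saw the attribute 'is_xmm' defined anywhere on the class it inferred for loc_0 (AssemblerLocation). A minimal, hypothetical sketch of RPython-style code that fails the same way (all names invented, not PyPy source):

    # Hypothetical sketch of code that makes the RPython annotator raise
    # NoSuchAttrError at translation time (invented names, not PyPy source).
    class Location(object):          # stands in for AssemblerLocation
        pass

    class StackLocation(Location):
        pass                         # note: nothing ever defines 'is_xmm'

    def assemble(loc):
        # 'loc' is annotated as the base class Location; reading an attribute
        # that the annotator never saw assigned anywhere in the program
        # triggers NoSuchAttrError, exactly as in the traceback above.
        if loc.is_xmm:
            return "xmm"
        return "gp"

As the follow-up below suggests, this was resolved by moving to the head of default rather than by anything machine-specific.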

Hi Maciej,
cool, now it translates again! Btw., the reason to move to head of default was b/c my translated pypy-c was crashing on the spot. The binary download of 1.5 and older locally translated pypy's didn't do that. Turns out there is something with my history file: the crash happens when it's being loaded by readline:

Python 2.7.1 (9113640a83ac+8d950d4e5c98+, May 24 2011, 20:01:17)
[PyPy 1.5.0-alpha0 with GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
using my private settings ...

Program received signal SIGSEGV, Segmentation fault.
0x08aae11c in pypy_g_OptHeap_emitting_operation ()

By playing a little, I find that the history file is problematic if it contains 1000 lines or more. Having 999 lines or less is fine either way. Doesn't matter for me now that I know, but it's rather odd.

Best regards,
           Wim

--
WLavrijsen@lbl.gov    --    +1 (510) 486 6411    --    www.lavrijsen.net
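For anyone who wants to reproduce the history-file observation, a small hypothetical helper along these lines should do (the file path is a placeholder; readline.read_history_file is the standard-library call, and whether this triggers the same crash depends on it exercising the same pure-Python readline code that the interactive prompt runs at startup):

    # Hypothetical repro helper: build a readline history file of N lines
    # and load it, mimicking what the interactive prompt does at startup.
    import readline

    def make_history(path, n):
        f = open(path, 'w')
        for i in range(n):
            f.write('print %d\n' % i)   # any n lines will do
        f.close()

    if __name__ == '__main__':
        make_history('/tmp/test_history', 1000)        # try 999 vs. 1000
        readline.read_history_file('/tmp/test_history')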

On Tue, May 24, 2011 at 3:46 PM, <wlavrijsen@lbl.gov> wrote:
Given that 1000 is the default cutoff for the JIT, and the gdb backtrace you have, it's pretty clear that there's a crash during compilation: readline loops over the lines in the history file, and hitting that 1000th line triggers the compilation.

Alex

--
"I disapprove of what you say, but I will defend to the death your right to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
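One quick way to test that theory, sketched here on the assumption that pypyjit.set_param accepts the documented "threshold=N" string, is to lower the JIT threshold so compilation kicks in after only a few iterations and see whether the crash moves with it:

    # Hypothetical check of the "1000th line triggers compilation" theory.
    # Run with the locally built pypy-c; pypyjit is PyPy's built-in module.
    try:
        import pypyjit
        pypyjit.set_param("threshold=5")   # compile after ~5 iterations
    except ImportError:
        pypyjit = None                     # not running on PyPy: no JIT

    for i in range(20):                    # well past the lowered threshold
        unicode("aap")                     # same operation as the reduced
                                           # script later in this thread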

Hi Wim, On 25/05/11 01:16, wlavrijsen@lbl.gov wrote:
Okay, then I can't live with it. :}
Not sure I understand: do you get a crash even now that you merged default? If this is the case, do you get it even with a trunk version of pypy? You don't have to translate it yourself; you can just download one of the nightly builds: http://buildbot.pypy.org/nightly/trunk/
Any idea how to start debugging this?
since you get the crash during the optimizeopt phase, you can try to selectively enable/disable the various optimization passes by using --jit enable_opts, e.g.:

./pypy-c --jit enable_opts=intbounds:rewrite:virtualize:string:heap:ffi:unroll

The line above is equivalent to a plain pypy-c, because it enables all optimizations. Try to remove some of them and see if you still get the crash.

ciao,
Anto
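Since trying the combinations by hand is tedious, a small driver can walk through them; the sketch below is hypothetical (the pypy-c path and test script name are placeholders) and simply drops one pass at a time from the full list, reporting the exit code of each run:

    # Hypothetical helper to narrow down --jit enable_opts; paths are placeholders.
    import subprocess

    ALL_OPTS = ["intbounds", "rewrite", "virtualize", "string", "heap", "ffi", "unroll"]
    PYPY_C   = "./pypy-c"             # locally translated binary
    TEST     = "crashing_test.py"     # whatever reproduces the crash

    def run_with(opts):
        cmd = [PYPY_C, "--jit", "enable_opts=" + ":".join(opts), TEST]
        return subprocess.call(cmd)   # negative return code means a signal (e.g. SIGSEGV)

    if __name__ == '__main__':
        print "all opts ->", run_with(ALL_OPTS)
        for dropped in ALL_OPTS:
            rest = [o for o in ALL_OPTS if o != dropped]
            print "without %-10s -> %s" % (dropped, run_with(rest))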

Hi Anto,
Not sure I understand: do you get a crash even now that you merged default?
yes, that's the case.
With pypy-c-jit-44414-19bf70a7d70c-linux.tar.bz2, I get no crash. Could it be as silly as having a buggy gcc or other tool used underneath when building? My local gcc is 4.4.1, whereas the above pypy build is 4.4.3.
Okay, that sounds like something I can try, but it'll take a bit to run through some of the possible combinations.

Thanks,
Wim

--
WLavrijsen@lbl.gov    --    +1 (510) 486 6411    --    www.lavrijsen.net

Anto,
there are several combinations that work with the pre-built pypy but not with my local version (except one: if only intbounds is selected, neither works). The largest set of combinations that does work is:

intbounds:rewrite:virtualize:ffi

Several others don't crash, but raise (and again, not in the pre-built):

RPython traceback:
  File "implement_18.c", line 205492, in optimize_loop
  File "implement_19.c", line 86345, in _optimize_loop
  File "implement_19.c", line 202071, in UnrollOptimizer_propagate_all_forward
  File "implement_19.c", line 248573, in Optimizer_reconstruct_for_next_iteration
  File "implement_19.c", line 247560, in Optimization_reconstruct_for_next_iteration
Fatal RPython error: NotImplementedError

Does any of that ring a bell? Changing gcc didn't work. For now, I'll live with the lower set of options. It may after all still be something specific to my box, which I intend to replace sometime soon.

Thanks,
Wim

--
WLavrijsen@lbl.gov    --    +1 (510) 486 6411    --    www.lavrijsen.net

Hi Wim,

As a summary, can you tell us if there is something wrong with the trunk version of PyPy, as compiled by your GCC? In this case it's a bug in the trunk version, which we have to fix, because we don't want to support only half the GCC configurations out there.

Or does the bug only ever occur in your own branch? In this case it's likely to be a bug that, although pre-existing, only shows up in the particular configuration of your branch. It would also be something we have to fix, but it's harder for us to get to the crash.

It's hard to just give general guidelines for how to debug such a crash, particularly because all levels of PyPy are involved. There is no real other option than "gdb pypy-c", which requires knowledge of how all levels play together in order to give the C source we get.

A bientôt,
Armin.

Hi Armin,
As a summary, can you tell us if there is something wrong with the trunk version of PyPy, as compiled by your GCC?
I've tried different versions of GCC (4.4 and 4.5): in both cases I get a pypy-c that crashes once the jit is invoked. It was just a guess, and it turns out not to be related to the version of GCC (or associated tools).
I did a fresh hg clone from bitbucket to make sure it's nothing local. I find that the trunk on my machine does NOT crash, but the branch from that same fresh clone does. There are only a few changes that are in trunk but not yet in the branch (I did a recent merge). Also, the runtime of the branch is somewhat different, as it is linked to one more library (libReflex.so). I'll update the branch once more; easy to test if that makes a difference. Also, I'll try once more on the branch with cppyy disabled. That should be identical to trunk.

Thanks,
Wim

--
WLavrijsen@lbl.gov    --    +1 (510) 486 6411    --    www.lavrijsen.net

Hi Armin,

okay, something has been fixed over the past couple of days: the readline now works! :) I'll move up to that then. My benchmark, however, does not, so no cigar just yet ... Taking Anto's list of jit options again, this is the longest that I can run successfully:

intbounds:rewrite:virtualize:string:ffi

If I add "unroll" I get the NotImplementedError that Maciej already explained. If I add "heap", I get the segfault. Note that the timing of the benchmark is very good. :) I.e. heap and unroll don't actually do much for my benchmark. Of course, I can't actually run the benchmark with trunk, since I need module cppyy for that. Hence, the crash in the benchmark may be due to problems in my own rpython (as opposed to whatever caused the issues with readline).

Thanks,
Wim

--
WLavrijsen@lbl.gov    --    +1 (510) 486 6411    --    www.lavrijsen.net

Hi, [replying to myself]
okay, something has been fixed over the past couple of days: the readline now works! :) I'll move up to that then.
too fast ... it sometimes works, and when it doesn't, it sometimes does under gdb or valgrind. Sometimes if I start a new shell, it works, then stops working again for another. (Although the downloaded pypy-c's have never crashed, mind.) This would make me think that there's an uninitialized variable somewhere. Is it possible to create such a thing with a buggy RPython construct?

That said, the results with gdb or valgrind are always the same, when they do show the crash:

Valgrind:
==16631== Process terminating with default action of signal 11 (SIGSEGV)
==16631==  Access not within mapped region at address 0x4
==16631==    at 0x8AADB0C: pypy_g_OptHeap_emitting_operation (in /home/wlav/pypydev/pypy/pypy/translator/goal/pypy-c)

gdb:
0x8aadb0c <pypy_g_OptHeap_emitting_operation+604>:  cmp    0x4(%edi),%eax
0x8aadb0f <pypy_g_OptHeap_emitting_operation+607>:  jl     0x8aadb37 <pypy_g_OptHeap_emitting_operation+647>
0x8aadb11 <pypy_g_OptHeap_emitting_operation+609>:  jmp    0x8aadbbf <pypy_g_OptHeap_emitting_operation+783>

Whereas I'd expect more variation with an uninitialized variable. Valgrind does not report any uninitialized variables either (other than PyObject_Free, which I'll presume is there for the same reason as for CPython (?)). When I select only the jit options that will not give me a segfault and then run under valgrind, this is the only complaint I get (other than PyObject_Free):

==16681==  Address 0x4b18010 is 3,352 bytes inside a block of size 4,100 free'd
==16681==    at 0x40268A6: free (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
==16681==    by 0x811282C: pypy_g__ll_dict_setitem_lookup_done__DICTPtr_Address_Ad (in /home/wlav/pypydev/pypy/pypy/translator/goal/pypy-c)

Thanks,
Wim

--
WLavrijsen@lbl.gov    --    +1 (510) 486 6411    --    www.lavrijsen.net

Hi Wim, On Tue, May 31, 2011 at 8:46 PM, <wlavrijsen@lbl.gov> wrote:
It definitely crashes during optimization of the trace produced by the JIT. I imagine that we could get somewhere by carefully inspecting the unoptimized operations that lead to the crash. Try enabling the dump with "PYPYLOG=jit-log-unopt:log", and then reproducing the crash. This should ideally leave behind a file called "log" with all the unoptimized loops seen so far, and the faulty one is the last one (i.e. the last section between "{jit-log-unopt" and "jit-log-unopt}").

A bientôt,
Armin.
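To pull the faulty loop out of such a log, a few lines of Python are enough; this is a hypothetical sketch that keeps only the last "{jit-log-unopt ... jit-log-unopt}" section (change the tag if a different log category, such as the jit-log-noopt mentioned below, ends up being the one that works):

    # Hypothetical helper: print the last jit-log-unopt section from a PYPYLOG file.
    def last_section(path, tag="jit-log-unopt"):
        current, last = None, None
        for line in open(path):
            if "{" + tag in line:        # a section opens
                current = []
            elif tag + "}" in line:      # a section closes
                last, current = current, None
            elif current is not None:
                current.append(line)
        # if the process died mid-dump, the still-open section is the faulty one
        return current if current is not None else last

    if __name__ == '__main__':
        section = last_section("log")
        if section:
            print "".join(section)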

Hi Armin,
thanks; but to get something useful, I had to use jit-log-noopt: (i.e. output to stderr, since buffers were not flushed; and a slightly different filter). Since readline in PyPy is a .py, I was able to reduce the loop that exhibits a segfault to this simple script:

for i in range(1001):
    unicode("aap")

although the trace is still far from simple, at least it's a lot shorter than what I got before. :) I've attached the trace.log file. And again, the problem optimization was "heap."

For comparison purposes, I grabbed the latest "green" nightly (44610) and made a trace with that one, too (the prebuilt editions don't crash). That file is trace_nightly.log (also attached). It is quite a bit shorter. Anything in there that rings a bell?

Thanks,
Wim

--
WLavrijsen@lbl.gov    --    +1 (510) 486 6411    --    www.lavrijsen.net

Hi Armin,

to come back to the problem that I've had with the jit ... it's gone now with current default tip. No idea what's up, but I'm plenty happy with things now working again for me. The "heap" optimization does have a significant effect on my code, though, so it's good to have it. I broke the fast path during one of the many merges with default when I had to resolve differences (to be fixed next week?), but the current numbers are still excellent:

$ ../../../translator/goal/pypy-cppyy-c-070511 bench1.py
rebuilding bench1.exe ...
warming up ...
C++ reference uses 0.032s
:::: cppyy interp cost:  0.523s (  16x)
:::: cppyy python cost:  0.997s (  31x)
:::: pycintex     cost: 50.554s (1579x)

So a factor of 50 improvement over the current CPython bindings, something that used to be (back in December) a factor of 10 "only." Of course, for a real comparison, I should backport cppyy to pypy as of Dec '10 to know which changes are to be attributed to what, but none of this is the final product, so for now it's simply good enough. :)

Best regards,
           Wim

--
WLavrijsen@lbl.gov    --    +1 (510) 486 6411    --    www.lavrijsen.net

participants (5):
- Alex Gaynor
- Antonio Cuni
- Armin Rigo
- Maciej Fijalkowski
- wlavrijsen@lbl.gov