Hello Arthur, On Mon, Aug 29, 2005 at 07:26 -0500, Arthur Peters wrote:
pypy-c starts and seems to work but is VERY slow. It spends a lot of time consuming 100% CPU and not accessing the disk, and it seems to be constantly allocating more RAM. Is this the issue related to slow module loading, or something else? (help(threading) took almost 40 minutes, and failed with a missing popen.) It also takes a couple of seconds to respond after I hit enter, no matter what the command is (even things like "x = 1").
The main issue with the interactive command line being slow is that the bytecode compiler runs completely at application level. You will note that e.g. "x=range(10000)" takes about the same time as "x=range(1000)": the time is mostly spent compiling the expression, while the actual bytecode interpretation is reasonably fast. One major area of work for the current pypy code base is getting the compiler (and the ST->AST transformation) to be statically translatable. This should tremendously increase interactive performance and give a much better feel.
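To illustrate why compile time rather than execution time dominates here, this is a small sketch using CPython's compile() and timeit (a hypothetical illustration, not the PyPy code itself): the two source strings compile in roughly the same time even though one builds a list ten times larger.

```python
import timeit

src_small = "x = range(1000)"
src_large = "x = range(10000)"

# Time only the compile step for both sources; the strings differ by a
# single character, so compilation cost is essentially identical.
t_compile_small = timeit.timeit(lambda: compile(src_small, "<input>", "exec"), number=1000)
t_compile_large = timeit.timeit(lambda: compile(src_large, "<input>", "exec"), number=1000)

# Time executing the pre-compiled bytecode, which is the cheap part
# once the compiler is out of the way.
code_large = compile(src_large, "<input>", "exec")
t_exec_large = timeit.timeit(lambda: exec(code_large, {}), number=1000)
```

If the compiler itself is interpreted at application level, every interactive line pays that compile cost through the interpreter, which is where the multi-second latency comes from.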
I think implementing readline-like features for the interactive prompt would be very nice. At the very least, a working backspace and command history.
Yes, I agree. But our current translation driver needs some serious cleanup before we can tackle flexible ways to optionally link against and interface with libraries like readline.
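For reference, this is what the readline integration buys in CPython today (a minimal sketch of the optional-dependency pattern, not the PyPy translation machinery): merely importing the module gives input() line editing, backspace handling, and in-session history, and the import can be made optional so the prompt still works without the C library linked in.

```python
try:
    import readline  # optional: exactly the kind of library linking discussed above
except ImportError:
    readline = None  # the prompt still works, just without editing/history

if readline is not None:
    # Lines entered at the prompt are normally recorded automatically;
    # add_history() records one explicitly, retrievable with up-arrow.
    readline.add_history("x = 1")
```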
I tried llvm but it failed while downloading from codespeak.net/pypy/llvm-gcc.cgi (a strange problem for a compiler) because the ethernet cable had come loose from the machine.
The problem is that we require LLVM from CVS, and the gcc frontend in particular, which is hard to install. Therefore we do the indirection via codespeak, which has a recent version installed and does the actual compilation. Not elegant, and a bit surprising, but convenient nevertheless.
Overall I am very impressed and I can't wait for some new features to come out of this. The JIT will be neat, but I'm also interested in the thunk object space as a way of implementing a threading model in which each thread makes requests of other threads. The requests return immediately, but the returned object is a special object that is automatically replaced with the actual result when it is available. It seems like a thread-safe version of the thunk object space would work quite nicely for implementing this. (I learned about this idea from reading about Eiffel, but I've never actually programmed with it.)
Makes sense to me, and I am interested in this as well. Certainly a nice sprint topic at some point ... cheers, holger
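The "request returns immediately, result fills in later" model described above can be sketched with ordinary threads (a hypothetical illustration, not PyPy's thunk space: the real thunk space replaces the placeholder transparently, whereas here the caller must explicitly force the value).

```python
import threading

class Thunk:
    """Placeholder that blocks only when the value is actually needed."""
    def __init__(self):
        self._ready = threading.Event()
        self._value = None

    def _fill(self, value):
        self._value = value
        self._ready.set()

    def force(self):
        self._ready.wait()  # block until the worker thread has answered
        return self._value

def request(func, *args):
    """Run func in another thread; return a Thunk immediately."""
    thunk = Thunk()
    threading.Thread(target=lambda: thunk._fill(func(*args))).start()
    return thunk
```

Usage: `t = request(sum, range(10))` returns at once; `t.force()` later yields 45. The thunk object space would make the force step implicit, triggering it on any operation that touches the placeholder.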