Thanks for your replies. I had forgotten -t-lowmem. I unloaded X and a few other things and tried it again. It translated without swapping. The python translator process finished holding 704M. I didn't notice it going any higher (but it may have). It used about 28 minutes of CPU time. I used the options "-t-lowmem -text" (-text because I'm not running under X).

The cc1 process used in excess of 800M again. Would it be possible to emit a set of files instead of a single file, and then compile them separately? Splitting it into 10 files would speed up compilation a lot because it wouldn't swap.

pypy-c starts and seems to work but is VERY slow. It spends a lot of time consuming 100% CPU without accessing the disk, and it also seems to be constantly allocating more RAM. Is this the issue related to slow module loading or something else? (help(threading) took almost 40 minutes, and failed with a missing popen.) Also, it takes a couple of seconds to respond after I hit enter no matter what the command is (even things like "x = 1").

I think implementing readline-like features for the interactive prompt would be very nice. At the very least a working backspace and command history.

I tried llvm but it failed while downloading from codespeak.net/pypy/llvm-gcc.cgi (a strange problem for a compiler) because the ethernet cable had come loose from the machine.

Overall I am very impressed and I can't wait for some new features to come out of this. The JIT will be neat, but I'm also interested in the thunk object space as a way of implementing a threading model in which each thread makes requests of other threads. The requests return immediately, but the returned object is a special object that is automatically replaced with the actual result when it is available. It seems like a thread-safe version of the thunk object space would do quite nicely for implementing this. (I learned about this idea from reading about Eiffel, but I've never actually programmed with it.)

Just some thoughts of my own, not a request. Once the whole thing gets a bit more stable I'll start playing with implementing this kind of thing myself (assuming I have some free time ;-).

Good work,
-Arthur

PS: I'll remember the "compile it by hand later" trick, but it wasn't needed this time.

On 8/28/2005, "Carl Friedrich Bolz" <cfbolz@gmx.de> wrote:
Hi Arthur!
Arthur Peters wrote:
I tried translating pypy on my AMD64 laptop and it translated fine, but the compilation did finally run the machine out of RAM.
Damn.
[snip]
PS: BTW, I'm willing to try it again if it would help. This time I will kill everything unnecessary and use 2-3GB of swap ;-)
I don't know if you did that already but you might want to use -t-lowmem as an option to translate_pypy which should reduce memory usage quite a bit.
Thanks for trying this out!
Cheers,
Carl Friedrich
Hello Arthur,

On Mon, Aug 29, 2005 at 07:26 -0500, Arthur Peters wrote:
pypy-c starts and seems to work but is VERY slow. It spends a lot of time consuming 100% CPU without accessing the disk, and it also seems to be constantly allocating more RAM. Is this the issue related to slow module loading or something else? (help(threading) took almost 40 minutes, and failed with a missing popen.) Also, it takes a couple of seconds to respond after I hit enter no matter what the command is (even things like "x = 1").
The main issue with the interactive command line being slow is that the bytecode compiler runs completely at application level. You will note that if you issue e.g. "x=range(10000)" it takes the same time as "x=range(1000)". The time is mostly consumed by compiling the expression; the actual bytecode interpretation is reasonably fast. One major area of work for the current PyPy code base is getting the compiler (and the ST->AST transformation) to be statically translatable. This should tremendously improve interactive performance and give a better feel.
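To make that concrete, here is a rough way to see the split for yourself (a minimal sketch that works in any standard Python interpreter; the exact numbers will of course differ):

    import time

    src = "x = range(10000)"

    t0 = time.time()
    code_obj = compile(src, "<input>", "exec")   # parsing + bytecode compilation
    t1 = time.time()
    exec(code_obj)                               # actual bytecode interpretation
    t2 = time.time()

    print("compile: %.6fs  run: %.6fs" % (t1 - t0, t2 - t1))

On pypy-c the first number dominates, because the compile step itself currently runs as interpreted application-level code.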
I think implementing readline-like features for the interactive prompt would be very nice. At the very least a working backspace and command history.
Yes, i agree. But our current translation driving needs some serious cleanup before we can tackle flexible ways to optionally link against and interface with libraries like readline.
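For what it's worth, at plain application level CPython already shows how little is needed once the library can be linked in (a minimal sketch, assuming CPython's standard readline module; merely importing it enables backspace, line editing and command history at the prompt):

    import code

    try:
        import readline  # side effect: enables editing and history for input()
    except ImportError:
        pass             # platforms without GNU readline

    code.interact(banner="toy prompt with readline-style editing")

The hard part for pypy-c is exactly the optional linking and translation driving, not the usage.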
I tried llvm but it failed while downloading from codespeak.net/pypy/llvm-gcc.cgi (a strange problem for a compiler) because the ethernet cable had come loose from the machine.
The problem is that we require LLVM from CVS, and the GCC frontend in particular, which is hard to install. Therefore we do the indirection via codespeak, which has a recent version installed and does the actual compilation. Not elegant, and a bit surprising, but convenient nevertheless.
Overall I am very impressed and I can't wait for some new features to come out of this. The JIT will be neat, but I'm also interested in the thunk object space as a way of implementing a threading model in which each thread makes requests of other threads. The requests return immediately, but the returned object is a special object that is automatically replaced with the actual result when it is available. It seems like a thread-safe version of the thunk object space would do quite nicely for implementing this. (I learned about this idea from reading about Eiffel, but I've never actually programmed with it.)
Makes all sense to me, and i am interested in this as well. Certainly a nice sprint topic at some point ...

cheers,
holger
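(For concreteness, a minimal sketch of the idea using plain Python threads. This is not PyPy's thunk object space, which works transparently at the object-space level; the class name Thunk and the explicit value() call here are illustrative assumptions only:)

    import threading

    class Thunk:
        # Placeholder returned immediately; blocks only when the value is used.
        def __init__(self, func, *args):
            self._result = None
            self._done = threading.Event()
            threading.Thread(target=self._run, args=(func, args)).start()

        def _run(self, func, args):
            self._result = func(*args)
            self._done.set()

        def value(self):
            self._done.wait()    # force the thunk: wait for the worker thread
            return self._result

    def slow_add(a, b):
        return a + b

    t = Thunk(slow_add, 2, 3)    # the "request" returns immediately
    print(t.value())             # blocks until the result is available -> 5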