[pypy-dev] Parallel translation?

exarkun at twistedmatrix.com
Sat Jan 9 23:38:28 CET 2010

On 5 Dec 2009, 04:49 pm, fijall at gmail.com wrote:
>On Sat, Dec 5, 2009 at 4:44 PM, Armin Rigo <arigo at tunes.org> wrote:
>>On Fri, Dec 04, 2009 at 06:18:13PM +0100, Antonio Cuni wrote:
>>>I agree that at this point in time we cannot or don't want to make
>>>annotation/rtyping/backend parallelizable, but it should definitely 
>>>be possible to just pass the -j flag to 'make' in an automatic way.
>>Of course, that is full of open problems too.  The main one is that 
>>each gcc process potentially consumes a lot of RAM, so just passing 
>>"-j" is not a great idea, as all the gccs are started in parallel.  
>>It looks like some obscure tweak is needed, like setting -j to a 
>>number that depends not only on the number of CPUs (as is classically 
>>done) but also on the total RAM of the system...
>>A bientot,
>I guess the original idea was to have a translation option that is
>passed as a -j flag to make, so one can specify the number of jobs
>wanted, instead of trying to guess it automatically.
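Armin's idea of capping -j by both CPU count and total RAM could be sketched roughly like this (a hypothetical helper, not anything in the PyPy tree; the per-gcc RAM estimate is a made-up tuning knob, and the sysconf calls are POSIX-only):

```python
import multiprocessing
import os

def guess_make_jobs(ram_per_gcc=1 << 30):
    """Guess a -j value for make, capped by CPUs and by total RAM.

    ram_per_gcc is a rough (hypothetical) estimate of the peak memory
    one gcc invocation needs; tune it for the codebase being compiled.
    """
    ncpus = multiprocessing.cpu_count()
    # Total physical RAM via sysconf (POSIX-only).
    total_ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    # Never go below one job, even on a tiny machine.
    return max(1, min(ncpus, total_ram // ram_per_gcc))
```

One could then pass the result as `make -j<N>` instead of a bare -j, which starts every compile at once.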

I poked around on this front a bit.  I couldn't find any code in PyPy 
which invokes make.  I did find 
though.  This seems to be where lists of C files are sent for 
compilation.  Is that right?

I thought about how to make this parallel.  The cheesy solution, of 
course, would be to start a few threads and have them do the compilation 
(which should be sufficiently parallel, since it's another process 
that's doing the actual work).  This is a bit complicated by the chdir 
calls in the code, though.  Also, maybe distutils isn't threadsafe.
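The thread-based approach, with the chdir problem dodged by handing each subprocess an explicit working directory instead of mutating the process-wide cwd, might look something like this sketch (the function names and the gcc command line are my own illustration, not PyPy code):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def compile_one(cfile, workdir, cc="gcc"):
    # Pass cwd= to the subprocess rather than calling os.chdir(), so
    # concurrent compilations cannot race on the process-wide cwd.
    subprocess.check_call(
        [cc, "-c", cfile, "-o", cfile.replace(".c", ".o")],
        cwd=workdir,
    )

def compile_all(cfiles, workdir, jobs=4, cc="gcc"):
    # Threads are enough here: the real work happens in the child
    # compiler processes, so the GIL is not a bottleneck.
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        futures = [pool.submit(compile_one, f, workdir, cc) for f in cfiles]
        for fut in futures:
            fut.result()  # re-raise any compilation failure
```

Whether distutils itself tolerates being driven from several threads is a separate question this sketch doesn't answer.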

I dunno if I'll think about this any further, but I thought I'd 
summarize what little I did figure out.

