[pypy-svn] r60659 - pypy/extradoc/talk/ecoop2009

antocuni at codespeak.net
Sat Dec 20 20:58:57 CET 2008


Author: antocuni
Date: Sat Dec 20 20:58:54 2008
New Revision: 60659

Modified:
   pypy/extradoc/talk/ecoop2009/benchmarks.tex
   pypy/extradoc/talk/ecoop2009/clibackend.tex
   pypy/extradoc/talk/ecoop2009/conclusion.tex
   pypy/extradoc/talk/ecoop2009/jitgen.tex
   pypy/extradoc/talk/ecoop2009/tlc.tex
Log:
some minor changes


Modified: pypy/extradoc/talk/ecoop2009/benchmarks.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/benchmarks.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/benchmarks.tex	Sat Dec 20 20:58:54 2008
@@ -41,7 +41,7 @@
 much better.  At the first iteration, the classes of the two operands of the
 multiplication are promoted; then, the JIT compiler knows that both are
 integers, so it can inline the code to compute the result.  Moreover, it can
-\emph{virtualize} (see section \ref{sec:virtuals} all the temporary objects, because they never escape from
+\emph{virtualize} (see Section \ref{sec:virtuals}) all the temporary objects, because they never escape from
 the inner loop.  The same remarks apply to the other two operations inside
 the loop.
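
To make the effect concrete, here is a rough Python sketch; the class names
(Obj, IntObj) are illustrative only and do not come from the actual TLC
sources.  Before compilation each multiplication pays for dynamic dispatch, a
type check and the allocation of a temporary boxed result; once the operand
classes have been promoted and the temporaries virtualized, only the raw
multiplication remains.

    class Obj:
        def mul(self, other):
            raise TypeError("unsupported operands")

    class IntObj(Obj):
        def __init__(self, value):
            self.value = value
        def mul(self, other):
            if not isinstance(other, IntObj):       # runtime type check
                raise TypeError("unsupported operands")
            return IntObj(self.value * other.value)  # fresh temporary object

    def interpreted_step(a, b):
        # before JIT compilation: dispatch, type check and a heap
        # allocation on every iteration
        return a.mul(b)

    def jitted_step(a_value, b_value):
        # after promoting the operand classes to IntObj and virtualizing
        # the temporary result, only the raw multiplication is left
        return a_value * b_value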
 
@@ -167,7 +167,7 @@
 code we wrote uses two classes and a \lstinline{virtual} method call to
 implement this behaviour.
 
-However, our generated JIT does not compile the whole function at
+As already discussed, our generated JIT does not compile the whole function at
 once. Instead, it compiles and executes code chunk by chunk, waiting until it
 knows enough information to generate highly efficient code.  In particular,
 at the time it emits the code for the inner loop it knows exactly the

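As a purely hypothetical illustration of the point made in the hunk above
(the class names below are invented and are not the benchmark's actual
sources): once the exact class reaching the inner loop is known, the virtual
call can be replaced by a guard plus the inlined method body.

    class Add1:
        def apply(self, n):
            return n + 1

    class Sub1:
        def apply(self, n):
            return n - 1

    def interpreted_loop(op, n, times):
        for _ in range(times):
            n = op.apply(n)          # virtual call dispatched at every iteration
        return n

    def specialized_loop(op, n, times):
        # what the JIT can emit once the class of 'op' has been promoted to Add1
        for _ in range(times):
            assert type(op) is Add1  # guard: the promoted class still holds
            n = n + 1                # inlined body of Add1.apply
        return n
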
Modified: pypy/extradoc/talk/ecoop2009/clibackend.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/clibackend.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/clibackend.tex	Sat Dec 20 20:58:54 2008
@@ -6,7 +6,7 @@
 From the implementation point of view, the JIT generator is divided into a
 frontend and several backends.  The goal of the frontend is to generate a JIT
 compiler which works as described in the previous sections.  Internally, the
-frontend represents the compiled code as \emph{flow graphs}, and the role of
+JIT represents the compiled code as \emph{flow graphs}, and the role of
 the backends is to translate flow graphs into machine code.
 
 At the time of writing, three backends have been implemented: one for Intel
@@ -20,7 +20,7 @@
 JIT-compilation, each layer removing a different kind of overhead.  By
 operating at a higher level, our JIT can potentially do a better job than the
 .NET one in some contexts, as our benchmarks demonstrate (see
-section~\ref{sec:benchmarks}).  On the other hand, the lower-level .NET JIT is
+Section~\ref{sec:benchmarks}).  On the other hand, the lower-level .NET JIT is
 very good at producing machine code, much more so than PyPy's own \emph{x86}
 backend, for example.  By combining the strengths of both we can get highly
 efficient machine code.
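
A minimal sketch of the frontend/backend split mentioned above, with
deliberately simplified data structures that only approximate PyPy's real
flow graph model: the frontend produces graphs made of blocks of operations,
and each backend (such as the x86 and CLI ones) implements the same compile
entry point for its own target.

    class Operation:
        def __init__(self, opname, args, result):
            self.opname = opname      # e.g. 'int_add', 'int_mul'
            self.args = args
            self.result = result

    class Block:
        def __init__(self, operations, exits):
            self.operations = operations
            self.exits = exits        # links to the successor blocks

    class FlowGraph:
        def __init__(self, startblock):
            self.startblock = startblock

    class Backend:
        # interface shared by the concrete backends (e.g. x86, CLI)
        def compile(self, graph):
            raise NotImplementedError  # emit native code or CLI bytecode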

Modified: pypy/extradoc/talk/ecoop2009/conclusion.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/conclusion.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/conclusion.tex	Sat Dec 20 20:58:54 2008
@@ -54,6 +54,12 @@
 \cite{Blanchet99escapeanalysis}, \cite{Choi99escapeanalysis} our algorithm is
 totally simple-minded, but it is still useful in practice.
 
+\commentout{
+\section{Future work}
+
+XXX to be written
+}
+
 \section{Conclusion}
 
 high level structure:

Modified: pypy/extradoc/talk/ecoop2009/jitgen.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/jitgen.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/jitgen.tex	Sat Dec 20 20:58:54 2008
@@ -1,7 +1,7 @@
 \section{Automatic generation of JIT compilers}
 
 Traditional JIT compilers are hard to write, time consuming, hard to evolve,
-etc. etc.
+etc. etc. \anto{we need a better introductory sentence}
 
 \commentout{
 \begin{figure}[h]

Modified: pypy/extradoc/talk/ecoop2009/tlc.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/tlc.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/tlc.tex	Sat Dec 20 20:58:54 2008
@@ -45,7 +45,7 @@
 Obviously, not all the operations are applicable to all types. For example,
 it is not possible to \lstinline{ADD} an integer and an object, or to read an
 attribute from an object which does not provide it.  Being dynamically typed,
-the VM needs to do all these checks at runtime; in case one of the check
+the interpreter needs to do all these checks at runtime; in case one of the checks
 fails, the execution is simply aborted.
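
The kind of check involved can be sketched as follows in Python; the class
and function names are placeholders rather than the actual TLC interpreter
code.  The ADD case only succeeds when both operands turn out to be integer
objects, and execution is aborted otherwise.

    class IntObj:
        def __init__(self, value):
            self.value = value

    class ObjInstance:
        def __init__(self):
            self.attrs = {}          # attributes looked up by name at runtime

    def interp_ADD(a, b):
        # the operand types are only known at runtime, so they must be checked
        if isinstance(a, IntObj) and isinstance(b, IntObj):
            return IntObj(a.value + b.value)
        raise RuntimeError("ADD applied to incompatible operands")  # abort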
 
 \subsection{TLC properties}
@@ -82,7 +82,8 @@
 As we said above, TLC exists only at bytecode level; to ease the development
 of TLC programs, we wrote an assembler that generates TLC bytecode. Figure \ref{fig:tlc-abs}
 shows a simple program that computes the absolute value of
-the given integer.
+the given integer.  In the subsequent sections, we will examine step-by-step 
+how the generated JIT compiler manages to produce a fully optimized version of it.
 
 \begin{figure}[h]
 \begin{center}
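
For reference, the program shown in the figure is equivalent to the following
Python function; the actual TLC assembler mnemonics appear in the figure
itself, so this sketch only conveys what the bytecode computes.

    def abs_value(n):
        # branch on the sign of the argument, negate if negative
        if n < 0:
            return -n
        return n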


