[pypy-svn] r60672 - pypy/extradoc/talk/ecoop2009

antocuni at codespeak.net antocuni at codespeak.net
Sun Dec 21 11:25:03 CET 2008


Author: antocuni
Date: Sun Dec 21 11:25:02 2008
New Revision: 60672

Modified:
   pypy/extradoc/talk/ecoop2009/benchmarks.tex
Log:
minor modifications



Modified: pypy/extradoc/talk/ecoop2009/benchmarks.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/benchmarks.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/benchmarks.tex	Sun Dec 21 11:25:02 2008
@@ -6,7 +6,7 @@
 \emph{stack-based interpreter}, \emph{boxed arithmetic} and \emph{dynamic lookup} of
 methods and attributes.
 
-In the following sections, we will show some benchmarks that show how our
+In the following sections, we present some benchmarks that show how our
 generated JIT can handle all these features very well.
 
 To measure the speedup we get with the JIT, we run each program three times:
@@ -20,7 +20,7 @@
 \end{enumerate}
 
 Moreover, for each benchmark we also show the time taken by running the
-equivalent program written in C\# \footnote{The sources for both TLC and C\#
+equivalent program written in C\#\footnote{The sources for both TLC and C\#
   programs are available at:
   http://codespeak.net/svn/pypy/extradoc/talk/ecoop2009/benchmarks/}
 
@@ -102,7 +102,7 @@
 than $10^7$, we did not run the interpreted program as it would have taken too
 much time, without adding anything to the discussion.
 
-As we can see, the code generated by the JIT can be up to ~1800 times faster
+As we can see, the code generated by the JIT can be up to about 1800 times faster
 than the non-jitted case.  Moreover, it often runs at the same speed as the
 equivalent program written in C\#, being only 1.5 times slower in the worst case.
 
@@ -174,7 +174,7 @@
 type of \lstinline{obj}, thus it can remove the overhead of dynamic dispatch
 and inline the method call.  Moreover, since \lstinline{obj} never escapes the
 function, it is \emph{virtualized} and its field \lstinline{value} is stored
-as a local variable.  As a result, the generated code results in a simple loop
+as a local variable.  As a result, the generated code turns out to be a simple loop
 doing additions in-place.
 
 \begin{table}[ht]
@@ -217,11 +217,8 @@
  \cite{hoelzle_optimizing_1991}, which requires a guard check at each iteration.
 \end{itemize}
 
-\anto{maybe we should move the following paragraph to
-  abstract/introduction/conclusion?}
-
 Despite being only a microbenchmark, this result is very important as it proves
 that our strategy of intermixing compile time and runtime can yield better
 performance than current techniques.  The result is even more impressive if
-we consider dynamically typed languages as TLC are usually considered much
+we take into account that dynamically typed languages such as TLC are usually considered much
 slower than the statically typed ones.


