[pypy-svn] r63829 - pypy/extradoc/talk/icooolps2009

cfbolz at codespeak.net cfbolz at codespeak.net
Wed Apr 8 14:49:20 CEST 2009


Author: cfbolz
Date: Wed Apr  8 14:49:19 2009
New Revision: 63829

Modified:
   pypy/extradoc/talk/icooolps2009/paper.tex
Log:
more tweaks to the benchmark section


Modified: pypy/extradoc/talk/icooolps2009/paper.tex
==============================================================================
--- pypy/extradoc/talk/icooolps2009/paper.tex	(original)
+++ pypy/extradoc/talk/icooolps2009/paper.tex	Wed Apr  8 14:49:19 2009
@@ -725,23 +725,7 @@
 10000000}
 \end{figure}
 
-To test the technique on a more realistic example, we did some
-preliminary benchmarks with PyPy's Python interpreter. The function we
-benchmarked as well as the results can be seen in Figure
-\ref{fig:bench-python}. The function is a bit arbitrary but executing it is
-still non-trivial, as a normal Python interpreter needs to dynamically dispatch
-nearly all of the involved operations (like indexing into the tuple, addition
-and comparison of \texttt{i}). We benchmarked PyPy's Python interpreter with the
-JIT disabled, with the JIT enabled and
-CPython\footnote{\texttt{http://python.org}} 2.5.4 (the reference implementation of
-Python). 
-
-The results show that the tracing JIT speeds up the execution of this Python
-function significantly, even outperforming CPython. To achieve this, the tracer
-traces through the whole Python dispatching machinery, automatically inlining
-the relevant fast paths.
-
-\begin{figure}
+\begin{figure}[h]
 \label{fig:bench-example}
 {\small
 \begin{verbatim}
@@ -769,6 +753,22 @@
 \texttt{f(10000000)} \label{fig:bench-python}}
 \end{figure}
 
+To test the technique on a more realistic example, we did some
+preliminary benchmarks with PyPy's Python interpreter. The function we
+benchmarked, as well as the results, can be seen in Figure
+\ref{fig:bench-python}. While the function may seem a bit arbitrary, executing
+it is still non-trivial, as a normal Python interpreter needs to dynamically
+dispatch nearly all of the involved operations, such as indexing into the
+tuple and the addition and comparison of \texttt{i}. We benchmarked PyPy's
+Python interpreter with the JIT disabled, with the JIT enabled, and
+CPython\footnote{\texttt{http://python.org}} 2.5.4 (the reference
+implementation of Python).
+
+The results show that the tracing JIT speeds up the execution of this Python
+function significantly, even outperforming CPython. To achieve this, the tracer
+traces through the whole Python dispatching machinery, automatically inlining
+the relevant fast paths.
+
 \section{Related Work}
 
 Applying a trace-based optimizer to an interpreter and adding hints to help the

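The exact function benchmarked appears only in Figure \ref{fig:bench-python} of
the paper and is not reproduced in the hunk above. A hypothetical sketch of the
kind of loop the paragraph describes, in which indexing into a tuple and the
addition and comparison of i all go through the interpreter's dynamic dispatch,
could look roughly like the following; the body is an assumption, only the call
f(10000000) comes from the figure caption.

    # Hypothetical sketch; the real benchmark function is in the paper's figure.
    # Every operation here is dynamically dispatched by a plain Python
    # interpreter: indexing into the tuple, the additions, and the comparison
    # of i against n.
    def f(n):
        t = (1, 2, 3)
        i = 0
        total = 0
        while i < n:                   # comparison of i, dispatched dynamically
            total = total + t[i % 3]   # tuple indexing and addition
            i = i + 1
        return total

    f(10000000)   # the call used for the timings reported in the paper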

