[pypy-commit] extradoc extradoc: XXXs

cfbolz noreply at buildbot.pypy.org
Mon Jun 27 16:04:21 CEST 2011


Author: Carl Friedrich Bolz <cfbolz at gmx.de>
Branch: extradoc
Changeset: r3795:73a66fe07d24
Date: 2011-06-27 15:49 +0200
http://bitbucket.org/pypy/extradoc/changeset/73a66fe07d24/

Log:	XXXs

diff --git a/talk/icooolps2011/jit-hints.pdf b/talk/icooolps2011/jit-hints.pdf
index c78b3b84550a3db53382fb1fb1a9a97c0596a4ef..afbc33e62d31d10aa178450e5a83b3e086fee9b8
GIT binary patch

[cut]

diff --git a/talk/iwtc11/paper.bib b/talk/iwtc11/paper.bib
--- a/talk/iwtc11/paper.bib
+++ b/talk/iwtc11/paper.bib
@@ -109,6 +109,16 @@
 	year = {2009}
 },
 
+@inproceedings{bolz_runtime_2011,
+	address = {Lancaster, {UK}},
+	title = {Runtime Feedback in a {Meta-Tracing} {JIT} for Efficient Dynamic Languages},
+	abstract = {Meta-tracing {JIT} compilers can be applied to a variety of different languages without explicitly encoding language semantics into the compiler. So far, they lacked a way to give the language implementor control over runtime feedback. This restricted their performance. In this paper we describe the mechanisms in {PyPy's} meta-tracing {JIT} that can be used to control runtime feedback in language-specific ways. These mechanisms are flexible enough to express classical {VM} techniques such as maps and runtime type feedback.},
+	booktitle = {{ICOOOLPS}},
+	publisher = {{ACM}},
+	author = {Bolz, Carl Friedrich and Cuni, Antonio and Fijałkowski, Maciej and Leuschel, Michael and Rigo, Armin and Pedroni, Samuele},
+	year = {2011}
+},
+
 @inproceedings{chang_tracing_2009,
 	address = {Washington, {DC}},
 	title = {Tracing for Web 3.0: Trace Compilation for the Next Generation Web Applications},
diff --git a/talk/iwtc11/paper.tex b/talk/iwtc11/paper.tex
--- a/talk/iwtc11/paper.tex
+++ b/talk/iwtc11/paper.tex
@@ -45,7 +45,6 @@
 \usepackage{listings}
 \usepackage{beramono}
 
-
 \definecolor{gray}{rgb}{0.3,0.3,0.3}
 
 \lstset{
@@ -444,9 +443,6 @@
 
 \section{Making Trace Optimizations Loop Aware}
 
-XXX make clear that the preamble is not necessarily the \emph{first} iteration
-of a loop
-
 Before the trace is passed to a backend that compiles it into machine code,
 it needs to be optimized to achieve better performance.
 The focus of this paper
@@ -486,6 +482,8 @@
 However, the peeled loop can then be optimized using the assumption that a
 previous iteration has happened.
 
+XXX (samuele): the point about the first iteration is hard to understand
+
 When applying optimizations to this two-iteration trace
 some care has to be taken as to how the arguments of the two
 \lstinline{jump} operations and the input arguments of the peeled loop are
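
To make the peeling discussion in the hunk above concrete, here is a minimal,
hand-written Python sketch; it is not taken from the paper or from PyPy's
optimizer, and the names Box, loop and loop_peeled as well as the invariant
type check are invented for illustration. In the real system the
transformation is applied to the trace IR, not to source code; the sketch only
simulates its effect.

class Box(object):
    def __init__(self, value):
        self.value = value


def loop(box, n):
    # unoptimized shape: the type check and the attribute load are repeated
    # on every iteration, even though they are loop invariant
    i = 0
    total = 0
    while i < n:
        assert type(box) is Box
        total += box.value + i
        i += 1
    return total


def loop_peeled(box, n):
    i = 0
    total = 0
    if i < n:
        # "preamble": one iteration traced separately; it performs the
        # invariant work once and keeps the results in local variables
        assert type(box) is Box
        v = box.value
        total += v + i
        i += 1
        # "peeled loop": optimized under the assumption that a previous
        # iteration already happened, so the check and the load are gone
        while i < n:
            total += v + i
            i += 1
    return total


assert loop(Box(7), 10) == loop_peeled(Box(7), 10)

In this sketch the preamble happens to coincide with the first iteration of
the loop; as the XXX notes in this very commit point out, that coincidence
does not hold in general.
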
@@ -752,8 +750,8 @@
 
 In the general case, each allocation-removed object in the jump arguments is exploded into a
 vector of variables containing the values of all registered
-fields\footnote{This is sometimes called \emph{scalar replacement}. XXX check
-whether that's true}. If some of the fields are themselves references to
+fields\footnote{This is sometimes called \emph{scalar replacement}.}.
+If some of the fields are themselves references to
 allocation-removed objects, they are recursively exploded
 to make the vector contain only concrete variables. Some care has
 to be taken to always place the fields in the same order when
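
As a rough illustration of the "exploding" step described in the hunk above,
here is a small self-contained Python sketch. VirtualObject, explode and the
variable names i0..i5 are invented stand-ins, not the data structures PyPy
actually uses.

class VirtualObject(object):
    """Stand-in for an allocation the optimizer removed; `fields` maps field
    names either to plain trace variables (strings here) or to nested
    VirtualObjects."""
    def __init__(self, fields):
        self.fields = fields


def explode(jump_args):
    # flatten every allocation-removed object into the values of its fields,
    # recursing so the result contains only concrete variables
    flat = []
    for arg in jump_args:
        if isinstance(arg, VirtualObject):
            # a fixed field order so both jump operations agree on the layout
            for name in sorted(arg.fields):
                flat.extend(explode([arg.fields[name]]))
        else:
            flat.append(arg)
    return flat


inner = VirtualObject({"x0": "i3", "x1": "i4"})   # itself allocation-removed
point = VirtualObject({"x": inner, "y": "i5"})
print(explode(["i0", point]))                      # ['i0', 'i3', 'i4', 'i5']

Sorting the field names is just one way to satisfy the requirement that both
jump sites lay the fields out in the same order.
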
@@ -945,7 +943,8 @@
 
 We can observe that PyPy (even without loop peeling) is orders of magnitude
 faster than either CPython or Psyco. This is due to the JIT compilation
-advantages and optimizations we discussed in XXX [ref to other paper]. Loop
+advantages and optimizations we discussed in previous work
+\cite{bolz_allocation_2011, bolz_runtime_2011}. Loop
 peeling gives an additional XXX on average, which makes benchmark times
 comparable with native-compiled C code. The remaining performance gap we attribute to
 the relative immaturity of PyPy's JIT assembler backend as well as missing
@@ -960,8 +959,10 @@
 \section{Related Work}
 \label{sec:related}
 
-All the optimizations presented here are completely standard
-\cite{muchnick_advanced_1997}. XXX
+Combining a one-pass optimization with loop peeling gives the effect of the
+completely standard loop invariant code motion optimizations
+\cite{muchnick_advanced_1997}. We do not claim any novelty in the effect, but
+we believe that our implementation scheme is a particularly simple one.
 
 Mike Pall, the author of LuaJIT\footnote{\texttt{http://luajit.org/}}, seems to
 have developed the described technique independently. There are no papers about

