[pypy-commit] extradoc extradoc: various typos and some XXXs

cfbolz noreply at buildbot.pypy.org
Tue Jun 14 10:35:37 CEST 2011


Author: Carl Friedrich Bolz <cfbolz at gmx.de>
Branch: extradoc
Changeset: r3672:c87ad96e85d2
Date: 2011-06-12 23:00 +0200
http://bitbucket.org/pypy/extradoc/changeset/c87ad96e85d2/

Log:	various typos and some XXXs

diff --git a/talk/iwtc11/paper.tex b/talk/iwtc11/paper.tex
--- a/talk/iwtc11/paper.tex
+++ b/talk/iwtc11/paper.tex
@@ -358,9 +358,9 @@
 trace in the sense that the operations within it operate only on
 variables that are either among the input arguments of the second iteration
 or are produced within the second iteration. To ensure this we need
-to introduce a bit of formalism. 
+to introduce a bit of formalism.
 
-The original trace (prior too peeling) consists of three parts. 
+The original trace (prior to peeling) consists of three parts.
 A vector of input
 variables, $I=\left(I_1, I_2, \cdots, I_{|I|}\right)$, a list of non-
 jump operations and a single
@@ -525,7 +525,7 @@
 
 \subsection{Allocation Removals}
 By using escape analysis it is possible to identify objects that are
-allocated within the loop but never escapes it. That is the object are
+allocated within the loop but never escape it. That is, the objects are
 short-lived and no references to them exist outside the loop. This
 is performed by processing the operations from top to bottom and
 optimistically removing every \lstinline{new} operation. Later on if
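The allocation-removal pattern this hunk describes can be illustrated with a small sketch. The code below is hypothetical (it is not one of the paper's benchmarks): a wrapper object is allocated on every iteration but never escapes the loop, so after tracing and escape analysis its `new` can be removed and its attributes kept in registers.

```python
class Point(object):
    # hypothetical wrapper class; one instance is allocated per iteration
    def __init__(self, x, y):
        self.x = x
        self.y = y

def norm_sum(n):
    total = 0
    for i in range(n):
        p = Point(i, i + 1)   # allocation that never escapes the loop
        total += p.x + p.y    # only attribute reads; p is dead afterwards
    return total
```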
@@ -553,9 +553,9 @@
 In the general case, each virtual in the jump arguments is exploded into a
 vector of variables containing the values of all its attributes. If some
 of the attributes are themselves virtuals they are recursively exploded
-to make the vector contain only non virtual variables. Some care has
+to make the vector contain only non-virtual variables. Some care has
 to be taken to always place the attributes in the same order when
-performing this explosion. Notation becomes somewhat simpler if also every non
+performing this explosion. Notation becomes somewhat simpler if every
 non-virtual variable of the jump arguments is also exploded into a vector. This will
 be a vector containing the original variable only. To summarize, for
 every variable, $J_k$, of the original jump arguments, $J$, let
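The recursive explosion described above can be sketched in a few lines of Python. This is an illustrative model only, with hypothetical names (`Virtual`, `explode`); the real optimizer works on trace operations, not Python objects. Attributes are visited in a fixed (here: sorted) order, reflecting the paper's requirement that the order be consistent.

```python
class Virtual(object):
    # hypothetical stand-in for an object the optimizer treats as virtual
    def __init__(self, **fields):
        self.fields = fields

def explode(value):
    # Recursively flatten a virtual into the vector of its non-virtual
    # attribute values, visiting attributes in a fixed order.
    if isinstance(value, Virtual):
        result = []
        for name in sorted(value.fields):
            result.extend(explode(value.fields[name]))
        return result
    # a non-virtual variable becomes a vector containing only itself
    return [value]
```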
@@ -633,55 +633,61 @@
 \section{Benchmarks}
 
 The loop peeling optimization was implemented in the PyPy
-framework. That means that the jit compilers generated for all
+framework. That means that the JIT compilers generated for all
 interpreters implemented within PyPy can now take advantage of
 it. Benchmarks have been executed for a few different interpreters and
 we see improvements in several cases. The ideal loop for this optimization
 would be short numerical calculations with no failing guards and no
 external calls.
 
+XXX reason why we use small numerical kernels for benchmarks
+
+XXX we either need to explain that we use C++ or consistently use C
+
 \subsection{Python}
-The python interpreter of the PyPy framework is a complete python
+The python interpreter of the PyPy framework is a complete Python
 version 2.7 compatible interpreter. A set of numerical
-calculations where implemented in both python and in C and their
+calculations were implemented in both Python and C and their
 runtimes compared. The benchmarks are
 \begin{itemize}
 \item {\bf sqrt}: approximates the square root of $y$ as $x_\infty$
   with $x_0=y/2$ and $x_k = \left( x_{k-1} + y/x_{k-1} \right) /
   2$. There are three different versions of this benchmark where $x_k$
  is represented with different types of objects: ints, floats and
-  Fix16's. The later, Fix16, is a custom class that implements
-  fixpoint arithmetic with 16 bits precision. In python there is only
+  Fix16's. The latter, Fix16, is a custom class that implements
+  fixed-point arithmetic with 16 bits of precision. In Python there is only
   a single implementation of the benchmark that gets specialized
  depending on the class of its input argument, $y$, while in C,
  there are three different implementations.
-\item {\bf conv3}: one dimensional convolution with a kernel of fixed
+\item {\bf conv3}: one-dimensional convolution with a kernel of fixed
   size $3$.
-\item {\bf conv5}: one dimensional convolution with a kernel of fixed
+\item {\bf conv5}: one-dimensional convolution with a kernel of fixed
   size $5$.
-\item {\bf conv3x3}: two dimensional convolution with kernel of fixed
-  size $3 \times 3$ using a custom class to represent two dimensional
+\item {\bf conv3x3}: two-dimensional convolution with kernel of fixed
+  size $3 \times 3$ using a custom class to represent two-dimensional
   arrays.
-\item {\bf dilate3x3}: two dimensional dilation with kernel of fixed
+\item {\bf dilate3x3}: two-dimensional dilation with kernel of fixed
   size $3 \times 3$. This is similar to convolution but instead of
  summing over the elements, the maximum is taken. That places an
   external call to a max function within the loop that prevents some
   of the optimizations.
-\item {\bf sobel}: an low level video processing algorithm used to
-  locate edges in an image. It calculated the gradient magnitude
-  using sobel derivatives. The algorithm is in python implemented
+\item {\bf sobel}: a low-level video processing algorithm used to
+  locate edges in an image. It calculates the gradient magnitude
+  using Sobel derivatives. In Python the algorithm is implemented
   on top of a custom image class that is specially designed for the
   problem. It ensures that there will be no failing guards, and makes
  a lot of the two-dimensional index calculations loop invariant. The
-  intention there is twofold. It shows that the performance impact of
-  having wrapper classes giving objects some application specific
+  intention there is twofold. It shows that the performance impact of
+  having wrapper classes giving objects some application-specific
   properties is negligible. This is due to the inlining performed
   during the tracing and the allocation removal of the index objects
-  introduced. It also shows that it is possible to do some low level
-  hand optimizations of the python code and hide those optimization
+  introduced. It also shows that it is possible to do some low-level
+  hand optimizations of the Python code and hide those optimization
  under a nice interface without losing performance.
 \end{itemize}
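Two of the kernels listed above are simple enough to sketch directly from their descriptions. The following is a hedged reconstruction, not the paper's actual benchmark code: `sqrt_approx` implements the stated iteration $x_0 = y/2$, $x_k = (x_{k-1} + y/x_{k-1})/2$, and `conv3` a one-dimensional convolution with a fixed kernel of size 3 (the exact indexing and kernel orientation in the real benchmark are assumptions here).

```python
def sqrt_approx(y, n=20):
    # sqrt benchmark: x_0 = y/2, x_k = (x_{k-1} + y/x_{k-1}) / 2
    x = y / 2.0
    for _ in range(n):
        x = (x + y / x) / 2.0
    return x

def conv3(a, k):
    # conv3 benchmark: 1-D convolution with a kernel of fixed size 3;
    # kernel orientation chosen arbitrarily for this sketch
    return [k[0] * a[i] + k[1] * a[i + 1] + k[2] * a[i + 2]
            for i in range(len(a) - 2)]
```

In the Python version a single `sqrt_approx` suffices for ints, floats and Fix16 instances because the JIT specializes on the class of `y`, whereas the C comparison needs one implementation per type.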
 
+XXX we need Psyco numbers
+
 \subsection{Numpy}
 XXX: Fijal?
 