[pypy-svn] r60632 - pypy/extradoc/talk/ecoop2009

antocuni at codespeak.net
Sat Dec 20 11:45:14 CET 2008


Author: antocuni
Date: Sat Dec 20 11:45:12 2008
New Revision: 60632

Modified:
   pypy/extradoc/talk/ecoop2009/clibackend.tex
   pypy/extradoc/talk/ecoop2009/intro.tex
   pypy/extradoc/talk/ecoop2009/jitgen.tex
   pypy/extradoc/talk/ecoop2009/main.tex
   pypy/extradoc/talk/ecoop2009/rainbow.tex
   pypy/extradoc/talk/ecoop2009/tlc.tex
Log:
several small fixes.  Some rewording in the cli backend section



Modified: pypy/extradoc/talk/ecoop2009/clibackend.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/clibackend.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/clibackend.tex	Sat Dec 20 11:45:12 2008
@@ -16,14 +16,13 @@
 
 \subsection{Flexswitches}
 
-As already explained, \dacom{I guess flexswitches will be introduced
-  in the previous sect.} \emph{flexswitch} is one of the key
+As already explained, \emph{flexswitch} is one of the key
 concepts allowing the JIT compiler generator to produce code which can
 be incrementally specialized and compiled at run time.
 
 A flexswitch is a special kind of switch which can be dynamically
-extended with new cases; intuitively, its behavior can be described
-well in terms of flow graphs. Indeed, a flexswitch can be considered 
+extended with new cases.  Intuitively, its behavior can be described
+well in terms of flow graphs: a flexswitch can be considered 
 as a special flow graph block where links to newly created blocks are
 dynamically added whenever new cases are needed. 
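
To make the idea concrete, a flexswitch can be sketched in plain Python as follows; this is only an illustrative model, not the actual backend code, and all names are invented:

class FlexSwitch(object):
    # A switch whose set of cases can grow at run time.  Each case maps a
    # value to the code (here, a plain callable) implementing the
    # corresponding block.
    def __init__(self, default):
        self.cases = {}         # value -> callable for that case
        self.default = default  # fallback; in the JIT it triggers compilation

    def add_case(self, value, fn):
        # dynamically extend the switch with a new case
        self.cases[value] = fn

    def execute(self, value, *args):
        # dispatch to the known case, or fall back to the default
        fn = self.cases.get(value, self.default)
        return fn(*args)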
 
@@ -58,10 +57,15 @@
 Since in .NET methods are the basic units of compilation, a possible
 solution consists in creating a new method 
 any time a new case has to be added to a flexswitch.
-In this way, whereas flow graphs without flexswitches are translated
-to a single method, the translation of flow graphs which can dynamically grow because of
-flexswitches will be scattered over several methods.
-Summarizing, the backend behaves in the following way:
+
+It is important to underline the difference between flow graphs and methods:
+the former are the logical units of code as seen by the JIT compiler, each of
+them being concretely implemented by \emph{one or more} methods.
+
+In this way, whereas flow graphs without flexswitches are translated to a
+single method, the translation of \emph{growable} flow graphs will be
+scattered over several methods.  Summarizing, the backend behaves in the
+following way:
 \begin{itemize}
 \item Each flow graph is translated into a collection of methods which
   can grow dynamically. Each collection contains at least one
@@ -70,49 +74,44 @@
   whenever a new case is added to a flexswitch.
 
 \item Each method, either primary or secondary, implements a certain
-  number of blocks, all belonging to the same flow graph. Among these blocks
-  there always exists an initial block whose input arguments 
-  might be passed as arguments of the method; however, for
-  implementation  reasons (see the details below) the input variables
-  of all blocks (including the initial one)
-  are implemented as local variables of the method. 
+  number of blocks, all belonging to the same flow graph.
 \end{itemize} 
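
The mapping between a growable flow graph and its collection of methods can be pictured with a small Python sketch; again, this is purely illustrative and all names are invented:

class FlowGraphCode(object):
    # The translated code of one flow graph: a primary method plus any
    # secondary methods generated later for new flexswitch cases.
    def __init__(self, primary_method):
        self.methods = [primary_method]   # a method id is an index here

    def add_secondary_method(self, method):
        # called whenever a new flexswitch case is compiled
        self.methods.append(method)
        return len(self.methods) - 1      # id of the newly added method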
 
-When  a new case is added to a flexswitch, new blocks are generated
-and translated by the backend in a new single method pointed
-by a delegate \footnote{\emph{Delegates} are the .NET equivalent of function pointers}
- of  which is stored in the code implementing the flexswitch,
-so that the method can be invoked later.
+When a new case is added to a flexswitch, the backend translates the new
+blocks into a single new method.  The newly created method is pointed to by a
+delegate\footnote{\emph{Delegates} are the .NET equivalent of function
+  pointers} stored in the flexswitch, so that it can be invoked later when
+needed.
 
 \subsubsection{Internal and external links}
 
-A link is called \emph{internal} if it connects two blocks implemented
-by the same method,
+A link is called \emph{internal} if it connects two blocks contained
+in the same method,
  \emph{external} otherwise.
 
-Following an internal link would  not be difficult in IL bytecode: a jump to
+Following an internal link is easy in IL bytecode: a jump to
 the corresponding code fragment in the same method can be emitted 
 to execute the new block, whereas the appropriate local variables can be
 used for passing arguments. 
-Also following an external link whose target is an initial block could
-be easily implemented, by just invoking the corresponding method.
 
+Following an external link whose target is an initial block can also
+be easily implemented by just invoking the corresponding method.
 What cannot be easily implemented in CLI is following an external link
 whose target is not an initial block; consider, for instance, the
 outgoing link of the block dynamically added in the right-hand side
-picture of Figure~\ref{flexswitch-fig}. How it is possible to pass the
-right arguments to the target block?
+picture of Figure~\ref{flexswitch-fig}. How is it possible to jump into
+the middle of a method?
 
 To solve this problem, every method contains some special code, called the
-\emph{dispatcher}; whenever a method is invoked, its dispatcher is
+\emph{dispatcher}: whenever a method is invoked, its dispatcher is
 executed first\footnote{The dispatcher should not be
 confused with the initial block of a method.} to
 determine which block has to be executed.
 This is done by passing to the method a 32-bit number, called 
 \emph{block id}, which uniquely identifies the next block of the graph to be executed.
-The high 2 bytes \dacom{word was meant as a fixed-sized group of bits} of a block id constitute the id of the method to which the block
-belongs, whereas the low 2 bytes constitute a progressive number univocally identifying
-each block implemented by the method.
+The high 2 bytes of a block id constitute the \emph{method id}, which
+uniquely identifies a method within a graph, whereas the low 2 bytes constitute
+a progressive number uniquely identifying a block within that method.
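
Assuming the layout just described (high 2 bytes for the method id, low 2 bytes for the block number), the encoding and the decoding performed by a dispatcher can be sketched as follows; the function names are invented for illustration:

def make_block_id(method_id, block_num):
    # pack the method id (high 2 bytes) and block number (low 2 bytes)
    assert 0 <= method_id < 2 ** 16 and 0 <= block_num < 2 ** 16
    return (method_id << 16) | block_num

def split_block_id(block_id):
    # inverse operation, as done by a dispatcher before jumping to a block
    return block_id >> 16, block_id & 0xFFFF

# e.g. split_block_id(make_block_id(1, 3)) == (1, 3)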
 
 The picture in Figure~\ref{block-id-fig} shows a graph composed of three methods (for
 simplicity, dispatchers are not shown); method ids are in red, whereas
@@ -204,7 +203,7 @@
   secondary methods of a graph must have the same signature.
 \end{itemize}
 
-Therefore, the only solution we came up with is defining a class
+Therefore, the solution we came up with is defining a class
 \lstinline{InputArgs} for passing sequences of arguments whose length
 and type are variable.
 \begin{small}

Modified: pypy/extradoc/talk/ecoop2009/intro.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/intro.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/intro.tex	Sat Dec 20 11:45:12 2008
@@ -31,10 +31,6 @@
 
 \subsection{PyPy and RPython}
 
-\anto{as Armin points out, the two CLI backends can be easily confused; what
-  about renaming the ``CLI Backend for flowgraphs'' into ``CLI bytecode
-  compiler''? Any better idea for the name?}
-
 \begin{figure}[h]
 \begin{center}
 \includegraphics[width=.6\textwidth]{diagram0}

Modified: pypy/extradoc/talk/ecoop2009/jitgen.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/jitgen.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/jitgen.tex	Sat Dec 20 11:45:12 2008
@@ -12,7 +12,7 @@
 \end{figure}
 
 The JIT generation framework uses partial evaluation techniques to generate a
-dynamic compiler from an interpreter; the idea is inspired by Psyco, which
+dynamic compiler from an interpreter; the idea is inspired by Psyco \cite{DBLP:conf/pepm/Rigo04}, which
 uses the same techniques but is manually written instead of being
 automatically generated.
 
@@ -161,7 +161,9 @@
 same type inference engine that is used on the source RPython program.
 This is called the \emph{hint-annotator}; it
 operates over input graphs that are already low-level instead of
-RPython-level, and propagates annotations that do not track types but
+RPython-level, \anto{we never make distinction between low-level and rpython-level 
+flowgraphs, do we? I propose to talk about ``intermediate flowgraphs''} 
+and propagates annotations that do not track types but
 value dependencies and manually-provided binding time hints.
 
 The normal process of the hint-annotator is to propagate the binding
@@ -226,3 +228,5 @@
 it prevents under-specialization: an unsatisfiable \texttt{hint(v1,
 concrete=True)} is reported as an error.
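
For illustration, this is roughly how such a hint appears inside an interpreter loop; the hint function below is only a stand-in mimicking the run-time behaviour (it returns its argument unchanged), since the real one is provided by the translation framework:

def hint(x, **flags):
    # stand-in: at run time a hint is the identity; the hint-annotator
    # only inspects the call site during translation
    return x

def interp_loop(bytecode):
    # toy interpreter loop: the program counter is forced to be a
    # compile-time value, so the loop can be specialized per opcode
    pc, acc = 0, 0
    while pc < len(bytecode):
        pc = hint(pc, concrete=True)
        op = bytecode[pc]
        if op == 'INC':
            acc += 1
        elif op == 'DOUBLE':
            acc *= 2
        pc += 1
    return acc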
 
+\anto{maybe we should at least mention promotion, and refer to the proper
+  section for details?}

Modified: pypy/extradoc/talk/ecoop2009/main.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/main.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/main.tex	Sat Dec 20 11:45:12 2008
@@ -14,13 +14,23 @@
 \usepackage{ifthen}
 \usepackage{xspace}
 \usepackage{listings}
+\usepackage{fancyvrb}
 \usepackage[pdftex]{graphicx}
 
 %\input{macros}
 
 \pagestyle{plain}
 
-\lstset{mathescape=true,language=Java,basicstyle=\tt,keywordstyle=\bf}
+%\lstset{mathescape=true,language=Java,basicstyle=\tt,keywordstyle=\bf}
+\lstset{language=Python,
+        basicstyle=\footnotesize\ttfamily,
+        keywordstyle=\color{blue}, % I couldn't find a way to make chars both bold and tt
+        frame=lines,
+        stringstyle=\color{blue},
+        fancyvrb=true,
+        xleftmargin=20pt,xrightmargin=20pt,
+        showstringspaces=false}
+
 
 %\renewcommand{\baselinestretch}{.98}
 \newboolean{showcomments}

Modified: pypy/extradoc/talk/ecoop2009/rainbow.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/rainbow.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/rainbow.tex	Sat Dec 20 11:45:12 2008
@@ -6,8 +6,8 @@
 inefficient. For this reason, we
 implemented a way for the compiler to try to avoid memory allocations in the
 residual code as long as possible. The idea is to try to keep new
-run-time instances "exploded": instead of a single run-time object allocated on
-the heap, the object is "virtualized" as a set
+run-time instances \emph{exploded}: instead of a single run-time object allocated on
+the heap, the object is \emph{virtualized} as a set
 of fresh local variables, one per field. Only when the object can be accessed from
 somewhere else is it actually allocated on the heap. The effect of this is similar to that of
 escape analysis \cite{Blanchet99escapeanalysis}, \cite{Choi99escapeanalysis},
@@ -16,7 +16,7 @@
 our very simple analysis).
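
The effect of virtualization on residual code can be illustrated with a contrived Python example, using an IntObj-like box as in the TLC examples later on; the code is illustrative only:

class IntObj(object):
    def __init__(self, value):
        self.value = value

def add_boxed(a, b):
    # without virtualization: a short-lived box is allocated on the heap
    tmp = IntObj(a + b)
    return tmp.value

def add_virtualized(a, b):
    # with virtualization: the box never materializes; its only field
    # is "exploded" into a fresh local variable
    tmp_value = a + b
    return tmp_value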
 
 It is not always possible to keep instances virtual.  The main
-situation in which it needs to be "forced" (i.e. actually allocated at
+situation in which it needs to be \emph{forced} (i.e. actually allocated at
 run-time) is when the pointer escapes to some non-virtual location like
 a field of a real heap structure.  Virtual instances still avoid the run-time
  allocation of most short-lived objects, even in non-trivial situations.  
@@ -66,10 +66,10 @@
 must be put in the source code of the interpreter.
 
 From a partial evaluation point of view, promotion is the converse of
-the operation generally known as "lift" \cite{XXX}.  Lifting a value means
+the operation generally known as \emph{lift} \cite{XXX}.  Lifting a value means
 copying a variable whose binding time is compile-time into a variable
 whose binding time is run-time – it corresponds to the compiler
-"forgetting" a particular value that it knew about.  By contrast,
+``forgetting'' a particular value that it knew about.  By contrast,
 promotion is a way for the compiler to gain \emph{more} information about
 the run-time execution of a program. Clearly, this requires
 fine-grained feedback from run-time to compile-time, thus a
@@ -77,27 +77,25 @@
 
 Promotion requires interleaving compile-time and run-time phases,
 otherwise the compiler can only use information that is known ahead of
-time. It is impossible in the "classical" approaches to partial
+time. It is impossible in the ``classical'' approaches to partial
 evaluation, in which the compiler always runs fully ahead of execution.
 This is a problem in many realistic use cases.  For example, in an
 interpreter for a dynamic language, there is mostly no information
 that can be clearly and statically used by the compiler before any
 code has run.
 
-A very different point of view on promotion is as a generalization of techniques
-that already exist in dynamic compilers as found in modern virtual machines for
-object-oriented language.  In this context feedback
-techniques are crucial for good results.  The main goal is to
-optimize and reduce the overhead of dynamic dispatching and indirect
-invocation.  This is achieved with variations on the technique of
-polymorphic inline caches \cite{hoelzle_optimizing_1991}: the dynamic lookups are cached and
-the corresponding generated machine code contains chains of
-compare-and-jump instructions which are modified at run-time.  These
-techniques also allow the gathering of information to direct inlining for even
-better optimization results. 
-\anto{What about this: While traditional PICs are only applied to indirect
-  calls, promotion is a more general operation that can be applied to any kind
-  of value, including instances of user-defined classes or integer numbers}
+A very different point of view on promotion is as a generalization of
+techniques that already exist in dynamic compilers as found in modern virtual
+machines for object-oriented languages, such as the \emph{Polymorphic Inline Cache}
+(PIC, \cite{hoelzle_optimizing_1991}) and its variations, whose main goal is
+to optimize and reduce the overhead of dynamic dispatching and indirect
+invocation: the dynamic lookups are cached and the corresponding generated
+machine code contains chains of compare-and-jump instructions which are
+modified at run-time.  These techniques also allow the gathering of
+information to direct inlining for even better optimization results. Compared
+to PICs, promotion is more general because it can be applied not only to
+indirect calls but to any kind of value, including instances of user-defined
+classes or integer numbers.
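
The shape of the residual code produced for a promoted value can be sketched as follows: like a polymorphic inline cache, it is a chain of compare-and-jump tests, but keyed on arbitrary values (here small integers) rather than on call targets.  The sketch is illustrative only:

def power_of_two_promoted(n):
    # residual code after the argument has been promoted and the values
    # 2 and 10 have been seen at run time; each branch stands for code
    # specialized on one value
    if n == 2:
        return 4
    elif n == 10:
        return 1024
    else:
        # unseen value: the real JIT would compile a new case and extend
        # the chain (via a flexswitch) instead of computing generically
        return 2 ** n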
 
 In the presence of promotion, dispatch optimization can usually be
 reframed as a partial evaluation task.  Indeed, if the type of the
@@ -257,13 +255,13 @@
 is a hint to promote the class of \lstinline{b}.  Although in general
 promotion is implemented through a flexswitch, in this case it is not needed
 as \lstinline{b} holds a \emph{virtual instance}, whose class is already
-known (as described in previous section).
+known (as described in the previous section).
 
 Then, the compiler knows the exact class of \lstinline{b}, thus it can inline
 the calls to \lstinline{lt}.  Moreover, inside \lstinline{lt} there is a
 call to \lstinline{a.int_o()}, which is inlined as well for the very same
-reason.
-
-Moreover, as we saw in section \ref{sec:virtuals}, the \lstinline{IntObj}
+reason.  Finally, as we saw in section \ref{sec:virtuals}, the \lstinline{IntObj}
 instance can be virtualized, so that the subsequent \lstinline{BR_COND} opcode
 can be compiled efficiently without needing any more flexswitch.
+
+\anto{We should show the very final version of the code, with all instances virtualized}

Modified: pypy/extradoc/talk/ecoop2009/tlc.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/tlc.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/tlc.tex	Sat Dec 20 11:45:12 2008
@@ -36,7 +36,7 @@
   \lstinline{ADD}, \lstinline{SUB}, etc.
 \item \textbf{Comparisons} like \lstinline{EQ}, \lstinline{LT},
   \lstinline{GT}, etc.
-\item \textbf{Object-oriented}: operations on objects: \lstinline{NEW},
+\item \textbf{Object-oriented} operations: \lstinline{NEW},
   \lstinline{GETATTR}, \lstinline{SETATTR}, \lstinline{SEND}.
 \item \textbf{List operations}: \lstinline{CONS}, \lstinline{CAR},
   \lstinline{CDR}.


