arigo at codespeak.net
Sun Jul 15 15:41:08 CEST 2007

Author: arigo
Date: Sun Jul 15 15:41:08 2007
New Revision: 45106

Modified:
Log:
My final review, 2nd half.

==============================================================================
Binary files. No diff available.

==============================================================================
+++ pypy/extradoc/talk/dyla2007/dyla.tex	Sun Jul 15 15:41:08 2007
@@ -398,7 +398,7 @@
described in section \ref{subsect:dynamic_compilers}.

In the sequel, we will focus on the relative advantages and
-inconveniences of the PyPy approach compared to the approach of
+inconveniences of the PyPy approach compared to
hand-writing a language implementation on top of an OO VM.

@@ -478,7 +478,7 @@
assuming an efficient dynamic compiler.

Of course, the translation toolchain, once written, can also be reused
-to implement other languages, and possibly tailored on a case-by-case
+to implement other languages, and tailored on a case-by-case
basis to fit the specific needs of a language.  The process is
incremental: we can add more features as needed instead of starting from
a maximal up-front design, and gradually improve the quality of the
@@ -509,15 +509,15 @@
\label{subsect:dynamic_compilers}

As mentioned above, the performance of the VMs generated by our
-translation framework are quite acceptable -- e.g. the Python VM
+translation framework is quite acceptable -- e.g. the Python VM
generated via C code is much faster than Jython running on the best
-JVMs.  Of course, the JIT compilers in these JVMs are essential to
+JVMs.  Moreover, the JIT compilers in these JVMs are essential to
achieve even this performance, which further proves the point that
-writing good OO VMs -- especially ones meant to support dynamic
+writing good general-purpose OO VMs -- especially ones meant to support dynamic
languages -- is a lot of work.

The deeper problem with the otherwise highly-tuned JIT compilers of the
-OO VMs is that they are not a very good match for running dynamic
+OO VMs is that they are not a very good match for running arbitrary dynamic
languages.  It might be possible to tune a general-purpose JIT compiler
enough and write the dynamic language implementation accordingly so
that most of the bookkeeping work involved in running the dynamic
@@ -525,13 +525,13 @@
this has not been demonstrated yet.\footnote
{Still in the draft stage, a proposed
extension to the Java bytecode \cite{invokedynamic} might help achieve
-better integration between the Java JITs and dynamic language
-implementations running on top of JVMs.}
+better integration between the Java JITs and some class of dynamic languages
+running on top of JVMs.}

By far the fastest Python implementation, Psyco \cite{psyco-software}
contains a hand-written language-specific dynamic compiler.  It works by
-specializing (parts of) Python functions by feeding runtime information
-back into the compiler (typically, but not exclusively, object types).
+specializing (parts of) Python functions based on runtime information
+fed back into the compiler (typically, but not exclusively, object types).
The reader is referred to \cite{Psyco-paper} for more details.

PyPy abstracts on this approach: its translation tool-chain is able to
@@ -540,14 +540,14 @@
the interpreter.  This is achieved by a pragmatic application of partial
evaluation techniques guided by a few hints added to the source of the
interpreter.  In other words, it is possible to produce a reasonably
-good language-specific JIT compiler and insert it into a VM, alongside
+good language-specific JIT compiler and insert it into a VM, along
with the necessary support code and the rest of the regular interpreter.

This result was one of the major goals and motivations for the whole
approach.  By construction, the JIT stays synchronized with its VM
-and with the language when it evolves,
-and any code written in the dynamic language runs
-correctly under the JIT.  Some very simple Python examples run more than
+and with the language when it evolves.  Also by construction, the JIT
+immediately supports (and is correct for) arbitrary input code.
+Some very simple Python examples run more than
100 times faster.  At the time of this writing this is still rather
experimental, and the techniques involved are well beyond the scope of
the present paper.  The reader is referred to \cite{D08.2} for more
@@ -564,8 +564,9 @@
\begin{itemize}
\item \emph{High-level languages are suitable to implement dynamic languages.}
They allow an interpreter to be written more abstractly, which has many
-advantages -- among them the avoidance of a proliferation of diverging
-implementations, and better ways to combine flexibility with efficiency.
+advantages.  Among these, it avoids the proliferation of diverging
+implementations, and gives implementers better ways to combine flexibility
+with efficiency.
Moreover, this is not incompatible with targeting and benefiting from
existing high-quality object-oriented virtual machines like those of
Java and .NET.
@@ -577,8 +578,8 @@
medium to large languages.  Unless large amounts of resources can be
invested, the resulting VMs are bound to have limitations which lead to
the emergence of many implementations, a fact that is taxing precisely
-for a community with limited resources.  (This is of course even more
-true for general-purpose VMs.)
+for a community with limited resources.  This is of course even more
+true for VMs that are meant to be general-purpose.
\end{itemize}

\noindent
@@ -590,8 +591,9 @@
Aside from the advantages described in section
\ref{sect:metaprogramming}, a translation toolchain need not be
standardized for inter-operability but can be tailored to the needs of
-each project.  Diversity is good; translation toolchains offset the need
-to attempt to standardize on a single OO VM.
+each project.
+\item \emph{Diversity is good.}  Meta-programming translation
+toolchains offset the need for standardization of general-purpose OO VMs.
\end{itemize}

The approach we outlined is actually just one in a very large, mostly