[pypy-svn] r60540 - in pypy/extradoc/talk/ecoop2009: . benchmarks

antocuni at codespeak.net
Wed Dec 17 14:53:38 CET 2008


Author: antocuni
Date: Wed Dec 17 14:53:36 2008
New Revision: 60540

Added:
   pypy/extradoc/talk/ecoop2009/benchmarks/
   pypy/extradoc/talk/ecoop2009/benchmarks/factorial.cs
   pypy/extradoc/talk/ecoop2009/benchmarks/factorial.tlc
      - copied unchanged from r60489, pypy/branch/oo-jit/pypy/jit/tl/factorial.tlc
   pypy/extradoc/talk/ecoop2009/benchmarks/fibo.cs
   pypy/extradoc/talk/ecoop2009/benchmarks/fibo.tlc
      - copied unchanged from r60525, pypy/branch/oo-jit/pypy/jit/tl/fibo.tlc
Modified:
   pypy/extradoc/talk/ecoop2009/benchmarks.tex
   pypy/extradoc/talk/ecoop2009/tlc.tex
Log:
Move the paragraph about TLC features to the TLC section, and add the benchmarks written in C#



Modified: pypy/extradoc/talk/ecoop2009/benchmarks.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/benchmarks.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/benchmarks.tex	Wed Dec 17 14:53:36 2008
@@ -1,35 +1,12 @@
 \section{Benchmarks}
 
-\cfbolz{I think this should go to the beginning of the description of the TLC as
-it explains why it is written as it is written}
-
-Despite being very simple and minimalistic, \lstinline{TLC} is a good
-candidate as a language to run benchmarks, as it has some of the features that
-makes most of current dynamic languages so slow:
-
-\begin{itemize}
-
-\item \textbf{Stack based VM}: this kind of VM requires all the operands to be
-  on top of the evaluation stack.  As a consequence programs spend a lot of
-  time pushing and popping values to/from the stack, or doing other stack
-  related operations.  However, thanks to its simplicity this is still the
-  most common and preferred way to implement VMs.
-
-\item \textbf{Boxed integers}: integer objects are internally represented as
-  an instance of the \lstinline{IntObj} class, whose field \lstinline{value}
-  contains the real value.  By having boxed integers, common arithmetic
-  operations are made very slow, because each time we want to load/store their
-  value we need to go through an extra level of indirection.  Moreover, in
-  case of a complex expression, it is necessary to create many temporary
-  objects to hold intermediate results.
-
-\item \textbf{Dynamic lookup}: attributes and methods are looked up at
-  runtime, because there is no way to know in advance if and where an object
-  have that particular attribute or method.
-\end{itemize}
+In Section \ref{sec:tlc-features}, we saw that TLC provides most of the
+features that usually make dynamically typed languages so slow, such as a
+\emph{stack-based VM}, \emph{boxed arithmetic} and \emph{dynamic lookup} of
+methods and attributes.
 
 In the following sections, we will show some benchmarks that show how our
-generated JIT can handle all the features above very well.
+generated JIT can handle all these features very well.
 
 To measure the speedup we get with the JIT, we run each program three times:
 
@@ -41,6 +18,12 @@
   been done, so we are actually measuring how good is the code we produced.
 \end{enumerate}
 
+Moreover, for each benchmark we also show the time taken by running the
+equivalent program written in C\#.  By comparing the results against C\#, we
+can see how close the performance of the generated JIT comes to that of a
+manually written, statically typed implementation running on a mature
+virtual machine.
+
 The benchmarks have been run on machine XXX with hardware YYY etc. etc.
 
 \subsection{Arithmetic operations}
@@ -77,7 +60,10 @@
 
 As we can see, the code generated by the JIT is almost 500 times faster than
 the non-jitted case, and it is only about 1.5 times slower than the same
-algorithm written in C\#, which can be considered the optimal goal.
+algorithm written in C\#: the difference in speed is probably due both to the
+fact that the current CLI backend emits slightly suboptimal code and to the
+fact that the underlying .NET JIT compiler is highly optimized for the
+bytecode produced by C\# compilers.
 
 \subsection{Object-oriented features}
 

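As an aside on the measurement methodology above: the third run, in which
compilation has already happened, corresponds to the usual warm-up trick when
benchmarking on a JIT-based VM.  The following self-contained C# sketch only
illustrates that idea on the .NET JIT; it is not the harness actually used for
the TLC benchmarks, and the names (WarmupTiming, Work) are made up:

    using System;

    class WarmupTiming
    {
      // Toy workload standing in for a benchmark; purely illustrative.
      static int Work(int n)
      {
        int res = 1;
        for (int i = 1; i <= n; i++)
          res = res * i + 1;
        return res;
      }

      // Time a single call of Work().
      static double Time(int n)
      {
        DateTime start = DateTime.UtcNow;
        Work(n);
        DateTime stop = DateTime.UtcNow;
        return (stop - start).TotalSeconds;
      }

      public static void Main(string[] args)
      {
        int n = Convert.ToInt32(args[0]);
        // First timed call: includes the time the .NET JIT spends compiling
        // Work() to machine code.
        double first = Time(n);
        // Second timed call: compilation is already done, so this measures
        // only the quality of the generated code.
        double second = Time(n);
        Console.WriteLine("first run:  {0} seconds", first);
        Console.WriteLine("second run: {0} seconds", second);
      }
    }
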
Added: pypy/extradoc/talk/ecoop2009/benchmarks/factorial.cs
==============================================================================
--- (empty file)
+++ pypy/extradoc/talk/ecoop2009/benchmarks/factorial.cs	Wed Dec 17 14:53:36 2008
@@ -0,0 +1,23 @@
+using System;
+
+class Factorial
+{
+  public static void Main(string[] args)
+  {
+    int n = Convert.ToInt32(args[0]);
+    DateTime start, stop;
+    start = DateTime.UtcNow;
+    int res = factorial(n);
+    stop = DateTime.UtcNow;
+    double secs = (stop-start).TotalSeconds;
+    Console.WriteLine("C#:            {0} ({1} seconds)", res, secs);
+  }
+
+  public static int factorial(int n)
+  {
+    int res=1;
+    for(int i=1; i<=n; i++)
+      res *= i;
+    return res;
+  }
+}

Added: pypy/extradoc/talk/ecoop2009/benchmarks/fibo.cs
==============================================================================
--- (empty file)
+++ pypy/extradoc/talk/ecoop2009/benchmarks/fibo.cs	Wed Dec 17 14:53:36 2008
@@ -0,0 +1,27 @@
+using System;
+
+class Fibo
+{
+  public static void Main(string[] args)
+  {
+    int n = Convert.ToInt32(args[0]);
+    DateTime start, stop;
+    start = DateTime.UtcNow;
+    int res = fibo(n);
+    stop = DateTime.UtcNow;
+    double secs = (stop-start).TotalSeconds;
+    Console.WriteLine("C#:            {0} ({1} seconds)", res, secs);
+  }
+
+  public static int fibo(int n)
+  {
+    int a = 0;
+    int b = 1;
+    while (--n > 0) {
+      int sum = a+b;
+      a = b;
+      b = sum;
+    }
+    return b;
+  }
+}

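(For reference, both C# programs can be compiled and run with any standard C#
toolchain; with Mono, for example, "mcs fibo.cs" followed by "mono fibo.exe 40"
should do, where the argument is the value of n.  The exact compiler command
depends on the installed toolchain.)
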
Modified: pypy/extradoc/talk/ecoop2009/tlc.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/tlc.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/tlc.tex	Wed Dec 17 14:53:36 2008
@@ -42,6 +42,35 @@
 the VM needs to do all these checks at runtime; in case one of the check
 fails, the execution is simply aborted.
 
+\subsection{TLC features}
+\label{sec:tlc-features}
+
+Despite being very simple and minimalistic, \lstinline{TLC} is a good
+candidate as a language to test our JIT generator, as it has some of the
+features that make most current dynamic languages so slow:
+
+\begin{itemize}
+
+\item \textbf{Stack-based VM}: this kind of VM requires all the operands to
+  be on top of the evaluation stack.  As a consequence, programs spend a lot
+  of time pushing and popping values to/from the stack, or performing other
+  stack-related operations.  However, thanks to its simplicity, this is still
+  the most common and preferred way to implement VMs.
+
+\item \textbf{Boxed integers}: integer objects are internally represented as
+  instances of the \lstinline{IntObj} class, whose field \lstinline{value}
+  contains the actual value.  Because integers are boxed, common arithmetic
+  operations become very slow, as every load or store of their value has to
+  go through an extra level of indirection.  Moreover, in the case of
+  complex expressions, it is necessary to create many temporary objects to
+  hold intermediate results.
+
+\item \textbf{Dynamic lookup}: attributes and methods are looked up at
+  runtime, because there is no way to know in advance whether and where an
+  object has a particular attribute or method.
+\end{itemize}
+
+
 \subsection{TLC examples}
 
 As we said above, TLC exists only at bytecode level; to ease the development

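To make the cost of boxed integers and dynamic lookup described in the new
"TLC features" subsection more tangible, here is a small self-contained C#
sketch.  It is purely illustrative: the classes below mirror TLC's IntObj but
are not part of the TLC sources, and the exact costs depend on the VM.

    using System;

    // Hypothetical mirror of TLC's boxed integers, for illustration only.
    abstract class Obj
    {
      // Dynamic lookup: the concrete class is known only at runtime, so
      // every operation goes through a virtual dispatch.
      public abstract Obj Add(Obj other);
      public abstract int Value { get; }
    }

    class IntObj : Obj
    {
      int value;   // the real value, hidden behind an extra indirection

      public IntObj(int value) { this.value = value; }

      public override int Value { get { return value; } }

      public override Obj Add(Obj other)
      {
        // Every intermediate result allocates a fresh temporary object.
        return new IntObj(this.value + other.Value);
      }
    }

    class BoxingDemo
    {
      public static void Main()
      {
        // Unboxed: a single machine addition.
        int fast = 2 + 3;

        // Boxed: two allocations for the operands, one for the result,
        // plus virtual calls and field loads to compute the same sum.
        Obj a = new IntObj(2);
        Obj b = new IntObj(3);
        Obj sum = a.Add(b);

        Console.WriteLine("{0} == {1}", fast, sum.Value);
      }
    }

This is the kind of overhead that the generated JIT is meant to remove in the
common case where the operands turn out to be integers.
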

