[pypy-svn] pypy default: merge docs changes

Alex Perry commits-noreply at bitbucket.org
Thu Mar 17 19:04:12 CET 2011


Author: Alex Perry <alex.perry at ieee.org>
Branch: 
Changeset: r42748:9abd94c2bd68
Date: 2011-03-14 19:11 +0000
http://bitbucket.org/pypy/pypy/changeset/9abd94c2bd68/

Log:	merge docs changes

diff --git a/pypy/doc/config/objspace.std.optimized_int_add.txt b/pypy/doc/config/objspace.std.optimized_int_add.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.optimized_int_add.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Optimize the addition of two integers a bit. Enabling this option gives small
-speedups.

diff --git a/pypy/doc/discussion/paper-wishlist.txt b/pypy/doc/discussion/paper-wishlist.txt
deleted file mode 100644
--- a/pypy/doc/discussion/paper-wishlist.txt
+++ /dev/null
@@ -1,27 +0,0 @@
-Things we would like to write papers about
-==========================================
-
-- object space architecture + reflective space
-- stackless transformation
-- composable coroutines
-- jit:
-  - overview paper
-  - putting our jit into the context of classical partial evaluation
-  - a jit technical paper too, probably
-
-- sandboxing
-
-Things about which writing a paper would be nice, which need more work first
-============================================================================
-
-- taint object space
-- logic object space
-
-- jit
-
-  - with some more work: how to deal in a JIT backend with less-that-
-      full-function compilation unit
-
-  - work in progress (Anto?): our JIT on the JVM
-  - (later) removing the overhead of features not used, e.g. thunk space or
-      another special space

diff --git a/pypy/doc/config/objspace.usemodules._stackless.txt b/pypy/doc/config/objspace.usemodules._stackless.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._stackless.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-Use the '_stackless' module. 
-
-Exposes the `stackless` primitives, and also implies a stackless build. 
-See also :config:`translation.stackless`.
-
-.. _`stackless`: ../stackless.html

diff --git a/pypy/doc/config/objspace.nofaking.txt b/pypy/doc/config/objspace.nofaking.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.nofaking.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-This option prevents the automagic borrowing of implementations of
-modules and types not present in PyPy from CPython.
-
-As such, it is required when translating, as then there is no CPython
-to borrow from.  For running py.py it is useful for testing the
-implementation of modules like "posix", but it makes everything even
-slower than it is already.

diff --git a/pypy/doc/config/objspace.std.withprebuiltchar.txt b/pypy/doc/config/objspace.std.withprebuiltchar.txt
deleted file mode 100644

diff --git a/pypy/doc/config/objspace.usemodules.pyexpat.txt b/pypy/doc/config/objspace.usemodules.pyexpat.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.pyexpat.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the (experimental) pyexpat module written in RPython, instead of the
-ctypes version which is used by default.

diff --git a/pypy/doc/config/translation.gcrootfinder.txt b/pypy/doc/config/translation.gcrootfinder.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.gcrootfinder.txt
+++ /dev/null
@@ -1,15 +0,0 @@
-Choose the method used to find roots in the GC. Boehm and refcounting have
-their own methods; this option is mostly interesting for framework GCs. For
-those you have a choice of several alternatives:
-
- - use a shadow stack (XXX link to paper), i.e. explicitly maintain a stack
-   of roots
-
- - use stackless to find roots by unwinding the stack.  Requires
-   :config:`translation.stackless`.  Note that this turned out to
-   be slower than just using a shadow stack.
-
- - use GCC and i386 specific assembler hackery to find the roots on the stack.
-   This is fastest but platform specific.
-
- - Use LLVM's GC facilities to find the roots.

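The shadow-stack alternative described in the deleted file above can be sketched in a few lines of plain Python. This is a toy illustration only: PyPy's real shadow stack lives in generated C code, and every name below is invented for the sketch.

```python
# Toy sketch of the shadow-stack idea: instead of crawling the native
# call stack, code explicitly pushes every live GC reference onto a
# side stack, which the collector can then enumerate as roots.

shadow_stack = []                  # the collector scans this list

def gc_roots():
    """What a collector would see: every currently pushed reference."""
    return list(shadow_stack)

def compute():
    obj = {"value": 42}            # stands in for a GC-managed object
    shadow_stack.append(obj)       # register obj as a root
    try:
        # any call made here may trigger a collection; obj survives
        # because the collector finds it via the shadow stack
        assert obj in gc_roots()
        return obj["value"]
    finally:
        shadow_stack.pop()         # unregister when the frame exits
```

The push/pop bookkeeping on every function is exactly the overhead that makes this approach slower than the platform-specific assembler hackery, but it is fully portable.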
diff --git a/pypy/doc/dev_method.txt b/pypy/doc/dev_method.txt
deleted file mode 100644
--- a/pypy/doc/dev_method.txt
+++ /dev/null
@@ -1,360 +0,0 @@
-Distributed and agile development in PyPy
-=========================================
-
-PyPy isn't just about producing code - it's also about how we produce code.
-The challenges of coordinating work within a community, and of making sure it
-meshes with the EU-funded parts of the project, are tricky indeed. Our aim is
-of course to disturb the community's way of working as little as possible, so
-that contributing to PyPy still feels fun and interesting (;-) but also to
-show the EU, as well as other funded projects, that open source ideas, tools
-and methods are really good ways of running development projects. So the way
-PyPy as a project is being run - distributed and agile - is something we
-think might be of use to other open source development projects and
-commercial projects.
-
-The main methods for achieving this are:
-
-  * Sprint driven development
-  * Sync meetings
-
-The main tools for achieving this are:
-
-  * py.test - automated testing
-  * Subversion - version control
-  * Transparent communication and documentation (mailinglists, IRC, tutorials
-    etc etc) 
-
-
-Sprint driven development:
---------------------------
-
-What is a sprint and why are we sprinting?
-
-Originally, the sprint methodology used in the Python community grew from
-practices within Zope3 development. The definition of a sprint is a "two-day
-or three-day focused development session, in which developers pair off
-together in a room and focus on building a particular subsystem".
-
-Other typical sprint factors:
-
-  * no more than 10 people (although other projects, as well as PyPy, have
-    been noted to have more than that. This is the recommendation, and it is
-    probably based on the idea of having a critical mass of people who can
-    interact/communicate and work without needing more than the absolute
-    minimum of coordination time. The sprints during 2005 and 2006 have had
-    ca 13-14 people each; the highest number of participants during a PyPy
-    sprint has been 24 developers)
-
-  * a coach (the coach is the "manager" of the sprint; he/she sets the goals,
-    prepares, leads and coordinates the work, tracks progress and makes it
-    visible to the team. Important to note here - PyPy has never had coaches
-    at its sprints. Instead we hold short status meetings with the whole
-    group, and decisions are made in the same way. So far this has worked
-    well, and we have still been able to achieve tremendous results under
-    stressful conditions such as releases. What we do have is a local
-    organizer, often a developer living in the area, and one more developer
-    who prepares and organizes the sprint. They do not "manage" the sprint
-    once it has started - their role is more logistical in nature. This
-    doesn't mean that we won't have use for the coach technique in the future).
-
-  * only coding (this is a tough one. There have been projects that have used
-    the sprinting method just to brainstorm and gather input, and PyPy has
-    had a similar brainstorming start-up sprint. So far, though, this is the
-    official line, although again, if you visit a PyPy sprint you will find
-    quite a lot of other small activities going on in subgroups as well -
-    planning sprints, documentation, coordinating our EU deliverables and
-    evaluation, etc. But don't worry - our main focus is programming ;-)
-
-  * using XP techniques (mainly pair programming and unit testing - PyPy
-    leans heavily on these aspects). Pairing up core developers with people
-    with different levels of knowledge of the codebase has meant that people
-    can get started and join in the development quite quickly. Many of our
-    participants (new to the project and the codebase) have said that pair
-    programming, in combination with working on the automated tests, has
-    been a great way of getting started. This is of course also a dilemma,
-    because our core developers might have to pair up with each other to
-    solve some extra hairy problems, which affects the structure and
-    effectiveness of the other pairs.
-
-It is a method that fits distributed teams well because it gets the team
-focused around clear (and challenging) goals while working collaboratively
-(pair programming, status meetings, discussions etc.) and at an accelerated
-pace (short increments and tasks, "doing" and testing instead of long
-start-up phases of planning and requirements gathering). This means that most
-of the time a sprint is a great way of getting results, and also of getting
-new people acquainted with the codebase. It is also a great method for
-dissemination and learning within the team, because of the pair programming.
-
-If sprinting is combined with actually moving around and holding the sprints
-close to the different active developer groups in the community, as well as
-during conferences like PyCon and EuroPython, the team will have an easier
-time recruiting new talent. It also vitalizes the community
-and increases the contact between the different Python implementation
-projects.
- 
-As always with methodologies, you have to adapt them to fit your project (and
-not the other way around, which is much too common). The PyPy team has been
-sprinting since early 2003 and has done 22 sprints so far: 19 in Europe, 2
-in the USA and 1 in Asia. Certain practices have proven more successful
-within this team, and those are the ones we summarize here.
-
-
-How is it done?
-+++++++++++++++
-
-There are several aspects of a sprint. In the PyPy team we focus on:
-1. Content (goal)
-2. Venue
-3. Information
-4. Process
-
-1. Content (goal) is discussed on mailing lists (pypy-dev) and on IRC ca one
-   month before the event. Beforehand we have some rough plans called "between
-   sprints", and the sprint plan is based on the status of those issues, but
-   also with a focus on upcoming releases and deliverables. Usually it's the
-   core developers who do this, but transparency and participation have
-   increased since we started our weekly "pypy-sync meetings" on IRC. The
-   sync meetings, in combination with the rough in-between planning, make it
-   easier for other developers to follow the progress and thus participate in
-   setting goals for the upcoming sprints.
-
-   The goal needs to be challenging or it won't rally the full effort of the
-   team, but it must not be unrealistic, as that tends to be very frustrating
-   and dissatisfying. It is also very important to take the participants into
-   account when you set the goal for the sprint. If the sprint takes place in
-   connection with a conference (or a similar open event) the goals for the
-   actual coding progress should be set lower (or handled in another way) and
-   the focus should shift to dissemination and getting new/interested people
-   to a certain understanding of the PyPy codebase. Setting the right goal,
-   and making sure it is a shared one, is important because it helps the
-   participants come in with somewhat similar expectations ;-)
-
-2. Venue - in the PyPy project we have a rough view of where we will be
-   sprinting a few months ahead; no detailed plans are made that far in
-   advance. Knowing the dates and the venue makes flight bookings easier ;-)
-   The venue is much more important than one would think. We need a somewhat
-   comfortable environment to work in (where up to 15 people can sit and
-   work), which means tables and chairs, light and electricity outlets. Does
-   the venue need access cards, so that only one person can open it? How long
-   can you stay - 24 hours per day, or does the landlord want the team
-   evacuated by 23:00? These are important questions that can gravely affect
-   the "feel and atmosphere" of the sprint as well as the desired results!
-
-   The venue should also be somewhat close to low-cost places to eat, and to
-   accommodation for the participants. Facilities for making tea/coffee, as
-   well as some kind of refrigerator for storing food, are good to have. A
-   permanent Internet connection is a must - does the venue where the sprint
-   is planned have weird rules for access to its network, etc.?
-
-   Whiteboards are useful tools and good to have. Beamers (PyPy jargon for
-   projectors) are very useful for the status meetings; at least one should
-   be available. The project also owns a beamer, specifically for sprints.
-
-   The person making sure that the requirements for a good sprint venue are
-   being met should therefore have very good local connections or, preferably,
-   live there.
-
-3. Information - discussions about content and goals (pre-announcements) are
-   usually carried out on pypy-dev (mailing list/IRC). All other info is
-   distributed via email on the pypy-sprint mailing list and as web pages on
-   codespeak. When the dates, venue and content are fully decided, a sprint
-   announcement is made and sent out to pypy-dev and pypy-sprint, as well
-   as to more general-purpose mailing lists like comp.lang.python, and posted
-   on codespeak - this happens 2-4 weeks before the sprint. It's important
-   that the sprint announcement points to information about local
-   transportation (to the country, the city and the venue), currency issues,
-   food and restaurants etc. There are also web pages on which people
-   announce when they will arrive and where they are accommodated.
-
-   The planning text for the sprint is updated up until the sprint and is
-   then used during and between the status meetings to track work. After the
-   sprint (or even better: during it, while memory is fresh) a sprint report
-   is written by one of the developers and uploaded to codespeak. This is a
-   kind of summary of the entire sprint, telling of the work done and the
-   people involved.
-
-   One very important strategy when planning the venue is cost
-   efficiency. Keeping accommodation and food/travel costs as low as possible
-   makes sure that more people can afford to visit or join the sprint
-   fully. The partially EU-funded parts of the project do have a so-called
-   sprint budget, which we use to help developers participate in our sprints
-   (travel expenses and accommodation); and because most of the funding is
-   so-called matched funding, we pay for most of our expenses through our own
-   organizations and companies anyway.
- 
-
-4. Process - a typical PyPy sprint is 7 days with a break day in the
-   middle. Usually sprinters show up the day before the sprint starts. The
-   first day has a start-up meeting, with tutorials if there are participants
-   new to the project or if some new tool or feature has been implemented. A
-   short presentation of the participants, their backgrounds and their
-   expectations is also good to have. Unfortunately, time is always lost on
-   the first day, mostly in the morning when people arrive, getting the
-   internet and server infrastructure up and running. That is why we are,
-   through documentation_, trying to get participants to set up the tools and
-   configurations they need before they arrive at the sprint.
-
-   The approximate working hours are 10-17, but people tend to stay longer
-   to code during the evenings. A short status meeting starts the day, and
-   work is "paired" out according to needs and wishes. The PyPy sprints are
-   developer- and group-driven; because we have no "coach", our status
-   meetings are very much group discussions, during which notes are taken
-   and our planning texts are updated. Also, the sprint is planned and
-   executed within the developer group, together with someone acquainted
-   with the local region (often a developer living there). So within the
-   team there is no one formally responsible for the sprints.
-
-   Suggestions for off-hours activities and social events for the break day
-   are a good way of emphasizing how important it is to take breaks - some
-   pointers in that direction from the local organizer are welcome.
-
-   At the end of the sprint we do a technical summary (did we achieve the
-   goals/content?), set a rough focus for the work until the next
-   sprint, and the sprint wheel starts rolling again ;-) An important aspect
-   is also evaluating the sprint with the participants. Mostly this is done
-   via emailed questions after the sprint, but it can also be done as a short
-   group evaluation. The reason for evaluating is of course to get feedback
-   and to make sure that we are not missing opportunities to make our sprints
-   even more efficient and enjoyable.
-
-   The main challenge of our sprint process is the fact that people show up
-   on different dates and leave on different dates. That affects the shared
-   introduction (goals/content, tutorials, presentations etc.) and also the
-   closure - the technical summary etc. Here we are still struggling to find
-   some middle ground - which increases the importance of feedback.
-
-
-.. _documentation: getting-started.html
-
-Can I join in?
-++++++++++++++
-
-Of course. Just follow the work on pypy-dev, and if you are specifically
-interested in information about our sprints, subscribe to
-pypy-sprint at codespeak.net and read the news on codespeak for announcements etc.
-
-If you think we should sprint in your town - send us an email - we are very
-interested in using sprints as a way of making contact with active developers
-(Python/compiler design etc.)!
-
-If you have questions about our sprints and EU-funding - please send an email
-to pypy-funding at codespeak.net, our mailinglist for project coordination.
-
-Previous sprints?
-+++++++++++++++++
-
-The PyPy team has been sprinting on the following occasions::
-
-    * Hildesheim                      Feb     2003
-    * Gothenburg                      May     2003
-    * Europython/Louvain-La-Neuve     June    2003
-    * Berlin                          Sept    2003
-    * Amsterdam                       Dec     2003
-    * Europython/Gothenburg           June    2004
-    * Vilnius                         Nov     2004
-    * Leysin                          Jan     2005
-    * PyCon/Washington                March   2005     
-    * Europython/Gothenburg           June    2005
-    * Hildesheim                      July    2005
-    * Heidelberg                      Aug     2005
-    * Paris                           Oct     2005
-    * Gothenburg                      Dec     2005
-    * Mallorca                        Jan     2006
-    * PyCon/Dallas                    Feb     2006
-    * Louvain-La-Neuve                March   2006
-    * Leysin                          April   2006
-    * Tokyo                           April   2006
-    * Düsseldorf                      June    2006
-    * Europython/Geneva               July    2006
-    * Limerick                        Aug     2006
-    * Düsseldorf                      Oct     2006
-    * Leysin                          Jan     2007
-    * Hildesheim                      Feb     2007
-    
-People who have participated in and contributed during our sprints, thus
-contributing to PyPy (if we have missed someone here - please contact us
-so we can correct it):
-
-    Armin Rigo
-    Holger Krekel
-    Samuele Pedroni
-    Christian Tismer
-    Laura Creighton
-    Jacob Hallén
-    Michael Hudson
-    Richard Emslie
-    Anders Chrigström
-    Alex Martelli
-    Ludovic Aubry
-    Adrien DiMascio
-    Nicholas Chauvat
-    Niklaus Haldimann
-    Anders Lehmann
-    Carl Friedrich Bolz
-    Eric Van Riet Paap
-    Stephan Diel
-    Dinu Gherman
-    Jens-Uwe Mager
-    Marcus Denker
-    Bert Freudenberg
-    Gunther Jantzen
-    Henrion Benjamin
-    Godefroid Chapelle
-    Anna Ravenscroft
-    Tomek Meka
-    Jonathan David Riehl
-    Patrick Maupain
-    Etienne Posthumus
-    Nicola Paolucci
-    Albertas Agejevas
-    Marius Gedminas
-    Jesus Cea Avion
-    Olivier Dormond
-    Jacek Generowicz
-    Brian Dorsey
-    Guido van Rossum
-    Bob Ippolito
-    Alan McIntyre
-    Lutz Paelike
-    Michael Chermside
-    Beatrice Düring
-    Boris Feigin
-    Amaury Forgeot d'Arc 
-    Andrew Thompson      
-    Valentino Volonghi   
-    Aurelien Campeas
-    Stephan Busemann
-    Johan Hahn
-    Gerald Klix
-    Gene Oden
-    Josh Gilbert
-    George Paci
-    Martin Blais
-    Stuart Williams
-    Jiwon Seo
-    Michael Twomey 
-    Wanja Saatkamp
-    Alexandre Fayolle
-    Raphaël Collet
-    Grégoire Dooms
-    Sanghyeon Seo
-    Yutaka Niibe
-    Yusei Tahara
-    George Toshida
-    Koichi Sasada
-    Guido Wesdorp        
-    Maciej Fijalkowski   
-    Antonio Cuni          
-    Lawrence Oluyede    
-    Fabrizio Milo        
-    Alexander Schremmer  
-    David Douard       
-    Michele Frettoli     
-    Simon Burton         
-    Aaron Bingham        
-    Pieter Zieschang     
-    Sad Rejeb 
-    Brian Sutherland
-    Georg Brandl
-
-

diff --git a/pypy/doc/config/objspace.std.mutable_builtintypes.txt b/pypy/doc/config/objspace.std.mutable_builtintypes.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.mutable_builtintypes.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Allow modification of builtin types.  Disabled by default.

diff --git a/pypy/doc/config/objspace.usemodules.crypt.txt b/pypy/doc/config/objspace.usemodules.crypt.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.crypt.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'crypt' module. 
-This module is expected to be fully working.

diff --git a/pypy/doc/discussion/testing-zope.txt b/pypy/doc/discussion/testing-zope.txt
deleted file mode 100644
--- a/pypy/doc/discussion/testing-zope.txt
+++ /dev/null
@@ -1,45 +0,0 @@
-Testing Zope on top of pypy-c
-=============================
-
-Getting Zope packages
----------------------
-
-If you don't have a full Zope installation, you can pick a Zope package,
-check it out via Subversion, and get all its dependencies (replace
-``$PKG`` with, for example, ``zope.interface``)::
-
-    svn co svn://svn.zope.org/repos/main/$PKG/trunk $PKG
-    cd $PKG
-    python bootstrap.py
-    bin/buildout
-    bin/test
-
-Required pypy-c version
------------------------
-
-You probably need a pypy-c built with --allworkingmodules, at least::
-
-    cd pypy/translator/goal
-    ./translate.py targetpypystandalone.py --allworkingmodules
-
-Workarounds
------------
-
-At the moment, our ``gc`` module is incomplete, making the Zope test
-runner unhappy.  Quick workaround: go to the
-``lib-python/modified-2.4.1`` directory and create a
-``sitecustomize.py`` with the following content::
-
-    print "<adding dummy stuff into the gc module>"
-    import gc
-    gc.get_threshold = lambda : (0, 0, 0)
-    gc.get_debug = lambda : 0
-    gc.garbage = []
-
-Running the tests
------------------
-
-To run the tests we need the --oldstyle option, as follows::
-
-    cd $PKG
-    pypy-c --oldstyle bin/test

diff --git a/pypy/doc/config/objspace.honor__builtins__.txt b/pypy/doc/config/objspace.honor__builtins__.txt
deleted file mode 100644

diff --git a/pypy/doc/config/objspace.std.withrangelist.txt b/pypy/doc/config/objspace.std.withrangelist.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withrangelist.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-Enable "range list" objects. They are an additional implementation of the Python
-``list`` type, indistinguishable for the normal user. Whenever the ``range``
-builtin is called, an range list is returned. As long as this list is not
-mutated (and for example only iterated over), it uses only enough memory to
-store the start, stop and step of the range. This makes using ``range`` as
-efficient as ``xrange``, as long as the result is only used in a ``for``-loop.
-
-See the section in `Standard Interpreter Optimizations`_ for more details.
-
-.. _`Standard Interpreter Optimizations`: ../interpreter-optimizations.html#range-lists
-

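The trade-off described in this deleted file can be sketched in a few lines of (modern) Python. ``RangeList`` is a made-up name for illustration, not PyPy's actual class; the real optimization lives inside the interpreter, invisible to user code.

```python
class RangeList:
    """Toy sketch of a "range list": store only start/stop/step until
    the first mutation, then fall back to a real list."""

    def __init__(self, start, stop, step=1):
        self._range = range(start, stop, step)   # O(1) memory
        self._items = None                       # created on first mutation

    def _data(self):
        return self._range if self._items is None else self._items

    def __len__(self):
        return len(self._data())

    def __getitem__(self, i):
        return self._data()[i]

    def __iter__(self):
        return iter(self._data())

    def __setitem__(self, i, value):
        if self._items is None:                  # first mutation: materialize
            self._items = list(self._range)
        self._items[i] = value
```

Iterating keeps the compact representation; only an actual mutation pays the cost of building the full list.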
diff --git a/pypy/doc/config/objspace.std.optimized_comparison_op.txt b/pypy/doc/config/objspace.std.optimized_comparison_op.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.optimized_comparison_op.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Optimize the comparison of two integers a bit.

diff --git a/pypy/doc/config/objspace.soabi.txt b/pypy/doc/config/objspace.soabi.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.soabi.txt
+++ /dev/null
@@ -1,14 +0,0 @@
-This option controls the tag included into extension module file names.  The
-default is something like `pypy-14`, which means that `import foo` will look for
-a file named `foo.pypy-14.so` (or `foo.pypy-14.pyd` on Windows).
-
-This is an implementation of PEP3149_, with two differences:
-
- * the filename without a tag (`foo.so`) is not considered.
- * the feature is also available on Windows.
-
-When set to the empty string (with `--soabi=`), the interpreter will only look
-for a file named `foo.so`, and will crash if this file was compiled for another
-Python interpreter.
-
-.. _PEP3149: http://www.python.org/dev/peps/pep-3149/

diff --git a/pypy/doc/config/objspace.usemodules._collections.txt b/pypy/doc/config/objspace.usemodules._collections.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._collections.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the '_collections' module.
-Used by the 'collections' standard lib module. This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/objspace.usemodules.micronumpy.txt b/pypy/doc/config/objspace.usemodules.micronumpy.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.micronumpy.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Use the micronumpy module.
-This module provides a very basic numpy-like interface. Its major use case
-is to show how the JIT scales for other kinds of code.

diff --git a/pypy/doc/config/objspace.std.withropeunicode.txt b/pypy/doc/config/objspace.std.withropeunicode.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withropeunicode.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-Use ropes to implement unicode strings (and also normal strings).
-
-See the section in `Standard Interpreter Optimizations`_ for more details.
-
-.. _`Standard Interpreter Optimizations`: ../interpreter-optimizations.html#ropes
-
-

diff --git a/pypy/doc/externaltools.txt b/pypy/doc/externaltools.txt
deleted file mode 100644
--- a/pypy/doc/externaltools.txt
+++ /dev/null
@@ -1,27 +0,0 @@
-External tools & programs needed by PyPy
-========================================
-
-Tools needed for testing
-------------------------
-
-These tools are used in various ways by PyPy tests; if they are not found,
-some tests might be skipped, so they need to be installed on every buildbot
-slave to be sure we actually run all tests:
-
-  - Mono (versions 1.2.1.1 and 1.9.1 known to work)
-
-  - Java/JVM (preferably sun-jdk; version 1.6.0 known to work)
-
-  - Jasmin >= 2.2 (copy it from wyvern, /usr/local/bin/jasmin and /usr/local/share/jasmin.jar)
-
-  - gcc
-
-  - Some libraries (these are Debian package names, adapt as needed):
-
-    * ``python-dev``
-    * ``python-ctypes``
-    * ``libffi-dev``
-    * ``libz-dev`` (for the optional ``zlib`` module)
-    * ``libbz2-dev`` (for the optional ``bz2`` module)
-    * ``libncurses-dev`` (for the optional ``_minimal_curses`` module)
-    * ``libgc-dev`` (only when translating with `--opt=0, 1` or `size`)

diff --git a/pypy/doc/config/objspace.std.prebuiltintto.txt b/pypy/doc/config/objspace.std.prebuiltintto.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.prebuiltintto.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-See :config:`objspace.std.withprebuiltint`.

diff --git a/pypy/doc/config/objspace.std.multimethods.txt b/pypy/doc/config/objspace.std.multimethods.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.multimethods.txt
+++ /dev/null
@@ -1,8 +0,0 @@
-Choose the multimethod implementation.
-
-* ``doubledispatch`` turns
-  a multimethod call into a sequence of normal method calls.
-
-* ``mrd`` uses a technique known as Multiple Row Displacement
-  which precomputes a few compact tables of numbers and
-  function pointers.

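The ``doubledispatch`` strategy mentioned in the deleted file can be illustrated with a small Python sketch. The class and method names below are invented for illustration, not PyPy's actual multimethod machinery: a two-argument multimethod call becomes two chained single-dispatch calls, first on the left operand's type, then on the right's.

```python
class W_Int:
    def __init__(self, v):
        self.v = v
    def add(self, other):              # dispatch on the left operand...
        return other.radd_int(self)    # ...then on the right operand
    def radd_int(self, other):         # int + int
        return W_Int(other.v + self.v)
    def radd_float(self, other):       # float + int
        return W_Float(other.v + self.v)

class W_Float:
    def __init__(self, v):
        self.v = v
    def add(self, other):
        return other.radd_float(self)
    def radd_int(self, other):         # int + float
        return W_Float(other.v + self.v)
    def radd_float(self, other):       # float + float
        return W_Float(other.v + self.v)

def mm_add(a, b):
    """The user-visible two-argument multimethod."""
    return a.add(b)
```

Every combination of argument types ends up in its own ordinary method, which is exactly what makes this scheme easy to compile but quadratic in the number of participating types.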
diff --git a/pypy/doc/config/objspace.usemodules._ast.txt b/pypy/doc/config/objspace.usemodules._ast.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._ast.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the '_ast' module. 
-This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/objspace.disable_call_speedhacks.txt b/pypy/doc/config/objspace.disable_call_speedhacks.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.disable_call_speedhacks.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Disable the speed hacks that the interpreter normally performs. Usually you
-don't want to enable this option, but some object spaces require it.

diff --git a/pypy/doc/discussion/howtoimplementpickling.txt b/pypy/doc/discussion/howtoimplementpickling.txt
deleted file mode 100644
--- a/pypy/doc/discussion/howtoimplementpickling.txt
+++ /dev/null
@@ -1,340 +0,0 @@
-Designing thread pickling or "the Essence of Stackless Python"
---------------------------------------------------------------
-
-Note from 2007-07-22: This document is slightly out of date
-and should be turned into a description of pickling.
-Some research is necessary to get rid of explicit resume points, etc...
-
-Thread pickling is a unique feature in Stackless Python
-and should be implemented for PyPy pretty soon.
-
-What is meant by pickling?
-..........................
-
-I'd like to define thread pickling as a restartable subset
-of a running program. The re-runnable part should be based
-upon Python frame chains, represented by coroutines, tasklets
-or any other application-level switchable subcontext.
-It is surely possible to support pickling of arbitrary
-interp-level state, but this does not seem to be mandatory as long
-as we consider Stackless as the reference implementation.
-Extensions of this might be considered when the basic task
-is fulfilled.
-
-Pickling should create a restartable coroutine-like thing
-that can run on a different machine with the same Python version,
-but not necessarily the same PyPy translation. This is one of
-the harder parts.
-
-What is not meant by pickling?
-..............................
-
-Saving the whole memory state and writing a loader that
-reconstructs the whole binary with its state in memory
-is not what I consider a real solution. In some sense,
-this can be a fall-back if we fail in every other case,
-but I consider it really nasty for the C backend.
-
-If we had a dynamic backend that supports direct creation
-of the program and its state (example: a Forth backend),
-I would see it as a valid solution, since it is
-relocatable. It is of course a possible fall-back to write
-such a backend if we fail otherwise.
-
-There are some simple steps and some more difficult ones.
-Let's start with the simple.
-
-Basic necessities
-.................
-
-Pickling of a running thread involves a bit more than normal
-object pickling, because there exist many objects which
-don't have a pickling interface, and people would not care
-about pickling them at all. But with thread pickling, these
-objects simply exist as local variables and are needed
-to restore the current runtime environment, and the user
-should not have to know what goes into the pickle.
-
-Examples are
-
-- generators
-- frames
-- cells
-- iterators
-- tracebacks
-
-to name just a few. Fortunately most of these objects already have
-got a pickling implementation in Stackless Python, namely the
-prickelpit.c file.
-
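The gap is easy to demonstrate in plain CPython, where two of the objects listed above simply refuse to be pickled out of the box:

```python
import pickle
import sys

def gen():
    yield 1

unpicklable = []
for obj in (gen(), sys._getframe()):   # a generator and a frame
    try:
        pickle.dumps(obj)
    except TypeError:                  # "cannot pickle ... object"
        unpicklable.append(type(obj).__name__)

# both objects lack any pickling support without extra machinery
```

This is precisely the machinery that Stackless' prickelpit.c supplies and that thread pickling in PyPy would have to provide.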
-It should be simple and straightforward to redo these implementations.
-Nevertheless there is a complication. The most natural way to support
-pickling is providing a __getstate__/__setstate__ method pair.
-This is ok for extension types like coroutines/tasklets which we can
-control, but it should be avoided for existing types.
-
-Consider for instance frames. We would have to add a __getstate__
-and a __setstate__ method, which is an interface change. Furthermore,
-we would need to support creation of frames by calling the
-frame type, which is not really intended.
-
-For other types which are already callable, things get more complicated
-because we need to make sure that creating new instances does
-not interfere with existing ways to call the type.
-
-Directly adding a pickling interface to existing types is quite
-likely to produce overlaps in the calling interface. This happened,
-for instance, when the module type became callable, and the signature
-was different from what Stackless added before.
-
-For Stackless,
-I instead used the copyreg module and created special surrogate
-objects as placeholders, which replace the type of the object
-after unpickling with the right type pointer. For details, see
-the prickelpit.c file in the Stackless distribution.
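A minimal sketch of that copyreg approach in today's Python (the `Cell` class and helper names are hypothetical stand-ins): the reduce function is registered externally, so the type itself gains no new methods and its calling interface stays untouched.

```python
import copyreg
import pickle

class Cell:
    """Stand-in for an interp-level type without a pickling interface."""
    def __init__(self, value):
        self.value = value

def _rebuild_cell(value):
    # Surrogate entry point: rebuilds the object on unpickling without
    # going through Cell's normal calling interface.
    return Cell(value)

def _reduce_cell(cell):
    return _rebuild_cell, (cell.value,)

# Registered externally via copyreg: Cell itself is left unchanged.
copyreg.pickle(Cell, _reduce_cell)

clone = pickle.loads(pickle.dumps(Cell(42)))
```

This is the same decoupling idea: pickling support lives next to the type, not inside it.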
-
-As a conclusion, pickling of tasklets is an addition to Stackless,
-but not meant to be an extension to Python. The need to support
-pickling of certain objects should not change the interface.
-It is better to decouple this and to use surrogate types for
-pickling which cannot collide with future additions to Python.
-
-The real problem
-................
-
-There are currently some crucial differences between Stackless
-Python (SLP for now) and the PyPy Stackless support (PyPy for now)
-as far as it has grown.
-When CPython does a call to a Python function, there are several
-helper functions involved for adjusting parameters, unpacking
-methods and more. SLP goes to great lengths to remove all these
-C functions from the C stack before starting the Python interpreter
-for the function. This change of behavior is done manually for
-all the helper functions by figuring out which variables are
-still needed after the call. It turns out that in most cases,
-it is possible to let all the helper functions finish their
-work and return from the function call before the interpreter
-is started at all.
-
-This is the major difference which needs to be tackled for PyPy.
-Whenever we run a Python function, quite a number of functions
-incarnate on the C stack, and they are *not* finished before
-running the new frame. In case of a coroutine switch, we just
-save the whole chain of activation records - C function
-entry points with the saved block variables. This is ok for
-coroutine switching, but in the sense of SLP, it is rather
-incomplete and not stackless at all. The stack still exists,
-we can unwind and rebuild it, but it is a problem.
-
-Why a problem?
-..............
-
-In an ideal world, thread pickling would just be building
-chains of pickled frames and nothing else. For every different
-extra activation record like mentioned above, we have the
-problem of how to save this information. We need a representation
-which is not machine or compiler dependent. Right now, PyPy
-is quite unstable in terms of which blocks it will produce,
-what gets inlined, etc. The best possible solution is to try
-to get rid of these extra structures completely.
-
-Unfortunately this is not even possible with SLP, because
-there are different flavors of state which make it hard
-to go without extra information.
-
-SLP switching strategies
-........................
-
-SLP has undergone several rewrites. The first implementation was aiming
-at complete collaboration. A new frame's execution was deferred until
-all the preparatory C function calls had left the C stack. There
-was no extra state to be saved.
-
-Well, this is only partially true - there are a couple of situations
-where a recursive call could not be avoided, since the necessary support
-would require heavy rewriting of the implementation.
-
-Examples are
-
-- map is a stateful implementation of iterating over a sequence
-  of operations. It can be made non-recursive if the map operation
-  creates its own frame to keep state.
-  
-- __init__ looks trivial, but the semantics is that the return value
-  of __init__ is supposed to be None, and CPy has a special check for this
-  after the call. This might simply be ignored, but it is a simple example
-  of a case that cannot be handled automatically.
-  
-- things like operator.__add__ can theoretically generate a wild pattern
-  of recursive calls while CPy tries to figure out if it is a numeric
-  add or a sequence add, and other callbacks may occur when methods
-  like __coerce__ get involved. This will never be solved for SLP, but
-  might get a solution by the strategy outlined below.
-  
-The second implementation took a radically different approach. Context
-switches were done by hijacking parts of the C stack, storing them
-away and replacing them by the stack fragment that the target needs.
-This is very powerful and allows switching even in the context of
-foreign code. With a little risk, I was even able to add concurrency
-to foreign Fortran code. 
-
-The above concept is called Hard switching; the collaborative one is called Soft switching.
-Note that an improved version of Hard is still the building block
-for greenlets, which makes them not really green - I'd name it yellow.
-
-The latest SLP rewrites combine both ideas, trying to use Soft whenever
-possible, but using Hard when nested interpreters are in the way.
-
-Nota bene: pickling tasklets was never tried when Hard switching
-was involved. In SLP, pickling works with Soft switching. To cover more
-pickleable situations, you need to invent new frame types
-or write replacement Python code and switch it using Soft.
-
-Analogies between SLP and PyPy
-..............................
-
-Right now, PyPy saves C state of functions in tiny activation records:
-the alive variables of a block, together with the entry point of
-the function that was left.
-This is an improvement over storing raw stack slices, but the pattern
-is similar: The C stack state gets restored when we switch.
-
-In this sense, it was the astonishing conclusion when Richard and I discussed
-this last week: PyPy essentially does a variant of Hard switching! At least it
-does a compromise that does not really help with pickling.
-
-On the other hand, this approach is halfway there. It turns out to
-be an improvement over SLP not to have to avoid recursions in the
-first place. Instead, it seems to be even more elegant and efficient
-to get rid of unnecessary state right in the context of a switch
-and no earlier!
-
-Ways to handle the problem in a minimalistic way
-................................................
-
-Comparing the different approaches of SLP and PyPy, it appears
-unnecessary to change the interpreter in the first place. PyPy does
-not need to change its calling behavior in order to be cooperative.
-The key point is to find out which activation records need to
-be stored at all. This should be possible to identify as a part
-of the stackless transform.
-
-Consider the simplest, most common case of calling a normal Python function.
-There are several function calls involved, which perform preparatory
-steps. Without trying to be exact (this is part of the work to be done),
-the steps involved are
-
-- decode the arguments of the function
-
-- prepare a new frame
-
-- store the arguments in the frame
-
-- execute the frame
-
-- return the result
-
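The steps above can be sketched as follows (all names are hypothetical; real argument decoding and frame setup are far more involved):

```python
class Frame:
    """Toy frame: holds the code to run and its bound arguments."""
    def __init__(self, func):
        self.func = func
        self.args = None

def decode_arguments(func, args):
    return tuple(args)                    # step 1: decode the arguments

def prepare_frame(func):
    return Frame(func)                    # step 2: prepare a new frame

def call_python_function(func, args):
    decoded = decode_arguments(func, args)
    frame = prepare_frame(func)
    frame.args = decoded                  # step 3: store the arguments
    result = frame.func(*frame.args)      # step 4: execute the frame
    return result                         # step 5: return the result
```

Each helper's activation record is the candidate state that a context switch would otherwise have to save.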
-Now assume that we do not execute the frame, but do a context switch instead,
-then right now a sequence of activation records is stored on the heap.
-If we want to re-activate this chain of activation records, what do
-we really need to restore before we can do the function call?
-
-- the argument decoding is already done, and the fact that we could have done
-  the function call shows that no exception occurred. We can ignore the rest
-  of this activation record and do the housekeeping.
-  
-- the frame is prepared, and arguments are stored in it. The operation
-  succeeded, and we have the frame. We can ignore exception handling
-  and just do housekeeping by getting rid of references.
-  
-- for executing the frame, we need a special function that executes frames. It
-  is possible that we need different flavors due to contexts. SLP does this
-  by using different registered functions which operate on a frame, depending
-  on the frame's state (first entry, reentry after call, returning, yielding, etc.)
-
-- after executing the frame, exceptions need to be handled in the usual way,
-  and we should return to the issuer of the call.
-
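The SLP-style dispatch mentioned in the third point can be sketched as a registry of per-state executor functions (the state names and helpers are hypothetical):

```python
class Frame:
    def __init__(self, state):
        self.state = state

EXECUTORS = {}

def executes(state):
    """Register a function that runs frames in the given state."""
    def decorate(fn):
        EXECUTORS[state] = fn
        return fn
    return decorate

@executes("first_entry")
def run_first_entry(frame):
    return "started"

@executes("reentry")
def run_reentry(frame):
    return "resumed"

def execute_frame(frame):
    # Dispatch on the frame's state, as SLP does with its registered
    # frame-executing functions.
    return EXECUTORS[frame.state](frame)
```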
-Some deeper analysis is needed to get these things correct.
-But it should have become quite clear that after all the preparatory
-steps have been done, there is no other state necessary than what we
-have in the Python frames: bound arguments, instruction pointer, that's it.
-
-My proposal is now to do such an analysis by hand, identify the different
-cases to be handled, and then to try to find an algorithm that automatically
-identifies the blocks in the whole program where the restoring of the
-C stack can be avoided, so we can jump back to the previous caller directly.
-
-A rough sketch of the necessary analysis:
-
-for every block in an RPython function that can reach unwind:
-analyze its control flow. It should lead immediately to
-the return block with only one output variable. All other live variables
-should have ended their liveness in this block.
-
-I think this will not work in the first place. For the bound frame
-arguments for instance, I think we need some notation that these are
-held by the frame, and we can drop their liveness before doing the call,
-hence we don't need to save these variables in the activation record,
-and hence the whole activation record can be removed.
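Under those assumptions, the removability check might look like this (the block representation is entirely hypothetical, not PyPy's flow-graph model):

```python
class Block:
    def __init__(self, successors, live_after_call, call_result):
        self.successors = successors              # names of successor blocks
        self.live_after_call = set(live_after_call)
        self.call_result = call_result            # the call's output variable

def record_is_removable(block, return_block="return"):
    # The activation record can be dropped when control flows straight
    # to the return block and nothing but the call's result is still
    # live (variables held by the frame count as already dead here).
    return (block.successors == [return_block]
            and block.live_after_call <= {block.call_result})
```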
-
-As a conclusion of this incomplete first analysis, it seems to be necessary
-to identify useless activation records in order to support pickling.
-The remaining, irreducible activation records should then be those
-which hold a reference to a Python frame.
-Such a chain is pickleable if its root points back to the context switching code
-of the interp-level implementation of coroutines.
-
-As an observation, this transform not only enables pickling, but
-also is an optimization, if we can avoid saving many activation records.
-
-Another possible observation which I hope to be able to prove is this:
-The remaining irreducible activation records which don't just hold
-a Python frame are those which should be considered special.
-They should be turned into something like special frames, and they would
-be the key to make PyPy completely stackless, a goal which is practically
-impossible for SLP! These activation records would need to become
-part of the official interface and need to get naming support for
-their necessary functions.
-
-I wish to stop this paper here. I believe everything else
-needs to be tried in an implementation, and this is so far
-all I can do just with imagination.
-
-best - chris
-
-Just an addition after some more thinking
-.........................................
-
-Actually it struck me, after checking this in, that the problem of
-determining which blocks need to save state and which do not is not
-really a Stackless problem. It is a system-immanent problem:
-a missing optimization that we have not yet tried to implement.
-
-Speaking in terms of GC transform, and especially the refcounting,
-it is probably easy to understand what I mean. Our current refcounting
-implementation is naive, in the sense that we do not try to do the
-optimizations which every extension writer does by hand:
-We do not try to save references.
-
-This is also why I'm always arguing that refcounting can be and
-effectively *is* efficient, because CPython does it very well.
-
-Our refcounting is not aware of variable liveness; it does not
-track references which are known to be held by other objects.
-Optimizing that would do two things: the refcounting would become
-very efficient, since we would save some 80% of it.
-The second part, which is relevant to the pickling problem is this:
-By doing a proper analysis, we already would have lost references to 
-all the variables which we don't need to save any longer, because
-we know that they are held in, for instance, frames.
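To illustrate the point with a toy model (not PyPy's actual transform): a naive transform counts every local use, while a liveness-aware one emits no counting at all when another object, such as a frame, already keeps the value alive.

```python
class Obj:
    def __init__(self):
        self.refcount = 1

def naive_use(obj, sink):
    obj.refcount += 1              # incref for the local use
    sink.append(obj.refcount)
    obj.refcount -= 1              # decref when the local dies

def optimized_use(obj, sink):
    # obj is known to be held elsewhere (e.g. by a frame), so no
    # incref/decref pair is emitted for this use at all.
    sink.append(obj.refcount)
```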
-
-I hope you understand that: if we improve the lifetime analysis
-of variables, the problem sketched above about which blocks
-need to save state and which don't should become trivial and should
-just vanish. Doing this correctly will solve the pickling problem almost
-automatically, leading to a more efficient implementation at the same time.
-
-I hope I told the truth and will try to prove it.
-
-ciao - chris

diff --git a/pypy/doc/config/objspace.opcodes.txt b/pypy/doc/config/objspace.opcodes.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.opcodes.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-..  intentionally empty

diff --git a/pypy/doc/config/objspace.usemodules.signal.txt b/pypy/doc/config/objspace.usemodules.signal.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.signal.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'signal' module. 
-This module is expected to be fully working.

diff --git a/pypy/doc/config/objspace.usemodules._io.txt b/pypy/doc/config/objspace.usemodules._io.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._io.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the '_io' module.
-Used by the 'io' standard lib module. This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/objspace.usemodules._warnings.txt b/pypy/doc/config/objspace.usemodules._warnings.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._warnings.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Use the '_warnings' module. This module is expected to be working and is included by default.

diff --git a/pypy/doc/docindex.txt b/pypy/doc/docindex.txt
deleted file mode 100644
--- a/pypy/doc/docindex.txt
+++ /dev/null
@@ -1,314 +0,0 @@
-=================================================
-PyPy - a Python_ implementation written in Python 
-=================================================
-
-.. _Python: http://www.python.org/doc/2.5.2/
-
-.. sectnum::
-.. contents:: :depth: 1
-
-
-PyPy User Documentation
-===============================================
-
-`getting started`_ provides hands-on instructions 
-including a two-liner to run the PyPy Python interpreter 
-on your system, examples on advanced features and 
-entry points for using PyPy's translation tool chain. 
-
-`FAQ`_ contains some frequently asked questions.
-
-New features of PyPy's Python Interpreter and 
-Translation Framework: 
-
-  * `Differences between PyPy and CPython`_
-  * `What PyPy can do for your objects`_
-  * `Stackless and coroutines`_
-  * `JIT Generation in PyPy`_ 
-  * `Sandboxing Python code`_
-
-Status_ of the project.
-
-
-Project Documentation
-=====================================
-
-PyPy was funded by the EU for several years. See the `web site of the EU
-project`_ for more details.
-
-.. _`web site of the EU project`: http://pypy.org
-
-architecture_ gives a complete view of PyPy's basic design. 
-
-`coding guide`_ helps you to write code for PyPy (especially also describes
-coding in RPython a bit). 
-
-`sprint reports`_ lists reports written at most of our sprints, from
-2003 to the present.
-
-`papers, talks and related projects`_ lists presentations 
-and related projects as well as our published papers.
-
-`ideas for PyPy related projects`_ which might be a good way to get
-into PyPy.
-
-`PyPy video documentation`_ is a page linking to the videos (e.g. of talks and
-introductions) that are available.
-
-`Technical reports`_ is a page that contains links to the
-reports that we submitted to the European Union.
-
-`development methodology`_ describes our sprint-driven approach.
-
-`license`_ contains licensing details (basically a straight MIT-license). 
-
-`Glossary`_ of PyPy words to help you align your inner self with
-the PyPy universe.
-
-
-Status
-===================================
-
-PyPy can be used to run Python programs on Linux, OS/X,
-Windows, on top of .NET, and on top of Java.
-To dig into PyPy it is recommended to try out the current
-Subversion HEAD, which is always working or mostly working,
-instead of the latest release, which is `1.2.0`__.
-
-.. __: release-1.2.0.html
-
-PyPy is mainly developed on Linux and Mac OS X.  Windows is supported,
-but platform-specific bugs tend to take longer before we notice and fix
-them.  Linux 64-bit machines are supported (though it may also take some
-time before we notice and fix bugs).
-
-PyPy's own tests `summary`_, daily updated, run through BuildBot infrastructure.
-You can also find CPython's compliance tests run with compiled ``pypy-c``
-executables there.
-
-information dating from early 2007: 
-
-`PyPy LOC statistics`_ shows LOC statistics about PyPy.
-
-`PyPy statistics`_ is a page with various statistics about the PyPy project.
-
-`compatibility matrix`_ is a diagram that shows which of the various features
-of the PyPy interpreter work together with which other features.
-
-
-Source Code Documentation
-===============================================
-
-`object spaces`_ discusses the object space interface 
-and several implementations. 
-
-`bytecode interpreter`_ explains the basic mechanisms 
-of the bytecode interpreter and virtual machine. 
-
-`interpreter optimizations`_ describes our various strategies for
-improving the performance of our interpreter, including alternative
-object implementations (for strings, dictionaries and lists) in the
-standard object space.
-
-`translation`_ is a detailed overview of our translation process.  The
-rtyper_ is the largest component of our translation process.
-
-`dynamic-language translation`_ is a paper that describes
-the translation process, especially the flow object space
-and the annotator in detail. (This document is one
-of the `EU reports`_.)
-
-`low-level encapsulation`_ describes how our approach hides
-away a lot of low level details. This document is also part
-of the `EU reports`_.
-
-`translation aspects`_ describes how we weave different
-properties into our interpreter during the translation
-process. This document is also part of the `EU reports`_.
-
-`garbage collector`_ strategies that can be used by the virtual
-machines produced by the translation process.
-
-`parser`_ contains (outdated, unfinished) documentation about
-the parser.
-
-`rlib`_ describes some modules that can be used when implementing programs in
-RPython.
-
-`configuration documentation`_ describes the various configuration options that
-allow you to customize PyPy.
-
-`CLI backend`_ describes the details of the .NET backend.
-
-`JIT Generation in PyPy`_ describes how we produce the Python Just-in-time Compiler
-from our Python interpreter.
-
-
-
-.. _`FAQ`: faq.html
-.. _Glossary: glossary.html
-.. _`PyPy video documentation`: video-index.html
-.. _parser: parser.html
-.. _`development methodology`: dev_method.html
-.. _`sprint reports`: sprint-reports.html
-.. _`papers, talks and related projects`: extradoc.html
-.. _`license`: ../../LICENSE
-.. _`PyPy LOC statistics`: http://codespeak.net/~hpk/pypy-stat/
-.. _`PyPy statistics`: http://codespeak.net/pypy/trunk/pypy/doc/statistic
-.. _`object spaces`: objspace.html 
-.. _`interpreter optimizations`: interpreter-optimizations.html 
-.. _`translation`: translation.html 
-.. _`dynamic-language translation`: http://codespeak.net/svn/pypy/extradoc/eu-report/D05.1_Publish_on_translating_a_very-high-level_description.pdf
-.. _`low-level encapsulation`: low-level-encapsulation.html
-.. _`translation aspects`: translation-aspects.html
-.. _`configuration documentation`: config/
-.. _`coding guide`: coding-guide.html 
-.. _`architecture`: architecture.html 
-.. _`getting started`: getting-started.html 
-.. _`theory`: theory.html
-.. _`bytecode interpreter`: interpreter.html 
-.. _`EU reports`: index-report.html
-.. _`Technical reports`: index-report.html
-.. _`summary`: http://codespeak.net:8099/summary
-.. _`ideas for PyPy related projects`: project-ideas.html
-.. _`Nightly builds and benchmarks`: http://tuatara.cs.uni-duesseldorf.de/benchmark.html
-.. _`directory reference`: 
-.. _`rlib`: rlib.html
-.. _`Sandboxing Python code`: sandbox.html
-
-PyPy directory cross-reference 
-------------------------------
-
-Here is a fully referenced alphabetical two-level deep 
-directory overview of PyPy: 
-
-============================   =========================================== 
-Directory                      explanation/links
-============================   =========================================== 
-`annotation/`_                 `type inferencing code`_ for `RPython`_ programs 
-
-`bin/`_                        command-line scripts, mainly `py.py`_ and `translatorshell.py`_
-
-`config/`_                     handles the numerous options for building and running PyPy
-
-`doc/`_                        text versions of PyPy developer documentation
-
-`doc/config/`_                 documentation for the numerous translation options
-
-`doc/discussion/`_             drafts of ideas and documentation
-
-``doc/*/``                     other specific documentation topics or tools
-
-`interpreter/`_                `bytecode interpreter`_ and related objects
-                               (frames, functions, modules,...) 
-
-`interpreter/pyparser/`_       interpreter-level Python source parser
-
-`interpreter/astcompiler/`_    interpreter-level bytecode compiler, via an AST
-                               representation
-
-`module/`_                     contains `mixed modules`_ implementing core modules with 
-                               both application and interpreter level code.
-                               Not all are finished and working.  Use the ``--withmod-xxx``
-                               or ``--allworkingmodules`` translation options.
-
-`objspace/`_                   `object space`_ implementations
-
-`objspace/trace.py`_           the `trace object space`_ monitoring bytecode and space operations
-
-`objspace/dump.py`_            the dump object space saves a large, searchable log file
-                               with all operations
-
-`objspace/taint.py`_           the `taint object space`_, providing object tainting
-
-`objspace/thunk.py`_           the `thunk object space`_, providing unique object features 
-
-`objspace/flow/`_              the FlowObjSpace_ implementing `abstract interpretation`
-
-`objspace/std/`_               the StdObjSpace_ implementing CPython's objects and types
-
-`rlib/`_                       a `"standard library"`_ for RPython_ programs
-
-`rpython/`_                    the `RPython Typer`_ 
-
-`rpython/lltypesystem/`_       the `low-level type system`_ for C-like backends
-
-`rpython/ootypesystem/`_       the `object-oriented type system`_ for OO backends
-
-`rpython/memory/`_             the `garbage collector`_ construction framework
-
-`tool/`_                       various utilities and hacks used from various places 
-
-`tool/algo/`_                  general-purpose algorithmic and mathematic
-                               tools
-
-`tool/pytest/`_                support code for our `testing methods`_
-
-`translator/`_                 translation_ backends and support code
-
-`translator/backendopt/`_      general optimizations that run before a backend generates code
-
-`translator/c/`_               the `GenC backend`_, producing C code from an
-                               RPython program (generally via the rtyper_)
-
-`translator/cli/`_             the `CLI backend`_ for `.NET`_ (Microsoft CLR or Mono_)
-
-`translator/goal/`_            our `main PyPy-translation scripts`_ live here
-
-`translator/jvm/`_             the Java backend
-
-`translator/stackless/`_       the `Stackless Transform`_
-
-`translator/tool/`_            helper tools for translation, including the Pygame
-                               `graph viewer`_
-
-``*/test/``                    many directories have a test subdirectory containing test 
-                               modules (see `Testing in PyPy`_) 
-
-``_cache/``                    holds cache files from internally `translating application 
-                               level to interpreterlevel`_ code.   
-============================   =========================================== 
-
-.. _`bytecode interpreter`: interpreter.html
-.. _`translating application level to interpreterlevel`: geninterp.html
-.. _`Testing in PyPy`: coding-guide.html#testing-in-pypy 
-.. _`mixed modules`: coding-guide.html#mixed-modules 
-.. _`modules`: coding-guide.html#modules 
-.. _`basil`: http://people.cs.uchicago.edu/~jriehl/BasilTalk.pdf
-.. _`object space`: objspace.html
-.. _FlowObjSpace: objspace.html#the-flow-object-space 
-.. _`trace object space`: objspace.html#the-trace-object-space 
-.. _`taint object space`: objspace-proxies.html#taint
-.. _`thunk object space`: objspace-proxies.html#thunk
-.. _`transparent proxies`: objspace-proxies.html#tproxy
-.. _`Differences between PyPy and CPython`: cpython_differences.html
-.. _`What PyPy can do for your objects`: objspace-proxies.html
-.. _`Stackless and coroutines`: stackless.html
-.. _StdObjSpace: objspace.html#the-standard-object-space 
-.. _`abstract interpretation`: theory.html#abstract-interpretation
-.. _`rpython`: coding-guide.html#rpython 
-.. _`type inferencing code`: translation.html#the-annotation-pass 
-.. _`RPython Typer`: translation.html#rpython-typer 
-.. _`testing methods`: coding-guide.html#testing-in-pypy
-.. _`translation`: translation.html 
-.. _`GenC backend`: translation.html#genc 
-.. _`CLI backend`: cli-backend.html
-.. _`py.py`: getting-started-python.html#the-py.py-interpreter
-.. _`translatorshell.py`: getting-started-dev.html#try-out-the-translator
-.. _JIT: jit/index.html
-.. _`JIT Generation in PyPy`: jit/index.html
-.. _`just-in-time compiler generator`: jit/index.html
-.. _rtyper: rtyper.html
-.. _`low-level type system`: rtyper.html#low-level-type
-.. _`object-oriented type system`: rtyper.html#oo-type
-.. _`garbage collector`: garbage_collection.html
-.. _`Stackless Transform`: translation.html#the-stackless-transform
-.. _`main PyPy-translation scripts`: getting-started-python.html#translating-the-pypy-python-interpreter
-.. _`.NET`: http://www.microsoft.com/net/
-.. _Mono: http://www.mono-project.com/
-.. _`"standard library"`: rlib.html
-.. _`graph viewer`: getting-started-dev.html#try-out-the-translator
-.. _`compatibility matrix`: image/compat-matrix.png
-
-.. include:: _ref.txt
-

diff --git a/pypy/doc/config/objspace.usemodules.parser.txt b/pypy/doc/config/objspace.usemodules.parser.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.parser.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Use the 'parser' module. 
-This is PyPy implementation of the standard library 'parser' module (e.g. if
-this option is enabled and you say ``import parser`` you get this module).
-It is enabled by default.

diff --git a/pypy/doc/cli-backend.txt b/pypy/doc/cli-backend.txt
deleted file mode 100644
--- a/pypy/doc/cli-backend.txt
+++ /dev/null
@@ -1,455 +0,0 @@
-===============
-The CLI backend
-===============
-
-The goal of GenCLI is to compile RPython programs to the CLI virtual
-machine.
-
-
-Target environment and language
-===============================
-
-The target of GenCLI is the Common Language Infrastructure environment
-as defined by the `Standard Ecma 335`_.
-
-While in an ideal world we might suppose GenCLI to run fine with
-every implementation conforming to that standard, we know the world we
-live in is far from ideal, so extra effort may be needed to maintain
-compatibility with more than one implementation.
-
-At the time of writing, the two most popular implementations of the
-standard are supported: Microsoft Common Language Runtime (CLR) and
-Mono.
-
-Then we have to choose how to generate the real executables. There are
-two main alternatives: generating source files in some high level
-language (such as C#) or generating assembly level code in
-Intermediate Language (IL).
-
-The IL approach is much faster during the code generation
-phase, because it doesn't need to call a compiler. By contrast the
-high level approach has two main advantages:
-
-  - the code generation part could be easier because the target
-    language supports high level control structures such as
-    structured loops;
-  
-  - the generated executables take advantage of compiler's
-    optimizations.
-
-In reality the first point is not an advantage in the PyPy context,
-because the `flow graph`_ we start from is quite low level and Python
-loops are already expressed in terms of branches (i.e., gotos).
-
-About the compiler optimizations we must remember that the flow graph
-we receive from earlier stages is already optimized: PyPy implements
-a number of optimizations such as constant propagation and
-dead code removal, so it's not obvious if the compiler could
-do more.
-
-Moreover, by emitting IL instructions we are not constrained to rely on
-compiler choices but can directly choose how to map CLI opcodes: since
-the backend often knows more than the compiler about the context, we
-might expect to produce more efficient code by selecting the most
-appropriate instruction; e.g., we can check for arithmetic overflow
-only when strictly necessary.
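For instance, the overflow point might look like this in a code generator (a sketch; the operation flag and helper are hypothetical, though `add` and `add.ovf` are real IL opcodes):

```python
class CodeGen:
    """Toy code generator collecting emitted IL opcodes."""
    def __init__(self):
        self.instructions = []

    def emit(self, opcode):
        self.instructions.append(opcode)

def emit_int_add(codegen, checks_overflow):
    # Select the checked IL opcode only where the source operation
    # actually demands an overflow check.
    codegen.emit("add.ovf" if checks_overflow else "add")
```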
-
-Last but not least, a reason for choosing the low level approach is
-flexibility in how to get an executable starting from the IL code we
-generate:
-
-  - write IL code to a file, then call the ilasm assembler;
-  
-  - directly generate code on the fly by accessing the facilities
-    exposed by the System.Reflection.Emit API.
-
-
-Handling platform differences
-=============================
-
-Since our goal is to support both Microsoft CLR and Mono, we have to
-handle the differences between the two; in particular the main
-differences are in the names of the helper tools we need to call:
-
-=============== ======== ======
-Tool            CLR      Mono
-=============== ======== ======
-IL assembler    ilasm    ilasm2
-C# compiler     csc      gmcs
-Runtime         ...      mono
-=============== ======== ======
-
-The code that handles these differences is located in the sdk.py
-module: it defines an abstract class which exposes some methods
-returning the name of the helpers and one subclass for each of the two
-supported platforms.
-
-Since Microsoft ``ilasm`` is not capable of compiling the PyPy
-standard interpreter due to its size, on Windows machines we also look
-for an existing Mono installation: if present, we use CLR for
-everything except the assembling phase, for which we use Mono's
-``ilasm2``.
-
-
-Targeting the CLI Virtual Machine
-=================================
-
-In order to write a CLI backend we have to take a number of decisions.
-First, we have to choose the typesystem to use: given that CLI
-natively supports primitives like classes and instances,
-ootypesystem is the most natural choice.
-
-Once the typesystem has been chosen, there are a number of steps we
-have to take to complete the backend:
-
-  - map ootypesystem's types to CLI Common Type System's
-    types;
-  
-  - map ootypesystem's low level operations to CLI instructions;
-  
-  - map Python exceptions to CLI exceptions;
-  
-  - write a code generator that translates a flow graph
-    into a list of CLI instructions;
-  
-  - write a class generator that translates ootypesystem
-    classes into CLI classes.
-
-
-Mapping primitive types
------------------------
-
-The `rtyper`_ gives us a flow graph annotated with types belonging to
-ootypesystem: in order to produce CLI code we need to translate these
-types into their Common Type System equivalents.
-
-For numeric types the conversion is straightforward, since
-there is a one-to-one mapping between the two typesystems, so that
-e.g. Float maps to float64.
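The one-to-one numeric mapping can be expressed as a simple lookup table; the exact entries below are a hypothetical sketch, not the table from the GenCLI sources:

```python
# Hypothetical mapping from ootypesystem primitive type names to CTS
# type names; the real table lives in the GenCLI sources.
OOTYPE_TO_CTS = {
    "Signed": "int32",
    "Unsigned": "unsigned int32",
    "SignedLongLong": "int64",
    "Bool": "bool",
    "Float": "float64",
    "Char": "char",      # both character types map to char:
    "UniChar": "char",   # see the discussion of character types below
}

def cts_name(ootype_name):
    return OOTYPE_TO_CTS[ootype_name]
```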
-
-For character types the choice is more difficult: RPython has two
-distinct types, one for plain ASCII characters (Char) and one for
-Unicode characters (UniChar), while .NET only supports Unicode with
-its char type. There are at least two ways to map plain Char to CTS:
-
-  - map UniChar to char, thus maintaining the original distinction
-    between the two types: this has the advantage of being a
-    one-to-one translation, but has the disadvantage that RPython
-    strings will not be recognized as .NET strings, since they only
-    would be sequences of bytes;
-  
-  - map both Char and UniChar to char, so that Python strings will be
-    treated as strings also by .NET: in this case there could be
-    problems with existing Python modules that use strings as
-    sequences of bytes, such as the built-in struct module, so we need
-    to pay special attention.
-
-We think that mapping Python strings to .NET strings is
-fundamental, so we chose the second option.
-
-Mapping built-in types
-----------------------
-
-As we saw in the previous section, ootypesystem defines a set of types
-that take advantage of built-in types offered by the platform.
-
-For the sake of simplicity we decided to write wrappers
-around .NET classes in order to match the signatures required by
-pypylib.dll:
-
-=================== ===========================================
-ootype              CLI
-=================== ===========================================
-String              System.String
-StringBuilder       System.Text.StringBuilder
-List                System.Collections.Generic.List<T>
-Dict                System.Collections.Generic.Dictionary<K, V>
-CustomDict          pypy.runtime.Dict
-DictItemsIterator   pypy.runtime.DictItemsIterator
-=================== ===========================================
-
-Wrappers exploit inheritance for wrapping the original classes: for
-example, pypy.runtime.List<T> is a subclass of
-System.Collections.Generic.List<T> that provides methods whose names
-match those found in the _GENERIC_METHODS of ootype.List.
-
-The only exception to this rule is the String class, which is not
-wrapped, since in .NET we cannot subclass System.String.  Instead, we
-provide a bunch of static methods in pypylib.dll that implement the
-methods declared by ootype.String._GENERIC_METHODS, then we call them
-by explicitly passing the string object in the argument list.
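The two call styles can be sketched as follows; the helper names, method names and data layout are invented for illustration:

```python
# Illustrative sketch only: wrapped built-in types get ordinary virtual
# calls on the wrapper class, while String methods become static helper
# calls that take the string itself as an extra first argument, because
# System.String cannot be subclassed.  All names are hypothetical.
WRAPPED = {
    "List": "pypy.runtime.List",
    "Dict": "pypy.runtime.Dict",
}

def render_call(ootype_name, meth, n_args):
    if ootype_name == "String":
        # static helper call; the string becomes an explicit argument
        return ("call", "pypy.runtime.String::" + meth, n_args + 1)
    # ordinary virtual call on the wrapper class
    return ("callvirt", WRAPPED[ootype_name] + "::" + meth, n_args)
```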
-
-
-Mapping instructions
---------------------
-
-PyPy's low level operations are expressed in Static Single Information
-(SSI) form, such as this::
-
-    v2 = int_add(v0, v1)
-
-By contrast, the CLI virtual machine is stack based, which means that
-each operation pops its arguments from the top of the stack and
-pushes its result there. The most straightforward way to translate SSI
-operations into stack-based operations is to explicitly load the
-arguments and store the result into the appropriate places::
-
-    LOAD v0
-    LOAD v1
-    int_add
-    STORE v2
-
-The code produced works correctly but has some inefficiency issues
-that can be addressed during the optimization phase.
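This load/operate/store scheme is simple enough to sketch in a few lines of Python (purely illustrative):

```python
# Minimal sketch of translating one SSI operation into stack-based
# instructions: load every argument, emit the operation, store the
# result.
def render_op(result, op, args):
    code = ["LOAD %s" % arg for arg in args]
    code.append(op)
    code.append("STORE %s" % result)
    return code
```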
-
-The CLI Virtual Machine is fairly expressive, so the conversion
-between PyPy's low level operations and CLI instructions is relatively
-simple: many operations map directly to the corresponding instruction,
-e.g. int_add and int_sub.
-
-By contrast, some operations do not have a direct counterpart and have
-to be rendered as a sequence of CLI instructions: this is the case of
-the "less-equal" and "greater-equal" families of operations, which are
-rendered as "greater" or "less", respectively, followed by a boolean
-"not".
-
-Finally, there are some operations that cannot be rendered directly
-without increasing the complexity of the code generator, such as
-int_abs (which returns the absolute value of its argument).  These
-operations are translated by calling some helper function written in
-C#.
-
-The code that implements the mapping is in the module opcodes.py.
-
-Mapping exceptions
-------------------
-
-Both RPython and CLI have their own sets of exception classes: some of
-these are pretty similar; e.g., we have OverflowError,
-ZeroDivisionError and IndexError on the one side and
-OverflowException, DivideByZeroException and IndexOutOfRangeException
-on the other.
-
-The first attempt was to map RPython classes to their corresponding
-CLI ones: this worked for simple cases, but it would have triggered
-subtle bugs in more complex ones, because the two exception
-hierarchies don't completely overlap.
-
-At the moment we've chosen to build an RPython exception hierarchy
-completely independent from the CLI one, but this means that we can't
-rely on exceptions raised by built-in operations.  The currently
-implemented solution is to do an exception translation on-the-fly.
-
-As an example, consider the RPython int_add_ovf operation, which sums
-two integers and raises an OverflowError exception in case of
-overflow. To implement it we can use the built-in add.ovf CLI
-instruction, which raises System.OverflowException when the result
-overflows, catch that exception and throw a new one::
-
-    .try 
-    { 
-        ldarg 'x_0'
-        ldarg 'y_0'
-        add.ovf 
-        stloc 'v1'
-        leave __check_block_2 
-    } 
-    catch [mscorlib]System.OverflowException 
-    { 
-        newobj instance void class OverflowError::.ctor() 
-        throw 
-    } 
-
-
-Translating flow graphs
------------------------
-
-As we saw previously, in PyPy function and method bodies are
-represented by flow graphs that we need to translate into CLI IL
-code. Flow graphs are expressed in a format that is very suitable for
-translation to low level code, so that phase is quite straightforward,
-though the code is a bit involved because we need to take care of
-three different types of blocks.
-
-The code doing this work is located in the Function.render
-method in the file function.py.
-
-First of all, it collects the variable names and types used by each
-block; once they are collected it emits a .locals IL statement that
-tells the virtual machine the number and types of the local variables
-used.
-
-Then it sequentially renders all blocks in the graph, starting from
-the start block; special care is taken for the return block, which is
-always rendered last to meet CLI requirements.
-
-Each block starts with a unique label that is used as a jump target,
-followed by the low level instructions the block is composed of;
-finally there is some code that jumps to the appropriate next block.
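The overall shape of the rendering loop can be sketched like this; the function name and data layout are invented for illustration and are not the actual function.py API:

```python
# Hypothetical sketch of the rendering loop: declare the locals, then
# emit every block (label, body, jumps), keeping the return block last
# as the CLI requires.
def render_function(blocks, return_block):
    out = [".locals init (...)"]  # number and types of local variables
    ordered = [b for b in blocks if b is not return_block] + [return_block]
    for block in ordered:
        out.append(block["label"] + ":")   # unique label, used as jump target
        out.extend(block["instructions"])  # the low level instructions
        out.extend(block["jumps"])         # branch to the next block(s)
    return out
```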
-
-Conditional and unconditional jumps are rendered with their
-corresponding IL instructions: brtrue, brfalse.
-
-Blocks that need to catch exceptions use the native facilities offered
-by the CLI virtual machine: the entire block is surrounded by a .try
-statement followed by as many catch clauses as needed; each catching
-sub-block then branches to the appropriate block::
-
-
-  # RPython
-  try:
-      # block0
-      ...
-  except ValueError:
-      # block1
-      ...
-  except TypeError:
-      # block2
-      ...
-
-  // IL
-  block0: 
-    .try {
-        ...
-        leave block3
-     }
-     catch ValueError {
-        ...
-        leave block1
-      }
-      catch TypeError {
-        ...
-        leave block2
-      }
-  block1:
-      ...
-      br block3
-  block2:
-      ...
-      br block3
-  block3:
-      ...
-
-There is also an experimental feature that makes GenCLI use its own
-exception handling mechanism instead of relying on the .NET
-one. Surprisingly enough, benchmarks are about 40% faster with our own
-exception handling machinery.
-
-
-Translating classes
--------------------
-
-As we saw previously, the semantics of ootypesystem classes
-are very similar to .NET's, so the translation is mostly
-straightforward.
-
-The related code is located in the module class\_.py.  Rendered classes
-are composed of four parts:
-
-  - fields;
-  - user defined methods;
-  - default constructor;
-  - the ToString method, mainly for testing purposes.
-
-Since ootype implicitly assumes all method calls to be late bound, as
-an optimization we search, before rendering the classes, for methods
-that are not overridden in subclasses, and declare as "virtual" only
-the ones that need to be.
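Finding the methods that really need to be virtual amounts to checking whether any subclass overrides them; a minimal sketch, with an invented data layout:

```python
# Illustrative sketch: a method needs to be declared "virtual" only if
# some subclass overrides it.  `classes` maps a class name to the set
# of methods it defines and to its direct subclasses.
def virtual_methods(name, classes):
    def subclasses(name):
        for sub in classes[name]["subclasses"]:
            yield sub
            for nested in subclasses(sub):
                yield nested
    overridden = set()
    for sub in subclasses(name):
        overridden |= classes[sub]["methods"]
    return classes[name]["methods"] & overridden
```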
-
-The constructor does nothing more than calling the base class
-constructor and initializing class fields to their default values.
-
-Inheritance is straightforward too, as it is natively supported by
-CLI. The only noticeable thing is that we map ootypesystem's ROOT
-class to the CLI equivalent System.Object.
-
-The Runtime Environment
------------------------
-
-The runtime environment is a collection of helper classes and
-functions used and referenced by many of the GenCLI submodules. It is
-written in C#, compiled to a DLL (Dynamic Link Library), then linked
-to generated code at compile-time.
-
-The DLL is called pypylib and is composed of three parts:
-
-  - a set of helper functions used to implement complex RPython
-    low-level instructions such as runtimenew and ooparse_int;
-
-  - a set of helper classes wrapping built-in types;
-
-  - a set of helpers used by the test framework.
-
-
-The first two parts are contained in the pypy.runtime namespace, while
-the third is in the pypy.test one.
-
-
-Testing GenCLI
-==============
-
-Like the rest of PyPy, GenCLI is a test-driven project: there is at
-least one unit test for almost every single feature of the
-backend. This development methodology allowed us to discover many
-subtle bugs early and to do some big refactorings of the code with
-confidence that nothing would break.
-
-The core of the testing framework is in the module
-pypy.translator.cli.test.runtest; one of the most important functions
-of this module is compile_function(): it takes a Python function,
-compiles it to CLI and returns a Python object that runs the just
-created executable when called.
-
-This way we can test GenCLI generated code just as if it were a simple
-Python function; we can also directly run the generated executable,
-whose default name is main.exe, from a shell: the function parameters
-are passed as command line arguments, and the return value is printed
-on the standard output::
-
-    # Python source: foo.py
-    from pypy.translator.cli.test.runtest import compile_function
-
-    def foo(x, y):
-        return x+y, x*y
-
-    f = compile_function(foo, [int, int])
-    assert f(3, 4) == (7, 12)
-
-
-    # shell
-    $ mono main.exe 3 4
-    (7, 12)
-
-GenCLI supports only a few RPython types as parameters: int, r_uint,
-r_longlong, r_ulonglong, bool, float and one-length strings (i.e.,
-chars). By contrast, most types are fine for being returned: these
-include all primitive types, lists, tuples and instances.
-
-Installing Python for .NET on Linux
-===================================
-
-With the CLI backend, you can access .NET libraries from RPython;
-programs using .NET libraries will always run when translated, but you
-might also want to test them on top of CPython.
-
-To do so, you can install `Python for .NET`_. Unfortunately, it does
-not work out of the box under Linux.
-
-To make it work, download and unpack the source package of Python
-for .NET; the only version tested with PyPy is 1.0-rc2, but it
-might also work with others. Then, you need to create a file named
-Python.Runtime.dll.config at the root of the unpacked archive; put the
-following lines inside the file (assuming you are using Python 2.4)::
-
-  <configuration>
-    <dllmap dll="python24" target="libpython2.4.so.1.0" os="!windows"/>
-  </configuration>
-
-The installation should be complete now. To run Python for .NET,
-simply type ``mono python.exe``.
-
-
-.. _`Standard Ecma 335`: http://www.ecma-international.org/publications/standards/Ecma-335.htm
-.. _`flow graph`: translation.html#the-flow-model
-.. _`rtyper`: rtyper.html
-.. _`Python for .NET`: http://pythonnet.sourceforge.net/

diff --git a/pypy/doc/config/translation.backendopt.none.txt b/pypy/doc/config/translation.backendopt.none.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.none.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Do not run any backend optimizations.

diff --git a/pypy/doc/config/objspace.usemodules.txt b/pypy/doc/config/objspace.usemodules.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-..  intentionally empty

diff --git a/pypy/doc/config/objspace.usemodules.clr.txt b/pypy/doc/config/objspace.usemodules.clr.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.clr.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Use the 'clr' module. 

diff --git a/pypy/doc/configuration.txt b/pypy/doc/configuration.txt
deleted file mode 100644
--- a/pypy/doc/configuration.txt
+++ /dev/null
@@ -1,194 +0,0 @@
-=============================
-PyPy's Configuration Handling
-=============================
-
-Due to the number of available configuration options, it became quite
-annoying to hand the necessary options down to where they are actually
-used, and even more annoying to add new options. To circumvent these
-problems, configuration management was introduced: all the necessary
-options are stored in a configuration object, which is available
-nearly everywhere in the translation toolchain and in the standard
-interpreter, so that adding new options becomes trivial. Options are
-organized into a tree. Configuration objects can be created in
-different ways; there is support for creating an optparse command line
-parser automatically.
-
-
-Main Assumption
-===============
-
-Configuration objects are produced at the entry points and handed down
-to where they are actually used. This keeps configuration local but
-available everywhere and consistent. The configuration values can be
-set using the command line (already implemented) or a file (still to
-be done).
-
-
-API Details
-===========
-
-The handling of options is split into two parts: the description of
-which options are available, what their possible values and defaults
-are, and how they are organized into a tree; and a specific choice of
-options, bundled into a configuration object that has a reference to
-its option description (and therefore makes sure that the
-configuration values adhere to the option description).
-This split is remotely similar to the distinction between types and
-instances in the type systems of the rtyper: the types describe what
-sort of fields the instances have.
-
-Options are organized in a tree. Every option has a name, as does every
-option group. The parts of the full name of the option are separated by dots:
-e.g. ``config.translation.thread``.
-
-Description of Options
-----------------------
-
-All the constructors take a ``name`` and a ``doc`` argument as first arguments
-to give the option or option group a name and to document it. Most constructors
-take a ``default`` argument that specifies the default value of the option. If
-this argument is not supplied the default value is assumed to be ``None``.
-Most constructors
-also take a ``cmdline`` argument where you can specify what the command line
-option should look like (for example cmdline="-v --version"). If ``cmdline`` is
-not specified a default cmdline option is created that uses the name of the
-option together with its full path. If ``None`` is passed in as ``cmdline`` then
-no command line option is created at all.
-
-Some option types can have requirements, specifying that a particular
-choice for one option works only if a certain choice for another
-option is used. A requirement is specified using a list of pairs: the
-first element of the pair gives the path of the option that is
-required to be set, and the second element gives the required value.
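A toy model of how such requirements could be enforced; this is not PyPy's actual implementation (the real one lives in pypy.config.config), and the option paths in the test are purely illustrative:

```python
# Toy model of option requirements: setting an option to a value
# forces the options required by that value.
class ToyConfig:
    def __init__(self, requires):
        # requires maps a (path, value) pair to a list of
        # (path, value) pairs that the choice implies
        self.requires = requires
        self.values = {}

    def set(self, path, value):
        self.values[path] = value
        for req_path, req_value in self.requires.get((path, value), []):
            self.values[req_path] = req_value
```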
-
-
-``OptionDescription``
-+++++++++++++++++++++
-
-This class is used to group suboptions.
-
-    ``__init__(self, name, doc, children)``
-        ``children`` is a list of option descriptions (including
-        ``OptionDescription`` instances for nested namespaces).
-
-``ChoiceOption``
-++++++++++++++++
-
-Represents a choice out of several objects. The option can also have the value
-``None``.
-
-    ``__init__(self, name, doc, values, default=None, requires=None, cmdline=DEFAULT)``
-        ``values`` is a list of values the option can possibly take,
-        ``requires`` is a dictionary mapping values to lists of two-element
-        tuples.
-
-``BoolOption``
-++++++++++++++
-
-Represents a choice between ``True`` and ``False``. 
-
-    ``__init__(self, name, doc, default=None, requires=None, suggests=None, cmdline=DEFAULT, negation=True)``
-        ``default`` specifies the default value of the option. ``requires`` is
-        a list of two-element tuples describing the requirements when the
-        option is set to true, ``suggests`` is a list of the same structure but
-        the options in there are only suggested, not absolutely necessary. The
-        difference is small: if the current option is set to True, both the
-        required and the suggested options are set. The required options cannot
-        be changed later, though. ``negation`` specifies whether the negative
-        commandline option should be generated.
-
-
-``IntOption``
-+++++++++++++
-
-Represents a choice of an integer.
-
-    ``__init__(self, name, doc, default=None, cmdline=DEFAULT)``
-        
-
-
-``FloatOption``
-+++++++++++++++
-
-Represents a choice of a floating point number.
-
-    ``__init__(self, name, doc, default=None, cmdline=DEFAULT)``
-        
-
-
-``StrOption``
-+++++++++++++
-
-Represents the choice of a string.
-
-    ``__init__(self, name, doc, default=None, cmdline=DEFAULT)``
-        
-
-
-
-Configuration Objects
----------------------
-
-``Config`` objects hold the chosen values for the options (or the defaults,
-if no choice was made). A ``Config`` object is described by an
-``OptionDescription`` instance. The attributes of the ``Config`` objects are the
-names of the children of the ``OptionDescription``. Example::
-
-    >>> from pypy.config.config import OptionDescription, Config, BoolOption
-    >>> descr = OptionDescription("options", "", [
-    ...     BoolOption("bool", "", default=False)])
-    >>>
-    >>> config = Config(descr)
-    >>> config.bool
-    False
-    >>> config.bool = True
-    >>> config.bool
-    True
-
-
-Description of the (useful) methods on ``Config``:
-
-    ``__init__(self, descr, **overrides)``:
-        ``descr`` is an instance of ``OptionDescription`` that describes the
-        configuration object. ``overrides`` can be used to set different default
-        values (see method ``override``).
-
-    ``override(self, overrides)``:
-        override default values. This marks the overridden values as defaults,
-        which makes it possible to change them (you can usually change values
-        only once). ``overrides`` is a dictionary of path strings to values.
-
-    ``set(self, **kwargs)``:
-        "do what I mean"-interface to option setting. Searches all paths
-        starting from that config for matches of the optional arguments and sets
-        the found option if the match is not ambiguous.
-
-
-Production of optparse Parsers
-------------------------------
-
-To produce an optparse parser use the function ``to_optparse``. It will create
-an option parser using callbacks in such a way that the config object used for
-creating the parser is updated automatically.
-
-    ``to_optparse(config, useoptions=None, parser=None)``:
-        Returns an optparse parser.  ``config`` is the configuration object for
-        which to create the parser.  ``useoptions`` is a list of options for
-        which to create command line options. It can contain full paths to
-        options or also paths to an option description plus an additional ".*"
-        to produce command line options for all sub-options of that description.
-        If ``useoptions`` is ``None``, then all sub-options are turned into
-        cmdline options. ``parser`` can be an existing parser object, if
-        ``None`` is passed in, then a new one is created.
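The callback mechanism is the one offered by the standard optparse module; here is a self-contained sketch of the idea behind ``to_optparse``, not the real function (the option path and flag name are invented):

```python
import optparse

# Sketch: an option parser whose callbacks write directly into a
# config-like dictionary, so that parsing the command line updates
# the config object automatically.
config = {"translation.thread": False}

def set_true(option, opt_str, value, parser, path):
    config[path] = True

parser = optparse.OptionParser()
parser.add_option("--thread", action="callback", callback=set_true,
                  callback_args=("translation.thread",),
                  help="enable threading")
parser.parse_args(["--thread"])
```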
-
-
-The usage of config objects in PyPy
-===================================
-
-The two large parts of PyPy, the standard interpreter and the translation
-toolchain, have two separate sets of options. The translation toolchain options
-can be found on the ``config`` attribute of all ``TranslationContext``
-instances and are described in translationoption.py_. The interpreter options
-are attached to the object space, also under the name ``config`` and are
-described in pypyoption.py_.
-
-.. _translationoption.py: ../config/translationoption.py
-.. _pypyoption.py: ../config/pypyoption.py

diff --git a/pypy/doc/config/objspace.usemodules._demo.txt b/pypy/doc/config/objspace.usemodules._demo.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._demo.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Use the '_demo' module. 
-
-This is the demo module for mixed modules. Not enabled by default.

diff --git a/pypy/doc/config/objspace.std.withcelldict.txt b/pypy/doc/config/objspace.std.withcelldict.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withcelldict.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Enable cell-dicts. This optimization is not helpful without the JIT. In the
-presence of the JIT, it greatly helps looking up globals.

diff --git a/pypy/doc/config/translation.backendopt.clever_malloc_removal_heuristic.txt b/pypy/doc/config/translation.backendopt.clever_malloc_removal_heuristic.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.clever_malloc_removal_heuristic.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Internal option. Switch to a different weight heuristic for inlining.
-This is for clever malloc removal (:config:`translation.backendopt.clever_malloc_removal`).
-
-.. internal

diff --git a/pypy/doc/config/objspace.usemodules._pickle_support.txt b/pypy/doc/config/objspace.usemodules._pickle_support.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._pickle_support.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-Use the '_pickle_support' module. 
-Internal helpers for pickling runtime builtin types (frames, cells, etc)
-for `stackless`_ tasklet pickling support.
-.. _`stackless`: ../stackless.html
-
-.. internal

diff --git a/pypy/doc/clr-module.txt b/pypy/doc/clr-module.txt
deleted file mode 100644
--- a/pypy/doc/clr-module.txt
+++ /dev/null
@@ -1,143 +0,0 @@
-===============================
-The ``clr`` module for PyPy.NET
-===============================
-
-PyPy.NET gives you access to the surrounding .NET environment via the
-``clr`` module. This module is still experimental: some features are
-still missing and its interface might change in future versions, but
-it's already useful for experimenting a bit with PyPy.NET.
-
-PyPy.NET provides an import hook that lets you import .NET namespaces
-seamlessly, as if they were normal Python modules.
-
-PyPy.NET native classes try to behave as much as possible in the
-"expected" way both for developers used to .NET and for those used
-to Python.
-
-In particular, the following features are mapped one to one because
-they exist in both worlds:
-
-  - .NET constructors are mapped to the Python __init__ method;
-
-  - .NET instance methods are mapped to Python methods;
-
-  - .NET static methods are mapped to Python static methods (belonging
-    to the class);
-
-  - .NET properties are mapped to property-like Python objects (very
-    similar to the Python ``property`` built-in);
-
-  - .NET indexers are mapped to Python __getitem__ and __setitem__;
-
-  - .NET enumerators are mapped to Python iterators.
-
-Moreover, all the usual Python features such as bound and unbound
-methods are available as well.
-
-Example of usage
-================
-
-Here is an example of interactive session using the ``clr`` module::
-
-    >>>> from System.Collections import ArrayList
-    >>>> obj = ArrayList()
-    >>>> obj.Add(1)
-    0
-    >>>> obj.Add(2)
-    1
-    >>>> obj.Add("foo")
-    2
-    >>>> print obj[0], obj[1], obj[2]
-    1 2 foo
-    >>>> print obj.Count
-    3
-
-Conversion of parameters
-========================
-
-When calling a .NET method, Python objects are converted to .NET
-objects.  Much effort has been put into making the conversion as
-transparent as possible; in particular, all the primitive types
-such as int, float and string are converted to the corresponding .NET
-types (e.g., ``System.Int32``, ``System.Double`` and
-``System.String``).
-
-Python objects without a corresponding .NET type (e.g., instances of
-user classes) are passed as "black boxes", for example to be stored in
-some sort of collection.
-
-The opposite .NET to Python conversion happens for the values returned
-by the methods. Again, primitive types are converted in a
-straightforward way; non-primitive types are wrapped in a Python
-object, so that they can be treated as usual.
-
-Overload resolution
-===================
-
-When calling an overloaded method, PyPy.NET tries to find the best
-overload for the given arguments; for example, consider the
-``System.Math.Abs`` method::
-
-
-    >>>> from System import Math
-    >>>> Math.Abs(-42)
-    42
-    >>>> Math.Abs(-42.0)
-    42.0
-
-``System.Math.Abs`` has overloads for both integers and floats:
-in the first case we call the method ``System.Math.Abs(int32)``, while
-in the second one we call the method ``System.Math.Abs(float64)``.
-
-If the system can't find a best overload for the given parameters, a
-TypeError exception is raised.
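A toy model of this resolution step, picking an overload by the Python types of the arguments (the overload table is illustrative, not PyPy.NET's actual algorithm):

```python
# Toy overload resolution: choose the overload whose signature matches
# the Python types of the arguments, and raise TypeError when nothing
# matches.
OVERLOADS = {
    (int,): "System.Math.Abs(int32)",
    (float,): "System.Math.Abs(float64)",
}

def resolve(args):
    signature = tuple(type(arg) for arg in args)
    try:
        return OVERLOADS[signature]
    except KeyError:
        raise TypeError("no overload matches %r" % (signature,))
```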
-
-
-Generic classes
-================
-
-Generic classes are fully supported.  To instantiate a generic class, you need
-to use the ``[]`` notation::
-
-    >>>> from System.Collections.Generic import List
-    >>>> mylist = List[int]()
-    >>>> mylist.Add(42)
-    >>>> mylist.Add(43)
-    >>>> mylist.Add("foo")
-    Traceback (most recent call last):
-      File "<console>", line 1, in <interactive>
-    TypeError: No overloads for Add could match
-    >>>> mylist[0]
-    42
-    >>>> for item in mylist: print item
-    42
-    43
-
-
-External assemblies and Windows Forms
-=====================================
-
-By default, you can only import .NET namespaces that belong to already
-loaded assemblies.  To load additional .NET assemblies, you can use
-``clr.AddReferenceByPartialName``.  The following example loads
-``System.Windows.Forms`` and ``System.Drawing`` to display a simple Windows
-Form displaying the usual "Hello World" message::
-
-    >>>> import clr
-    >>>> clr.AddReferenceByPartialName("System.Windows.Forms")
-    >>>> clr.AddReferenceByPartialName("System.Drawing")
-    >>>> from System.Windows.Forms import Application, Form, Label
-    >>>> from System.Drawing import Point
-    >>>>
-    >>>> frm = Form()
-    >>>> frm.Text = "The first pypy-cli Windows Forms app ever"
-    >>>> lbl = Label()
-    >>>> lbl.Text = "Hello World!"
-    >>>> lbl.AutoSize = True
-    >>>> lbl.Location = Point(100, 100)
-    >>>> frm.Controls.Add(lbl)
-    >>>> Application.Run(frm)
-
-Unfortunately, at the moment you can't do much more than this with Windows
-Forms, because we still lack support for delegates, so it's not possible
-to handle events.

diff --git a/pypy/doc/config/objspace.allworkingmodules.txt b/pypy/doc/config/objspace.allworkingmodules.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.allworkingmodules.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-This option enables the usage of all modules that are known to be working well
-and that translate without problems.
-
-Note that this option defaults to True (except when running
-``py.py`` because it takes a long time to start).  To force it
-to False, use ``--no-allworkingmodules``.

diff --git a/pypy/doc/config/translation.noprofopt.txt b/pypy/doc/config/translation.noprofopt.txt
deleted file mode 100644

diff --git a/pypy/doc/config/objspace.usemodules.fcntl.txt b/pypy/doc/config/objspace.usemodules.fcntl.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.fcntl.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'fcntl' module. 
-This module is expected to be fully working.

diff --git a/pypy/doc/config/objspace.usemodules.math.txt b/pypy/doc/config/objspace.usemodules.math.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.math.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'math' module. 
-This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/objspace.txt b/pypy/doc/config/objspace.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-..  intentionally empty

diff --git a/pypy/doc/config/objspace.usemodules.array.txt b/pypy/doc/config/objspace.usemodules.array.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.array.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Use interpreter-level version of array module (on by default).

diff --git a/pypy/doc/config/translation.cli.exception_transformer.txt b/pypy/doc/config/translation.cli.exception_transformer.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.cli.exception_transformer.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Use the exception transformer instead of the native .NET exceptions to
-implement RPython exceptions. Enable this option only if you know what
-you are doing.

diff --git a/pypy/doc/getting-started-python.txt b/pypy/doc/getting-started-python.txt
deleted file mode 100644
--- a/pypy/doc/getting-started-python.txt
+++ /dev/null
@@ -1,302 +0,0 @@
-==============================================
-Getting Started with PyPy's Python Interpreter
-==============================================
-
-.. contents::
-.. sectnum::
-
-PyPy's Python interpreter is a very compliant Python
-interpreter implemented in Python.  When translated to C, it passes most of 
-`CPythons core language regression tests`_ and comes with many of the extension
-modules included in the standard library including ``ctypes``. It can run large
-libraries such as Django_ and Twisted_. There are some small behavioral
-differences to CPython and some missing extensions, for details see `CPython
-differences`_.
-
-.. _Django: http://djangoproject.org
-.. _Twisted: http://twistedmatrix.com
-
-.. _`CPython differences`: cpython_differences.html
-
-To actually use PyPy's Python interpreter, the first thing you typically do is
-translate it to get a reasonably performing interpreter. This is described in
-the next section. If you just want to play around a bit, you can also try
-the untranslated `py.py interpreter`_ (which is extremely slow, but still
-fast enough for tiny examples).
-
-Translating the PyPy Python interpreter
----------------------------------------
-
-(**Note**: for some hints on how to translate the Python interpreter under
-Windows, see the `windows document`_)
-
-.. _`windows document`: windows.html
-
-You can translate the whole of PyPy's Python interpreter to low level C code,
-`CLI code`_, or `JVM code`_.
-
-1. Install dependencies.  You need (these are Debian package names,
-   adapt as needed):
-
-   * ``gcc``
-   * ``python-dev``
-   * ``python-ctypes`` if you are still using Python2.4
-   * ``libffi-dev``
-   * ``pkg-config`` (to help us locate libffi files)
-   * ``libz-dev`` (for the optional ``zlib`` module)
-   * ``libbz2-dev`` (for the optional ``bz2`` module)
-   * ``libncurses-dev`` (for the optional ``_minimal_curses`` module)
-   * ``libexpat1-dev`` (for the optional ``pyexpat`` module)
-   * ``libssl-dev`` (for the optional ``_ssl`` module)
-   * ``libgc-dev`` (Boehm: only when translating with `--opt=0, 1` or `size`)
-
-2. Translation is somewhat time-consuming (30 min to
-   over one hour) and RAM-hungry.  If you have less than 1.5 GB of
-   RAM (or a slow machine) you might want to pick the
-   `optimization level`_ `1` in the next step.  A level of
-   `2` or `3` or `jit` gives much better results, though.
-
-   To stress this once more: at ``--opt=1`` you get the Boehm
-   GC, which is kept mostly for historical and testing reasons.
-   You really do not want to pick it: the resulting ``pypy-c`` is
-   slow.
-
-3. Run::
-
-     cd pypy/translator/goal
-     python translate.py --opt=jit targetpypystandalone.py
-
-   possibly replacing ``--opt=jit`` with another `optimization level`_
-   of your choice like ``--opt=2`` if you do not want the included JIT
-   compiler.  (As of March 2010, the default level is ``--opt=2``, and
-   ``--opt=jit`` requires an Intel **32-bit** environment.)
-
-.. _`optimization level`: config/opt.html
-
-If everything works correctly this will create an executable
-``pypy-c`` in the current directory.  Type ``pypy-c --help``
-to see the options it supports - mainly the same basic
-options as CPython.  In addition, ``pypy-c --info`` prints the
-translation options that were used to produce this particular
-executable. The executable behaves mostly like a normal Python interpreter::
-
-    $ ./pypy-c
-    Python 2.5.2 (64177, Apr 16 2009, 16:33:13)
-    [PyPy 1.1.0] on linux2
-    Type "help", "copyright", "credits" or "license" for more information.
-    And now for something completely different: ``this sentence is false''
-    >>>> 46 - 4
-    42
-    >>>> from test import pystone
-    >>>> pystone.main()
-    Pystone(1.1) time for 50000 passes = 2.57
-    This machine benchmarks at 19455.3 pystones/second
-    >>>>
-
-This executable can be moved around or copied to other machines; see
-Installation_ below.  For now a JIT-enabled ``pypy-c`` always produces
-debugging output to stderr when it exits, unless translated with
-``--jit-debug=off``.
-
-The ``translate.py`` script takes a very large number of options controlling
-what to translate and how.  See ``translate.py -h``. Some of the more
-interesting options (but for now incompatible with the JIT) are:
-
-   * ``--stackless``: this produces a pypy-c that includes features
-     inspired by `Stackless Python <http://www.stackless.com>`__.
-
-   * ``--gc=boehm|ref|marknsweep|semispace|generation|hybrid``:
-     choose between using
-     the `Boehm-Demers-Weiser garbage collector`_, our reference
-     counting implementation, or one of four of our own collector
-     implementations (the default depends on the optimization level).
-
-Find a more detailed description of the various options in our `configuration
-sections`_.
-
-.. _`configuration sections`: config/index.html
-
-.. _`translate PyPy with the thunk object space`:
-
-Translating with non-standard options
-++++++++++++++++++++++++++++++++++++++++
-
-It is possible to have non-standard features enabled for translation,
-but they are not really tested any more.  Look for example at the
-`objspace proxies`_ document.
-
-.. _`objspace proxies`: objspace-proxies.html
-
-.. _`CLI code`: 
-
-Translating using the CLI backend
-+++++++++++++++++++++++++++++++++
-
-To create a standalone .NET executable using the `CLI backend`_::
-
-    ./translate.py --backend=cli targetpypystandalone.py
-
-Or better, try out the experimental `branch/cli-jit`_ described by
-Antonio Cuni's `Ph.D. thesis`_ and translate with the JIT::
-
-    ./translate.py -Ojit --backend=cli targetpypystandalone.py
-
-.. _`branch/cli-jit`: http://codespeak.net/svn/pypy/branch/cli-jit/
-.. _`Ph.D. thesis`: http://codespeak.net/svn/user/antocuni/phd/thesis/thesis.pdf
-
-The executable and all its dependencies will be stored in the
-./pypy-cli-data directory. To run pypy.NET, you can run
-./pypy-cli-data/main.exe. If you are using Linux or Mac, you can use
-the convenience ./pypy-cli script::
-
-    $ ./pypy-cli
-    Python 2.5.2 (64219, Apr 17 2009, 13:54:38)
-    [PyPy 1.1.0] on linux2
-    Type "help", "copyright", "credits" or "license" for more information.
-    And now for something completely different: ``distopian and utopian chairs''
-    >>>> 
-
-Moreover, at the moment it's not possible to do the full translation
-using only the tools provided by the Microsoft .NET SDK, since
-``ilasm`` crashes when trying to assemble the pypy-cli code due to its
-size.  Microsoft .NET SDK 2.0.50727.42 is affected by this bug; other
-versions could be affected as well: if you find a version of the SDK
-that works, please tell us.
-
-Windows users who want to compile their own pypy-cli can install
-Mono_: if a Mono installation is detected, the translation toolchain
-will automatically use its ``ilasm2`` tool to assemble the
-executables.
-
-To try out the experimental .NET integration, check the documentation of the
-clr_ module.
-
-.. _`JVM code`: 
-
-Translating using the JVM backend
-+++++++++++++++++++++++++++++++++
-
-To create a standalone JVM executable::
-
-    ./translate.py --backend=jvm targetpypystandalone.py
-
-This will create a jar file ``pypy-jvm.jar`` as well as a convenience
-script ``pypy-jvm`` for executing it.  To try it out, simply run
-``./pypy-jvm``::
-
-    $ ./pypy-jvm 
-    Python 2.5.2 (64214, Apr 17 2009, 08:11:23)
-    [PyPy 1.1.0] on darwin
-    Type "help", "copyright", "credits" or "license" for more information.
-    And now for something completely different: ``# assert did not crash''
-    >>>> 
-
-Alternatively, you can run it using ``java -jar pypy-jvm.jar``. At the moment
-the executable does not provide any interesting features, like integration with
-Java.
-
-Installation
-++++++++++++
-
-A prebuilt ``pypy-c`` can be installed in a standard location like
-``/usr/local/bin``, although some details of this process are still in
-flux.  It can also be copied to other machines as long as their system
-is "similar enough": some details of the system on which the translation
-occurred might be hard-coded in the executable.
-
-For installation purposes, note that the executable needs to be able to
-find its version of the Python standard library in the following three
-directories: ``lib-python/2.5.2``, ``lib-python/modified-2.5.2`` and
-``lib_pypy``.  They are located by "looking around" starting from the
-directory in which the executable resides.  The current logic is to try
-to find a ``PREFIX`` from which the directories
-``PREFIX/lib-python/2.5.2`` and ``PREFIX/lib-python/modified-2.5.2`` and
-``PREFIX/lib_pypy`` can all be found.  The prefixes that are tried are::
-
-    .
-    ./lib/pypy1.2
-    ..
-    ../lib/pypy1.2
-    ../..
-    ../../lib/pypy1.2
-    ../../..
-    etc.
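The lookup described above can be sketched roughly as follows. This is a simplified illustration only; the function name and the exact search order are assumptions, not the actual code in PyPy:

```python
import os

def find_prefix(executable_dir, version="2.5.2", libdir="lib/pypy1.2"):
    """Walk upward from the directory containing the executable,
    trying each candidate PREFIX until one contains all three
    standard library directories."""
    current = os.path.abspath(executable_dir)
    while True:
        for candidate in (current, os.path.join(current, libdir)):
            if all(os.path.isdir(os.path.join(candidate, d))
                   for d in ("lib-python/" + version,
                             "lib-python/modified-" + version,
                             "lib_pypy")):
                return candidate
        parent = os.path.dirname(current)
        if parent == current:   # reached the filesystem root: give up
            return None
        current = parent
```

With a layout such as ``PREFIX/bin/pypy-c``, the first successful candidate is ``PREFIX`` itself after one step upward.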
-
-In order to use ``distutils`` or ``setuptools``, a directory ``PREFIX/site-packages`` needs to be created. Here's an example session setting up and using ``easy_install``::
-
-    $ cd PREFIX
-    $ mkdir site-packages
-    $ curl -sO http://peak.telecommunity.com/dist/ez_setup.py
-    $ bin/pypy-c ez_setup.py
-    ...
-    $ bin/easy_install WebOb
-    $ bin/pypy-c           
-    Python 2.5.2 (64714, Apr 27 2009, 08:16:13)
-    [PyPy 1.1.0] on linux2
-    Type "help", "copyright", "credits" or "license" for more information.
-    And now for something completely different: ``PyPy doesn't have copolyvariadic dependently-monomorphed hyperfluxads''
-    >>>> import webob
-    >>>>               
-
-.. _`py.py interpreter`:
-
-Running the Python Interpreter Without Translation
----------------------------------------------------
-
-The py.py interpreter
-+++++++++++++++++++++
-
-To start interpreting Python with PyPy, install a C compiler that is
-supported by distutils and use Python 2.4 or greater to run PyPy::
-
-    cd pypy
-    python bin/py.py
-
-After a few seconds (remember: this is running on top of CPython), 
-you should be at the PyPy prompt, which is the same as the Python 
-prompt, but with an extra ">".
-
-Now you are ready to start running Python code.  Most Python
-modules should work if they don't involve CPython extension 
-modules.  **This is slow, and most C modules are not present by
-default even if they are standard!**  Here is an example of
-determining PyPy's performance in pystones:: 
-
-    >>>> from test import pystone 
-    >>>> pystone.main(10)
-
-The parameter is the number of loops to run through the test. The
-default is 50000, which is far too many to run in a non-translated
-PyPy version (i.e. when PyPy's interpreter itself is being interpreted 
-by CPython).
-
-py.py options
-+++++++++++++
-
-To list the PyPy interpreter command line options, type::
-
-    cd pypy
-    python bin/py.py --help
-
-py.py supports most of the options that CPython supports (in addition to a
-large number of options that can be used to customize py.py).
-As an example of using PyPy from the command line, you could type::
-
-    python py.py -c "from test import pystone; pystone.main(10)"
-
-Alternatively, as with regular Python, you can simply give a
-script name on the command line::
-
-    python py.py ../../lib-python/2.5.2/test/pystone.py 10
-
-See our `configuration sections`_ for details about what all the command-line
-options do.
-
-
-.. _Mono: http://www.mono-project.com/Main_Page
-.. _`CLI backend`: cli-backend.html
-.. _`Boehm-Demers-Weiser garbage collector`: http://www.hpl.hp.com/personal/Hans_Boehm/gc/
-.. _clr: clr-module.html
-.. _`CPythons core language regression tests`: http://codespeak.net:8099/summary?category=applevel&branch=%3Ctrunk%3E
-
-.. include:: _ref.txt

diff --git a/pypy/doc/config/translation.builtins_can_raise_exceptions.txt b/pypy/doc/config/translation.builtins_can_raise_exceptions.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.builtins_can_raise_exceptions.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Internal option.
-
-.. internal

diff --git a/pypy/doc/discussion/summer-of-pypy-pytest.txt b/pypy/doc/discussion/summer-of-pypy-pytest.txt
deleted file mode 100644
--- a/pypy/doc/discussion/summer-of-pypy-pytest.txt
+++ /dev/null
@@ -1,56 +0,0 @@
-============================================
-Summer of PyPy proposal: Distributed py.test
-============================================
-
-
-Purpose:
-========
-
-The main purpose of distributing py.test is to speed up testing
-of actual applications (running all the PyPy tests already takes
-ages).
-
-Method:
-=======
-
-Remote imports:
----------------
-
-At the beginning of the communication, the master server sends the
-client an import hook, which can then import all the needed libraries.
-
-Libraries are uploaded from server to client as they are needed (when
-__import__ is called). A possible extension is to add some kind of
-checksum (md5?) and store the files in some directory.
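A minimal sketch of such an import hook, assuming a ``channel`` transport object with a ``request(name)`` method that returns module source or ``None`` (a hypothetical API invented for this sketch, not part of py.test):

```python
import sys
import types

class RemoteImporter(object):
    """Sketch of a PEP 302 finder/loader: ask the master server for a
    module's source on demand and execute it locally."""

    def __init__(self, channel):
        self.channel = channel
        self.sources = {}

    def find_module(self, fullname, path=None):
        source = self.channel.request(fullname)
        if source is None:
            return None           # let the normal import machinery try
        self.sources[fullname] = source
        return self

    def load_module(self, fullname):
        mod = sys.modules.setdefault(fullname, types.ModuleType(fullname))
        mod.__loader__ = self
        exec(self.sources[fullname], mod.__dict__)
        return mod

# The client would install it once, before running any tests:
#     sys.meta_path.append(RemoteImporter(channel))
```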
-
-Previous experiments:
----------------------
-
-Previous experiments tried to run at the lowest level - when a function
-or method is called. This is conceptually clean (you run as little code
-on the client side as possible), but it has some drawbacks:
-
-- You must simulate *everything* and transfer it to the server side
-  whenever absolutely anything is needed (tracebacks, short and long,
-  source code, etc.)
-- It is sometimes hard to catch exceptions.
-- Top-level code in the testing module does not work at all.
-
-Possible approach:
-------------------
-
-On the client side (the side actually running the tests), run some kind
-of cut-down session, which is imported via the remote import at the very
-beginning; after that, we run the desired tests (probably by importing
-the whole test file, which allows us to have top-level imports).
-
-Then we transfer the output data to the server as a string, possibly
-tweaking file names (which is quite easy).
-
-Deliverables:
-=============
-
-- better use of testing machines
-- reduced test time
-- a possible extension to distributed code testing, by running and
-  controlling several distributed parts on different machines.

diff --git a/pypy/doc/config/translation.sandbox.txt b/pypy/doc/config/translation.sandbox.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.sandbox.txt
+++ /dev/null
@@ -1,15 +0,0 @@
-Generate a special fully-sandboxed executable.
-
-The fully-sandboxed executable cannot be run directly, but
-only as a subprocess of an outer "controlling" process.  The
-sandboxed process is "safe" in the sense that it doesn't do
-any library or system call - instead, whenever it would like
-to perform such an operation, it marshals the operation name
-and the arguments to its stdout and it waits for the
-marshalled result on its stdin.  This controller process must
-handle these operation requests, in any way it likes, allowing
-full virtualization.
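The request/response loop on the controller side could be sketched like this. This is only a toy illustration of the idea; the real wire format and operation names differ, and the operation name used below is invented:

```python
import marshal

def handle_requests(child_stdout, child_stdin, handlers):
    """Read marshalled (operation name, arguments) requests coming
    from the sandboxed process, dispatch each one to a policy
    function chosen by the controller, and marshal the result back."""
    while True:
        try:
            request = marshal.load(child_stdout)
        except EOFError:
            break                      # sandboxed process exited
        opname, args = request
        result = handlers[opname](*args)
        marshal.dump(result, child_stdin)
        child_stdin.flush()
```

A controller built this way can virtualize the OS completely: the ``handlers`` table decides, per operation, what the sandboxed process is allowed to see.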
-
-For examples of controller processes, see
-``pypy/translator/sandbox/interact.py`` and
-``pypy/translator/sandbox/pypy_interact.py``.

diff --git a/pypy/doc/config/translation.backendopt.raisingop2direct_call.txt b/pypy/doc/config/translation.backendopt.raisingop2direct_call.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.raisingop2direct_call.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Internal option. Transformation required by the LLVM backend.
-
-.. internal

diff --git a/pypy/doc/config/objspace.usemodules._winreg.txt b/pypy/doc/config/objspace.usemodules._winreg.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._winreg.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the built-in '_winreg' module, provides access to the Windows registry.
-This module is expected to be working and is included by default on Windows.

diff --git a/pypy/doc/config/objspace.usemodules._minimal_curses.txt b/pypy/doc/config/objspace.usemodules._minimal_curses.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._minimal_curses.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the '_curses' module.
-This module is just a stub.  It only implements a few functions.

diff --git a/pypy/doc/glossary.txt b/pypy/doc/glossary.txt
deleted file mode 100644
--- a/pypy/doc/glossary.txt
+++ /dev/null
@@ -1,237 +0,0 @@
-PyPy, like any large project, has developed a jargon of its own.  This
-document gives brief definitions of some of these terms and provides
-links to more information.
-
-**abstract interpretation**
-    The technique of interpreting the bytecode of a user program with
-    an interpreter that handles abstract objects instead of concrete ones.
-    It can be used to check the bytecode or see what it does, without
-    actually executing it with concrete values.  See Theory_.
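As a toy illustration (not PyPy's actual machinery), one can "run" an expression over types instead of concrete values:

```python
def abstract_binary_add(a, b):
    # With concrete values, int + int gives an int; abstractly, we
    # record only that fact and forget the values themselves.
    if a is int and b is int:
        return int
    return object          # top: "some object, we know nothing more"

def abstract_eval(expr, env):
    """Evaluate a nested ('add', x, y) expression tree abstractly,
    where string leaves name variables bound to abstract types."""
    if isinstance(expr, str):
        return env[expr]
    op, left, right = expr
    assert op == 'add'
    return abstract_binary_add(abstract_eval(left, env),
                               abstract_eval(right, env))
```

Abstractly executing ``x + y`` with both variables bound to ``int`` yields ``int`` without ever knowing the values.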
-
-.. _annotator:
-
-**annotator**
-    The component of the translator_\ 's toolchain_ that performs a form
-    of `type inference`_ on the flow graph. See the `annotator pass`_
-    in the documentation.
-
-.. _`application level`:
-
-**application level**
-    applevel_ code is normal Python code running on top of the PyPy or
-    CPython_ interpreter (see `interpreter level`_)
-
-.. _backend:
-
-**backend**
-    Code generator that converts an `RPython
-    <coding-guide.html#restricted-python>`__ program to a `target
-    language`_ using the PyPy toolchain_. A backend uses either the
-    lltypesystem_ or the ootypesystem_.
-
-.. _`compile-time`:
-
-**compile-time**
-    In the context of the JIT_, compile time is when the JIT is
-    generating machine code "just in time".
-
-.. _CPython:
-
-**CPython**
-    The "default" implementation of Python, written in C and
-    distributed by the PSF_ on http://www.python.org.
-
-.. _`external function`:
-
-**external function**
-    Functions that we don't want to implement in Python for various
-    reasons (e.g. they need to make calls into the OS) and whose
-    implementation will be provided by the backend.
-
-.. _`garbage collection framework`:
-
-**garbage collection framework**
-    Code that makes it possible to write `PyPy's garbage collectors`_
-    in Python itself.
-
-.. _`interpreter level`:
-
-**interpreter level**
-    Code running at this level is part of the implementation of the
-    PyPy interpreter and cannot interact normally with `application
-    level`_ code; it typically provides implementation for an object
-    space and its builtins.
-
-.. _`jit`:
-
-**jit**
-  `just in time compiler`_.
-
-.. _llinterpreter:
-
-**llinterpreter**
-   Piece of code that is able to interpret flow graphs.  This is very
-   useful for testing purposes, especially if you work on the RPython_
-   Typer.
-
-.. _lltypesystem:
-
-**lltypesystem**
-   A `C-like type model <rtyper.html#low-level-types>`__ that contains
-   structs and pointers.  A backend_ that uses this type system is also
-   called a low-level backend.  The C backend uses this
-   typesystem.
-
-.. _`low-level helper`:
-
-**low-level helper**
-    A function that the RTyper_ can call as part of implementing
-    some operation in terms of the target `type system`_.
-
-.. _`mixed module`:
-
-**mixed module**
-  a module that accesses PyPy's `interpreter level`_.  The name comes
-  from the fact that the module's implementation can be a mixture of
-  `application level`_ and `interpreter level`_ code.
-
-.. _`object space`:
-
-**multimethod**
-   A callable object that invokes a different Python function based
-   on the type of all its arguments (instead of just the class of the
-   first argument, as with normal methods).  See Theory_.
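A minimal sketch of the idea (PyPy's real multimethod implementation is considerably more elaborate):

```python
class MultiMethod(object):
    """Dispatch on the types of *all* arguments, not just the first."""

    def __init__(self):
        self.table = {}

    def register(self, *types):
        def decorator(func):
            self.table[types] = func
            return func
        return decorator

    def __call__(self, *args):
        # Look up the implementation matching the full type signature.
        return self.table[tuple(type(a) for a in args)](*args)

add = MultiMethod()

@add.register(int, int)
def add_ints(a, b):
    return a + b

@add.register(str, int)
def add_str_int(s, n):
    return s + str(n)
```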
-
-**object space**
-   The `object space <objspace.html>`__ (often abbreviated to
-   "objspace") creates all objects and knows how to perform operations
-   on the objects. You may think of an object space as being a library
-   offering a fixed API, a set of operations, with implementations
-   that a) correspond to the known semantics of Python objects, b)
-   extend or twist these semantics, or c) serve whole-program analysis
-   purposes.
-
-.. _ootypesystem:
-
-**ootypesystem**
-   An `object oriented type model <rtyper.html#object-oriented-types>`__
-   containing classes and instances.  A backend_ that uses this type system
-   is also called a high-level backend.  The JVM and CLI backends
-   both use this typesystem.
-
-.. _`prebuilt constant`:
-
-**prebuilt constant**
-   In RPython_, module globals are considered constants.  Moreover,
-   global (i.e. prebuilt) lists and dictionaries are supposed to be
-   immutable ("prebuilt constant" is sometimes abbreviated to "pbc").
-
-.. _`rpython`:
-
-.. _`promotion`:
-
-**promotion**
-   JIT_ terminology.  *promotion* is a way of "using" a `run-time`_
-   value at `compile-time`_, essentially by deferring compilation
-   until the run-time value is known. See if `the jit docs`_ help.
-
-**rpython**
-   `Restricted Python`_, a limited subset of the Python_ language.
-   The limitations make `type inference`_ possible.
-   It is also the language that the PyPy interpreter itself is written
-   in.
-
-.. _`rtyper`:
-
-**rtyper**
-   Based on the type annotations, the `RPython Typer`_ turns the flow
-   graph into one that fits the model of the target platform/backend_
-   using either the lltypesystem_ or the ootypesystem_.
-
-.. _`run-time`:
-
-**run-time**
-   In the context of the JIT_, run time is when the code the JIT has
-   generated is executing.
-
-.. _`specialization`:
-
-**specialization**
-   A way of controlling how a specific function is handled by the
-   annotator_.  One specialization is to treat calls to a function
-   with different argument types as if they were calls to different
-   functions with identical source.
-
-.. _`stackless`:
-
-**stackless**
-    Technology that enables various forms of non-conventional control
-    flow, such as coroutines, greenlets and tasklets.  Inspired by
-    Christian Tismer's `Stackless Python <http://www.stackless.com>`__.
-
-.. _`standard interpreter`:
-
-**standard interpreter**
-   The `subsystem implementing the Python language`_, composed
-   of the bytecode interpreter and of the standard objectspace.
-
-.. _toolchain:
-
-**timeshifting**
-   JIT_ terminology.  *timeshifting* is to do with moving from the
-   world where there are only `run-time`_ operations to a world where
-   there are both `run-time`_ and `compile-time`_ operations.
-
-**toolchain**
-   The `annotator pass`_, `The RPython Typer`_, and various
-   `backends`_.
-
-.. _`transformation`:
-
-**transformation**
-   Code that modifies flowgraphs to weave in `translation-aspects`_.
-
-.. _`translation-time`:
-
-**translation-time**
-   In the context of the JIT_, translation time is when the PyPy
-   source is being analyzed and the JIT itself is being created.
-
-.. _`translator`:
-
-**translator**
-  Tool_ based on the PyPy interpreter which can translate
-  sufficiently static Python programs into low-level code.
-
-.. _`type system`:
-
-**type system**
-    The RTyper can target either the lltypesystem_ or the ootypesystem_.
-
-.. _`type inference`:
-
-**type inference**
-   Deduces either partially or fully the type of expressions as
-   described in this `type inference article on Wikipedia`_.
-   The PyPy toolchain's own flavour of type inference is described
-   in the `annotator pass`_ section.
-
-.. _applevel: coding-guide.html#application-level
-.. _`target language`: getting-started-dev.html#trying-out-the-translator
-.. _`just in time compiler`: jit/index.html
-.. _`the jit docs`: jit/index.html
-.. _`type inference article on Wikipedia`: http://en.wikipedia.org/wiki/Type_inference
-.. _`annotator pass`: translation.html#the-annotation-pass
-.. _`The RPython Typer`: translation.html#the-rpython-typer
-.. _`backends`: getting-started-dev.html#trying-out-the-translator
-.. _Tool: getting-started-dev.html#trying-out-the-translator
-.. _`translation-aspects`: translation-aspects.html
-.. _`PyPy's garbage collectors`: garbage_collection.html
-.. _`Restricted Python`: coding-guide.html#restricted-python
-.. _PSF: http://www.python.org/psf/
-.. _Python: http://www.python.org
-.. _`RPython Typer`: rtyper.html
-.. _`subsystem implementing the Python language`: architecture.html#standard-interpreter
-.. _Theory: theory.html
-
-.. include:: _ref.txt

diff --git a/pypy/doc/config/translation.ootype.mangle.txt b/pypy/doc/config/translation.ootype.mangle.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.ootype.mangle.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Mangle the names of user defined attributes of the classes, in order
-to ensure that every name is unique. Default is true, and it should
-not be turned off unless you know what you are doing.

diff --git a/pypy/doc/discussion/security-ideas.txt b/pypy/doc/discussion/security-ideas.txt
deleted file mode 100644
--- a/pypy/doc/discussion/security-ideas.txt
+++ /dev/null
@@ -1,312 +0,0 @@
-==============
-Security ideas
-==============
-
-These are some notes I (Armin) took after a talk at Chalmers by Steve
-Zdancewic: "Encoding Information Flow in Haskell".  That talk was
-presenting a pure Haskell approach with monad-like constructions; I
-think that the approach translates well to PyPy at the level of RPython.
-
-
-The problem
------------
-
-The problem that we try to solve here is: how to give the programmer a
-way to write programs that are easily checked to be "secure", in the
-sense that bugs shouldn't allow confidential information to be
-unexpectedly leaked.  This is not security as in defeating actively
-malicious attackers.
-
-
-Example
--------
-
-Let's suppose that we want to write a telnet-based application for a
-bidding system.  We want normal users to be able to log in with their
-username and password, and place bids (i.e. type in an amount of money).
-The server should record the highest bid so far but not allow users to
-see that number.  Additionally, the administrator should be able to log
-in with his own password and see the highest bid.  The basic program::
-
-    def mainloop():
-        while True:
-            username = raw_input()
-            password = raw_input()
-            user = authenticate(username, password)
-            if user == 'guest':
-                serve_guest()
-            elif user == 'admin':
-                serve_admin()
-
-    def serve_guest():
-        global highest_bid
-        print "Enter your bid:"
-        n = int(raw_input())
-        if n > highest_bid:     #
-            highest_bid = n     #
-        print "Thank you"
-
-    def serve_admin():
-        print "Highest bid is:", highest_bid
-
-The goal is to make this program more secure by declaring and enforcing
-the following properties: first, the guest code is allowed to manipulate
-the highest_bid, as in the lines marked with ``#``, but these lines must
-not leak back the highest_bid in a form visible to the guest user;
-second, the printing in serve_admin() must only be allowed if the user
-that logged in is really the administrator (e.g. catch bugs like
-accidentally swapping the serve_guest() and serve_admin() calls in
-mainloop()).
-
-
-Preventing leak of information in guest code: 1st try
------------------------------------------------------
-
-The basic technique to prevent leaks is to attach "confidentiality
-level" tags to objects.  In this example, the highest_bid int object
-would be tagged with label="secret", e.g. by being initialized as::
-
-    highest_bid = tag(0, label="secret")
-
-At first, we can think about an object space where all objects have such
-a label, and the label propagates to operations between objects: for
-example, code like ``highest_bid += 1`` would produce a new int object
-with again label="secret".
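A toy model of this propagation rule (illustrative only, not an actual object space):

```python
class Tagged(object):
    """A value carrying a confidentiality label; operations return a
    result labelled with the more secret of the operands' labels."""

    LEVELS = {"public": 0, "secret": 1}

    def __init__(self, value, label="public"):
        self.value = value
        self.label = label

    def __add__(self, other):
        if not isinstance(other, Tagged):
            other = Tagged(other)         # plain values are public
        label = max(self.label, other.label, key=self.LEVELS.get)
        return Tagged(self.value + other.value, label)

def tag(value, label):
    return Tagged(value, label)
```

With this model, ``tag(0, label="secret") + 1`` yields a value that is again labelled ``"secret"``.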
-
-Where this approach doesn't work is with if/else or loops.  In the above
-example, we do::
-
-        if n > highest_bid:
-            ...
-
-However, by the object space rules introduced above, the result of the
-comparison is a "secret" bool object.  This means that the guest code
-cannot know if it is True or False, and so the PyPy interpreter has no
-clue whether it must follow the ``then`` or ``else`` branch of the ``if``.
-So the guest code could do ``highest_bid += 1`` and probably even
-``highest_bid = max(highest_bid, n)`` if max() is a clever enough
-built-in function, but clearly this approach doesn't work well for more
-complicated computations that we would like to perform at this point.
-
-There might be cool ways to solve this by doing some kind of
-just-in-time flow object space analysis.  However, here is a
-possibly more practical approach.  Let's forget about the object space
-tricks and start again.  (See `Related work`_ for why the object space
-approach doesn't work too well.)
-
-
-Preventing leak of information in guest code with the annotator instead
------------------------------------------------------------------------
-
-Suppose that the program runs on top of CPython and not necessarily
-PyPy.  We will only need PyPy's annotator.  The idea is to mark the code
-that manipulates highest_bid explicitly, and make it RPython in the
-sense that we can take its flow space and follow the calls (we don't
-care about the precise types here -- we will use different annotations).
-Note that only the bits that manipulate the secret values need to be
-RPython.  Example::
-
-    # on top of CPython, 'hidden' is a type that hides a value without
-    # giving any way to normal programs to access it, so the program
-    # cannot do anything with 'highest_bid'
-
-    highest_bid = hidden(0, label="secure")
-
-    def enter_bid(n):
-        if n > highest_bid.value:
-            highest_bid.value = n
-
-    enter_bid = secure(enter_bid)
-
-    def serve_guest():
-        print "Enter your bid:"
-        n = int(raw_input())
-        enter_bid(n)
-        print "Thank you"
-
-The point is that the expression ``highest_bid.value`` raises a
-SecurityException when run normally: it is not allowed to read this
-value.  The secure() decorator uses the annotator on the enter_bid()
-function, with special annotations that I will describe shortly.  Then
-secure() returns a "compiled" version of enter_bid.  The compiled
-version is checked to satisfy the security constraints, and it contains
-special code that then enables ``highest_bid.value`` to work.
-
-The annotations propagated by secure() are ``SomeSecurityLevel``
-annotations.  Normal constants are propagated as
-SomeSecurityLevel("public").  The ``highest_bid.value`` returns the
-annotation SomeSecurityLevel("secret"), which is the label of the
-constant ``highest_bid`` hidden object.  We define operations between
-two SomeSecurityLevels to return a SomeSecurityLevel which is the max of
-the secret levels of the operands.
-
-The key point is that secure() checks that the return value is
-SomeSecurityLevel("public").  It also checks that only
-SomeSecurityLevel("public") values are stored e.g. in global data
-structures.
-
-In this way, any CPython code like serve_guest() can safely call
-``enter_bid(n)``.  There is no way to leak information about the current
-highest bid back out of the compiled enter_bid().
-
-
-Declassification
-----------------
-
-Now there must be a controlled way to leak the highest_bid value,
-otherwise it is impossible even for the admin to read it.  Note that
-serve_admin(), which prints highest_bid, is considered to "leak" this
-value because it is an input-output, i.e. it escapes the program.  This
-is a leak that we actually want -- the terminology is that serve_admin()
-must "declassify" the value.
-
-To do this, there is a capability-like model that is easy to implement
-for us.  Let us modify the main loop as follows::
-
-    def mainloop():
-        while True:
-            username = raw_input()
-            password = raw_input()
-            user, priviledge_token = authenticate(username, password)
-            if user == 'guest':
-                serve_guest()
-            elif user == 'admin':
-                serve_admin(priviledge_token)
-            del priviledge_token   # make sure nobody else uses it
-
-The idea is that the authenticate() function (shown later) also returns
-a "token" object.  This is a normal Python object, but it should not be
-possible for normal Python code to instantiate such an object manually.
-In this example, authenticate() returns a ``priviledge("public")`` for
-guests, and a ``priviledge("secret")`` for admins.  Now -- and this is
-the insecure part of this scheme, but it is relatively easy to control
--- the programmer must make sure that these priviledge_token objects
-don't go to unexpected places, particularly the "secret" one.  They work
-like capabilities: having a reference to them allows parts of the
-program to see secret information, of a confidentiality level up to the
-one corresponding to the token.
-
-Now we modify serve_admin() as follows::
-
-    def serve_admin(token):
-        print "Highest bid is:", declassify(highest_bid, token=token)
-
-The declassify() function reads the value if the "token" is privileged
-enough, and raises an exception otherwise.
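A toy model of declassify() and the tokens; the class names ``Privilege`` and ``Hidden`` are invented for this sketch (the text spells the token "priviledge"):

```python
LEVELS = {"public": 0, "secret": 1}

class SecurityException(Exception):
    pass

class Privilege(object):
    """The token object returned by authenticate()."""
    def __init__(self, level):
        self.level = level

class Hidden(object):
    """A value hidden behind a confidentiality label."""
    def __init__(self, value, label):
        self.value = value
        self.label = label

def declassify(hidden, token):
    # Release the value only if the token's privilege level is at
    # least as high as the value's confidentiality label.
    if LEVELS[token.level] < LEVELS[hidden.label]:
        raise SecurityException("token not privileged enough")
    return hidden.value
```

Swapping the serve_guest() and serve_admin() calls would hand serve_admin() the "public" token, and the declassify() call would then raise instead of leaking the bid.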
-
-What are we protecting here?  The fact that we need the administrator
-token in order to see the highest bid.  If by mistake we swap the
-serve_guest() and serve_admin() lines in mainloop(), then what occurs is
-that serve_admin() would be called with the guest token.  Then
-declassify() would fail.  If we assume that authenticate() is not buggy,
-then the rest of the program is safe from leak bugs.
-
-There are other variants of declassify() that are convenient.  For
-example, in the RPython parts of the code, declassify() can be used to
-control more precisely at which confidentiality levels we want which
-values, if there are more than just two such levels.  The "token"
-argument could also be implicit in RPython parts, meaning "use the
-current level"; normal non-RPython code always runs at "public" level,
-but RPython functions could run with higher current levels, e.g. if they
-are called with a "token=..." argument.
-
-(Do not confuse this with what enter_bid() does: enter_bid() runs at the
-public level all along.  It is ok for it to compute with, and even
-modify, the highest_bid.value.  The point of enter_bid() was that by
-being an RPython function the annotator can make sure that the value, or
-even anything that gives a hint about the value, cannot possibly escape
-from the function.)
-
-It is also useful to have "globally trusted" administrator-level RPython
-functions that always run at a higher level than the caller, a bit like
-Unix programs with the "suid" bit.  If we set aside the consideration
-that it should not be possible to make new "suid" functions too easily,
-then we could define the authenticate() function of our server example
-as follows::
-
-    def authenticate(username, password):
-        database = {('guest', 'abc'): priviledge("public"),
-                    ('admin', '123'): priviledge("secret")}
-        token_obj = database[username, password]
-        return username, declassify(token_obj, target_level="public")
-
-    authenticate = secure(authenticate, suid="secret")
-
-The "suid" argument makes the compiled function run on level "secret"
-even if the caller is "public" or plain CPython code.  The declassify()
-in the function is allowed because of the current level of "secret".
-Note that the function returns a "public" tuple -- the username is
-public, and the token_obj is declassified to public.  This is the
-property that allows CPython code to call it.
-
-Of course, like a Unix suid program the authenticate() function could be
-buggy and leak information, but like suid programs it is small enough
-for us to feel that it is secure just by staring at the code.
-
-An alternative to the suid approach is to play with closures, e.g.::
-
-    def setup():
-        #initialize new levels -- this cannot be used to access existing levels
-        public_level = create_new_priviledge("public")
-        secret_level = create_new_priviledge("secret")
-
-        database = {('guest', 'abc'): public_level,
-                    ('admin', '123'): secret_level}
-
-        def authenticate(username, password):
-            token_obj = database[username, password]
-            return username, declassify(token_obj, target_level="public",
-                                                   token=secret_level)
-
-        return secure(authenticate)
-
-    authenticate = setup()
-
-In this approach, declassify() works because it has access to the
-secret_level token.  We still need to make authenticate() a secure()
-compiled function to hide the database and the secret_level more
-carefully; otherwise, code could accidentally find them by inspecting
-the traceback of the KeyError exception if the username or password is
-invalid.  Also, secure() will check for us that authenticate() indeed
-returns a "public" tuple.
-
-This basic model is easy to extend in various directions.  For example
-secure() RPython functions should be allowed to return non-public
-results -- but then they have to be called either with an appropriate
-"token=..."  keyword, or else they return hidden objects again.  They
-could also be used directly from other RPython functions, in which case
-the level of what they return is propagated.
-
-
-Related work
-------------
-
-What I'm describing here is nothing more than an adaptation of existing
-techniques to RPython.
-
-It is worth mentioning at this point why the object space approach
-doesn't work as well as we could first expect.  The distinction between
-static checking and dynamic checking (with labels only attached to
-values) seems to be well known; also, it seems to be well known that the
-latter is too coarse in practice.  The problem is about branching and
-looping.  From the object space's point of view it is quite hard to know
-what a newly computed value really depends on.  Basically, it is
-difficult to do better than: after is_true() has been called on a secret
-object, we must assume that all objects created afterwards are also
-secret, because they could depend in some way on the truth-value of the
-previous secret object.
-
-The idea to dynamically use static analysis is the key new idea
-presented by Steve Zdancewic in his talk.  You can have small controlled
-RPython parts of the program that must pass through a static analysis,
-and we only need to check dynamically that some input conditions are
-satisfied when other parts of the program call the RPython parts.
-Previous research was mostly about designing languages that are
-completely statically checked at compile-time.  The delicate part is to
-get the static/dynamic mixture right so that even indirect leaks are not
-possible -- e.g. leaks that would occur from calling functions with
-strange arguments to provoke exceptions, and where the presence of the
-exception or not would be information in itself.  This approach seems to
-do that reliably.  (Of course, at the talk many people including the
-speaker were wondering about ways to move more of the checking to
-compile-time, but Python people won't have such worries :-)

diff --git a/pypy/doc/discussion/ctypes_modules.txt b/pypy/doc/discussion/ctypes_modules.txt
deleted file mode 100644
--- a/pypy/doc/discussion/ctypes_modules.txt
+++ /dev/null
@@ -1,65 +0,0 @@
-what is needed for various ctypes-based modules and how feasible they are
-==========================================================================
-
-Quick recap for module evaluation:
-
-1. does the module use callbacks?
-
-2. how sophisticated ctypes usage is (accessing of _objects?)
-
-3. any specific tricks
-
-4. does it have tests?
-
-5. dependencies
-
-6. does it depend on cpython c-api over ctypes?
-
-Pygame
-======
-
-1. yes, for various things, but basic functionality can be achieved without
-
-2. probably not
-
-3. not that I know of
-
-4. yes for tests, no for unittests
-
-5. numpy, but it can live without it; besides that, only C-level
-   dependencies. On OS X it requires PyObjC.
-
-6. no
-
-
-PyOpenGL
-========
-
-1. yes, for GLX, but not for the core functionality
-
-2. probably not
-
-3. all the code is auto-generated
-
-4. it has example programs, no tests
-
-5. numpy, but can live without it. Can use various surfaces (including pygame) to draw on.
-
-6. no
-
-
-Sqlite
-======
-
-1. yes, but I think it's not necessary
-
-2. no
-
-3. no
-
-4. yes
-
-5. datetime
-
-6. it passes py_object around in a few places, not sure why (probably as an
-   opaque argument).

diff --git a/pypy/doc/index.txt b/pypy/doc/index.txt
deleted file mode 100644
--- a/pypy/doc/index.txt
+++ /dev/null
@@ -1,59 +0,0 @@
-
-The PyPy project aims at producing a flexible and fast Python_
-implementation.  The guiding idea is to translate a Python-level
-description of the Python language itself to lower level languages.
-Rumors have it that the secret goal is being faster-than-C which is
-nonsense, isn't it?  `more...`_
-
-Getting into PyPy ... 
-=============================================
-
-* `Release 1.4`_: the latest official release
-
-* `PyPy Blog`_: news and status info about PyPy 
-
-* `Documentation`_: extensive documentation and papers_ about PyPy.  
-
-* `Getting Started`_: Getting started and playing with PyPy. 
-
-Mailing lists, bug tracker, IRC channel
-=============================================
-
-* `Development mailing list`_: development and conceptual
-  discussions. 
-
-* `Subversion commit mailing list`_: updates to code and
-  documentation. 
-
-* `Development bug/feature tracker`_: filing bugs and feature requests. 
-
-* `Sprint mailing list`_: mailing list for organizing upcoming sprints. 
-
-* **IRC channel #pypy on freenode**: Many of the core developers are hanging out 
-  at #pypy on irc.freenode.net.  You are welcome to join and ask questions
-  (if they are not already answered in the FAQ_).
-  You can find logs of the channel here_.
-
-.. XXX play1? 
-
-Meeting PyPy developers
-=======================
-
-The PyPy developers are organizing sprints and presenting results at
-conferences all year round. They will be happy to meet in person with
-anyone interested in the project.  Watch out for sprint announcements
-on the `development mailing list`_.
-
-.. _Python: http://docs.python.org/index.html
-.. _`more...`: architecture.html#mission-statement 
-.. _`PyPy blog`: http://morepypy.blogspot.com/
-.. _`development bug/feature tracker`: https://codespeak.net/issue/pypy-dev/ 
-.. _here: http://tismerysoft.de/pypy/irc-logs/pypy
-.. _`sprint mailing list`: http://codespeak.net/mailman/listinfo/pypy-sprint 
-.. _`subversion commit mailing list`: http://codespeak.net/mailman/listinfo/pypy-svn
-.. _`development mailing list`: http://codespeak.net/mailman/listinfo/pypy-dev
-.. _`FAQ`: faq.html
-.. _`Documentation`: docindex.html 
-.. _`Getting Started`: getting-started.html
-.. _papers: extradoc.html
-.. _`Release 1.4`: http://pypy.org/download.html

diff --git a/pypy/doc/config/objspace.usemodules.zipimport.txt b/pypy/doc/config/objspace.usemodules.zipimport.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.zipimport.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-This module implements the zipimport mechanism described
-in PEP 302. It's supposed to work and translate, so it's included
-by default.
\ No newline at end of file

diff --git a/pypy/doc/jit/index.txt b/pypy/doc/jit/index.txt
deleted file mode 100644
--- a/pypy/doc/jit/index.txt
+++ /dev/null
@@ -1,26 +0,0 @@
-========================================================================
-                          JIT documentation
-========================================================================
-
-:abstract:
-
-    When PyPy is translated into an executable like ``pypy-c``, the
-    executable contains a full virtual machine that can optionally
-    include a Just-In-Time compiler.  This JIT compiler is **generated
-    automatically from the interpreter** that we wrote in RPython.
-
-    This JIT Compiler Generator can be applied on interpreters for any
-    language, as long as the interpreter itself is written in RPython
-    and contains a few hints to guide the JIT Compiler Generator.
-
-
-Content
-------------------------------------------------------------
-
-- Overview_: motivating our approach
-
-- Notes_ about the current work in PyPy
-
-
-.. _Overview: overview.html
-.. _Notes: pyjitpl5.html

diff --git a/pypy/doc/config/translation.jit_ffi.txt b/pypy/doc/config/translation.jit_ffi.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.jit_ffi.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Internal option: enable OptFfiCall in the jit optimizations.

diff --git a/pypy/doc/config/objspace.usemodules.cpyext.txt b/pypy/doc/config/objspace.usemodules.cpyext.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.cpyext.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Use the (experimental) cpyext module, which tries to load and run CPython extension modules.

diff --git a/pypy/doc/discussion/VM-integration.txt b/pypy/doc/discussion/VM-integration.txt
deleted file mode 100644
--- a/pypy/doc/discussion/VM-integration.txt
+++ /dev/null
@@ -1,263 +0,0 @@
-==============================================
-Integration of PyPy with host Virtual Machines
-==============================================
-
-This document is based on the discussion I had with Samuele during the
-Duesseldorf sprint. It's not much more than random thoughts -- to be
-reviewed!
-
-Terminology disclaimer: both PyPy and .NET have the concept of
-"wrapped" or "boxed" objects. To avoid confusion I will use "wrapping"
-on the PyPy side and "boxing" on the .NET side.
-
-General idea
-============
-
-The goal is to find a way to efficiently integrate the PyPy
-interpreter with the hosting environment such as .NET. What we would
-like to do includes, but is not limited to:
-
-  - calling .NET methods and instantiating .NET classes from Python
-
-  - subclassing a .NET class from Python
-
-  - handling native .NET objects as transparently as possible
-
-  - automatically applying obvious Python <--> .NET conversions when
-    crossing the border (e.g. integers, strings, etc.)
-
-One possible solution is the "proxy" approach, in which we manually
-(un)wrap/(un)box all the objects when they cross the border.
-
-Example
--------
-
-  ::
-
-    public static int foo(int x) { return x; }
-
-    >>>> from somewhere import foo
-    >>>> print foo(42)
-
-In this case we need to take the intval field of W_IntObject, box it
-to .NET System.Int32, call foo using reflection, then unbox the return
-value and reconstruct a new (or reuse an existing) W_IntObject.
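As a rough Python model of that round-trip: the `box`/`unbox` helpers and the `net_foo` call below are stand-ins for the .NET boxing and reflection machinery, not real interop code:

```python
# Toy model of the "proxy" approach: unwrap, box, call, unbox, rewrap.
# box/unbox and net_foo are stand-ins for .NET interop, not real APIs.
class W_IntObject:
    def __init__(self, intval):
        self.intval = intval

def box(x):
    return ("System.Int32", x)       # pretend this is a boxed Int32

def unbox(boxed):
    kind, val = boxed
    assert kind == "System.Int32"
    return val

def net_foo(boxed):                  # stands in for calling foo via reflection
    return box(unbox(boxed))         # foo itself just returns its argument

def call_foo(w_x):
    result = net_foo(box(w_x.intval))    # unwrap + box on the way out
    return W_IntObject(unbox(result))    # unbox + rewrap on the way back

print(call_foo(W_IntObject(42)).intval)  # 42
```

The cost of the proxy approach is visible even in this toy: every border crossing allocates and copies, which is what motivates the alternative below.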
-
-The other approach
-------------------
-
-The general idea to handle this problem is to split the
-"stateful" and "behavioral" parts of wrapped objects, and use already
-boxed values for storing the state.
-
-This way when we cross the Python --> .NET border we can just throw
-away the behavioral part; when crossing .NET --> Python we have to
-find the correct behavioral part for that kind of boxed object and
-reconstruct the pair.
-
-
-Split state and behaviour in the flowgraphs
-===========================================
-
-The idea is to write a graph transformation that takes a usual
-ootyped flowgraph and splits the classes and objects we want into a
-stateful part and a behavioral part.
-
-We need to introduce the new ootypesystem type ``Pair``: it acts like
-a Record but it doesn't have its own identity: the id of the Pair is the id
-of its first member.
-
-  XXX about ``Pair``: I'm not sure this is totally right. It means
-  that an object can change identity simply by changing the value of a
-  field???  Maybe we could add the constraint that the "id" field
-  can't be modified after initialization (but it's not easy to
-  enforce).
-
-  XXX-2 about ``Pair``: how to implement it in the backends? One
-  possibility is to use "struct-like" types if available (as in
-  .NET). But in this case it's hard to implement methods/functions
-  that modify the state of the object (such as __init__, usually). The
-  other possibility is to use a reference type (i.e., a class), but in
-  this case there will be a gap between the RPython identity (in which
-  two Pairs with the same state are indistinguishable) and the .NET
-  identity (in which the two objects will have a different identity,
-  of course).
-
-Step 1: RPython source code
----------------------------
-
-  ::
-
-    class W_IntObject:
-        def __init__(self, intval):
-            self.intval = intval
-    
-        def foo(self, x):
-            return self.intval + x
-
-    def bar():
-        x = W_IntObject(41)
-        return x.foo(1)
-
-
-Step 2: RTyping
----------------
-
-Sometimes the following examples are not 100% accurate for the sake of
-simplicity (e.g.: we directly list the types of methods instead of the
-ootype._meth instances that contain them).
-
-Low level types
-
-  ::
-
-    W_IntObject = Instance(
-        "W_IntObject",                   # name
-        ootype.OBJECT,                   # base class
-        {"intval": (Signed, 0)},         # attributes
-        {"foo": Meth([Signed], Signed)}  # methods
-    )
-
-
-Prebuilt constants (referred by name in the flowgraphs)
-
-  ::
-
-    W_IntObject_meta_pbc = (...)
-    W_IntObject.__init__ = (static method pbc - see below for the graph)
-
-
-Flowgraphs
-
-  ::
-
-    bar() {
-      1.    x = new(W_IntObject)
-      2.    oosetfield(x, "meta", W_IntObject_meta_pbc)
-      3.    direct_call(W_IntObject.__init__, x, 41)
-      4.    result = oosend("foo", x, 1)
-      5.    return result
-    }
-
-    W_IntObject.__init__(W_IntObject self, Signed intval) {
-      1.    oosetfield(self, "intval", intval)
-    }
-
-    W_IntObject.foo(W_IntObject self, Signed x) {
-      1.    value = oogetfield(self, "intval")
-      2.    result = int_add(value, x)
-      3.    return result
-    }
-
-Step 3: Transformation
-----------------------
-
-This step is done before the backend plays any role, but it's still
-driven by its needs, because at this point we want a mapping that tells
-us which classes to split and how (i.e., which boxed value we want to
-use).
-
-Let's suppose we want to map W_IntObject.intval to the .NET boxed
-``System.Int32``. This is possible just because W_IntObject contains
-only one field. Note that the "meta" field inherited from
-ootype.OBJECT is special-cased because we know that it will never
-change, so we can store it in the behaviour.
-
-
-Low level types
-
-  ::
-
-    W_IntObject_bhvr = Instance(
-        "W_IntObject_bhvr",
-        ootype.OBJECT,
-        {},                                               # no more fields!
-        {"foo": Meth([W_IntObject_pair, Signed], Signed)} # the Pair is also explicitly passed
-    )
-
-    W_IntObject_pair = Pair(
-        ("value", (System.Int32, 0)),  # (name, (TYPE, default))
-        ("behaviour", (W_IntObject_bhvr, W_IntObject_bhvr_pbc))
-    )
-
-
-Prebuilt constants
-
-  ::
-
-    W_IntObject_meta_pbc = (...)
-    W_IntObject.__init__ = (static method pbc - see below for the graph)
-    W_IntObject_bhvr_pbc = new(W_IntObject_bhvr); W_IntObject_bhvr_pbc.meta = W_IntObject_meta_pbc
-    W_IntObject_value_default = new System.Int32(0)
-
-
-Flowgraphs
-
-  ::
-
-    bar() {
-      1.    x = new(W_IntObject_pair) # the behaviour has been already set because
-                                      # it's the default value of the field
-
-      2.    # skipped (meta is already set in the W_IntObject_bhvr_pbc)
-
-      3.    direct_call(W_IntObject.__init__, x, 41)
-
-      4.    bhvr = oogetfield(x, "behaviour")
-            result = oosend("foo", bhvr, x, 1) # note that "x" is explicitly passed to foo
-
-      5.    return result
-    }
-
-    W_IntObject.__init__(W_IntObject_pair self, Signed value) {
-      1.    boxed = clibox(value)             # boxed is of type System.Int32
-            oosetfield(self, "value", boxed)
-    }
-
-    W_IntObject.foo(W_IntObject_bhvr bhvr, W_IntObject_pair self, Signed x) {
-      1.    boxed = oogetfield(self, "value")
-            value = unbox(boxed, Signed)
-
-      2.    result = int_add(value, x)
-
-      3.    return result
-    }
-
-
-Inheritance
------------
-
-Applying the transformation to a whole class (sub)hierarchy is a bit more
-complex. Basically we want to mimic the same hierarchy also on the
-``Pair``\s, but we have to fight the VM limitations. In .NET for
-example, we can't have "covariant fields"::
-
-  class Base {
-        public Base field;
-  }
-
-  class Derived: Base {
-        public Derived field;
-  }
-
-A solution is to use only one kind of ``Pair``, whose ``value`` and
-``behaviour`` types are of the most precise types that can hold all the
-values needed by the subclasses::
-
-   class W_Object: pass
-   class W_IntObject(W_Object): ...
-   class W_StringObject(W_Object): ...
-
-   ...
-
-   W_Object_pair = Pair(System.Object, W_Object_bhvr)
-
-Where ``System.Object`` is of course the most precise type that can
-hold both ``System.Int32`` and ``System.String``.
-
-This means that the low level type of all the ``W_Object`` subclasses
-will be ``W_Object_pair``, but it also means that we will need to
-insert the appropriate downcasts every time we want to access its
-fields. I'm not sure how much this can impact performance.
-
-

diff --git a/pypy/doc/eventhistory.txt b/pypy/doc/eventhistory.txt
deleted file mode 100644
--- a/pypy/doc/eventhistory.txt
+++ /dev/null
@@ -1,313 +0,0 @@
-
-
-    The PyPy project is a worldwide collaborative effort and its
-    members are organizing sprints and presenting results at conferences
-    all year round.  **This page is no longer maintained!**  See `our blog`_
-    for upcoming events. 
-
-.. _`our blog`: http://morepypy.blogspot.com/
-
-EuroPython PyPy sprint 6-9 July 2006
-==================================================================
-
-Once again a PyPy sprint took place right after the EuroPython
-Conference from the *6th to the 9th of July*.
-
-Read more in the `EuroPython 2006 sprint report`_.
-
-.. _`EuroPython 2006 sprint report`: http://codespeak.net/pypy/extradoc/sprintinfo/post-ep2006/report.txt
-
-PyPy at XP 2006 and Agile 2006
-==================================================================
-
-PyPy presented experience reports at the two main agile conferences
-this year, `XP 2006`_ and `Agile 2006`_.
-Both experience reports focus on aspects of the sprint-driven
-development method that is being used in PyPy.
-
-.. _`XP 2006`: http://virtual.vtt.fi/virtual/xp2006/ 
-.. _`Agile 2006`: http://www.agile2006.org/
-
-Duesseldorf PyPy sprint 2-9 June 2006
-==================================================================
-
-The next PyPy sprint will be held in the Computer Science department of
-Heinrich-Heine Universitaet Duesseldorf from the *2nd to the 9th of June*.
-Main focus of the sprint will be on the goals of the upcoming June 0.9
-release.
-
-Read more in `the sprint announcement`_, see who is  planning to attend
-on the `people page`_.
-
-.. _`the sprint announcement`: http://codespeak.net/pypy/extradoc/sprintinfo/ddorf2006/announce.html
-.. _`people page`: http://codespeak.net/pypy/extradoc/sprintinfo/ddorf2006/people.html
-
-PyPy sprint at Akihabara (Tokyo, Japan)
-==================================================================
-
-*April 23rd - 29th 2006.* This sprint was in Akihabara, Tokyo, Japan,
-our host was FSIJ (Free Software Initiative of Japan) and we aimed
-for the sprint to promote Python and introduce people to PyPy. Good
-progress was also made on PyPy's ootypesystem for the more high level
-backends. For more details, read the last `sprint status`_ page and
-enjoy the pictures_.
-
-.. _`sprint status`: http://codespeak.net/pypy/extradoc/sprintinfo/tokyo/tokyo-planning.html
-.. _`pictures`: http://www.flickr.com/photos/19046555@N00/sets/72057594116388174/
-
-PyPy at Python UK/ACCU Conference (United Kingdom)
-===================================================================
-
-*April 19th - April 22nd 2006.* Several talks about PyPy were held at
-this year's Python UK/ACCU conference. Read more at the `ACCU site`_.
-
-.. _`ACCU site`: http://www.accu.org/
-
-PyPy at XPDay France 2006 in Paris March 23rd - March 24th 2006
-==================================================================
-
-Logilab presented PyPy at the first `french XP Day`_ that it was
-sponsoring and which was held in Paris. There were over a hundred
-attendees. Interesting talks included Python as an agile language and
-Tools for continuous integration.
- 
-.. _`french XP Day`: http://www.xpday.fr/
-
-Logic Sprint at Louvain-la-Neuve University (Louvain-la-Neuve, Belgium)
-========================================================================
-
-*March 6th - March 10th 2006.* PyPy developers focusing on adding
-logic programming to PyPy met with the team that developed the Oz
-programming language and the Mozart interpreter.
-
-Read the report_ and the original announcement_.
-
-.. _report: http://codespeak.net/pypy/extradoc/sprintinfo/louvain-la-neuve-2006/report.html
-.. _announcement: http://codespeak.net/pypy/extradoc/sprintinfo/louvain-la-neuve-2006/sprint-announcement.html
-
-PyCon Sprint 2006 (Dallas, Texas, USA)
-==================================================================
-
-*Feb 27th - March 2nd 2006.* The Post-PyCon PyPy Sprint took place
-right after PyCon 2006.
-
-A report is coming up.
-
-
-Talks at PyCon 2006 (Dallas, Texas, USA)
-===================================================================
-
-*Feb 24th - Feb 26th 2006.* PyPy developers spoke at `PyCon 2006`_.
-
-.. _`PyCon 2006`: http://us.pycon.org/TX2006/HomePage 
-
-
-PyPy at Solutions Linux in Paris January 31st - February 2nd 2006
-===================================================================
-
-PyPy developers from Logilab presented the intermediate results of the
-project during the Solutions Linux tradeshow in Paris. A lot of
-enthusiasts already knew about the project and were eager to learn
-about the details. Many people discovered PyPy on this occasion and
-said they were interested in the outcome and would keep an eye on its
-progress. Read the `talk slides`_.
-
-.. _`talk slides`: http://codespeak.net/pypy/extradoc/talk/solutions-linux-paris-2006.html
-
-
-PyPy Sprint in Palma De Mallorca 23rd - 29th January 2006
-===================================================================
-
-The Mallorca sprint that took place in Palma de Mallorca is over.
-Topics included progressing with the JIT work started in Göteborg
-and Paris, GC and optimization work, stackless, and
-improving our way to write glue code for C libraries.
-
-Read more in `the announcement`_, there is a `sprint report`_
-for the first three days and `one for the rest of the sprint`_.
-
-
-.. _`the announcement`: http://codespeak.net/pypy/extradoc/sprintinfo/mallorca/sprint-announcement.html
-.. _`sprint report`: http://codespeak.net/pipermail/pypy-dev/2006q1/002746.html 
-.. _`one for the rest of the sprint`: http://codespeak.net/pipermail/pypy-dev/2006q1/002749.html 
-
-Preliminary EU reports released
-===============================
-
-After many hours of writing and typo-hunting we finally finished the
-`reports for the EU`_. They contain most of the material found on our regular
-documentation page but also a lot of new material not covered there. Note that
-all these documents are not approved by the European Union and therefore only
-preliminary. *(01/06/2006)*
-
-.. _`reports for the EU`: index-report.html
-
-
-PyPy Sprint in Göteborg 7th - 11th December 2005
-=================================================
-
-The Gothenburg sprint is over. It was a very productive sprint: work has
-been started on a JIT prototype, we added support for __del__ in PyPy, 
-the socket module had some progress, PyPy got faster and work was started to
-expose the internals of our parser and bytecode compiler to the user.
-Michael and Carl have written a `report about the first half`_ and `one about
-the second half`_ of the sprint.  *(12/18/2005)*
-
-.. _`report about the first half`: http://codespeak.net/pipermail/pypy-dev/2005q4/002656.html
-.. _`one about the second half`: http://codespeak.net/pipermail/pypy-dev/2005q4/002660.html
-
-PyPy release 0.8.0
-=================== 
-
-The third PyPy release is out, with an integrated and translatable
-compiler, speed progress, and now the possibility to translate our
-experimental "Thunk" object space (supporting lazy computed objects)
-with its features preserved.
-
-See the `release 0.8 announcement`_ for further details about the release and
-the `getting started`_ document for instructions about downloading it and
-trying it out.  There is also a short FAQ_.  *(11/03/2005)*
-
-.. _`release 0.8 announcement`: release-0.8.0.html
-
-PyPy Sprint in Paris 10th-16th October 2005 
-========================================================
-
-The Paris sprint is over. We are all at home again and more or less exhausted.
-The sprint attracted 18 participants and took place in
-`Logilab offices in Paris`_. We were happy to welcome five new
-developers to the PyPy community! The focus was on implementing
-`continuation-passing`_ style (stackless), making the translation process
-work for target languages with more powerful object systems and some tiny
-steps into the JIT_ direction. Michael and Carl have written
-a `report about day one`_ and `one about day two and three`_. 
-Together with Armin they wrote one about `the rest of the sprint`_ on the
-way back.
-*(10/18/2005)*
-
-.. _`Logilab offices in Paris`: http://codespeak.net/pypy/extradoc/sprintinfo/paris-2005-sprint.html 
-.. _JIT: http://en.wikipedia.org/wiki/Just-in-time_compilation
-.. _`continuation-passing`: http://en.wikipedia.org/wiki/Continuation_passing_style
-.. _`report about day one`: http://codespeak.net/pipermail/pypy-dev/2005q4/002510.html
-.. _`one about day two and three`: http://codespeak.net/pipermail/pypy-dev/2005q4/002512.html
-.. _`the rest of the sprint`: http://codespeak.net/pipermail/pypy-dev/2005q4/002514.html
-
-PyPy release 0.7.0
-=================== 
-
-The first implementation of Python in Python is now also the second
-implementation of Python in C :-)
-
-See the `release announcement`_ for further details about the release and
-the `getting started`_ document for instructions about downloading it and
-trying it out.  We also have the beginning of a FAQ_.  *(08/28/2005)*
-
-.. _`pypy-0.7.0`: 
-.. _`release announcement`: release-0.7.0.html
-.. _`getting started`: getting-started.html
-.. _FAQ: faq.html
-
-PyPy Sprint in Heidelberg 22nd-29th August 2005
-==========================================================
-
-The last `PyPy sprint`_ took place at the Heidelberg University
-in Germany from 22nd August to 29th August (both days included). 
-Its main focus was translation of the whole PyPy interpreter
-to a low-level language and reaching Python 2.4.1 compliance.
-The goal of the sprint was to release a first self-contained
-PyPy-0.7 version.  Carl has written a report about `day 1 - 3`_,
-there are `some pictures`_ online and a `heidelberg summary report`_
-detailing some of the works that led to the successful release 
-of `pypy-0.7.0`_! 
-
-.. _`heidelberg summary report`: http://codespeak.net/pypy/extradoc/sprintinfo/Heidelberg-report.html 
-.. _`PyPy sprint`: http://codespeak.net/pypy/extradoc/sprintinfo/Heidelberg-sprint.html
-.. _`day 1 - 3`: http://codespeak.net/pipermail/pypy-dev/2005q3/002287.html
-.. _`some pictures`: http://codespeak.net/~hpk/heidelberg-sprint/
-
-PyPy Hildesheim2 finished: first self-contained PyPy run! 
-===========================================================
-
-Up until 31st August we were in a PyPy sprint at `Trillke-Gut`_. 
-Carl has written a `report about day 1`_, Holger 
-about `day 2 and day 3`_ and Carl again about `day 4 and day 5`_, 
-On `day 6`_ Holger reports the `breakthrough`_: PyPy runs 
-on its own! Hurray_!  And Carl finally reports about the winding
-down of `day 7`_ which saw us relaxing, discussing and generally 
-having a good time.   You might want to look at the selected 
-`pictures from the sprint`_. 
-
-.. _`report about day 1`: http://codespeak.net/pipermail/pypy-dev/2005q3/002217.html 
-.. _`day 2 and day 3`: http://codespeak.net/pipermail/pypy-dev/2005q3/002220.html
-.. _`day 4 and day 5`: http://codespeak.net/pipermail/pypy-dev/2005q3/002234.html
-.. _`day 6`: http://codespeak.net/pipermail/pypy-dev/2005q3/002239.html
-.. _`day 7`: http://codespeak.net/pipermail/pypy-dev/2005q3/002245.html
-.. _`breakthrough`: http://codespeak.net/~hpk/hildesheim2-sprint-www/hildesheim2-sprint-www-Thumbnails/36.jpg
-.. _`hurray`: http://codespeak.net/~hpk/hildesheim2-sprint-www/hildesheim2-sprint-www-Pages/Image37.html
-.. _`pictures from the sprint`: http://codespeak.net/~hpk/hildesheim2-sprint-www/ 
-.. _`Trillke-Gut`: http://www.trillke.net/images/HomePagePictureSmall.jpg
-
-EuroPython 2005 sprints finished 
-======================================================
-
-We had two sprints around EuroPython, one more internal core
-developer one and a public one.  Both sprints were quite
-successful.  Regarding the Pre-EuroPython sprint Michael Hudson 
-has posted summaries of `day 1`_, `day 2`_ and `day 3`_ on 
-the `pypy-dev`_ mailing list.  The larger public sprint 
-has not been summarized yet but it went very well.  We had
-20 people initially attending to hear the tutorials and 
-work a bit.  Later with around 13-14 people we made the
-move to Python-2.4.1, integrated the parser, improved 
-the LLVM backends and type inference in general.  
-*(07/13/2005)* 
-
-.. _`day 1`: http://codespeak.net/pipermail/pypy-dev/2005q2/002169.html
-.. _`day 2`: http://codespeak.net/pipermail/pypy-dev/2005q2/002171.html
-.. _`day 3`: http://codespeak.net/pipermail/pypy-dev/2005q2/002172.html
-.. _`pypy-dev`: http://codespeak.net/mailman/listinfo/pypy-dev
-
-.. _EuroPython: http://europython.org 
-.. _`translation`: translation.html 
-.. _`sprint announcement`: http://codespeak.net/pypy/extradoc/sprintinfo/EP2005-announcement.html
-.. _`list of people coming`: http://codespeak.net/pypy/extradoc/sprintinfo/EP2005-people.html
-
-Duesseldorf PyPy sprint 2-9 June 2006
-==================================================================
-
-The next PyPy sprint will be held in the Computer Science department of
-Heinrich-Heine Universitaet Duesseldorf from the *2nd to the 9th of June*.
-Main focus of the sprint will be on the goals of the upcoming June 0.9
-release.
-
-Read more in `the sprint announcement`_ and see who is planning to attend
-on the `people page`_.
-
-.. _`the sprint announcement`: http://codespeak.net/pypy/extradoc/sprintinfo/ddorf2006/announce.html
-.. _`people page`: http://codespeak.net/pypy/extradoc/sprintinfo/ddorf2006/people.html
-
-
-PyPy at XP 2006 and Agile 2006
-==================================================================
-
-PyPy will present experience reports at the two main agile conferences
-this year, `XP 2006`_ and `Agile 2006`_.
-Both experience reports focus on aspects of the sprint-driven
-development method that is being used in PyPy.
-
-.. _`XP 2006`: http://virtual.vtt.fi/virtual/xp2006/ 
-.. _`Agile 2006`: http://www.agile2006.org/
-
-
-EuroPython PyPy sprint 6-9 July 2006
-==================================================================
-
-Once again a PyPy sprint will take place right after the EuroPython
-Conference. This year it will be from the *6th to the 9th of July*.
-
-Read more in the `EuroPython sprint announcement`_ and see who is planning
-to attend on `the people page`_. There is also a page_ in the Python wiki.
-
-.. _`EuroPython sprint announcement`: http://codespeak.net/pypy/extradoc/sprintinfo/europython-2006/announce.html
-.. _`the people page`: http://codespeak.net/pypy/extradoc/sprintinfo/europython-2006/people.html
-.. _page: http://wiki.python.org/moin/EuroPython2006

diff --git a/pypy/doc/discussion/GC-performance.txt b/pypy/doc/discussion/GC-performance.txt
deleted file mode 100644
--- a/pypy/doc/discussion/GC-performance.txt
+++ /dev/null
@@ -1,118 +0,0 @@
-StartHeapsize# is the framework GC as of revision 31586 with an initial
-bytes_malloced_threshold of # MB (for # ranging from 2 to 512)
-
-NewHeuristics is the framework GC with a new heuristic for adjusting
-the bytes_malloced_threshold
-
-::
-
- Pystone
- StartHeapsize2:
- This machine benchmarks at 5426.92 pystones/second
- This machine benchmarks at 5193.91 pystones/second
- This machine benchmarks at 5403.46 pystones/second
- StartHeapsize8:
- This machine benchmarks at 6075.33 pystones/second
- This machine benchmarks at 6007.21 pystones/second
- This machine benchmarks at 6122.45 pystones/second
- StartHeapsize32:
- This machine benchmarks at 6643.05 pystones/second
- This machine benchmarks at 6590.51 pystones/second
- This machine benchmarks at 6593.41 pystones/second
- StartHeapsize128:
- This machine benchmarks at 7065.47 pystones/second
- This machine benchmarks at 7102.27 pystones/second
- This machine benchmarks at 7082.15 pystones/second
- StartHeapsize512:
- This machine benchmarks at 7208.07 pystones/second
- This machine benchmarks at 7197.7 pystones/second
- This machine benchmarks at 7246.38 pystones/second
- NewHeuristics:
- This machine benchmarks at 6821.28 pystones/second
- This machine benchmarks at 6858.71 pystones/second
- This machine benchmarks at 6902.9 pystones/second
-
-
- Richards
- StartHeapsize2:
- Average time per iteration: 5456.21 ms
- Average time per iteration: 5529.31 ms
- Average time per iteration: 5398.82 ms
- StartHeapsize8:
- Average time per iteration: 4775.43 ms
- Average time per iteration: 4753.25 ms
- Average time per iteration: 4781.37 ms
- StartHeapsize32:
- Average time per iteration: 4554.84 ms
- Average time per iteration: 4501.86 ms
- Average time per iteration: 4531.59 ms
- StartHeapsize128:
- Average time per iteration: 4329.42 ms
- Average time per iteration: 4360.87 ms
- Average time per iteration: 4392.81 ms
- StartHeapsize512:
- Average time per iteration: 4371.72 ms
- Average time per iteration: 4399.70 ms
- Average time per iteration: 4354.66 ms
- NewHeuristics:
- Average time per iteration: 4763.56 ms
- Average time per iteration: 4803.49 ms
- Average time per iteration: 4840.68 ms
-
-
- translate rpystone
-   time pypy-c translate --text --batch --backendopt --no-compile targetrpystonedalone.py
- StartHeapsize2:
- real    1m38.459s
- user    1m35.582s
- sys     0m0.440s
- StartHeapsize8:
- real    1m35.398s
- user    1m33.878s
- sys     0m0.376s
- StartHeapsize32:
- real    1m5.475s
- user    1m5.108s
- sys     0m0.180s
- StartHeapsize128:
- real    0m52.941s
- user    0m52.395s
- sys     0m0.328s
- StartHeapsize512:
- real    1m3.727s
- user    0m50.031s
- sys     0m1.240s
- NewHeuristics:
- real    0m53.449s
- user    0m52.771s
- sys     0m0.356s
-
-
- docutils
-   time pypy-c rst2html doc/coding-guide.txt
- StartHeapsize2:
- real    0m36.125s
- user    0m35.562s
- sys     0m0.088s
- StartHeapsize8:
- real    0m32.678s
- user    0m31.106s
- sys     0m0.084s
- StartHeapsize32:
- real    0m22.041s
- user    0m21.085s
- sys     0m0.132s
- StartHeapsize128:
- real    0m19.350s
- user    0m18.653s
- sys     0m0.324s
- StartHeapsize512:
- real    0m19.116s
- user    0m17.517s
- sys     0m0.620s
- NewHeuristics:
- real    0m20.990s
- user    0m20.109s
- sys     0m0.196s
-
-

diff --git a/pypy/doc/config/translation.instrumentctl.txt b/pypy/doc/config/translation.instrumentctl.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.instrumentctl.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Internal option.
-
-.. internal

diff --git a/pypy/doc/config/translation.cc.txt b/pypy/doc/config/translation.cc.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.cc.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Specify which C compiler to use.

diff --git a/pypy/doc/config/translation.backendopt.stack_optimization.txt b/pypy/doc/config/translation.backendopt.stack_optimization.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.stack_optimization.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Enable optimized code generation for stack-based machines, if the backend supports it.

diff --git a/pypy/doc/config/objspace.std.prebuiltintfrom.txt b/pypy/doc/config/objspace.std.prebuiltintfrom.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.prebuiltintfrom.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-see :config:`objspace.std.withprebuiltint`.

diff --git a/pypy/doc/config/objspace.usemodules.operator.txt b/pypy/doc/config/objspace.usemodules.operator.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.operator.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'operator' module. 
-This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/objspace.std.txt b/pypy/doc/config/objspace.std.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-..  intentionally empty

diff --git a/pypy/doc/config/objspace.usemodules.__pypy__.txt b/pypy/doc/config/objspace.usemodules.__pypy__.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.__pypy__.txt
+++ /dev/null
@@ -1,9 +0,0 @@
-Use the '__pypy__' module. 
-This module is expected to be working and is included by default.
-It contains special PyPy-specific functionality.
-For example, most of the special functions described in the `object space proxies`_
-document are in this module.
-See the `__pypy__ module documentation`_ for more details.
-
-.. _`object space proxies`: ../objspace-proxies.html
-.. _`__pypy__ module documentation`: ../__pypy__-module.html

diff --git a/pypy/doc/config/objspace.std.withmethodcachecounter.txt b/pypy/doc/config/objspace.std.withmethodcachecounter.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withmethodcachecounter.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Testing/debug option for :config:`objspace.std.withmethodcache`.

diff --git a/.hgsubstate b/.hgsubstate
--- a/.hgsubstate
+++ b/.hgsubstate
@@ -1,3 +1,3 @@
 80037 greenlet
-80348 lib_pypy/pyrepl
+80409 lib_pypy/pyrepl
 80409 testrunner

diff --git a/pypy/doc/config/translation.backendopt.merge_if_blocks.txt b/pypy/doc/config/translation.backendopt.merge_if_blocks.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.merge_if_blocks.txt
+++ /dev/null
@@ -1,26 +0,0 @@
-This optimization converts parts of flow graphs that result from
-chains of ifs and elifs like this into merged blocks.
-
-By default flow graphing this kind of code::
-
-    if x == 0:
-        f()
-    elif x == 1:
-        g()
-    elif x == 4:
-        h()
-    else:
-        j()
-
-will result in a chain of blocks with two exits, somewhat like this:
-
-.. image:: unmergedblocks.png
-
-(reflecting how Python would interpret this code).  Running this
-optimization will transform the block structure to contain a single
-"choice block" with four exits:
-
-.. image:: mergedblocks.png
-
-This can then be turned into a switch by the C backend, allowing the C
-compiler to produce more efficient code.
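In plain Python terms, the transformation above can be sketched as replacing a chain of two-exit comparisons with a single multi-way dispatch. The functions `f`/`g`/`h`/`j` below are placeholders taken from the example; the dictionary-based dispatch is only an analogy for the merged "choice block":

```python
# Placeholders for the calls in the example above.
def f(): return 'f'
def g(): return 'g'
def h(): return 'h'
def j(): return 'j'

def chained(x):
    # A chain of blocks, each with two exits -- the unmerged shape.
    if x == 0:
        return f()
    elif x == 1:
        return g()
    elif x == 4:
        return h()
    else:
        return j()

# One "choice block" with four exits -- the merged shape, which the C
# backend can compile down to a switch statement.
_DISPATCH = {0: f, 1: g, 4: h}

def merged(x):
    return _DISPATCH.get(x, j)()

assert all(chained(x) == merged(x) for x in (0, 1, 4, 7))
```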

diff --git a/pypy/doc/config/translation.fork_before.txt b/pypy/doc/config/translation.fork_before.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.fork_before.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-This is an option mostly useful when working on the PyPy toolchain. If you use
-it, translate.py will fork before the specified phase. If the translation
-crashes after that fork, you can fix the bug in the toolchain, and continue
-translation at the fork-point.

diff --git a/pypy/doc/discussion/parsing-ideas.txt b/pypy/doc/discussion/parsing-ideas.txt
deleted file mode 100644
--- a/pypy/doc/discussion/parsing-ideas.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-add a way to modularize regular expressions:
-
-_HEXNUM = "...";
-_DECNUM = "...";
-NUM = "{_HEXNUM}|{_DECNUM}";
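The idea sketched above can be approximated in ordinary Python with string formatting over named sub-patterns; the concrete patterns below are illustrative stand-ins for the elided "..." parts:

```python
import re

# Illustrative sub-patterns standing in for the "..." in the note above.
_HEXNUM = r"0[xX][0-9a-fA-F]+"
_DECNUM = r"[0-9]+"

# Compose the modular pieces into one pattern, as the note proposes.
NUM = "{_HEXNUM}|{_DECNUM}".format(_HEXNUM=_HEXNUM, _DECNUM=_DECNUM)

assert re.fullmatch(NUM, "0xFF") is not None
assert re.fullmatch(NUM, "42") is not None
assert re.fullmatch(NUM, "spam") is None
```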

diff --git a/pypy/doc/config/objspace.std.withstrbuf.txt b/pypy/doc/config/objspace.std.withstrbuf.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withstrbuf.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Enable "string buffer" objects.
-
-Similar to "string join" objects, but using a StringBuilder to represent
-a string built by repeated application of ``+=``.
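A minimal pure-Python sketch of the idea; the real implementation lives inside the interpreter and uses RPython's StringBuilder, so this stand-in only illustrates the append-then-join strategy:

```python
class StringBuffer(object):
    """Collect the pieces of repeated ``+=`` and join them only when
    the final string is needed, instead of copying on each step."""

    def __init__(self, initial=""):
        self._parts = [initial]

    def __iadd__(self, other):
        self._parts.append(other)  # O(1) append instead of a full copy
        return self

    def build(self):
        return "".join(self._parts)

buf = StringBuffer("spam")
buf += " and"
buf += " eggs"
assert buf.build() == "spam and eggs"
```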

diff --git a/pypy/doc/config/objspace.usemodules._rawffi.txt b/pypy/doc/config/objspace.usemodules._rawffi.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._rawffi.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-An experimental module providing a very low-level interface to
-C-level libraries, for use when implementing ctypes; not
-intended for direct use at all.
\ No newline at end of file

diff --git a/pypy/doc/getting-started.txt b/pypy/doc/getting-started.txt
deleted file mode 100644
--- a/pypy/doc/getting-started.txt
+++ /dev/null
@@ -1,123 +0,0 @@
-==================================
-PyPy - Getting Started 
-==================================
-
-.. contents::
-.. sectnum::
-
-.. _howtopypy: 
-
-What is PyPy ?
-==============
-
-PyPy is an implementation of the Python_ programming language written in
-Python itself, flexible and easy to experiment with.
-We target a large variety of platforms, small and large, by providing a
-compiler toolsuite that can produce custom Python versions.  Platform, memory
-and threading models, as well as the JIT compiler itself, are aspects of the
-translation process - as opposed to encoding low level details into the
-language implementation itself. `more...`_
-
-
-.. _Python: http://docs.python.org/ref
-.. _`more...`: architecture.html
-
-Just the facts 
-============== 
-
-Clone the repository
---------------------
-
-Before you can play with PyPy, you will need to obtain a copy
-of the sources.  This can be done either by `downloading them
-from the download page`_ or by checking them out from the
-repository using mercurial.  We suggest using mercurial if one
-wants to access the current development.
-
-.. _`downloading them from the download page`: download.html
-
-If you choose to use mercurial, you must issue the following command on your
-command line, DOS box, or terminal::
-
-    hg clone http://bitbucket.org/pypy/pypy pypy
-
-If you get an error like this::
-
-    abort: repository [svn]http://codespeak.net/svn/pypy/build/testrunner not found!
-
-it probably means that your mercurial version is too old. You need at least
-Mercurial 1.6 to clone the PyPy repository.
-
-This will clone the repository and place it into a directory
-named ``pypy``, and will get you the PyPy source in
-``pypy/pypy`` and documentation files in ``pypy/pypy/doc``.
-We try to ensure that the tip is always stable, but it might
-occasionally be broken.  You may want to check out `our nightly tests`_:
-find a revision (a 12-character alphanumeric string, e.g. "963e808156b3")
-that passed at least the ``{linux32}`` tests (corresponding to a ``+``
-sign on the line ``success``) and then, in your cloned repository,
-switch to this revision using::
-
-    hg up -r XXXXX
-
-where XXXXX is the revision id.
-
-.. _`our nightly tests`: http://buildbot.pypy.org/summary?branch=<trunk>
-
-If you want to commit to our repository on bitbucket, you will have to
-install subversion in addition to mercurial.
-
-Installing using virtualenv
----------------------------
-
-It is often convenient to run pypy inside a virtualenv.  To do this
-you need a recent version of virtualenv -- 1.5 or greater.  You can
-then install PyPy either from a precompiled tarball or from a mercurial
-checkout::
-
-	# from a tarball
-	$ virtualenv -p /opt/pypy-c-jit-41718-3fb486695f20-linux/bin/pypy my-pypy-env
-
-	# from the mercurial checkout
-	$ virtualenv -p /path/to/pypy/pypy/translator/goal/pypy-c my-pypy-env
-
-Note that bin/python is now a symlink to bin/pypy.
-
-
-Where to go from here
-----------------------
-
-After you successfully manage to get PyPy's source you can read more about:
-
- - `Building and using PyPy's Python interpreter`_
- - `Learning more about the translation toolchain and how to develop (with) PyPy`_
-
-.. _`Building and using PyPy's Python interpreter`: getting-started-python.html
-.. _`Learning more about the translation toolchain and how to develop (with) PyPy`: getting-started-dev.html
-
-
-Understanding PyPy's architecture
----------------------------------
-
-For in-depth information about architecture and coding documentation 
-head over to the `documentation section`_ where you'll find lots of 
-interesting information.  Additionally, in true hacker spirit, you 
-may just `start reading sources`_.
-
-.. _`documentation section`: docindex.html 
-.. _`start reading sources`: getting-started-dev.html#start-reading-sources
-
-Filing bugs or feature requests 
--------------------------------
-
-You may file `bug reports`_ on our issue tracker which is
-also accessible through the 'issues' top menu of 
-the PyPy website.  `Using the development tracker`_ has 
-more detailed information on specific features of the tracker. 
-
-.. _`Using the development tracker`: coding-guide.html#using-development-tracker
-.. _bug reports:            https://codespeak.net/issue/pypy-dev/
-
-
-.. include:: _ref.txt

diff --git a/pypy/doc/config/objspace.std.withmethodcache.txt b/pypy/doc/config/objspace.std.withmethodcache.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withmethodcache.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Enable method caching. See the section "Method Caching" in `Standard
-Interpreter Optimizations <../interpreter-optimizations.html#method-caching>`__.
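As a rough illustration of what method caching means here, the sketch below is a simplified stand-in for the mechanism described in the linked document: each class carries a version tag that is renewed whenever the class changes, so cached lookups can never return stale results:

```python
class VersionTag(object):
    """A fresh object per class version; identity marks staleness."""

class Class(object):
    def __init__(self):
        self.methods = {}
        self.version = VersionTag()

    def define(self, name, fn):
        self.methods[name] = fn
        self.version = VersionTag()  # invalidate all cached lookups

_cache = {}

def lookup(cls, name):
    # Memoize on (version, name): a redefined class has a new version
    # object, so entries cached for the old version are never reused.
    key = (cls.version, name)
    if key not in _cache:
        _cache[key] = cls.methods.get(name)
    return _cache[key]

c = Class()
c.define('f', lambda: 42)
assert lookup(c, 'f')() == 42
c.define('f', lambda: 43)
assert lookup(c, 'f')() == 43
```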

diff --git a/pypy/doc/config/objspace.usemodules._random.txt b/pypy/doc/config/objspace.usemodules._random.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._random.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the '_random' module.  It is required by the standard library module 'random'.
-This module is expected to be working and is included by default.

diff --git a/pypy/doc/discussion/removing-stable-compiler.txt b/pypy/doc/discussion/removing-stable-compiler.txt
deleted file mode 100644
--- a/pypy/doc/discussion/removing-stable-compiler.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-February 28th, 2006
-
-While implementing conditional expressions from 2.5 we had to change
-the stable compiler in order to keep tests from breaking.  While using
-stable compiler as a baseline made sense when the ast compiler was
-new, it is less and less true as new grammar changes are introduced.
-
-Options include
-
-1. Freezing the stable compiler at grammar 2.4.
-
-2. Capture AST output from the stable compiler and use that explicitly
-in current tests instead of regenerating them every time, primarily
-because it allows us to change the grammar without changing the stable
-compiler.
-
-
-In either case, AST production tests for new grammar changes could be
-written manually, which is less effort than fixing the stable
-compiler (which itself isn't really tested anyway).
-
-Discussion by Arre, Anders L., Stuart Williams

diff --git a/pypy/doc/config/translation.backendopt.txt b/pypy/doc/config/translation.backendopt.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-This group contains options about various backend optimization passes. Most of
-them are described in the `EU report about optimization`_.
-
-.. _`EU report about optimization`: http://codespeak.net/pypy/extradoc/eu-report/D07.1_Massive_Parallelism_and_Translation_Aspects-2007-02-28.pdf
-

diff --git a/pypy/doc/config/index.txt b/pypy/doc/config/index.txt
deleted file mode 100644
--- a/pypy/doc/config/index.txt
+++ /dev/null
@@ -1,52 +0,0 @@
-==============================
-Configuration Options for PyPy
-==============================
-
-This directory contains documentation for the many `configuration`_
-options that can be used to affect PyPy's behaviour.  There are two
-main classes of option, `object space options`_ and `translation
-options`_.
-
-There are two main entry points that accept options: ``py.py``, which
-implements Python on top of another Python interpreter and accepts all
-the `object space options`_:
-
-.. parsed-literal::
-
-    ./py.py <`objspace options`_>
-
-and the ``translate.py`` translation entry
-point which takes arguments of this form:
-
-.. parsed-literal::
-
-    ./translate.py <`translation options`_> <target>
-
-For the common case of ``<target>`` being ``targetpypystandalone.py``,
-you can then pass the `object space options`_ after
-``targetpypystandalone.py``, i.e. like this:
-
-.. parsed-literal::
-
-    ./translate.py <`translation options`_> targetpypystandalone.py <`objspace options`_>
-
-There is an `overview`_ of all command line arguments that can be
-passed in either position.
-
-Many of the more interesting object space options enable optimizations,
-which are described in `Standard Interpreter Optimizations`_, or allow
-the creation of objects that can barely be imagined in CPython, which
-are documented in `What PyPy can do for your objects`_.
-
-The following diagram gives some hints about which PyPy features work together
-with which other PyPy features:
-
-.. image:: ../image/compat-matrix.png
-
-.. _`configuration`: ../configuration.html
-.. _`objspace options`: commandline.html#objspace
-.. _`object space options`: commandline.html#objspace
-.. _`translation options`: commandline.html#translation
-.. _`overview`: commandline.html
-.. _`Standard Interpreter Optimizations`: ../interpreter-optimizations.html
-.. _`What PyPy can do for your objects`: ../objspace-proxies.html

diff --git a/pypy/doc/config/translation.jit_profiler.txt b/pypy/doc/config/translation.jit_profiler.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.jit_profiler.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Integrate profiler support into the JIT

diff --git a/pypy/doc/config/objspace.usemodules.cmath.txt b/pypy/doc/config/objspace.usemodules.cmath.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.cmath.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'cmath' module. 
-This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/objspace.usemodules.mmap.txt b/pypy/doc/config/objspace.usemodules.mmap.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.mmap.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'mmap' module. 
-This module is expected to be fully working.

diff --git a/pypy/doc/config/translation.simplifying.txt b/pypy/doc/config/translation.simplifying.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.simplifying.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Internal option.
-
-.. internal

diff --git a/pypy/doc/config/objspace.usemodules._socket.txt b/pypy/doc/config/objspace.usemodules._socket.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._socket.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-Use the '_socket' module. 
-
-This is our implementation of '_socket', the Python builtin module
-exposing socket primitives, which is wrapped and used by the standard
-library 'socket.py' module. It is based on `rffi`_.
-
-.. _`rffi`: ../rffi.html

diff --git a/pypy/doc/config/translation.backend.txt b/pypy/doc/config/translation.backend.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backend.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Which backend to use when translating, see `translation documentation`_.
-
-.. _`translation documentation`: ../translation.html

diff --git a/pypy/doc/config/translation.force_make.txt b/pypy/doc/config/translation.force_make.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.force_make.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Force executing makefile instead of using platform.

diff --git a/pypy/doc/config/translation.vanilla.txt b/pypy/doc/config/translation.vanilla.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.vanilla.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Try to make the resulting compiled program as portable (i.e. movable to
-another machine) as possible, which is not much.

diff --git a/pypy/doc/config/objspace.usemodules._bisect.txt b/pypy/doc/config/objspace.usemodules._bisect.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._bisect.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Use the '_bisect' module.
-Optionally used by the 'bisect' standard library module. This module is expected to be working and is included by default.
-
-

diff --git a/pypy/doc/config/translation.jit_backend.txt b/pypy/doc/config/translation.jit_backend.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.jit_backend.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Choose the backend to use for the JIT.
-By default, this is the best backend for the current platform.

diff --git a/pypy/doc/jit/overview.txt b/pypy/doc/jit/overview.txt
deleted file mode 100644
--- a/pypy/doc/jit/overview.txt
+++ /dev/null
@@ -1,195 +0,0 @@
-------------------------------------------------------------------------
-                   Motivating JIT Compiler Generation
-------------------------------------------------------------------------
-
-.. contents::
-.. sectnum::
-
-This is a non-technical introduction and motivation for PyPy's approach
-to Just-In-Time compiler generation.
-
-
-Motivation
-========================================================================
-
-Overview
---------
-
-Writing an interpreter for a complex dynamic language like Python is not
-a small task, especially if, for performance goals, we want to write a
-Just-in-Time (JIT) compiler too.
-
-The good news is that it's not what we did.  We indeed wrote an
-interpreter for Python, but we never wrote any JIT compiler for Python
-in PyPy.  Instead, we use the fact that our interpreter for Python is
-written in RPython, which is a nice, high-level language -- and we turn
-it *automatically* into a JIT compiler for Python.
-
-This transformation is of course completely transparent to the user,
-i.e. the programmer writing Python programs.  The goal (which we
-achieved) is to support *all* Python features -- including, for example,
-random frame access and debuggers.  But it is also mostly transparent to
-the language implementor, i.e. to the source code of the Python
-interpreter.  It only needs a bit of guidance: we had to put a small
-number of hints in the source code of our interpreter.  Based on these
-hints, the *JIT compiler generator* produces a JIT compiler which has
-the same language semantics as the original interpreter by construction.
-This JIT compiler itself generates machine code at runtime, aggressively
-optimizing the user's program and leading to a big performance boost,
-while keeping the semantics unmodified.  Of course, the interesting bit
-is that our Python language interpreter can evolve over time without
-getting out of sync with the JIT compiler.
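The kind of hint mentioned above can be illustrated with a toy bytecode interpreter. The `JitDriver` class below is a self-contained stand-in for RPython's real one, and its arguments and the opcode set are purely illustrative:

```python
class JitDriver(object):
    """Stand-in for RPython's JitDriver; details are illustrative."""
    def __init__(self, greens, reds):
        self.greens, self.reds = greens, reds
    def jit_merge_point(self, **live_vars):
        pass  # the real driver marks interpreter loop headers here

jitdriver = JitDriver(greens=['pc', 'bytecode'], reds=['acc'])

def interpret(bytecode):
    pc, acc = 0, 0
    while pc < len(bytecode):
        # The hint: tell the JIT generator where the interpreter loop
        # is, and which variables are constant ("green") per trace.
        jitdriver.jit_merge_point(pc=pc, bytecode=bytecode, acc=acc)
        op = bytecode[pc]
        if op == 'INC':
            acc += 1
        elif op == 'DOUBLE':
            acc *= 2
        pc += 1
    return acc

assert interpret(['INC', 'INC', 'DOUBLE']) == 4
```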
-
-
-The path we followed
---------------------
-
-Our previous incarnations of PyPy's JIT generator were based on partial
-evaluation. This is a well-known and much-researched topic, considered
-to be very promising. There have been many attempts to use it to
-automatically transform an interpreter into a compiler. However, none of
-them have led to substantial speedups for real-world languages. We
-believe that the missing key insight is to use partial evaluation to
-produce just-in-time compilers, rather than classical ahead-of-time
-compilers.  If this turns out to be correct, the practical speed of
-dynamic languages could be vastly improved.
-
-All these previous JIT compiler generators were producing JIT compilers
-similar to the hand-written Psyco.  But today, starting from 2009, our
-prototype is no longer using partial evaluation -- at least not in a way
-that would convince paper reviewers.  It is instead based on the notion
-of *tracing JIT*, recently studied for Java and JavaScript.  When
-compared to all existing tracing JITs so far, however, partial
-evaluation gives us some extra techniques that we already had in our
-previous JIT generators, notably how to optimize structures by removing
-allocations.
-
-The closest comparison to our current JIT is Mozilla's TraceMonkey.
-However, that JIT compiler is written by hand, which is quite some
-effort.  In PyPy, we write a JIT generator at the level of RPython,
-which means that our final JIT does not have to -- indeed, cannot -- be
-written to encode all the details of the full Python language.  These
-details are automatically supplied by the fact that we have an
-interpreter for full Python.
-
-
-Practical results
------------------
-
-The JIT compilers that we generate use some techniques that are not in
-widespread use so far, but they are not exactly new either.  The point
-we want to make here is not that we are pushing the theoretical limits
-of how fast a given dynamic language can be run.  Our point is: we are
-making it **practical** to have reasonably good Just-In-Time compilers
-for all dynamic languages, no matter how complicated or non-widespread
-(e.g. Open Source dynamic languages without large industry or academic
-support, or internal domain-specific languages).  By practical we mean
-that this should be:
-
-* Easy: requires little more effort than writing the interpreter in the
-  first place.
-
-* Maintainable: our generated JIT compilers are not separate projects
-  (we do not generate separate source code, but only throw-away C code
-  that is compiled into the generated VM).  In other words, the whole
-  JIT compiler is regenerated anew every time the high-level interpreter
-  is modified, so that the two cannot get out of sync no matter how fast
-  the language evolves.
-
-* Fast enough: we can get some rather good performance out of the
-  generated JIT compilers.  That's the whole point, of course.
-
-
-Alternative approaches to improve speed
-========================================================================
-
-+----------------------------------------------------------------------+
-| :NOTE:                                                               |
-|                                                                      |
-|   Please take the following section as just a statement of opinion.  |
-|   In order to be debated over, the summaries should first be         |
-|   expanded into full arguments.  We include them here as links;      |
-|   we are aware of them, even if sometimes pessimistic about them     |
-|   ``:-)``                                                            |
-+----------------------------------------------------------------------+
-
-There are a large number of approaches to improving the execution speed of
-dynamic programming languages, most of which only produce small improvements
-and none offer the flexibility and customisability provided by our approach.
-Over the last 6 years of tweaking, the speed of CPython has only improved by a
-factor of 1.3 or 1.4 (depending on benchmarks).  Many tweaks are applicable to
-PyPy as well. Indeed, some of the CPython tweaks originated as tweaks for PyPy.
-
-IronPython initially achieved a speed of about 1.8 times that of CPython by
-leaving out some details of the language and by leveraging the large investment
-that Microsoft has put into making the .NET platform fast; the current, more
-complete implementation has roughly the same speed as CPython.  In general, the
-existing approaches have reached the end of the road, speed-wise.  Microsoft's
-Dynamic Language Runtime (DLR), often cited in this context, is essentially
-only an API to make the techniques pioneered in IronPython official.  At best,
-it will give another small improvement.
-
-Another technique regularly mentioned is adding types to the language in order
-to speed it up: either explicit optional typing or soft typing (i.e., inferred
-"likely" types).  For Python, all projects in this area have started with a
-simplified subset of the language; no project has scaled up to anything close
-to the complete language.  This would be a major effort and be platform- and
-language-specific.  Moreover, maintenance would be a headache: we believe
-that many changes that are trivial to implement in CPython are likely to
-invalidate previous carefully-tuned optimizations.
-
-For major improvements in speed, JIT techniques are necessary.  For Python,
-Psyco gives typical speedups of 2 to 4 times - up to 100 times in algorithmic
-examples.  It has come to a dead end because of the difficulty and huge costs
-associated with developing and maintaining it.  It has a relatively poor
-encoding of language semantics - knowledge about Python behavior needs to be
-encoded by hand and kept up-to-date.  At least, Psyco works correctly even when
-encountering one of the numerous Python constructs it does not support, by
-falling back to CPython.  The PyPy JIT started out as a metaprogrammatic,
-non-language-specific equivalent of Psyco.
-
-A different kind of prior art are self-hosting JIT compilers such as Jikes.
-Jikes is a JIT compiler for Java written in Java. It has a poor encoding of
-language semantics; it would take an enormous amount of work to encode all the
-details of a Python-like language directly into a JIT compiler.  It also has
-limited portability, which is an issue for Python; it is likely that large
-parts of the JIT compiler would need retargetting in order to run in a
-different environment than the intended low-level one.
-
-Simply reusing an existing well-tuned JIT like that of the JVM does not
-really work, because of concept mismatches between the implementor's
-language and the host VM language: the former needs to be compiled to
-the target environment in such a way that the JIT is able to speed it up
-significantly - an approach which essentially has failed in Python so
-far: even though CPython is a simple interpreter, its Java and .NET
-re-implementations are not significantly faster.
-
-More recently, several larger projects have started in the JIT area.  For
-instance, Sun Microsystems is investing in JRuby, which aims to use the Java
-Hotspot JIT to improve the performance of Ruby. However, this requires a lot of
-hand crafting and will only provide speedups for one language on one platform.
-Some issues are delicate, e.g., how to remove the overhead of constantly boxing
-and unboxing, typical in dynamic languages.  An advantage compared to PyPy is
-that there are some hand optimizations that can be performed, that do not fit
-in the metaprogramming approach.  But metaprogramming makes the PyPy JIT
-reusable for many different languages on many different execution platforms.
-It is also possible to combine the approaches - we can get substantial speedups
-using our JIT and then feed the result to Java's Hotspot JIT for further
-improvement.  One of us is even a member of the `JSR 292`_ Expert Group
-to define additions to the JVM to better support dynamic languages, and
-is contributing insights from our JIT research, in ways that will also
-benefit PyPy.
-
-Finally, tracing JITs are now emerging for dynamic languages, for example
-TraceMonkey for JavaScript.  The code generated by PyPy follows concepts
-very similar to those of tracing JITs, but it is not hand-written.
-
-
-Further reading
-========================================================================
-
-The description of the current PyPy JIT generator is given in PyJitPl5_
-(draft).
-
-.. _`JSR 292`: http://jcp.org/en/jsr/detail?id=292
-.. _PyJitPl5: pyjitpl5.html

diff --git a/pypy/doc/config/commandline.txt b/pypy/doc/config/commandline.txt
deleted file mode 100644
--- a/pypy/doc/config/commandline.txt
+++ /dev/null
@@ -1,33 +0,0 @@
-
-.. contents::
-    
-
-.. _objspace:
-.. _`overview-of-command-line-options-for-objspace`:
-
--------------------------------
-PyPy Python interpreter options
--------------------------------
-
-The following options can be used after ``translate.py
-targetpypystandalone`` or as options to ``py.py``.
-
-.. GENERATE: objspace
-
-
-.. _translation:
-.. _`overview-of-command-line-options-for-translation`:
-
----------------------------
-General translation options
----------------------------
-
-The following are options of ``translate.py``.  They must be
-given before the ``targetxxx`` on the command line.
-
-* `--opt -O:`__ set the optimization level `[0, 1, size, mem, 2, 3]`
-
-.. __: opt.html
-
-.. GENERATE: translation
-

diff --git a/pypy/doc/config/translation.backendopt.profile_based_inline_heuristic.txt b/pypy/doc/config/translation.backendopt.profile_based_inline_heuristic.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.profile_based_inline_heuristic.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Internal option. Switch to a different weight heuristic for inlining.
-This is for profile-based inlining (:config:`translation.backendopt.profile_based_inline`).
-
-.. internal

diff --git a/pypy/doc/config/objspace.usemodules._sha.txt b/pypy/doc/config/objspace.usemodules._sha.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._sha.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Use the built-in '_sha' module.
-This module is expected to be working and is included by default.
-There is also a pure Python version in lib_pypy which is used
-if the built-in is disabled, but it is several orders of magnitude 
-slower.

diff --git a/pypy/doc/config/objspace.usemodules.time.txt b/pypy/doc/config/objspace.usemodules.time.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.time.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Use the 'time' module. 
-
-Obsolete; use :config:`objspace.usemodules.rctime` for our up-to-date version
-of the application-level 'time' module.

diff --git a/pypy/doc/config/objspace.translationmodules.txt b/pypy/doc/config/objspace.translationmodules.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.translationmodules.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-This option enables all modules which are needed to translate PyPy using PyPy.

diff --git a/pypy/doc/discussion/cmd-prompt-translation.txt b/pypy/doc/discussion/cmd-prompt-translation.txt
deleted file mode 100644
--- a/pypy/doc/discussion/cmd-prompt-translation.txt
+++ /dev/null
@@ -1,18 +0,0 @@
-
-t = Translation(entry_point[,<options>])
-t.annotate([<options>])
-t.rtype([<options>])
-t.backendopt[_<backend>]([<options>])
-t.source[_<backend>]([<options>])
-f = t.compile[_<backend>]([<options>])
-
-and t.view(), t.viewcg()
-
-<backend> = c|llvm (for now)
-you can skip steps
-
-<options> = argtypes (for annotation) plus 
-            keyword args:  gc=...|policy=<annpolicy> etc
-
-
-

diff --git a/pypy/doc/config/objspace.usemodules._hashlib.txt b/pypy/doc/config/objspace.usemodules._hashlib.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._hashlib.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the '_hashlib' module.
-Used by the 'hashlib' standard lib module, and indirectly by the various cryptographic libs. This module is expected to be working and is included by default.

diff --git a/pypy/doc/discussion/use_case_of_logic.txt b/pypy/doc/discussion/use_case_of_logic.txt
deleted file mode 100644
--- a/pypy/doc/discussion/use_case_of_logic.txt
+++ /dev/null
@@ -1,75 +0,0 @@
-Use cases for combining the Logic and Object-Oriented programming approaches
--------------------------------------------------------------------------------
-
-Workflows
-=========
-
-Defining the next state by solving certain constraints. The more
-general term might be State machines.
-
-Business Logic
-==============
-
-We define Business Logic as expressing consistency (as an example) on
-a set of objects in a business application.
-
-For example checking the consistency of a calculation before
-committing the changes.
-
-The domain is quite rich in examples of uses of Business Logic.
-
-Datamining
-===========
-
-An example is Genetic sequence matching.
-
-Databases
-=========
-
-Validity constraints for the data can be expressed as constraints.
-
-Constraints can be used to perform type inference when querying the
-database.
-
-Semantic web
-=============
-
-The use case is like the database case, except that the ontology
-language itself is born out of Description Logic.
-
-
-User Interfaces
-===============
-
-We use rules to describe the layout and visibility constraints of
-elements that are to be displayed on screen. The rules can also help
-describe how an element is to be displayed depending on its state
-(for instance, out-of-bound values can be displayed in a different
-colour).
-
-Configuration
-==============
-
-User configuration can use information inferred from: the current
-user, the current platform, version requirements, ...
-
-The validity of the configuration can be checked with the constraints.
-
-
-Scheduling and planning
-========================
-
-Timetables, process scheduling, task scheduling.
-
-Use rules to determine when to execute tasks (only start a batch if the
-load is low and the previous batch is finished).
-
-Load sharing.
-
-Route optimization. Planning the routes of a technician based on the
-tools needed, and so on.
-
-An example is scheduling a conference like EuroPython; see:
-
-http://lists.logilab.org/pipermail/python-logic/2005-May/000107.html
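The batch rule above can be made concrete as a tiny predicate over the system state. This is a toy illustration with invented names, not PyPy code:

```python
def can_start_batch(load, previous_finished, load_threshold=0.5):
    # "Only start a batch if load is low and the previous batch finished."
    return load < load_threshold and previous_finished

assert can_start_batch(0.2, True)
assert not can_start_batch(0.9, True)   # load too high
assert not can_start_batch(0.2, False)  # previous batch still running
```

A constraint solver generalizes this: instead of evaluating one hand-written predicate, it searches for states satisfying all declared rules.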
-

diff --git a/pypy/doc/config/objspace.usemodules.gc.txt b/pypy/doc/config/objspace.usemodules.gc.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.gc.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Use the 'gc' module. 
-This module is expected to be working and is included by default.
-Note that since the gc module is highly implementation specific, it contains
-only the ``collect`` function in PyPy, which forces a collection when compiled
-with the framework or with Boehm.

diff --git a/pypy/doc/discussion/ctypes_todo.txt b/pypy/doc/discussion/ctypes_todo.txt
deleted file mode 100644
--- a/pypy/doc/discussion/ctypes_todo.txt
+++ /dev/null
@@ -1,34 +0,0 @@
-A few ctypes-related todo points:
-
-* Write down missing parts and port all tests, eventually adding
-  additional tests.
-
-  - for unions and structs, late assignment of _fields_ is somewhat buggy.
-    Tests about behavior of getattr working properly on instances
-    are missing or not comprehensive. Some tests are skipped because I didn't
-    understand the details.
-
-  - _fields_ can be tuples as well as lists
-
-  - restype being a function is not working.
-
-  - there are features which we don't support, like the buffer() and
-    array() protocols.
-
-  - are the _CData_value return lifetime/gc semantics correct?
-
-  - for some ABIs we will need completely filled ffitypes to do the
-    right thing for passing structures by value; we are now passing enough
-    information to rawffi that it should be possible to construct such
-    precise ffitypes in most cases
-
-  - bitfields are not implemented
-
-  - byteorder is not implemented
-
-* as all of this is applevel, we cannot make it really fast right now.
-
-* we shall at least try to approach ctypes from the point of view of the
-  jit backends (at least on platforms that we support). The thing is that
-  we need much broader support in the jit backends for different argument
-  passing in order to do it.

diff --git a/pypy/doc/config/objspace.std.withsmalllong.txt b/pypy/doc/config/objspace.std.withsmalllong.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withsmalllong.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Enable "small longs", an additional implementation of the Python
-type "long", implemented with a C long long.  It is mostly useful
-on 32-bit; on 64-bit, a C long long is the same as a C long, so
-its usefulness is limited to Python objects of type "long" that
-would anyway fit in an "int".

diff --git a/pypy/doc/config/objspace.usemodules._weakref.txt b/pypy/doc/config/objspace.usemodules._weakref.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._weakref.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-Use the '_weakref' module, necessary for the standard lib 'weakref' module.
-PyPy's weakref implementation is not completely stable yet. The first
-difference from CPython is that weak references only go away after the next
-garbage collection, not immediately. The other problem seems to be that under
-certain circumstances (that we have not determined) weak references keep the
-object alive.

diff --git a/pypy/doc/config/objspace.usemodules.posix.txt b/pypy/doc/config/objspace.usemodules.posix.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.posix.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Use the essential 'posix' module.
-This module is essential, included by default and cannot be removed (even when
-specified explicitly, the option gets overridden later).

diff --git a/pypy/doc/config/translation.backendopt.really_remove_asserts.txt b/pypy/doc/config/translation.backendopt.really_remove_asserts.txt
deleted file mode 100644

diff --git a/pypy/doc/discussion/thoughts_string_interning.txt b/pypy/doc/discussion/thoughts_string_interning.txt
deleted file mode 100644
--- a/pypy/doc/discussion/thoughts_string_interning.txt
+++ /dev/null
@@ -1,211 +0,0 @@
-String Interning in PyPy
-========================
-
-A few thoughts about string interning. CPython gets a remarkable
-speed-up by interning strings. All builtin string objects and all
-strings used as names are interned. The effect is that when
-a string lookup is done during instance attribute access,
-the dict lookup method will always find the string by identity,
-saving the need to do a string comparison.
-
-Interned Strings in CPython
----------------------------
-
-CPython keeps an internal dictionary named ``interned`` for all of these
-strings. It contains the string both as key and as value, which means
-there are two extra references in principle. Up to version 2.2, interned
-strings were considered immortal. Once they entered the ``interned`` dict,
-nothing could reclaim that memory again.
-
-Starting with Python 2.3, interned strings became mortal by default.
-The reason was less memory usage for strings that have no external
-reference any longer. This seems to be a worthwhile enhancement.
-Interned strings that are really needed always have a real reference.
-Strings which are interned for temporary reasons get a big speed up
-and can be freed after they are no longer in use.
-
-This was implemented by making the ``interned`` dictionary a weak dict,
-by lowering the refcount of interned strings by 2. The string deallocator
-got extra handling to look into the ``interned`` dict when a string is deallocated.
-This is supported by a state variable on string objects which tells
-whether the string is not interned, interned immortal, or interned mortal.
-
-Implementation problems for PyPy
---------------------------------
-
-- The CPython implementation makes explicit use of the refcount to handle
-  the weak-dict behavior of ``interned``. PyPy does not expose the implementation
-  of object aliveness. Special handling would be needed to simulate mortal
-  behavior. A possible but expensive solution would be to use a real
-  weak dictionary. Another way is to add a special interface to the backend
-  that allows either the two extra references to be reset, or for the
-  boehm collector to exclude the ``interned`` dict from reference tracking.
-
-- PyPy implements quite complete internal strings, as opposed to CPython
-  which always uses its "applevel" strings. It also supports low-level
-  dictionaries. This adds some complication to the issue of interning.
-  Additionally, the interpreter currently handles attribute access
-  by calling wrap(str) on the low-level attribute string when executing 
-  frames. This implies that we have to primarily intern low-level strings
-  and cache the created string objects on top of them.
-  A possible implementation would use a dict with ll string keys and the
-  string objects as values. In order to save the extra dict lookup, we
-  could also consider caching the string object directly on a field of the rstr,
-  which of course adds some extra cost. Alternatively, a fast id-indexed
-  extra dictionary can provide the mapping from rstr to interned string object.
-  But for efficiency reasons, it is anyway necessary to put an extra flag about
-  interning on the strings. Flagging this by putting the string object itself
-  as the flag might be acceptable. A dummyobject can be used if the interned
-  rstr is not exposed as an interned string object.
-
-Update: a reasonably simple implementation
--------------------------------------------
-
-Instead of the complications using the stringobject as a property of an rstr
-instance, I propose to special case this kind of dictionary (mapping rstr
-to stringobject) and to put an integer ``interned`` field into the rstr. The
-default is -1 for not interned. Non-negative values are the direct index
-of this string into the interning dict. That is, we grow an extra function
-that indexes the dict by slot number of the dict table and gives direct
-access to its value. The dictionary gets special handling on dict_resize,
-to recompute the slot numbers of the interned strings. ATM I'd say we leave
-the strings immortal and support mortality later when we have a cheap
-way to express this (less refcount, exclusion from Boehm, whatever).
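The core of the interning idea can be sketched in plain Python (invented names; this is neither the CPython nor the proposed PyPy implementation): a single dict maps each string value to its canonical wrapped object, so later lookups can compare by identity instead of by value.

```python
class WString:
    """Stand-in for a wrapped string object (hypothetical)."""
    def __init__(self, value):
        self.value = value

_intern = {}  # maps the raw string value -> its canonical WString

def intern_string(value):
    # Return the one canonical wrapper for this value, creating it lazily.
    w = _intern.get(value)
    if w is None:
        w = _intern[value] = WString(value)
    return w

# Two lookups of equal strings yield the identical object, so a dict
# keyed by interned strings can compare keys with 'is' instead of '=='.
assert intern_string("foo") is intern_string("foo")
```

The proposal above refines this by storing an integer slot index on the rstr itself, avoiding even the dict lookup on the hot path.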
-
-A prototype brute-force patch
------------------------------
-
-In order to get some idea how efficient string interning is at the moment,
-I implemented a quite crude version of interning. I patched space.wrap
-to call this intern_string instead of W_StringObject::
-
- def intern_string(space, str):
-     if we_are_translated():
-         _intern_ids = W_StringObject._intern_ids
-         str_id = id(str)
-         w_ret = _intern_ids.get(str_id, None)
-         if w_ret is not None:
-             return w_ret
-         _intern = W_StringObject._intern
-         if str not in _intern:
-             _intern[str] = W_StringObject(space, str)
-         W_StringObject._intern_keep[str_id] = str
-         _intern_ids[str_id] = w_ret = _intern[str]
-         return w_ret
-     else:
-         return W_StringObject(space, str)
-
-This is no general solution at all, since it a) does not provide
-interning of rstr and b) interns every app-level string. The
-implementation is also by far not as efficient as it could be,
-because it utilizes an extra dict _intern_ids which maps the
-id of the rstr to the string object, and a dict _intern_keep to
-keep these ids alive.
-
-With just a single _intern dict from rstr to string object, the
-overall performance degraded slightly instead of improving.
-The triple dict patch accelerates richards by about 12 percent.
-Since it still has the overhead of handling the extra dicts,
-I guess we can expect twice the acceleration if we add proper
-interning support.
-
-The resulting estimated 24% acceleration is still not enough
-to justify an implementation right now.
-
-Here are the results of the richards benchmark::
-
-  D:\pypy\dist\pypy\translator\goal>pypy-c-17516.exe -c "from richards import *;Richards.iterations=1;main()"
-  debug: entry point starting
-  debug:  argv -> pypy-c-17516.exe
-  debug:  argv -> -c
-  debug:  argv -> from richards import *;Richards.iterations=1;main()
-  Richards benchmark (Python) starting... [<function entry_point at 0xeae060>]
-  finished.
-  Total time for 1 iterations: 38 secs
-  Average time for iterations: 38885 ms
-  
-  D:\pypy\dist\pypy\translator\goal>pypy-c.exe -c "from richards import *;Richards.iterations=1;main()"
-  debug: entry point starting
-  debug:  argv -> pypy-c.exe
-  debug:  argv -> -c
-  debug:  argv -> from richards import *;Richards.iterations=1;main()
-  Richards benchmark (Python) starting... [<function entry_point at 0xead810>]
-  finished.
-  Total time for 1 iterations: 34 secs
-  Average time for iterations: 34388 ms
-  
-  D:\pypy\dist\pypy\translator\goal>
-
-
-This was just an exercise to get an idea. For sure this is not to be checked in.
-Instead, I'm attaching the simple patch here for reference.
-::
-
-  Index: objspace/std/objspace.py
-  ===================================================================
-  --- objspace/std/objspace.py	(revision 17526)
-  +++ objspace/std/objspace.py	(working copy)
-  @@ -243,6 +243,9 @@
-                   return self.newbool(x)
-               return W_IntObject(self, x)
-           if isinstance(x, str):
-  +            # XXX quick speed testing hack
-  +            from pypy.objspace.std.stringobject import intern_string
-  +            return intern_string(self, x)
-               return W_StringObject(self, x)
-           if isinstance(x, unicode):
-               return W_UnicodeObject(self, [unichr(ord(u)) for u in x]) # xxx
-  Index: objspace/std/stringobject.py
-  ===================================================================
-  --- objspace/std/stringobject.py	(revision 17526)
-  +++ objspace/std/stringobject.py	(working copy)
-  @@ -18,6 +18,10 @@
-   class W_StringObject(W_Object):
-       from pypy.objspace.std.stringtype import str_typedef as typedef
-   
-  +    _intern_ids = {}
-  +    _intern_keep = {}
-  +    _intern = {}
-  +
-       def __init__(w_self, space, str):
-           W_Object.__init__(w_self, space)
-           w_self._value = str
-  @@ -32,6 +36,21 @@
-   
-   registerimplementation(W_StringObject)
-   
-  +def intern_string(space, str):
-  +    if we_are_translated():
-  +        _intern_ids = W_StringObject._intern_ids
-  +        str_id = id(str)
-  +        w_ret = _intern_ids.get(str_id, None)
-  +        if w_ret is not None:
-  +            return w_ret
-  +        _intern = W_StringObject._intern
-  +        if str not in _intern:
-  +            _intern[str] = W_StringObject(space, str)
-  +        W_StringObject._intern_keep[str_id] = str
-  +        _intern_ids[str_id] = w_ret = _intern[str]
-  +        return w_ret
-  +    else:
-  +        return W_StringObject(space, str)
-   
-   def _isspace(ch):
-       return ord(ch) in (9, 10, 11, 12, 13, 32)  
-  Index: objspace/std/stringtype.py
-  ===================================================================
-  --- objspace/std/stringtype.py	(revision 17526)
-  +++ objspace/std/stringtype.py	(working copy)
-  @@ -47,6 +47,10 @@
-       if space.is_true(space.is_(w_stringtype, space.w_str)):
-           return w_obj  # XXX might be reworked when space.str() typechecks
-       value = space.str_w(w_obj)
-  +    # XXX quick hack to check interning effect
-  +    w_obj = W_StringObject._intern.get(value, None)
-  +    if w_obj is not None:
-  +        return w_obj
-       w_obj = space.allocate_instance(W_StringObject, w_stringtype)
-       W_StringObject.__init__(w_obj, space, value)
-       return w_obj
-
-ciao - chris

diff --git a/pypy/doc/discussion/compiled-swamp.txt b/pypy/doc/discussion/compiled-swamp.txt
deleted file mode 100644
--- a/pypy/doc/discussion/compiled-swamp.txt
+++ /dev/null
@@ -1,14 +0,0 @@
-
-We've got a huge swamp of compiled pypy-c's, used for:
-
-* benchmarks
-* tests
-* compliance tests
-* play1
-* downloads
-* ...
-
-We've also got a build tool, which we don't use, etc. etc.
-
-The idea is to formalize it more or less, so we'll have a single script
-to make all of this work, upload builds to the web page, etc.

diff --git a/pypy/doc/config/translation.backendopt.clever_malloc_removal.txt b/pypy/doc/config/translation.backendopt.clever_malloc_removal.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.clever_malloc_removal.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-Try to inline flowgraphs based on whether doing so would enable malloc
-removal (:config:`translation.backendopt.mallocs`) by eliminating
-calls that result in escaping. This is an experimental optimization;
-right now some eager inlining is also necessary, so that helpers doing
-the malloc themselves are inlined first, for this to be effective.
-This option also enables an extra subsequent malloc removal phase.
-
-Callee flowgraphs are considered candidates based on a weight heuristic like
-for basic inlining. (see :config:`translation.backendopt.inline`,
-:config:`translation.backendopt.clever_malloc_removal_threshold` ).

diff --git a/pypy/doc/config/objspace.usemodules.token.txt b/pypy/doc/config/objspace.usemodules.token.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.token.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'token' module. 
-This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/translation.secondaryentrypoints.txt b/pypy/doc/config/translation.secondaryentrypoints.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.secondaryentrypoints.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Enable the secondary entry points support list. Needed for the cpyext module.

diff --git a/pypy/doc/config/objspace.lonepycfiles.txt b/pypy/doc/config/objspace.lonepycfiles.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.lonepycfiles.txt
+++ /dev/null
@@ -1,16 +0,0 @@
-If turned on, PyPy will import a module ``x`` if it finds a
-file ``x.pyc``, even if there is no file ``x.py``.
-
-This is the way that CPython behaves, but it is disabled by
-default for PyPy because it is a common cause of issues: most
-typically, the ``x.py`` file is removed (manually or by a
-version control system) but the ``x`` module remains
-accidentally importable because the ``x.pyc`` file stays
-around.
-
-The usual reason for wanting this feature is to distribute
-non-open-source Python programs by distributing ``pyc`` files
-only, but this use case is not practical for PyPy at the
-moment because multiple versions of PyPy compiled with various
-optimizations might be unable to load each other's ``pyc``
-files.

diff --git a/pypy/doc/discussion/distribution.txt b/pypy/doc/discussion/distribution.txt
deleted file mode 100644
--- a/pypy/doc/discussion/distribution.txt
+++ /dev/null
@@ -1,34 +0,0 @@
-===================================================
-(Semi)-transparent distribution of RPython programs
-===================================================
-
-Some (rough) ideas how I see distribution
------------------------------------------
-
-The main point is to behave very much like the JIT: not
-to perform distribution at the Python source code level, but instead
-to perform distribution of the RPython source, and eventually perform
-distribution of the interpreter at the end.
-
-This attempt gives the same advantages as an off-line JIT (any RPython
-based interpreter, etc.) and gives a nice field to play with different
-distribution heuristics. This also eventually opens the nice possibility
-of integrating the JIT with distribution, thus allowing distribution
-heuristics to have more information than they might otherwise have, as
-well as specializing different nodes to perform different tasks.
-
-Flow graph level
-----------------
-
-Probably the best place to attempt distribution is to insert special
-graph-distributing operations into low-level graphs (either lltype or
-ootype based). This will let the distribution heuristic decide, at the entry
-point to a block/graph/some other structure(?), which variables/functions are
-accessed inside a given part and whether it's worth transferring it over the wire.
-
-Backend level
--------------
-
-Backends will need explicit support for distribution of any kind. Basically
-it should be possible for a backend to remotely call a block/graph/structure
-in any manner (this will strongly depend on the backend's possibilities).

diff --git a/pypy/doc/config/objspace.usemodules.binascii.txt b/pypy/doc/config/objspace.usemodules.binascii.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.binascii.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Use the RPython 'binascii' module.

diff --git a/pypy/doc/config/translation.type_system.txt b/pypy/doc/config/translation.type_system.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.type_system.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Which type system to use when rtyping_. This option should not be set
-explicitly.
-
-.. _rtyping: ../rtyper.html

diff --git a/pypy/doc/discussion/distribution-newattempt.txt b/pypy/doc/discussion/distribution-newattempt.txt
deleted file mode 100644
--- a/pypy/doc/discussion/distribution-newattempt.txt
+++ /dev/null
@@ -1,65 +0,0 @@
-Distribution:
-=============
-
-This is the outcome of Armin's and Samuele's ideas and our discussion,
-put together by fijal.
-
-The communication layer:
-========================
-
-The communication layer is the layer which takes care of explicit
-communication. Suppose we have two (or more) interpreters running
-on different machines or in different processes. Let's call them the *local
-side* (the one on which we're operating) and the *remote side*.
-
-What we want to achieve is a sufficiently transparent layer on the local
-side, one which does not let the user tell local and remote objects apart
-(except via __pypy__.internal_repr, which I would consider cheating).
-
-Because in pypy we have the possibility of having different implementations
-for types (even builtin ones), we can use that mechanism to implement
-our simple RMI.
-
-The idea is to provide a thin layer for accessing a remote object, which
-appears as a different implementation for any possible object. So if you
-perform an operation locally on an object which is really a remote object,
-method lookup and the call proceed as usual. Then the proxy object
-redirects the call to app-level code (socket, execnet, whatever) which
-calls the remote interpreter with the given parameters. It's important that
-we can always perform such a call, even if the types are not marshallable,
-because in that case we can give the remote side remote proxies of local objects.
-
-XXX: Need to explain in a bit more informative way.
-
-Example:
---------
-
-Suppose we have ``class A`` and an instance ``a = A()`` on the remote side
-and we want to access it from the local side. We make an object of type
-``object`` and copy the
-``__dict__`` keys and values, which correspond to objects on the remote
-side (they have the same type, as far as the user can tell) but have a
-different implementation (i.e. method calls will work quite differently).
-
-Even cooler example:
---------------------
-
-Recalling hpk's example of a 5-line remote file server, with this we could write::
-
-  f = remote_side.import(open)
-  f("file_name").read()
-
-Implementation plans:
----------------------
-
-We need:
-
-* app-level primitives for having 'remote proxy' accessible
-
-* some "serialiser" which does not truly serialise objects, but makes
-  sure communication can happen.
-
-* an interp-level proxy object which emulates every possible object and
-  delegates operations to the app-level primitive proxy.
-
-* to make it work....
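The "thin layer" described above can be sketched as an app-level proxy that forwards every method call through a transport callable. This is a toy sketch with invented names (and a fake in-process transport standing in for socket/execnet); the real design would live at interp-level:

```python
class RemoteProxy:
    """Forwards every method call through a transport callable."""
    def __init__(self, send, obj_id):
        self._send = send      # (obj_id, method_name, args) -> result
        self._obj_id = obj_id

    def __getattr__(self, name):
        # Looking up any attribute yields a callable that goes over the wire.
        def method(*args):
            return self._send(self._obj_id, name, args)
        return method

# Fake transport: a local object table instead of a socket.
_objects = {1: "remote file contents"}

def send(obj_id, method_name, args):
    return getattr(_objects[obj_id], method_name)(*args)

f = RemoteProxy(send, 1)
assert f.upper() == "REMOTE FILE CONTENTS"
```

The user-visible object behaves like a string, but every operation is actually executed on the remote side.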

diff --git a/pypy/doc/config/objspace.geninterp.txt b/pypy/doc/config/objspace.geninterp.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.geninterp.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-This option enables `geninterp`_. This will usually make the PyPy interpreter
-significantly faster (but also a bit bigger).
-
-.. _`geninterp`: ../geninterp.html

diff --git a/pypy/doc/config/objspace.usemodules.oracle.txt b/pypy/doc/config/objspace.usemodules.oracle.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.oracle.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'oracle' module.
-This module is off by default; it requires the Oracle client to be installed.

diff --git a/pypy/doc/discussion/distribution-implementation.txt b/pypy/doc/discussion/distribution-implementation.txt
deleted file mode 100644
--- a/pypy/doc/discussion/distribution-implementation.txt
+++ /dev/null
@@ -1,91 +0,0 @@
-=====================================================
-Random implementation details of distribution attempt
-=====================================================
-
-.. contents::
-.. sectnum::
-
-This document attempts to broaden this `dist thoughts`_.
-
-.. _`dist thoughts`: distribution-newattempt.html
-
-Basic implementation:
----------------------
-
-First we split objects into value-only primitives (like int) and the rest.
-Basically, immutable builtin types which cannot contain user-level objects
-(int, float, long, str, None, etc.) will always be transferred as value-only
-objects (having no state, etc.). Every other object (user-created classes,
-instances, modules, lists, tuples, etc.) is always handled by reference.
-(Of course, if somebody wants to e.g. copy an instance, they can marshal/pickle
-it to a string and send that, but this is outside the scope of this attempt.)
-A special case might be an immutable data structure (tuple, frozenset)
-containing only simple types (this becomes a simple type itself).
-
-XXX: What to do with code types? Marshalling and sending them seems to make
-no sense. Remote execution? Local execution with remote f_locals and f_globals?
-
-Every remote object has a special class W_RemoteXXX, where XXX is the
-interp-level class implementing this object. W_RemoteXXX implements all the
-operations by using special app-level code that sends the method name and
-arguments over the wire (arguments might be either simple objects, which are
-simply sent by the app-level code, or references to local objects).
-
-So the basic scheme would look like::
-
-    remote_ref = remote("Object reference")
-    remote_ref.any_method()
-
-``remote_ref`` in the above example looks like a normal python object to the
-user, but is implemented differently (W_RemoteXXX), and uses an app-level
-proxy to forward each interp-level method call.
-
-Abstraction layers:
--------------------
-
-In this section we define the remote side as the side on which calls are
-executed and the local side as the one from which calls are initiated.
-
-* Looking from the local side, the first thing that we see is an object
-  which looks like a normal object (it has the same interp-level typedef)
-  but has a different implementation. Basically this is a shallow copy
-  of the remote object (however you define "shallow" is up to the code which
-  makes the copy; basically a copy which can be marshalled, sent over
-  the wire, or saved for future use). This is W_RemoteXXX, where XXX is the
-  real object name. Some operations on that object require accessing the
-  remote side of the object, some might not (for example, a remote int
-  is exactly the same int as a local one; it need not even be implemented
-  differently).
-
-* For every interp-level operation which accesses internals that are not
-  accessible at the local side (basically all attribute accesses which
-  reach things that are subclasses of W_Object), we provide a special
-  W_Remote version, which downloads the necessary object when needed
-  (i.e. when accessed). This is the same as a normal W_RemoteXXX (we know
-  the type!), just not downloaded yet.
-
-* From the remote point of view, every exported object which needs it
-  has an appropriate local storage W_LocalXXX, where XXX is the type
-  by which it can be accessed over the wire.
-
-The real pain:
---------------
-
-For every attribute access, when we get a W_RemoteXXX, we need to check
-the download flag - which sucks a bit. (And we have to support it somehow
-in the annotator, which sucks a lot.) One idea is to wrap all the methods
-with additional checks, but that's both unclear and probably not necessary.
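-To make the problem concrete, here is a toy sketch (hypothetical names,
-nothing like the real implementation) of wrapping every method with a
-download-flag check:

```python
def needs_download(meth):
    """Decorator: make sure remote state is fetched before the method runs."""
    def wrapper(self, *args):
        if not self._downloaded:
            self._value = self._fetch()   # assumed remote fetch
            self._downloaded = True
        return meth(self, *args)
    return wrapper


class W_RemoteList(object):
    def __init__(self, fetch):
        self._fetch = fetch
        self._downloaded = False
        self._value = None

    @needs_download
    def length(self):
        return len(self._value)

    @needs_download
    def getitem(self, i):
        return self._value[i]
```

-Every exposed method pays the flag check, which is exactly the overhead
-(and the annotator headache) described above.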
-
-XXX If we can easily change the underlying implementation of an object,
-then this might become way easier. Right now I'll try to have it working
-and think about RPython later.
-
-App-level remote tool:
-----------------------
-
-For the purpose of an app-level tool which can transfer the data (well, a
-socket might be enough, but suppose I want to be more flexible), I would
-use `py.execnet`_, probably with some of Armin's hacks to rewrite it using
-greenlets instead of threads.
-
-.. _`py.execnet`: http://codespeak.net/py/current/doc/execnet.html

diff --git a/pypy/doc/config/objspace.std.withtypeversion.txt b/pypy/doc/config/objspace.std.withtypeversion.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withtypeversion.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-This (mostly internal) option enables "type versions": Every type object gets an
-(only internally visible) version that is updated when the type's dict is
-changed. This is e.g. used for invalidating caches. It does not make sense to
-enable this option alone.
-
-.. internal

diff --git a/pypy/doc/config/translation.cli.trace_calls.txt b/pypy/doc/config/translation.cli.trace_calls.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.cli.trace_calls.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Internal. Debugging aid for the CLI backend.
-
-.. internal

diff --git a/pypy/doc/config/objspace.usemodules.struct.txt b/pypy/doc/config/objspace.usemodules.struct.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.struct.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Use the built-in 'struct' module.
-This module is expected to be working and is included by default.
-There is also a pure Python version in lib_pypy which is used
-if the built-in is disabled, but it is several orders of magnitude
-slower.

diff --git a/pypy/doc/architecture.txt b/pypy/doc/architecture.txt
deleted file mode 100644
--- a/pypy/doc/architecture.txt
+++ /dev/null
@@ -1,264 +0,0 @@
-==================================================
-PyPy - Goals and Architecture Overview 
-==================================================
-
-.. contents::
-.. sectnum::
-
-This document gives an overview of the goals and architecture of PyPy.
-See `getting started`_ for a practical introduction and starting points. 
-
-Mission statement 
-====================
-
-We aim to provide:
-
- * a common translation and support framework for producing
-   implementations of dynamic languages, emphasizing a clean
-   separation between language specification and implementation
-   aspects.
-
- * a compliant, flexible and fast implementation of the Python_ Language 
-   using the above framework to enable new advanced features without having
-   to encode low level details into it.
-
-By separating concerns in this way, we intend for our implementation
-of Python - and other dynamic languages - to become robust against almost 
-all implementation decisions, including target platform, memory and 
-threading models, optimizations applied, up to the point of being able to
-automatically *generate* Just-in-Time compilers for dynamic languages.
-
-Conversely, our implementation techniques, including the JIT compiler 
-generator, should become robust against changes in the languages 
-implemented. 
-
-
-High Level Goals
-=============================
-
-PyPy - the Translation Framework 
------------------------------------------------
-
-Traditionally, language interpreters are written in a target platform language
-like C/Posix, Java or C#.  Each such implementation fundamentally provides 
-a mapping from application source code to the target environment.  One of 
-the goals of the "all-encompassing" environments, like the .NET framework
-and to some extent the Java virtual machine, is to provide standardized
-and higher-level functionalities in order to support language implementers
-in writing language implementations.
-
-PyPy is experimenting with a more ambitious approach.  We are using a
-subset of the high-level language Python, called RPython_, in which we
-write languages as simple interpreters with few references to and
-dependencies on lower level details.  Our translation framework then
-produces a concrete virtual machine for the platform of our choice by
-inserting appropriate lower level aspects.  The result can be customized
-by selecting other feature and platform configurations.
-
-Our goal is to provide a possible solution to the problem of language
-implementers: having to write ``l * o * p`` interpreters for ``l``
-dynamic languages and ``p`` platforms with ``o`` crucial design
-decisions.  PyPy aims at having any one of these parameters changeable
-independently from each other:
-
-* ``l``: the language that we analyze can be evolved or entirely replaced;
-
-* ``o``: we can tweak and optimize the translation process to produce 
-  platform specific code based on different models and trade-offs;
-
-* ``p``: we can write new translator back-ends to target different
-  physical and virtual platforms.
-
-By contrast, a standardized target environment - say .NET -
-enforces ``p=1`` as far as it's concerned.  This helps make ``o`` a
-bit smaller by providing a higher-level base to build upon.  Still,
-we believe that enforcing the use of one common environment 
-is not necessary.  PyPy's goal is to give weight to this claim - at least 
-as far as language implementation is concerned - showing an approach
-to the ``l * o * p`` problem that does not rely on standardization.
-
-The most ambitious part of this goal is to `generate Just-In-Time
-Compilers`_ in a language-independent way, instead of only translating
-the source interpreter into an interpreter for the target platform.
-This is an area of language implementation that is commonly considered
-very challenging because of the involved complexity.
-
-
-PyPy - the Python Interpreter 
---------------------------------------------
-
-Our main motivation for developing the translation framework is to
-provide a full featured, customizable, fast_ and `very compliant`_ Python
-implementation, working on and interacting with a large variety of
-platforms and allowing the quick introduction of new advanced language
-features.
-
-This Python implementation is written in RPython as a relatively simple
-interpreter, in some respects easier to understand than CPython, the C
-reference implementation of Python.  We are using its high level and
-flexibility to quickly experiment with features or implementation
-techniques in ways that would, in a traditional approach, require
-pervasive changes to the source code.  For example, PyPy's Python
-interpreter can optionally provide lazily computed objects - a small
-extension that would require global changes in CPython.  Another example
-is the garbage collection technique: changing CPython to use a garbage
-collector not based on reference counting would be a major undertaking,
-whereas in PyPy it is an issue localized in the translation framework,
-and fully orthogonal to the interpreter source code.
-
-
-PyPy Architecture 
-===========================
-
-As you would expect from a project implemented using ideas from the world
-of `Extreme Programming`_, the architecture of PyPy has evolved over time
-and continues to evolve.  Nevertheless, the high level architecture is 
-stable. As described above, there are two rather independent basic
-subsystems: the `Python Interpreter`_ and the `Translation Framework`_.
-
-.. _`translation framework`:
-
-The Translation Framework
--------------------------
-
-The job of the translation tool chain is to translate RPython_ programs
-into an efficient version of that program for one of various target
-platforms, generally one that is considerably lower-level than Python.
-
-The approach we have taken is to reduce the level of abstraction of the
-source RPython program in several steps, from the high level down to the
-level of the target platform, whatever that may be.  Currently we
-support two broad flavours of target platforms: the ones that assume a
-C-like memory model with structures and pointers, and the ones that
-assume an object-oriented model with classes, instances and methods (as,
-for example, the Java and .NET virtual machines do).
-
-The translation tool chain never sees the RPython source code or syntax
-trees, but rather starts with the *code objects* that define the
-behaviour of the function objects one gives it as input.  It can be
-considered as "freezing" a pre-imported RPython program into an
-executable form suitable for the target platform.
-
-The steps of the translation process can be summarized as follows:
-
-* The code object of each source function is converted to a `control
-  flow graph`_ by the `Flow Object Space`_.
-
-* The control flow graphs are processed by the Annotator_, which
-  performs whole-program type inference to annotate each variable of
-  the control flow graph with the types it may take at run-time.
-
-* The information provided by the annotator is used by the RTyper_ to
-  convert the high level operations of the control flow graphs into
-  operations closer to the abstraction level of the target platform.
-
-* Optionally, `various transformations`_ can then be applied which, for
-  example, perform optimizations such as inlining, add capabilities
-  such as stackless_-style concurrency, or insert code for the
-  `garbage collector`_.
-
-* Then, the graphs are converted to source code for the target platform
-  and compiled into an executable.
-
-This process is described in much more detail in the `document about
-the translation process`_ and in the paper `Compiling dynamic language
-implementations`_.
-
-.. _`control flow graph`: translation.html#the-flow-model
-.. _`Flow Object Space`: objspace.html#the-flow-object-space
-.. _Annotator: translation.html#the-annotation-pass
-.. _RTyper: rtyper.html#overview
-.. _`various transformations`: translation.html#the-optional-transformations
-.. _`document about the translation process`: translation.html
-.. _`garbage collector`: garbage_collection.html
-
-
-.. _`standard interpreter`: 
-.. _`python interpreter`: 
-
-The Python Interpreter
--------------------------------------
-
-PyPy's *Python Interpreter* is written in RPython and implements the
-full Python language.  This interpreter very closely emulates the
-behavior of CPython.  It contains the following key components:
-
-- a bytecode compiler responsible for producing Python code objects 
-  from the source code of a user application;
-
-- a `bytecode evaluator`_ responsible for interpreting 
-  Python code objects;
-
-- a `standard object space`_, responsible for creating and manipulating
-  the Python objects seen by the application.
-
-The *bytecode compiler* is the preprocessing phase that produces a
-compact bytecode format via a chain of flexible passes (tokenizer,
-lexer, parser, abstract syntax tree builder, bytecode generator).  The
-*bytecode evaluator* interprets this bytecode.  It does most of its work
-by delegating all actual manipulations of user objects to the *object
-space*.  The latter can be thought of as the library of built-in types.
-It defines the implementation of the user objects, like integers and
-lists, as well as the operations between them, like addition or
-truth-value-testing.
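-A minimal sketch of this division, with hypothetical classes that only hint
-at the real interfaces, could look like:

```python
class TrivialSpace(object):
    """Stand-in object space: defines what the objects are and can do."""
    def wrap(self, value):
        return ("W", value)           # a "wrapped" black-box object
    def add(self, w_a, w_b):
        return self.wrap(w_a[1] + w_b[1])
    def is_true(self, w_obj):
        return bool(w_obj[1])


def run(bytecode, space):
    """Stand-in evaluator: only moves opaque objects on the value stack."""
    stack = []
    for op, arg in bytecode:
        if op == "LOAD_CONST":
            stack.append(space.wrap(arg))
        elif op == "BINARY_ADD":
            w_b, w_a = stack.pop(), stack.pop()
            stack.append(space.add(w_a, w_b))   # delegate to the space
    return stack[-1]
```

-Plugging in a different space changes what addition *means* without
-touching the evaluator at all.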
-
-This division between bytecode evaluator and object space is very
-important, as it gives a lot of flexibility.  One can plug in 
-different `object spaces`_ to get different or enriched behaviours 
-of the Python objects.  Additionally, a special more abstract object
-space, the `flow object space`_, allows us to reuse the bytecode
-evaluator for our translation framework.
-
-.. _`bytecode evaluator`: interpreter.html
-.. _`standard object space`: objspace.html#the-standard-object-space
-.. _`object spaces`: objspace.html
-.. _`flow object space`: objspace.html#the-flow-object-space
-
-.. _`the translation framework`:
-
-
-Further reading
-===============
-
-All of PyPy's documentation can be reached from the `documentation
-index`_.  Of particular interest after reading this document might be:
-
- * `getting-started`_: a hands-on guide to getting involved with the
-   PyPy source code.
-
- * `PyPy's approach to virtual machine construction`_: a paper
-   presented to the Dynamic Languages Symposium attached to OOPSLA
-   2006.
-
- * `The translation document`_: a detailed description of our
-   translation process.
-
- * All our `Technical reports`_, including `Compiling dynamic language
-   implementations`_.
-
- * `JIT Generation in PyPy`_, describing how we produce a Just-in-time
-   Compiler from an interpreter.
-
-.. _`documentation index`: docindex.html
-.. _`getting-started`: getting-started.html
-.. _`PyPy's approach to virtual machine construction`: http://codespeak.net/svn/pypy/extradoc/talk/dls2006/pypy-vm-construction.pdf
-.. _`the translation document`: translation.html
-.. _`Compiling dynamic language implementations`: http://codespeak.net/svn/pypy/extradoc/eu-report/D05.1_Publish_on_translating_a_very-high-level_description.pdf
-.. _`Technical reports`: index-report.html
-
-.. _`getting started`: getting-started.html
-.. _`Extreme Programming`: http://www.extremeprogramming.org/
-
-.. _fast: faq.html#how-fast-is-pypy
-.. _`very compliant`: cpython_differences.html
-
-.. _`RPython`: coding-guide.html#rpython
-
-.. _Python: http://docs.python.org/ref
-.. _Psyco: http://psyco.sourceforge.net
-.. _stackless: stackless.html
-.. _`generate Just-In-Time Compilers`: jit/index.html
-.. _`JIT Generation in PyPy`: jit/index.html
-
-.. include:: _ref.txt
-

diff --git a/pypy/doc/discussion/somepbc-refactoring-plan.txt b/pypy/doc/discussion/somepbc-refactoring-plan.txt
deleted file mode 100644
--- a/pypy/doc/discussion/somepbc-refactoring-plan.txt
+++ /dev/null
@@ -1,161 +0,0 @@
-==========================
-   Refactoring SomePBCs
-==========================
-
-Motivation
-==========
-
-Some parts of the annotator, and especially specialization, are quite obscure
-and hackish.  One cause for this is the need to manipulate Python objects like
-functions directly.  This makes it hard to attach additional information directly
-to the objects.  It makes specialization messy because it has to create new dummy
-function objects just to represent the various specialized versions of the function.
-
-
-Plan
-====
-
-Let's introduce nice wrapper objects.  This refactoring is oriented towards
-the following goal: replacing the content of SomePBC() with a plain set of
-"description" wrapper objects.  We shall probably also remove the possibility
-for None to explicitly be in the set and add a can_be_None flag (this is
-closer to what the other SomeXxx classes do).
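-As a hedged sketch of the proposed shape (illustrative only, not the actual
-class):

```python
class SomePBC(object):
    """A plain set of description wrappers plus an explicit None flag."""
    def __init__(self, descriptions, can_be_None=False):
        self.descriptions = frozenset(descriptions)
        self.can_be_None = can_be_None

    def union(self, other):
        # joining two annotations unions the descs and or-s the flag,
        # mirroring how other SomeXxx classes track None-ness
        return SomePBC(self.descriptions | other.descriptions,
                       self.can_be_None or other.can_be_None)
```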
-
-
-XxxDesc classes
-===============
-
-To be declared in module pypy.annotator.desc, with a mapping
-annotator.bookkeeper.descs = {<python object>: <XxxDesc instance>}
-accessed with bookkeeper.getdesc(<python object>).
-
-Maybe later the module should be moved out of pypy.annotation but for now I
-suppose that it's the best place.
-
-The goal is to have a single Desc wrapper even for functions and classes that
-are specialized.
-
-FunctionDesc
-
-    Describes (usually) a Python function object.  Contains flow graphs: one
-    in the common case, zero for external functions, more than one if there
-    are several specialized versions.  Also describes the signature of the
-    function in a nice format (i.e. not by relying on func_code inspection).
-
-ClassDesc
-
-    Describes a Python class object.  Generally just maps to a ClassDef, but
-    could map to more than one in the presence of specialization.  So we get
-    SomePBC({<ClassDesc>}) annotations for the class, and when it's
-    instantiated it becomes SomeInstance(classdef=...) for the particular
-    selected classdef.
-
-MethodDesc
-
-    Describes a bound method.  Just references a FunctionDesc and a ClassDef
-    (not a ClassDesc, because it's read out of a SomeInstance).
-
-FrozenDesc
-
-    Describes a frozen pre-built instance.  That's also a good place to store
-    some information currently in dictionaries of the bookkeeper.
-
-MethodOfFrozenDesc
-
-    Describes a method of a FrozenDesc.  Just references a FunctionDesc and a
-    FrozenDesc.
-
-NB: unbound method objects are the same as functions for our purposes, so
-they become the same FunctionDesc as their im_func.
-
-These XxxDesc classes should share some common interface, as we'll see during
-the refactoring.  A common base class might be a good idea (at least I don't
-see why it would be a bad idea :-)
-
-
-Implementation plan
-===================
-
-* make a branch (/branch/somepbc-refactoring/)
-
-* change the definition of SomePBC, start pypy.annotation.desc
-
-* fix all places that use SomePBC :-)
-
-* turn Translator.flowgraphs into a plain list of flow graphs,
-  and make the FunctionDescs responsible for computing their own flow graphs
-
-* move external function functionality into the FunctionDescs too
-
-
-Status
-======
-
-Done, branch merged.
-
-
-RTyping PBCs of functions
-=========================
-
-The FuncDesc.specialize() method takes an args_s and returns a
-corresponding graph.  The caller of specialize() parses the actual
-arguments provided by the simple_call or call_args operation, so that
-args_s is a flat parsed list.  The returned graph must have the same
-number and order of input variables.
-
-For each call family, we compute a table like this (after annotation
-has finished)::
-
-          call_shape   FuncDesc1   FuncDesc2   FuncDesc3   ...
-  ----------------------------------------------------------
-   call0    shape1       graph1
-   call1    shape1       graph1      graph2
-   call2    shape1                   graph3     graph4            
-   call3    shape2                   graph5     graph6
-
-
-We then need to merge some of the lines if they look similar enough,
-e.g. call0 and call1.  Precisely, we can merge two lines if they only
-differ in having more or less holes.  In theory, the same graph could
-appear in two lines that are still not mergeable because of other
-graphs.  For sanity of implementation, we should check that at the end
-each graph only appears once in the table (unless there is only one
-*column*, in which case all problems can be dealt with at call sites).
-
-(Note that before this refactoring, the code was essentially requiring
-that the table ended up with either one single row or one single
-column.)
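-The merging rule can be sketched as follows (a toy version, with None
-standing for a hole in a row):

```python
def try_merge(row1, row2):
    """Merge two call-table rows, or return None if they conflict.

    Two rows are mergeable when every column on which both are filled
    in agrees; the merge just fills holes from either side.
    """
    merged = []
    for g1, g2 in zip(row1, row2):
        if g1 is not None and g2 is not None and g1 != g2:
            return None              # conflicting graphs: not mergeable
        merged.append(g1 if g1 is not None else g2)
    return merged
```

-Applied to the table above, the call0 and call1 rows merge, while two rows
-with different graphs in the same column do not.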
-
-The table is computed when the annotation is complete, in
-compute_at_fixpoint(), which calls the FuncDesc's consider_call_site()
-for each call site.  The latter merges lines as soon as possible.  The
-table is attached to the call family, grouped by call shape.
-
-During RTyping, compute_at_fixpoint() is called after each new ll
-helper is annotated.  Normally, this should not modify existing tables
-too much, but in some situations it will.  So the rule is that
-consider_call_site() should not add new (unmerged) rows to the table
-after the table is considered "finished" (again, unless there is only
-one column, in which case we should not discover new columns).
-
-XXX this is now out of date, in the details at least.
-
-RTyping other callable PBCs
-===========================
-
-The above picture attaches "calltable" information to the call
-families containing the function.  When it comes to rtyping a call of
-another kind of pbc (class, instance-method, frozenpbc-method) we have
-two basic choices:
-
- - associate the calltable information with the funcdesc that
-   ultimately ends up getting called, or
-
- - attach the calltable to the callfamily that contains the desc
-   that's actually being called.
-
-Neither is totally straightforward: the former is closer to what
-happens on the trunk but new families of funcdescs need to be created
-at the end of annotation or by normalisation.  The latter is more of a
-change.  The former is also perhaps a bit unnatural for ootyped
-backends.

diff --git a/pypy/doc/config/objspace.usemodules.__builtin__.txt b/pypy/doc/config/objspace.usemodules.__builtin__.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.__builtin__.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the '__builtin__' module. 
-This module is essential, included by default and should not be removed.

diff --git a/pypy/doc/config/objspace.usemodules._lsprof.txt b/pypy/doc/config/objspace.usemodules._lsprof.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._lsprof.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Use the '_lsprof' module. 

diff --git a/pypy/doc/config/translation.compilerflags.txt b/pypy/doc/config/translation.compilerflags.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.compilerflags.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Experimental. Specify extra flags to pass to the C compiler.

diff --git a/pypy/doc/interpreter.txt b/pypy/doc/interpreter.txt
deleted file mode 100644
--- a/pypy/doc/interpreter.txt
+++ /dev/null
@@ -1,410 +0,0 @@
-===================================
-PyPy - Bytecode Interpreter 
-===================================
-
-.. contents::
-.. sectnum::
-
-
-Introduction and Overview
-===============================
-
-This document describes the implementation of PyPy's 
-Bytecode Interpreter and related Virtual Machine functionalities. 
-
-PyPy's bytecode interpreter has a structure reminiscent of CPython's
-Virtual Machine: It processes code objects parsed and compiled from
-Python source code.  It is implemented in the `interpreter/`_ directory.
-People familiar with the CPython implementation will easily recognize
-similar concepts there.  The major differences are the overall usage of
-the `object space`_ indirection to perform operations on objects, and
-the organization of the built-in modules (described `here`_).
-
-Code objects are a nicely preprocessed, structured representation of
-source code, and their main content is *bytecode*.  We use the same
-compact bytecode format as CPython 2.4.  Our bytecode compiler is
-implemented as a chain of flexible passes (tokenizer, lexer, parser,
-abstract syntax tree builder, bytecode generator).  The latter passes
-are based on the ``compiler`` package from the standard library of
-CPython, with various improvements and bug fixes. The bytecode compiler
-(living under `interpreter/astcompiler/`_) is now integrated and is
-translated with the rest of PyPy.
-
-Code objects contain
-condensed information about their respective function, class and
-module body source code.  Interpreting such code objects means
-instantiating and initializing a `Frame class`_ and then
-calling its ``frame.eval()`` method.  This main entry point
-initializes appropriate namespaces and then interprets each
-bytecode instruction.  Python's standard library contains
-the `lib-python/2.5.2/dis.py`_ module, which allows viewing
-the virtual machine's bytecode instructions::
-
-    >>> import dis
-    >>> def f(x):
-    ...     return x + 1
-    >>> dis.dis(f)
-    2         0 LOAD_FAST                0 (x)
-              3 LOAD_CONST               1 (1)
-              6 BINARY_ADD          
-              7 RETURN_VALUE        
-
-CPython as well as PyPy are stack-based virtual machines, i.e.
-they don't have registers but put objects onto and pull objects
-from a stack.  The bytecode interpreter is only responsible
-for implementing control flow and putting and pulling black
-box objects to and from this value stack.  The bytecode interpreter
-does not know how to perform operations on those black box
-(`wrapped`_) objects; for that it delegates to the `object
-space`_.  In order to implement a conditional branch in a program's
-execution, however, it needs to gain minimal knowledge about a
-wrapped object.  Thus, each object space has to offer an
-``is_true(w_obj)`` operation which returns an
-interpreter-level boolean value.
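-For example, a conditional jump only needs ``is_true`` (a simplified,
-hypothetical sketch):

```python
class MiniSpace(object):
    def is_true(self, w_obj):
        # the only peek inside a wrapped object the evaluator ever needs
        return bool(w_obj)


def jump_if_false(space, w_cond, target, next_instr):
    """Pick the next bytecode position; the condition stays a black box."""
    return next_instr if space.is_true(w_cond) else target
```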
-
-For the understanding of the interpreter's inner workings it
-is crucial to recognize the concepts of `interpreter-level and
-application-level`_ code.  In short, interpreter-level code is executed
-directly on the machine, while invoking application-level functions
-leads to a bytecode-interpretation indirection. However,
-special care must be taken regarding exceptions, because
-application-level exceptions are wrapped into ``OperationErrors``,
-which are thus distinguished from plain interpreter-level exceptions.
-See `application level exceptions`_ for some more information
-on ``OperationErrors``. 
-
-The interpreter implementation offers mechanisms to allow a
-caller to be unaware if a particular function invocation leads
-to bytecode interpretation or is executed directly at
-interpreter-level.  The two basic kinds of `Gateway classes`_
-expose either an interpreter-level function to
-application-level execution (``interp2app``) or allow
-transparent invocation of application-level helpers
-(``app2interp``) at interpreter-level. 
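-A very rough sketch of the ``interp2app`` idea (hypothetical helper, much
-simpler than the real pypy.interpreter.gateway machinery):

```python
class Space(object):
    """Toy space: wraps/unwraps values crossing the app/interp barrier."""
    def wrap(self, x):
        return ("W", x)
    def unwrap(self, w):
        return w[1]


def interp2app(space, interp_func):
    """Expose an interp-level function as an app-level callable."""
    def app_callable(*args_w):
        # unwrap incoming app-level objects, call, wrap the result
        args = [space.unwrap(w) for w in args_w]
        return space.wrap(interp_func(*args))
    return app_callable
```

-An ``app2interp`` helper would do the mirror image: let interp-level code
-call an app-level function transparently.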
-
-Another task of the bytecode interpreter is to care for exposing its 
-basic code, frame, module and function objects to application-level 
-code.  Such runtime introspection and modification abilities are 
-implemented via `interpreter descriptors`_ (also see Raymond Hettingers 
-`how-to guide for descriptors`_ in Python, PyPy uses this model extensively). 
-
-A significant complexity lies in `function argument parsing`_.  Python as a 
-language offers flexible ways of providing and receiving arguments 
-for a particular function invocation.  Not only does getting this right
-take special care, it also presents difficulties for the `annotation
-pass`_ which performs a whole-program analysis on the
-bytecode interpreter, argument parsing and gatewaying code
-in order to infer the types of all values flowing across function
-calls. 
-
-It is for this reason that PyPy resorts to generating
-specialized frame classes and functions at `initialization
-time`_ in order to let the annotator only see rather static 
-program flows with homogeneous name-value assignments on 
-function invocations. 
-
-.. _`how-to guide for descriptors`: http://users.rcn.com/python/download/Descriptor.htm
-.. _`annotation pass`: translation.html#the-annotation-pass
-.. _`initialization time`: translation.html#initialization-time
-.. _`interpreter-level and application-level`: coding-guide.html#interpreter-level 
-.. _`wrapped`: coding-guide.html#wrapping-rules
-.. _`object space`: objspace.html
-.. _`application level exceptions`: coding-guide.html#applevel-exceptions
-.. _`here`: coding-guide.html#modules
-
-
-Bytecode Interpreter Implementation Classes  
-================================================
-
-.. _`Frame class`: 
-.. _`Frame`: 
-
-Frame classes
------------------
-
-The concept of Frames is pervasive in executing programs and
-on virtual machines in particular. They are sometimes called
-*execution frames* because they hold crucial information
-regarding the execution of a Code_ object, which in turn is
-often directly related to a Python `Function`_.  Frame
-instances hold the following state: 
-
-- the local scope holding name-value bindings, usually implemented 
-  via a "fast scope" which is an array of wrapped objects
-
-- a blockstack containing (nested) information regarding the
-  control flow of a function (such as ``while`` and ``try`` constructs) 
-
-- a value stack where bytecode interpretation pulls objects
-  from and puts results on.
-
-- a reference to the *globals* dictionary, containing
-  module-level name-value bindings 
-
-- debugging information from which a current line-number and 
-  file location can be constructed for tracebacks 
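-The state listed above can be summarized in a minimal sketch (illustrative
-only, not the real Frame class):

```python
class MiniFrame(object):
    """Holds the per-invocation state enumerated above."""
    def __init__(self, nlocals, w_globals):
        self.fastlocals_w = [None] * nlocals  # "fast scope" array of wrapped objects
        self.blockstack = []                  # while/try control-flow blocks
        self.valuestack_w = []                # operand stack for bytecodes
        self.w_globals = w_globals            # module-level name-value bindings
        self.last_instr = -1                  # debug info for tracebacks
```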
-
-Moreover the Frame class itself has a number of methods which implement
-the actual bytecodes found in a code object.  In fact, PyPy already constructs 
-four specialized Frame class variants depending on the code object: 
-
-- PyInterpFrame (in `pypy/interpreter/pyopcode.py`_)  for
-  basic simple code objects (not involving generators or nested scopes) 
-
-- PyNestedScopeFrame (in `pypy/interpreter/nestedscope.py`_) 
-  for code objects that reference nested scopes, inherits from PyInterpFrame
-
-- PyGeneratorFrame (in `pypy/interpreter/generator.py`_) 
-  for code objects that yield values to the caller, inherits from PyInterpFrame
-
-- PyNestedScopeGeneratorFrame for code objects that reference
-  nested scopes and yield values to the caller, inherits from both PyNestedScopeFrame
-  and PyGeneratorFrame 
-
-.. _Code: 
-
-Code Class 
------------- 
-
-PyPy's code objects contain the same information found in CPython's code objects. 
-They differ from Function_ objects in that they are only immutable representations
-of source code and don't contain execution state or references to the execution
-environment found in `Frame`_ objects.  Frames and Functions have references
-to a code object. Here is a list of Code attributes:
-
-* ``co_flags`` flags if this code object has nested scopes/generators 
-* ``co_stacksize`` the maximum depth the stack can reach while executing the code
-* ``co_code`` the actual bytecode string 
- 
-* ``co_argcount`` number of arguments this code object expects 
-* ``co_varnames`` a tuple of all argument names passed to this code object
-* ``co_nlocals`` number of local variables 
-* ``co_names`` a tuple of all names used in the code object
-* ``co_consts`` a tuple of prebuilt constant objects ("literals") used in the code object 
-* ``co_cellvars`` a tuple of Cells containing values for access from nested scopes 
-* ``co_freevars`` a tuple of Cell names from "above" scopes 
- 
-* ``co_filename`` source file this code object was compiled from 
-* ``co_firstlineno`` the first linenumber of the code object in its source file 
-* ``co_name`` name of the code object (often the function name) 
-* ``co_lnotab`` a helper table to compute the line-numbers corresponding to bytecodes 
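-Most of these attributes can be inspected on any CPython code object as
-well, via ``func.func_code`` on the Python 2 line this document describes
-(``func.__code__`` in later versions):

```python
def f(x, y=1):
    z = x + y
    return z

# the compiled code object carries the attributes listed above
code = f.__code__
print(code.co_name)        # -> 'f'
print(code.co_argcount)    # -> 2
print(code.co_varnames)    # -> ('x', 'y', 'z'): arguments first, then locals
print(code.co_nlocals)     # -> 3
```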
-
-In PyPy, code objects also have the responsibility of creating their Frame_
-objects via the ``create_frame()`` method.  With proper parser and compiler
-support this would allow creating custom Frame objects that extend the
-execution of functions in various ways.  The several Frame_ classes already
-utilize this flexibility in order to implement Generators and Nested Scopes.
-
-.. _Function: 
-
-Function and Method classes
-----------------------------
-
-The PyPy ``Function`` class (in `pypy/interpreter/function.py`_) 
-represents a Python function.  A ``Function`` carries the following 
-main attributes: 
-
-* ``func_doc`` the docstring (or None) 
-* ``func_name`` the name of the function 
-* ``func_code`` the Code_ object representing the function source code 
-* ``func_defaults`` default values for the function (built at function definition time)
-* ``func_dict`` dictionary for additional (user-defined) function attributes 
-* ``func_globals`` reference to the globals dictionary 
-* ``func_closure`` a tuple of Cell references  
-
-The ``Function`` class also provides a ``__get__`` descriptor which creates
-a Method object holding a binding to an instance or a class.  Finally,
-``Functions`` and ``Methods`` both offer a ``call_args()`` method which
-executes the function given an `Arguments`_ class instance.
-
-.. _Arguments: 
-.. _`function argument parsing`: 
-
-Arguments Class 
--------------------- 
-
-The ``Arguments`` class (in `pypy/interpreter/argument.py`_) is
-responsible for parsing arguments passed to functions.
-Python has rather complex argument-passing concepts:
-
-- positional arguments 
-
-- keyword arguments specified by name 
-
-- default values for positional arguments, defined at function
-  definition time 
-
-- "star args" allowing a function to accept remaining
-  positional arguments 
-
-- "star keyword args" allowing a function to accept additional 
-  arbitrary name-value bindings 
-
-Moreover, a Function_ object can get bound to a class or instance, 
-in which case the first argument to the underlying function becomes
-the bound object.  The ``Arguments`` class handles all of this 
-argument parsing and also takes care of error reporting. 
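As a plain CPython illustration (not PyPy interpreter internals), all of the argument-passing forms listed above show up in an ordinary function signature:

```python
# Plain Python illustration of the argument-passing forms the
# Arguments class must reconcile; this is ordinary CPython code,
# not PyPy interpreter code.

def f(a, b=10, *args, **kwargs):
    return a, b, args, kwargs

# positional argument plus a default value
assert f(1) == (1, 10, (), {})
# keyword argument specified by name
assert f(1, b=2) == (1, 2, (), {})
# "star args" and "star keyword args"
assert f(1, 2, 3, 4, x=5) == (1, 2, (3, 4), {'x': 5})
```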
-
-
-.. _`Module`: 
-
-Module Class 
-------------------- 
-
-A ``Module`` instance represents execution state usually constructed 
-from executing the module's source file.  In addition to such a module's
-global ``__dict__`` dictionary it has the following application level 
-attributes: 
-
-* ``__doc__`` the docstring of the module
-* ``__file__`` the source filename from which this module was instantiated 
-* ``__path__`` state used for relative imports 
-
-Apart from the basic Module used for importing
-application-level files there is a more refined
-``MixedModule`` class (see `pypy/interpreter/mixedmodule.py`_)
-which allows defining name-value bindings both at application
-level and at interpreter level.  See the ``__builtin__``
-module's `pypy/module/__builtin__/__init__.py`_ file for an
-example and the higher level `chapter on Modules in the coding
-guide`_. 
-
-.. _`__builtin__ module`: http://codespeak.net/svn/pypy/trunk/pypy/module/ 
-.. _`chapter on Modules in the coding guide`: coding-guide.html#modules 
-
-.. _`Gateway classes`: 
-
-Gateway classes 
----------------------- 
-
-A unique PyPy property is the ability to easily cross the barrier
-between interpreted and machine-level code (often referred to as
-the difference between `interpreter-level and application-level`_). 
-Be aware that the corresponding code (in `pypy/interpreter/gateway.py`_) 
-for crossing the barrier in both directions is somewhat
-involved, mostly due to the fact that the type-inferring
-annotator needs to keep track of the types of objects flowing
-across those barriers. 
-
-.. _typedefs:
-
-Making interpreter-level functions available at application-level
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-
-In order to make an interpreter-level function available at 
-application level, one invokes ``pypy.interpreter.gateway.interp2app(func)``. 
-Such a function usually takes a ``space`` argument and any number 
-of positional arguments. Additionally, such functions can define
-an ``unwrap_spec`` telling the ``interp2app`` logic how
-application-level provided arguments should be unwrapped
-before the actual interpreter-level function is invoked. 
-For example, `interpreter descriptors`_ such as the ``Module.__new__`` 
-method for allocating and constructing a Module instance are 
-defined with such code:: 
-
-    Module.typedef = TypeDef("module",
-        __new__ = interp2app(Module.descr_module__new__.im_func,
-                             unwrap_spec=[ObjSpace, W_Root, Arguments]),
-        __init__ = interp2app(Module.descr_module__init__),
-                        # module dictionaries are readonly attributes
-        __dict__ = GetSetProperty(descr_get_dict, cls=Module), 
-        __doc__ = 'module(name[, doc])\n\nCreate a module object...' 
-        )
-
-The actual ``Module.descr_module__new__`` interpreter-level method 
-referenced from the ``__new__`` keyword argument above is defined 
-like this:: 
-
-    def descr_module__new__(space, w_subtype, __args__):
-        module = space.allocate_instance(Module, w_subtype)
-        Module.__init__(module, space, None)
-        return space.wrap(module)
-
-Summarizing, the ``interp2app`` mechanism routes an application-level 
-access or call on an internal interpreter-level object to the
-appropriate descriptor, providing enough precision and hints to keep
-the type-inferring annotator happy. 
-
-
-Calling into application level code from interpreter-level 
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-
-Application level code is `often preferable`_. Therefore, 
-we often like to invoke application level code from interpreter-level. 
-This is done via the Gateway's ``app2interp`` mechanism
-which we usually invoke at definition time in a module. 
-It generates a hook which looks like an interpreter-level 
-function accepting a space and an arbitrary number of arguments. 
-When calling a function at interpreter-level, the caller usually 
-does not need to be aware whether the invoked function
-is run through the PyPy interpreter or whether it will directly
-execute on the machine (after translation). 
-
-Here is an example showing how we implement the metaclass-finding 
-algorithm of the Python language in PyPy::
-
-    app = gateway.applevel(r'''
-        def find_metaclass(bases, namespace, globals, builtin):
-            if '__metaclass__' in namespace:
-                return namespace['__metaclass__']
-            elif len(bases) > 0:
-                base = bases[0]
-                if hasattr(base, '__class__'):
-                    return base.__class__
-                else:
-                    return type(base)
-            elif '__metaclass__' in globals:
-                return globals['__metaclass__']
-            else:
-                try:
-                    return builtin.__metaclass__
-                except AttributeError:
-                    return type
-    ''', filename=__file__)
-
-    find_metaclass  = app.interphook('find_metaclass')
-
-The ``find_metaclass`` interpreter-level hook is invoked 
-with five arguments from the ``BUILD_CLASS`` opcode implementation
-in `pypy/interpreter/pyopcode.py`_:: 
-
-    def BUILD_CLASS(f):
-        w_methodsdict = f.valuestack.pop()
-        w_bases       = f.valuestack.pop()
-        w_name        = f.valuestack.pop()
-        w_metaclass = find_metaclass(f.space, w_bases,
-                                     w_methodsdict, f.w_globals,
-                                     f.space.wrap(f.builtin))
-        w_newclass = f.space.call_function(w_metaclass, w_name,
-                                           w_bases, w_methodsdict)
-        f.valuestack.push(w_newclass)
-
-Note that at a later point we can rewrite the ``find_metaclass`` 
-implementation at interpreter-level and we would not have 
-to modify the calling side at all. 
-
-.. _`often preferable`: coding-guide.html#app-preferable
-.. _`interpreter descriptors`: 
-
-Introspection and Descriptors 
-------------------------------
-
-Python traditionally has a very far-reaching introspection model 
-for bytecode-interpreter-related objects. In both PyPy and CPython, read
-and write accesses to such objects are routed to descriptors. 
-Of course, in CPython those are implemented in ``C`` while in
-PyPy they are implemented in interpreter-level Python code. 
-
-All instances of the Function_, Code_, Frame_ and Module_ classes
-are also ``Wrappable`` instances, which means they can be represented 
-at application level.  These days, a PyPy object space needs to
-work with a basic descriptor lookup when it encounters
-accesses to an interpreter-level object:  an object space asks
-a wrapped object for its type via a ``getclass`` method and then 
-calls the type's ``lookup(name)`` function in order to receive a descriptor 
-function.  Most of PyPy's internal object descriptors are defined at the
-end of `pypy/interpreter/typedef.py`_.  You can use these definitions 
-as a reference for the exact attributes of interpreter classes visible 
-at application level. 
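The getclass/lookup protocol described above can be sketched in plain Python; ``ToySpace`` and its method name are hypothetical stand-ins for illustration, not PyPy's actual object-space API:

```python
# Hedged sketch of the descriptor-lookup protocol described above,
# using plain Python stand-ins. ToySpace is a hypothetical name, not
# PyPy's actual object-space class.

class ToySpace:
    def getattr_via_descriptor(self, w_obj, name):
        w_type = type(w_obj)              # the "getclass" step
        for klass in w_type.__mro__:      # the "lookup(name)" step
            if name in klass.__dict__:
                descr = klass.__dict__[name]
                if hasattr(descr, '__get__'):
                    # route the access through the descriptor
                    return descr.__get__(w_obj, w_type)
                return descr
        raise AttributeError(name)

class Greeter:
    def hello(self):
        return "hi"

space = ToySpace()
# functions are descriptors, so lookup yields a bound method
assert space.getattr_via_descriptor(Greeter(), "hello")() == "hi"
```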
-
-.. include:: _ref.txt

diff --git a/pypy/doc/config/objspace.usemodules._codecs.txt b/pypy/doc/config/objspace.usemodules._codecs.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._codecs.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the '_codecs' module. 
-Used by the 'codecs' standard lib module. This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/objspace.usemodules.unicodedata.txt b/pypy/doc/config/objspace.usemodules.unicodedata.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.unicodedata.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'unicodedata' module. 
-This module is expected to be fully working.

diff --git a/pypy/doc/config/translation.no__thread.txt b/pypy/doc/config/translation.no__thread.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.no__thread.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Don't use the gcc ``__thread`` attribute for the fast thread-local storage
-implementation.  Increases the chance that moving the resulting
-executable to another Linux machine with the same processor will work (see
-:config:`translation.vanilla`).

diff --git a/pypy/doc/config/translation.backendopt.inline.txt b/pypy/doc/config/translation.backendopt.inline.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.inline.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-Inline flowgraphs based on a heuristic; the default one essentially
-computes a weight for each flowgraph based on the number of
-low-level operations in it (see
-:config:`translation.backendopt.inline_threshold`).
-
-Some amount of inlining is needed for malloc removal
-(:config:`translation.backendopt.mallocs`) to be effective, since the
-RPython builtin type helpers must be inlined first.
-
-This optimization is used by default.

diff --git a/pypy/doc/config/translation.countmallocs.txt b/pypy/doc/config/translation.countmallocs.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.countmallocs.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Internal; used by some of the C backend tests to check that the number of
-allocations matches the number of frees.
-
-.. internal

diff --git a/pypy/doc/config/objspace.std.newshortcut.txt b/pypy/doc/config/objspace.std.newshortcut.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.newshortcut.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Performance only: cache and shortcut calling __new__ from builtin types

diff --git a/pypy/doc/discussion/translation-swamp.txt b/pypy/doc/discussion/translation-swamp.txt
deleted file mode 100644
--- a/pypy/doc/discussion/translation-swamp.txt
+++ /dev/null
@@ -1,30 +0,0 @@
-===================================================================
-List of things that need to be improved for translation to be saner
-===================================================================
-
-
- * understand nondeterminism after rtyping
- 
- * experiment with different heuristics:
- 
-    * weigh backedges more (TESTING)
-    * consider size of outer function
-    * consider number of arguments (TESTING)
-
- * find a more deterministic inlining order (TESTING using number of callers)
-
- * experiment with using a base inlining threshold and then drive inlining by
-   malloc removal possibilities (using escape analysis)
-
- * move the inlining of gc helpers just before emitting the code.
-   throw the graph away (TESTING, need to do a new framework translation)
-
- * for gcc: use just one implementation file (TRIED: turns out to be a bad idea,
-   because gcc uses too much ram). Need to experiment more now that
-   inlining should at least be more deterministic!
-
-things to improve the framework gc
-==================================
-
- * find out whether a function can collect
-

diff --git a/pypy/doc/config/translation.insist.txt b/pypy/doc/config/translation.insist.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.insist.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Don't stop on the first `rtyping`_ error. Instead, try to rtype as much as
-possible and show the collected error messages at the end.
-
-.. _`rtyping`: ../rtyper.html

diff --git a/pypy/doc/config/objspace.opcodes.CALL_METHOD.txt b/pypy/doc/config/objspace.opcodes.CALL_METHOD.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.opcodes.CALL_METHOD.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-Enable a pair of bytecodes that speed up method calls.
-See ``pypy.interpreter.callmethod`` for a description.
-
-The goal is to avoid creating the bound method object in the common
-case.  So far, this only works for calls with no keyword arguments, no 
-``*arg`` and no ``**arg``, but it would be easy to extend.
-
-For more information, see the section in `Standard Interpreter Optimizations`_.
-
-.. _`Standard Interpreter Optimizations`: ../interpreter-optimizations.html#lookup-method-call-method

diff --git a/pypy/doc/download.txt b/pypy/doc/download.txt
deleted file mode 100644
--- a/pypy/doc/download.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-
-Download one of the following release files: 
-=============================================
-
-Download page has moved to `pypy.org`_.
-
-.. _`pypy.org`: http://pypy.org/download.html

diff --git a/pypy/doc/config/objspace.opcodes.CALL_LIKELY_BUILTIN.txt b/pypy/doc/config/objspace.opcodes.CALL_LIKELY_BUILTIN.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.opcodes.CALL_LIKELY_BUILTIN.txt
+++ /dev/null
@@ -1,12 +0,0 @@
-Introduce a new opcode called ``CALL_LIKELY_BUILTIN``. It is used when
-something that looks like a builtin function is called (but could in reality
-be shadowed by a name in the module globals). For every module globals
-dictionary it is tracked which builtin names are shadowed in that module.
-When the ``CALL_LIKELY_BUILTIN`` opcode is executed, it checks whether the
-builtin is shadowed. If not, the corresponding builtin is called. Otherwise
-the object that is shadowing it is called instead. When no shadowing is
-happening, this saves two dictionary lookups on calls to builtins.
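The lookup cost involved can be sketched in plain Python; this is a hedged illustration of the idea, with hypothetical helper names, not PyPy's actual opcode implementation:

```python
# Hedged sketch (not PyPy's implementation) of the lookups that
# CALL_LIKELY_BUILTIN avoids. The generic path pays two dictionary
# lookups per builtin call; with per-module shadow tracking, the
# common unshadowed case jumps straight to a cached builtin.

import builtins

def call_builtin_generic(globals_dict, name, *args):
    # generic path: module globals first, then the builtins module
    try:
        func = globals_dict[name]            # lookup 1
    except KeyError:
        func = getattr(builtins, name)       # lookup 2
    return func(*args)

def call_likely_builtin(globals_dict, shadowed_names, name,
                        cached_builtin, *args):
    # fast path: shadowed_names is kept up to date as the module
    # globals change, so one membership test decides everything
    if name in shadowed_names:
        return globals_dict[name](*args)
    return cached_builtin(*args)

assert call_builtin_generic({}, "len", [1, 2, 3]) == 3
assert call_likely_builtin({}, set(), "len", len, "abcd") == 4
# a shadowing definition in the globals wins over the builtin
assert call_likely_builtin({"len": lambda x: -1}, {"len"},
                           "len", len, "abcd") == -1
```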
-
-For more information, see the section in `Standard Interpreter Optimizations`_.
-
-.. _`Standard Interpreter Optimizations`: ../interpreter-optimizations.html#call-likely-builtin

diff --git a/pypy/doc/config/translation.backendopt.storesink.txt b/pypy/doc/config/translation.backendopt.storesink.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.storesink.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Store sinking optimization. On by default.

diff --git a/pypy/doc/carbonpython.txt b/pypy/doc/carbonpython.txt
deleted file mode 100644
--- a/pypy/doc/carbonpython.txt
+++ /dev/null
@@ -1,230 +0,0 @@
-==================================================
-CarbonPython, aka C# considered harmful
-==================================================
-
-CarbonPython overview
-=====================
-
-CarbonPython is an experimental RPython to .NET compiler. Its main
-focus is to produce DLLs to be used by other .NET programs, not
-standalone executables; if you want to compile an RPython standalone
-program, have a look at `translate.py`_.
-
-Compiled RPython programs are much faster (up to 250x) than
-interpreted IronPython programs, hence it might be a convenient
-replacement for C# when more speed is needed. RPython programs can be
-as fast as C# programs.
-
-RPython is a restricted subset of Python, static enough to be analyzed
-and compiled efficiently to lower-level languages.  To read more about
-the RPython limitations, see the `RPython description`_.
-
-**Disclaimer**: RPython is a much less convenient language than Python
-to program with. If you do not need speed, there is no reason to look
-at RPython.
-
-**Big disclaimer**: CarbonPython is still in a pre-alpha stage: it's
-not meant to be used for production code, and the API might change in
-the future. Despite this, it might be useful in some situations and
-you are encouraged to try it by yourself. Suggestions, bug-reports and
-even better patches are welcome.
-
-.. _`RPython description`: coding-guide.html#restricted-python
-.. _`translate.py`: faq.html#how-do-i-compile-my-own-interpreters
-
-
-Quick start
-===========
-
-Suppose you want to write a little DLL in RPython and call its
-function from C#.
-
-Here is the file mylibrary.py::
-
-    from pypy.translator.cli.carbonpython import export
-
-    @export(int, int)
-    def add(x, y):
-        return x+y
-
-    @export(int, int)
-    def sub(x, y):
-        return x-y
-
-
-And here the C# program main.cs::
-
-    using System;
-    public class CarbonPythonTest
-    {
-        public static void Main()
-        {
-            Console.WriteLine(mylibrary.add(40, 2));
-            Console.WriteLine(mylibrary.sub(44, 2));
-        }
-    }
-
-Once the files have been created, you can compile ``mylibrary.py``
-with CarbonPython to get the corresponding DLL::
-
-    $ python carbonpython.py mylibrary.py
-    ... lot of stuff
-
-Then, we compile main.cs into an executable, being sure to add a
-reference to the newly created ``mylibrary.dll``::
-
-    # with mono on linux
-    $ gmcs /r:mylibrary.dll main.cs
-
-    # with Microsoft CLR on windows
-    c:\> csc /r:mylibrary main.cs
-
-Now we can run the executable to see whether the answers are right::
-
-    $ mono main.exe
-    42
-    42
-
-
-Multiple entry-points
-=====================
-
-In RPython, the type of each variable is inferred by the `Annotator`_:
-the annotator analyzes the whole program top-down starting from an
-entry-point, i.e. a function whose parameter types we specify.
-
-This approach works for standalone executables, but not for a
-library, which by definition is composed of more than one
-entry-point. Thus, you need to explicitly specify which functions you
-want to include in your DLL, together with the expected input types.
-
-To mark a function as an entry-point, you use the ``@export``
-decorator, which is defined in ``pypy.translator.cli.carbonpython``,
-as shown by the previous example.  Note that you do not need to
-specify the return type, because it is automatically inferred by the
-annotator.
-
-.. _`Annotator`: translation.html#annotator
-
-
-Namespaces
-==========
-
-Since `CLS`_ (Common Language Specification) does not support module
-level static methods, RPython functions marked as entry-points are
-compiled to static methods of a class, in order to be accessible by
-every CLS-compliant language such as C# or VB.NET.
-
-The class which each function is placed in depends on its
-**namespace**; for example, if the namespace of a function ``foo`` is
-``A.B.C``, the function will be rendered as a static method of the
-``C`` class inside the ``A.B`` namespace. This allows C# and
-IronPython code to call the function using the intuitive ``A.B.C.foo``
-syntax.
-
-By default, the namespace for exported functions is the same as
-the name of the module. Thus in the previous example the default
-namespace is ``mylibrary`` and the functions are placed inside the
-corresponding class in the global namespace.
-
-You can change the default namespace by setting the ``_namespace_``
-variable in the module you are compiling::
-
-    _namespace_ = 'Foo.Bar'
-
-    @export(int, int)
-    def f(x, y):
-        pass
-
-Finally, you can also set a specific namespace on a per-function
-basis, using the appropriate keyword argument of the ``@export``
-decorator::
-
-    @export(int, int, namespace='Foo.Bar')
-    def f(x, y):
-        pass
-
-
-.. _`CLS`: http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-335.pdf
-
-
-Exporting classes
-=================
-
-RPython libraries can also export classes: to export a class, add the
-``@export`` decorator to its ``__init__`` method; similarly, you can
-also export any methods of the class::
-
-    class MyClass:
-
-        @export(int)
-        def __init__(self, x):
-            self.x = x
-
-        @export
-        def getx(self):
-            return self.x
-
-
-Note that the type of ``self`` must not be specified: it will
-automatically be assumed to be ``MyClass``.
-
-The ``__init__`` method is not automatically mapped to the .NET
-constructor; to properly initialize an RPython object from C# or
-IronPython code you need to explicitly call ``__init__``; for example,
-in C#::
-
-    MyClass obj = new MyClass();
-    obj.__init__(x);
-
-Note that this is needed only when calling RPython code from 
-outside; the RPython compiler automatically calls ``__init__``
-whenever an RPython class is instantiated.
-
-In the future this discrepancy will be fixed and the ``__init__``
-method will be automatically mapped to the constructor.
-
-
-Accessing .NET libraries
-========================
-
-**Warning**: the API for accessing .NET classes from RPython is highly
-experimental and will probably change in the future.
-
-In RPython you can access native .NET classes through the ``CLR``
-object defined in ``translator.cli.dotnet``: from there, you can
-navigate through namespaces using the usual dot notation; for example,
-``CLR.System.Collections.ArrayList`` refers to the ``ArrayList`` class
-in the ``System.Collections`` namespace.
-
-To instantiate a .NET class, simply call it::
-
-    ArrayList = CLR.System.Collections.ArrayList
-    def foo():
-        obj = ArrayList()
-        obj.Add(42)
-        return obj
-
-At the moment there is no special syntax support for indexers and
-properties: for example, you can't access ArrayList's elements using
-the square bracket notation, but you have to call the
-``get_Item`` and ``set_Item`` methods; similarly, to access a property
-``XXX`` you need to call ``get_XXX`` and ``set_XXX``::
-
-    def foo():
-        obj = ArrayList()
-        obj.Add(42)
-        print obj.get_Item(0)
-        print obj.get_Count()
-
-Static methods are also supported, as well as overloading::
-
-    Math = CLR.System.Math
-    def foo():
-        print Math.Abs(-42)
-        print Math.Abs(-42.0)
-
-
-At the moment, it is not possible to reference assemblies other than
-mscorlib. This will be fixed soon.

diff --git a/pypy/doc/__pypy__-module.txt b/pypy/doc/__pypy__-module.txt
deleted file mode 100644
--- a/pypy/doc/__pypy__-module.txt
+++ /dev/null
@@ -1,86 +0,0 @@
-=======================
-The ``__pypy__`` module
-=======================
-
-The ``__pypy__`` module is the main entry point to special features provided
-by PyPy's standard interpreter. Its content depends on `configuration options`_, 
-which may add new functionality; the existence or non-existence of certain 
-functions indicates the presence of such features. 
-
-.. _`configuration options`: config/index.html
-
-Generally available functionality
-=================================
-
- - ``internal_repr(obj)``: return the interpreter-level representation of an
-   object.
- - ``bytebuffer(length)``: return a new read-write buffer of the given length.
-   It works like a simplified array of characters (actually, depending on the
-   configuration the ``array`` module internally uses this).
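``bytebuffer`` itself is PyPy-specific, but its described behaviour (a fixed-length, mutable buffer of characters) is close to a fixed-size ``bytearray``; the following is a rough CPython stand-in used only as an analogy, not PyPy's implementation:

```python
# Rough stand-in using CPython's bytearray to mimic the behaviour
# described for __pypy__.bytebuffer; this is an analogy for
# illustration, not PyPy's implementation.

def bytebuffer(length):
    return bytearray(length)   # zero-filled, mutable, fixed length

buf = bytebuffer(4)
buf[0:2] = b"hi"               # write into the buffer in place
assert len(buf) == 4
assert bytes(buf) == b"hi\x00\x00"
```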
-
-Thunk Object Space Functionality
-================================
-
-When the thunk object space is used (choose with :config:`objspace.name`),
-the following functions are put into ``__pypy__``:
-
- - ``thunk``
- - ``is_thunk``
- - ``become``
- - ``lazy``
-
-Those are all described in the `interface section of the thunk object space
-docs`_.
-
-For explanations and examples see the `thunk object space docs`_.
-
-.. _`thunk object space docs`: objspace-proxies.html#thunk
-.. _`interface section of the thunk object space docs`: objspace-proxies.html#thunk-interface
-
-Taint Object Space Functionality
-================================
-
-When the taint object space is used (choose with :config:`objspace.name`),
-the following names are put into ``__pypy__``:
-
- - ``taint``
- - ``is_tainted``
- - ``untaint``
- - ``taint_atomic``
- - ``_taint_debug``
- - ``_taint_look``
- - ``TaintError``
-
-Those are all described in the `interface section of the taint object space
-docs`_.
-
-For more detailed explanations and examples see the `taint object space docs`_.
-
-.. _`taint object space docs`: objspace-proxies.html#taint
-.. _`interface section of the taint object space docs`: objspace-proxies.html#taint-interface
-
-Transparent Proxy Functionality
-===============================
-
-If `transparent proxies`_ are enabled (with :config:`objspace.std.withtproxy`)
-the following functions are put into ``__pypy__``:
-
- - ``tproxy(typ, controller)``: Return something that looks like it is of type
-   typ. Its behaviour is completely controlled by the controller. See the docs
-   about `transparent proxies`_ for detail.
-
- - ``get_tproxy_controller(obj)``: If obj is really a transparent proxy, return
-   its controller. Otherwise return None.
-
-.. _`transparent proxies`: objspace-proxies.html#tproxy
-
-
-Functionality available on py.py (not after translation)
-========================================================
-
- - ``isfake(obj)``: returns True if ``obj`` is faked.
-
- - ``interp_pdb()``: start a pdb at interpreter-level.
-
-
-

diff --git a/pypy/doc/config/objspace.std.withstrslice.txt b/pypy/doc/config/objspace.std.withstrslice.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withstrslice.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-Enable "string slice" objects.
-
-See the page about `Standard Interpreter Optimizations`_ for more details.
-
-.. _`Standard Interpreter Optimizations`: ../interpreter-optimizations.html#string-slice-objects
-
-

diff --git a/pypy/doc/config/objspace.std.withprebuiltint.txt b/pypy/doc/config/objspace.std.withprebuiltint.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withprebuiltint.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-This option enables the caching of small integer objects (similar to what
-CPython does). The range of integers that are cached can be influenced with
-the :config:`objspace.std.prebuiltintfrom` and
-:config:`objspace.std.prebuiltintto` options.
-

diff --git a/pypy/doc/config/objspace.usemodules.errno.txt b/pypy/doc/config/objspace.usemodules.errno.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.errno.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'errno' module. 
-This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/objspace.usemodules.sys.txt b/pypy/doc/config/objspace.usemodules.sys.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.sys.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'sys' module. 
-This module is essential, included by default and should not be removed.

diff --git a/pypy/doc/discussion/pypy_metaclasses_in_cl.txt b/pypy/doc/discussion/pypy_metaclasses_in_cl.txt
deleted file mode 100644
--- a/pypy/doc/discussion/pypy_metaclasses_in_cl.txt
+++ /dev/null
@@ -1,139 +0,0 @@
-IRC log
-=======
-
-::
-
-    [09:41] <dialtone> arigo: is it possible to ask the backendoptimizer to completely remove all the oogetfield('meta', obj)?
-    [09:42] <dialtone> and at the same time to change all the oogetfield('somefield', meta) into oogetfield('somefield', obj)
-    [09:42] <dialtone> because then we wouldn't need the metaclass hierarchy anymore
-    [09:42] <dialtone> (at least in common lisp)
-    [09:42] <arigo> as far as I know the idea was indeed to be able to do this kind of things
-    [09:43] <arigo> but not necessarily in the existing backendopt
-    [09:44] <dialtone> uhmmm
-    [09:44] <dialtone> I have no idea how to do this stuff
-    [09:44] <arigo> if I understand it correctly, as a first step you can just tweak gencl to recognize oogetfield('meta', obj)
-    [09:44] <dialtone> I'll think about it on the plane maybe
-    [09:44] <arigo> and produce a same_as equivalent instead
-    [09:44] <arigo> (do I make any sense at all?)
-    [09:44] <dialtone> yes
-    [09:45] <dialtone> same_as(meta, obj)
-    [09:45] <dialtone> so that the next oogetfield() will still work on meta which in reality is the obj
-    [09:45] <arigo> yes
-    [09:45] <dialtone> thus you obtained the same thing without removing anything
-    [09:45] <dialtone> cool
-    [09:46] <antocuni> dialtone: can you explain me better what are you trying to do?
-    [09:46] <dialtone> it looks kinda simple
-    [09:46] <dialtone> am I a fool?
-    [09:46] <dialtone> antocuni: I want to get rid of the metaclass stuff in common lisp
-    [09:47] <dialtone> since common lisp supports class variables
-    [09:47] <dialtone> (DEFCLASS foo () ((bar :allocate :class)))
-    [09:47] <antocuni> cool
-    [09:47] <dialtone> but to do that I also have to get rid of the opcodes that work on the object model
-    [09:48] <dialtone> at first I thought about removing the metaclass related operations (or change them) but armin got a great idea about using same_as
-    [09:48] idnar (i=mithrand at unaffiliated/idnar) left irc: Remote closed the connection
-    [09:48] <arigo> there might be a few problems, though
-    [09:48] <dialtone> and here comes the part I feared
-    [09:48] <arigo> I'm not sure if the meta object is used for more than oogetfields
-    [09:49] <arigo> and also, let's see if there are name clashes in the fields
-    [09:49] <antocuni> I can't understand a thing: are you trying to lookup some fields in the obj directly, instead of in the metclass, right?
-    [09:49] <dialtone> antocuni: yes
-    [09:50] <antocuni> why an object should have fields that belongs to its metaclass?
-    [09:50] <dialtone> arigo: uhmmm you can have both a class variable and an instance variable named in the same way?
-    [09:50] <dialtone> metaclass is not a real metaclass
-    [09:50] <arigo> I don't know
-    [09:50] <braintone> arigo - r26566 - Support geterrno() from rctypes to genc.
-    [09:50] <antocuni> dialtone: ah, now I understand
-    [09:50] <arigo> I would expect it not to be the case, as the names come from RPython names
-    [09:51] <dialtone> arigo: indeed
-    [09:51] <dialtone> but I guess I can set different accessors maybe for class level things and for instance level things
-    [09:51] <dialtone> let's try
-    [09:51] <dialtone> no...
-    [09:52] <dialtone> so a name clash would break stuff
-    [09:52] <dialtone> but... how do you recognize an access to a class variable and one to an instance variable from RPython?
-    [09:53] <arigo> dialtone: I think we don't have name clashes, because there is some mangling anyway
-    [09:53] <dialtone> cool
-    [09:53] <arigo> if I see it correctly, class variable names start with 'pbc' and instance ones with 'o'
-    [09:53] <dialtone> that's what we've done in gencl yes
-    [09:54] <arigo> ? that's what the ootyping is doing
-    [09:54] <dialtone> yes yes
-    [09:54] <arigo> :-)
-    [09:54] <dialtone> I mean that I see the distinction in gencl :)
-    [09:54] <dialtone> sooooooo
-    [09:55] <dialtone> if I have a getfield where the first argument is meta and I simply emit the same code that I emit for the same_as I should be safe removing all the meta stuff... maybe
-    [09:55] <dialtone> seems like a tiny change in gencl
-    [09:55] <arigo> dialtone: in RPython, the annotator says that attributes are instance fields as soon as they are written to instances, otherwise they are class attributes
-    [09:56] <arigo> yes, it should work
-    [09:56] Palats (n=Pierre at izumi.palats.com) left irc: Read error: 104 (Connection reset by peer)
-    [09:56] <dialtone> unless of course metaclasses are used for something else than class variables
-    [09:56] <arigo> ideally, you should not look for the name 'meta' but for some other hint
-    [09:57] <arigo> I'm not completely at ease with the various levels of ootype
-    [09:57] <dialtone> neither am I\
-    [09:57] <nikh> all field names other than those defined by ootype (like "meta") will be mangled, so i guess checking for "meta" is good enough
-    [09:57] <dialtone> and I also have to ignore the setfield opcode that deals with metaclasses
-    [09:58] <dialtone> or make it a same_as as well
-    [09:59] <arigo> apparently, the meta instances are used as the ootype of RPython classes
-    [10:00] <arigo> so they can be manipulated by RPython code that passes classes around
-    [10:01] <arigo> I guess you can also pass classes around in CL, read attributes from them, and instantiate them
-    [10:01] <dialtone> yes
-    [10:01] <arigo> so a saner approach might be to try to have gencl use CL classes instead of these meta instances
-    [10:03] <dialtone> uhmmmmm
-    [10:03] <arigo> which means: recognize if an ootype.Instance is actually representing an RPython class (by using a hint)
-    [10:03] <dialtone> I also have to deal with the Class_
-    [10:03] <dialtone> but that can probably be set to standard-class
-    [10:03] <arigo> yes, I think it's saner to make, basically, oogetfield('class_') be a same_as
-    [10:04] <dialtone> cool
-    [10:04] <dialtone> I think I'll save this irc log to put it in the svn tree for sanxiyn
-    [10:04] <nikh> to recognize RPython class representations: if the ootype.Instance has the superclass ootypesystem.rclass.CLASSTYPE, then it's a "metaclass"
-    [10:04] <dialtone> he is thinking about this in the plane (at least this is what he told)
-    [10:05] <arigo> :-)
-    [10:05] <arigo> nikh: yes
-    [10:05] <arigo> ootype is indeed rather complicated, level-wise, to support limited languages like Java
-    [10:05] <nikh> unfortunately, yes
-    [10:05] <nikh> well, in a way it's very convenient for the backends
-    [10:05] <nikh> but if you want to use more native constructs, it gets hairy quickly
-    [10:05] <dialtone> I dunno
-    [10:05] <dialtone> depends on the backend
-    [10:06] <arigo> hum, there is still an information missing that gencl would need here
-    [10:06] <dialtone> I think if the language of the backend is powerful enough it could use a higher abstraction
-    [10:07] <arigo> dialtone: yes, there is also the (hairy to implement) idea of producing slightly different things for different back-ends too
-    [10:07] <dialtone> using backendopts?
-    [10:08] <dialtone> would it make sense to have a kind of backend_supports=['metaclasses', 'classvariables', 'first_class_functions'...]
-    [10:08] <arigo> maybe, but I was thinking about doing different things in ootypesystem/rclass already
-    [10:08] <arigo> yes, such a backend_supports would be great
-    [10:09] <nikh> dialtone: there is still an hour left to sprint, so go go go ;)
-    [10:09] <nikh> you can do it, if you want it ;)
-    [10:09] <arigo> what is missing is the link from the concrete Instance types, and which Instance corresponds to its meta-instance
-    [10:10] idnar (i=mithrand at unaffiliated/idnar) joined #pypy.
-    [10:10] <arigo> dialtone: it's not as simple as making an oogetfield be a same_as
-    [10:10] <dialtone> KnowledgeUnboundError, Missing documentation in slot brain
-    [10:10] <arigo> right now for CL the goal would be to generate for a normal Instance, a DEFCLASS whose :allocate :class attributes are the attributes of the meta-Instance
-    [10:11] <nikh> we could optionally have class fields in Instances, and then operations like ooget/setclassfield
-    [10:11] <dialtone> the reason why I ask is that if we manage to do this then we could also use default Condition as Exception
-    [10:11] <dialtone> and we could map the Conditions in common lisp to exceptions in python transparently
-    [10:12] <dialtone> since the object systems will then match (and they are vaguely similar anyway)
-    [10:12] <arigo> nice
-    [10:12] <dialtone> at least I think
-    [10:18] <arigo> I'm still rather confused by ootypesystem/rclass
-    [10:18] <arigo> although I think that blame would show my name on quite some bits :-)
-    [10:19] <arigo> there are no class attributes read through instances
-    [10:19] <arigo> they are turned into method calls
-    [10:19] <arigo> accessor methods
-    [10:20] <arigo> it's a bit organically grown
-    [10:20] <arigo> accessor methods were introduced at one point, and the meta-Instance later
-    [10:21] <dialtone> uhmmm
-    [10:22] <nikh> what was the reason for having accessor methods?
-    [10:22] <nikh> they seem to be only generated for class vars that are overridden in subclasses.
-    [10:22] <arigo> yes
-    [10:22] <arigo> before we had the meta-Instance trick, it was the only way to avoid storing the value in all instances
-    [10:22] <nikh> aha
-    [10:23] <nikh> we could possibly get rid of these accessors
-    [10:23] <arigo> now, yes, by storing the values in the meta-Instance
-    [10:23] <nikh> they are always stored in the meta-Instance anyway, I think
-    [10:23] <arigo> no, I think that other values are stored in the meta-Instance right now
-    [10:24] <arigo> it's the values that are only ever accessed with a syntax 'ClassName.attr', i.e. not through an instance
-    [10:24] <arigo> ...more precisely, with 'x = ClassName or OtherClassName; x.attr'
-    [10:25] <nikh> hm, i'm still trying to read this out of the code ...
-    [10:28] <arigo> it's in ClassRepr._setup_repr()
-    [10:28] <arigo> there is no clsfields here, just pbcfields
-    [10:28] <arigo> # attributes showing up in getattrs done on the class as a PBC
-    [10:28] <nikh> i see
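The rule arigo states above (an attribute becomes an instance field as soon as it is written through an instance; attributes only ever accessed through the class stay class-level) can be illustrated with a toy classifier. This is an invented sketch for illustration, not annotator code; the function name and inputs are assumptions:

```python
# Toy model of the annotator rule from the discussion above:
# an attribute is an instance field as soon as some code writes it
# through an instance; the rest stay class-level ("pbc" fields, in the
# mangling mentioned earlier in the log).

def classify_attributes(class_attrs, instance_writes):
    """class_attrs: attributes defined on the class.
    instance_writes: attributes ever assigned through an instance.
    Returns (instance_fields, class_level_fields)."""
    instance_fields = set(instance_writes)
    pbc_fields = set(class_attrs) - instance_fields
    return instance_fields, pbc_fields
```

For example, a class defining `counter` and `limit` where only `counter` is ever assigned on instances would keep `limit` as a class-level ("pbc") field.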

diff --git a/pypy/doc/config/translation.withsmallfuncsets.txt b/pypy/doc/config/translation.withsmallfuncsets.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.withsmallfuncsets.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Represent function sets smaller than this option's value as an integer instead
-of a function pointer. A call is then done via a switch on that integer, which
-allows inlining etc. Small numbers for this can speed up PyPy (try 5).
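The option's idea can be sketched as follows; `FUNC_TABLE` and `call_by_index` are invented names standing in for the generated dispatch code, not actual translator output:

```python
# Sketch of the withsmallfuncsets idea: a "function pointer" drawn
# from a small, statically known set is represented as a small integer,
# and calls go through a switch over that integer -- each case is a
# direct call that a compiler can then inline.

def double(x):
    return 2 * x

def square(x):
    return x * x

FUNC_TABLE = [double, square]   # the small, statically known set

def call_by_index(idx, x):
    # stands in for the generated switch on the integer
    if idx == 0:
        return double(x)
    elif idx == 1:
        return square(x)
    raise ValueError("unknown function index")
```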

diff --git a/pypy/doc/config/translation.backendopt.remove_asserts.txt b/pypy/doc/config/translation.backendopt.remove_asserts.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.remove_asserts.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Remove raising of assertions from the flowgraphs, which might give small speedups.

diff --git a/pypy/doc/config/translation.ootype.txt b/pypy/doc/config/translation.ootype.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.ootype.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-This group contains options specific to ootypesystem.

diff --git a/pypy/doc/config/objspace.usemodules.termios.txt b/pypy/doc/config/objspace.usemodules.termios.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.termios.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'termios' module. 
-This module is expected to be fully working.

diff --git a/pypy/doc/config/objspace.usemodules.cStringIO.txt b/pypy/doc/config/objspace.usemodules.cStringIO.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.cStringIO.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Use the built-in cStringIO module.
-
-If not enabled, importing cStringIO gives you the app-level
-implementation from the standard library StringIO module.

diff --git a/pypy/doc/config/objspace.usemodules.thread.txt b/pypy/doc/config/objspace.usemodules.thread.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.thread.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Use the 'thread' module. 

diff --git a/pypy/doc/config/objspace.std.logspaceoptypes.txt b/pypy/doc/config/objspace.std.logspaceoptypes.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.logspaceoptypes.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-.. internal
-
-Wrap "simple" bytecode implementations like BINARY_ADD with code that collects
-information about which types these bytecodes receive as arguments.

diff --git a/pypy/doc/discussion/chained_getattr.txt b/pypy/doc/discussion/chained_getattr.txt
deleted file mode 100644
--- a/pypy/doc/discussion/chained_getattr.txt
+++ /dev/null
@@ -1,70 +0,0 @@
-
-
-"chained getattr/module global lookup" optimization
-(discussion during trillke-sprint 2007, anto/holger, 
-a bit of samuele and cf earlier on)  
-
-random example: 
-
-    code: 
-        import os.path
-        normed = [os.path.normpath(p) for p in somelist]
-    bytecode: 
-        [...]
-         LOAD_GLOBAL              (os)
-         LOAD_ATTR                (path)
-         LOAD_ATTR                (normpath)
-         LOAD_FAST                (p)
-         CALL_FUNCTION            1
-
-    would be turned by pypy-compiler into: 
-
-         LOAD_CHAINED_GLOBAL      (os,path,normpath)
-         LOAD_FAST                (p)
-         CALL_FUNCTION            1
-       
-    now for the LOAD_CHAINED_GLOBAL bytecode implementation:
-
-        Module dicts have a special implementation, providing: 
-
-        - an extra "fastlookup" rpython-dict serving as a cache for
-          LOAD_CHAINED_GLOBAL places within the modules: 
-
-          * keys are e.g. ('os', 'path', 'normpath')
-
-          * values are tuples of the form: 
-            ([obj1, obj2, obj3], [ver1, ver2])
-
-             "ver1" refers to the version of the globals of "os"
-             "ver2" refers to the version of the globals of "os.path"
-             "obj3" is the resulting "normpath" function 
-
-        - upon changes to the global dict, "fastlookup.clear()" is called
-
-        - after the fastlookup entry is filled for a given
-          LOAD_CHAINED_GLOBAL index, the following checks need
-          to be performed in the bytecode implementation::
-    
-              value = f_globals.fastlookup.get(key, None)
-              if value is None:
-                  ...  # fill the entry (do the lookups, record versions)
-              else:
-                  # check that our cached lookups are still valid 
-                  assert isinstance(value, tuple) 
-                  objects, versions = value
-                  curobj = objects[0]   # e.g. the 'os' module itself
-                  i = 0
-                  while i < len(versions): 
-                      lastversion = versions[i]
-                      ver = getver_for_obj(curobj)
-                      if ver == -1 or ver != lastversion:
-                          # redo the getattr and remember the new version
-                          objects[i + 1] = space.getattr(curobj, key[i + 1])
-                          versions[i] = ver
-                      curobj = objects[i + 1]
-                      i += 1
-              return objects[-1]
-
-            def getver_for_obj(obj):
-                if "obj is not Module":
-                    return -1
-                return obj.w_dict.version 
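The whole scheme can also be sketched as a small executable model. `Module`, `chained_global_load` and the version counter below are stand-ins for the real module-dict machinery, with invented names; a real implementation would also clear the cache when the frame's globals themselves change, as described above:

```python
class Module:
    """Stand-in for a module whose globals carry a version counter
    (bump .version by hand whenever an attribute changes)."""
    def __init__(self, **attrs):
        self.__dict__.update(attrs)
        self.version = 0

def chained_global_load(f_globals, key, cache):
    """key is e.g. ('os', 'path', 'normpath'); cache maps key to
    (objects, versions) as in the discussion above."""
    entry = cache.get(key)
    if entry is not None:
        objects, versions = entry
        # zip pairs each module with the version it was resolved under;
        # the final non-module object has no version and is excluded
        if all(obj.version == ver for obj, ver in zip(objects, versions)):
            return objects[-1]          # cache hit: no getattrs at all
    # (re)do the full chain of lookups, recording the versions seen
    objects, versions = [], []
    obj = f_globals[key[0]]
    for name in key[1:]:
        objects.append(obj)
        versions.append(obj.version)
        obj = getattr(obj, name)
    objects.append(obj)                 # the final result, e.g. normpath
    cache[key] = (objects, versions)
    return obj
```

A version bump on any module in the chain invalidates the entry and redoes the getattrs on the next load.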

diff --git a/pypy/doc/config/translation.backendopt.clever_malloc_removal_threshold.txt b/pypy/doc/config/translation.backendopt.clever_malloc_removal_threshold.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.clever_malloc_removal_threshold.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Weight threshold used to decide whether to inline flowgraphs.  
-This is for clever malloc removal (:config:`translation.backendopt.clever_malloc_removal`).

diff --git a/pypy/doc/config/objspace.std.builtinshortcut.txt b/pypy/doc/config/objspace.std.builtinshortcut.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.builtinshortcut.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-A shortcut speeding up primitive operations between built-in types.
-
-This is a space-time trade-off: at the moment, this option makes a
-translated pypy-c executable bigger by about 1.7 MB.  (This can probably
-be improved with careful analysis.)

diff --git a/pypy/doc/config/objspace.std.withmapdict.txt b/pypy/doc/config/objspace.std.withmapdict.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withmapdict.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Enable the new version of "sharing dictionaries".
-
-See the section in `Standard Interpreter Optimizations`_ for more details.
-
-.. _`Standard Interpreter Optimizations`: ../interpreter-optimizations.html#sharing-dicts

diff --git a/pypy/doc/extradoc.txt b/pypy/doc/extradoc.txt
deleted file mode 100644
--- a/pypy/doc/extradoc.txt
+++ /dev/null
@@ -1,349 +0,0 @@
-=================================================
-PyPy - papers, talks and related projects 
-=================================================
-
-Papers
-----------------------------------
-
-*Articles about PyPy published so far, most recent first:* (bibtex_ file)
-
-* `High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`_,
-  A. Cuni, Ph.D. thesis
-
-* `Tracing the Meta-Level: PyPy's Tracing JIT Compiler`_,
-  C.F. Bolz, A. Cuni, M. Fijalkowski, A. Rigo
-
-* `Faster than C#: Efficient Implementation of Dynamic Languages on .NET`_,
-  A. Cuni, D. Ancona and A. Rigo
-
-* `Automatic JIT Compiler Generation with Runtime Partial Evaluation`_
-  (Master Thesis), C.F. Bolz
-
-* `RPython: A Step towards Reconciling Dynamically and Statically Typed
-  OO Languages`_, D. Ancona, M. Ancona, A. Cuni and N.D. Matsakis
-
-* `How to *not* write Virtual Machines for Dynamic Languages`_,
-  C.F. Bolz and A. Rigo
-
-* `PyPy's approach to virtual machine construction`_, A. Rigo and S. Pedroni
-
-
-*Non-published articles (only submitted so far, or technical reports):*
-
-* `Automatic generation of JIT compilers for dynamic languages in .NET`_,
-  D. Ancona, C.F. Bolz, A. Cuni and A. Rigo
-
-* `EU Reports`_: a list of all the reports we produced until 2007 for the
-  European Union sponsored part of PyPy.  Notably, it includes:
-
-* `Core Object Optimization Results`_, PyPy Team
-
-* `Compiling Dynamic Language Implementations`_, PyPy Team
-
-
-*Other research using PyPy (as far as we know it):*
-
-* `PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`_,
-  C. Bruni and T. Verwaest
-
-* `Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`_,
-  C.F. Bolz, A. Kuhn, A. Lienhard, N. Matsakis, O. Nierstrasz, L. Renggli,
-  A. Rigo and T. Verwaest
-
-
-*Previous work:*
-
-* `Representation-Based Just-in-Time Specialization and the Psyco Prototype
-  for Python`_, A. Rigo
-
-
-.. _bibtex: http://codespeak.net/svn/pypy/extradoc/talk/bibtex.bib
-.. _`High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`: http://codespeak.net/svn/user/antocuni/phd/thesis/thesis.pdf
-.. _`How to *not* write Virtual Machines for Dynamic Languages`: http://codespeak.net/svn/pypy/extradoc/talk/dyla2007/dyla.pdf
-.. _`Tracing the Meta-Level: PyPy's Tracing JIT Compiler`: http://codespeak.net/svn/pypy/extradoc/talk/icooolps2009/bolz-tracing-jit.pdf
-.. _`Faster than C#: Efficient Implementation of Dynamic Languages on .NET`: http://codespeak.net/svn/pypy/extradoc/talk/icooolps2009-dotnet/cli-jit.pdf
-.. _`Automatic JIT Compiler Generation with Runtime Partial Evaluation`: http://codespeak.net/svn/user/cfbolz/jitpl/thesis/final-master.pdf
-.. _`RPython: A Step towards Reconciling Dynamically and Statically Typed OO Languages`: http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07
-.. _`EU Reports`: index-report.html
-.. _`PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`: http://www.iam.unibe.ch/~verwaest/pygirl.pdf
-.. _`Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python`: http://psyco.sourceforge.net/psyco-pepm-a.ps.gz
-.. _`Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`: http://dx.doi.org/10.1007/978-3-540-89275-5_7
-.. _`Automatic generation of JIT compilers for dynamic languages in .NET`: http://codespeak.net/svn/pypy/extradoc/talk/ecoop2009/main.pdf
-.. _`Core Object Optimization Results`: http://codespeak.net/svn/pypy/extradoc/eu-report/D06.1_Core_Optimizations-2007-04-30.pdf
-.. _`Compiling Dynamic Language Implementations`: http://codespeak.net/pypy/extradoc/eu-report/D05.1_Publish_on_translating_a_very-high-level_description.pdf
-
-
-Talks and Presentations 
-----------------------------------
-
-Talks in 2010
-+++++++++++++
-
-* `PyCon 2010`_.
-
-
-Talks in 2009
-+++++++++++++
-
-* `RuPy 2009`_.
-
-* `EuroPython talks 2009`_.
-
-* `PyCon talks 2009`_.
-
-* `Wroclaw (Poland) presentation`_ by Maciej Fijalkowski.  Introduction,
-  including about the current JIT.
-
-* `PyPy talk at OpenBossa 09`_ (blog post).
-
-
-Talks in 2008
-+++++++++++++
-
-* Talk `at PyCon Poland 08`_.  In Polish.
-
-* `The PyPy Project and You`_, by Michael Hudson at OSDC 2008.
-
-* `Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`_
-  by C.F. Bolz et al.; `pdf of the presentation`__ at S3 2008.
-
-* `EuroPython talks 2008`_.
-
-* PyPy at the `Maemo summit`_.
-
-* `PyCon UK 2008 - JIT`_ and `PyCon UK 2008 - Status`_.
-
-* `PyCon Italy 2008`_.
-
-* Talk by Maciej Fijalkowski `at SFI 08`_, Cracow (Poland) Academic IT
-  Festival.
-
-* `RuPy 2008`_.
-
-* `PyCon 2008`_.
-
-.. __: http://codespeak.net/svn/pypy/extradoc/talk/s3-2008/talk.pdf
-
-
-Talks in 2007
-+++++++++++++
-
-* Our "road show" tour of the United States: presentations `at IBM`__
-  and `at Google`__.
-
-* `ESUG 2007`_.
-
-* `RPython: A Step towards Reconciling Dynamically and Statically Typed
-  OO Languages`_ at DLS 2007.  `Pdf of the presentation`__.
-
-* Talks at `Bern (Switzerland) 2007`_.
-
-* `PyCon UK 2007`_.
-
-* A presentation in Dresden_ by Maciej Fijalkowski.
-
-* Multiple talks at `EuroPython 2007`_.
-
-* A presentation at `Bad Honnef 2007`_ by C.F. Bolz about the Prolog
-  interpreter.
-
-* A `Dzug talk`_ by Holger Krekel.
-
-* Multiple talks at `PyCon 2007`_.
-
-* A talk at `PyCon - Uno 2007`_.
-
-* `RuPy 2007`_.
-
-* `Warsaw 2007`_.
-
-.. __: http://codespeak.net/svn/pypy/extradoc/talk/roadshow-ibm/
-.. __: http://codespeak.net/svn/pypy/extradoc/talk/roadshow-google/Pypy_architecture.pdf
-.. __: http://codespeak.net/svn/pypy/extradoc/talk/dls2007/rpython-talk.pdf
-
-
-Talks in 2006
-+++++++++++++
-
-* `Warsaw 2006`_.
-
-* `Tokyo 2006`_.
-
-* `PyPy's VM Approach`_ talk, given by Armin Rigo at the Dynamic Languages
-  Symposium at OOPSLA'06 (Portland OR), and by Samuele Pedroni at Intel
-  Hillsboro (OR)  (October). The talk presents the paper 
-  `PyPy's approach to virtual machine construction`_ accepted for 
-  the symposium.
-
-* `PyPy Status`_ talk, given by Samuele Pedroni at the Vancouver
-  Python Workshop 2006 (August). 
-
-* `Trouble in Paradise`_: the Open Source Project PyPy, 
-  EU-funding and Agile Practices talk, by Bea During at
-  Agile 2006 (experience report).
-
-*  `Sprint Driven Development`_, Agile Methodologies in a
-   Distributed Open Source Project (PyPy) talk, by Bea During
-   at XP 2006 (experience report).
-      
-* `Kill -1`_: process refactoring in the PyPy project talk, by Bea During
-  at the Agile track/Europython 2006.
-
-* `What can PyPy do for you`_, by Armin Rigo and Carl Friedrich Bolz given at
-  EuroPython 2006. The talk describes practical usecases of PyPy.
-
-* `PyPy 3000`_, a purely implementation-centered lightning talk at EuroPython
-  2006, given by Armin Rigo and Holger Krekel.
-
-* `PyPy introduction at EuroPython 2006`_, given by Michael Hudson, also
-  stating the status of the project.
-
-* Very similar to the EuroPython intro talk (but somewhat older) is the
-  `PyPy intro`_ talk, given by Michael Hudson at ACCU 2006 (April) 
-
-* `PyPy development method`_ talk, given by Bea During and
-  Holger Krekel at Pycon2006 
-
-Talks in 2005
-+++++++++++++
-
-
-* `PyPy - the new Python implementation on the block`_, 
-  given by Carl Friedrich Bolz and Holger Krekel at the 
-  22nd Chaos Communication Conference in Berlin, Dec. 2005. 
-  
-* `Open Source, EU-Funding and Agile Methods`_, given by Holger Krekel
-  and Bea During at the 22nd Chaos Communication Conference in Berlin, Dec. 2005
-
-* `Sprinting the PyPy way`_, an overview of our sprint methodology, given by
-  Bea During at EuroPython 2005. (More PyPy talks were given there, but are
-  not covered in detail here.)
-
-* `PyCon 2005`_ animated slides, mostly reporting on the translator status.
-
-* `py lib slides`_ from the py lib talk at PyCon 2005 
-  (py is used as a support/testing library for PyPy). 
-
-Talks in 2004
-+++++++++++++
-
-* `EU funding for FOSS`_ talk on Chaos Communication
-  Conference in Berlin, Dec 2004. 
-
-Talks in 2003
-+++++++++++++
-
-* oscon2003-paper_ an early paper presented at Oscon 2003 describing 
-  what the PyPy project is about and why you should care. 
-
-* `Architecture introduction slides`_ a mostly up-to-date
-  introduction for the Amsterdam PyPy-Sprint Dec 2003. 
-
-.. _`PyCon 2010`: http://morepypy.blogspot.com/2010/02/pycon-2010-report.html
-.. _`RuPy 2009`: http://morepypy.blogspot.com/2009/11/pypy-on-rupy-2009.html
-.. _`PyPy 3000`: http://codespeak.net/pypy/extradoc/talk/ep2006/pypy3000.txt
-.. _`What can PyPy do for you`: http://codespeak.net/pypy/extradoc/talk/ep2006/usecases-slides.html
-.. _`PyPy introduction at EuroPython 2006`: http://codespeak.net/pypy/extradoc/talk/ep2006/intro.pdf
-.. _`PyPy - the new Python implementation on the block`: http://codespeak.net/pypy/extradoc/talk/22c3/hpk-tech.html
-.. _`PyPy development method`: http://codespeak.net/pypy/extradoc/talk/pycon2006/method_talk.html
-.. _`PyPy intro`: http://codespeak.net/pypy/extradoc/talk/accu2006/accu-2006.pdf 
-.. _oscon2003-paper: http://codespeak.net/pypy/extradoc/talk/oscon2003-paper.html
-.. _`Architecture introduction slides`: http://codespeak.net/pypy/extradoc/talk/amsterdam-sprint-intro.pdf
-.. _`EU funding for FOSS`: http://codespeak.net/pypy/extradoc/talk/2004-21C3-pypy-EU-hpk.pdf
-.. _`py lib slides`: http://codespeak.net/pypy/extradoc/talk/2005-pycon-py.pdf
-.. _`PyCon 2005`: http://codespeak.net/pypy/extradoc/talk/pypy-talk-pycon2005/README.html
-.. _`Trouble in Paradise`: http://codespeak.net/pypy/extradoc/talk/agile2006/during-oss-sprints_talk.pdf
-.. _`Sprint Driven Development`: http://codespeak.net/pypy/extradoc/talk/xp2006/during-xp2006-sprints.pdf
-.. _`Kill -1`: http://codespeak.net/pypy/extradoc/talk/ep2006/kill_1_agiletalk.pdf
-.. _`Open Source, EU-Funding and Agile Methods`: http://codespeak.net/pypy/extradoc/talk/22c3/agility.pdf
-.. _`PyPy Status`: http://codespeak.net/pypy/extradoc/talk/vancouver/talk.html
-.. _`Sprinting the PyPy way`: http://codespeak.net/svn/pypy/extradoc/talk/ep2005/pypy_sprinttalk_ep2005bd.pdf
-.. _`PyPy's VM Approach`: http://codespeak.net/pypy/extradoc/talk/dls2006/talk.html
-.. _`PyPy's approach to virtual machine construction`: http://codespeak.net/svn/pypy/extradoc/talk/dls2006/pypy-vm-construction.pdf
-.. _`EuroPython talks 2009`: http://codespeak.net/svn/pypy/extradoc/talk/ep2009/
-.. _`PyCon talks 2009`: http://codespeak.net/svn/pypy/extradoc/talk/pycon2009/
-.. _`Wroclaw (Poland) presentation`: http://codespeak.net/svn/pypy/extradoc/talk/wroclaw2009/talk.pdf
-.. _`PyPy talk at OpenBossa 09`: http://morepypy.blogspot.com/2009/03/pypy-talk-at-openbossa-09.html
-.. _`at SFI 08`: http://codespeak.net/svn/pypy/extradoc/talk/sfi2008/
-.. _`at PyCon Poland 08`: http://codespeak.net/svn/pypy/extradoc/talk/pyconpl-2008/talk.pdf
-.. _`The PyPy Project and You`: http://codespeak.net/svn/pypy/extradoc/talk/osdc2008/osdc08.pdf
-.. _`EuroPython talks 2008`: http://codespeak.net/svn/pypy/extradoc/talk/ep2008/
-.. _`Maemo summit`: http://morepypy.blogspot.com/2008/09/pypypython-at-maemo-summit.html
-.. _`PyCon UK 2008 - JIT`: http://codespeak.net/svn/pypy/extradoc/talk/pycon-uk-2008/jit/pypy-vm.pdf
-.. _`PyCon UK 2008 - Status`: http://codespeak.net/svn/pypy/extradoc/talk/pycon-uk-2008/status/status.pdf
-.. _`PyCon Italy 2008`: http://codespeak.net/svn/pypy/extradoc/talk/pycon-italy-2008/pypy-vm.pdf
-.. _`RuPy 2008`: http://codespeak.net/svn/pypy/extradoc/talk/rupy2008/
-.. _`RuPy 2007`: http://codespeak.net/svn/pypy/extradoc/talk/rupy2007/
-.. _`PyCon 2008`: http://codespeak.net/svn/pypy/extradoc/talk/pycon2008/
-.. _`ESUG 2007`: http://codespeak.net/svn/pypy/extradoc/talk/esug2007/
-.. _`Bern (Switzerland) 2007`: http://codespeak.net/svn/pypy/extradoc/talk/bern2007/
-.. _`PyCon UK 2007`: http://codespeak.net/svn/pypy/extradoc/talk/pyconuk07/
-.. _Dresden: http://codespeak.net/svn/pypy/extradoc/talk/dresden/
-.. _`EuroPython 2007`: http://codespeak.net/svn/pypy/extradoc/talk/ep2007/
-.. _`Bad Honnef 2007`: http://codespeak.net/svn/pypy/extradoc/talk/badhonnef2007/talk.pdf
-.. _`Dzug talk`: http://codespeak.net/svn/pypy/extradoc/talk/dzug2007/dzug2007.txt
-.. _`PyCon 2007`: http://codespeak.net/svn/pypy/extradoc/talk/pycon2007/
-.. _`PyCon - Uno 2007`: http://codespeak.net/svn/pypy/extradoc/talk/pycon-uno2007/pycon07.pdf
-.. _`Warsaw 2007`: http://codespeak.net/svn/pypy/extradoc/talk/warsaw2007/
-.. _`Warsaw 2006`: http://codespeak.net/svn/pypy/extradoc/talk/warsaw2006/
-.. _`Tokyo 2006`: http://codespeak.net/svn/pypy/extradoc/talk/tokyo/
-
-
-Related projects 
-----------------------------------
-
-* TraceMonkey_ is using a tracing JIT, similar to the tracing
-  JITs generated by our (in-progress) JIT generator.
-
-* Dynamo_ showcased `transparent dynamic optimization`_
-  generating an optimized version of a binary program at runtime. 
-
-* Tailoring Dynamo_ to interpreter implementations and challenges -
-  Gregory Sullivan et al.,
-  `Dynamic Native Optimization of Native Interpreters`_. IVME 03. 2003.
-
-* Stackless_ is a recursion-free version of Python.
-
-* Psyco_ is a just-in-time specializer for Python.
-
-* JikesRVM_ a research dynamic optimizing Java VM written in Java.
-
-* `Squeak`_ is a Smalltalk-80 implementation written in
-  Smalltalk, being used in `Croquet`_, an experimental 
-  distributed multi-user/multi-programmer virtual world. 
-
-* `LLVM`_ the low level virtual machine project. 
-
-* `CLR under the hood`_ (powerpoint, works with open office) gives 
-  a good introduction to the underlying models of Microsoft's Common 
-  Language Runtime, the Intermediate Language, JIT and GC issues. 
-  
-* spyweb translates Python programs to Scheme. (site unavailable)
-
-* Jython_ is a Python implementation in Java.
-
-* IronPython_ a new Python implementation compiling Python into 
-  Microsoft's Common Language Runtime (CLR) Intermediate Language (IL).
-
-* Tunes_ is not entirely unrelated.  The web site changed a lot, but a
-  snapshot of the `old Tunes Wiki`_ is available on codespeak; browsing
-  through it is a lot of fun.
-
-.. _TraceMonkey: https://wiki.mozilla.org/JavaScript:TraceMonkey
-.. _`CLR under the hood`: http://download.microsoft.com/download/2/4/d/24dfac0e-fec7-4252-91b9-fb2310603f14/CLRUnderTheHood.BradA.ppt
-.. _Stackless: http://stackless.com 
-.. _Psyco: http://psyco.sourceforge.net
-.. _Jython: http://www.jython.org
-.. _`Squeak`: http://www.squeak.org/
-.. _`Croquet`: http://www.opencroquet.org/
-.. _`transparent dynamic optimization`: http://www.hpl.hp.com/techreports/1999/HPL-1999-77.pdf
-.. _Dynamo: http://www.hpl.hp.com/techreports/1999/HPL-1999-78.pdf
-.. _testdesign: coding-guide.html#test-design
-.. _feasible: http://codespeak.net/pipermail/pypy-dev/2004q2/001289.html
-.. _rock: http://codespeak.net/pipermail/pypy-dev/2004q1/001255.html
-.. _LLVM: http://llvm.org/
-.. _IronPython: http://www.codeplex.com/Wiki/View.aspx?ProjectName=IronPython
-.. _`Dynamic Native Optimization of Native Interpreters`: http://www.ai.mit.edu/~gregs/dynamorio.html
-.. _JikesRVM: http://jikesrvm.sf.net
-.. _Tunes: http://tunes.org
-.. _`old Tunes Wiki`: http://codespeak.net/cliki.tunes.org/

diff --git a/pypy/doc/discussion/cli-optimizations.txt b/pypy/doc/discussion/cli-optimizations.txt
deleted file mode 100644
--- a/pypy/doc/discussion/cli-optimizations.txt
+++ /dev/null
@@ -1,233 +0,0 @@
-Possible optimizations for the CLI backend
-==========================================
-
-Stack push/pop optimization
----------------------------
-
-The CLI's VM is a stack-based machine: this fact doesn't play nicely
-with the SSI form in which the flowgraphs are generated. At the moment
-gencli does a literal translation of the SSI statements, allocating a
-new local variable for each variable of the flowgraph.
-
-For example, consider the following RPython code and the corresponding
-flowgraph::
-
-  def bar(x, y):
-      foo(x+y, x-y)
-
-
-  inputargs: x_0 y_0
-  v0 = int_add(x_0, y_0)
-  v1 = int_sub(x_0, y_0)
-  v2 = directcall((sm foo), v0, v1)
-
-This is the IL code generated by the CLI backend::
-
-  .locals init (int32 v0, int32 v1, int32 v2)
-    
-  block0:
-    ldarg 'x_0'
-    ldarg 'y_0'
-    add 
-    stloc 'v0'
-    ldarg 'x_0'
-    ldarg 'y_0'
-    sub 
-    stloc 'v1'
-    ldloc 'v0'
-    ldloc 'v1'
-    call int32 foo(int32, int32)
-    stloc 'v2'
-
-As you can see, the results of 'add' and 'sub' are stored in v0 and
-v1, respectively, then v0 and v1 are reloaded onto the stack. These
-stores and loads are redundant, since the code would work just as well
-without them::
-
-  .locals init (int32 v2)
-    
-  block0:
-    ldarg 'x_0'
-    ldarg 'y_0'
-    add 
-    ldarg 'x_0'
-    ldarg 'y_0'
-    sub 
-    call int32 foo(int32, int32)
-    stloc 'v2'
-
-I've checked the native code generated by the Mono JIT on x86 and
-seen that it does not optimize these away. I haven't checked the
-native code generated by the Microsoft CLR yet.
-
-Thus, we might consider optimizing it manually; this should not be too
-difficult, but it is not trivial because we have to make sure that the
-dropped locals are used only once.
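A minimal sketch of such a manual optimization, assuming instructions are (opcode, argument) pairs; the names here are invented, not gencli code. It only handles the easy case of an immediately adjacent stloc/ldloc pair, while the non-adjacent case shown above would additionally require checking that the loads come back in stack order:

```python
# Peephole sketch: drop an adjacent `stloc v` / `ldloc v` pair when v
# is used nowhere else -- the value is then simply left on the stack.

def peephole(instrs):
    # count every mention of each local first
    uses = {}
    for op, arg in instrs:
        if op in ("stloc", "ldloc"):
            uses[arg] = uses.get(arg, 0) + 1
    out = []
    i = 0
    while i < len(instrs):
        cur = instrs[i]
        nxt = instrs[i + 1] if i + 1 < len(instrs) else (None, None)
        if (cur[0] == "stloc" and nxt[0] == "ldloc"
                and cur[1] == nxt[1] and uses[cur[1]] == 2):
            i += 2                      # value stays on the stack
            continue
        out.append(cur)
        i += 1
    return out
```

A local that is also mentioned anywhere else is left untouched, which is exactly the "used only once" safety condition above.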
-
-
-Mapping RPython exceptions to native CLI exceptions
----------------------------------------------------
-
-Both RPython and the CLI have their own set of exception classes: some
-of these are pretty similar; e.g., we have OverflowError,
-ZeroDivisionError and IndexError on the RPython side and
-OverflowException, DivideByZeroException and IndexOutOfRangeException
-on the CLI side.
-
-The first attempt was to map RPython classes to their corresponding
-CLI ones: this worked for simple cases, but it would have triggered
-subtle bugs in more complex ones, because the two exception
-hierarchies don't completely overlap.
-
-For now I've chosen to build an RPython exception hierarchy
-completely independent of the CLI one, but this means that we can't
-rely on exceptions raised by standard operations. The currently
-implemented solution is to do the exception translation on the fly; for
-example, 'int_add_ovf' is translated into the following IL code::
-
-  .try 
-  { 
-      ldarg 'x_0'
-      ldarg 'y_0'
-      add.ovf 
-      stloc 'v1'
-      leave __check_block_2 
-  } 
-  catch [mscorlib]System.OverflowException 
-  { 
-      newobj instance void class exceptions.OverflowError::.ctor() 
-      dup 
-      ldsfld class Object_meta pypy.runtime.Constants::exceptions_OverflowError_meta 
-      stfld class Object_meta Object::meta 
-      throw 
-  } 
-
-I.e., it catches the builtin OverflowException and raises an RPython
-OverflowError.
-
-I haven't measured timings yet, but I guess that this machinery incurs
-some performance penalty even in the non-overflow case; a
-possible optimization is to do the on-the-fly translation only when it
-is strictly necessary, i.e. only when the except clause catches an
-exception class whose subclass hierarchy is compatible with the
-builtin one. As an example, consider the following RPython code::
-
-  try:
-    return mylist[0]
-  except IndexError:
-    return -1
-
-Given that IndexError has no subclasses, we can map it to
-IndexOutOfRangeException and directly catch this one::
-
-  try
-  {
-    ldloc 'mylist'
-    ldc.i4 0
-    call int32 getitem(MyListType, int32)
-    ...
-  }
-  catch [mscorlib]System.IndexOutOfRangeException
-  {
-    // return -1
-    ...
-  }
-
-By contrast, we can't do so if the except clause catches classes that
-don't directly map to any builtin class, such as LookupError::
-
-  try:
-    return mylist[0]
-  except LookupError:
-    return -1
-
-This has to be translated in the old way::
-
-  .try 
-  { 
-    ldloc 'mylist'
-    ldc.i4 0
-
-    .try 
-    {
-        call int32 getitem(MyListType, int32)
-    }
-    catch [mscorlib]System.IndexOutOfRangeException
-    { 
-        // translate IndexOutOfRangeException into IndexError
-        newobj instance void class exceptions.IndexError::.ctor() 
-        dup 
-        ldsfld class Object_meta pypy.runtime.Constants::exceptions_IndexError_meta 
-        stfld class Object_meta Object::meta 
-        throw 
-    }
-    ...
-  }
-  .catch exceptions.LookupError
-  {
-    // return -1
-    ...
-  }
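The decision rule above can be sketched in a few lines; the table and function names are invented for illustration, not backend code, and the builtin names are the .NET exception classes already listed at the top of this section:

```python
# Sketch: a direct mapping to a native CLI exception is safe only when
# the class caught by the except clause has a builtin counterpart AND
# no RPython subclasses that would need to be caught along with it.

RPYTHON_TO_CLI = {
    "IndexError": "System.IndexOutOfRangeException",
    "OverflowError": "System.OverflowException",
    "ZeroDivisionError": "System.DivideByZeroException",
}

def catch_strategy(exc_class, subclasses_of):
    """subclasses_of maps a class name to its RPython subclasses."""
    if exc_class in RPYTHON_TO_CLI and not subclasses_of.get(exc_class):
        return ("direct", RPYTHON_TO_CLI[exc_class])
    return ("translate", exc_class)     # fall back to the old way
```

So a catch of IndexError (no subclasses) can be compiled as a direct native catch, while LookupError must keep the translating wrapper.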
-
-
-Specializing methods of List
-----------------------------
-
-Most methods of RPython lists are implemented by ll_* helpers placed
-in rpython/rlist.py. For some of those we already have a direct
-counterpart implemented in .NET List<>; we could use the oopspec
-attribute to do an on-the-fly replacement of these low level helpers
-with their builtin counterparts. As an example, the 'append' method is
-already mapped to pypylib.List.append. Thanks to Armin Rigo for the
-idea of using oopspec.
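The oopspec mechanism can be illustrated with a small sketch. The helper below is modeled loosely on the style of rpython/rlist.py but is simplified and not the actual code:

```python
# Illustrative sketch (not the real rpython/rlist.py code): a low-level list
# helper is tagged with an ``oopspec`` string, letting a backend substitute
# the builtin method (e.g. pypylib.List.append) for the generic helper.
class FakeLowLevelList:
    def __init__(self):
        self.items = []

def ll_append(lst, item):
    # generic implementation, used when no builtin replacement is available
    lst.items = lst.items + [item]
ll_append.oopspec = 'list.append(lst, item)'

lst = FakeLowLevelList()
ll_append(lst, 42)
```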
-
-
-Doing some caching on Dict
---------------------------
-
-The current implementations of ll_dict_getitem and ll_dict_get in
-ootypesystem.rdict do two consecutive lookups (calling ll_contains and
-ll_get) on the same key. We might cache the result of
-pypylib.Dict.ll_contains so that the subsequent ll_get doesn't need a
-lookup. We need some profiling before choosing the best approach;
-alternatively, we could directly refactor ootypesystem.rdict to do a
-single lookup.
-
-XXX
-I tried it on revision 32917 and performance is slower! I don't know
-why, but pypy.net pystone.py is slower by 17%, and pypy.net
-richards.py is slower by 71% (!!!). This needs to be investigated
-further.
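The single-lookup refactoring suggested above can be sketched in plain Python using a sentinel object; the real helpers live in ootypesystem.rdict, and this is only an illustration of the idea:

```python
# Sketch of a single-lookup dict access: one probe distinguishes "absent"
# from a stored None via a private sentinel, instead of a contains-check
# followed by a second lookup on the same key.
_MISSING = object()

def dict_get(d, key, default=None):
    value = d.get(key, _MISSING)  # single lookup
    return default if value is _MISSING else value
```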
-
-
-Optimize StaticMethod
----------------------
-
-::
-
-  2006-10-02, 13:41
-
-  <pedronis> antocuni: do you try to not wrap static methods that are just called and not passed around
-  <antocuni> no
-             I think I don't know how to detect them
-  <pedronis> antocuni: you should try to render them just as static methods not as instances when possible
-             you need to track what appears only in direct_calls vs other places
-
-
-Optimize Unicode
-----------------
-
-We should try to use native .NET unicode facilities instead of our
-own. These should save both time (especially startup time) and memory.
-
-On 2006-10-02 I got these benchmarks::
-
-  Pypy.NET             Startup time   Memory used
-  with unicodedata          ~12 sec     112508 Kb
-  without unicodedata        ~6 sec      79004 Kb
-
-The version without unicodedata is buggy, of course.
-
-Unfortunately it seems that .NET doesn't expose all the things we
-need, so we will still need to ship some data of our own. For example,
-there is no way to get the unicode name of a char.
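For comparison, this is the kind of query CPython's ``unicodedata`` module answers out of its bundled database, and for which .NET offers no equivalent:

```python
import unicodedata

# the unicode name of a char: exactly the query .NET does not expose
assert unicodedata.name('A') == 'LATIN CAPITAL LETTER A'
assert unicodedata.name('\u00e9') == 'LATIN SMALL LETTER E WITH ACUTE'
```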

diff --git a/pypy/doc/config/objspace.std.optimized_list_getitem.txt b/pypy/doc/config/objspace.std.optimized_list_getitem.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.optimized_list_getitem.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Optimized list[int] a bit.

diff --git a/pypy/doc/geninterp.txt b/pypy/doc/geninterp.txt
deleted file mode 100644
--- a/pypy/doc/geninterp.txt
+++ /dev/null
@@ -1,188 +0,0 @@
-The Interpreter-Level backend
------------------------------
-
-http://codespeak.net/pypy/trunk/pypy/translator/geninterplevel.py
-
-Motivation
-++++++++++
-
-PyPy often makes use of `application-level`_ helper methods.
-The idea of the 'geninterplevel' backend is to automatically transform
-such application level implementations to their equivalent representation
-at interpreter level.  Then, the RPython to C translation hopefully can
-produce more efficient code than always re-interpreting these methods.
-
-One property of this translation from application-level Python to
-plain Python is that the produced code does the same thing as the
-corresponding interpreted code, but no interpreter is needed any
-longer to execute it.
-
-.. _`application-level`: coding-guide.html#app-preferable
-
-Bootstrap issue
-+++++++++++++++
-
-One issue we had so far was bootstrapping: some pieces of the
-interpreter (e.g. exceptions) were written in geninterped code.
-It is unclear how much of it is left, though.
-
-That bootstrap issue is (was?) solved by invoking a new bytecode interpreter
-which runs on FlowObjspace. FlowObjspace is complete without
-complicated initialization. It is able to do abstract interpretation
-of any RPythonic code, without actually implementing anything. It just
-records all the operations the bytecode interpreter would have done by
-building flowgraphs for all the code. What the Python backend does is
-just to produce correct Python code from these flowgraphs and return
-it as source code. In the produced code Python operations recorded in
-the original flowgraphs are replaced by calls to the corresponding
-methods in the `object space`_ interface.
-
-.. _`object space`: objspace.html
-
-Example
-+++++++
-
-.. _implementation: ../../pypy/translator/geninterplevel.py
-
-Let's try a little example. You might want to look at the flowgraph that it
-produces. Here, we directly run the Python translation and look at the
-generated source. See also the header section of the implementation_ for the
-interface::
-
-    >>> from pypy.translator.geninterplevel import translate_as_module
-    >>> entrypoint, source = translate_as_module("""
-    ...
-    ... def g(n):
-    ...     i = 0
-    ...     while n:
-    ...         i = i + n
-    ...         n = n - 1
-    ...     return i
-    ...
-    ... """)
-
-This call has invoked a PyPy bytecode interpreter running on FlowObjspace,
-recorded every possible codepath into a flowgraph, and then rendered the
-following source code:: 
-
-    #!/bin/env python
-    # -*- coding: LATIN-1 -*-
-
-    def initapp2interpexec(space):
-      """NOT_RPYTHON"""
-
-      def g(space, w_n_1):
-        goto = 3 # startblock
-        while True:
-
-            if goto == 1:
-                v0 = space.is_true(w_n)
-                if v0 == True:
-                    goto = 2
-                else:
-                    goto = 4
-
-            if goto == 2:
-                w_1 = space.add(w_0, w_n)
-                w_2 = space.sub(w_n, gi_1)
-                w_n, w_0 = w_2, w_1
-                goto = 1
-                continue
-
-            if goto == 3:
-                w_n, w_0 = w_n_1, gi_0
-                goto = 1
-                continue
-
-            if goto == 4:
-                return w_0
-
-      fastf_g = g
-
-      g3dict = space.newdict()
-      gs___name__ = space.new_interned_str('__name__')
-      gs_app2interpexec = space.new_interned_str('app2interpexec')
-      space.setitem(g3dict, gs___name__, gs_app2interpexec)
-      gs_g = space.new_interned_str('g')
-      from pypy.interpreter import gateway
-      gfunc_g = space.wrap(gateway.interp2app(fastf_g, unwrap_spec=[gateway.ObjSpace, gateway.W_Root]))
-      space.setitem(g3dict, gs_g, gfunc_g)
-      gi_1 = space.wrap(1)
-      gi_0 = space.wrap(0)
-      return g3dict
-
-You see that actually a single function is produced:
-``initapp2interpexec``. This is the function that you will call with a
-space as argument. It defines a few functions and then does a number
-of initialization steps, builds the global objects the function needs,
-and produces the PyPy function object ``gfunc_g``.
-
-The return value is ``g3dict``, which contains a module name and the
-function we asked for.
-
-Let's have a look at the body of this code: The definition of ``g`` is
-used as ``fastf_g`` in the ``gateway.interp2app`` call, which constructs
-a PyPy function object that takes care of argument unboxing (based on
-the ``unwrap_spec``) and of invoking the original ``g``.
-
-We look at the definition of ``g`` itself which does the actual
-computation. Comparing to the flowgraph, you see a code block for
-every block in the graph.  Since Python has no goto statement, the
-jumps between the blocks are implemented by a loop that switches over
-a ``goto`` variable.
-
-::
-
-    .       if goto == 1:
-                v0 = space.is_true(w_n)
-                if v0 == True:
-                    goto = 2
-                else:
-                    goto = 4
-
-This is the implementation of the "``while n:``". There is no implicit state,
-everything is passed over to the next block by initializing its
-input variables. This directly resembles the nature of flowgraphs.
-They are completely stateless.
-
-
-::
-
-    .       if goto == 2:
-                w_1 = space.add(w_0, w_n)
-                w_2 = space.sub(w_n, gi_1)
-                w_n, w_0 = w_2, w_1
-                goto = 1
-                continue
-
-The "``i = i + n``" and "``n = n - 1``" instructions.
-You see how every instruction produces a new variable.
-The state is again shuffled around by assigning to the
-input variables ``w_n`` and ``w_0`` of the next target, block 1.
-
-Note that it is possible to rewrite this by re-using variables,
-trying to produce nested blocks instead of the goto construction
-and much more. The source would look much more like what we
-used to write by hand. For the C backend, this doesn't make much
-sense since the compiler optimizes it for us. For the Python interpreter it could
-give a bit more speed. But this is a temporary format and will
-get optimized anyway when we produce the executable.
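Dropping the wrapping and the object space entirely, the control-flow skeleton of the generated ``g`` is ordinary Python. This hand-simplified version keeps only the goto-loop pattern discussed above:

```python
# Hand-unwrapped version of the generated ``g``: the same goto-loop skeleton,
# with space operations replaced by plain integer arithmetic.
def g(n):
    goto = 3  # startblock
    while True:
        if goto == 1:
            if n:
                goto = 2
            else:
                goto = 4
        if goto == 2:
            # the "i = i + n" and "n = n - 1" instructions of block 2
            i, n = i + n, n - 1
            goto = 1
            continue
        if goto == 3:
            i = 0
            goto = 1
            continue
        if goto == 4:
            return i
```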
-
-Interplevel Snippets in the Sources
-+++++++++++++++++++++++++++++++++++
-
-Code written in application space can consist of complete files
-to be translated, or it can be tiny snippets scattered all over a
-source file, similar to our example from above.
-
-Translation of these snippets is done automatically and cached
-in pypy/_cache with the modulename and the md5 checksum appended
-to it as file name. If you have run your copy of pypy already,
-this folder should exist and have some generated files in it.
-These files consist of the generated code plus a little code
-that auto-destructs the cached file (plus .pyc/.pyo versions)
-if it is executed as __main__. On Windows this means you can wipe
-a cached code snippet clear by double-clicking it. Note also that
-the auto-generated __init__.py file wipes the whole directory
-when executed.

diff --git a/pypy/doc/garbage_collection.txt b/pypy/doc/garbage_collection.txt
deleted file mode 100644
--- a/pypy/doc/garbage_collection.txt
+++ /dev/null
@@ -1,127 +0,0 @@
-==========================
-Garbage Collection in PyPy
-==========================
-
-.. contents::
-.. sectnum::
-
-Introduction
-============
-
-**Warning**: The overview and description of our garbage collection
-strategy and framework is not here but in the `EU-report on this
-topic`_.  The present document describes the specific garbage collectors
-that we wrote in our framework.
-
-.. _`EU-report on this topic`: http://codespeak.net/pypy/extradoc/eu-report/D07.1_Massive_Parallelism_and_Translation_Aspects-2007-02-28.pdf
-
-
-Garbage collectors currently written for the GC framework
-=========================================================
-
-(Very rough sketch only for now.)
-
-Reminder: to select which GC you want to include in a translated
-RPython program, use the ``--gc=NAME`` option of ``translate.py``.
-For more details, see the `overview of command line options for
-translation`_.
-
-.. _`overview of command line options for translation`: config/commandline.html#translation
-
-Mark and Sweep
---------------
-
-Classical Mark and Sweep collector.  Also contains a lot of experimental
-and half-unmaintained features.  See `rpython/memory/gc/marksweep.py`_.
-
-Semispace copying collector
----------------------------
-
-Two arenas of equal size, with only one arena in use and getting filled
-with new objects.  When the arena is full, the live objects are copied
-into the other arena using Cheney's algorithm.  The old arena is then
-cleared.  See `rpython/memory/gc/semispace.py`_.
-
-On Unix the clearing is done by reading ``/dev/zero`` into the arena,
-which is extremely memory efficient at least on Linux: it lets the
-kernel free the RAM that the old arena used and replace it all with
-allocated-on-demand memory.
-
-The size of each semispace starts at 8MB but grows as needed when the
-amount of objects alive grows.
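The allocate-on-demand zeroing effect described above can be demonstrated with anonymous ``mmap``, which also hands back pages that are zero-filled on demand by the kernel. This is only an illustration of the memory behaviour, not the actual GC arena code:

```python
import mmap

# anonymous mmap gives zero-fill-on-demand pages: the kernel commits RAM
# only as pages are touched, which is the effect the semispace GC relies on
ARENA_SIZE = 8 * 1024 * 1024  # 8MB, matching the initial semispace size
arena = mmap.mmap(-1, ARENA_SIZE)

assert len(arena) == ARENA_SIZE
assert arena[0:4] == b"\x00\x00\x00\x00"  # untouched pages read as zeros
```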
-
-Generational GC
----------------
-
-This is a two-generations GC.  See `rpython/memory/gc/generation.py`_.
-
-It is implemented as a subclass of the Semispace copying collector.  It
-adds a nursery, which is a chunk of the current semispace.  Its size is
-computed to be half the size of the CPU Level 2 cache.  Allocations fill
-the nursery, and when it is full, it is collected and the objects still
-alive are moved to the rest of the current semispace.
-
-The idea is that it is very common for objects to die soon after they
-are created.  Generational GCs help a lot in this case, particularly if
-the amount of live objects really manipulated by the program fits in the
-Level 2 cache.  Moreover, the semispaces fill up much more slowly,
-making full collections less frequent.
-
-Hybrid GC
----------
-
-This is a three-generations GC.
-
-It is implemented as a subclass of the Generational GC.  The Hybrid GC
-can handle both objects that are inside and objects that are outside the
-semispaces ("external").  The external objects are not moving and
-collected in a mark-and-sweep fashion.  Large objects are allocated as
-external objects to avoid costly moves.  Small objects that survive for
-a long enough time (several semispace collections) are also made
-external so that they stop moving.
-
-This is coupled with a segregation of the objects in three generations.
-Each generation is collected much less often than the previous one.  The
-division of the generations is slightly more complicated than just
-nursery / semispace / external; see the diagram at the start of the
-source code, in `rpython/memory/gc/hybrid.py`_.
-
-Mark & Compact GC
------------------
-
-Inspired, at least partially, by Squeak's garbage collector, this is a
-single-arena GC in which collection compacts the objects in-place.  The
-main point of this GC is to save as much memory as possible (to be not
-worse than the Semispace), but without the peaks of double memory usage
-during collection.
-
-Unlike the Semispace GC, collection requires a number of passes over the
-data.  This makes collection quite a bit slower.  A future improvement
-could be to add a nursery to Mark & Compact in order to mitigate this
-issue.
-
-During a collection, we reuse the space in-place if it is still large
-enough.  If not, we need to allocate a new, larger space, and move the
-objects there; however, this move is done chunk by chunk, and chunks are
-cleared (i.e. returned to the OS) as soon as they have been moved away.
-This means that (from the point of view of the OS) a collection will
-never cause an important temporary growth of total memory usage.
-
-More precisely, a collection is triggered when the space contains more
-than N*M bytes, where N is the number of bytes alive after the previous
-collection and M is a constant factor, by default 1.5.  This guarantees
-that the total memory usage of the program never exceeds 1.5 times the
-total size of its live objects.
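The trigger condition above amounts to a one-line check. A small illustrative sketch (not the actual GC code):

```python
def should_collect(used_bytes, alive_after_last_collection, factor=1.5):
    # collect when the space holds more than N * M bytes, where N is the
    # number of bytes alive after the previous collection and M = 1.5
    return used_bytes > alive_after_last_collection * factor

# with 100 units alive after the last collection, collection triggers
# only once usage exceeds 150 units
assert should_collect(160, 100)
assert not should_collect(150, 100)
```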
-
-The objects themselves are quite compact: they are allocated next to
-each other in the heap, separated by a GC header of only one word (4
-bytes on 32-bit platforms) and possibly followed by up to 3 bytes of
-padding for non-word-sized objects (e.g. strings).  There is a small
-extra memory usage during collection: an array containing 2 bytes per
-surviving object is needed to make a backup of (half of) the surviving
-objects' header, in order to let the collector store temporary relation
-information in the regular headers.
-
-More details are available as comments at the start of the source
-in `rpython/memory/gc/markcompact.py`_.
-
-.. include:: _ref.txt

diff --git a/pypy/doc/extending.txt b/pypy/doc/extending.txt
deleted file mode 100644
--- a/pypy/doc/extending.txt
+++ /dev/null
@@ -1,103 +0,0 @@
-
-Writing extension modules for pypy
-===================================
-
-This document tries to explain how to interface the PyPy python interpreter
-with any external library.
-
-Note: We try to describe the state of the art, but it might fall out
-of date, as this is an area in which PyPy is changing rapidly.
-
-Possibilities
-=============
-
-Right now, there are three ways of providing third-party modules
-for the PyPy python interpreter (in order of usefulness):
-
-* Write them in pure python and use ctypes, see ctypes_
-  section
-
-* Write them in pure python and use the direct low-level libffi
-  bindings; see the \_rawffi_ module description.
-
-* Write them in RPython as mixedmodule_, using *rffi* as bindings.
-
-.. _ctypes: #CTypes
-.. _\_rawffi: #LibFFI
-.. _mixedmodule: #Mixed Modules
-
-CTypes
-======
-
-The ctypes module in PyPy is ready to use.
-Its goal is to be as compatible as possible with the
-`CPython ctypes`_ version. Right now it's able to support large examples,
-such as pyglet. PyPy is planning to have a 100% compatible ctypes
-implementation, without the CPython C-level API bindings (so it is very
-unlikely that direct object-manipulation trickery through this API will work).
-
-We also provide a `ctypes-configure`_ tool for overcoming platform
-dependencies without relying on the ctypes codegen. It works by
-querying gcc about platform-dependent details (compiling small snippets
-of C code and running them), so it will benefit non-PyPy ctypes-based
-modules as well.
-
-.. _`ctypes-configure`: http://codespeak.net/~fijal/configure.html
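A minimal ctypes usage example of the kind that works unchanged on PyPy, assuming a Unix system where ``find_library`` can locate the C library:

```python
import ctypes
import ctypes.util

# load the platform C library (assumes find_library can locate it, as on Linux)
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]

assert libc.abs(-5) == 5
```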
-
-Pros
-----
-
-Stable, CPython-compatible API
-
-Cons
-----
-
-Only pure-Python code (slow); problems with platform dependency
-(although we partially solve those). The PyPy implementation is
-currently very slow.
-
-.. _`CPython ctypes`: http://python.net/crew/theller/ctypes/
-
-LibFFI
-======
-
-Mostly in order to be able to write a ctypes module, we developed very
-low-level libffi bindings. (libffi is a C-level library for dynamic
-calling, which is used by CPython's ctypes.) This library provides a
-stable and usable API, although a very low-level one. It does not
-contain any magic.
-
-Pros
-----
-
-Works. Combines disadvantages of using ctypes with disadvantages of
-using mixed modules. Probably more suitable for delicate code
-where the ctypes magic gets in the way.
-
-Cons
-----
-
-Slow. CPython-incompatible API, very rough and low-level
-
-Mixed Modules
-=============
-
-This is the most advanced and powerful way of writing extension modules.
-It has some serious disadvantages:
-
-* a mixed module needs to be written in RPython, which is far more
-  complicated than Python (XXX link)
-
-* due to lack of separate compilation (as of April 2008), each
-  compilation check requires recompiling the whole PyPy python
-  interpreter, which takes 0.5-1h. We plan to solve this at some point
-  in the near future.
-
-* although RPython is a garbage-collected language, the border between
-  C and RPython needs to be managed by hand (each object that goes into
-  the C level must be explicitly freed). XXX we try to solve this
-
-Some documentation is available `here`_.
-
-.. _`here`: rffi.html
-
-XXX we should provide detailed docs about lltype and rffi, especially if we
-    want people to follow that way.

diff --git a/pypy/doc/config/objspace.usemodules._testing.txt b/pypy/doc/config/objspace.usemodules._testing.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._testing.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Use the '_testing' module. This module exists only for PyPy's own testing purposes.
- 
-This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/translation.gc.txt b/pypy/doc/config/translation.gc.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.gc.txt
+++ /dev/null
@@ -1,13 +0,0 @@
-Choose the Garbage Collector used by the translated program:
-
-  - "ref": reference counting. Takes very long to translate and the result is
-    slow.
-
-  - "marksweep": naive mark & sweep.
-
-  - "semispace": a copying semi-space GC.
-
-  - "generation": a generational GC using the semi-space GC for the
-    older generation.
-
-  - "boehm": use the Boehm conservative GC.

diff --git a/pypy/doc/config/translation.instrument.txt b/pypy/doc/config/translation.instrument.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.instrument.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Internal option.
-
-.. internal

diff --git a/pypy/doc/config/objspace.usemodules.imp.txt b/pypy/doc/config/objspace.usemodules.imp.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.imp.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'imp' module.
-This module is included by default.

diff --git a/pypy/doc/contributor.txt b/pypy/doc/contributor.txt
deleted file mode 100644
--- a/pypy/doc/contributor.txt
+++ /dev/null
@@ -1,105 +0,0 @@
-
-Contributors to PyPy
-====================
-
-Here is a list of developers who have committed to the PyPy source
-code base, ordered by number of commits (which is certainly not a very
-appropriate measure but it's something)::
-
-
-    Armin Rigo
-    Maciej Fijalkowski
-    Carl Friedrich Bolz
-    Samuele Pedroni
-    Antonio Cuni
-    Michael Hudson
-    Christian Tismer
-    Holger Krekel
-    Eric van Riet Paap
-    Richard Emslie
-    Anders Chrigstrom
-    Amaury Forgeot d Arc
-    Aurelien Campeas
-    Anders Lehmann
-    Niklaus Haldimann
-    Seo Sanghyeon
-    Leonardo Santagada
-    Lawrence Oluyede
-    Jakub Gustak
-    Guido Wesdorp
-    Benjamin Peterson
-    Alexander Schremmer
-    Niko Matsakis
-    Ludovic Aubry
-    Alex Martelli
-    Toon Verwaest
-    Stephan Diehl
-    Adrien Di Mascio
-    Stefan Schwarzer
-    Tomek Meka
-    Patrick Maupin
-    Jacob Hallen
-    Laura Creighton
-    Bob Ippolito
-    Camillo Bruni
-    Simon Burton
-    Bruno Gola
-    Alexandre Fayolle
-    Marius Gedminas
-    Guido van Rossum
-    Valentino Volonghi
-    Adrian Kuhn
-    Paul deGrandis
-    Gerald Klix
-    Wanja Saatkamp
-    Anders Hammarquist
-    Oscar Nierstrasz
-    Eugene Oden
-    Lukas Renggli
-    Guenter Jantzen
-    Dinu Gherman
-    Bartosz Skowron
-    Georg Brandl
-    Ben Young
-    Jean-Paul Calderone
-    Nicolas Chauvat
-    Rocco Moretti
-    Michael Twomey
-    boria
-    Jared Grubb
-    Olivier Dormond
-    Stuart Williams
-    Jens-Uwe Mager
-    Justas Sadzevicius
-    Mikael Schönenberg
-    Brian Dorsey
-    Jonathan David Riehl
-    Beatrice During
-    Elmo Mäntynen
-    Andreas Friedge
-    Alex Gaynor
-    Anders Qvist
-    Alan McIntyre
-    Bert Freudenberg
-    Pieter Zieschang
-    Jacob Oscarson
-    Lutz Paelike
-    Michael Schneider
-    Artur Lisiecki
-    Lene Wagner
-    Christopher Armstrong
-    Jan de Mooij
-    Jacek Generowicz
-    Gasper Zejn
-    Stephan Busemann
-    Yusei Tahara
-    Godefroid Chappelle
-    Toby Watson
-    Andrew Thompson
-    Joshua Gilbert
-    Anders Sigfridsson
-    David Schneider
-    Michael Chermside
-    tav
-    Martin Blais
-    Victor Stinner

diff --git a/pypy/doc/config/translation.backendopt.profile_based_inline.txt b/pypy/doc/config/translation.backendopt.profile_based_inline.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.profile_based_inline.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-Inline flowgraphs only for call sites for which there was a minimum
-number of calls during an instrumented run of the program. Callee
-flowgraphs are considered candidates based on a weight heuristic, as
-for basic inlining (see :config:`translation.backendopt.inline`,
-:config:`translation.backendopt.profile_based_inline_threshold`).
-
-The option takes as value a string which is the arguments to pass to
-the program for the instrumented run.
-
-This optimization is not used by default.
\ No newline at end of file

diff --git a/pypy/doc/config/translation.txt b/pypy/doc/config/translation.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-..  intentionally empty

diff --git a/pypy/doc/config/translation.shared.txt b/pypy/doc/config/translation.shared.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.shared.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Build pypy as a shared library or a DLL, with a small executable to run it.
-This is necessary on Windows to expose the C API provided by the cpyext module.

diff --git a/pypy/doc/config/objspace.usemodules.pypyjit.txt b/pypy/doc/config/objspace.usemodules.pypyjit.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.pypyjit.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Use the 'pypyjit' module. 

diff --git a/pypy/doc/config/translation.thread.txt b/pypy/doc/config/translation.thread.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.thread.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Enable threading. The only target where this has a visible effect is PyPy
-(this then also enables the ``thread`` module).

diff --git a/pypy/doc/config/objspace.usemodules._multiprocessing.txt b/pypy/doc/config/objspace.usemodules._multiprocessing.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._multiprocessing.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the '_multiprocessing' module.
-Used by the 'multiprocessing' standard lib module. This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/translation.backendopt.inline_threshold.txt b/pypy/doc/config/translation.backendopt.inline_threshold.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.inline_threshold.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Weight threshold used to decide whether to inline flowgraphs.
-This is for basic inlining (:config:`translation.backendopt.inline`).

diff --git a/pypy/doc/coding-guide.txt b/pypy/doc/coding-guide.txt
deleted file mode 100644
--- a/pypy/doc/coding-guide.txt
+++ /dev/null
@@ -1,1088 +0,0 @@
-=====================================
-PyPy - Coding Guide
-=====================================
-
-.. contents::
-.. sectnum::
-
-
-This document describes coding requirements and conventions for
-working with the PyPy code base.  Please read it carefully and
-ask back any questions you might have. The document does not talk
-very much about coding style issues. We mostly follow `PEP 8`_ though.
-If in doubt, follow the style that is already present in the code base.
-
-.. _`PEP 8`: http://www.python.org/dev/peps/pep-0008/
-
-.. _`RPython`:
-
-Overview and motivation
-========================
-
-We are writing a Python interpreter in Python, using Python's well-known
-ability as a language to step back from the algorithmic problems. At first
-glance, one might think this achieves nothing but a better understanding of
-how the interpreter works.  This alone would make it worth doing, but we have much
-larger goals.
-
-
-CPython vs. PyPy
--------------------
-
-Compared to the CPython implementation, Python takes the role of the C
-code. We rewrite the CPython interpreter in Python itself.  We could
-also aim at writing a more flexible interpreter at C level but we
-want to use Python to give an alternative description of the interpreter.
-
-The clear advantage is that such a description is shorter and simpler to
-read, and many implementation details vanish. The drawback of this approach is
-that this interpreter will be unbearably slow as long as it is run on top
-of CPython.
-
-To get to a useful interpreter again, we need to translate our
-high-level description of Python to a lower level one.  One rather
-straightforward way is to do a whole-program analysis of the PyPy
-interpreter and create a C source, again. There are many other ways,
-but let's stick with this somewhat canonical approach.
-
-
-.. _`application-level`:
-.. _`interpreter-level`:
-
-Application-level and interpreter-level execution and objects
--------------------------------------------------------------
-
-Since Python is used for implementing all of our code base, there is a
-crucial distinction to be aware of: that between *interpreter-level* objects and 
-*application-level* objects.  The latter are the ones that you deal with
-when you write normal python programs.  Interpreter-level code, however,
-cannot invoke operations nor access attributes from application-level
-objects.  You will immediately recognize any interpreter level code in
-PyPy, because half the variable and object names start with a ``w_``, which
-indicates that they are `wrapped`_ application-level values. 
-
-Let's show the difference with a simple example.  To sum the contents of
-two variables ``a`` and ``b``, one would write the simple application-level
-``a+b`` -- in contrast, the equivalent interpreter-level code is
-``space.add(w_a, w_b)``, where ``space`` is an instance of an object space,
-and ``w_a`` and ``w_b`` are typical names for the wrapped versions of the
-two variables.
-
-It helps to remember how CPython deals with the same issue: interpreter
-level code, in CPython, is written in C and thus typical code for the
-addition is ``PyNumber_Add(p_a, p_b)`` where ``p_a`` and ``p_b`` are C
-variables of type ``PyObject*``. This is conceptually similar to how we write
-our interpreter-level code in Python.
-
-Moreover, in PyPy we have to make a sharp distinction between
-interpreter- and application-level *exceptions*: application exceptions
-are always contained inside an instance of ``OperationError``.  This
-makes it easy to distinguish failures (or bugs) in our interpreter-level code
-from failures appearing in a python application level program that we are
-interpreting.
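The carrier idea can be sketched in a few lines. This is a deliberate simplification of the real class (which lives in the interpreter sources and consults the object space's exception hierarchy), keeping just the shape described above:

```python
# Simplified sketch of the carrier idea: an application-level exception is
# boxed inside an interpreter-level OperationError. The real class is richer;
# identity comparison stands in for the space's exception matching here.
class OperationError(Exception):
    def __init__(self, w_type, w_value):
        self.w_type = w_type    # wrapped exception class
        self.w_value = w_value  # wrapped exception value

    def match(self, space, w_check_class):
        return self.w_type is w_check_class

w_IndexError = object()  # stand-in for a wrapped exception class
err = OperationError(w_IndexError, "index out of range")
assert err.match(None, w_IndexError)
```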
-
-
-.. _`app-preferable`: 
-
-Application level is often preferable 
--------------------------------------
-
-Application-level code is substantially higher-level, and therefore
-correspondingly easier to write and debug.  For example, suppose we want
-to implement the ``update`` method of dict objects.  Programming at
-application level, we can write an obvious, simple implementation, one
-that looks like an **executable definition** of ``update``, for
-example::
-
-    def update(self, other):
-        for k in other.keys():
-            self[k] = other[k]
-
-If we had to code only at interpreter level, we would have to code
-something much lower-level and involved, say something like::
-
-    def update(space, w_self, w_other):
-        w_keys = space.call_method(w_other, 'keys')
-        w_iter = space.iter(w_keys)
-        while True:
-            try:
-                w_key = space.next(w_iter)
-            except OperationError, e:
-                if not e.match(space, space.w_StopIteration):
-                    raise       # re-raise other app-level exceptions
-                break
-            w_value = space.getitem(w_other, w_key)
-            space.setitem(w_self, w_key, w_value)
-
-This interpreter-level implementation looks much more similar to the C
-source code.  It is still more readable than its C counterpart because 
-it doesn't contain memory management details and can use Python's native 
-exception mechanism. 
-
-In any case, it should be obvious that the application-level implementation 
-is definitely more readable, more elegant and more maintainable than the
-interpreter-level one (and indeed, dict.update is really implemented at
-applevel in PyPy).
-
-In fact, in almost all parts of PyPy, you find application level code in
-the middle of interpreter-level code.  Apart from some bootstrapping
-problems (application level functions need a certain initialization
-level of the object space before they can be executed), application
-level code is usually preferable.  We have an abstraction (called the
-'Gateway') which allows the caller of a function to remain ignorant of
-whether a particular function is implemented at application or
-interpreter level. 
-
-our runtime interpreter is "restricted python"
-----------------------------------------------
-
-In order to make a C code generator feasible, all code at interpreter level
-has to restrict itself to a subset of the Python language, and we adhere to
-some rules which make translation to lower-level languages feasible. Code at
-application level can still use the full expressivity of Python.
-
-Unlike source-to-source translations (like e.g. Starkiller_ or more recently
-ShedSkin_) we start
-translation from live python code objects which constitute our Python
-interpreter.   When doing its work of interpreting bytecode, our Python
-implementation must behave in a static way, often referred to as
-"RPythonic".
-
-.. _Starkiller: http://www.python.org/pycon/dc2004/papers/1/paper.pdf
-.. _ShedSkin: http://shed-skin.blogspot.com/
-
-However, when the PyPy interpreter is started as a Python program, it
-can use all of the Python language until it reaches a certain point in
-time, from which on everything that is being executed must be static.
-That is, during initialization our program is free to use the
-full dynamism of Python, including dynamic code generation.
-
-An example can be found in the current implementation which is quite
-elegant: For the definition of all the opcodes of the Python
-interpreter, the module ``dis`` is imported and used to initialize our
-bytecode interpreter.  (See ``__initclass__`` in
-`pypy/interpreter/pyopcode.py`_).  This
-saves us from adding extra modules to PyPy. The import code is run at
-startup time, and we are allowed to use the CPython builtin import
-function.
-
-After the startup code is finished, all resulting objects, functions,
-code blocks etc. must adhere to certain runtime restrictions which we
-describe further below.  Here is some background for why this is so:
-during translation, a whole program analysis ("type inference") is
-performed, which makes use of the restrictions defined in RPython. This
-enables the code generator to emit efficient machine level replacements
-for pure integer objects, for instance.
-
-Restricted Python
-=================
-
-RPython Definition, not
------------------------
-
-The list and exact details of the "RPython" restrictions are a somewhat
-evolving topic.  In particular, we have no formal language definition
-as we find it more practical to discuss and evolve the set of
-restrictions while working on the whole program analysis.  If you
-have any questions about the restrictions below then please feel
-free to mail us at pypy-dev at codespeak net.
-
-.. _`wrapped object`: coding-guide.html#wrapping-rules
-
-Flow restrictions
--------------------------
-
-**variables**
-
-  variables should contain values of at most one type as described in
-  `Object restrictions`_ at each control flow point, that means for
-  example that joining control paths using the same variable to
-contain both a string and an int must be avoided.  It is allowed to
-  mix None (basically with the role of a null pointer) with many other
-  types: `wrapped objects`, class instances, lists, dicts, strings, etc.
-  but *not* with int and floats.
-
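As an illustration (plain CPython code with made-up names, not code from PyPy), the first function below would be rejected by the type inference because ``x`` holds both a string and an int at the join point, while the second is fine because None may be mixed with strings:

```python
def bad(flag):
    if flag:
        x = "hello"
    else:
        x = 42        # joins a string and an int: rejected during translation
    return x          # (runs fine on CPython, but is not RPython)

def good(flag):
    if flag:
        x = "hello"
    else:
        x = None      # None plays the role of a null pointer: allowed
    return x
```
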
-**constants**
-
-  all module globals are considered constants.  Their binding must not
-  be changed at run-time.  Moreover, global (i.e. prebuilt) lists and
-  dictionaries are supposed to be immutable: modifying e.g. a global
-  list will give inconsistent results.  However, global instances don't
-  have this restriction, so if you need mutable global state, store it
-  in the attributes of some prebuilt singleton instance.
-
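A hypothetical sketch of the recommended pattern: instead of mutating a global list or dict, mutable global state lives in the attributes of a prebuilt singleton instance (the names here are invented for illustration):

```python
class Counters(object):
    """Prebuilt singleton holding mutable global state."""
    def __init__(self):
        self.calls = 0

counters = Counters()   # built once at startup; the global binding never changes

def count_call():
    counters.calls += 1  # mutating an instance attribute is allowed
    return counters.calls
```
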
-**control structures**
-
-  all are allowed except ``yield``; ``for`` loops are restricted to builtin types
-
-**range**
-
-  ``range`` and ``xrange`` are identical. ``range`` does not necessarily create
-  an array; one is only created if the result is modified. It is allowed
-  everywhere and completely implemented. The only visible difference from
-  CPython is the inaccessibility of the ``xrange`` fields start, stop and step.
-
-**definitions**
-
-  run-time definition of classes or functions is not allowed.
-
-**generators**
-
-  generators are not supported.
-
-**exceptions**
-
-+ fully supported
-+ see below `Exception rules`_ for restrictions on exceptions raised by built-in operations
-
-
-Object restrictions
--------------------------
-
-We are using
-
-**integer, float, boolean**
-
-  work.
-
-**strings**
-
-  many, but not all, string methods are supported.  Indexes can be
-  negative; if the translator can prove that they are non-negative, you
-  get slightly more efficient code.  When slicing a string it is necessary
-  to prove that the slice start and stop indexes are non-negative.
-
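For example (an illustrative sketch, not code from PyPy), an explicit guard lets the translator prove that an index is non-negative before it is used for slicing:

```python
def tail(s, start):
    if start < 0:
        raise ValueError("negative start not supported")
    # past this point the translator can prove start >= 0,
    # which is required for slicing (and gives more efficient code)
    return s[start:]
```
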
-**tuples**
-
-  no variable-length tuples; use them to store or return pairs or n-tuples of
-  values. Each combination of element types and length constitutes a separate
-  and non-mixable type.
-
-**lists**
-
-  lists are used as an allocated array.  Lists are over-allocated, so list.append()
-  is reasonably fast.  Negative or out-of-bound indexes are only allowed for the
-  most common operations, as follows:
-
-  - *indexing*:
-    positive and negative indexes are allowed.  An index is only checked for
-    being out of bounds when the operation is guarded by an ``IndexError``
-    exception clause.
-  
-  - *slicing*:
-    the slice start must be within bounds.  The stop index doesn't need to be,
-    but it must not be smaller than the start.  All negative indexes are
-    disallowed, except for the [:-1] special case.  No step is allowed.
-
-  - *other operators*:
-    ``+``, ``+=``, ``in``, ``*``, ``*=``, ``==``, ``!=`` work as expected.
-
-  - *methods*:
-    append, index, insert, extend, reverse, pop.  The index used in pop() follows
-    the same rules as for *indexing* above.  The index used in insert() must be within
-    bounds and not negative.
-
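The following plain-Python sketch stays within the rules above: the ``[:-1]`` special-case slice, and a ``pop()`` whose index is checked only because an ``IndexError`` handler guards it:

```python
def drop_last(lst):
    return lst[:-1]           # the one allowed negative slice

def safe_pop(lst, i):
    try:
        return lst.pop(i)     # index checked because of the handler below
    except IndexError:
        return -1
```
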
-**dicts**
-
-  dicts with a unique key type only, provided it is hashable.
-  String keys were the only allowed key type for a while, but this has been
-  generalized.  After some re-optimization, the implementation could safely
-  decide that all string dict keys should be interned.
-
-
-**list comprehensions**
-
-  may be used to create allocated, initialized arrays.
-  After list over-allocation was introduced, there is no longer any restriction.
-
-**functions**
-
-+ statically called functions may use defaults and a variable number of
-  arguments (which may be passed as a list instead of a tuple, so write code
-  that does not depend on it being a tuple).
-
-+ dynamic dispatch enforces the use of signatures that are equal for all
-  possible called functions, or at least "compatible enough".  This
-  concerns mainly method calls, when the method is overridden or in any
-  way given different definitions in different classes.  It also concerns
-  the less common case of explicitly manipulated function objects.
-  Describing the exact compatibility rules is rather involved (but if you
-  break them, you should get explicit errors from the rtyper and not
-  obscure crashes.)
-
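For instance (made-up classes, a sketch of the rule rather than PyPy code), overriding methods keep identical signatures so that every possible target of a dynamic dispatch is compatible:

```python
class Shape(object):
    def area(self, scale):
        return 0.0 * scale

class Square(Shape):
    def area(self, scale):    # same signature as Shape.area
        return 4.0 * scale

def total_area(shapes, scale):
    total = 0.0
    for shape in shapes:
        total += shape.area(scale)   # dynamic dispatch site:
    return total                     # all .area() signatures must agree
```
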
-**builtin functions**
-
-  A number of builtin functions can be used.  The precise set can be
-  found in `pypy/annotation/builtin.py`_ (see ``def builtin_xxx()``).
-  Some builtin functions may be limited in what they support, though.
-
-  ``int, float, str, ord, chr``... are available as simple conversion
-  functions.  Note that ``int, float, str``... have a special meaning as
-  a type inside of isinstance only.
-
-**classes**
-
-+ methods and other class attributes do not change after startup
-+ single inheritance is fully supported
-+ simple mixins work too, but the mixed in class needs a ``_mixin_ = True``
-  class attribute
-
-+ classes are first-class objects too
-
-**objects**
-
-  in PyPy, wrapped objects are borrowed from the object space. Just like
-  in CPython, code that needs e.g. a dictionary can use a wrapped dict
-  and the object space operations on it.
-
-This layout keeps the number of types we need to take care of quite limited.
-
-
-Integer Types
--------------------------
-
-While implementing the integer type, we stumbled over the problem that
-integers are quite in flux in CPython right now. Starting on Python 2.2,
-integers mutate into longs on overflow.  However, shifting to the left
-truncates up to 2.3 but extends to longs as well in 2.4.  By contrast, we need
-a way to perform wrap-around machine-sized arithmetic by default, while still
-being able to check for overflow when we need it explicitly.  Moreover, we need
-a consistent behavior before and after translation.
-
-We use normal integers for signed arithmetic.  It means that before
-translation we get longs in case of overflow, and after translation we get a
-silent wrap-around.  Whenever we need more control, we use the following
-helpers (which live in `pypy/rlib/rarithmetic.py`_):
-
-.. _`pypy/rlib/rarithmetic.py`: ../../pypy/rlib/rarithmetic.py
-
-
-**ovfcheck()**
-
-  This special function should only be used with a single arithmetic operation
-  as its argument, e.g. ``z = ovfcheck(x+y)``.  Its intended meaning is to
-  perform the given operation in overflow-checking mode.
-
-  At run-time, in Python, the ovfcheck() function itself checks the result
-  and raises OverflowError if it is a ``long``.  But the code generators use
-  ovfcheck() as a hint: they replace the whole ``ovfcheck(x+y)`` expression
-  with a single overflow-checking addition in C.
-
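A pure-Python sketch of the run-time behaviour described above, assuming a 64-bit machine word (the real helper lives in pypy/rlib/rarithmetic.py, and the C code generators replace the whole ``ovfcheck(x+y)`` expression with a single overflow-checking operation):

```python
MAXINT = 2 ** 63 - 1
MININT = -2 ** 63

def ovfcheck(result):
    # before translation, the helper just inspects the already-computed
    # result and complains if it no longer fits in a machine word
    if result > MAXINT or result < MININT:
        raise OverflowError
    return result
```

For example, ``z = ovfcheck(x + y)`` raises OverflowError instead of silently producing a long when ``x + y`` overflows.
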
-**ovfcheck_lshift()**
-
-  ovfcheck_lshift(x, y) is a workaround for ovfcheck(x<<y), because the
-  latter doesn't quite work in Python prior to 2.4, where the expression
-  ``x<<y`` will never return a long if the input arguments are ints.  There is
-  a specific function ovfcheck_lshift() to use instead of some convoluted
-  expression like ``x*2**y`` so that code generators can still recognize it as
-  a single simple operation.
-
-**intmask()**
-
-  This function is used for wrap-around arithmetic.  It returns the lower bits
-  of its argument, masking away anything that doesn't fit in a C "signed long int".
-  Its purpose is, in Python, to convert from a Python ``long`` that resulted from a
-  previous operation back to a Python ``int``.  The code generators ignore
-  intmask() entirely, as they are doing wrap-around signed arithmetic all the time
-  by default anyway.  (We have no equivalent of the "int" versus "long int"
-  distinction of C at the moment and assume "long ints" everywhere.)
-
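A sketch of what intmask() computes before translation, again assuming a 64-bit "signed long int":

```python
BITS = 64

def intmask(n):
    n &= (1 << BITS) - 1          # keep only the lower 64 bits
    if n >= 1 << (BITS - 1):      # reinterpret the top bit as a sign bit
        n -= 1 << BITS
    return n
```
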
-**r_uint**
-
-  In a few cases (e.g. hash table manipulation), we need machine-sized unsigned
-  arithmetic.  For these cases there is the r_uint class, which is a pure
-  Python implementation of word-sized unsigned integers that silently wrap
-  around.  The purpose of this class (as opposed to helper functions as above)
-  is consistent typing: both Python and the annotator will propagate r_uint
-  instances in the program and interpret all the operations between them as
-  unsigned.  Instances of r_uint are special-cased by the code generators to
-  use the appropriate low-level type and operations.
-  Mixing of (signed) integers and r_uint in operations produces r_uint that
-  means unsigned results.  To convert back from r_uint to signed integers, use
-  intmask().
-
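A much-simplified sketch of the idea (``RUint`` is a hypothetical stand-in; the real r_uint supports the full set of operations): a class whose arithmetic silently wraps around at the word size, and whose operations with plain integers stay unsigned:

```python
class RUint(object):              # hypothetical stand-in for r_uint
    MASK = (1 << 64) - 1

    def __init__(self, value):
        self.value = value & self.MASK   # silently wrap around

    def __add__(self, other):
        other = other.value if isinstance(other, RUint) else other
        return RUint(self.value + other)  # mixing produces an unsigned result

    def __sub__(self, other):
        other = other.value if isinstance(other, RUint) else other
        return RUint(self.value - other)
```
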
-
-Exception rules
----------------------
-
-By default, exceptions are not generated for simple cases::
-
-    lst = [1, 2, 3, 4, 5]
-    item = lst[i]    # this code is not checked for out-of-bound access
-
-    try:
-        item = lst[i]
-    except IndexError:
-        pass    # complain
-
-Code with no exception handlers does not raise exceptions (after it has been
-translated, that is.  When you run it on top of CPython, it may raise
-exceptions, of course). By supplying an exception handler, you ask for error
-checking. Without, you assure the system that the operation cannot fail.
-This rule does not apply to *function calls*: any called function is
-assumed to be allowed to raise any exception.
-
-For example::
-
-    x = 5.1
-    x = x + 1.2       # not checked for float overflow
-    try:
-        x = x + 1.2
-    except OverflowError:
-        pass    # float result too big
-
-But::
-
-    z = some_function(x, y)    # can raise any exception
-    try:
-        z = some_other_function(x, y)
-    except IndexError:
-        # only catches explicitly-raised IndexErrors in some_other_function();
-        # other exceptions can be raised, too, and will not be caught here.
-        pass
-
-The ovfcheck() function described above follows the same rule: in case of
-overflow, it explicitly raises OverflowError, which can be caught anywhere.
-
-Exceptions explicitly raised or re-raised will always be generated.
-
-PyPy is debuggable on top of CPython
-------------------------------------
-
-PyPy has the advantage that it is runnable on standard
-CPython.  That means, we can run all of PyPy with all exception
-handling enabled, so we might catch cases where we failed to
-adhere to our implicit assertions.
-
-.. _`wrapping rules`:
-.. _`wrapped`:
-
-
-RPylint
--------
-
-Pylint_ is a static code checker for Python. Recent versions
-(>=0.13.0) can be run with the ``--rpython-mode`` command line option. This
-option enables the RPython checker, which checks for some of the
-restrictions RPython adds on top of standard Python code (and uses a
-more aggressive type inference than the one used by default by
-pylint). The full list of checks is available in the documentation of
-Pylint. 
-
-RPylint can be a nice tool to get some information about how much work
-will be needed to convert a piece of Python code to RPython, or to get
-started with RPython.  While this tool will not guarantee that the
-code it checks will translate successfully, it offers a few nice
-advantages over running a translation:
-
-* it is faster and therefore provides feedback faster than  ``translate.py``
-
-* it does not stop at the first problem it finds, so you can get more
-  feedback on the code in one run
-
-* the messages tend to be a bit less cryptic 
-
-* you can easily run it from emacs, vi, eclipse or visual studio.
-
-Note: if pylint is not prepackaged for your OS/distribution, or if
-only an older version is available, you will need to install from
-source. In that case, there are a couple of dependencies,
-logilab-common_ and astng_ that you will need to install too before
-you can use the tool. 
-
-.. _Pylint: http://www.logilab.org/projects/pylint
-.. _logilab-common: http://www.logilab.org/projects/common
-.. _astng: http://www.logilab.org/projects/astng
-
-
-
-Wrapping rules
-==============
-
-Wrapping
---------- 
-
-PyPy is made of Python source code at two levels: there is on the one hand
-*application-level code* that looks like normal Python code, and that
-implements some functionalities as one would expect from Python code (e.g. one
-can give a pure Python implementation of some built-in functions like
-``zip()``).  There is also *interpreter-level code* for the functionalities
-that must more directly manipulate interpreter data and objects (e.g. the main
-loop of the interpreter, and the various object spaces).
-
-Application-level code doesn't see object spaces explicitly: it runs using an
-object space to support the objects it manipulates, but this is implicit.
-There is no need for particular conventions for application-level code.  The
-sequel is only about interpreter-level code.  (Ideally, no application-level
-variable should be called ``space`` or ``w_xxx`` to avoid confusion.)
-
-The ``w_`` prefixes so lavishly used in the example above indicate,
-by PyPy coding convention, that we are dealing with *wrapped* (or *boxed*) objects,
-that is, interpreter-level objects which the object space constructs
-to implement corresponding application-level objects.  Each object
-space supplies ``wrap``, ``unwrap``, ``int_w``, ``interpclass_w``,
-etc. operations that move between the two levels for objects of simple
-built-in types; each object space also implements other Python types
-with suitable interpreter-level classes with some amount of internal
-structure.
-
-For example, an application-level Python ``list``
-is implemented by the `standard object space`_ as an
-instance of ``W_ListObject``, which has an instance attribute
-``wrappeditems`` (an interpreter-level list which contains the
-application-level list's items as wrapped objects).
-
-The rules are described in more details below.
-
-
-Naming conventions
-------------------
-
-* ``space``: the object space is only visible at
-  interpreter-level code, where it is by convention passed around by the name
-  ``space``.
-
-* ``w_xxx``: any object seen by application-level code is an
-  object explicitly managed by the object space.  From the
-  interpreter-level point of view, this is called a *wrapped*
-  object.  The ``w_`` prefix is used for any type of
-  application-level object.
-
-* ``xxx_w``: an interpreter-level container for wrapped
-  objects, for example a list or a dict containing wrapped
-  objects.  Not to be confused with a wrapped object that
-  would be a list or a dict: these are normal wrapped objects,
-  so they use the ``w_`` prefix.
-
-
-Operations on ``w_xxx``
------------------------
-
-The core bytecode interpreter considers wrapped objects as black boxes.
-It is not allowed to inspect them directly.  The allowed
-operations are all implemented on the object space: they are
-called ``space.xxx()``, where ``xxx`` is a standard operation
-name (``add``, ``getattr``, ``call``, ``eq``...). They are documented in the
-`object space document`_.
-
-A short warning: **don't do** ``w_x == w_y`` or ``w_x is w_y``!
-The rationale for this rule is that there is no reason for two
-wrappers to be related in any way even if they contain what
-looks like the same object at application-level.  To check
-for equality, use ``space.is_true(space.eq(w_x, w_y))`` or
-even better the short-cut ``space.eq_w(w_x, w_y)``, which directly
-returns an interpreter-level bool.  To check for identity,
-use ``space.is_true(space.is_(w_x, w_y))`` or better
-``space.is_w(w_x, w_y)``.
-
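A toy illustration of the rationale (``W_IntObject`` here is a simplified stand-in, not PyPy's real class): two wrappers for the same application-level value are unrelated interpreter-level objects, so identity and default equality tell you nothing, while an eq_w-style comparison of the unwrapped values does:

```python
class W_IntObject(object):        # simplified stand-in for a wrapped int
    def __init__(self, intval):
        self.intval = intval

def eq_w(w_x, w_y):
    """What space.eq_w does conceptually: compare the unwrapped values."""
    return w_x.intval == w_y.intval

w_a = W_IntObject(42)
w_b = W_IntObject(42)
# w_a is w_b and w_a == w_b are both False: they are distinct wrappers
# eq_w(w_a, w_b) is True: application-level equality
```
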
-.. _`object space document`: objspace.html#interface
-
-.. _`applevel-exceptions`: 
-
-Application-level exceptions
-----------------------------
-
-Interpreter-level code can use exceptions freely.  However,
-all application-level exceptions are represented as an
-``OperationError`` at interpreter-level.  In other words, all
-exceptions that are potentially visible at application-level
-are internally an ``OperationError``.  This is the case of all
-errors reported by the object space operations
-(``space.add()`` etc.).
-
-To raise an application-level exception::
-
-    raise OperationError(space.w_XxxError, space.wrap("message"))
-
-To catch a specific application-level exception::
-
-    try:
-        ...
-    except OperationError, e:
-        if not e.match(space, space.w_XxxError):
-            raise
-        ...
-
-This construct catches all application-level exceptions, so we
-have to match it against the particular ``w_XxxError`` we are
-interested in and re-raise other exceptions.  The exception
-instance ``e`` holds two attributes that you can inspect:
-``e.w_type`` and ``e.w_value``.  Do not use ``e.w_type`` to
-match an exception, as this will miss exceptions that are
-instances of subclasses.
-
-We are thinking about replacing ``OperationError`` with a
-family of common exception classes (e.g. ``AppKeyError``,
-``AppIndexError``...) so that we can more easily catch them.
-The generic ``AppError`` would stand for all other
-application-level classes.
-
-
-.. _`modules`:
-
-Modules in PyPy
-===============
-
-Modules visible from application programs are imported from
-interpreter or application level files.  PyPy reuses almost all python
-modules of CPython's standard library, currently from version 2.5.2.  We
-sometimes need to `modify modules`_ and - more often - regression tests
-because they rely on implementation details of CPython.
-
-If we don't just modify an original CPython module but need to rewrite
-it from scratch we put it into `lib_pypy/`_ as a pure application level
-module.
-
-When we need access to interpreter-level objects we put the module into
-`pypy/module`_.  Such modules use a `mixed module mechanism`_
-which makes it convenient to use both interpreter- and application-level parts
-for the implementation.  Note that there is no extra facility for
-pure-interpreter level modules, you just write a mixed module and leave the
-application-level part empty.
-
-Determining the location of a module implementation
----------------------------------------------------
-
-You can interactively find out where a module comes from when running py.py.
-Here are examples for the possible locations::
-
-    >>>> import sys
-    >>>> sys.__file__
-    '/home/hpk/pypy-dist/pypy/module/sys/*.py'
-
-    >>>> import operator
-    >>>> operator.__file__
-    '/home/hpk/pypy-dist/lib_pypy/operator.py'
-
-    >>>> import opcode
-    >>>> opcode.__file__
-    '/home/hpk/pypy-dist/lib-python/modified-2.5.2/opcode.py'
-
-    >>>> import os
-    faking <type 'posix.stat_result'>
-    faking <type 'posix.statvfs_result'>
-    >>>> os.__file__
-    '/home/hpk/pypy-dist/lib-python/2.5.2/os.py'
-    >>>>
-
-Module directories / Import order
----------------------------------
-
-Here is the order in which PyPy looks up Python modules:
-
-*pypy/module*
-
-    mixed interpreter/app-level builtin modules, such as
-    the ``sys`` and ``__builtin__`` module.
-
-*contents of PYTHONPATH*
-
-    lookup application level modules in each of the ``:`` separated
-    list of directories, specified in the ``PYTHONPATH`` environment
-    variable.
-
-*lib_pypy/*
-
-    contains pure Python reimplementations of modules.
-
-*lib-python/modified-2.5.2/*
-
-    The files and tests that we have modified from the CPython library.
-
-*lib-python/2.5.2/*
-
-    The unmodified CPython library. **Never ever check anything in there**.
-
-.. _`modify modules`:
-
-Modifying a CPython library module or regression test
--------------------------------------------------------
-
-Although PyPy is very compatible with CPython we sometimes need
-to change modules contained in our copy of the standard library,
-often due to the fact that PyPy works with all new-style classes
-by default and CPython has a number of places where it relies
-on some classes being old-style.
-
-If you want to change a module or test contained in ``lib-python/2.5.2``
-then make sure that you copy the file to our ``lib-python/modified-2.5.2``
-directory first.  In subversion commandline terms this reads::
-
-    svn cp lib-python/2.5.2/somemodule.py lib-python/modified-2.5.2/
-
-and subsequently you edit and commit
-``lib-python/modified-2.5.2/somemodule.py``.  This copying operation is
-important because it keeps the original CPython tree clean and makes it
-obvious what we had to change.
-
-.. _`mixed module mechanism`:
-.. _`mixed modules`:
-
-Implementing a mixed interpreter/application level Module
----------------------------------------------------------
-
-If a module needs to access PyPy's interpreter level
-then it is implemented as a mixed module.
-
-Mixed modules are directories in `pypy/module`_ with an  `__init__.py`
-file containing specifications where each name in a module comes from.
-Only specified names will be exported to a Mixed Module's applevel
-namespace.
-
-Sometimes it is necessary to really write some functions in C (or
-whatever target language). See `rffi`_ and `external functions
-documentation`_ for details. The latter approach is cumbersome and is
-being phased out, and the former currently has quite a few rough edges.
-
-.. _`rffi`: rffi.html
-.. _`external functions documentation`: translation.html#extfunccalls
-
-application level definitions
-.............................
-
-Application level specifications are found in the `appleveldefs`
-dictionary found in ``__init__.py`` files of directories in ``pypy/module``.
-For example, in `pypy/module/__builtin__/__init__.py`_ you find the following
-entry specifying where ``__builtin__.locals`` comes from::
-
-     ...
-     'locals'        : 'app_inspect.locals',
-     ...
-
-The ``app_`` prefix indicates that the submodule ``app_inspect`` is
-interpreted at application level and the wrapped function value for ``locals``
-will be extracted accordingly.
-
-interpreter level definitions
-.............................
-
-Interpreter level specifications are found in the ``interpleveldefs``
-dictionary found in ``__init__.py`` files of directories in ``pypy/module``.
-For example, in `pypy/module/__builtin__/__init__.py`_ the following
-entry specifies where ``__builtin__.len`` comes from::
-
-     ...
-     'len'       : 'operation.len',
-     ...
-
-The ``operation`` submodule lives at interpreter level and ``len``
-is expected to be exposable to application level.  Here is
-the definition for ``operation.len()``::
-
-    def len(space, w_obj):
-        "len(object) -> integer\n\nReturn the number of items of a sequence or mapping."
-        return space.len(w_obj)
-
-Exposed interpreter level functions usually take a ``space`` argument
-and some wrapped values (see `wrapping rules`_) .
-
-You can also use a convenient shortcut in ``interpleveldefs`` dictionaries:
-namely an expression in parentheses to specify an interpreter level
-expression directly (instead of pulling it indirectly from a file)::
-
-    ...
-    'None'          : '(space.w_None)',
-    'False'         : '(space.w_False)',
-    ...
-
-The interpreter level expression has a ``space`` binding when
-it is executed.
-
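Putting the pieces together, a hypothetical mixed module's ``__init__.py`` might contain little more than the two dictionaries (the entries below echo the examples quoted above; the module name is invented):

```python
# sketch of pypy/module/mymodule/__init__.py for a hypothetical mixed module

interpleveldefs = {
    'len'  : 'operation.len',       # interp-level function in operation.py
    'None' : '(space.w_None)',      # parenthesized: evaluated directly,
                                    # with 'space' bound at execution time
}

appleveldefs = {
    'locals': 'app_inspect.locals', # app_ prefix: interpreted at app level
}
```

Only the names listed in these two dictionaries are exported to the module's application-level namespace.
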
-Adding an entry under pypy/module (e.g. mymodule) entails the automatic
-creation of a new config option (such as --withmod-mymodule and
---withoutmod-mymodule, the latter being the default) for py.py and
-translate.py.
-
-Testing modules in ``lib_pypy/``
---------------------------------
-
-You can go to the `lib_pypy/pypy_test/`_ directory and invoke the testing tool
-("py.test" or "python ../../pypy/test_all.py") to run tests against the
-lib_pypy hierarchy.  Note that tests in `lib_pypy/pypy_test/`_ are allowed
-and encouraged to run at interpreter level although
-`lib_pypy/`_ modules eventually live at PyPy's application level.
-This allows us to quickly test our python-coded reimplementations
-against CPython.
-
-Testing modules in ``pypy/module``
-----------------------------------
-
-Simply change to ``pypy/module`` or to a subdirectory and `run the
-tests as usual`_.
-
-
-Testing modules in ``lib-python``
------------------------------------
-
-In order to let CPython's regression tests run against PyPy
-you can switch to the `lib-python/`_ directory and run
-the testing tool in order to start compliance tests.
-(XXX check windows compatibility for producing test reports).
-
-Naming conventions and directory layout
-===========================================
-
-Directory and File Naming
--------------------------
-
-- directories/modules/namespaces are always **lowercase**
-
-- never use plural names in directory and file names
-
-- ``__init__.py`` is usually empty except for
-  ``pypy/objspace/*`` and ``pypy/module/*/__init__.py``.
-
-- don't use more than 4 directory nesting levels
-
-- keep filenames concise and completion-friendly.
-
-Naming of python objects
-------------------------
-
-- class names are **CamelCase**
-
-- functions/methods are lowercase and ``_`` separated
-
-- objectspace classes are spelled ``XyzObjSpace``. e.g.
-
-  - StdObjSpace
-  - FlowObjSpace
-
-- at interpreter level and in ObjSpace all boxed values
-  have a leading ``w_`` to indicate "wrapped values".  This
-  includes w_self.  Don't use ``w_`` in application level
-  python only code.
-
-Committing & Branching to the repository
------------------------------------------------------
-
-- write good log messages because several people
-  are reading the diffs.
-
-- if you add (text/py) files to the repository then please run
-  pypy/tool/fixeol in that directory.  This will make sure
-  that the property 'svn:eol-style' is set to native which
-  allows checkin/checkout in native line-ending format.
-
-- branching (aka "svn copy") of source code should usually
-  happen at ``svn/pypy/trunk`` level in order to have a full
-  self-contained pypy checkout for each branch.   For branching
-  a ``try1`` branch you would for example do::
-
-    svn cp http://codespeak.net/svn/pypy/trunk \
-           http://codespeak.net/svn/pypy/branch/try1
-
-  This allows you to check out the ``try1`` branch and receive a
-  self-contained working copy for the branch.   Note that
-  branching/copying is a cheap operation with subversion, as it
-  takes constant time irrespective of the size of the tree.
-
-- To learn more about how to use subversion read `this document`_.
-
-.. _`this document`: svn-help.html
-
-
-
-.. _`using development tracker`:
-
-Using the development bug/feature tracker
-=========================================
-
-We have a `development tracker`_, based on Richard Jones'
-`roundup`_ application.  You can file bugs,
-feature requests or see what's going on
-for the next milestone, both from an E-Mail and from a
-web interface.
-
-use your codespeak login or register
-------------------------------------
-
-If you already committed to the PyPy source code, chances
-are that you can simply use your codespeak login that
-you use for subversion or for shell access.
-
-If you are not a committer then you can still `register with
-the tracker`_ easily.
-
-modifying Issues from svn commit messages
------------------------------------------
-
-If you are committing something related to
-an issue in the development tracker you
-can correlate your log message to a tracker
-item by following these rules:
-
-- put the content of ``issueN STATUS`` on a single
-  new line
-
-- `N` must be an existing issue number from the `development tracker`_.
-
-- STATUS is one of::
-
-    unread
-    chatting
-    in-progress
-    testing
-    duplicate
-    resolved
-
-.. _`register with the tracker`: https://codespeak.net/issue/pypy-dev/user?@template=register
-.. _`development tracker`: http://codespeak.net/issue/pypy-dev/
-.. _`roundup`: http://roundup.sf.net
-
-
-.. _`testing in PyPy`:
-.. _`test-design`: 
-
-Testing in PyPy
-===============
-
-Our tests are based on the new `py.test`_ tool which lets you write
-unittests without boilerplate.  All tests of modules
-in a directory usually reside in a subdirectory **test**.  There are
-basically two types of unit tests:
-
-- **Interpreter Level tests**. They run at the same level as PyPy's
-  interpreter.
-
-- **Application Level tests**. They run at application level which means
-  that they look like straight python code but they are interpreted by PyPy.
-
-Both types of tests need an `objectspace`_ they can run with (the interpreter
-dispatches operations on objects to an objectspace).  If you run a test you
-can usually give the '-o' switch to select an object space.  E.g. '-o thunk'
-will select the thunk object space. The default is the `Standard Object Space`_
-which aims to implement unmodified Python semantics.
-
-.. _`standard object space`: objspace.html#standard-object-space
-.. _`objectspace`: objspace.html
-.. _`py.test`: http://codespeak.net/py/current/doc/test.html
-
-Interpreter level tests
------------------------
-
-You can write test functions and methods like this::
-
-    def test_something(space):
-        # use space ...
-
-    class TestSomething:
-        def test_some(self):
-            # use 'self.space' here
-
-Note that the prefix `test` for test functions and `Test` for test
-classes is mandatory.  In both cases you can import Python modules at
-module global level and use plain 'assert' statements thanks to the
-usage of the `py.test`_ tool.
-
-Application Level tests
------------------------
-
-For testing the conformance and well-behavedness of PyPy it
-is often sufficient to write "normal" application-level
-Python code that doesn't need to be aware of any particular
-coding style or restrictions.  If we have a choice we often
-use application level tests which usually look like this::
-
-    def app_test_something():
-        # application level test code
-
-    class AppTestSomething:
-        def test_this(self):
-            # application level test code
-
-These application level test functions will run on top
-of PyPy, i.e. they have no access to interpreter details.
-You cannot use imported modules from global level because
-they are imported at interpreter-level while your test code
-runs at application level. If you need to use modules
-you have to import them within the test function.
-
-Another way to pass data into the AppTest is to use the
-``setup_class`` method of the AppTest. All wrapped objects that are
-attached to the class there and whose names start with ``w_`` can be
-accessed via self (but without the ``w_``) in the actual test method. An example::
-
-    from pypy.objspace.std import StdObjSpace
-
-    class AppTestErrno:
-        def setup_class(cls):
-            cls.space = StdObjSpace()
-            cls.w_d = cls.space.wrap({"a": 1, "b": 2})
-
-        def test_dict(self):
-            assert self.d["a"] == 1
-            assert self.d["b"] == 2
-
-.. _`run the tests as usual`:
-
-Command line tool test_all
---------------------------
-
-You can run almost all of PyPy's tests by invoking::
-
-  python test_all.py file_or_directory
-
-which is a synonym for the general `py.test`_ utility
-located in the ``pypy`` directory.  To see switches that
-modify test execution, pass the ``-h`` option.
-
-Test conventions
-----------------
-
-- adding features requires adding appropriate tests.  (It often even
-  makes sense to write the tests first so that you are sure that they
-  can actually fail.)
-
-- All over the pypy source code there are test/ directories
-  which contain unit tests.  Such scripts can usually be executed
-  directly or are collectively run by pypy/test_all.py
-
-- each test directory needs a copy of pypy/tool/autopath.py, which
-  upon import will make sure that sys.path contains the directory
-  in which 'pypy' is located.
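A minimal sketch of what an autopath.py-style helper does (illustrative only; the real pypy/tool/autopath.py is more elaborate): walk up from the current file until the directory containing 'pypy' is found, then prepend it to sys.path.

```python
import os
import sys

def setup_autopath(start):
    """Walk upwards from 'start' until a directory containing 'pypy'
    is found, and prepend that directory to sys.path."""
    d = os.path.abspath(start)
    while not os.path.isdir(os.path.join(d, 'pypy')):
        parent = os.path.dirname(d)
        if parent == d:  # reached filesystem root without finding it
            raise RuntimeError("no 'pypy' directory found above %s" % start)
        d = parent
    if d not in sys.path:
        sys.path.insert(0, d)
```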
-
-.. _`change documentation and website`:
-
-Changing documentation and website
-==================================
-
-documentation/website files in your local checkout
----------------------------------------------------
-
-Most of PyPy's documentation and website is kept in
-`pypy/documentation` and `pypy/documentation/website` respectively.
-You can simply edit or add '.txt' files written in ReST markup.
-Here is a `ReST quickstart`_ but you can also just look
-at the existing documentation and see how things work.
-
-.. _`ReST quickstart`: http://docutils.sourceforge.net/docs/rst/quickref.html
-
-Automatically test documentation/website changes
-------------------------------------------------
-
-.. _`docutils home page`:
-.. _`docutils`: http://docutils.sourceforge.net/
-
-We automatically check referential integrity and ReST-conformance.  In order to
-run the tests you need docutils_ installed.  Then go to the local checkout
-of the documentation directory and run the tests::
-
-    cd .../pypy/documentation
-    python ../test_all.py
-
-If you see no failures, chances are high that your modifications at least
-don't produce ReST errors or wrong local references.  A side effect of running
-the tests is that you get `.html` files in the documentation directory
-which you can point your browser at!
-
-Additionally, if you also want to check for remote references inside
-the documentation, issue::
-
-    python ../test_all.py --checkremote
-
-which will check that remote URLs are reachable.
-
-
-.. include:: _ref.txt

diff --git a/pypy/doc/config/objspace.usemodules._ssl.txt b/pypy/doc/config/objspace.usemodules._ssl.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._ssl.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Use the '_ssl' module, which implements SSL socket operations.

diff --git a/pypy/doc/config/objspace.std.withrope.txt b/pypy/doc/config/objspace.std.withrope.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withrope.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-Enable ropes to be the default string implementation.
-
-See the section in `Standard Interpreter Optimizations`_ for more details.
-
-.. _`Standard Interpreter Optimizations`: ../interpreter-optimizations.html#ropes
-
-

diff --git a/pypy/doc/discussion/outline-external-ootype.txt b/pypy/doc/discussion/outline-external-ootype.txt
deleted file mode 100644
--- a/pypy/doc/discussion/outline-external-ootype.txt
+++ /dev/null
@@ -1,213 +0,0 @@
-Some discussion about external objects in ootype
-================================================
-
-Current approaches:
-
-* BasicExternal, used for js backend
-
-* SomeCliXxx for .NET backend
-
-BasicExternal
--------------
-
-* Uses types to make RPython happy (i.e., every single method or field
-  is hardcoded)
-
-* Supports callbacks by SomeGenericCallable
-
-* Supports fields, also with callable fields
-
-SomeCliXxx
-----------
-
-* Supports method overloading
-
-* Supports inheritance in a better way
-
-* Supports static methods
-
-It would be extremely cool to have just one approach instead of two,
-so here are some notes:
-
-* There should be one mechanism, factored nicely out of any backend,
-  to support any possible backend (cli, js, jvm for now).
-
-* This approach might be eventually extended by a backend itself, but
-  as much as possible code should be factored out.
-
-* Each backend should take care of creating such classes itself, either
-  manually or automatically.
-
-* Should support a superset of the needs of all backends (i.e. callbacks,
-  method overloading, etc.)
-
-
-Proposal of alternative approach
-================================
-
-The goal of the task is to let RPython programs access "external
-objects" which are available in the target platform; these include:
-
-  - external classes (e.g. for .NET: System.Collections.ArrayList)
-
-  - external instances (e.g. for js: window, window.document)
-
-  - external functions? (they are not needed for .NET and JVM, maybe
-    for js?)
-
-External objects should behave as much as possible as "internal
-objects".
-
-Moreover, we want to preserve the possibility of *testing* RPython
-programs on top of CPython if possible. For example, it should be
-possible to test RPython programs using .NET external objects via
-PythonNet; probably there is something similar for the JVM, but not
-for JS as far as I know.
-
-
-How to represent types
-----------------------
-
-First, some definitions: 
-
-  - high-level types are the types used by the annotator
-    (SomeInteger() & co.)
-
-  - low-level types are the types used by the rtyper (Signed & co.)
-
-  - platform-level types are the types used by the backends (e.g. int32 for
-    .NET)
-
-Usually, RPython types are described "top-down": we start from the
-annotation, then the rtyper transforms the high-level types into
-low-level types, then the backend transforms low-level types into
-platform-level types. E.g. for .NET, SomeInteger() -> Signed -> int32.
-
-External objects are different: we *already* know the platform-level
-types of our objects and we can't modify them. What we need to do is
-to specify an annotation that after the high-level -> low-level ->
-platform-level transformation will give us the correct types.
-
-For primitive types it is usually easy to find the correct annotation;
-if we have an int32, we know that its ootype is Signed and the
-corresponding annotation is SomeInteger().
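This primitive-type bookkeeping can be sketched as a simple lookup table (purely illustrative: the real annotator works with type objects, not the strings used here; the names Signed/Float and SomeInteger/SomeFloat come from the text above):

```python
# Illustrative table for the "bottom-up" direction on primitive types:
# platform-level type -> (low-level type name, high-level annotation).
PRIMITIVE_MAP = {
    'int32':   ('Signed', 'SomeInteger()'),
    'float64': ('Float',  'SomeFloat()'),
}

def annotation_for(platform_type):
    """Return the high-level annotation whose high-level -> low-level ->
    platform-level lowering would yield 'platform_type' again."""
    _lltype, annotation = PRIMITIVE_MAP[platform_type]
    return annotation
```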
-
-For non-primitive types such as classes, we must use a "bottom-up"
-approach: first, we need a description of platform-level interface of
-the class; then we construct the corresponding low-level type and
-teach the backends how to treat such "external types". Finally, we
-wrap the low-level types into a special "external annotation".
-
-For example, consider a simple existing .NET class::
-
-    class Foo {
-        public float bar(int x, int y) { ... }
-    }
-
-The corresponding low-level type could be something like this::
-
-    Foo = ootype.ExternalInstance({'bar': ([Signed, Signed], Float)})
-
-Then, the annotation for Foo's instances is SomeExternalInstance(Foo).
-This way, the transformation from high-level types to platform-level
-types is straightforward and correct.
-
-Finally, we need support for static methods: similarly for classes, we
-can define an ExternalStaticMeth low-level type and a
-SomeExternalStaticMeth annotation.
-
-
-How to describe types
----------------------
-
-To handle external objects we must specify their signatures. For CLI
-and JVM the job can be easily automated, since the objects have
-precise signatures.
-
-For JS, signatures must be written by hand, so we must provide a
-convenient syntax for it; I think it should be possible to use the
-current syntax and write a tool which translates it to low-level
-types.
-
-
-RPython interface
------------------
-
-External objects are exposed as special Python objects that get
-annotated as SomeExternalXXX. Each backend can choose its own way to
-provide these objects to the RPython programmer.
-
-External classes will be annotated as SomeExternalClass; two
-operations are allowed:
-
-  - call: used to instantiate the class, return an object which will
-    be annotated as SomeExternalInstance.
-
-  - access to static methods: return an object which will be annotated
-    as SomeExternalStaticMeth.
-
-Instances are annotated as SomeExternalInstance. Prebuilt external
-objects (such as JS's window.document) are annotated as
-SomeExternalInstance(const=...).
-
-Open issues
------------
-
-Exceptions
-~~~~~~~~~~
-
-.NET and JVM users want to catch external exceptions in a natural way;
-e.g.::
-
-    try:
-        ...
-    except System.OverflowException:
-        ...
-
-This is not straightforward because, to make the flow objspace happy, the
-object which represents System.OverflowException must be a real Python
-class that inherits from Exception.
-
-This means that the Python objects which represent external classes
-must be Python classes themselves, and that classes representing
-exceptions must be special-cased and made subclasses of Exception.
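One way to satisfy that constraint can be sketched as follows (purely illustrative; not how the CLI backend actually builds its exception classes):

```python
def make_external_exc(dotted_name):
    # The class standing in for the external exception must really
    # inherit from Exception, so that the flow objspace accepts it in
    # an 'except' clause like any other Python exception class.
    short_name = dotted_name.rsplit('.', 1)[-1]
    return type(short_name, (Exception,), {'_external_name': dotted_name})

OverflowException = make_external_exc('System.OverflowException')

try:
    raise OverflowException("too big")
except OverflowException:
    pass  # caught like a normal Python exception
```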
-
-
-Inheritance
-~~~~~~~~~~~
-
-It would be nice to allow programmers to inherit from an external
-class. Not sure about the implications, though.
-
-Callbacks
-~~~~~~~~~
-
-I know that they are an issue for JS, but I don't know how they are
-currently implemented.
-
-Special methods/properties
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-In .NET there are special members that can be accessed using special
-syntax, for example indexers or properties. It would be nice to have
-the same syntax in RPython as in C#.
-
-
-Implementation details
-----------------------
-
-The CLI backend uses a similar approach right now, but it could be
-necessary to rewrite a part of it.
-
-To represent low-level types, it uses NativeInstance, a subclass of
-ootype.Instance that contains all the information needed by the
-backend to reference the class (e.g., the namespace). It also supports
-overloading.
-
-For annotations, it reuses SomeOOInstance, which is also a wrapper
-around a low-level type, but it was designed for low-level
-helpers. It might be saner to use another annotation so as not to mix
-apples and oranges, maybe factoring out common code.
-
-I don't know whether and how much code can be reused from the existing
-bltregistry.

diff --git a/pypy/doc/config/translation.linkerflags.txt b/pypy/doc/config/translation.linkerflags.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.linkerflags.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Experimental. Specify extra flags to pass to the linker.

diff --git a/pypy/doc/config/objspace.std.withstrjoin.txt b/pypy/doc/config/objspace.std.withstrjoin.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withstrjoin.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-Enable "string join" objects.
-
-See the page about `Standard Interpreter Optimizations`_ for more details.
-
-.. _`Standard Interpreter Optimizations`: ../interpreter-optimizations.html#string-join-objects
-
-

diff --git a/pypy/doc/config/objspace.usemodules._file.txt b/pypy/doc/config/objspace.usemodules._file.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._file.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Use the '_file' module. It is an internal module that contains helper
-functionality for the builtin ``file`` type.
-
-.. internal

diff --git a/pypy/doc/_ref.txt b/pypy/doc/_ref.txt
deleted file mode 100644
--- a/pypy/doc/_ref.txt
+++ /dev/null
@@ -1,107 +0,0 @@
-.. _`demo/`: ../../demo
-.. _`demo/pickle_coroutine.py`: ../../demo/pickle_coroutine.py
-.. _`lib-python/`: ../../lib-python
-.. _`lib-python/2.5.2/dis.py`: ../../lib-python/2.5.2/dis.py
-.. _`annotation/`:
-.. _`pypy/annotation`: ../../pypy/annotation
-.. _`pypy/annotation/annrpython.py`: ../../pypy/annotation/annrpython.py
-.. _`annotation/binaryop.py`: ../../pypy/annotation/binaryop.py
-.. _`pypy/annotation/builtin.py`: ../../pypy/annotation/builtin.py
-.. _`pypy/annotation/model.py`: ../../pypy/annotation/model.py
-.. _`bin/`: ../../pypy/bin
-.. _`config/`: ../../pypy/config
-.. _`pypy/config/pypyoption.py`: ../../pypy/config/pypyoption.py
-.. _`doc/`: ../../pypy/doc
-.. _`doc/config/`: ../../pypy/doc/config
-.. _`doc/discussion/`: ../../pypy/doc/discussion
-.. _`interpreter/`:
-.. _`pypy/interpreter`: ../../pypy/interpreter
-.. _`pypy/interpreter/argument.py`: ../../pypy/interpreter/argument.py
-.. _`interpreter/astcompiler/`:
-.. _`pypy/interpreter/astcompiler`: ../../pypy/interpreter/astcompiler
-.. _`pypy/interpreter/executioncontext.py`: ../../pypy/interpreter/executioncontext.py
-.. _`pypy/interpreter/function.py`: ../../pypy/interpreter/function.py
-.. _`interpreter/gateway.py`:
-.. _`pypy/interpreter/gateway.py`: ../../pypy/interpreter/gateway.py
-.. _`pypy/interpreter/generator.py`: ../../pypy/interpreter/generator.py
-.. _`pypy/interpreter/mixedmodule.py`: ../../pypy/interpreter/mixedmodule.py
-.. _`pypy/interpreter/module.py`: ../../pypy/interpreter/module.py
-.. _`pypy/interpreter/nestedscope.py`: ../../pypy/interpreter/nestedscope.py
-.. _`pypy/interpreter/pyopcode.py`: ../../pypy/interpreter/pyopcode.py
-.. _`interpreter/pyparser/`:
-.. _`pypy/interpreter/pyparser`: ../../pypy/interpreter/pyparser
-.. _`pypy/interpreter/pyparser/pytokenizer.py`: ../../pypy/interpreter/pyparser/pytokenizer.py
-.. _`pypy/interpreter/pyparser/parser.py`: ../../pypy/interpreter/pyparser/parser.py
-.. _`pypy/interpreter/pyparser/pyparse.py`: ../../pypy/interpreter/pyparser/pyparse.py
-.. _`pypy/interpreter/pyparser/future.py`: ../../pypy/interpreter/pyparser/future.py
-.. _`pypy/interpreter/pyparser/metaparser.py`: ../../pypy/interpreter/pyparser/metaparser.py
-.. _`pypy/interpreter/astcompiler/astbuilder.py`: ../../pypy/interpreter/astcompiler/astbuilder.py
-.. _`pypy/interpreter/astcompiler/optimize.py`: ../../pypy/interpreter/astcompiler/optimize.py
-.. _`pypy/interpreter/astcompiler/codegen.py`: ../../pypy/interpreter/astcompiler/codegen.py
-.. _`pypy/interpreter/astcompiler/tools/asdl_py.py`: ../../pypy/interpreter/astcompiler/tools/asdl_py.py
-.. _`pypy/interpreter/astcompiler/tools/Python.asdl`: ../../pypy/interpreter/astcompiler/tools/Python.asdl
-.. _`pypy/interpreter/astcompiler/assemble.py`: ../../pypy/interpreter/astcompiler/assemble.py
-.. _`pypy/interpreter/astcompiler/symtable.py`: ../../pypy/interpreter/astcompiler/symtable.py
-.. _`pypy/interpreter/astcompiler/asthelpers.py`: ../../pypy/interpreter/astcompiler/asthelpers.py
-.. _`pypy/interpreter/astcompiler/ast.py`: ../../pypy/interpreter/astcompiler/ast.py
-.. _`pypy/interpreter/typedef.py`: ../../pypy/interpreter/typedef.py
-.. _`lib/`:
-.. _`lib_pypy/`: ../../lib_pypy
-.. _`lib/distributed/`: ../../lib_pypy/distributed
-.. _`lib_pypy/stackless.py`: ../../lib_pypy/stackless.py
-.. _`lib_pypy/pypy_test/`: ../../lib_pypy/pypy_test
-.. _`module/`:
-.. _`pypy/module`:
-.. _`pypy/module/`: ../../pypy/module
-.. _`pypy/module/__builtin__/__init__.py`: ../../pypy/module/__builtin__/__init__.py
-.. _`pypy/module/_stackless/test/test_clonable.py`: ../../pypy/module/_stackless/test/test_clonable.py
-.. _`pypy/module/_stackless/test/test_composable_coroutine.py`: ../../pypy/module/_stackless/test/test_composable_coroutine.py
-.. _`objspace/`:
-.. _`pypy/objspace`: ../../pypy/objspace
-.. _`objspace/dump.py`: ../../pypy/objspace/dump.py
-.. _`objspace/flow/`: ../../pypy/objspace/flow
-.. _`objspace/std/`:
-.. _`pypy/objspace/std`: ../../pypy/objspace/std
-.. _`objspace/taint.py`: ../../pypy/objspace/taint.py
-.. _`objspace/thunk.py`:
-.. _`pypy/objspace/thunk.py`: ../../pypy/objspace/thunk.py
-.. _`objspace/trace.py`:
-.. _`pypy/objspace/trace.py`: ../../pypy/objspace/trace.py
-.. _`pypy/rlib`:
-.. _`rlib/`: ../../pypy/rlib
-.. _`pypy/rlib/rarithmetic.py`: ../../pypy/rlib/rarithmetic.py
-.. _`pypy/rlib/test`: ../../pypy/rlib/test
-.. _`pypy/rpython`:
-.. _`pypy/rpython/`:
-.. _`rpython/`: ../../pypy/rpython
-.. _`rpython/lltypesystem/`: ../../pypy/rpython/lltypesystem
-.. _`pypy/rpython/lltypesystem/lltype.py`:
-.. _`rpython/lltypesystem/lltype.py`: ../../pypy/rpython/lltypesystem/lltype.py
-.. _`rpython/memory/`: ../../pypy/rpython/memory
-.. _`rpython/memory/gc/generation.py`: ../../pypy/rpython/memory/gc/generation.py
-.. _`rpython/memory/gc/hybrid.py`: ../../pypy/rpython/memory/gc/hybrid.py
-.. _`rpython/memory/gc/markcompact.py`: ../../pypy/rpython/memory/gc/markcompact.py
-.. _`rpython/memory/gc/marksweep.py`: ../../pypy/rpython/memory/gc/marksweep.py
-.. _`rpython/memory/gc/semispace.py`: ../../pypy/rpython/memory/gc/semispace.py
-.. _`rpython/ootypesystem/`: ../../pypy/rpython/ootypesystem
-.. _`rpython/ootypesystem/ootype.py`: ../../pypy/rpython/ootypesystem/ootype.py
-.. _`rpython/rint.py`: ../../pypy/rpython/rint.py
-.. _`rpython/rlist.py`: ../../pypy/rpython/rlist.py
-.. _`rpython/rmodel.py`: ../../pypy/rpython/rmodel.py
-.. _`pypy/rpython/rtyper.py`: ../../pypy/rpython/rtyper.py
-.. _`pypy/rpython/test/test_llinterp.py`: ../../pypy/rpython/test/test_llinterp.py
-.. _`pypy/test_all.py`: ../../pypy/test_all.py
-.. _`tool/`: ../../pypy/tool
-.. _`tool/algo/`: ../../pypy/tool/algo
-.. _`tool/pytest/`: ../../pypy/tool/pytest
-.. _`pypy/translator`:
-.. _`translator/`: ../../pypy/translator
-.. _`translator/backendopt/`: ../../pypy/translator/backendopt
-.. _`translator/c/`: ../../pypy/translator/c
-.. _`translator/cli/`: ../../pypy/translator/cli
-.. _`translator/goal/`: ../../pypy/translator/goal
-.. _`pypy/translator/goal/targetnopstandalone.py`: ../../pypy/translator/goal/targetnopstandalone.py
-.. _`translator/jvm/`: ../../pypy/translator/jvm
-.. _`translator/stackless/`: ../../pypy/translator/stackless
-.. _`translator/tool/`: ../../pypy/translator/tool
-.. _`translator/js/`: http://codespeak.net/svn/pypy/branch/oo-jit/pypy/translator/js/

diff --git a/pypy/doc/config/objspace.usemodules._ffi.txt b/pypy/doc/config/objspace.usemodules._ffi.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._ffi.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Applevel interface to libffi.  It is higher level than _rawffi, and most importantly it is JIT-friendly.

diff --git a/pypy/doc/config/opt.txt b/pypy/doc/config/opt.txt
deleted file mode 100644
--- a/pypy/doc/config/opt.txt
+++ /dev/null
@@ -1,50 +0,0 @@
-The ``--opt`` or ``-O`` translation option
-==========================================
-
-This meta-option selects a default set of optimization
-settings to use during a translation.  Usage::
-
-    translate.py --opt=#
-    translate.py -O#
-
-where ``#`` is the desired optimization level.  The valid choices are:
-
-    =============  ========================================================
-      Level        Description
-    =============  ========================================================
-    `--opt=0`      all optimizations off; fastest translation `(*)`_
-    `--opt=1`      non-time-consuming optimizations on `(*)`_
-    `--opt=size`   minimize the size of the final executable `(*)`_
-    `--opt=mem`    minimize the run-time RAM consumption (in-progress)
-    `--opt=2`      all optimizations on; good run-time performance
-    `--opt=3`      same as `--opt=2`; remove asserts; gcc profiling `(**)`_
-    `--opt=jit`    includes the JIT and tweak other optimizations for it
-    =============  ========================================================
-
-.. _`(*)`:
-
-`(*)`: The levels `0, 1` and `size` use the `Boehm-Demers-Weiser
-garbage collector`_ (Debian package ``libgc-dev``).  The translation
-itself is faster and consumes less memory; the final executable is
-smaller but slower.  The other levels use one of our built-in `custom
-garbage collectors`_.
-
-.. _`(**)`:
-    
-`(**)`: The level `3` enables gcc profile-driven recompilation when
-translating PyPy.
-
-The exact set of optimizations enabled by each level depends
-on the backend.  Individual translation targets can also
-select their own options based on the level: when translating
-PyPy, the level `mem` enables the memory-saving object
-implementations in the object space; levels `2` and `3` enable
-the advanced object implementations that give an increase in
-performance; level `3` also enables gcc profile-driven
-recompilation.
-
-The default level is `2`.
-
-
-.. _`Boehm-Demers-Weiser garbage collector`: http://www.hpl.hp.com/personal/Hans_Boehm/gc/
-.. _`custom garbage collectors`: ../garbage_collection.html

diff --git a/pypy/doc/config/objspace.usemodules.itertools.txt b/pypy/doc/config/objspace.usemodules.itertools.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.itertools.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the interp-level 'itertools' module.
-If not included, a slower app-level version of itertools is used.

diff --git a/pypy/doc/config/translation.jit.txt b/pypy/doc/config/translation.jit.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.jit.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Enable the JIT generator, for targets that have JIT support.
-Experimental so far.

diff --git a/pypy/doc/ctypes-implementation.txt b/pypy/doc/ctypes-implementation.txt
deleted file mode 100644
--- a/pypy/doc/ctypes-implementation.txt
+++ /dev/null
@@ -1,184 +0,0 @@
-
-=============================
-PyPy's ctypes implementation 
-=============================
-
-Summary
-========
-
-Terminology:
-
-* application level code - code written in full Python
-
-* interpreter level code - code written in RPython, compiled
-  to something else, say C, part of the interpreter.
-
-PyPy's ctypes implementation in its current state proves the
-feasibility of implementing a module with the same interface and
-behavior for PyPy as ctypes has for CPython.
-
-PyPy's implementation internally uses `libffi`_ like CPython's ctypes.
-In our implementation, as much of the code as possible is written in
-full Python, not RPython. In CPython's situation, the equivalent would
-be to write as little code as possible in C.  We essentially favored
-rapid experimentation over worrying about speed for this first trial
-implementation. This allowed us to provide a working implementation with
-a large part of ctypes' features in 2 months of real time.
-
-We reused the ``ctypes`` package version 1.0.2 as-is from CPython. We
-implemented ``_ctypes`` which is a C module in CPython mostly in pure
-Python based on a lower-level layer extension module ``_rawffi``.
-
-.. _`libffi`: http://sources.redhat.com/libffi/
-
-Low-level part: ``_rawffi``
-============================
-
-This PyPy extension module (``pypy/module/_rawffi``) exposes a simple interface
-for creating C objects (arrays and structures) and calling functions
-in dynamic libraries through libffi. Freeing objects in most cases, and making
-sure that objects referring to each other are kept alive, is the responsibility
-of the higher levels.
-
-This module uses bindings to libffi which are defined in ``pypy/rlib/libffi.py``.
-
-We tried to keep this module as small as possible. It is conceivable
-that other implementations (e.g. Jython) could use our ctypes
-implementation by writing their version of ``_rawffi``.
-
-High-level parts
-=================
-
-The reused ``ctypes`` package lives in ``lib_pypy/ctypes``. ``_ctypes``
-implementing the same interface as ``_ctypes`` in CPython is in
-``lib_pypy/_ctypes``.
-
-Discussion and limitations
-=============================
-
-Reimplementing ctypes features was in general possible. PyPy supports
-pluggable garbage collectors, some of which are moving collectors; this
-means that the strategy of passing direct references inside Python
-objects to an external library is not feasible (unless the GCs
-support pinning, which is not the case right now).  The consequence of
-this is that sometimes copying instead of sharing is required, which
-may result in some semantic differences. C objects created with
-_rawffi itself are allocated outside of the GC heap, such that they can be
-passed to external functions without worries.
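CPython's own ctypes shows the same copy-instead-of-share pattern for strings, which may help illustrate the point (this uses the standard ctypes API, not _rawffi):

```python
import ctypes

s = b"hello"
buf = ctypes.create_string_buffer(s)   # copies the bytes into a C buffer
buf[0] = b"H"                          # mutating the C-level copy...
assert s == b"hello"                   # ...leaves the Python object untouched
assert buf.value == b"Hello"
```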
-
-Porting the implementation to interpreter-level should likely improve
-its speed.  Furthermore the current layering and the current _rawffi
-interface require more object allocations and copying than strictly
-necessary; this too could be improved.
-
-The implementation was developed and has only been tested on x86-32 Linux.
-
-Here is a list of the limitations and missing features of the
-current implementation:
-
-* No support for ``PyXxx`` functions from ``libpython``, for obvious reasons.
-
-* We copy Python strings instead of having pointers to raw buffers
-
-* Features we did not get to implement:
-
-  - custom alignment and bit-fields
-
-  - resizing (``resize()`` function)
-
-  - non-native byte-order objects
-
-  - callbacks accepting by-value structures
-
-  - slight semantic differences that ctypes makes
-    between its primitive types and user subclasses
-    of its primitive types
-
-Getting the code and test suites
-=================================
-
-A stable revision of PyPy containing the ctypes implementation can be checked out with subversion from the tag: 
-
-http://codespeak.net/svn/pypy/tag/ctypes-stable
-
-The various tests and later examples can be run on x86-32 Linux. We tried them
-on an up-to-date Ubuntu 7.10 x86-32 system.
-
-If one goes inside the checkout it is possible to run ``_rawffi`` tests with::
-
-    $ cd pypy
-    $ python test_all.py module/_rawffi/
-
-The ctypes implementation test suite is derived from the tests for
-ctypes 1.0.2, we have skipped some tests corresponding to not
-implemented features or implementation details, we have also added
-some tests.
-
-To run the test suite, a compiled pypy-c with the proper configuration is required. To build the required pypy-c, run the following inside the checkout::
-
-   $ cd pypy/translator/goal
-   $ ./translate.py --text --batch --gc=generation targetpypystandalone.py 
-     --withmod-_rawffi --allworkingmodules
-
-this should produce a pypy-c executable in the ``goal`` directory.
-
-To run the tests then::
-
-   $ cd ../../.. # back to pypy-trunk
-   $ ./pypy/translator/goal/pypy-c pypy/test_all.py lib/pypy1.2/lib_pypy/pypy_test/ctypes_tests
-
-There should be 36 skipped tests and all other tests should pass.
-
-Running application examples
-==============================
-
-`pyglet`_ is known to run. We also had some success with pygame-ctypes, which is not maintained anymore, and with a snapshot of the experimental pysqlite-ctypes. We will only describe how to run the pyglet examples.
-
-pyglet
--------
-
-We tried pyglet checking it out from its repository at revision 1984.
-For convenience a tarball of the checkout can also be found at:
-
-http://codespeak.net/~pedronis/pyglet-r1984.tgz
-
-From pyglet, the following examples are known to work:
-  
-  - opengl.py
-  - multiple_windows.py
-  - events.py
-  - html_label.py
-  - timer.py
-  - window_platform_event.py
-  - fixed_resolution.py
-
-The pypy-c translated to run the ctypes tests can be used to run the pyglet examples as well. They can be run, for example, like this::
-
-    $ cd pyglet/
-    $ PYTHONPATH=. ../ctypes-stable/pypy/translator/goal/pypy-c examples/opengl.py
-
-
-They usually should be terminated with ctrl-c. Refer to their doc strings for details about how they should behave.
-
-The following examples don't work for reasons independent from ctypes:
-
-  - image_convert.py needs PIL
-  - image_display.py needs PIL
-  - astraea/astraea.py needs PIL
-
-We did not try the following examples:
-
-  - media_player.py needs avbin or at least a proper sound card setup for
-    .wav files
-  - video.py needs avbin
-  - soundscape needs avbin
-
-.. _`pyglet`: http://pyglet.org/
-
-
-ctypes configure
-=================
-
-We also released `ctypes-configure`_, which is an experimental package that tries to
-address the portability issues of ctypes-based code.
-
-.. _`ctypes-configure`: http://codespeak.net/~fijal/configure.html

diff --git a/pypy/doc/config/objspace.name.txt b/pypy/doc/config/objspace.name.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.name.txt
+++ /dev/null
@@ -1,16 +0,0 @@
-Determine which `Object Space`_ to use. The `Standard Object Space`_ gives the
-normal Python semantics; the others are `Object Space Proxies`_ giving
-additional features (except the Flow Object Space, which is not intended
-for normal usage):
-
-  * thunk_: The thunk object space adds lazy evaluation to PyPy.
-  * taint_: The taint object space adds soft security features.
-  * dump_:  Using this object space results in a dump of all operations
-    to a log.
-
-.. _`Object Space`: ../objspace.html
-.. _`Object Space Proxies`: ../objspace-proxies.html
-.. _`Standard Object Space`: ../objspace.html#standard-object-space
-.. _thunk: ../objspace-proxies.html#thunk
-.. _taint: ../objspace-proxies.html#taint
-.. _dump: ../objspace-proxies.html#dump

diff --git a/pypy/doc/config/translation.stackless.txt b/pypy/doc/config/translation.stackless.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.stackless.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Run the `stackless transform`_ on each generated graph, which enables the use
-of coroutines at RPython level and the "stackless" module when translating
-PyPy.
-
-.. _`stackless transform`: ../stackless.html

diff --git a/pypy/doc/config/objspace.std.methodcachesizeexp.txt b/pypy/doc/config/objspace.std.methodcachesizeexp.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.methodcachesizeexp.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Set the cache size (number of entries) for :config:`objspace.std.withmethodcache`.

diff --git a/pypy/doc/index-report.txt b/pypy/doc/index-report.txt
deleted file mode 100644
--- a/pypy/doc/index-report.txt
+++ /dev/null
@@ -1,169 +0,0 @@
-============================================
-PyPy - Overview of the EU reports
-============================================
-
-The reports below summarize and discuss the research and development results
-of the PyPy project during the EU funding period (Dec 2004 - March 2007).
-They are also very good documentation if you'd like to know in more
-detail about the motivation and implementation of the various parts
-and aspects of PyPy.  Feel free to send questions or comments
-to `pypy-dev`_, the development list.
-
-Reports of 2007
-===============
-
-The `PyPy EU Final Activity Report`_ summarizes the 28 month EU project
-period (Dec 2004-March 2007) on technical, scientific and community levels. 
-You do not need prior knowledge about PyPy, but some technical knowledge about
-computer language implementations is helpful.  The report contains reflections
-and recommendations which might be interesting for other projects aiming
-at funded Open Source research. *(2007-05-11)*
-
-`D09.1 Constraint Solving and Semantic Web`_ is  a report about PyPy's logic
-programming and constraint solving features, as well as the work going on to
-tie semantic web technologies and PyPy together. *(2007-05-11)*
-
-`D14.4 PyPy-1.0 Milestone report`_ (for language developers and researchers)
-summarizes research & technical results of the PyPy-1.0 release and discusses
-related development process and community aspects. *(2007-05-01)*
-
-`D08.2 JIT Compiler Architecture`_ is a report about the Architecture and
-working of our JIT compiler generator. *(2007-05-01)*
-
-`D08.1 JIT Compiler Release`_ reports on our successfully including a
-JIT compiler for Python and the novel framework we used to
-automatically generate it in PyPy 1.0. *(2007-04-30)*
-
-`D06.1 Core Object Optimization Results`_ documents the optimizations
-we implemented in the interpreter and object space: dictionary
-implementations, method call optimizations, etc. The report is not yet final,
-so we are very interested in any feedback. *(2007-04-04)*
-
-`D14.5 Documentation of the development process`_ documents PyPy's
-sprint-driven development process and puts it into the context of agile
-methodologies. *(2007-03-30)*
-
-`D13.1 Integration and Configuration`_ is a report about our build and
-configuration toolchain as well as the planned Debian packages. It also
-describes the work done to integrate the results of other workpackages into the
-rest of the project. *(2007-03-30)*
-
-`D02.2 Release Scheme`_ lists PyPy's six public releases and explains the release structure, tools, directories and policies for performing PyPy releases. *(2007-03-30)*
-
-`D01.2-4 Project Organization`_ is a report about the management activities
-within the PyPy project and PyPy development process. *(2007-03-28)*
-
-`D11.1 PyPy for Embedded Devices`_ is a report about the possibilities of using
-PyPy technology for programming embedded devices. *(2007-03-26)*
-
-`D02.3 Testing Tool`_ is a report about the
-`py.test`_ testing tool which is part of the `py-lib`_. *(2007-03-23)*
-
-`D10.1 Aspect-Oriented, Design-by-Contract Programming and RPython static
-checking`_ is a report about the ``aop`` module providing an Aspect Oriented
-Programming mechanism for PyPy, and how this can be leveraged to implement a
-Design-by-Contract module. It also introduces the RPylint static type checker
-for RPython code. *(2007-03-22)*
-
-`D12.1 High-Level-Backends and Feature Prototypes`_ is
-a report about our high-level backends and
-several validation prototypes: an information flow security prototype,
-a distribution prototype and a persistence proof-of-concept. *(2007-03-22)*
-
-`D14.2 Tutorials and Guide Through the PyPy Source Code`_ is 
-a report about the steps we have taken to make the project approachable for
-newcomers. *(2007-03-22)*
-
-
-`D02.1 Development Tools and Website`_ is a report
-about the codespeak_ development environment and additional tool support for the
-PyPy development process. *(2007-03-21)*
-
-`D03.1 Extension Compiler`_ is a report about
-PyPy's extension compiler and RCTypes, as well as the effort to keep up with
-CPython's changes. *(2007-03-21)*
-
-
-`D07.1 Massive Parallelism and Translation Aspects`_ is a report about
-PyPy's optimization efforts, garbage collectors and massive parallelism
-(stackless) features.  This report refers to the paper `PyPy's approach
-to virtual machine construction`_. *(2007-02-28)*
-
-
-
-.. _`py-lib`: http://codespeak.net/py/current/doc/
-.. _`py.test`: http://codespeak.net/py/current/doc/test.html
-.. _codespeak: http://codespeak.net/
-.. _`pypy-dev`: http://codespeak.net/mailman/listinfo/pypy-dev
-
-
-Reports of 2006
-===============
-
-`D14.3 Report about Milestone/Phase 2`_ is the final report about
-the second phase of the EU project, summarizing and detailing technical, 
-research, dissemination and community aspects.  Feedback is very welcome! 
-
-
-Reports of 2005
-===============
-
-`D04.1 Partial Python Implementation`_ contains details about the 0.6 release.
-All the content can be found in the regular documentation section.
-
-`D04.2 Complete Python Implementation`_ contains details about the 0.7 release.
-All the content can be found in the regular documentation section.
-
-`D04.3 Parser and Bytecode Compiler`_ describes our parser and bytecode compiler.
-
-`D04.4 PyPy as a Research Tool`_ contains details about the 0.8 release.
-All the content can be found in the regular documentation section.
-
-`D05.1 Compiling Dynamic Language Implementations`_ is a paper that describes
-the translation process, especially the flow object space and the annotator in
-detail.
-
-`D05.2 A Compiled Version of PyPy`_ contains more details about the 0.7 release.
-All the content can be found in the regular documentation section.
-
-`D05.3 Implementation with Translation Aspects`_
-describes how our approach hides away a lot of low level details.
-
-`D05.4 Encapsulating Low Level Aspects`_ describes how we weave different
-properties into our interpreter during the translation process.
-
-`D14.1 Report about Milestone/Phase 1`_ describes what happened in the PyPy
-project during the first year of EU funding (December 2004 - December 2005).
-
-.. _`PyPy EU Final Activity Report`: http://codespeak.net/pypy/extradoc/eu-report/PYPY-EU-Final-Activity-Report.pdf
-.. _`D01.2-4 Project Organization`: http://codespeak.net/pypy/extradoc/eu-report/D01.2-4_Project_Organization-2007-03-28.pdf
-.. _`D02.1 Development Tools and Website`: http://codespeak.net/pypy/extradoc/eu-report/D02.1_Development_Tools_and_Website-2007-03-21.pdf
-.. _`D02.2 Release Scheme`: http://codespeak.net/svn/pypy/extradoc/eu-report/D02.2_Release_Scheme-2007-03-30.pdf
-.. _`D02.3 Testing Tool`: http://codespeak.net/pypy/extradoc/eu-report/D02.3_Testing_Framework-2007-03-23.pdf
-.. _`D03.1 Extension Compiler`: http://codespeak.net/pypy/extradoc/eu-report/D03.1_Extension_Compiler-2007-03-21.pdf
-.. _`D04.1 Partial Python Implementation`: http://codespeak.net/svn/pypy/extradoc/eu-report/D04.1_Partial_Python_Implementation_on_top_of_CPython.pdf
-.. _`D04.2 Complete Python Implementation`: http://codespeak.net/svn/pypy/extradoc/eu-report/D04.2_Complete_Python_Implementation_on_top_of_CPython.pdf
-.. _`D04.3 Parser and Bytecode Compiler`: http://codespeak.net/svn/pypy/extradoc/eu-report/D04.3_Report_about_the_parser_and_bytecode_compiler.pdf
-.. _`D04.4 PyPy as a Research Tool`: http://codespeak.net/svn/pypy/extradoc/eu-report/D04.4_Release_PyPy_as_a_research_tool.pdf
-.. _`D05.1 Compiling Dynamic Language Implementations`: http://codespeak.net/svn/pypy/extradoc/eu-report/D05.1_Publish_on_translating_a_very-high-level_description.pdf
-.. _`D05.2 A Compiled Version of PyPy`: http://codespeak.net/svn/pypy/extradoc/eu-report/D05.2_A_compiled,_self-contained_version_of_PyPy.pdf
-.. _`D05.3 Implementation with Translation Aspects`: http://codespeak.net/svn/pypy/extradoc/eu-report/D05.3_Publish_on_implementation_with_translation_aspects.pdf
-.. _`D05.4 Encapsulating Low Level Aspects`: http://codespeak.net/svn/pypy/extradoc/eu-report/D05.4_Publish_on_encapsulating_low_level_language_aspects.pdf
-.. _`D06.1 Core Object Optimization Results`: http://codespeak.net/svn/pypy/extradoc/eu-report/D06.1_Core_Optimizations-2007-04-30.pdf
-.. _`D07.1 Massive Parallelism and Translation Aspects`: http://codespeak.net/pypy/extradoc/eu-report/D07.1_Massive_Parallelism_and_Translation_Aspects-2007-02-28.pdf
-.. _`D08.2 JIT Compiler Architecture`: http://codespeak.net/pypy/extradoc/eu-report/D08.2_JIT_Compiler_Architecture-2007-05-01.pdf
-.. _`D08.1 JIT Compiler Release`: http://codespeak.net/pypy/extradoc/eu-report/D08.1_JIT_Compiler_Release-2007-04-30.pdf
-.. _`D09.1 Constraint Solving and Semantic Web`: http://codespeak.net/pypy/extradoc/eu-report/D09.1_Constraint_Solving_and_Semantic_Web-2007-05-11.pdf
-.. _`D10.1 Aspect-Oriented, Design-by-Contract Programming and RPython static checking`: http://codespeak.net/pypy/extradoc/eu-report/D10.1_Aspect_Oriented_Programming_in_PyPy-2007-03-22.pdf
-.. _`D11.1 PyPy for Embedded Devices`: http://codespeak.net/pypy/extradoc/eu-report/D11.1_PyPy_for_Embedded_Devices-2007-03-26.pdf
-.. _`D12.1 High-Level-Backends and Feature Prototypes`: http://codespeak.net/pypy/extradoc/eu-report/D12.1_H-L-Backends_and_Feature_Prototypes-2007-03-22.pdf
-.. _`D13.1 Integration and Configuration`: http://codespeak.net/pypy/extradoc/eu-report/D13.1_Integration_and_Configuration-2007-03-30.pdf 
-.. _`D14.1 Report about Milestone/Phase 1`: http://codespeak.net/svn/pypy/extradoc/eu-report/D14.1_Report_about_Milestone_Phase_1.pdf
-.. _`D14.2 Tutorials and Guide Through the PyPy Source Code`: http://codespeak.net/pypy/extradoc/eu-report/D14.2_Tutorials_and_Guide_Through_the_PyPy_Source_Code-2007-03-22.pdf
-.. _`D14.3 Report about Milestone/Phase 2`: http://codespeak.net/pypy/extradoc/eu-report/D14.3_Report_about_Milestone_Phase_2-final-2006-08-03.pdf
-.. _`D14.4 PyPy-1.0 Milestone report`: http://codespeak.net/pypy/extradoc/eu-report/D14.4_Report_About_Milestone_Phase_3-2007-05-01.pdf
-.. _`D14.5 Documentation of the development process`: http://codespeak.net/pypy/extradoc/eu-report/D14.5_Documentation_of_the_development_process-2007-03-30.pdf
-
-
-
-.. _`PyPy's approach to virtual machine construction`: http://codespeak.net/svn/pypy/extradoc/talk/dls2006/pypy-vm-construction.pdf

diff --git a/pypy/doc/config/objspace.usemodules.marshal.txt b/pypy/doc/config/objspace.usemodules.marshal.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.marshal.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'marshal' module. 
-This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/objspace.usemodules.symbol.txt b/pypy/doc/config/objspace.usemodules.symbol.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.symbol.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'symbol' module. 
-This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/objspace.std.withsmallint.txt b/pypy/doc/config/objspace.std.withsmallint.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withsmallint.txt
+++ /dev/null
@@ -1,6 +0,0 @@
-Use "tagged pointers" to represent small enough integer values: integers that
-fit into 31 bits (63 bits on 64-bit machines, respectively) are not represented
-by boxing them in an instance of ``W_IntObject``. Instead they are represented
-as a pointer having the lowest bit set, with the rest of the bits used to store
-the value of the integer. This gives a small speedup for integer operations as
-well as better memory behaviour.
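The encoding described above can be sketched in plain Python (hypothetical ``tag``/``untag`` helper names; the real implementation is RPython code inside the translated interpreter):

```python
# Sketch of the "tagged pointer" encoding described above (hypothetical
# helper names; in PyPy this is done at the machine-word level).
# A word holds either a real pointer (lowest bit 0, since objects are
# word-aligned) or a 31-bit integer shifted left once with the lowest
# bit set.

def is_tagged(word):
    return word & 1 == 1

def tag(value):
    # Only integers fitting in 31 bits can be tagged (32-bit case).
    assert -2**30 <= value < 2**30
    return (value << 1) | 1

def untag(word):
    assert is_tagged(word)
    return word >> 1          # arithmetic shift restores the sign

assert untag(tag(42)) == 42
assert untag(tag(-7)) == -7
```

Untagged words keep their lowest bit clear, so a pointer and a tagged integer can be told apart with a single bit test.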

diff --git a/pypy/doc/config/translation.list_comprehension_operations.txt b/pypy/doc/config/translation.list_comprehension_operations.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.list_comprehension_operations.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Experimental optimization for list comprehensions in RPython.
-

diff --git a/pypy/doc/cleanup-todo.txt b/pypy/doc/cleanup-todo.txt
deleted file mode 100644
--- a/pypy/doc/cleanup-todo.txt
+++ /dev/null
@@ -1,30 +0,0 @@
-
-PyPy cleanup areas
-==================
-
-This is a todo list that lists various areas of PyPy that should be cleaned up
-(for whatever reason: less mess, less code duplication, etc).
-
-translation toolchain
----------------------
-
- - low level backends should share more code
- - all backends should have more consistent interfaces
- - geninterp is a hack
- - delegate finding type stuff like vtables etc to GC, cleaner interface for rtti,
-   simplify translator/c/gc.py
- - clean up the tangle of including headers in the C backend
- - make the approach for loading modules more sane; mixedmodule captures
-   too many platform dependencies, especially for pypy-cli
- - review pdbplus, especially the graph commands, also in the light of
-   https://codespeak.net/issue/pypy-dev/issue303 and the fact that
-   we can have more than one translator/annotator around (with the
-   timeshifter)
-
-interpreter
------------
-
- - review whether the things implemented at application level are
-   performance-critical
-
- - review CPython regression test suite, enable running tests, fix bugs

diff --git a/pypy/doc/config/translation.rweakref.txt b/pypy/doc/config/translation.rweakref.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.rweakref.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-This indicates whether the backend and GC policy support RPython-level weakrefs.
-Can be tested in an RPython program to select between two implementation
-strategies.

diff --git a/pypy/doc/config/translation.verbose.txt b/pypy/doc/config/translation.verbose.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.verbose.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Print some more information during translation.

diff --git a/pypy/doc/config/objspace.usepycfiles.txt b/pypy/doc/config/objspace.usepycfiles.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usepycfiles.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-If this option is used, then PyPy imports and generates "pyc" files in the
-same way as CPython.  This is true by default and there is not much reason
-to turn it off nowadays.  If off, PyPy never produces "pyc" files and
-ignores any "pyc" file that might already be present.

diff --git a/pypy/doc/config/translation.backendopt.print_statistics.txt b/pypy/doc/config/translation.backendopt.print_statistics.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.print_statistics.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Debugging option. Print statistics about the forest of flowgraphs as they
-go through the various backend optimizations.
\ No newline at end of file

diff --git a/pypy/doc/config/translation.gcremovetypeptr.txt b/pypy/doc/config/translation.gcremovetypeptr.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.gcremovetypeptr.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-If set, save one word in every object.  Framework GC only.

diff --git a/pypy/doc/config/translation.gctransformer.txt b/pypy/doc/config/translation.gctransformer.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.gctransformer.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-internal option

diff --git a/pypy/doc/config/objspace.timing.txt b/pypy/doc/config/objspace.timing.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.timing.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-timing of various parts of the interpreter (simple profiling)

diff --git a/pypy/doc/config/objspace.std.withtproxy.txt b/pypy/doc/config/objspace.std.withtproxy.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withtproxy.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Enable `transparent proxies`_.
-
-.. _`transparent proxies`: ../objspace-proxies.html#tproxy

diff --git a/pypy/doc/config/translation.output.txt b/pypy/doc/config/translation.output.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.output.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Specify the file name that the produced executable gets.

diff --git a/pypy/doc/discussion/oz-thread-api.txt b/pypy/doc/discussion/oz-thread-api.txt
deleted file mode 100644
--- a/pypy/doc/discussion/oz-thread-api.txt
+++ /dev/null
@@ -1,49 +0,0 @@
-Some rough notes about the Oz threading model
-=============================================
-
-(almost verbatim from CTM)
-
-Scheduling
-----------
-
-Fair scheduling through round-robin.
-
-With priority levels: three queues exist, which manage high-, medium- and
-low-priority threads. The time slice ratio between these is
-100:10:1. Threads inherit the priority of their parent.
-
-Mozart uses an external timer approach to implement thread preemption.
-
-Thread ops
-----------
-
-All these ops are defined in a Thread namespace/module.
-
-this()               -> current thread's name (*not* another thread's name)
-state(t)             -> return state of t in {runnable, blocked, terminated}
-suspend(t)            : suspend t
-resume(t)             : resume execution of t
-preempt(t)            : preempt t
-terminate(t)          : terminate t immediately
-injectException(t, e) : raise exception e in t
-setPriority(t, p)     : set t's priority to p
-
-Interestingly, coroutines can be built upon this thread
-API. Coroutines have two ops: spawn and resume.
-
-spawn(p)             -> creates a coroutine with procedure p, returns pid
-resume(c)             : transfers control from current coroutine to c
-
-The implementation of these ops in terms of the threads API is as
-follows:
-
-def spawn(p):
-    in_thread:                      # run the body in a new thread
-        pid = Thread.this()
-        Thread.suspend(pid)         # start suspended until resumed
-        p()
-
-def resume(cid):
-    Thread.resume(cid)
-    Thread.suspend(Thread.this())
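For comparison, a rough CPython analogue of this suspend/resume hand-off can be sketched with ``threading.Event``. This is a hypothetical sketch only: unlike Oz/Mozart, CPython threads cannot be suspended externally, so the new thread suspends itself by waiting.

```python
import threading

# Rough CPython analogue of the spawn/resume pattern above, using one
# Event per coroutine in place of Thread.suspend/Thread.resume.

class Coroutine:
    def __init__(self, proc):
        self._go = threading.Event()
        self._thread = threading.Thread(target=self._run, args=(proc,))
        self._thread.start()

    def _run(self, proc):
        self._go.wait()       # start suspended, like Thread.suspend(pid)
        proc(self)

    def resume(self):
        self._go.set()        # like Thread.resume(cid)
        self._thread.join()   # simplification: block until it finishes

results = []
c = Coroutine(lambda co: results.append("ran"))
c.resume()
assert results == ["ran"]
```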
-

diff --git a/pypy/doc/faq.txt b/pypy/doc/faq.txt
deleted file mode 100644
--- a/pypy/doc/faq.txt
+++ /dev/null
@@ -1,425 +0,0 @@
-==========================
-Frequently Asked Questions
-==========================
-
-.. contents::
-
-
-General
-=======
-
--------------
-What is PyPy?
--------------
-
-PyPy is both:
-
- - a reimplementation of Python in Python, and
-
- - a framework for implementing interpreters and virtual machines for
-   programming languages, especially dynamic languages.
-
-PyPy tries to find new answers about ease of creation, flexibility,
-maintainability and speed trade-offs for language implementations.
-For further details see our `goal and architecture document`_ .
-
-.. _`goal and architecture document`: architecture.html
-
-
-.. _`drop in replacement`:
-
-------------------------------------------
-Is PyPy a drop-in replacement for CPython?
-------------------------------------------
-
-Almost!
-
-The most likely stumbling block for any given project is support for
-`extension modules`_.  PyPy supports a continually growing
-number of extension modules, but so far mostly only those found in the
-standard library.
-
-The language features (including builtin types and functions) are very
-complete and well tested, so if your project does not use many
-extension modules there is a good chance that it will work with PyPy.
-
-We list the differences we know about in `cpython_differences`_.
-
-There is also experimental support for CPython extension modules, so
-they'll run without change (or, from current observation, with little
-change) on trunk. It was part of the 1.4 release, but support is still
-in the alpha phase.
-
-.. _`extension modules`: cpython_differences.html#extension-modules
-.. _`cpython_differences`: cpython_differences.html
-
---------------------------------
-On what platforms does PyPy run?
---------------------------------
-
-PyPy is regularly and extensively tested on Linux machines and on Mac
-OS X and mostly works under Windows too (but is tested there less
-extensively). PyPy needs a CPython running on the target platform to
-bootstrap, as cross compilation is not really meant to work yet.
-At the moment you need CPython 2.4 (with ctypes) or CPython 2.5 or 2.6
-for the translation process. PyPy's JIT requires an x86 or x86_64 CPU.
-
-
-------------------------------------------------
-Which Python version (2.x?) does PyPy implement?
-------------------------------------------------
-
-PyPy currently aims to be fully compatible with Python 2.5. That means that
-it contains the standard library of Python 2.5 and that it supports 2.5
-features (such as the with statement).  
-
-.. _threading:
-
--------------------------------------------------
-Do threads work?  What are the modules that work?
--------------------------------------------------
-
-Operating system-level threads basically work. If you enable the ``thread``
-module then PyPy will get support for GIL-based threading.
-Note that PyPy also fully supports `stackless-like
-microthreads`_ (although both cannot be mixed yet).
-
-All pure-Python modules should work, unless they rely on ugly
-CPython implementation details, in which case it's their fault.
-There is an increasing number of compatible CPython extensions working,
-including things like wxPython or PIL. This is an ongoing development effort
-to get as many CPython extension modules as possible working.
-
-.. _`stackless-like microthreads`: stackless.html
-
-
-------------------------------------
-Can I use CPython extension modules?
-------------------------------------
-
-Yes, but the feature is in alpha state and is available only on trunk
-(not in the 1.2 release). However, we'll only ever support well-behaving
-CPython extensions. Please consult the PyPy developers on IRC or the mailing
-list to find out whether your favorite module works, and how you can help
-make it work in case it does not.
-
-We fully support ctypes-based extensions, however.
-
-------------------------------------------
-How do I write extension modules for PyPy?
-------------------------------------------
-
-See `Writing extension modules for PyPy`__.
-
-.. __: extending.html
-
-
-.. _`slower than CPython`:
-.. _`how fast is pypy`:
-
------------------
-How fast is PyPy?
------------------
-
-.. _whysoslow:
-
-In three words, PyPy is "kind of fast".  In more than three
-words, the answer to this question is hard to give as a single
-number.  The fastest PyPy available so far is clearly PyPy
-`with a JIT included`_, optimized and translated to C.  This
-version of PyPy is "kind of fast" in the sense that there are
-numerous examples of Python code that run *much faster* than
-CPython, up to a large number of times faster.  And there are
-also examples of code that are just as slow as without the
-JIT.  A PyPy that does not include a JIT has performance that
-is more predictable: it runs generally somewhere between 1 and
-2 times slower than CPython, in the worst case up to 4 times
-slower.
-
-Obtaining good measurements for the performance when run on
-the CLI or JVM is difficult, but the JIT on the CLI `seems to
-work nicely`__ too.
-
-.. __: http://codespeak.net/svn/user/antocuni/phd/thesis/thesis.pdf
-.. _`with a JIT included`: jit/index.html
-
-
-.. _`prolog and javascript`:
-
-----------------------------------------------------------------
-Can PyPy support interpreters for other languages beyond Python?
-----------------------------------------------------------------
-
-The toolsuite that translates the PyPy interpreter is quite
-general and can be used to create optimized versions of interpreters
-for any language, not just Python.  Of course, these interpreters
-can make use of the same features that PyPy brings to Python:
-translation to various languages, stackless features,
-garbage collection, implementation of various things like arbitrarily long
-integers, etc. 
-
-Currently, we have preliminary versions of a JavaScript interpreter
-(Leonardo Santagada as his Summer of PyPy project), a `Prolog interpreter`_
-(Carl Friedrich Bolz as his Bachelor thesis), and a `SmallTalk interpreter`_
-(produced during a sprint).  `All of them`_ are unfinished at the moment.
-
-.. _`Prolog interpreter`: http://codespeak.net/svn/pypy/lang/prolog/
-.. _`SmallTalk interpreter`: http://dx.doi.org/10.1007/978-3-540-89275-5_7
-.. _`All of them`: http://codespeak.net/svn/pypy/lang/
-
-
-Development
-===========
-
------------------------------------------------------------
-How do I get into PyPy development?  Can I come to sprints?
------------------------------------------------------------
-
-Sure you can come to sprints! We always welcome newcomers and try to help them
-get started in the project as much as possible (e.g. by providing tutorials and
-pairing them with experienced PyPy developers). Newcomers should have some
-Python experience and read some of the PyPy documentation before coming to a
-sprint.
-
-Coming to a sprint is usually also the best way to get into PyPy development.
-If you want to start on your own, take a look at the list of `project
-suggestions`_. If you get stuck or need advice, `contact us`_. Usually IRC is
-the most immediate way to get feedback (at least during some parts of the day;
-many PyPy developers are in Europe) and the `mailing list`_ is better for long
-discussions.
-
-.. _`project suggestions`: project-ideas.html
-.. _`contact us`: index.html
-.. _`mailing list`: http://codespeak.net/mailman/listinfo/pypy-dev
-
-----------------------------------------------------------------------
-I am getting strange errors while playing with PyPy, what should I do?
-----------------------------------------------------------------------
-
-It seems that a lot of strange, inexplicable problems can be magically
-solved by removing all the \*.pyc files from the PyPy source tree
-(the script `py.cleanup`_ from py/bin will do that for you).
-Another thing you can do is remove the directory pypy/_cache
-completely. If the error persists and still annoys you after this
-treatment, please send us a bug report (or, even better, a fix :-)
-
-.. _`py.cleanup`: http://codespeak.net/py/current/doc/bin.html
-
--------------------------------------------------------------
-OSError: ... cannot restore segment prot after reloc... Help?
--------------------------------------------------------------
-
-On Linux, if SELinux is enabled, you may get errors along the lines of
-"OSError: externmod.so: cannot restore segment prot after reloc: Permission
-denied." This is caused by a slight abuse of the C compiler during
-configuration, and can be disabled by running the following command with root
-privileges::
-
-    # setenforce 0
-
-This will disable SELinux's protection and allow PyPy to configure correctly.
-Be sure to enable it again if you need it!
-
-
-PyPy translation tool chain
-===========================
-
-----------------------------------------
-Can PyPy compile normal Python programs?
-----------------------------------------
-
-No, PyPy is not a Python compiler.
-
-In Python, it is mostly impossible to *prove* anything about the types
-that a program will manipulate by doing a static analysis.  It should be
-clear if you are familiar with Python, but if in doubt see [BRETT]_.
-
-What could be attempted is static "soft typing", where you would use a
-whole bunch of heuristics to guess what types are probably going to show
-up where.  In this way, you could compile the program into two copies of
-itself: a "fast" version and a "slow" version.  The former would contain
-many guards that allow it to fall back to the latter if needed.  That
-would be a wholly different project than PyPy, though.  (As far as we
-understand it, this is the approach that the LLVM__ group would like to
-see LLVM used for, so if you feel like working very hard and attempting
-something like this, check with them.)
-
-.. __: http://llvm.org/
-
-What PyPy contains is, on the one hand, a non-soft static type
-inferencer for RPython, which is a sublanguage that we defined just so
-that it's possible and not too hard to do that; and on the other hand,
-for the full Python language, we have an interpreter, and a JIT
-generator which can produce a Just-In-Time Compiler from the
-interpreter.  The resulting JIT works for the full Python language in a
-way that doesn't need type inference at all.
-
-For more motivation and details about our approach see also [D05.1]_,
-section 3.
-
-.. [BRETT] Brett Cannon,
-           Localized Type Inference of Atomic Types in Python,
-           http://www.ocf.berkeley.edu/~bac/thesis.pdf
-
-.. [D05.1] Compiling Dynamic Language Implementations,
-           Report from the PyPy project to the E.U.,
-           http://codespeak.net/svn/pypy/extradoc/eu-report/D05.1_Publish_on_translating_a_very-high-level_description.pdf
-
-.. _`PyPy's RPython`: 
-
-------------------------------
-What is this RPython language?
-------------------------------
-
-RPython is a restricted subset of the Python language.   It is used for 
-implementing dynamic language interpreters within the PyPy framework.  The
-restrictions are to ensure that type inference (and so, ultimately, translation
-to other languages) of RPython programs is possible. These restrictions only
-apply after the full import happens, so at import time arbitrary Python code can
-be executed. 
-
-The property of "being RPython" always applies to a full program, not to single
-functions or modules (the translation tool chain does a full program analysis).
-"Full program" in the context of "being RPython" is all the code reachable from
-an "entry point" function. The translation toolchain follows all calls
-recursively and discovers what belongs to the program and what does not.
-
-The restrictions that apply to programs to be RPython mostly limit the ability
-to mix types in arbitrary ways. RPython does not allow the use of two
-different types in the same variable. In this respect (and in some others) it
-feels a bit like Java. Other features not allowed in RPython are the usage of
-special methods (``__xxx__``) except ``__init__`` and ``__del__``, and the
-usage of reflection capabilities (e.g. ``__dict__``).
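As an illustration (a hypothetical snippet, not taken from the PyPy sources), the following is perfectly valid Python but violates the restrictions just described:

```python
# Valid Python, but not RPython: hypothetical examples of the
# restrictions described above.

def not_rpython(flag):
    if flag:
        x = 42           # here x is an int ...
    else:
        x = "forty-two"  # ... and here a str: mixing types in one
    return x             # variable is not allowed in RPython

class NotRPythonEither(object):
    def __add__(self, other):  # special methods other than __init__
        return other           # and __del__ are not supported

def rpython_friendly(flag):
    if flag:
        x = 42
    else:
        x = -1           # one consistent type per variable
    return x

assert not_rpython(False) == "forty-two"   # runs fine under CPython
assert rpython_friendly(True) == 42        # and would also translate
```

Under CPython both functions behave identically; only the translation toolchain's type inference distinguishes them.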
-
-Most existing standard library modules are not RPython, except for
-some functions in ``os``, ``math`` and ``time`` that are natively
-supported. In general it is quite unlikely that an existing Python
-program is by chance RPython; it is most likely that it would have to be
-heavily rewritten.
-To read more about the RPython limitations read the `RPython description`_.
-
-.. _`RPython description`: coding-guide.html#restricted-python
-
----------------------------------------------------------------
-Does RPython have anything to do with Zope's Restricted Python?
----------------------------------------------------------------
-
-No.  `Zope's RestrictedPython`_ aims to provide a sandboxed 
-execution environment for CPython.   `PyPy's RPython`_ is the implementation
-language for dynamic language interpreters.  However, PyPy also provides 
-a robust `sandboxed Python Interpreter`_. 
-
-.. _`sandboxed Python Interpreter`: sandbox.html
-.. _`Zope's RestrictedPython`: http://pypi.python.org/pypi/RestrictedPython
-
--------------------------------------------------------------------------
-Can I use PyPy and RPython to compile smaller parts of my Python program?
--------------------------------------------------------------------------
-
-No.  That would be possible, and we played with early attempts in that
-direction, but there are many delicate issues: for example, how the
-compiled and the non-compiled parts exchange data.  Supporting this in a
-nice way would be a lot of work.
-
-PyPy is certainly a good starting point for someone that would like to
-work in that direction.  Early attempts were dropped because they
-conflicted with refactorings that we needed in order to progress on the
-rest of PyPy; the currently active developers of PyPy have different
-priorities.  If someone wants to start working in that direction I
-imagine that they might get a (very little) bit of support from us,
-though.
-
-Alternatively, it's possible to write a mixed-module, i.e. an extension
-module for PyPy in RPython, which you can then import from your Python
-program when it runs on top of PyPy.  This is similar to writing a C
-extension module for CPython in terms of the investment of effort (without
-all the INCREF/DECREF mess, though).
-
-------------------------------------------------------
-What's the ``"NOT_RPYTHON"`` I see in some docstrings?
-------------------------------------------------------
-
-If you put "NOT_RPYTHON" into the docstring of a function and that function is
-found while trying to translate an RPython program, the translation process
-stops and reports this as an error. You can therefore mark functions as
-"NOT_RPYTHON" to make sure that they are never analyzed.
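For example (a hypothetical helper; the marker is simply the first line of the docstring):

```python
# Hypothetical example of the "NOT_RPYTHON" marker described above.

def dump_attrs(obj):
    """NOT_RPYTHON"""
    # Relies on __dict__ reflection, which RPython forbids; the marker
    # makes the translation process fail loudly if this function is
    # ever pulled into an RPython program.  Under plain CPython it
    # works as usual.
    return sorted(obj.__dict__)

class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

assert dump_attrs(Point(1, 2)) == ["x", "y"]
```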
-
-
--------------------------------------------------------------------
-Couldn't we simply take a Python syntax tree and turn it into Lisp?
--------------------------------------------------------------------
-
-It's not necessarily nonsense, but it's not really The PyPy Way.  It's
-pretty hard, without some kind of type inference, to translate, say this
-Python::
-
-    a + b
-
-into anything significantly more efficient than this Common Lisp::
-
-    (py:add a b)
-
-And making type inference possible is what RPython is all about.
-
-You could make ``#'py:add`` a generic function and see if a given CLOS
-implementation is fast enough to give a useful speed (but I think the
-coercion rules would probably drive you insane first).  -- mwh
-
---------------------------------------------
-Do I have to rewrite my programs in RPython?
---------------------------------------------
-
-No.  PyPy always runs your code in its own interpreter, which is a
-full and compliant Python 2.5 interpreter.  RPython_ is only the
-language in which parts of PyPy itself, and extension modules for it,
-are written.  The answer to whether something needs to be written as
-an extension module, apart from the "gluing to external libraries" reason, will
-change over time as the speed of normal Python code improves.
-
--------------------------
-Which backends are there?
--------------------------
-
-Currently, there are backends for C_, the CLI_, and the JVM_.
-All of these can translate the entire PyPy interpreter.
-To learn more about backends take a look at the `translation document`_.
-
-.. _C: translation.html#the-c-back-end
-.. _CLI: cli-backend.html
-.. _JVM: translation.html#genjvm
-.. _`translation document`: translation.html
-
-----------------------
-How do I compile PyPy?
-----------------------
-
-See the `getting-started`_ guide.
-
-.. _`how do I compile my own interpreters`:
-
--------------------------------------
-How do I compile my own interpreters?
--------------------------------------
-
-Start from the example of
-`pypy/translator/goal/targetnopstandalone.py`_, which you compile by
-typing::
-
-    python translate.py targetnopstandalone
-
-You can have a look at intermediate C source code, which is (at the
-moment) put in ``/tmp/usession-*/testing_1/testing_1.c``.  Of course,
-all the functions and stuff used directly and indirectly by your
-``entry_point()`` function have to be RPython_.
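For orientation, a target module has roughly this shape -- a hedged sketch modeled on ``targetnopstandalone``; the ``target``/``entry_point`` naming follows the convention ``translate.py`` expects, while the other details are approximate:

```python
import sys

def entry_point(argv):
    # everything reached (directly or indirectly) from here
    # must be RPython
    print("hello from RPython")
    return 0          # becomes the process exit code

def target(driver, args):
    # translate.py imports the target module and calls this
    # function to obtain the entry point to compile
    return entry_point, None
```

Untranslated, such a target module still runs as plain Python, which is handy for quick testing.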
-
-
-.. _`RPython`: coding-guide.html#rpython
-.. _`getting-started`: getting-started.html
-
-.. include:: _ref.txt
-
-----------------------------------------------------------
-Why does PyPy draw a Mandelbrot fractal while translating?
-----------------------------------------------------------
-
-Because it's fun.

diff --git a/pypy/doc/config/objspace.usemodules.exceptions.txt b/pypy/doc/config/objspace.usemodules.exceptions.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.exceptions.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'exceptions' module.
-This module is essential, included by default and should not be removed.

diff --git a/pypy/doc/discussion/gc.txt b/pypy/doc/discussion/gc.txt
deleted file mode 100644
--- a/pypy/doc/discussion/gc.txt
+++ /dev/null
@@ -1,77 +0,0 @@
-
-*Note: these things are experimental and are being implemented on the
-`io-improvements`_ branch*
-
-.. _`io-improvements`: http://codespeak.net/svn/pypy/branch/io-improvements
-
-=============
-GC operations
-=============
-
-This document tries to gather gc-related issues which are very recent
-or in development. It also tries to document needed gc refactorings
-and the expected performance of certain gc-related operations.
-
-Problem area
-============
-
-Since some of our gcs are moving, we at some point decided to simplify
-the issue of taking care of this by always copying the contents of
-data that goes to the C level. This incurs a performance penalty, also
-because some gcs do not move data around anyway.
-
-So we decided to introduce new operations which simplify issues
-regarding this.
-
-Pure gc operations
-==================
-
-(All available from rlib.rgc)
-
-* can_move(p) - returns a flag telling whether the pointer p may move.
-  Useful for example when you want to know whether a memcopy is safe.
-
-* malloc_nonmovable(TP, n=None) - tries to allocate a non-moving object.
-  If it succeeds, it returns the object; otherwise (for whatever reason)
-  it returns a null pointer. Never raises!
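The contracts of these two primitives can be modeled in plain Python (the real ones live in ``pypy.rlib.rgc`` and act on GC pointers; the classes here are invented purely for illustration):

```python
class MovingGC(object):
    """A GC that may relocate objects and cannot pin them."""
    def can_move(self, obj):
        return True
    def malloc_nonmovable(self, size):
        return None              # "null pointer": allocation not possible

class NonMovingGC(object):
    """A GC whose objects stay put."""
    def can_move(self, obj):
        return False
    def malloc_nonmovable(self, size):
        return bytearray(size)   # any allocation is already non-moving

def safe_to_memcpy(gc, obj):
    # handing a raw pointer to a C-level memcpy is only safe if the
    # object is guaranteed not to move underneath it
    return not gc.can_move(obj)
```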
-
-Usage patterns
-==============
-
-Usually these functions are used via helpers located in rffi. For things like
-os.write - first call get_nonmovingbuffer(data), which gives you a pointer
-suitable for passing to C, and finally call free_nonmovingbuffer.
-
-For os.read-like usage - you first call alloc_buffer (which allocates a
-buffer of the desired size, passable to C), afterwards call str_from_buffer,
-and finally keep_buffer_alive_until_here.
-
-String builder
-==============
-
-In Python, strings are immutable by design. In RPython this still holds true,
-but since we cooperate with the lower (C/POSIX) level, which has no notion of
-strings, we use buffers. A typical use case is to build a list of characters l
-and then ''.join(l) in order to get a string. This requires a lot of unnecessary
-copying, which incurs a performance penalty for operations such as string
-formatting. Hence the idea of a string builder: an object to which you can
-append strings or characters and which you afterwards build into a string.
-Ideally, this set of operations would not involve any copying whatsoever.
-
-Low level gc operations for string builder
-------------------------------------------
-
-* alloc_buffer(T, size) - allocates an Array(nolength=True) with the
-  possibility of later becoming of shape T
-
-* realloc_buffer(buf, newsize) - tries to shrink or enlarge the buffer buf.
-  Returns a new pointer (since it might involve copying)
-
-* build_buffer(T, buf) - creates a value of type T (previously passed to
-  alloc_buffer) from the buffer.
-
-Depending on the gc, these might be implemented naively (realloc always
-copies) or using C-level realloc, or in whatever clever way comes to mind.
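The intended contracts can be sketched in plain Python (the real versions act on low-level Array types and, with a clever GC, can avoid the copies that this naive model performs; all names and details below are illustrative only):

```python
def alloc_buffer(size):
    # a raw, resizable chunk of bytes
    return bytearray(size)

def realloc_buffer(buf, newsize):
    # naive implementation: always copies, so the caller must use
    # the *returned* buffer, exactly as the text above warns
    new = bytearray(newsize)
    n = min(len(buf), newsize)
    new[:n] = buf[:n]
    return new

def build_buffer(buf, length):
    # "build" the final immutable string from the first `length` bytes
    return bytes(buf[:length]).decode('ascii')
```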
-

diff --git a/pypy/doc/config/translation.taggedpointers.txt b/pypy/doc/config/translation.taggedpointers.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.taggedpointers.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Enable tagged pointers. This option is mostly useful for the Smalltalk and
-Prolog interpreters. For the Python interpreter the option
-:config:`objspace.std.withsmallint` should be used.

diff --git a/pypy/doc/config/objspace.std.sharesmallstr.txt b/pypy/doc/config/objspace.std.sharesmallstr.txt
deleted file mode 100644

diff --git a/pypy/doc/config/objspace.usemodules._locale.txt b/pypy/doc/config/objspace.usemodules._locale.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._locale.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Use the '_locale' module.
-This module runs _locale written in RPython (instead of the ctypes version).
-It's not really finished yet; it's enabled by default on Windows.

diff --git a/pypy/doc/jit/_ref.txt b/pypy/doc/jit/_ref.txt
deleted file mode 100644

diff --git a/pypy/doc/config/translation.log.txt b/pypy/doc/config/translation.log.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.log.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Include debug prints in the translation.
-
-These must be enabled by setting the PYPYLOG environment variable.
-The exact set of features supported by PYPYLOG is described in
-pypy/translation/c/src/debug.h.

diff --git a/pypy/doc/config/translation.profopt.txt b/pypy/doc/config/translation.profopt.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.profopt.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Use GCC's profile-guided optimizations. This option specifies the
-arguments with which to call pypy-c (and in general the translated
-RPython program) to gather profile data. Example for pypy-c: "-c 'from
-richards import main;main(); from test import pystone;
-pystone.main()'"

diff --git a/pypy/doc/config/objspace.usemodules.rbench.txt b/pypy/doc/config/objspace.usemodules.rbench.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.rbench.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Use the built-in 'rbench' module.
-This module contains geninterpreted versions of pystone and richards,
-so it is useful to measure the interpretation overhead of the various
-pypy-\*.

diff --git a/pypy/doc/config/translation.backendopt.profile_based_inline_threshold.txt b/pypy/doc/config/translation.backendopt.profile_based_inline_threshold.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.profile_based_inline_threshold.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Weight threshold used to decide whether to inline flowgraphs.
-This is for profile-based inlining (:config:`translation.backendopt.profile_based_inline`).

diff --git a/pypy/doc/getting-started-dev.txt b/pypy/doc/getting-started-dev.txt
deleted file mode 100644
--- a/pypy/doc/getting-started-dev.txt
+++ /dev/null
@@ -1,425 +0,0 @@
-===============================================================================
-PyPy - Getting Started with the Translation Toolchain and Development Process
-===============================================================================
-
-.. contents::
-.. sectnum::
-
-.. _`try out the translator`:
-
-Trying out the translator
-------------------------- 
-
-The translator is a tool based on the PyPy interpreter which can translate
-sufficiently static Python programs into low-level code (in particular it can
-be used to translate the `full Python interpreter`_). To be able to use it
-you need to (if you want to look at the flowgraphs, which you obviously
-should):
-
-  * Download and install Pygame_.
-
-  * Download and install `Dot Graphviz`_ (optional if you have an internet
-    connection: the flowgraph viewer then connects to
-    codespeak.net and lets it convert the flowgraph by a graphviz server).
-
-To start the interactive translator shell do::
-
-    cd pypy
-    python bin/translatorshell.py
-
-Test snippets of translatable code are provided in the file
-``pypy/translator/test/snippet.py``, which is imported under the name
-``snippet``.  For example::
-
-    >>> t = Translation(snippet.is_perfect_number)
-    >>> t.view()
-        
-After that, the graph viewer pops up, which lets you interactively inspect the
-flow graph. To move around, click on something that you want to inspect.
-To get help about how to use it, press 'H'. To close it again, press 'Q'.
-
-Trying out the type annotator
-+++++++++++++++++++++++++++++
-
-We have a type annotator that can completely infer types for functions like
-``is_perfect_number`` (as well as for much larger examples)::
-
-    >>> t.annotate([int])
-    >>> t.view()
-
-Move the mouse over variable names (in red) to see their inferred types.
-
-
-Translating the flow graph to C code
-++++++++++++++++++++++++++++++++++++
-
-The graph can be turned into C code::
-
-   >>> t.rtype()
-   >>> f = t.compile_c()
-
-The first command replaces the operations with other low level versions that
-only use low level types that are available in C (e.g. int). To try out the
-compiled version::
-
-   >>> f(5)
-   False
-   >>> f(6)
-   True
-
-Translating the flow graph to CLI or JVM code
-+++++++++++++++++++++++++++++++++++++++++++++
-
-PyPy also contains a `CLI backend`_ and JVM backend which
-can translate flow graphs into .NET executables or a JVM jar
-file respectively.  Both are able to translate the entire
-interpreter.  You can try out the CLI and JVM backends
-from the interactive translator shells as follows::
-
-    >>> def myfunc(a, b): return a+b
-    ... 
-    >>> t = Translation(myfunc)
-    >>> t.annotate([int, int])
-    >>> f = t.compile_cli() # or compile_jvm()
-    >>> f(4, 5)
-    9
-
-The object returned by ``compile_cli`` or ``compile_jvm``
-is a wrapper around the real
-executable: the parameters are passed as command line arguments, and
-the returned value is read from the standard output.  
-
-Once you have compiled the snippet, you can also try to launch the
-executable directly from the shell. You will find the 
-executable in one of the ``/tmp/usession-*`` directories::
-
-    # For CLI:
-    $ mono /tmp/usession-trunk-<username>/main.exe 4 5
-    9
-
-    # For JVM:
-    $ java -cp /tmp/usession-trunk-<username>/pypy pypy.Main 4 5
-    9
-
-To translate and run for the CLI you must have the SDK installed: Windows
-users need the `.NET Framework SDK 2.0`_, while Linux and Mac users
-can use Mono_.  To translate and run for the JVM you must have a JDK 
-installed (at least version 5) and ``java``/``javac`` on your path.
-
-A slightly larger example
-+++++++++++++++++++++++++
-
-There is a small-to-medium demo showing the translator and the annotator::
-
-    cd demo
-    ../pypy/translator/goal/translate.py --view --annotate bpnn.py
-
-This causes ``bpnn.py`` to display itself as a call graph and class
-hierarchy.  Clicking on functions shows the flow graph of the particular
-function.  Clicking on a class shows the attributes of its instances.  All
-this information (call graph, local variables' types, attributes of
-instances) is computed by the annotator.
-
-To translate this example to C code (compiled to the executable ``bpnn-c``),
-simply type::
-
-    ../pypy/translator/goal/translate.py bpnn.py
-
-
-Translating Full Programs
-+++++++++++++++++++++++++
-
-To translate full RPython programs, there is the script ``translate.py`` in
-``translator/goal``. An example is a slightly changed version of
-Pystone::
-
-    cd pypy/translator/goal
-    python translate.py targetrpystonedalone
-
-This will produce the executable "targetrpystonedalone-c".
-
-The largest example of this process is to translate the `full Python
-interpreter`_. There is also an FAQ about how to set up this process for `your
-own interpreters`_.
-
-.. _`your own interpreters`: faq.html#how-do-i-compile-my-own-interpreters
-
-.. _`start reading sources`: 
-
-Where to start reading the sources
----------------------------------- 
-
-PyPy is made from parts that are relatively independent from each other.
-You should start looking at the part that attracts you most (all paths are
-relative to the PyPy top level directory).  You may look at our `directory reference`_ 
-or start off at one of the following points:
-
-*  `pypy/interpreter`_ contains the bytecode interpreter: bytecode dispatcher
-   in pyopcode.py_, frame and code objects in eval.py_ and pyframe.py_,
-   function objects and argument passing in function.py_ and argument.py_,
-   the object space interface definition in baseobjspace.py_, modules in
-   module.py_ and mixedmodule.py_.  Core types supporting the bytecode 
-   interpreter are defined in typedef.py_.
-
-*  `pypy/interpreter/pyparser`_ contains a recursive descent parser,
-   and input data files that allow it to parse both Python 2.3 and 2.4
-   syntax.  Once the input data has been processed, the parser can be
-   translated by the above machinery into efficient code.
- 
-*  `pypy/interpreter/astcompiler`_ contains the compiler.  This
-   contains a modified version of the compiler package from CPython
-   that fixes some bugs and is translatable.  That the compiler and
-   parser are translatable is new in 0.8.0 and it makes using the
-   resulting binary interactively much more pleasant.
-
-*  `pypy/objspace/std`_ contains the `Standard object space`_.  The main file
-   is objspace.py_.  For each type, the files ``xxxtype.py`` and
-   ``xxxobject.py`` contain respectively the definition of the type and its
-   (default) implementation.
-
-*  `pypy/objspace`_ contains a few other object spaces: the thunk_,
-   trace_ and flow_ object spaces.  The latter is a relatively short piece
-   of code that builds the control flow graphs when the bytecode interpreter
-   runs in it.
-
-*  `pypy/translator`_ contains the code analysis and generation stuff.
-   Start reading from translator.py_, from which it should be easy to follow
-   the pieces of code involved in the various translation phases.
-
-*  `pypy/annotation`_ contains the data model for the type annotation that
-   can be inferred about a graph.  The graph "walker" that uses this is in
-   `pypy/annotation/annrpython.py`_.
-
-*  `pypy/rpython`_ contains the code of the RPython typer. The typer transforms
-   annotated flow graphs in a way that makes them very similar to C code so
-   that they can be easy translated. The graph transformations are controlled
-   by the stuff in `pypy/rpython/rtyper.py`_. The object model that is used can
-   be found in `pypy/rpython/lltypesystem/lltype.py`_. For each RPython type
-   there is a file rxxxx.py that contains the low level functions needed for
-   this type.
-
-*  `pypy/rlib`_ contains the RPython standard library, things that you can
-   use from RPython.
-
-.. _optionaltool: 
-
-
-Running PyPy's unit tests
--------------------------
-
-PyPy development always was and still is thoroughly test-driven.
-We use the flexible `py.test testing tool`_ which you can `install independently
-<http://pytest.org/getting-started.html>`_ and use independently
-from PyPy for other projects.
-
-The PyPy source tree comes with an inlined version of ``py.test``
-which you can invoke by typing::
-
-    python pytest.py -h
-
-This is usually equivalent to using an installed version::
-
-    py.test -h
-
-If you encounter problems with the installed version,
-make sure you have the correct version installed; you
-can find it out with the ``--version`` switch.
-
-Now on to running some tests.  PyPy has many different test directories
-and you can use shell completion to point at directories or files::
-
-    py.test pypy/interpreter/test/test_pyframe.py
-
-    # or for running tests of a whole subdirectory
-    py.test pypy/interpreter/
-
-See `py.test usage and invocations`_ for some more generic info 
-on how you can run tests.
-
-Beware of trying to run "all" pypy tests by pointing to the root
-directory or even the top level subdirectory ``pypy``.  It takes
-hours, uses huge amounts of RAM, and is not recommended.
-
-To run CPython regression tests you can point to the ``lib-python``
-directory::
-
-    py.test lib-python/2.7.0/test/test_datetime.py
-
-This will usually take a long time because this will run
-the PyPy Python interpreter on top of CPython.  On the plus
-side, it's usually still faster than doing a full translation
-and running the regression test with the translated PyPy Python
-interpreter.
-
-.. _`py.test testing tool`: http://pytest.org
-.. _`py.test usage and invocations`: http://pytest.org/usage.html#usage
-
-Special Introspection Features of the Untranslated Python Interpreter
----------------------------------------------------------------------
-
-If you are interested in the inner workings of the PyPy Python interpreter,
-there are some features of the untranslated Python interpreter that allow you
-to introspect its internals.
-
-Interpreter-level console
-+++++++++++++++++++++++++
-
-Start an untranslated Python interpreter via::
-
-    python pypy-svn/pypy/bin/py.py
-
-If you then press
-<Ctrl-C> on the console, you enter the interpreter-level console, a
-usual CPython console.  You can then access internal objects of PyPy
-(e.g. the `object space`_) and any variables you have created at the PyPy
-prompt with the prefix ``w_``::
-
-    >>>> a = 123
-    >>>> <Ctrl-C>
-    *** Entering interpreter-level console ***
-    >>> w_a
-    W_IntObject(123)
-
-The mechanism works in both directions. If you define a variable with the ``w_`` prefix on the interpreter-level, you will see it on the app-level::
-
-    >>> w_l = space.newlist([space.wrap(1), space.wrap("abc")])
-    >>> <Ctrl-D>
-    *** Leaving interpreter-level console ***
-
-    KeyboardInterrupt
-    >>>> l
-    [1, 'abc']
-
-.. _`object space`: objspace.html
-
-Note that the prompt of the interpreter-level console is only '>>>' since
-it runs at the CPython level. If you want to return to PyPy, press <Ctrl-D> (under
-Linux) or <Ctrl-Z>, <Enter> (under Windows).
-
-You may be interested in reading more about the distinction between
-`interpreter-level and app-level`_.
-
-.. _`interpreter-level and app-level`: coding-guide.html#interpreter-level
-
-.. _`trace example`: 
-
-Tracing bytecode and operations on objects
-++++++++++++++++++++++++++++++++++++++++++ 
-
-You can use the trace object space to monitor the interpretation
-of bytecodes in connection with object space operations.  To enable 
-it, set ``__pytrace__=1`` on the interactive PyPy console:: 
-
-    >>>> __pytrace__ = 1
-    Tracing enabled
-    >>>> a = 1 + 2
-    |- <<<< enter <inline>a = 1 + 2 @ 1 >>>>
-    |- 0    LOAD_CONST    0 (W_IntObject(1))
-    |- 3    LOAD_CONST    1 (W_IntObject(2))
-    |- 6    BINARY_ADD
-      |-    add(W_IntObject(1), W_IntObject(2))   -> W_IntObject(3)
-    |- 7    STORE_NAME    0 (a)
-      |-    hash(W_StringObject('a'))   -> W_IntObject(-468864544)
-      |-    int_w(W_IntObject(-468864544))   -> -468864544
-    |-10    LOAD_CONST    2 (<W_NoneObject()>)
-    |-13    RETURN_VALUE
-    |- <<<< leave <inline>a = 1 + 2 @ 1 >>>>
-
-Demos
--------
-
-The `demo/`_ directory contains examples of various aspects of PyPy,
-ranging from running regular Python programs (that we used as compliance goals)
-through experimental distribution mechanisms to examples translating
-sufficiently static programs into low level code.
-
-Additional Tools for running (and hacking) PyPy 
------------------------------------------------
-
-We use some optional tools for developing PyPy. They are not required to run
-the basic tests or to get an interactive PyPy prompt, but they help to
-understand and debug PyPy, especially during the translation process.
-
-graphviz & pygame for flow graph viewing (highly recommended)
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-
-graphviz and pygame are both necessary if you
-want to look at generated flow graphs: 
-
-	graphviz: http://www.graphviz.org/Download.php 
-
-	pygame: http://www.pygame.org/download.shtml
-
-CTypes on Python 2.4
-++++++++++++++++++++++++++++
-
-`ctypes`_ is included in CPython 2.5 and higher.  CPython 2.4 users need to
-install it if they want to run low-level tests. See
-the `download page of ctypes`_.
-
-.. _`download page of ctypes`: http://sourceforge.net/project/showfiles.php?group_id=71702
-.. _`ctypes`: http://starship.python.net/crew/theller/ctypes/
-
-.. _`py.test`:
-
-py.test and the py lib 
-+++++++++++++++++++++++
-
-The `py.test testing tool`_ drives all our testing needs.
-
-We use the `py library`_ for filesystem path manipulations, terminal
-writing, logging and some other support functionality.
-
-You don't necessarily need to install these two libraries because
-we also ship them inlined in the PyPy source tree.
-
-Getting involved 
------------------
-
-PyPy employs an open development process.  You are invited to join our
-`pypy-dev mailing list`_ or look at the other `contact
-possibilities`_.  Usually we give out commit rights fairly liberally, so if you
-want to do something with PyPy, you can become a committer. We also hold
-coding sprints, which are announced separately and often happen around
-Python conferences such as EuroPython or PyCon. Upcoming events are
-usually announced on `the blog`_.
-
-.. _`full Python interpreter`: getting-started-python.html
-.. _`the blog`: http://morepypy.blogspot.com
-.. _`pypy-dev mailing list`: http://codespeak.net/mailman/listinfo/pypy-dev
-.. _`contact possibilities`: index.html
-
-.. _`py library`: http://pylib.org
-
-.. _`Spidermonkey`: http://www.mozilla.org/js/spidermonkey/
-
-.. _`.NET Framework SDK 2.0`: http://msdn.microsoft.com/netframework/downloads/updates/default.aspx
-.. _Mono: http://www.mono-project.com/Main_Page
-.. _`CLI backend`: cli-backend.html
-.. _clr: clr-module.html
-
-.. _`Dot Graphviz`:           http://www.graphviz.org/
-.. _Pygame:                 http://www.pygame.org/
-.. _pyopcode.py:            http://codespeak.net/svn/pypy/trunk/pypy/interpreter/pyopcode.py
-.. _eval.py:                http://codespeak.net/svn/pypy/trunk/pypy/interpreter/eval.py
-.. _pyframe.py:             http://codespeak.net/svn/pypy/trunk/pypy/interpreter/pyframe.py
-.. _function.py:            http://codespeak.net/svn/pypy/trunk/pypy/interpreter/function.py
-.. _argument.py:            http://codespeak.net/svn/pypy/trunk/pypy/interpreter/argument.py
-.. _baseobjspace.py:        http://codespeak.net/svn/pypy/trunk/pypy/interpreter/baseobjspace.py
-.. _module.py:              http://codespeak.net/svn/pypy/trunk/pypy/interpreter/module.py
-.. _mixedmodule.py:          http://codespeak.net/svn/pypy/trunk/pypy/interpreter/mixedmodule.py
-.. _typedef.py:             http://codespeak.net/svn/pypy/trunk/pypy/interpreter/typedef.py
-.. _Standard object space:  objspace.html#the-standard-object-space
-.. _objspace.py:            ../../pypy/objspace/std/objspace.py
-.. _thunk:                  ../../pypy/objspace/thunk.py
-.. _trace:                  ../../pypy/objspace/trace.py
-.. _flow:                   ../../pypy/objspace/flow/
-.. _translator.py:          ../../pypy/translator/translator.py
-.. _mailing lists:          index.html
-.. _documentation:          docindex.html 
-.. _unit tests:             coding-guide.html#test-design
-
-.. _`directory reference`: docindex.html#directory-reference
-
-.. include:: _ref.txt
-

diff --git a/pypy/doc/discussion/finalizer-order.txt b/pypy/doc/discussion/finalizer-order.txt
deleted file mode 100644
--- a/pypy/doc/discussion/finalizer-order.txt
+++ /dev/null
@@ -1,166 +0,0 @@
-Ordering finalizers in the SemiSpace GC
-=======================================
-
-Goal
-----
-
-After a collection, the SemiSpace GC should call the finalizers on
-*some* of the objects that have one and that have become unreachable.
-Basically, if there is a reference chain from an object a to an object b
-then it should not call the finalizer for b immediately, but just keep b
-alive and try again to call its finalizer after the next collection.
-
-This basic idea fails when there are cycles.  It's not a good idea to
-keep the objects alive forever or to never call any of the finalizers.
-The model we came up with is that in this case, we could just call the
-finalizer of one of the objects in the cycle -- but only, of course, if
-there is no other object outside the cycle that has a finalizer and a
-reference to the cycle.
-
-More precisely, given the graph of references between objects::
-
-    for each strongly connected component C of the graph:
-        if C has at least one object with a finalizer:
-            if there is no object outside C which has a finalizer and
-            indirectly references the objects in C:
-                mark one of the objects of C that has a finalizer
-                copy C and all objects it references to the new space
-
-    for each marked object:
-        detach the finalizer (so that it's not called more than once)
-        call the finalizer
-
-Algorithm
----------
-
-During deal_with_objects_with_finalizers(), each object x can be in 4
-possible states::
-
-    state[x] == 0:  unreachable
-    state[x] == 1:  (temporary state, see below)
-    state[x] == 2:  reachable from any finalizer
-    state[x] == 3:  alive
-
-Initially, objects are in state 0 or 3 depending on whether they have
-been copied or not by the regular sweep done just before.  The invariant
-is that if there is a reference from x to y, then state[y] >= state[x].
-
-The state 2 is used for objects that are reachable from a finalizer but
-that may be in the same strongly connected component as the finalizer.
-The state of these objects goes to 3 when we prove that they can be
-reached from a finalizer which is definitely not in the same strongly
-connected component.  Finalizers on objects with state 3 must not be
-called.
-
-Let closure(x) be the list of objects reachable from x, including x
-itself.  Pseudo-code (high-level) to get the list of marked objects::
-
-    marked = []
-    for x in objects_with_finalizers:
-        if state[x] != 0:
-            continue
-        marked.append(x)
-        for y in closure(x):
-            if state[y] == 0:
-                state[y] = 2
-            elif state[y] == 2:
-                state[y] = 3
-    for x in marked:
-        assert state[x] >= 2
-        if state[x] != 2:
-            marked.remove(x)
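This high-level pass can be exercised directly in plain Python; the reference graph, the state table and the function names below are invented test data (the real GC keeps the state in object headers, not in dicts):

```python
def closure(x, references):
    # all objects reachable from x, including x itself
    seen, stack = set(), [x]
    while stack:
        y = stack.pop()
        if y not in seen:
            seen.add(y)
            stack.extend(references.get(y, ()))
    return seen

def mark_for_finalization(objects_with_finalizers, references, state):
    marked = []
    for x in objects_with_finalizers:
        if state[x] != 0:
            continue
        marked.append(x)
        for y in closure(x, references):
            if state[y] == 0:
                state[y] = 2
            elif state[y] == 2:
                state[y] = 3
    # the final loop of the pseudo-code: keep only the objects
    # whose state is still exactly 2
    return [x for x in marked if state[x] == 2]
```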
-
-This does the right thing independently of the order in which the
-objects_with_finalizers are enumerated.  First assume that [x1, .., xn]
-are all in the same unreachable strongly connected component; no object
-with finalizer references this strongly connected component from
-outside.  Then:
-
-* when x1 is processed, state[x1] == .. == state[xn] == 0 independently
-  of whatever else we did before.  So x1 gets marked and we set
-  state[x1] = .. = state[xn] = 2.
-
-* when x2, ... xn are processed, their state is != 0 so we do nothing.
-
-* in the final loop, only x1 is marked and state[x1] == 2 so it stays
-  marked.
-
-Now, let's assume that x1 and x2 are not in the same strongly connected
-component and there is a reference path from x1 to x2.  Then:
-
-* if x1 is enumerated before x2, then x2 is in closure(x1) and so its
-  state becomes >= 2 when we process x1.  When we process x2 later
-  we just skip it (the "continue" line) and so it doesn't get marked.
-
-* if x2 is enumerated before x1, then when we process x2 we mark it and
-  set its state to >= 2 (because x2 is in closure(x2)), and then when we
-  process x1 we set state[x2] == 3.  So in the final loop x2 gets
-  removed from the "marked" list.
-
-I think this proves that the algorithm is doing what we want.
-
-The next step is to remove the use of closure() in the algorithm in such
-a way that the new algorithm has a reasonable performance -- linear in
-the number of objects whose state it manipulates::
-
-    marked = []
-    for x in objects_with_finalizers:
-        if state[x] != 0:
-            continue
-        marked.append(x)
-        recursing on the objects y starting from x:
-            if state[y] == 0:
-                state[y] = 1
-                follow y's children recursively
-            elif state[y] == 2:
-                state[y] = 3
-                follow y's children recursively
-            else:
-                don't need to recurse inside y
-        recursing on the objects y starting from x:
-            if state[y] == 1:
-                state[y] = 2
-                follow y's children recursively
-            else:
-                don't need to recurse inside y
-    for x in marked:
-        assert state[x] >= 2
-        if state[x] != 2:
-            marked.remove(x)
-
-In this algorithm we follow the children of each object at most 3 times,
-when the state of the object changes from 0 to 1 to 2 to 3.  In a visit
-that doesn't change the state of an object, we don't follow its children
-recursively.
-
-In practice, in the SemiSpace, Generation and Hybrid GCs, we can encode
-the 4 states with a single extra bit in the header:
-
-      =====  =============  ========  ====================
-      state  is_forwarded?  bit set?  bit set in the copy?
-      =====  =============  ========  ====================
-        0      no             no        n/a
-        1      no             yes       n/a
-        2      yes            yes       yes
-        3      yes          whatever    no
-      =====  =============  ========  ====================
-
-So the loop above that does the transition from state 1 to state 2 is
-really just a copy(x) followed by scan_copied().  We must also clear the
-bit in the copy at the end, to clean up before the next collection
-(which means recursively bumping the state from 2 to 3 in the final
-loop).
-
-In the MiniMark GC, the objects don't move (apart from when they are
-copied out of the nursery), but we use the flag GCFLAG_VISITED to mark
-objects that survive, so we can also have a single extra bit for
-finalizers:
-
-      =====  ==============  ============================
-      state  GCFLAG_VISITED  GCFLAG_FINALIZATION_ORDERING
-      =====  ==============  ============================
-        0        no              no
-        1        no              yes
-        2        yes             yes
-        3        yes             no
-      =====  ==============  ============================

diff --git a/pypy/doc/config/objspace.std.withdictmeasurement.txt b/pypy/doc/config/objspace.std.withdictmeasurement.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.withdictmeasurement.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Internal option.
-
-.. internal

diff --git a/pypy/doc/how-to-release.txt b/pypy/doc/how-to-release.txt
deleted file mode 100644
--- a/pypy/doc/how-to-release.txt
+++ /dev/null
@@ -1,54 +0,0 @@
-Making a PyPy Release
-=======================
-
-Overview
----------
-
-As a general rule, setting up issues in the tracker for the items here helps
-avoid forgetting things. A set of todo files may also work.
-
-Check and prioritize all issues for the release, postpone some if necessary,
-and create new issues as needed. A meeting (or meetings) should be organized
-to decide which items are priorities and should go into the release.
-
-An important thing is to get the documentation into an up-to-date state!
-
-Release Steps
-----------------
-
-* at code freeze make a release branch under
-  http://codespeak.net/svn/pypy/release/x.y(.z). IMPORTANT: bump the
-  pypy version number in module/sys/version.py and in
-  module/cpyext/include/patchlevel.h, notice that the branch
-  will capture the revision number of this change for the release;
-  some of the next updates may be done before or after branching; make
-  sure things are ported back to the trunk and to the branch as
-  necessary
-* update pypy/doc/contributor.txt (and possibly LICENSE)
-* update README
-* go to pypy/tool/release and run:
-  force-builds.py /release/<release branch>
-* wait for builds to complete, make sure there are no failures
-* run pypy/tool/release/make_release.py, this will build necessary binaries
-  and upload them to pypy.org
-
-  The following binaries should be built (however, we need more buildbots):
-    JIT: windows, linux, os/x
-    no JIT: windows, linux, os/x
-    sandbox: linux, os/x
-    stackless: windows, linux, os/x
-
-* write release announcement pypy/doc/release-x.y(.z).txt
-  the release announcement should contain a direct link to the download page
-* update pypy.org (under extradoc/pypy.org), rebuild and commit
-
-* update http://codespeak.net/pypy/trunk:
-   code0> chown -R yourname:users /www/codespeak.net/htdocs/pypy/trunk
-   local> cd ..../pypy/doc && py.test
-   local> cd ..../pypy
-   local> rsync -az doc codespeak.net:/www/codespeak.net/htdocs/pypy/trunk/pypy/
-
-* post announcement on morepypy.blogspot.com
-* send announcements to pypy-dev, python-list,
-  python-announce, python-dev ...

diff --git a/pypy/doc/config/objspace.usemodules.select.txt b/pypy/doc/config/objspace.usemodules.select.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.select.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'select' module. 
-This module is expected to be fully working.

diff --git a/pypy/doc/config/objspace.std.getattributeshortcut.txt b/pypy/doc/config/objspace.std.getattributeshortcut.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.std.getattributeshortcut.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Performance only: track types that override __getattribute__.

diff --git a/pypy/doc/config/objspace.usemodules.bz2.txt b/pypy/doc/config/objspace.usemodules.bz2.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.bz2.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'bz2' module. 
-This module is expected to be working and is included by default.

diff --git a/pypy/doc/discussion/emptying-the-malloc-zoo.txt b/pypy/doc/discussion/emptying-the-malloc-zoo.txt
deleted file mode 100644
--- a/pypy/doc/discussion/emptying-the-malloc-zoo.txt
+++ /dev/null
@@ -1,40 +0,0 @@
-.. coding: utf-8
-
-Emptying the malloc zoo
-=======================
-
-Around the end-of-the-EU-project time there were two major areas of
-obscurity in the memory management area:
-
- 1. The confusing set of operations that the low-level backends are
-    expected to implement.
-
- 2. The related, but slightly different, confusion of the various
-    "flavours" of malloc: what's the difference between
-    lltype.malloc(T, flavour='raw') and llmemory.raw_malloc(sizeof(T))?
-
-At the post-ep2007 sprint, Samuele and Michael attacked the first
-problem a bit: making the Boehm GC transformer only require three
-simple operations of the backend.  This could be extended still
-further by having the gc transformer use rffi to insert calls to the
-relevant Boehm functions^Wmacros, and then the backend wouldn't need
-to know anything about Boehm at all (but... LLVM).
-
-A potential next step is to work out what we want the "llpython"
-interface to memory management to be.
-
-There are various use cases:
-
-**lltype.malloc(T) -- T is a fixed-size GC container**
-
-  This is the default case.  Non-pointers inside the allocated memory
-  will not be zeroed.  The object will be managed by the GC, no
-  deallocation required.
-
-**lltype.malloc(T, zero=True) -- T is a GC container**
-
-  As above, but all fields will be cleared.
-
-**lltype.malloc(U, raw=True) -- U is not a GC container**
-
-  Blah.

diff --git a/pypy/doc/config/objspace.usemodules._md5.txt b/pypy/doc/config/objspace.usemodules._md5.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._md5.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-Use the built-in '_md5' module.
-This module is expected to be working and is included by default.
-There is also a pure Python version in lib_pypy which is used
-if the built-in is disabled, but it is several orders of magnitude 
-slower.

diff --git a/pypy/doc/config/translation.cli.txt b/pypy/doc/config/translation.cli.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.cli.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-..  intentionally empty

diff --git a/pypy/doc/config/translation.platform.txt b/pypy/doc/config/translation.platform.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.platform.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-select the target platform, in case of cross-compilation

diff --git a/pypy/doc/config/translation.backendopt.mallocs.txt b/pypy/doc/config/translation.backendopt.mallocs.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.mallocs.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-This optimization enables "malloc removal", which "explodes"
-allocations of structures which do not escape from the function they
-are allocated in into one or more additional local variables.
-
-An example.  Consider this rather unlikely seeming code::
-
-    class C:
-        pass
-    def f(y):
-        c = C()
-        c.x = y
-        return c.x
-
-Malloc removal will spot that the ``C`` object can never leave ``f``
-and replace the above with code like this::
-
-    def f(y):
-        _c__x = y
-        return _c__x
-
-It is rare for code to be directly written in a way that allows this
-optimization to be useful, but inlining often results in opportunities
-for its use (and indeed, this is one of the main reasons PyPy does its
-own inlining rather than relying on the C compilers).
-
-For much more information about this and other optimizations you can
-read section 4.1 of the technical report on "Massive Parallelism and
-Translation Aspects" which you can find on the `Technical reports page
-<../index-report.html>`__.
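The before/after shapes described above can be checked for semantic equivalence with plain Python. This is only an illustration of what the transformation does; the real optimization operates on flowgraphs, not source code:

```python
class C:
    pass

def f_before(y):
    c = C()          # allocation that never escapes f_before
    c.x = y
    return c.x

def f_after(y):
    _c__x = y        # the field becomes a local variable; no allocation
    return _c__x

# both versions compute the same result for any input
for y in (0, 42, "spam"):
    assert f_before(y) == f_after(y)
```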

diff --git a/pypy/doc/config/objspace.logbytecodes.txt b/pypy/doc/config/objspace.logbytecodes.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.logbytecodes.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Internal option.
-
-.. internal

diff --git a/pypy/doc/config/translation.dump_static_data_info.txt b/pypy/doc/config/translation.dump_static_data_info.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.dump_static_data_info.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Dump information about static prebuilt constants, to the file
-TARGETNAME.staticdata.info in the /tmp/usession-... directory.  This file can
-be later inspected using the script ``bin/reportstaticdata.py``.

diff --git a/pypy/doc/config/objspace.usemodules.zlib.txt b/pypy/doc/config/objspace.usemodules.zlib.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.zlib.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the 'zlib' module. 
-This module is expected to be working and is included by default.

diff --git a/pypy/doc/config/translation.backendopt.inline_heuristic.txt b/pypy/doc/config/translation.backendopt.inline_heuristic.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.inline_heuristic.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Internal option. Switch to a different weight heuristic for inlining.
-This is for basic inlining (:config:`translation.backendopt.inline`).
-
-.. internal

diff --git a/pypy/doc/distribution.txt b/pypy/doc/distribution.txt
deleted file mode 100644
--- a/pypy/doc/distribution.txt
+++ /dev/null
@@ -1,111 +0,0 @@
-
-========================
-lib/distributed features
-========================
-
-The 'distributed' library is an attempt to provide transparent, lazy
-access to remote objects. This is accomplished using
-`transparent proxies`_, entirely in application-level code (i.e. as a pure
-Python module).
-
-The implementation uses an RPC-like protocol, which accesses
-only members of objects, rather than whole objects. This means it
-does not rely on objects being pickleable, nor on having the same
-source code available on both sides. On each call, only the members
-that are used on the client side are retrieved, objects which
-are not used are merely references to their remote counterparts.
-
-As an example, let's imagine we have a remote object, locally available
-under the name `x`. Now we call::
-
-    >>>> x.foo(1, [1,2,3], y)
-
-where y is some instance of a local, user-created class.
-
-Under the hood, x.\_\_getattribute\_\_ is called, with argument 'foo'. In the
-\_\_getattribute\_\_ implementation, the 'foo' attribute is requested, and the
-remote side replies by providing a bound method. On the client this bound
-method appears as a remote reference: this reference is called with a remote
-reference to x as self, the integer 1 which is copied as a primitive type, a
-reference to a list and a reference to y. The remote side receives this call,
-processes it as a call to the bound method x.foo, where 'x' is resolved as a
-local object, 1 as an immutable primitive, [1,2,3] as a reference to a mutable
-primitive and y as a reference to a remote object. If the type of y is not
-known on the remote side, it is faked with just about enough shape (XXX?!?) to
-be able to perform the required operations.  The contents of the list are
-retrieved when they're needed.
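The client-side mechanism described above can be sketched in a few lines. This is a toy, in-process stand-in for the real network protocol: `RemoteRef`, the object-id table, and all other names are hypothetical, not the library's API:

```python
# Toy sketch: attribute access on a RemoteRef is intercepted by
# __getattribute__ and turned into a request to a "remote" side, which
# here is just a local dict mapping object ids to objects.

class RemoteRef(object):
    def __init__(self, remote_objects, oid):
        object.__setattr__(self, '_remote', remote_objects)
        object.__setattr__(self, '_oid', oid)

    def __getattribute__(self, name):
        remote = object.__getattribute__(self, '_remote')
        oid = object.__getattribute__(self, '_oid')
        # "send" a getattr request; the reply is the attribute value
        # (here a bound method, as in the x.foo example above)
        return getattr(remote[oid], name)

class Point(object):
    def __init__(self, x):
        self.x = x
    def foo(self, n):
        return self.x + n

remote_objects = {1: Point(x=41)}
x = RemoteRef(remote_objects, 1)
print(x.foo(1))   # attribute 'foo' fetched "remotely", then called -> 42
```

The real implementation additionally has to decide, per argument, whether to copy, reference, or fake each value, as the copying rules below describe.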
-
-An advantage of this approach is that a user can have remote references to
-internal interpreter types, like frames, code objects and tracebacks. In a demo
-directory there is an example of using this to attach pdb.post\_mortem() to a
-remote traceback. Another advantage is that a minimal amount of data is
-transferred over the network. On the other hand, a large number of packets is
-sent to the remote side; hopefully this will be improved in the future.
-
-The 'distributed' lib uses an abstract network layer, which means you
-can provide custom communication channels just by implementing
-two functions that send and receive marshallable objects (no pickle needed!).
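The two-function channel contract could look roughly like this. The class and method names are hypothetical; the only point taken from the text is that messages must survive a `marshal` round-trip, with no pickle involved:

```python
# Minimal sketch of the two-function channel contract: anything sent
# over the channel must be marshallable (no pickle needed).
import marshal

class LoopbackChannel(object):
    """A channel whose 'network' is a local list of byte strings."""
    def __init__(self):
        self.wire = []

    def send(self, obj):
        self.wire.append(marshal.dumps(obj))    # serialize to bytes

    def receive(self):
        return marshal.loads(self.wire.pop(0))  # deserialize in order

chan = LoopbackChannel()
chan.send({'method': 'foo', 'args': (1, [1, 2, 3])})
msg = chan.receive()
assert msg == {'method': 'foo', 'args': (1, [1, 2, 3])}
```

A real channel would replace the local list with a socket, but the interface stays the same two functions.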
-
-Exact rules of copying
-----------------------
-
-- Immutable primitives are always transferred
-
-- Mutable primitives are transferred as a reference, but several operations
-  (like iter()) force them to be transferred fully
-
-- Builtin exceptions are transferred by name
-
-- User objects are always faked on the other side, with enough shape
-  transferred
-
-XXX finish, basic interface, example, build some stuff on top of greenlets
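The copying rules above amount to a per-value classification. The sketch below is a hypothetical helper illustrating that dispatch, not the library's actual logic:

```python
# Rough classifier implementing the copying rules listed above.
def transfer_mode(obj):
    if isinstance(obj, (int, float, str, bytes, tuple, frozenset)):
        return 'copy'        # immutable primitives are always transferred
    if isinstance(obj, (list, dict, set)):
        return 'reference'   # mutable primitives: reference, fetched lazily
    if isinstance(obj, BaseException):
        return 'by-name'     # builtin exceptions are transferred by name
    return 'fake'            # user objects are faked with enough shape

assert transfer_mode(42) == 'copy'
assert transfer_mode([1, 2, 3]) == 'reference'
assert transfer_mode(ValueError('boom')) == 'by-name'

class User(object):
    pass
assert transfer_mode(User()) == 'fake'
```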
-
-Related work comparison
------------------------
-
-There are a lot of attempts to incorporate RPC mechanism into
-Python, some of them are listed below:
-
-* `Pyro`_ - Pyro stands for PYthon Remote Objects; it is a mechanism for
-  implementing remotely accessible objects in pure Python (without modifying
-  the interpreter). It is only a remote method call implementation, with
-  all the usual limitations:
-
-  - No attribute access
-
-  - Arguments of calls must be picklable on one side and unpicklable on the
-    remote side, which means both sides must share the source code; arguments
-    do not become remote references
-
-  - Exported objects must inherit from a specific class and follow certain
-    conventions, such as the shape of \_\_init\_\_.
-
-  - Remote tracebacks only as strings
-
-  - Remote calls usually spawn new threads
-
-* XMLRPC - There are several implementations of the XML-RPC protocol in
-  Python, one even in the standard library. XML-RPC is a cross-language,
-  cross-platform communication protocol, which allows great flexibility in
-  the choice of tools, but also implies several limitations, such as:
-
-  - No remote tracebacks
-
-  - Only simple types can be passed as function arguments
-
-* Twisted Perspective Broker
-
-  - involves Twisted, which ties the user to its network stack/programming style
-
-  - event driven programming (might be good, might be bad, but it's fixed)
-
-  - copies objects (by pickling), but provides a sophisticated caching layer
-    to avoid multiple copies of the same object.
-
-  - two way RPC (unlike Pyro)
-
-  - also heavy restrictions on objects - they must subclass a certain class
-
-.. _`Pyro`: http://pyro.sourceforge.net/
-.. _`transparent proxies`: objspace-proxies.html#tproxy

diff --git a/pypy/doc/cpython_differences.txt b/pypy/doc/cpython_differences.txt
deleted file mode 100644
--- a/pypy/doc/cpython_differences.txt
+++ /dev/null
@@ -1,225 +0,0 @@
-====================================
-Differences between PyPy and CPython
-====================================
-
-This page documents the few differences and incompatibilities between
-the PyPy Python interpreter and CPython.  Some of these differences
-are "by design", since we think that there are cases in which the
-behaviour of CPython is buggy, and we do not want to copy bugs.
-
-Differences that are not listed here should be considered bugs of
-PyPy.
-
-
-Extension modules
------------------
-
-List of extension modules that we support:
-
-* Supported as built-in modules (in `pypy/module/`_):
-
-    __builtin__
-    `__pypy__`_
-    _ast
-    _bisect
-    _codecs
-    _lsprof
-    `_minimal_curses`_
-    _random
-    `_rawffi`_
-    _ssl
-    _socket
-    _sre
-    _weakref
-    array
-    bz2
-    cStringIO
-    `cpyext`_
-    crypt
-    errno
-    exceptions
-    fcntl
-    gc
-    itertools
-    marshal
-    math
-    md5
-    mmap
-    operator
-    parser
-    posix
-    pyexpat
-    select
-    sha
-    signal
-    struct
-    symbol
-    sys
-    termios
-    thread
-    time
-    token
-    unicodedata
-    zipimport
-    zlib
-
-  When translated to Java or .NET, the list is smaller; see
-  `pypy/config/pypyoption.py`_ for details.
-
-  When translated on Windows, a few Unix-only modules are skipped,
-  and the following module is built instead:
-
-    _winreg
-
-  Extra module with Stackless_ only:
-
-    _stackless
-
-* Supported by being rewritten in pure Python (possibly using ``ctypes``):
-  see the `lib_pypy/`_ directory.  Examples of modules that we
-  support this way: ``ctypes``, ``cPickle``,
-  ``cStringIO``, ``cmath``, ``dbm`` (?), ``datetime``, ``binascii``...  
-  Note that some modules are both in there and in the list above;
-  by default, the built-in module is used (but can be disabled
-  at translation time).
-
-The extension modules (i.e. modules written in C, in the standard CPython)
-that are neither mentioned above nor in `lib_pypy/`_ are not available in PyPy.
-(You may have a chance to use them anyway with `cpyext`_.)
-
-.. the nonstandard modules are listed below...
-.. _`__pypy__`: __pypy__-module.html
-.. _`_rawffi`: ctypes-implementation.html
-.. _`_minimal_curses`: config/objspace.usemodules._minimal_curses.html
-.. _`cpyext`: http://morepypy.blogspot.com/2010/04/using-cpython-extension-modules-with.html
-.. _Stackless: stackless.html
-
-
-Differences related to garbage collection strategies
-----------------------------------------------------
-
-Most of the garbage collectors used or implemented by PyPy are not based on
-reference counting, so the objects are not freed instantly when they are no
-longer reachable.  The most obvious effect of this is that files are not
-promptly closed when they go out of scope.  For files that are opened for
-writing, data can be left sitting in their output buffers for a while, making
-the on-disk file appear empty or truncated.
-
-Fixing this is essentially not possible without forcing a
-reference-counting approach to garbage collection.  The effect that you
-get in CPython has clearly been described as a side-effect of the
-implementation and not a language design decision: programs relying on
-this are basically bogus.  It would anyway be insane to try to enforce
-CPython's behavior in a language spec, given that it has no chance to be
-adopted by Jython or IronPython (or any other port of Python to Java or
-.NET, like PyPy itself).
-
-This affects the precise time at which __del__ methods are called, which
-is not reliable in PyPy (nor Jython nor IronPython).  It also means that
-weak references may stay alive for a bit longer than expected.  This
-makes "weak proxies" (as returned by ``weakref.proxy()``) somewhat less
-useful: they will appear to stay alive for a bit longer in PyPy, and
-suddenly they will really be dead, raising a ``ReferenceError`` on the
-next access.  Any code that uses weak proxies must carefully catch such
-``ReferenceError`` at any place that uses them.
-
-There are a few extra implications for the difference in the GC.  Most
-notably, if an object has a __del__, the __del__ is never called more
-than once in PyPy; but CPython will call the same __del__ several times
-if the object is resurrected and dies again.  The __del__ methods are
-called in "the right" order if they are on objects pointing to each
-other, as in CPython, but unlike CPython, if there is a dead cycle of
-objects referencing each other, their __del__ methods are called anyway;
-CPython would instead put them into the list ``garbage`` of the ``gc``
-module.  More information is available on the blog `[1]`__ `[2]`__.
-
-.. __: http://morepypy.blogspot.com/2008/02/python-finalizers-semantics-part-1.html
-.. __: http://morepypy.blogspot.com/2008/02/python-finalizers-semantics-part-2.html
-
-Using the default GC called ``minimark``, the built-in function ``id()``
-works like it does in CPython.  With other GCs it returns numbers that
-are not real addresses (because an object can move around several times)
-and calling it a lot can lead to performance problems.
-
-Note that if you have a long chain of objects, each with a reference to
-the next one, and each with a __del__, PyPy's GC will perform badly.  On
-the bright side, in most other cases, benchmarks have shown that PyPy's
-GCs perform much better than CPython's.
-
-Another difference is that if you add a ``__del__`` to an existing class it will
-not be called::
-
-    >>>> class A(object):
-    ....     pass
-    ....
-    >>>> A.__del__ = lambda self: None
-    __main__:1: RuntimeWarning: a __del__ method added to an existing type will not be called
-
-
-Subclasses of built-in types
-----------------------------
-
-Officially, CPython has no rule at all for when exactly
-overridden methods of subclasses of built-in types get
-implicitly called or not.  As an approximation, these methods
-are never called by other built-in methods of the same object.
-For example, an overridden ``__getitem__()`` in a subclass of
-``dict`` will not be called by e.g. the built-in ``get()``
-method.
-
-The above is true both in CPython and in PyPy.  Differences
-can occur in whether a built-in function or method will call an
-overridden method of an object *other than* ``self``.
-In PyPy, such methods are generally always called, whereas in
-CPython they are not.  For example, in PyPy, ``dict1.update(dict2)``
-considers that ``dict2`` is just a general mapping object, and
-will thus call overridden ``keys()``  and ``__getitem__()``
-methods on it.  So the following code prints ``42`` on PyPy
-but ``foo`` on CPython::
-
-    >>>> class D(dict):
-    ....     def __getitem__(self, key):
-    ....         return 42
-    ....
-    >>>>
-    >>>> d1 = {}
-    >>>> d2 = D(a='foo')
-    >>>> d1.update(d2)
-    >>>> print d1['a']
-    42
-
-
-Ignored exceptions
------------------------
-
-In many corner cases, CPython can silently swallow exceptions.
-The precise list of when this occurs is rather long, even
-though most cases are very uncommon.  The most well-known
-places are custom rich comparison methods (like \_\_eq\_\_);
-dictionary lookup; calls to some built-in functions like
-isinstance().
-
-Unless this behavior is clearly present by design and
-documented as such (as e.g. for hasattr()), in most cases PyPy
-lets the exception propagate instead.
-
-
-Miscellaneous
--------------
-
-* ``sys.setrecursionlimit()`` is ignored (and not needed) on
-  PyPy.  On CPython it would set the maximum number of nested
-  calls that can occur before a RuntimeError is raised; on PyPy
-  overflowing the stack also causes RuntimeErrors, but the limit
-  is checked at a lower level.  (The limit is currently hard-coded
-  at 768 KB, corresponding to roughly 1480 Python calls on
-  Linux.)
-
-* assignment to ``__class__`` is limited to the cases where it
-  works on CPython 2.5.  On CPython 2.6 and 2.7 it works in a bit
-  more cases, which are not supported by PyPy so far.  (If needed,
-  it could be supported, but then it will likely work in many
-  *more* cases on PyPy than on CPython 2.6/2.7.)
-
-
-.. include:: _ref.txt

diff --git a/pypy/doc/config/translation.backendopt.constfold.txt b/pypy/doc/config/translation.backendopt.constfold.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.backendopt.constfold.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Do constant folding of operations and constant propagation on flowgraphs.

diff --git a/pypy/doc/buildtool.txt b/pypy/doc/buildtool.txt
deleted file mode 100644
--- a/pypy/doc/buildtool.txt
+++ /dev/null
@@ -1,249 +0,0 @@
-============
-PyPyBuilder
-============
-
-What is this?
-=============
-
-PyPyBuilder is an application that allows people to build PyPy instances on
-demand. If you have a nice idle machine connected to the Internet, and don't
-mind us 'borrowing' it every once in a while, you can start up the client
-script (in bin/client) and have the server send compile jobs to your machine.
-If someone requests a build of PyPy that is not already available on the PyPy
-website, and your machine is capable of making such a build, the server may ask
-your machine to create it. If enough people participate, with diverse enough
-machines, a 'build farm' is created.
-
-Quick usage instructions
-========================
-
-For the impatient who just want to get started, here are some quick instructions.
-
-First you'll need to have a checkout of the 'buildtool' package, which can
-be found here::
-
-  https://codespeak.net/svn/pypy/build/buildtool
-
-To start a compilation, run (from the buildtool root directory)::
-
-  $ ./bin/startcompile.py [options] <email address>
-
-where the options can be found by using --help; a mail will be sent to the
-given email address once the compilation is finished.
-
-To start a build server, to participate in the build farm, do::
-
-  $ ./bin/buildserver.py
-
-That's it for the compilation script and build server. If you have your own
-project and want to set up your own meta server, you'll have to be a bit more
-patient and read the details below...
-
-Components
-==========
-
-The application consists of 3 main components: a meta server component, a
-client component that handles compilations (let's call this a 'build server')
-and a small client component to start compile jobs (which we'll call
-'requesting clients' for now).
-
-The server waits for build servers to register, and for compile job
-requests. When participating clients register, they pass the server information
-about what compilations the system can handle (system info), and a set of
-options to use for compilation (compile info).
-
-When a requesting client then requests a compilation job, the server checks
-whether a suitable binary is already available based on the system and compile
-info, and if so returns that. If there isn't one, the server walks through a
-list of connected participating clients to see if one of them can handle the
-job, and if so dispatches the compilation. If there's no participating client
-to handle the job, it gets queued until there is.
-
-If a client crashes during compilation, the build is restarted, or error
-information is sent to the logs and the requesting client, depending on the type of
-error. As long as no compilation error occurs (read: on disconnects, system
-errors, etc.) compilation will be retried until a build is available.
-
-Once a build is available, the server will send an email to all clients waiting
-for the build (it could be that more than one person asked for some build at
-the same time!).
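The dispatch logic described in the last three paragraphs can be sketched as follows. All class and method names here are hypothetical stand-ins, not the build tool's real code:

```python
# Sketch of the meta server's dispatch: return a cached build if one
# matches, else hand the job to a capable build server, else queue it.

class Build:
    def __init__(self, info):
        self.info = info
    def matches(self, request):
        return self.info == request

class Server:
    def __init__(self, supported):
        self.supported, self.started = supported, []
    def can_handle(self, request):
        return request in self.supported
    def start(self, request):
        self.started.append(request)

def dispatch(request, builds, servers, queue):
    for build in builds:
        if build.matches(request):      # suitable binary already exists
            return build
    for server in servers:
        if server.can_handle(request):  # dispatch the compilation
            server.start(request)
            return None
    queue.append(request)               # no capable client: queue the job
    return None

builds = [Build('linux-jit')]
servers = [Server({'osx-jit'})]
queue = []
assert dispatch('linux-jit', builds, servers, queue) is builds[0]
assert dispatch('osx-jit', builds, servers, queue) is None
assert servers[0].started == ['osx-jit']
assert dispatch('win-jit', builds, servers, queue) is None
assert queue == ['win-jit']
```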
-
-Configuration
-=============
-
-There are several aspects to configuration on this system. Of course, for the
-meta server, build server and startcompile components there is configuration
-for the host and port to connect to, and there is some additional configuration
-for things like which mailhost to use (only applies to the server), but also
-there is configuration data passed around to determine what client is picked,
-and what the client needs to compile exactly.
-
-Config file
------------
-
-The host/port configuration etc. can be found in the file 'config.py' in the
-build tool dir. There are several things that can be configured here, mostly
-related to what application to build, and where to build it. Please read the
-file carefully when setting up a new build network, or when participating for
-compilation, because certain items (e.g. the svnpath_to_url function, or the
-client_checkers) can make the system a lot less secure when not configured
-properly.
-
-Note that all client-related configuration is done from command-line switches,
-so the configuration file is only supposed to be changed on a per-project
-basis: unless you have specific needs, use a test version of the build tool,
-or are working on a project other than PyPy, you will not want to modify it.
-
-System configuration
---------------------
-
-This information is used by the client and startcompile components. On the
-participating clients this information is retrieved by querying the system, on
-the requesting clients the system values are used by default, but may be
-overridden (so a requesting client running an x86 can still request PPC builds,
-for instance). The clients compare their own system config to that of a build
-request, and will (should) refuse a build if it cannot be executed because
-of incompatibilities.
-
-Compilation configuration
--------------------------
-
-The third form of configuration is that of the to-be-built application itself,
-its compilation arguments. This configuration is only provided by the
-requesting clients, build servers can examine the information and refuse a
-compilation based on this configuration (just like with the system config, see
-'client_checkers' in 'config.py'). Compilation configuration can be controlled
-using command-line arguments (use 'bin/startcompile.py --help' for an
-overview).
-
-Build tool options
-------------------
-
-Yet another part of the configuration are the options that are used by the
-startcompile.py script itself: the user can specify what SVN path (relative to
-a certain base path) and what Subversion revision is desired.  The revision can
-either be specified exactly, or as a range of versions.
-
-Installation
-============
-
-Build Server
-------------
-
-Installing the system should not be required: just run './bin/buildserver' to
-start. Note that it depends on the `py lib`_ (as does the rest of PyPy).
-
-When starting a build server with PyPy's default configuration, it will connect
-to a meta server we have running on codespeak.net.
-
-Meta Server
------------
-
-Also for the server there's no real setup required, and again there's a 
-dependency on the `py lib`_. Starting it is done by running
-'./bin/metaserver'.
-
-Running a compile job
----------------------
-
-Again installation is not required, just run './bin/startcompile.py [options]
-<email>' (see --help for the options) to start. Again, you need to have the
-`py lib`_ installed.
-
-Normally the codespeak.net meta server will be used when this script is issued.
-
-.. _`py lib`: http://codespeak.net/py
-
-Using the build tool for other projects
-=======================================
-
-The code for the build tool is meant to be generic. Using it for other projects
-than PyPy (for which it was originally written) is relatively straightforward:
-just change the configuration, and implement a build client script (probably
-highly resembling bin/buildserver.py).
-
-Note that there is a test project in 'tool/build/testproject' that can serve
-as an example.
-
-Prerequisites
---------------
-
-Your project can use the build tool if:
-
-  * it can be built from Python
-
-    Of course this is a rather vague requirement: theoretically _anything_ can
-    be built from Python; it's just a matter of integrating it into the tool
-    properly... A project that can entirely be built from Python code (like
-    PyPy) is easier to integrate than something that is built from the command
-    line, though (although implementing that won't be very hard either, see
-    the test project for instance).
-
-  * it is located in Subversion
-
-    The build tool makes very few hard-coded assumptions, but having code
-    in Subversion is one of them. There are several locations in the code where
-    SVN is assumed: the command line options (see `build tool options`_),
-    the server (which checks SVN urls for validity, and converts HEAD revision
-    requests to actual revision ids) and the build client (which checks out the
-    data) all make this assumption, changing to a different revision control
-    system is currently not easy and unsupported (but who knows what the future
-    will bring).
-
-  * it uses PyPy's config mechanism
-
-    PyPy has a very nice, generic configuration mechanism (essentially a
-    wrapper around OptionParser) that makes dealing with fragmented configuration
-    and command-line options a lot easier. This mechanism is used by the build
-    tool: it assumes configuration is provided in this format. If your project
-    uses this configuration mechanism already, you can provide the root Config
-    object from config.compile_config; if not, it should be fairly
-    straightforward to wrap your existing configuration with the PyPy stuff.
-
-Basically that's it: if your project is stored in SVN, and you don't mind using
-Python a bit, it shouldn't be too hard to get things going (note that more
-documentation about this subject will follow in the future).
-
-Web Front-End
-=============
-
-To examine the status of the meta server, connected build servers and build
-requests, there is a web server available. This can be started using
-'./bin/webserver' and uses port 8080 by default (override in
-config.py).
-
-The web server presents a number of different pages:
-
-  * / and /metaserverstatus - meta server status
-
-    this displays a small list of information about the meta server, such
-    as the number of connected build servers, the number of available builds,
-    the number of waiting clients, etc.
-
-  * /buildservers - connected build servers
-
-    this page contains a list of all connected build servers, system
-    information and what build they're currently working on (if any)
-
-  * /builds - a list of builds
-
-    here you'll find a list of all builds, both done and in-progress and
-    queued ones, with links to the details pages, the date they were
-    requested and their status
-
-  * /build/<id> - build details
-
-    the 'build' (virtual) directory contains pages of information for each
-    build - each of those pages displays status information, time requested,
-    time started and finished (if appropriate), links to the zip and logs,
-    and system and compile information
-
-There's a build tool status web server for the meta server on codespeak.net
-available at http://codespeak.net/pypy/buildstatus/.
-
-More info
-=========
-
-For more information, bug reports, patches, etc., please send an email to 
-guido at merlinux.de.
-

diff --git a/pypy/doc/config/objspace.usemodules.rctime.txt b/pypy/doc/config/objspace.usemodules.rctime.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules.rctime.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-Use the 'rctime' module. 
-
-'rctime' is our `rffi`_ based implementation of the builtin 'time' module.
-It supersedes the less complete :config:`objspace.usemodules.time`,
-at least for C-like targets (the C and LLVM backends).
-
-.. _`rffi`: ../rffi.html

diff --git a/pypy/doc/config/translation.debug.txt b/pypy/doc/config/translation.debug.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.debug.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Record extra debugging information during annotation. This leads to slightly
-less obscure error messages.

diff --git a/pypy/doc/discussion/improve-rpython.txt b/pypy/doc/discussion/improve-rpython.txt
deleted file mode 100644
--- a/pypy/doc/discussion/improve-rpython.txt
+++ /dev/null
@@ -1,93 +0,0 @@
-Possible improvements of the rpython language
-=============================================
-
-Improve the interpreter API
----------------------------
-
-- Rationalize the modules, and the names, of the different functions needed to
-  implement a pypy module. A typical rpython file is likely to contain many
-  `import` statements::
-
-    from pypy.interpreter.baseobjspace import Wrappable
-    from pypy.interpreter.gateway import ObjSpace, W_Root, NoneNotWrapped
-    from pypy.interpreter.argument import Arguments
-    from pypy.interpreter.typedef import TypeDef, GetSetProperty
-    from pypy.interpreter.typedef import interp_attrproperty, interp_attrproperty_w
-    from pypy.interpreter.gateway import interp2app
-    from pypy.interpreter.error import OperationError
-    from pypy.rpython.lltypesystem import rffi, lltype
-
-- A more direct declarative way to write Typedef::
-
-    class W_Socket(Wrappable):
-        _typedef_name_ = 'socket'
-        _typedef_base_ = W_EventualBaseClass
-
-        @interp2app_method("connect", ['self', ObjSpace, W_Root])
-        def connect_w(self, space, w_addr):
-            ...
-
-- Support for metaclasses written in rpython. For a sample, see the skipped test
-  `pypy.objspace.std.test.TestTypeObject.test_metaclass_typedef`
-
-RPython language
-----------------
-
-- Arithmetic with unsigned integer, and between integer of different signedness,
-  when this is not ambiguous.  At least, comparison and assignment with
-  constants should be allowed.
-
-- Allocate variables on the stack, and pass their address ("by reference") to
-  llexternal functions. For a typical usage, see
-  `pypy.rlib.rsocket.RSocket.getsockopt_int`.
-
-- Support context managers and the `with` statement. This could be a workaround
-  before the previous point is available.
-
-Extensible type system for llexternal
--------------------------------------
-
-llexternal allows the description of a C function, and conveys the same
-information about the arguments as a C header.  But this is often not enough.
-For example, a parameter of type `int*` is converted to
-`rffi.CArrayPtr(rffi.INT)`, but this information is not enough to use the
-function. The parameter could be an array of int, a reference to a single value,
-for input or output...
-
-A "type system" could hold this additional information, and automatically
-generate some conversion code to ease the usage of the function from
-rpython. For example::
-
-    # double frexp(double x, int *exp);
-    frexp = llexternal("frexp", [rffi.DOUBLE, OutPtr(rffi.INT)], rffi.DOUBLE)
-
-`OutPtr` indicates that the parameter is output-only: it need not be
-initialized, and its *value* is returned to the caller. In rpython the call
-becomes::
-
-    fraction, exponent = frexp(value)
-
-Also, we could imagine that one item in the llexternal argument list corresponds
-to two parameters in C. Here, CharBufferAndSize indicates that the caller will
-pass an rpython string; the framework will pass buffer and length to the function::
-
-    # ssize_t write(int fd, const void *buf, size_t count);
-    write = llexternal("write", [rffi.INT, CharBufferAndSize], rffi.SSIZE_T)
-
-The rpython code that calls this function is very simple::
-
-    written = write(fd, data)
-
-compared with the present::
-
-    count = len(data)
-    buf = rffi.get_nonmovingbuffer(data)
-    try:
-        written = rffi.cast(lltype.Signed, os_write(
-            rffi.cast(rffi.INT, fd),
-            buf, rffi.cast(rffi.SIZE_T, count)))
-    finally:
-        rffi.free_nonmovingbuffer(data, buf)
-
-Typemaps are very useful for large APIs where the same conversions are needed in
-many places.  XXX example

diff --git a/pypy/doc/config/translation.make_jobs.txt b/pypy/doc/config/translation.make_jobs.txt
deleted file mode 100644
--- a/pypy/doc/config/translation.make_jobs.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-Specify number of make jobs for make command.

diff --git a/pypy/doc/interpreter-optimizations.txt b/pypy/doc/interpreter-optimizations.txt
deleted file mode 100644
--- a/pypy/doc/interpreter-optimizations.txt
+++ /dev/null
@@ -1,357 +0,0 @@
-==================================
-Standard Interpreter Optimizations
-==================================
-
-.. contents:: Contents
-
-Introduction
-============
-
-One of the advantages -- indeed, one of the motivating goals -- of the PyPy
-standard interpreter (compared to CPython) is that of increased flexibility and
-configurability.
-
-One example of this is that we can provide several implementations of the same
-object (e.g. lists) without exposing any difference to application-level
-code. This makes it easy to provide a specialized implementation of a type that
-is optimized for a certain situation without disturbing the implementation for
-the regular case.
-
-This document describes several such optimizations.  Most of them are not
-enabled by default.  Also, for many of these optimizations it is not clear
-whether they are worth it in practice for a real-world application (they sure
-make some microbenchmarks a lot faster and use less memory, which is not saying
-too much).  If you have any observation in that direction, please let us know!
-By the way: alternative object implementations are a great way to get into PyPy
-development since you have to know only a rather small part of PyPy to do
-them. And they are fun too!
-
-.. describe other optimizations!
-
-Object Optimizations
-====================
-
-String Optimizations
---------------------
-
-String-Join Objects
-+++++++++++++++++++
-
-String-join objects are a different implementation of the Python ``str`` type.
-They represent the lazy addition of several strings without actually performing
-the addition (which involves copying etc.). When the actual value of the string
-join object is needed, the addition is performed. This makes it possible to
-perform repeated string additions in a loop without using the
-``"".join(list_of_strings)`` pattern.
-
-You can enable this feature with the :config:`objspace.std.withstrjoin`
-option.
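The lazy-concatenation idea can be sketched in plain Python (an illustrative model with invented names, not PyPy's actual string-join implementation, which lives inside the object space):

```python
class StrJoin:
    """Toy model of a string-join object: additions are recorded
    lazily and only materialized when the value is needed."""

    def __init__(self, parts):
        self._parts = parts      # list of ordinary strings
        self._value = None       # cached materialized string

    def __add__(self, other):
        # O(1): just extend the list of pending parts, no copying
        if isinstance(other, StrJoin):
            return StrJoin(self._parts + other._parts)
        return StrJoin(self._parts + [other])

    def force(self):
        # Materialize once, on demand
        if self._value is None:
            self._value = "".join(self._parts)
        return self._value

s = StrJoin(["a"])
for chunk in ["b", "c", "d"]:
    s = s + chunk            # each addition is O(1)
assert s.force() == "abcd"   # one join at the end
```

Repeated additions thus cost one final ``join`` instead of a quadratic series of copies, which is exactly the effect of the ``"".join(list_of_strings)`` pattern, applied automatically.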
-
-String-Slice Objects
-++++++++++++++++++++
-
-String-slice objects are another implementation of the Python ``str`` type.
-They represent the lazy slicing of a string without actually performing the
-slicing (which would involve copying). This is only done for slices of step
-one. When the actual value of the string slice object is needed, the slicing
-is done (although a lot of string methods don't make this necessary). This
-makes string slicing a very efficient operation. It also saves memory in some
-cases, but can also lead to memory leaks, since the string slice retains a
-reference to the original string. (To make this a bit less likely, we don't
-use lazy slicing when the slice would be much shorter than the original
-string; there is also a minimum number of characters below which being lazy
-does not save any time over making the copy.)
-
-You can enable this feature with the :config:`objspace.std.withstrslice` option.
-
-Ropes
-+++++
-
-Ropes are a general flexible string implementation, following the paper `"Ropes:
-An alternative to Strings."`_ by Boehm, Atkinson and Plass. Strings are
-represented as balanced concatenation trees, which makes slicing and
-concatenation of huge strings efficient.
-
-Using ropes is usually not a huge benefit for normal Python programs that use
-the typical pattern of appending substrings to a list and doing a
-``"".join(l)`` at the end. If ropes are used, there is no need to do that.
-A somewhat silly example of things you can do with them is this::
-
-    $ bin/py.py --objspace-std-withrope
-    faking <type 'module'>
-    PyPy 0.99.0 in StdObjSpace on top of Python 2.4.4c1 (startuptime: 17.24 secs)
-    >>>> import sys
-    >>>> sys.maxint
-    2147483647
-    >>>> s = "a" * sys.maxint
-    >>>> s[10:20]
-    'aaaaaaaaaa'
-
-
-You can enable this feature with the :config:`objspace.std.withrope` option.
-
-.. _`"Ropes: An alternative to Strings."`: http://www.cs.ubc.ca/local/reading/proceedings/spe91-95/spe/vol25/issue12/spe986.pdf
-
-Integer Optimizations
----------------------
-
-Caching Small Integers
-++++++++++++++++++++++
-
-Similar to CPython, it is possible to enable caching of small integer objects to
-not have to allocate all the time when doing simple arithmetic. Every time a new
-integer object is created it is checked whether the integer is small enough to
-be retrieved from the cache.
-
-This option is enabled by default.
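The caching scheme can be sketched as follows (an illustrative model; the cache bounds shown are assumptions, not PyPy's actual values):

```python
# Toy model of small-integer caching: "boxes" for values in a fixed
# range are preallocated once and reused instead of being allocated
# on every arithmetic operation.

class W_Int:
    """A wrapped (boxed) integer object."""
    def __init__(self, value):
        self.value = value

CACHE_MIN, CACHE_MAX = -5, 256   # assumed bounds, for illustration
_cache = [W_Int(i) for i in range(CACHE_MIN, CACHE_MAX + 1)]

def wrap_int(value):
    # Return the shared box for small values, a fresh box otherwise.
    if CACHE_MIN <= value <= CACHE_MAX:
        return _cache[value - CACHE_MIN]
    return W_Int(value)

assert wrap_int(7) is wrap_int(7)              # cached: same box
assert wrap_int(10**6) is not wrap_int(10**6)  # outside range: new box
```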
-
-Integers as Tagged Pointers
-+++++++++++++++++++++++++++
-
-An even more aggressive way to save memory when using integers is the "small
-int" integer implementation. It is another implementation used for integers
-that need only 31 bits (or 63 bits on a 64-bit machine). These integers
-are represented as tagged pointers by setting their lowest bits to distinguish
-them from normal pointers. This completely avoids the boxing step, saving
-time and memory.
-
-You can enable this feature with the :config:`objspace.std.withsmallint` option.
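The bit-twiddling behind tagged pointers can be modelled in Python (a sketch of the encoding only; the real scheme operates on machine words at the C level):

```python
# Toy model of tagged integers: a word whose lowest bit is 1 carries
# the integer value shifted left by one.  Real heap pointers are
# word-aligned, so their lowest bit is always 0, which is what makes
# the two cases distinguishable without dereferencing anything.

def tag_int(n):
    return (n << 1) | 1      # encode the value in the word itself

def is_tagged_int(word):
    return word & 1 == 1     # odd word => tagged integer, not a pointer

def untag_int(word):
    return word >> 1         # arithmetic shift recovers the value

w = tag_int(-42)
assert is_tagged_int(w)
assert untag_int(w) == -42   # no heap object was ever allocated
```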
-
-Dictionary Optimizations
-------------------------
-
-Multi-Dicts
-+++++++++++
-
-Multi-dicts are a special implementation of dictionaries.  It became clear that
-it is very useful to *change* the internal representation of an object during
-its lifetime.  Multi-dicts are a general way to do that for dictionaries: they
-provide generic support for the switching of internal representations for
-dicts.
-
-If you just enable multi-dicts, you get special representations for empty
-dictionaries and for string-keyed dictionaries. In addition there are more
-specialized dictionary implementations for various purposes (see below).
-
-This is now the default implementation of dictionaries in the Python
-interpreter.
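The representation-switching idea can be sketched like this (an illustrative model with invented strategy names, not PyPy's actual multi-dict code):

```python
# Toy model of a multi-dict: the dict object delegates to an internal
# "strategy" and switches strategy when the stored data changes shape.

class EmptyStrategy:
    def setitem(self, d, key, value):
        # First insertion: pick a representation based on the key type.
        d.strategy = StringStrategy() if isinstance(key, str) else GenericStrategy()
        d.storage = {}
        d.strategy.setitem(d, key, value)
    def getitem(self, d, key):
        raise KeyError(key)

class StringStrategy:
    def setitem(self, d, key, value):
        if not isinstance(key, str):
            # Shape changed: fall back to the generic representation.
            d.strategy = GenericStrategy()
            d.strategy.setitem(d, key, value)
        else:
            d.storage[key] = value
    def getitem(self, d, key):
        return d.storage[key]

class GenericStrategy:
    def setitem(self, d, key, value):
        d.storage[key] = value
    def getitem(self, d, key):
        return d.storage[key]

class MultiDict:
    def __init__(self):
        self.strategy = EmptyStrategy()
        self.storage = None
    def __setitem__(self, key, value):
        self.strategy.setitem(self, key, value)
    def __getitem__(self, key):
        return self.strategy.getitem(self, key)

d = MultiDict()
d["a"] = 1
assert isinstance(d.strategy, StringStrategy)
d[42] = 2                                  # non-string key: switch
assert isinstance(d.strategy, GenericStrategy)
assert d["a"] == 1 and d[42] == 2
```

The point of the indirection is that application code never sees the switch: lookups and stores go through the same interface regardless of the current internal representation.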
-
-Sharing Dicts
-+++++++++++++
-
-Sharing dictionaries are a special representation used together with multidicts.
-This dict representation is used only for instance dictionaries and tries to
-make instance dictionaries use less memory (in fact, in the ideal case the
-memory behaviour should be mostly like that of using __slots__).
-
-The idea is the following: most instances of the same class have very similar
-attributes, and even add these keys to the dictionary in the same order
-while ``__init__()`` is being executed. That means that all the dictionaries of
-these instances look very similar: they have the same set of keys with different
-values per instance. What sharing dicts do is store these common keys into a
-common structure object and thus save the space in the individual instance
-dicts:
-the representation of the instance dict contains only a list of values.
-
-A more advanced version of sharing dicts, called *map dicts,* is available
-with the :config:`objspace.std.withmapdict` option.
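A minimal sketch of the key-sharing idea (illustrative only; the names are invented and the real map-dict machinery is considerably more involved):

```python
# Toy model of sharing dicts: instances that add the same attributes in
# the same order share one key structure; each instance itself stores
# only a flat list of values.

class KeyStructure:
    def __init__(self):
        self.index_of = {}   # attribute name -> position in the value list
        self.next = {}       # transition table: name -> successor structure

    def add_key(self, name):
        # Adding the same key to the same structure always yields the
        # same successor, so identical attribute histories converge.
        if name not in self.next:
            succ = KeyStructure()
            succ.index_of = dict(self.index_of)
            succ.index_of[name] = len(self.index_of)
            self.next[name] = succ
        return self.next[name]

EMPTY = KeyStructure()

class Instance:
    def __init__(self):
        self.structure = EMPTY
        self.values = []     # per-instance data: values only, no keys

    def setattr(self, name, value):
        if name in self.structure.index_of:
            self.values[self.structure.index_of[name]] = value
        else:
            self.structure = self.structure.add_key(name)
            self.values.append(value)

    def getattr(self, name):
        return self.values[self.structure.index_of[name]]

a, b = Instance(), Instance()
a.setattr("x", 1); a.setattr("y", 2)
b.setattr("x", 3); b.setattr("y", 4)
assert a.structure is b.structure          # keys stored once, shared
assert a.getattr("y") == 2 and b.getattr("x") == 3
```

Since the keys live once in the shared structure, each instance pays only for its value list, which is the ``__slots__``-like memory behaviour described above.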
-
-Builtin-Shadowing
-+++++++++++++++++
-
-Usually, calling a builtin in Python requires two dictionary lookups: first
-to see whether the current global dictionary contains an object with the same
-name, then a lookup in the ``__builtin__`` dictionary. This is commonly
-circumvented by storing an often-used builtin in a local variable to get
-the fast local lookup (which is a rather strange and ugly hack).
-
-The same problem is solved in a different way by "wary" dictionaries. They are
-another dictionary representation used together with multidicts. This
-representation is used only for module dictionaries. The representation checks on
-every setitem whether the key that is used is the name of a builtin. If this is
-the case, the dictionary is marked as shadowing that particular builtin.
-
-To identify calls to builtins easily, a new bytecode (``CALL_LIKELY_BUILTIN``)
-is introduced. Whenever it is executed, the globals dictionary is checked
-to see whether it masks the builtin (which is possible without a dictionary
-lookup).  Then the ``__builtin__`` dict is checked in the same way,
-to see whether somebody replaced the real builtin with something else. In the
-common case, the program didn't do any of these; the proper builtin can then
-be called without using any dictionary lookup at all.
-
-You can enable this feature with the
-:config:`objspace.opcodes.CALL_LIKELY_BUILTIN` option.
-
-
-List Optimizations
-------------------
-
-Range-Lists
-+++++++++++
-
-Range-lists solve the same problem that the ``xrange`` builtin solves poorly:
-the problem that ``range`` allocates memory even if the resulting list is only
-ever used for iterating over it. Range lists are a different implementation for
-lists. They are created only as a result of a call to ``range``. As long as the
-resulting list is used without being mutated, the list stores only the start,
-stop and step of the range. Only when somebody mutates the list is the actual
-list created. This gives the memory and speed behaviour of ``xrange`` with the
-generality of ``range``, and makes ``xrange`` essentially useless.
-
-You can enable this feature with the :config:`objspace.std.withrangelist`
-option.
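The lazy behaviour can be sketched in a few lines (illustrative model only; positive steps assumed, and the real implementation is a full list strategy, not a standalone class):

```python
# Toy model of a range-list: stores only start/stop/step until the
# first mutation, then falls back to a real list.

class RangeList:
    def __init__(self, start, stop, step=1):
        self.range = (start, stop, step)   # compact form (step > 0 assumed)
        self.items = None                  # real list, once forced

    def _force(self):
        if self.items is None:
            start, stop, step = self.range
            self.items = list(range(start, stop, step))
        return self.items

    def __len__(self):
        if self.items is not None:
            return len(self.items)
        start, stop, step = self.range
        return max(0, (stop - start + step - 1) // step)

    def __getitem__(self, i):
        if self.items is not None:
            return self.items[i]
        if not 0 <= i < len(self):
            raise IndexError(i)
        start, stop, step = self.range
        return start + i * step            # computed, no list in memory

    def __setitem__(self, i, value):
        self._force()[i] = value           # mutation materializes the list

r = RangeList(0, 10)
assert r[3] == 3 and r.items is None       # still compact
r[3] = 99                                  # now materialized
assert r.items is not None and r[3] == 99
```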
-
-
-User Class Optimizations
-------------------------
-
-Shadow Tracking
-+++++++++++++++
-
-Shadow tracking is a general optimization that speeds up method calls for user
-classes (that don't have a special metaclass). For this a special dict
-representation is used together with multidicts. This dict representation is
-used only for instance dictionaries. The instance dictionary tracks whether an
-instance attribute shadows an attribute of its class. This makes method calls
-slightly faster in the following way: When calling a method the first thing that
-is checked is the class dictionary to find descriptors. Normally, when a method
-is found, the instance dictionary is then checked for instance attributes
-shadowing the class attribute. If we know that there is no shadowing (since the
-instance dict tells us so), we can save this lookup on the instance dictionary.
-
-*This was deprecated and is no longer available.*
-
-
-Method Caching
-++++++++++++++
-
-Shadow tracking is also an important building block for the method caching
-optimization. A method cache is introduced where the result of a method lookup
-is stored (which involves potentially many lookups in the base classes of a
-class). Entries in the method cache are stored using a hash computed from
-the name being looked up, the call site (i.e. the bytecode object and
-the current program counter), and a special "version" of the type where the
-lookup happens (this version is incremented every time the type or one of its
-base classes is changed). On subsequent lookups the cached version can be used,
-as long as the instance did not shadow any of its class's attributes.
-
-You can enable this feature with the :config:`objspace.std.withmethodcache`
-option.
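The version-tag mechanism can be modelled as follows (a sketch with invented names; real method caches also key on the call site, which is omitted here):

```python
# Toy model of version-tag method caching: each class carries a version
# object that is replaced whenever the class (or a base) changes, so a
# (version, name) pair can safely key a global lookup cache.

class VersionTag:
    pass                                    # identity is all that matters

class Class:
    def __init__(self, base=None):
        self.base = base
        self.methods = {}
        self.version = VersionTag()
        self.subclasses = []
        if base is not None:
            base.subclasses.append(self)

    def set_method(self, name, func):
        self.methods[name] = func
        self._bump_version()

    def _bump_version(self):
        self.version = VersionTag()         # invalidates old cache keys
        for sub in self.subclasses:
            sub._bump_version()             # subclasses change too

_method_cache = {}

def lookup(cls, name):
    key = (cls.version, name)
    if key in _method_cache:
        return _method_cache[key]           # hit: no base-class walk
    c = cls
    while c is not None:                    # slow path: walk the bases
        if name in c.methods:
            _method_cache[key] = c.methods[name]
            return c.methods[name]
        c = c.base
    return None

base = Class()
base.set_method("f", lambda self: "base")
sub = Class(base)
assert lookup(sub, "f")(None) == "base"     # filled via slow path
v = sub.version
base.set_method("f", lambda self: "new")    # bumps base's and sub's version
assert sub.version is not v
assert lookup(sub, "f")(None) == "new"      # stale entry cannot be hit
```

Because a modified class gets a fresh version object, stale cache entries are never *hit*; they simply become unreachable, which is what makes the scheme safe without explicit invalidation of individual entries.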
-
-Interpreter Optimizations
-=========================
-
-Special Bytecodes
------------------
-
-.. _`lookup method call method`:
-
-LOOKUP_METHOD & CALL_METHOD
-+++++++++++++++++++++++++++
-
-An unusual feature of Python's version of object oriented programming is the
-concept of a "bound method".  While the concept is clean and powerful, the
-allocation and initialization of the object is not without its performance cost.
-We have implemented a pair of bytecodes that alleviate this cost.
-
-For a given method call ``obj.meth(x, y)``, the standard bytecode looks like
-this::
-
-    LOAD_GLOBAL     obj      # push 'obj' on the stack
-    LOAD_ATTR       meth     # read the 'meth' attribute out of 'obj'
-    LOAD_GLOBAL     x        # push 'x' on the stack
-    LOAD_GLOBAL     y        # push 'y' on the stack
-    CALL_FUNCTION   2        # call the 'obj.meth' object with arguments x, y
-
-We improved this by keeping method lookup separated from method call, unlike
-some other approaches, but using the value stack as a cache instead of building
-a temporary object.  We extended the bytecode compiler to (optionally) generate
-the following code for ``obj.meth(x, y)``::
-
-    LOAD_GLOBAL     obj
-    LOOKUP_METHOD   meth
-    LOAD_GLOBAL     x
-    LOAD_GLOBAL     y
-    CALL_METHOD     2
-
-``LOOKUP_METHOD`` contains exactly the same attribute lookup logic as
-``LOAD_ATTR`` - thus fully preserving semantics - but pushes two values onto the
-stack instead of one.  These two values are an "inlined" version of the bound
-method object: the *im_func* and *im_self*, i.e.  respectively the underlying
-Python function object and a reference to ``obj``.  This is only possible when
-the attribute actually refers to a function object from the class; when this is
-not the case, ``LOOKUP_METHOD`` still pushes two values, but one *(im_func)* is
-simply the regular result that ``LOAD_ATTR`` would have returned, and the other
-*(im_self)* is a None placeholder.
-
-After pushing the arguments, the layout of the stack in the above
-example is as follows (the stack grows upwards):
-
-+---------------------------------+
-| ``y`` *(2nd arg)*               |
-+---------------------------------+
-| ``x`` *(1st arg)*               |
-+---------------------------------+
-| ``obj`` *(im_self)*             |
-+---------------------------------+
-| ``function object`` *(im_func)* |
-+---------------------------------+
-
-The ``CALL_METHOD N`` bytecode emulates a bound method call by
-inspecting the *im_self* entry in the stack below the ``N`` arguments:
-if it is not None, then it is considered to be an additional first
-argument in the call to the *im_func* object from the stack.
-
-You can enable this feature with the :config:`objspace.opcodes.CALL_METHOD`
-option.
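The semantics of the two bytecodes can be modelled as a small interpreter fragment (an illustrative sketch operating on plain Python objects; the real opcodes work on wrapped objects inside the interpreter):

```python
# Toy model of LOOKUP_METHOD / CALL_METHOD on an explicit value stack.
import types

def lookup_method(stack, name):
    obj = stack.pop()
    attr = type(obj).__dict__.get(name)
    if isinstance(attr, types.FunctionType):
        # Fast path: push im_func and im_self separately instead of
        # allocating a bound method object.
        stack.append(attr)                # im_func
        stack.append(obj)                 # im_self
    else:
        # Fallback: whatever LOAD_ATTR would have produced, plus a
        # None placeholder in the im_self slot.
        stack.append(getattr(obj, name))
        stack.append(None)

def call_method(stack, n_args):
    args = [stack.pop() for _ in range(n_args)][::-1]
    im_self = stack.pop()
    im_func = stack.pop()
    if im_self is not None:
        return im_func(im_self, *args)    # obj becomes the first argument
    return im_func(*args)                 # plain call, no receiver

class Greeter:
    def greet(self, who):
        return "hello " + who

stack = [Greeter()]                       # LOAD_GLOBAL  obj
lookup_method(stack, "greet")             # LOOKUP_METHOD meth
stack.append("world")                     # LOAD_GLOBAL  x
assert call_method(stack, 1) == "hello world"   # CALL_METHOD 1
```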
-
-.. _`call likely builtin`:
-
-CALL_LIKELY_BUILTIN
-+++++++++++++++++++
-
-An often-heard "tip" for speeding up Python programs is to give a frequently
-used builtin a local name, since local lookups are faster than lookups of
-builtins, which involve two dictionary lookups: one in the globals dictionary
-and one in the builtins dictionary. PyPy approaches this problem at the
-implementation level, with the introduction of the new ``CALL_LIKELY_BUILTIN``
-bytecode. This bytecode is produced by the compiler for a call whose target is
-the name of a builtin.  Since such a syntactic construct is very often actually
-invoking the expected builtin at run-time, this information can be used to make
-the call to the builtin directly, without going through any dictionary lookup.
-
-However, it can occur that the name is shadowed by a global name from the
-current module.  To catch this case, a special dictionary implementation for
-multidicts is introduced, which is used for the dictionaries of modules. This
-implementation keeps track of which builtin names it shadows.  The
-``CALL_LIKELY_BUILTIN`` bytecode asks the dictionary whether it is shadowing the
-builtin that is about to be called and asks the dictionary of ``__builtin__``
-whether the original builtin was changed.  These two checks are cheaper than
-full lookups.  In the common case, neither of these cases is true, so the
-builtin can be directly invoked.
-
-You can enable this feature with the
-:config:`objspace.opcodes.CALL_LIKELY_BUILTIN` option.
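The shadowing check can be sketched like this (an illustrative model with invented names; the check against a modified ``__builtin__`` dict is folded into the same fast-path test here):

```python
# Toy model of the CALL_LIKELY_BUILTIN shadowing check: a module-dict
# wrapper records which builtin names it defines, so a call site can
# skip both dictionary lookups in the common case.

BUILTINS = {"len": len, "max": max}       # stand-in for __builtin__

class ModuleDict:
    def __init__(self):
        self.storage = {}
        self.shadows = set()              # builtin names defined here

    def __setitem__(self, key, value):
        if key in BUILTINS:
            self.shadows.add(key)         # mark the builtin as shadowed
        self.storage[key] = value

    def __getitem__(self, key):
        return self.storage[key]

def call_likely_builtin(globals_dict, name, *args):
    # Cheap flag test instead of two full dictionary lookups.
    if name not in globals_dict.shadows and name in BUILTINS:
        return BUILTINS[name](*args)      # fast path: call directly
    return globals_dict[name](*args)      # slow path: shadowed

g = ModuleDict()
assert call_likely_builtin(g, "len", [1, 2, 3]) == 3   # fast path
g["len"] = lambda x: -1                                 # shadow 'len'
assert call_likely_builtin(g, "len", [1, 2, 3]) == -1  # slow path
```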
-
-.. more here?
-
-Overall Effects
-===============
-
-The impact these various optimizations have on performance unsurprisingly
-depends on the program being run.  Using the default multi-dict implementation that
-simply special cases string-keyed dictionaries is a clear win on all benchmarks,
-improving results by anything from 15-40 per cent.
-
-Another optimization, or rather set of optimizations, that has a uniformly good
-effect is the set of three 'method optimizations', i.e. shadow tracking, the
-method cache and the LOOKUP_METHOD and CALL_METHOD opcodes.  On a heavily
-object-oriented benchmark (richards) they combine to give a speed-up of nearly
-50%, and even on the extremely un-object-oriented pystone benchmark, the
-improvement is over 20%.
-
-.. waffles about ropes
-
-When building pypy, all generally useful optimizations are turned on by default
-unless you explicitly lower the translation optimization level with the
-``--opt`` option.

diff --git a/pypy/doc/discussion/distribution-roadmap.txt b/pypy/doc/discussion/distribution-roadmap.txt
deleted file mode 100644
--- a/pypy/doc/discussion/distribution-roadmap.txt
+++ /dev/null
@@ -1,72 +0,0 @@
-Distribution:
-=============
-
-Some random thoughts about an automatic (or not) distribution layer.
-
-What I want to achieve is a clean approach to performing
-distribution with virtually any distribution heuristic.
-
-First step - RPython level:
----------------------------
-
-The first (simplest) step is to allow the user to write RPython programs with
-some kind of remote control over program execution. For a start I would
-suggest using RMI (Remote Method Invocation) and remote object access
-(at the low level it would be struct access). For simplicity
-it will make some sense to target a high-level platform at the beginning
-(the CLI platform seems like the obvious choice), which provides more primitives
-for performing such operations. To make the attempt easier, I'll provide
-some subset of the type system to be serializable, which can go as parameters
-to such a call.
-
-I take advantage of several assumptions:
-
-* globals are constants - this allows us to just run multiple instances
-  of the same program on multiple machines and perform RMI.
-
-* I/O is explicit - this makes the GIL problem not that important. XXX: I've got
-  to read more about the GIL to see if this is true.
-
-Second step - doing it a little bit more automatically:
--------------------------------------------------------
-
-The second step is to allow some heuristic to decide which calls to change
-into RMI calls. This should follow some assumptions (which may vary,
-depending on the implementation):
-
-* Not to move I/O to different machine (we can track I/O and side-effects
-  in RPython code).
-
-* Make sure all C calls are safe to transfer if we want to do that (this
-  depends on probably static API declaration from programmer "I'm sure this
-  C call has no side-effects", we don't want to check it in C) or not transfer
-  them at all.
-
-* Perform it all statically, at the time of program compilation.
-
-* We have to generate serialization methods for the classes which
-  we want to transfer (the same engine might be used to allow JSON calls in the
-  JS backend to transfer arbitrary Python objects).
-
-Third step - Just-in-time distribution:
----------------------------------------
-
-The biggest step here is to provide JIT integration into the distribution
-system. This should make it really useful (compile-time distribution will
-probably not work, for example, for the whole Python interpreter, because
-the granularity is too coarse). It is quite unclear to me how to do that
-(the JIT is not complete and I don't know too much about it). Probably we
-take the JIT's information about graphs and try to feed it to the heuristic
-in some way, to change the calls into RMI.
-
-Problems to fight with:
------------------------
-
-Most problems are about making the mechanism work efficiently, so:
-
-* Avoid too much granularity (copying a lot of objects in both directions
-  all the time)
-
-* Make heuristic not eat too much CPU time/memory and all of that.
-
-* ...

diff --git a/pypy/doc/config/objspace.usemodules._sre.txt b/pypy/doc/config/objspace.usemodules._sre.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.usemodules._sre.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Use the '_sre' module. 
-This module is expected to be working and is included by default.

