It has been a while since I posted a copy of PEP 1 to the mailing
lists and newsgroups. I've recently done some updating of a few
sections, so in the interest of gaining wider community participation
in the Python development process, I'm posting the latest revision of
PEP 1 here. A version of the PEP is always available on-line at
-------------------- snip snip --------------------
Title: PEP Purpose and Guidelines
Version: $Revision: 1.36 $
Last-Modified: $Date: 2002/07/29 18:34:59 $
Author: Barry A. Warsaw, Jeremy Hylton
Post-History: 21-Mar-2001, 29-Jul-2002
What is a PEP?
PEP stands for Python Enhancement Proposal. A PEP is a design
document providing information to the Python community, or
describing a new feature for Python. The PEP should provide a
concise technical specification of the feature and a rationale for
the feature.
We intend PEPs to be the primary mechanisms for proposing new
features, for collecting community input on an issue, and for
documenting the design decisions that have gone into Python. The
PEP author is responsible for building consensus within the
community and documenting dissenting opinions.
Because the PEPs are maintained as plain text files under CVS
control, their revision history is the historical record of the
feature proposal.
Kinds of PEPs
There are two kinds of PEPs. A standards track PEP describes a
new feature or implementation for Python. An informational PEP
describes a Python design issue, or provides general guidelines or
information to the Python community, but does not propose a new
feature. Informational PEPs do not necessarily represent a Python
community consensus or recommendation, so users and implementors
are free to ignore informational PEPs or follow their advice.
PEP Work Flow
The PEP editor, Barry Warsaw <peps(a)python.org>, assigns numbers
for each PEP and changes its status.
The PEP process begins with a new idea for Python. It is highly
recommended that a single PEP contain a single key proposal or new
idea. The more focussed the PEP, the more successful it tends
to be. The PEP editor reserves the right to reject PEP proposals
if they appear too unfocussed or too broad. If in doubt, split
your PEP into several well-focussed ones.
Each PEP must have a champion -- someone who writes the PEP using
the style and format described below, shepherds the discussions in
the appropriate forums, and attempts to build community consensus
around the idea. The PEP champion (a.k.a. Author) should first
attempt to ascertain whether the idea is PEP-able. Small
enhancements or patches often don't need a PEP and can be injected
into the Python development work flow with a patch submission to
the SourceForge patch manager or feature request tracker.
The PEP champion then emails the PEP editor <peps(a)python.org> with
a proposed title and a rough, but fleshed out, draft of the PEP.
This draft must be written in PEP style as described below.
If the PEP editor approves, he will assign the PEP a number, label
it as standards track or informational, give it status 'draft',
and create and check-in the initial draft of the PEP. The PEP
editor will not unreasonably deny a PEP. Reasons for denying PEP
status include duplication of effort, being technically unsound,
not providing proper motivation or addressing backwards
compatibility, or not in keeping with the Python philosophy. The
BDFL (Benevolent Dictator for Life, Guido van Rossum) can be
consulted during the approval phase, and is the final arbitrator
of the draft's PEP-ability.
If a pre-PEP is rejected, the author may elect to take the pre-PEP
to the comp.lang.python newsgroup (a.k.a. python-list(a)python.org
mailing list) to help flesh it out, gain feedback and consensus
from the community at large, and improve the PEP for re-submission.
The author of the PEP is then responsible for posting the PEP to
the community forums, and marshaling community support for it. As
updates are necessary, the PEP author can check in new versions if
they have CVS commit permissions, or can email new PEP versions to
the PEP editor for committing.
Standards track PEPs consist of two parts, a design document and
a reference implementation. The PEP should be reviewed and
accepted before a reference implementation is begun, unless a
reference implementation will aid people in studying the PEP.
Standards Track PEPs must include an implementation - in the form
of code, patch, or URL to same - before it can be considered
Final.
PEP authors are responsible for collecting community feedback on a
PEP before submitting it for review. A PEP that has not been
discussed on python-list(a)python.org and/or python-dev(a)python.org
will not be accepted. However, wherever possible, long open-ended
discussions on public mailing lists should be avoided. Strategies
to keep the discussions efficient include setting up a separate
SIG mailing list for the topic, having the PEP author accept
private comments in the early design phases, etc. PEP authors
should use their discretion here.
Once the authors have completed a PEP, they must inform the PEP
editor that it is ready for review. PEPs are reviewed by the BDFL
and his chosen consultants, who may accept or reject a PEP or send
it back to the author(s) for revision.
Once a PEP has been accepted, the reference implementation must be
completed. When the reference implementation is complete and
accepted by the BDFL, the status will be changed to `Final.'
A PEP can also be assigned status `Deferred.' The PEP author or
editor can assign the PEP this status when no progress is being
made on the PEP. Once a PEP is deferred, the PEP editor can
re-assign it to draft status.
A PEP can also be `Rejected'. Perhaps after all is said and done
it was not a good idea. It is still important to have a record of
this fact.
PEPs can also be replaced by a different PEP, rendering the
original obsolete. This is intended for Informational PEPs, where
version 2 of an API can replace version 1.
PEP work flow is as follows:
Draft -> Accepted -> Final -> Replaced
Some informational PEPs may also have a status of `Active' if they
are never meant to be completed. E.g. PEP 1.
What belongs in a successful PEP?
Each PEP should have the following parts:
1. Preamble -- RFC822 style headers containing meta-data about the
PEP, including the PEP number, a short descriptive title
(limited to a maximum of 44 characters), the names, and
optionally the contact info for each author, etc.
2. Abstract -- a short (~200 word) description of the technical
issue being addressed.
3. Copyright/public domain -- Each PEP must either be explicitly
labelled as placed in the public domain (see this PEP as an
example) or licensed under the Open Publication License.
4. Specification -- The technical specification should describe
the syntax and semantics of any new language feature. The
specification should be detailed enough to allow competing,
interoperable implementations for any of the current Python
platforms (CPython, JPython, Python .NET).
5. Motivation -- The motivation is critical for PEPs that want to
change the Python language. It should clearly explain why the
existing language specification is inadequate to address the
problem that the PEP solves. PEP submissions without
sufficient motivation may be rejected outright.
6. Rationale -- The rationale fleshes out the specification by
describing what motivated the design and why particular design
decisions were made. It should describe alternate designs that
were considered and related work, e.g. how the feature is
supported in other languages.
The rationale should provide evidence of consensus within the
community and discuss important objections or concerns raised
7. Backwards Compatibility -- All PEPs that introduce backwards
incompatibilities must include a section describing these
incompatibilities and their severity. The PEP must explain how
the author proposes to deal with these incompatibilities. PEP
submissions without a sufficient backwards compatibility
treatise may be rejected outright.
8. Reference Implementation -- The reference implementation must
be completed before any PEP is given status 'Final,' but it
need not be completed before the PEP is accepted. It is better
to finish the specification and rationale first and reach
consensus on it before writing code.
The final implementation must include test code and
documentation appropriate for either the Python language
reference or the standard library reference.
PEPs are written in plain ASCII text, and should adhere to a
rigid style. There is a Python script that parses this style and
converts the plain text PEP to HTML for viewing on the web.
PEP 9 contains a boilerplate template you can use to get
started writing your PEP.
Each PEP must begin with an RFC822 style header preamble. The
headers must appear in the following order. Headers marked with
`*' are optional and are described below. All other headers are
required.
PEP: <pep number>
Title: <pep title>
Version: <cvs version string>
Last-Modified: <cvs date string>
Author: <list of authors' real names and optionally, email addrs>
* Discussions-To: <email address>
Status: <Draft | Active | Accepted | Deferred | Final | Replaced>
Type: <Informational | Standards Track>
* Requires: <pep numbers>
Created: <date created on, in dd-mmm-yyyy format>
* Python-Version: <version number>
Post-History: <dates of postings to python-list and python-dev>
* Replaces: <pep number>
* Replaced-By: <pep number>
The Author: header lists the names and optionally, the email
addresses of all the authors/owners of the PEP. The format of the
author entry should be
address(a)dom.ain (Random J. User)
if the email address is included, and just
Random J. User
if the address is not given. If there are multiple authors, each
should be on a separate line following RFC 822 continuation line
conventions. Note that personal email addresses in PEPs will be
obscured as a defense against spam harvesters.
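Because the preamble is made of RFC822-style headers, it can be
parsed with the standard library's email machinery; a minimal sketch
(the preamble text and addresses here are illustrative, not a real
PEP):

```python
from email.parser import HeaderParser

# An illustrative PEP preamble; real PEPs carry more headers.
preamble = """\
PEP: 1
Title: PEP Purpose and Guidelines
Author: address@dom.ain (Random J. User),
    other@dom.ain (Another Author)
Status: Active
Type: Informational
"""

headers = HeaderParser().parsestr(preamble)
print(headers["Title"])   # PEP Purpose and Guidelines
print(headers["Status"])  # Active
```

Note how the second author sits on an RFC 822 continuation line
(leading whitespace) and is still folded into the single Author:
header value.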
Standards track PEPs must have a Python-Version: header which
indicates the version of Python that the feature will be released
with. Informational PEPs do not need a Python-Version: header.
While a PEP is in private discussions (usually during the initial
Draft phase), a Discussions-To: header will indicate the mailing
list or URL where the PEP is being discussed. No Discussions-To:
header is necessary if the PEP is being discussed privately with
the author, or on the python-list or python-dev email mailing
lists. Note that email addresses in the Discussions-To: header
will not be obscured.
Created: records the date that the PEP was assigned a number,
while Post-History: is used to record the dates of when new
versions of the PEP are posted to python-list and/or python-dev.
Both headers should be in dd-mmm-yyyy format, e.g. 14-Aug-2001.
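The dd-mmm-yyyy form maps directly onto a strptime/strftime pattern
(note that %b is locale-dependent, so this sketch assumes an English
locale):

```python
from datetime import datetime

# Parse and re-emit a date in the dd-mmm-yyyy form used by PEP headers.
posted = datetime.strptime("14-Aug-2001", "%d-%b-%Y")
print(posted.strftime("%d-%b-%Y"))  # 14-Aug-2001
```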
PEPs may have a Requires: header, indicating the PEP numbers that
this PEP depends on.
PEPs may also have a Replaced-By: header indicating that a PEP has
been rendered obsolete by a later document; the value is the
number of the PEP that replaces the current document. The newer
PEP must have a Replaces: header containing the number of the PEP
that it rendered obsolete.
PEP Formatting Requirements
PEP headings must begin in column zero and the initial letter of
each word must be capitalized as in book titles. Acronyms should
be in all capitals. The body of each section must be indented 4
spaces. Code samples inside body sections should be indented a
further 4 spaces, and other indentation can be used as required to
make the text readable. You must use two blank lines between the
last line of a section's body and the next section heading.
You must adhere to the Emacs convention of adding two spaces at
the end of every sentence. You should fill your paragraphs to
column 70, but under no circumstances should your lines extend
past column 79. If your code samples spill over column 79, you
should rewrite them.
Tab characters must never appear in the document at all. A PEP
should include the standard Emacs stanza included by example at
the bottom of this PEP.
A PEP must contain a Copyright section, and it is strongly
recommended to put the PEP in the public domain.
When referencing an external web page in the body of a PEP, you
should include the title of the page in the text, with a
footnote reference to the URL. Do not include the URL in the body
text of the PEP. E.g.
Refer to the Python Language web site for more details.
When referring to another PEP, include the PEP number in the body
text, such as "PEP 1". The title may optionally appear. Add a
footnote reference that includes the PEP's title and author. It
may optionally include the explicit URL on a separate line, but
only in the References section. Note that the pep2html.py script
will calculate URLs automatically, e.g.:
Refer to PEP 1 for more information about PEP style
 PEP 1, PEP Purpose and Guidelines, Warsaw, Hylton
If you decide to provide an explicit URL for a PEP, please use
this as the URL template:
PEP numbers in URLs must be padded with zeros from the left, so as
to be exactly 4 characters wide; however, PEP numbers in text are
never padded.
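The zero-padding rule is a simple %04d format; a sketch using the
python.org URL template of the time:

```python
def pep_url(number):
    # PEP numbers in URLs are zero-padded to exactly four digits.
    return "http://www.python.org/peps/pep-%04d.html" % number

print(pep_url(1))    # http://www.python.org/peps/pep-0001.html
print(pep_url(287))  # http://www.python.org/peps/pep-0287.html
```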
Reporting PEP Bugs, or Submitting PEP Updates
How you report a bug, or submit a PEP update depends on several
factors, such as the maturity of the PEP, the preferences of the
PEP author, and the nature of your comments. For the early draft
stages of the PEP, it's probably best to send your comments and
changes directly to the PEP author. For more mature, or finished
PEPs you may want to submit corrections to the SourceForge bug
manager or better yet, the SourceForge patch manager so that
your changes don't get lost. If the PEP author is a SF developer,
assign the bug/patch to him, otherwise assign it to the PEP
editor.
When in doubt about where to send your changes, please check first
with the PEP author and/or PEP editor.
PEP authors who are also SF committers can update the PEPs
themselves by using "cvs commit" to commit their changes.
Remember to also push the formatted PEP text out to the web by
doing the following:
% python pep2html.py -i NUM
where NUM is the number of the PEP you want to push out. See
% python pep2html.py --help
Transferring PEP Ownership
It occasionally becomes necessary to transfer ownership of PEPs to
a new champion. In general, we'd like to retain the original
author as a co-author of the transferred PEP, but that's really up
to the original author. A good reason to transfer ownership is
because the original author no longer has the time or interest in
updating it or following through with the PEP process, or has
fallen off the face of the 'net (i.e. is unreachable or not
responding to email). A bad reason to transfer ownership is
because you don't agree with the direction of the PEP. We try to
build consensus around a PEP, but if that's not possible, you can
always submit a competing PEP.
If you are interested in assuming ownership of a PEP, send a message
asking to take over, addressed to both the original author and the
PEP editor <peps(a)python.org>. If the original author doesn't
respond to email in a timely manner, the PEP editor will make a
unilateral decision (it's not like such decisions can't be
reversed :).
References and Footnotes
 This historical record is available by the normal CVS commands
for retrieving older revisions. For those without direct access
to the CVS tree, you can browse the current and past PEP revisions
via the SourceForge web site at
 The script referred to here is pep2html.py, which lives in
the same directory in the CVS tree as the PEPs themselves.
Try "pep2html.py --help" for details.
The URL for viewing PEPs on the web is
 PEP 9, Sample PEP Template
This document has been placed in the public domain.
I've received some enthusiastic emails from someone who wants to
revive restricted mode. He started out with a bunch of patches to the
CPython runtime using ctypes, which he attached to an App Engine bug:
Based on his code (the file secure.py is all you need, included in
secure.tar.gz) it seems he believes the only security leaks are
__subclasses__, gi_frame and gi_code. (I have since convinced him that
if we add "restricted" guards to these attributes, he doesn't need the
functions added to sys.)
I don't recall the exploits that Samuele once posted that caused the
death of rexec.py -- does anyone recall, or have a pointer to the
details?
--Guido van Rossum (home page: http://www.python.org/~guido/)
Alright, I will re-submit with the contents pasted. I never use double
backquotes as I think them rather ugly; that is the work of an editor
or some automated program in the chain. Plus, it also messed up my
line formatting and now I have lines with one word on them... Anyway,
the contents of PEP 3145:
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
In its present form, the subprocess.Popen implementation is prone to
deadlocking and blocking of the parent Python script while waiting on data
from the child process.
A search for "python asynchronous subprocess" will turn up numerous
accounts of people wanting to execute a child process and communicate with
it from time to time reading only the data that is available instead of
blocking to wait for the program to produce data. The current
behavior of the subprocess module is that when a user sends or receives
data via the stdin, stderr and stdout file objects, deadlocks are common
and documented. While communicate can be used to alleviate some of
the buffering issues, it will still cause the parent process to block while
attempting to read data when none is available to be read from the child
process.
There is a documented need for asynchronous, non-blocking functionality in
subprocess.Popen. Inclusion of the code would improve the
utility of the Python standard library on both Unix-based and
Windows builds of Python. Practically every I/O object in Python has a
file-like wrapper of some sort. Sockets already act as such and for
strings there is StringIO. Popen can be made to act like a file by simply
using the methods attached to the subprocess.Popen.stderr, stdout and
stdin file-like objects. But when using the read and write methods of
those options, you do not have the benefit of asynchronous I/O. In the
proposed solution the wrapper wraps the asynchronous methods to mimic a
file object.
I have been maintaining a Google Code repository that contains all of my
changes including tests and documentation, as well as a blog detailing
the problems I have come across in the development process.
I have been working on implementing non-blocking asynchronous I/O in the
subprocess.Popen module as well as a wrapper class for subprocess.Popen
that makes it so that an executed process can take the place of a file by
duplicating all of the methods and attributes that file objects have.
There are two base functions that have been added to the subprocess.Popen
class: Popen.send and Popen._recv, each with two separate implementations,
one for Windows and one for Unix based systems. The Windows
implementation uses ctypes to access the functions needed to control pipes
in the kernel 32 DLL in an asynchronous manner. On Unix based systems,
the Python interface for file control serves the same purpose. The
different implementations of Popen.send and Popen._recv have identical
arguments to make code that uses these functions work across multiple
When calling the Popen._recv function, the pipe name must be
passed as an argument, so there exists the Popen.recv function, which
selects stdout as the pipe for Popen._recv by default. Popen.recv_err
selects stderr as the pipe by default. "Popen.recv" and "Popen.recv_err"
are much easier to read and understand than "Popen._recv('stdout' ..." and
"Popen._recv('stderr' ..." respectively.
Since the Popen._recv function does not wait on data to be produced
before returning a value, it may return empty bytes. Popen.asyncread
handles this issue by returning all data read over a given time
interval.
The ProcessIOWrapper class uses the asyncread and asyncwrite functions to
allow a process to act like a file so that there are no blocking issues
that can arise from using the stdout and stdin file objects produced from
a subprocess.Popen call.
 [ python-Feature Requests-1191964 ] asynchronous Subprocess
 Daily Life in an Ivory Basement : /feb-07/problems-with-subprocess
 How can I run an external command asynchronously from Python? - Stack
 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
 Issue 1191964: asynchronous Subprocess - Python tracker
 Module to allow Asynchronous subprocess use on Windows and Posix
platforms - ActiveState Code
 subprocess.rst - subprocdev - Project Hosting on Google Code
 subprocdev - Project Hosting on Google Code
 Python Subprocess Dev
This PEP is licensed under the Open Publication License;
On Tue, Sep 8, 2009 at 22:56, Benjamin Peterson <benjamin(a)python.org> wrote:
> 2009/9/7 Eric Pruitt <eric.pruitt(a)gmail.com>:
>> Hello all,
>> I have been working on adding asynchronous I/O to the Python
>> subprocess module as part of my Google Summer of Code project. Now
>> that I have finished documenting and pruning the code, I present PEP
>> 3145 for its inclusion into the Python core code. Any and all feedback
>> on the PEP (http://www.python.org/dev/peps/pep-3145/) is appreciated.
> Hi Eric,
> One of the reasons you're not getting many responses is that you've not
> pasted the contents of the PEP in this message. Pasting it makes it
> really easy for people to comment on various sections.
> BTW, it seems like you were trying to use reST formatting with the
> text PEP layout. Double backquotes only mean something in reST.
In reviewing a fix for the metaclass calculation in __build_class__,
I realised that PEP 3115 poses a potential problem for the common
practice of using "type(name, bases, ns)" for dynamic class creation.
Specifically, if one of the base classes has a metaclass with a
significant __prepare__() method, then the current idiom will do the
wrong thing (and most likely fail as a result), since "ns" will
probably be an ordinary dictionary instead of whatever __prepare__()
would have returned.
Initially I was going to suggest making __build_class__ part of the
language definition rather than a CPython implementation detail, but
then I realised that various CPython specific elements in its
signature made that a bad idea.
Instead, I'm thinking along the lines of an
"operator.prepare(metaclass, bases)" function that does the metaclass
calculation dance, invoking __prepare__() and returning the result if
it exists, otherwise returning an ordinary dict. Under the hood we
would refactor this so that operator.prepare and __build_class__ were
using a shared implementation of the functionality at the C level - it
may even be advisable to expose that implementation via the C API as
well.
The correct idiom for dynamic type creation in a PEP 3115 world would then be:
from operator import prepare
cls = type(name, bases, prepare(type, bases))
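A pure-Python sketch of the proposed helper (the name and behaviour
come from the proposal above; this is not an existing operator
function, and the full metaclass-calculation dance over the bases is
omitted -- Python 3.3 eventually grew types.prepare_class for this):

```python
def prepare(metaclass, bases, name="<dynamic>"):
    # Use the metaclass's __prepare__ if it defines one,
    # otherwise fall back to a plain dict.
    if hasattr(metaclass, "__prepare__"):
        return metaclass.__prepare__(name, bases)
    return {}

class OrderedMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwds):
        return {"marker": True}

ns = prepare(OrderedMeta, ())
cls = type("C", (), dict(ns))
print(cls.marker)  # True
```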
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
Currently if you work in console and define a function and then
immediately call it - it will fail with SyntaxError.
For example, copy paste this completely valid Python script into console:
There is an issue for this that was just closed by Eric. However, I'd
like to know if there are people here that agree that if you paste a
valid Python script into console - it should work without changes.
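A minimal script of the kind described (illustrative, not the
original report): it runs fine as a file, but pasting it line by line
into the classic interactive console raises SyntaxError, because the
call arrives directly after the function body, before the blank line
that terminates the block:

```python
# Valid as a script; pasted into the interactive console, the
# `double(2)` line fails because no blank line closed the def block.
def double(x):
    return x * 2
double(2)
```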
I’ve read PEP 402 and would like to offer comments.
I know a bit about the import system, but not down to the nitty-gritty
details of PEP 302 and __path__ computations and all this fun stuff (by
which I mean, not fun at all). As such, I can’t find nasty issues in
dark corners, but I can offer feedback as a user. I think it’s a very
well-written explanation of a very useful feature: +1 from me. If it is
accepted, the docs will certainly be much more concise, but the PEP as a
thought process is a useful document to read.
> When new users come to Python from other languages, they are often
> confused by Python's packaging semantics.
Minor: I would reserve “packaging” for
packaging/distribution/installation/deployment matters, not Python
modules. I suggest “Python package semantics”.
> On the negative side, however, it is non-intuitive for beginners, and
> requires a more complex step to turn a module into a package. If
> ``Foo`` begins its life as ``Foo.py``, then it must be moved and
> renamed to ``Foo/__init__.py``.
Minor: In the UNIX world, or with version control tools, moving and
renaming are the same one thing (hg mv spam.py spam/__init__.py for
example). Also, if you turn a module into a package, you may want to
move code around, change imports, etc., so I’m not sure the renaming
part is such a big step. Anyway, if the import-sig people say that
users think it’s a complex or costly operation, I can believe it.
> (By the way, both of these additions to the import protocol (i.e. the
> dynamically-added ``__path__``, and dynamically-created modules)
> apply recursively to child packages, using the parent package's
> ``__path__`` in place of ``sys.path`` as a basis for generating a
> child ``__path__``. This means that self-contained and virtual
> packages can contain each other without limitation, with the caveat
> that if you put a virtual package inside a self-contained one, it's
> gonna have a really short ``__path__``!)
I don’t understand the caveat or its implications.
> In other words, we don't allow pure virtual packages to be imported
> directly, only modules and self-contained packages. (This is an
> acceptable limitation, because there is no *functional* value to
> importing such a package by itself. After all, the module object
> will have no *contents* until you import at least one of its
> subpackages or submodules!)
> Once ``zc.buildout`` has been successfully imported, though, there
> *will* be a ``zc`` module in ``sys.modules``, and trying to import it
> will of course succeed. We are only preventing an *initial* import
> from succeeding, in order to prevent false-positive import successes
> when clashing subdirectories are present on ``sys.path``.
I find that limitation acceptable. After all, there is no zc project,
and no zc module, just a zc namespace. I’ll just regret that it’s not
possible to provide a module docstring to inform that this is a
namespace package used for X and Y.
> The resulting list (whether empty or not) is then stored in a
> ``sys.virtual_package_paths`` dictionary, keyed by module name.
This was probably said on import-sig, but here I go: yet another import
artifact in the sys module! I hope we get ImportEngine in 3.3 to clean
up all this.
> * A new ``extend_virtual_paths(path_entry)`` function, to extend
> existing, already-imported virtual packages' ``__path__`` attributes
> to include any portions found in a new ``sys.path`` entry. This
> function should be called by applications extending ``sys.path``
> at runtime, e.g. when adding a plugin directory or an egg to the
Let’s imagine my application Spam has a namespace spam.ext for plugins.
To use a custom directory where plugins are stored, or a zip file with
plugins (I don’t use eggs, so let me talk about zip files here), I’d
have to call sys.path.append *and* pkgutil.extend_virtual_paths?
> * ``ImpImporter.iter_modules()`` should be changed to also detect and
> yield the names of modules found in virtual packages.
Is there any value in providing an argument to get the pre-PEP behavior?
Or to look at it from a different place, how can Python code know that
some module is a virtual or pure virtual package, if that is even a
useful thing to know?
> Last, but not least, the ``imp`` module (or ``importlib``, if
> appropriate) should expose the algorithm described in the `virtual
> paths`_ section above, as a
> ``get_virtual_path(modulename, parent_path=None)`` function, so that
> creators of ``__import__`` replacements can use it.
If I’m not mistaken, the rule of thumb these days is that imp is edited
when it’s absolutely necessary, otherwise code goes into importlib (more
easily written, read and maintained).
I wonder if importlib.import_module could implement the new import
semantics all by itself, so that we can benefit from this PEP in older
Pythons (importlib is on PyPI).
> * If you are changing a currently self-contained package into a
> virtual one, it's important to note that you can no longer use its
> ``__file__`` attribute to locate data files stored in a package
> directory. Instead, you must search ``__path__`` or use the
> ``__file__`` of a submodule adjacent to the desired files, or
> of a self-contained subpackage that contains the desired files.
Wouldn’t pkgutil.get_data help here?
Besides, putting data files in a Python package is viewed very poorly by
some (mostly people following the File Hierarchy Standard), and in
distutils2/packaging, we (will) have a resources system that’s as
convenient for users and more flexible for OS packagers. Using __file__
for more than information on the module is frowned upon for other
reasons anyway (I talked with a Debian developer about this one day but
forgot), so I think the limitation is okay.
> * XXX what is the __file__ of a "pure virtual" package? ``None``?
> Some arbitrary string? The path of the first directory with a
> trailing separator? No matter what we put, *some* code is
> going to break, but the last choice might allow some code to
> accidentally work. Is that good or bad?
A pure virtual package having no source file, I think it should have no
__file__ at all. I don’t know if that would break more code than using
an empty string for example, but it feels righter.
> For those implementing PEP \302 importer objects:
Minor: Here I think a link would not be a nuisance (IOW remove the
backslash).
our current deprecation policy is not so well defined (see e.g. ),
and it seems to me that it's something like:
1) deprecate something and add a DeprecationWarning;
2) forget about it after a while;
3) wait a few versions until someone notices it;
4) actually remove it;
I suggest the following process:
1) deprecate something and add a DeprecationWarning;
2) decide how long the deprecation should last;
3) use the deprecated-remove directive to document it;
4) add a test that fails after the update so that we remember to
remove it;
Other related issues:
* AFAIK the difference between PDW and DW is that PDW are silenced by
  default;
* now DW are silenced by default too, so there are no differences;
* I therefore suggest we stop using it, but we can leave it around
(other projects might be using it for something different);
Before, we more or less used to deprecate in release X and remove in
X+1, or add a PDW in X, DW in X+1, and remove it in X+2.
I suggest we drop this scheme and just use DW until X+N, where N is >=1
and depends on what is being removed. We can decide to leave the DW for
2-3 versions before removing something widely used, or just deprecate in
X and remove in X+1 for things that are less used.
Porting from 2.x to 3.x:
Some people will update directly from 2.7 to 3.2 or even later versions
(3.3, 3.4, ...), without going through earlier 3.x versions.
If something is deprecated on 3.2 but not in 2.7 and then is removed in
3.3, people updating from 2.7 to 3.3 won't see any warning, and this
will make the porting even more difficult.
I suggest that:
* nothing that is available and not deprecated in 2.7 will be
removed until 3.x (x needs to be defined);
* possibly we start backporting warnings to 2.7 so that they are
visible while running with -3;
Documenting the deprecations:
In order to advertise the deprecations, they should be documented:
* in their doc, using the deprecated-removed directive (and possibly
not the 'deprecated' one);
* in the what's new, possibly listing everything that is currently
deprecated, and when it will be removed;
Django seems to do something similar.
(Another thing I would like is a different rendering for deprecated
functions. Some parts of the docs have a deprecation warning at the top
of the section, and the individual functions look normal if you miss it.
Also, when linking to a deprecated function it would be nice to have the
link rendered in a different color or something similar.)
Testing the deprecations:
Tests that fail when a new release is made and the version number is
bumped should be added, so that we don't forget to remove the
deprecated code.
The test should have a related issue with a patch to remove the
deprecated function and the test.
Setting the priority of the issue to release blocker or deferred blocker
can be done in addition/instead, but that works well only when N == 1
(the priority could be updated for every release though).
The tests could be marked with an expected failure to give some time
after the release to remove them.
All the deprecation-related tests might be added to the same file, or
left in the test file of their module.
Where to add this:
Once we agree about the process we should write it down somewhere.
Possible candidates are:
* PEP 387: Backwards Compatibility Policy (it already has a few lines
about deprecations);
* a new PEP;
* the devguide;
I think having it in a PEP would be good, the devguide can then link to it.
: deprecated-removed doesn't seem to be documented in the documenting
doc, but it was added here: http://hg.python.org/cpython/rev/03296316a892
: we could also introduce a MetaDeprecationWarning and make
PendingDeprecationWarning inherit from it so that it can be used to
pending-deprecate itself. Once PendingDeprecationWarning is gone, the
MetaDeprecationWarning will become useless and can then be used to
deprecate itself.
Looking at a RECORD file installed by pysetup (on 3.3 trunk, on
Windows) all of the filenames seem to be absolute, even though the
package is pure-Python and so everything is under site-packages.
Looking at PEP 376, it looks like the paths should be relative to
site-packages. Two questions:
1. Am I reading this right? Is it a bug in pysetup?
2. Does it matter? Are relative paths needed, or is it just nice to have?
Oh, and a third question - where is the best place to ask these
questions? Now that packaging is in core, is python-dev OK? Or should
I be asking on the distutils SIG or the packaging developers list?
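For what it's worth, converting the absolute paths pysetup apparently
writes into what PEP 376 seems to intend is a one-liner; a sketch with
made-up POSIX paths (the question above concerns Windows, but the idea
is the same):

```python
import os

site_packages = "/usr/lib/python3.3/site-packages"    # hypothetical location
installed_file = site_packages + "/demo/__init__.py"  # hypothetical file

# PEP 376 reads as if RECORD entries for files under site-packages
# should be stored relative to it:
relative = os.path.relpath(installed_file, site_packages)
```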
As has been discussed here previously, Vinay Sajip and I are working on
a PEP for making "virtual Python environments" a la virtualenv a
built-in feature of Python 3.3.
This idea was first proposed on python-dev by Ian Bicking in February
2010. It was revived at PyCon 2011 and has seen discussion on
distutils-sig, more recently again on python-dev, and most recently on
python-ideas.
Full text of the draft PEP is pasted below, and also available on
Bitbucket. The reference implementation is also on Bitbucket.
For known issues in the reference implementation and cases where it does
not yet match the PEP, see the open issues list.
In particular, please note the "Open Questions" section of the draft
PEP. These are areas where we are still unsure of the best approach, or
where we've received conflicting feedback and haven't reached consensus.
We welcome your thoughts on anything in the PEP, but feedback on the
open questions is especially useful.
We'd also especially like to hear from Windows and OSX users, from
authors of packaging-related tools (packaging/distutils2, zc.buildout)
and from Python implementors (PyPy, IronPython, Jython).
If it is easier to review and comment on the PEP after it is published
on python.org, I can submit it to the PEP editors anytime. Otherwise
I'll wait until we've resolved a few more of the open questions, as it's
easier for me to update the PEP on Bitbucket.
Title: Python Virtual Environments
Author: Carl Meyer <carl(a)oddbird.net>
Type: Standards Track
Post-History: 24-Oct-2011, 28-Oct-2011
This PEP proposes to add to Python a mechanism for lightweight
"virtual environments" with their own site directories, optionally
isolated from system site directories. Each virtual environment has
its own Python binary (allowing creation of environments with various
Python versions) and can have its own independent set of installed
Python packages in its site directories, but shares the standard
library with the base installed Python.
The utility of Python virtual environments has already been well
established by the popularity of existing third-party
virtual-environment tools, primarily Ian Bicking's `virtualenv`_.
Virtual environments are already widely used for dependency management
and isolation, ease of installing and using Python packages without
system-administrator access, and automated testing of Python software
across multiple Python versions, among other uses.
Existing virtual environment tools suffer from a lack of support in
Python itself. Tools such as `rvirtualenv`_, which do
not copy the Python binary into the virtual environment, cannot
provide reliable isolation from system site directories. Virtualenv,
which does copy the Python binary, is forced to duplicate much of
Python's ``site`` module and manually symlink/copy an ever-changing
set of standard-library modules into the virtual environment in order
to perform a delicate boot-strapping dance at every
startup. (Virtualenv copies the binary because symlinking it does not
provide isolation, as Python dereferences a symlinked executable
before searching for `sys.prefix`.)
The ``PYTHONHOME`` environment variable, Python's only existing
built-in solution for virtual environments, requires
copying/symlinking the entire standard library into every
environment. Copying the whole standard library is not a lightweight
solution, and cross-platform support for symlinks remains inconsistent
(even on Windows platforms that do support them, creating them often
requires administrator privileges).
A virtual environment mechanism integrated with Python and drawing on
years of experience with existing third-party tools can be lower
maintenance, more reliable, and more easily available to all Python
users.
.. _virtualenv: http://www.virtualenv.org
.. _rvirtualenv: https://github.com/kvbik/rvirtualenv
When the Python binary is executed, it attempts to determine its
prefix (which it stores in ``sys.prefix``), which is then used to find
the standard library and other key files, and by the ``site`` module
to determine the location of the site-package directories. Currently
the prefix is found (assuming ``PYTHONHOME`` is not set) by first
walking up the filesystem tree looking for a marker file (``os.py``)
that signifies the presence of the standard library, and if none is
found, falling back to the build-time prefix hardcoded in the binary.
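A rough Python sketch of that search (the real logic lives in C, in
``Modules/getpath.c``; the names and layout here are illustrative):

```python
import os

def find_prefix(executable, version="3.3", landmark="os.py"):
    # Walk up from the binary's directory looking for the stdlib marker
    # file; a real search would then fall back to the build-time prefix.
    path = os.path.dirname(os.path.abspath(executable))
    while True:
        if os.path.exists(os.path.join(path, "lib",
                                       "python" + version, landmark)):
            return path
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root
            return None     # caller falls back to the hardcoded prefix
        path = parent
```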
This PEP proposes to add a new first step to this search. If a
``pyvenv.cfg`` file is found either adjacent to the Python executable,
or one directory above it, this file is scanned for lines of the form
``key = value``. If a ``home`` key is found, this signifies that the
Python binary belongs to a virtual environment, and the value of the
``home`` key is the directory containing the Python executable used to
create this virtual environment.
In this case, prefix-finding continues as normal using the value of
the ``home`` key as the effective Python binary location, which
results in ``sys.prefix`` being set to the system installation prefix,
while ``sys.site_prefix`` is set to the directory containing the
``pyvenv.cfg`` file.
(If ``pyvenv.cfg`` is not found or does not contain the ``home`` key,
prefix-finding continues normally, and ``sys.site_prefix`` will be
equal to ``sys.prefix``.)
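In pure-Python terms, the ``pyvenv.cfg`` scan described above amounts
to something like this sketch (the actual implementation is in the C
startup code):

```python
def scan_pyvenv_cfg(path):
    # Parse lines of the form "key = value"; lines without "=" are
    # ignored, and callers look for the "home" key themselves.
    config = {}
    with open(path) as f:
        for line in f:
            key, sep, value = line.partition("=")
            if sep:
                config[key.strip()] = value.strip()
    return config
```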
The ``site`` and ``sysconfig`` standard-library modules are modified
such that site-package directories ("purelib" and "platlib", in
``sysconfig`` terms) are found relative to ``sys.site_prefix``, while
other directories (the standard library, include files) are still
found relative to ``sys.prefix``.
(Also, ``sys.site_exec_prefix`` is added, and handled similarly with
regard to ``sys.exec_prefix``.)
Thus, a Python virtual environment in its simplest form would consist
of nothing more than a copy or symlink of the Python binary
accompanied by a ``pyvenv.cfg`` file and a site-packages
directory. The ``venv`` module also adds a ``pysetup3`` script into
each venv, as well as necessary DLLs and `.pyd` files on Windows.
In order to allow Python package managers to install packages into the
virtual environment the same way they would install into a normal
Python installation, and avoid special-casing virtual environments in
``sysconfig`` beyond using ``sys.site_prefix`` in place of
``sys.prefix``, the internal virtual environment layout mimics the
layout of the Python installation itself on each platform. So a
typical virtual environment layout on a POSIX system would be::

    pyvenv.cfg
    bin/python3
    bin/pysetup3
    lib/python3.3/site-packages/...

While on a Windows system::

    pyvenv.cfg
    Scripts/python.exe
    Scripts/pysetup3
    ... other DLLs and pyds...
    Lib/site-packages/...
Third-party packages installed into the virtual environment will have
their Python modules placed in the ``site-packages`` directory, and
their executables placed in ``bin/`` or ``Scripts\``.
On a normal Windows system-level installation, the Python binary
itself wouldn't go inside the "Scripts/" subdirectory, as it does
in the default venv layout. This is useful in a virtual
environment so that a user only has to add a single directory to
their shell PATH in order to effectively "activate" the virtual
environment.
On Windows, it is necessary to also copy or symlink DLLs and pyd
files from compiled stdlib modules into the env, because if the
venv is created from a non-system-wide Python installation,
Windows won't be able to find the Python installation's copies of
those files when Python is run from the venv.
Isolation from system site-packages
By default, a virtual environment is entirely isolated from the
system-level site-packages directories.
If the ``pyvenv.cfg`` file also contains a key
``include-system-site-packages`` with a value of ``true`` (not case
sensitive), the ``site`` module will also add the system site
directories to ``sys.path`` after the virtual environment site
directories. Thus system-installed packages will still be importable,
but a package of the same name installed in the virtual environment
will take precedence.
:pep:`370` user-level site-packages are considered part of the system
site-packages for venv purposes: they are not available from an
isolated venv, but are available from an
``include-system-site-packages = true`` venv.
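Combined with the ``home`` key described earlier, a complete
``pyvenv.cfg`` for an environment that keeps system site-packages
visible might therefore look like this (the ``home`` path is
installation-specific and purely illustrative):

```
home = /usr/bin
include-system-site-packages = true
```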
Creating virtual environments
This PEP also proposes adding a new ``venv`` module to the standard
library which implements the creation of virtual environments. This
module can be executed using the ``-m`` flag::
python3 -m venv /path/to/new/virtual/environment
A ``pyvenv`` installed script is also provided to make this more
convenient.
Running this command creates the target directory (creating any parent
directories that don't exist already) and places a ``pyvenv.cfg`` file
in it with a ``home`` key pointing to the Python installation the
command was run from. It also creates a ``bin/`` (or ``Scripts`` on
Windows) subdirectory containing a copy (or symlink) of the
``python3`` executable, and the ``pysetup3`` script from the
``packaging`` standard library module (to facilitate easy installation
of packages from PyPI into the new virtualenv). And it creates an
(initially empty) ``lib/pythonX.Y/site-packages`` (or
``Lib\site-packages`` on Windows) subdirectory.
If the target directory already exists an error will be raised, unless
the ``--clear`` option was provided, in which case the target
directory will be deleted and virtual environment creation will
proceed as usual.
The created ``pyvenv.cfg`` file also includes the
``include-system-site-packages`` key, set to ``true`` if ``venv`` is
run with the ``--system-site-packages`` option, ``false`` by default.
Multiple paths can be given to ``venv``, in which case an identical
virtual environment will be created, according to the given options, at
each provided path.
Copies versus symlinks
The technique in this PEP works equally well in general with a copied
or symlinked Python binary (and other needed DLLs on Windows). Some
users prefer a copied binary (for greater isolation from system
changes) and some prefer a symlinked one (so that e.g. security
updates automatically propagate to virtual environments).
There are some cross-platform difficulties with symlinks:
* Not all Windows versions support symlinks, and even on those that
do, creating them often requires administrator privileges.
* On OSX framework builds of Python, sys.executable is just a stub
that executes the real Python binary. Symlinking this stub does not
work with the implementation in this PEP; it must be
copied. (Fortunately the stub is also small, so copying it is not an
issue.)
Because of these issues, this PEP proposes to copy the Python binary
by default, to maintain cross-platform consistency in the default
behavior.
The ``pyvenv`` script accepts a ``--symlink`` option. If this option
is provided, the script will attempt to symlink instead of copy. If a
symlink fails (e.g. because they are not supported by the platform, or
additional privileges are needed), the script will warn the user and
fall back to a copy.
On OSX framework builds, where a symlink of the executable would
succeed but create a non-functional virtual environment, the script
will fail with an error message that symlinking is not supported on
OSX framework builds.
The high-level method described above will make use of a simple API
which provides mechanisms for third-party virtual environment creators
to customize environment creation according to their needs.
The ``venv`` module will contain an ``EnvBuilder`` class which accepts
the following keyword arguments on instantiation::
* ``system_site_packages`` - A Boolean value indicating that the
system Python site-packages should be available to the
environment (defaults to ``False``).
* ``clear`` - A Boolean value which, if True, will delete any
existing target directory instead of raising an exception
(defaults to ``False``).
* ``use_symlinks`` - A Boolean value indicating whether to attempt
to symlink the Python binary (and any necessary DLLs or other
binaries, e.g. ``pythonw.exe``), rather than copying. Defaults to
``False``.
The returned env-builder is an object which is expected to have a
single method, ``create``, which takes as required argument the path
(absolute or relative to the current directory) of the target
directory which is to contain the virtual environment. The ``create``
method will either create the environment in the specified directory,
or raise an appropriate exception.
Creators of third-party virtual environment tools will be free to use
the provided ``EnvBuilder`` class as a base class.
The ``venv`` module will also provide a module-level function as a
convenience::

    def create(env_dir, system_site_packages=False, clear=False,
               use_symlinks=True):
        builder = EnvBuilder(
            system_site_packages=system_site_packages,
            clear=clear, use_symlinks=use_symlinks)
        builder.create(env_dir)
The ``create`` method of the ``EnvBuilder`` class illustrates the
hooks available for customization::

    def create(self, env_dir):
        """Create a virtualized Python environment in a directory.

        :param env_dir: The target directory to create an environment in.
        """
        env_dir = os.path.abspath(env_dir)
        context = self.create_directories(env_dir)
        self.create_configuration(context)
        self.setup_python(context)
        self.setup_packages(context)
        self.setup_scripts(context)
Each of the methods ``create_directories``, ``create_configuration``,
``setup_python``, ``setup_packages`` and ``setup_scripts`` can be
overridden. The functions of these methods are:
* ``create_directories`` - creates the environment directory and
all necessary directories, and returns a context object. This is
just a holder for attributes (such as paths), for use by the other
methods.
* ``create_configuration`` - creates the ``pyvenv.cfg``
configuration file in the environment.
* ``setup_python`` - creates a copy of the Python executable (and,
under Windows, DLLs) in the environment.
* ``setup_packages`` - A placeholder method which can be overridden
in third party implementations to pre-install packages in the
virtual environment.
* ``setup_scripts`` - A placeholder method which can be overridden
in third party implementations to pre-install scripts (such as
activation and deactivation scripts) in the virtual environment.
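To make the shape of this API concrete, here is a runnable,
self-contained sketch of the hook structure (illustrative only, not the
proposed stdlib code; it implements just the first two hooks):

```python
import os
import sys

class SketchEnvBuilder:
    """Illustrative skeleton following the hooks described above."""

    def create(self, env_dir):
        env_dir = os.path.abspath(env_dir)
        context = self.create_directories(env_dir)
        self.create_configuration(context)
        # a full builder would continue with setup_python(),
        # setup_packages() and setup_scripts()

    def create_directories(self, env_dir):
        bin_dir = os.path.join(env_dir, "bin")
        os.makedirs(bin_dir)
        # the context is just a holder for paths used by later hooks
        return {"env_dir": env_dir, "bin_dir": bin_dir}

    def create_configuration(self, context):
        home = os.path.dirname(os.path.abspath(sys.executable))
        with open(os.path.join(context["env_dir"], "pyvenv.cfg"), "w") as f:
            f.write("home = %s\n" % home)
```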
The ``DistributeEnvBuilder`` subclass in the reference implementation
illustrates how these last two methods can be used in practice. It's
not envisaged that ``DistributeEnvBuilder`` will be actually added to
Python core, but it makes the reference implementation more
immediately useful for testing and exploratory purposes.
* The ``setup_packages`` method installs Distribute in the target
environment. This is needed at the moment in order to actually
install most packages in an environment, since most packages are
not yet packaging / setup.cfg based.
* The ``setup_scripts`` method installs shell activation scripts in
the environment. This is also done in a configurable way: A
``scripts`` property on the builder is expected to provide a
buffer which is a base64-encoded zip file. The zip file contains
directories "common", "linux2", "darwin", "win32", each
containing scripts destined for the bin directory in the
environment. The contents of "common" and the directory
corresponding to ``sys.platform`` are copied after doing some
text replacement of placeholders:
* ``__VIRTUAL_ENV__`` is replaced with the absolute path of the
environment directory.
* ``__VIRTUAL_PROMPT__`` is replaced with the environment prompt
prefix.
* ``__BIN_NAME__`` is replaced with the name of the bin directory
(``bin`` or ``Scripts``).
* ``__ENV_PYTHON__`` is replaced with the absolute path of the
environment's Python executable.
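The substitution step is plain text replacement; a sketch with a
made-up script template and values:

```python
# Hypothetical activation-script template and replacement values:
template = 'export PATH="__VIRTUAL_ENV__/__BIN_NAME__:$PATH"'
replacements = {
    "__VIRTUAL_ENV__": "/home/user/env",
    "__BIN_NAME__": "bin",
}
script = template
for placeholder, value in replacements.items():
    script = script.replace(placeholder, value)
# script is now: export PATH="/home/user/env/bin:$PATH"
```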
The "shell activation scripts" provided by ``DistributeEnvBuilder``
simply add the virtual environment's ``bin/`` (or ``Scripts\``)
directory to the front of the user's shell PATH. This is not strictly
necessary for use of a virtual environment (as an explicit path to the
venv's python binary or scripts can just as well be used), but it is
convenient. This PEP does not propose that the ``venv`` module in core
Python will add such activation scripts by default, as they are
shell-specific. Adding activation scripts for the wide variety of
possible shells is an added maintenance burden, and is left to
third-party extension tools.
No doubt the process of PEP review will show up any customization
requirements which have not yet been considered.
Splitting the meanings of ``sys.prefix``
Any virtual environment tool along these lines (which attempts to
isolate site-packages, while still making use of the base Python's
standard library with no need for it to be symlinked into the virtual
environment) is proposing a split between two different meanings
(among others) that are currently both wrapped up in ``sys.prefix``:
the answers to the questions "Where is the standard library?" and
"Where is the site-packages location where third-party modules should
be installed?"
This split could be handled by introducing a new ``sys`` attribute for
either the former prefix or the latter prefix. Either option
potentially introduces some backwards-incompatibility with software
written to assume the other meaning for ``sys.prefix``. (Such software
should preferably be using the APIs in the ``site`` and ``sysconfig``
modules to answer these questions rather than using ``sys.prefix``
directly, in which case there is no backwards-compatibility issue, but
in practice ``sys.prefix`` is sometimes used.)
The `documentation`__ for ``sys.prefix`` describes it as "A string
giving the site-specific directory prefix where the platform
independent Python files are installed," and specifically mentions the
standard library and header files as found under ``sys.prefix``. It
does not mention ``site-packages``.
This PEP currently proposes to leave ``sys.prefix`` pointing to the
base system installation (which is where the standard library and
header files are found), and introduce a new value in ``sys``
(``sys.site_prefix``) to point to the prefix for
``site-packages``. This maintains the documented semantics of
``sys.prefix``, but risks breaking isolation if third-party code uses
``sys.prefix`` rather than ``sys.site_prefix`` or the appropriate
``site`` API to find site-packages directories.
The most notable case is probably `setuptools`_ and its fork
`distribute`_, which mostly use ``distutils``/``sysconfig`` APIs, but
do use ``sys.prefix`` directly to build up a list of site directories
for pre-flight checking where ``pth`` files can usefully be placed.
It would be trivial to modify these tools (currently only
`distribute`_ is Python 3 compatible) to check ``sys.site_prefix`` and
fall back to ``sys.prefix`` if it doesn't exist (for earlier versions
of Python). If Distribute is modified in this way and released before
Python 3.3 is released with the ``venv`` module, there would be no
likely reason for an older version of Distribute to ever be installed
in a virtual environment.
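The fallback described above is a one-line ``getattr``; code written
this way would work both on existing interpreters and, under this
proposal, inside a 3.3 venv:

```python
import sys

# Use sys.site_prefix where it exists (proposed for 3.3) and fall
# back to sys.prefix on older interpreters:
site_prefix = getattr(sys, "site_prefix", sys.prefix)
```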
In terms of other third-party usage, a `Google Code Search`_ turns up
what appears to be a roughly even mix of usage between packages using
``sys.prefix`` to build up a site-packages path and packages using it
to e.g. eliminate the standard-library from code-execution
tracing. Either choice that's made here will require one or the other
of these uses to be updated.
.. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools
.. _distribute: http://packages.python.org/distribute/
.. _Google Code Search:
Naming of the new ``sys`` prefix attributes
The name ``sys.site_prefix`` was chosen with the following
considerations in mind:
* Inasmuch as "site" has a meaning in Python, it means a combination
of Python version, standard library, and specific set of
site-packages. This is, fundamentally, what a venv is (although it
shares the standard library with its "base" site).
* It is the Python ``site`` module which implements adding
site-packages directories to ``sys.path``, so ``sys.site_prefix`` is
a prefix used (and set) primarily by the ``site`` module.
A concern has been raised that the term ``site`` in Python is already
overloaded and of unclear meaning, and this usage will increase the
confusion.
One proposed alternative is ``sys.venv_prefix``, which has the
advantage of being clearly related to the venv implementation. The
downside of this proposal is that it implies the attribute is only
useful/relevant when in a venv and should be absent or ``None`` when
not in a venv. This imposes an unnecessary extra burden on code using
the attribute: ``sys.venv_prefix if sys.venv_prefix else
sys.prefix``. The prefix attributes are more usable and general if
they are always present and set, and split by meaning (stdlib vs
site-packages, roughly), rather than specifically tied to venv. Also,
third-party code should be encouraged to not know or care whether it
is running in a virtual environment or not; this option seems to work
against that goal.
Another option would be ``sys.local_prefix``, which has both the
advantage and disadvantage, depending on perspective, that it
introduces the new term "local" rather than drawing on existing
associations with the term "site".
Why not modify sys.prefix?
As discussed above under `Backwards Compatibility`_, this PEP proposes
to add ``sys.site_prefix`` as "the prefix relative to which
site-package directories are found". This maintains compatibility with
the documented meaning of ``sys.prefix`` (as the location relative to
which the standard library can be found), but means that code assuming
that site-packages directories are found relative to ``sys.prefix``
will not respect the virtual environment correctly.
Since it is unable to modify ``distutils``/``sysconfig``,
`virtualenv`_ is forced to instead re-point ``sys.prefix`` at the
virtual environment.
An argument could be made that this PEP should follow virtualenv's
lead here (and introduce something like ``sys.base_prefix`` to point
to the standard library and header files), since virtualenv already
does this and it doesn't appear to have caused major problems with
existing code.
Another argument in favor of this is that it would be preferable to
err on the side of greater, rather than lesser, isolation. Changing
``sys.prefix`` to point to the virtual environment and introducing a
new ``sys.base_prefix`` attribute would err on the side of greater
isolation in the face of existing code's use of ``sys.prefix``.
What about include files?
For example, ZeroMQ installs zmq.h and zmq_utils.h in $VE/include,
whereas SIP (part of PyQt4) installs sip.h by default in
$VE/include/pythonX.Y. With virtualenv, everything works because the
PythonX.Y include is symlinked, so everything that's needed is in
$VE/include. At the moment the reference implementation doesn't do
anything with include files, besides creating the include directory;
this might need to change, to copy/symlink $VE/include/pythonX.Y.
Since Python has no abstraction for a site-specific include directory,
other than for platform-specific files, the user expectation would seem
to be that all include files anyone could ever want should be found in
one of just two locations, with the sysconfig labels "include" &
"platinclude".
There's another issue: what if includes are Python-version-specific?
For example, SIP installs by default into $VE/include/pythonX.Y rather
than $VE/include, presumably because there's version-specific stuff in
there - but even if that's not the case with SIP, it could be the case
with some other package. The problem this creates is that you can't
just symlink the include/pythonX.Y directory, but actually have to
provide a writable directory and symlink/copy the contents from the
system include/pythonX.Y. Of course this is not hard to do, but it
does seem inelegant. OTOH it's really because there's no supporting
concept in Python/sysconfig.
Interface with packaging tools
Some work will be needed in packaging tools (Python 3.3 packaging,
Distribute) to support implementation of this PEP. For example:
* How Distribute and packaging use sys.prefix and/or sys.site_prefix.
in practice we'll need to use Distribute for a while, until packages have
migrated over to usage of setup.cfg.
* How packaging and Distribute set up shebang lines in scripts which they
install in virtual environments.
Testability and Source Build Issues
Currently in the reference implementation, virtual environments must
be created with an installed Python, rather than a source build, as
the base installation. In order to be able to fully test the ``venv``
module in the Python regression test suite, some anomalies in how
sysconfig data is configured in source builds will need to be
removed. For example, ``sysconfig.get_paths()`` in a source build gives::
'libdir': '/usr/lib ; or /usr/lib64 on a multilib system',
Need for ``install_name_tool`` on OSX?
`Virtualenv uses`_ ``install_name_tool``, a tool provided in the Xcode
developer tools, to modify the copied executable on OSX. We need input
from OSX developers on whether this is actually necessary in this
PEP's implementation of virtual environments, and if so, if there is
an alternative to ``install_name_tool`` that would allow ``venv`` to
not require Xcode to be installed.
.. _Virtualenv uses: https://github.com/pypa/virtualenv/issues/168
Activation and Utility Scripts
Virtualenv provides shell "activation" scripts as a user convenience,
to put the virtual environment's Python binary first on the shell
PATH. This is a maintenance burden, as separate activation scripts
need to be provided and maintained for every supported shell. For this
reason, this PEP proposes to leave such scripts to be provided by
third-party extensions; virtual environments created by the core
functionality would be used by directly invoking the environment's
Python binary or scripts.
If we are going to rely on external code to provide these
conveniences, we need to check with existing third-party projects in
this space (virtualenv, zc.buildout) and ensure that the proposed API
meets their needs.
(Virtualenv would be fine with the proposed API; it would become a
relatively thin wrapper with a subclass of the env builder that adds
shell activation and automatic installation of ``pip`` inside the
environment.)
Provide a mode that is isolated only from user site packages?
Is there sufficient rationale for providing a mode that isolates the
venv from :pep:`370` user site packages, but not from the system-level
site-packages?
Other Python implementations?
We should get feedback from Jython, IronPython, and PyPy about whether
there's anything in this PEP that they foresee as a difficulty for
their implementation.
The in-progress reference implementation is found in `a clone of the
CPython Mercurial repository`_. To test it, build and install it (the
virtual environment tool currently does not run from a source tree).
- From the installed Python, run ``bin/python3 -m venv
/path/to/new/virtualenv`` to create a virtual environment.
The reference implementation (like this PEP!) is a work in progress.
.. _a clone of the CPython Mercurial repository:
This document has been placed in the public domain.