I've received some enthusiastic emails from someone who wants to
revive restricted mode. He started out with a bunch of patches to the
CPython runtime using ctypes, which he attached to an App Engine bug:
Based on his code (the file secure.py is all you need, included in
secure.tar.gz) it seems he believes the only security leaks are
__subclasses__, gi_frame and gi_code. (I have since convinced him that
if we add "restricted" guards to these attributes, he doesn't need the
functions added to sys.)
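For context on why __subclasses__ in particular is the interesting leak, the
classic escape from a namespace with stripped builtins goes through object's
subclass list; a minimal sketch (my illustration, not his patch):

    # Any reachable object leads back to object, and from there
    # __subclasses__() enumerates every class the interpreter knows
    # about -- including classes that can reopen files, import, etc.
    all_classes = ().__class__.__bases__[0].__subclasses__()
    print(len(all_classes))

Guarding attribute access to __subclasses__ (and to gi_frame/gi_code, which
expose frame globals and builtins through generators) is what the proposed
"restricted" checks would cut off.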
I don't recall the exploits that Samuele once posted that caused the
death of rexec.py -- does anyone recall, or have a pointer to them?
--Guido van Rossum (home page: http://www.python.org/~guido/)
Alright, I will re-submit with the contents pasted. I never use double
backquotes as I think them rather ugly; that is the work of an editor
or some automated program in the chain. Plus, it also messed up my
line formatting and now I have lines with one word on them... Anyway,
the contents of PEP 3145:
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
In its present form, the subprocess.Popen implementation is prone to
deadlocking and blocking of the parent Python script while waiting on data
from the child process.
A search for "python asynchronous subprocess" will turn up numerous
accounts of people wanting to execute a child process and communicate with
it from time to time, reading only the data that is available instead of
blocking to wait for the program to produce data. The current
behavior of the subprocess module is that when a user sends or receives
data via the stdin, stderr and stdout file objects, deadlocks are common
and documented. While communicate() can be used to alleviate some of
the buffering issues, it will still cause the parent process to block while
attempting to read data when none is available to be read from the child
process.
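As a concrete illustration of the blocking behaviour described above (my
sketch, not part of the PEP), a plain read on a pipe stalls the parent until
the child produces output and exits:

    import subprocess
    import sys

    # Child that produces output only after a long delay.
    proc = subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(60); print('late')"],
        stdout=subprocess.PIPE,
    )

    # Blocks the parent for the full 60 seconds, even if we only wanted
    # whatever data happened to be available right now.
    data = proc.stdout.read()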
There is a documented need for asynchronous, non-blocking functionality in
subprocess.Popen. Inclusion of the code would improve the utility of the
Python standard library on both Unix-based and Windows builds of Python.
Practically every I/O object in Python has a file-like wrapper of some
sort. Sockets already act as such, and for strings there is StringIO.
Popen can be made to act like a file by simply using the methods attached
to the subprocess.Popen.stderr, stdout and stdin file-like objects. But
when using the read and write methods of those objects, you do not have
the benefit of asynchronous I/O. In the proposed solution, the wrapper
wraps the asynchronous methods to mimic a file object.
I have been maintaining a Google Code repository that contains all of my
changes, including tests and documentation, as well as a blog detailing
the problems I have come across in the development process.
I have been working on implementing non-blocking asynchronous I/O in the
subprocess module as well as a wrapper class for subprocess.Popen
that makes it so that an executed process can take the place of a file by
duplicating all of the methods and attributes that file objects have.
There are two base functions that have been added to the subprocess.Popen
class: Popen.send and Popen._recv, each with two separate implementations,
one for Windows and one for Unix-based systems. The Windows
implementation uses ctypes to access the functions needed to control pipes
in the kernel32 DLL in an asynchronous manner. On Unix-based systems,
the Python interface for file control serves the same purpose. The
different implementations of Popen.send and Popen._recv have identical
arguments to make code that uses these functions work across multiple
platforms.
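For readers unfamiliar with "the Python interface for file control", the Unix
side boils down to flipping the pipe into non-blocking mode with fcntl; a
rough sketch of the idea (mine, not the PEP's actual implementation):

    import errno
    import fcntl
    import os
    import subprocess

    proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)

    # Put the child's stdout pipe into non-blocking mode.
    fd = proc.stdout.fileno()
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

    try:
        chunk = os.read(fd, 4096)   # returns whatever is buffered, immediately
    except OSError as exc:
        if exc.errno != errno.EAGAIN:
            raise
        chunk = b""                 # nothing available right now; try later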
The Popen._recv function requires the pipe name to be passed as an
argument, so the Popen.recv function exists to select stdout as the pipe
for Popen._recv by default. Popen.recv_err selects stderr as the pipe by
default. "Popen.recv" and "Popen.recv_err" are much easier to read and
understand than "Popen._recv('stdout' ..." and "Popen._recv('stderr' ..."
respectively.
Since the Popen._recv function does not wait for data to be produced
before returning a value, it may return empty bytes. Popen.asyncread
handles this issue by returning all data read over a given time interval.
The ProcessIOWrapper class uses the asyncread and asyncwrite functions to
allow a process to act like a file so that there are no blocking issues
that can arise from using the stdout and stdin file objects produced from
a subprocess.Popen call.
References:
- [python-Feature Requests-1191964] asynchronous Subprocess
- Daily Life in an Ivory Basement: /feb-07/problems-with-subprocess
- How can I run an external command asynchronously from Python? - Stack Overflow
- 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
- 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
- Issue 1191964: asynchronous Subprocess - Python tracker
- Module to allow Asynchronous subprocess use on Windows and Posix
  platforms - ActiveState Code
- subprocess.rst - subprocdev - Project Hosting on Google Code
- subprocdev - Project Hosting on Google Code
- Python Subprocess Dev
This PEP is licensed under the Open Publication License;
On Tue, Sep 8, 2009 at 22:56, Benjamin Peterson <benjamin(a)python.org> wrote:
> 2009/9/7 Eric Pruitt <eric.pruitt(a)gmail.com>:
>> Hello all,
>> I have been working on adding asynchronous I/O to the Python
>> subprocess module as part of my Google Summer of Code project. Now
>> that I have finished documenting and pruning the code, I present PEP
>> 3145 for its inclusion into the Python core code. Any and all feedback
>> on the PEP (http://www.python.org/dev/peps/pep-3145/) is appreciated.
> Hi Eric,
> One of the reasons you're not getting many responses is that you've not
> pasted the contents of the PEP in this message. Pasting it makes it really
> easy for people to comment on various sections.
> BTW, it seems like you were trying to use reST formatting with the
> text PEP layout. Double backquotes only mean something in reST.
In reviewing a fix for the metaclass calculation in __build_class__,
I realised that PEP 3115 poses a potential problem for the common
practice of using "type(name, bases, ns)" for dynamic class creation.
Specifically, if one of the base classes has a metaclass with a
significant __prepare__() method, then the current idiom will do the
wrong thing (and most likely fail as a result), since "ns" will
probably be an ordinary dictionary instead of whatever __prepare__()
would have returned.
Initially I was going to suggest making __build_class__ part of the
language definition rather than a CPython implementation detail, but
then I realised that various CPython specific elements in its
signature made that a bad idea.
Instead, I'm thinking along the lines of an
"operator.prepare(metaclass, bases)" function that does the metaclass
calculation dance, invoking __prepare__() and returning the result if
it exists, otherwise returning an ordinary dict. Under the hood we
would refactor this so that operator.prepare and __build_class__ were
using a shared implementation of the functionality at the C level - it
may even be advisable to expose that implementation via the C API as well.
The correct idiom for dynamic type creation in a PEP 3115 world would then be:
from operator import prepare
cls = type(name, bases, prepare(type, bases))
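To make the problem concrete, here is a small sketch (mine, not from the
proposal) of how the current "type(name, bases, ns)" idiom bypasses a
significant __prepare__():

    import collections

    class OrderedMeta(type):
        @classmethod
        def __prepare__(mcl, name, bases, **kwds):
            # Significant __prepare__(): class bodies are collected in
            # an OrderedDict rather than a plain dict.
            return collections.OrderedDict()

        def __new__(mcl, name, bases, ns):
            if not isinstance(ns, collections.OrderedDict):
                raise TypeError("namespace must come from __prepare__()")
            cls = super().__new__(mcl, name, bases, dict(ns))
            cls.member_order = list(ns)
            return cls

    # The class statement machinery calls __prepare__() for us.
    class Base(metaclass=OrderedMeta):
        x = 1
        y = 2

    # The dynamic idiom hands the metaclass a plain dict instead of the
    # namespace __prepare__() would have returned, and fails here.
    try:
        Dynamic = type("Dynamic", (Base,), {"a": 1})
    except TypeError as exc:
        print("dynamic creation failed:", exc)

With the proposed operator.prepare(), the namespace passed in that last step
would come from prepare(OrderedMeta, (Base,)) instead of a dict literal.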
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
There is a need for the default Python2 install to place a symlink at
/usr/bin/python2 that points to /usr/bin/python, or for the documentation to
recommend that packagers ensure that python2 is defined. Also, all
documentation should be changed to recommend that "#!/usr/bin/env python2"
be used as the shebang for Python 2 scripts.
This is needed because some distributions (Arch Linux, in particular) point
/usr/bin/python to /usr/bin/python3, while others (including Slackware,
Debian, and the BSDs, probably more) do not even define the python2 command.
This means that a script has no way of achieving cross-platform
compatibility. The point at which many distributions begin to alias
/usr/bin/python to /usr/bin/python3 is due soon, and for the next couple of
years, it would be best to use a python2 or python3 shebang in all scripts,
making no assumptions about plain python, which should only be invoked
interactively. This email from about 3 years ago seems relevant:
Again, this issue needs to be addressed by the Python developers themselves
so that different *nix distributions will handle it consistently, allowing
Python scripts to continue to be cross-platform.
Following the http://docs.python.org/devguide/coverage.html doc, you'll end
up with several "new" files/dirs in your checkout:
- .coverage, used by coveragepy to save its info
- coverage/ , the symlink to coveragepy/coverage
- htmlcov/ , the dir where the coverage HTML pages are written
I think they should be added to .hgignore so that hg st won't show them.
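Something like the following .hgignore additions would cover them (the exact
patterns are just my guess at what the change would look like):

    syntax: glob
    .coverage
    coverage
    htmlcov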
I'm writing here since I don't think an issue is needed for such a
matter; if that's not the case, I apologize.
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi
If no one objects, I'll promote Tools/scripts/pysetup3 to a top level
script that gets installed in scripts/ like 2to3, pydoc etc..
That way, people will be able to use it directly when installing,
removing projects, or studying what's installed.
Tarek Ziadé | http://ziade.org
The bytes type in Python 3 does not feel very consistent.
--> some_var = 'abcdef'
--> some_other_var = b'abcdef'
On the one hand we have the 'bytes are ASCII data' type interface, and
on the other we have the 'bytes are a list of integers between 0 and 255'
interface. And trying to use the two together is not intuitive:
--> some_other_var == b'd'
When I'm parsing a .dbf file and extracting field types from the byte
stream, I'm not thinking, "okay, 67 is a Character field" -- what I'm
thinking is, "b'C' is a Character field".
Considering that ord() still works fine, I'm not sure why it was done
this way. Is there code out there that is using this "list of ints"
interface, or is there time to make changes to bytes?
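For anyone who hasn't tripped over this yet, a short interpreter transcript
(mine, not from the original message) of the mismatch being described:

--> some_other_var = b'abcdef'
--> some_other_var[3]          # indexing gives an int, not bytes
100
--> some_other_var[3] == b'd'  # so the "obvious" comparison is False
False
--> some_other_var[3:4]        # slicing, by contrast, gives bytes
b'd'
--> ord(b'd')
100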
I would like to suggest that we remove the socket HOWTO (currently at
My main issue with this document is that it doesn't seem to have
a well-defined destination:
- people who know sockets won't learn anything from it
- but people who don't know sockets will probably find it clear as mud
(for example, what's an "INET" or "STREAM" socket? what's "select"?)
I have other issues, such as the style/tone it's written in. I'm sure
the author had fun writing it but it doesn't fit well with the rest of
the documentation. Also, the author gives a lot of "advice" without
explaining or justifying it ("if somewhere in those input lists of
sockets is one which has died a nasty death, the select will fail" ->
is that really true? what is a "nasty death" and how is that supposed to
happen? couldn't the author have put a 3-line example to demonstrate
this supposed drawback and how it manifests?).
And, finally, many statements seem arbitrary ("There’s no question that
the fastest sockets code uses non-blocking sockets and select to
multiplex them") or plain wrong ("threading support in Unixes varies
both in API and quality. So the normal Unix solution is to fork a
subprocess to deal with each connection"). I don't think giving
misleading advice to users is really a good idea. And suggesting that
beginners use non-blocking sockets without even *showing* how (or
pointing to asyncore or Twisted) is a very bad idea. select() is not
enough, you still have to be prepared to get EAGAIN or EWOULDBLOCK when
calling recv() or send() (i.e. select() can give false positives).
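To spell out that last point: even after select() reports a socket as
readable, a non-blocking recv() can still come up empty, so correct code has
to handle EAGAIN/EWOULDBLOCK anyway. A minimal sketch (mine, not from the
HOWTO):

    import errno
    import select
    import socket

    sock = socket.create_connection(("example.com", 80))
    sock.setblocking(False)

    readable, _, _ = select.select([sock], [], [], 5.0)
    if readable:
        try:
            data = sock.recv(4096)
        except OSError as exc:
            if exc.errno not in (errno.EAGAIN, errno.EWOULDBLOCK):
                raise
            data = b""   # false positive from select(); retry later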
Oh and I think it's obsolete too, because the "class mysocket"
concatenates the output of recv() with a str rather than a bytes
object. Not to mention that features of the "class mysocket" can be had
using a buffered socket.makefile() instead of writing custom code.
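For comparison, the buffered approach referred to above is roughly this (an
illustrative sketch, not a drop-in replacement for the HOWTO's class):

    import socket

    sock = socket.create_connection(("example.com", 80))
    # makefile() gives buffered, file-like binary streams, so the manual
    # "accumulate chunks until the message is complete" loop from the
    # HOWTO is unnecessary for simple line-delimited protocols.
    wfile = sock.makefile("wb")
    rfile = sock.makefile("rb")
    wfile.write(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    wfile.flush()
    status_line = rfile.readline()   # e.g. b"HTTP/1.0 200 OK\r\n"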
(followed up from http://bugs.python.org/issue12126 at Eli's request)
I've pushed packaging in stdlib. There are a few buildbots errors
we're fixing right now.
We will continue our work there directly from now on.
The next "big" commit will be for the documentation.
Tarek Ziadé | http://ziade.org
I'd like to escalate http://bugs.python.org/issue12226: 'use secured
channel for uploading packages to pypi', to be shipped with the next Python
release.
This will prevent pydotorg password sniffing when submitting packages
through public networks (such as hotels).