I've received some enthusiastic emails from someone who wants to
revive restricted mode. He started out with a bunch of patches to the
CPython runtime using ctypes, which he attached to an App Engine bug:
Based on his code (the file secure.py is all you need, included in
secure.tar.gz) it seems he believes the only security leaks are
__subclasses__, gi_frame and gi_code. (I have since convinced him that
if we add "restricted" guards to these attributes, he doesn't need the
functions added to sys.)
I don't recall the exploits that Samuele once posted that caused the
death of rexec.py -- does anyone recall, or have a pointer to the
discussion?
--Guido van Rossum (home page: http://www.python.org/~guido/)
I guess a long time ago, threading support in operating systems wasn't
very widespread, but these days all our supported platforms have it.
Is it still useful for production purposes to configure
--without-threads? Do people use this option for anything other than
curiosity?
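For context, a --without-threads build ships no _thread or threading
module, so code that wants to stay importable on such a build has to fall
back on the dummy_threading shim; a minimal sketch of that documented
idiom (whether anyone still relies on it is exactly the question):

    # Fall back to the no-op shim when the interpreter was built
    # with --without-threads and the real threading module is absent.
    try:
        import threading
    except ImportError:
        import dummy_threading as threading

    lock = threading.Lock()   # a real lock, or a do-nothing stand-in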
While trying 3.3 beta I found that I cannot use my favorite virtualenv pattern with pyvenv:
$ virtualenv .
$ pyvenv .
Error: Directory exists: /Users/stefan/sandbox/foo
I appreciate that this behavior is documented and was in the PEP from the start:
"If the target directory already exists an error will be raised, unless the --clear or --upgrade option was provided."
Still, I am curious what the rationale behind this restriction is. For me, being able to "bless" an existing directory with a virtualenv has always been one of its attractions.
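For reference, the documented escape hatch is --clear; here is a minimal
sketch, assuming Python 3.3's venv module, of the programmatic equivalent.
Note that, unlike virtualenv, this wipes whatever is already in the
directory, so it does not cover the "bless an existing directory" case:

    import venv

    # clear=True empties and reuses an existing target directory,
    # mirroring "pyvenv --clear ."
    builder = venv.EnvBuilder(clear=True)
    builder.create('.')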
Stefan H. Holek
Alright, I will re-submit with the contents pasted. I never use double
backquotes as I think them rather ugly; that is the work of an editor
or some automated program in the chain. Plus, it also messed up my
line formatting and now I have lines with one word on them... Anyway,
the contents of PEP 3145:
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
In its present form, the subprocess.Popen implementation is prone to
deadlocking and to blocking the parent Python script while waiting on data
from the child process.
A search for "python asynchronous subprocess" will turn up numerous
accounts of people wanting to execute a child process and communicate with
it from time to time, reading only the data that is available instead of
blocking to wait for the program to produce data. The current
behavior of the subprocess module is that when a user sends or receives
data via the stdin, stderr and stdout file objects, deadlocks are common
and documented. While communicate() can be used to alleviate some of
the buffering issues, it will still cause the parent process to block while
attempting to read data when none is available to be read from the child
process.
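A minimal, self-contained sketch of the blocking behavior being described,
using only the current standard library (the ten-second child is just an
illustrative stand-in):

    import subprocess, sys

    # A child that prints something early, then sleeps before finishing.
    child = [sys.executable, '-c',
             'import sys, time; print("early"); sys.stdout.flush(); '
             'time.sleep(10); print("late")']

    p = subprocess.Popen(child, stdout=subprocess.PIPE)

    # read() blocks until EOF, so the parent sits here for the full ten
    # seconds even though "early" was available almost immediately; there
    # is no way to ask for "whatever is available right now".
    data = p.stdout.read()
    p.wait()

    # communicate() has the same property: it waits for the child to exit.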
There is a documented need for asynchronous, non-blocking functionality in
subprocess.Popen. Inclusion of the code would improve the
utility of the Python standard library on both Unix-based and
Windows builds of Python. Practically every I/O object in Python has a
file-like wrapper of some sort. Sockets already act as such, and for
strings there is StringIO. Popen can be made to act like a file by simply
using the methods attached to the subprocess.Popen.stderr, stdout and
stdin file-like objects. But when using the read and write methods of
those objects, you do not have the benefit of asynchronous I/O. In the
proposed solution the wrapper wraps the asynchronous methods to mimic a
file object.
I have been maintaining a Google Code repository that contains all of my
changes, including tests and documentation, as well as a blog detailing
the problems I have come across in the development process.
I have been working on implementing non-blocking asynchronous I/O in the
subprocess module's Popen class, as well as a wrapper class for
subprocess.Popen that makes it so that an executed process can take the
place of a file by duplicating all of the methods and attributes that file
objects have.
There are two base functions that have been added to the subprocess.Popen
class: Popen.send and Popen._recv, each with two separate implementations,
one for Windows and one for Unix-based systems. The Windows
implementation uses ctypes to access the functions needed to control pipes
in the kernel32 DLL in an asynchronous manner. On Unix-based systems,
the Python interface for file control serves the same purpose. The
different implementations of Popen.send and Popen._recv have identical
arguments to make code that uses these functions work across multiple
platforms.
The Popen._recv function requires the pipe name to be passed as an
argument, so there exists a Popen.recv function, which selects stdout as
the pipe for Popen._recv by default. Popen.recv_err
selects stderr as the pipe by default. "Popen.recv" and "Popen.recv_err"
are much easier to read and understand than "Popen._recv('stdout' ..." and
"Popen._recv('stderr' ..." respectively.
Since the Popen._recv function does not wait on data to be produced
before returning a value, it may return empty bytes. Popen.asyncread
handles this issue by returning all data read over a given time
interval.
The ProcessIOWrapper class uses the asyncread and asyncwrite functions to
allow a process to act like a file so that there are no blocking issues
that can arise from using the stdout and stdin file objects produced from
a subprocess.Popen call.
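A rough usage sketch of what the proposal aims at (the names follow the
text above; the command, signatures and parameter names are illustrative,
since none of this is in the standard library yet):

    import subprocess

    p = subprocess.Popen(['some-long-running-tool'],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    # Proposed: return whatever stdout data is available right now,
    # possibly b'', without blocking the parent.
    chunk = p.recv()

    # Proposed: gather everything readable within a time window instead
    # of returning immediately with empty bytes.
    more = p.asyncread(timeout=1.0)

    # Proposed: wrap the process so it can stand in for a file object,
    # backed by the non-blocking primitives above.
    # f = ProcessIOWrapper(p)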
 [ python-Feature Requests-1191964 ] asynchronous Subprocess
 Daily Life in an Ivory Basement : /feb-07/problems-with-subprocess
 How can I run an external command asynchronously from Python? - Stack Overflow
 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
 Issue 1191964: asynchronous Subprocess - Python tracker
 Module to allow Asynchronous subprocess use on Windows and Posix platforms - ActiveState Code
 subprocess.rst - subprocdev - Project Hosting on Google Code
 subprocdev - Project Hosting on Google Code
 Python Subprocess Dev
This P.E.P. is licensed under the Open Publication License;
On Tue, Sep 8, 2009 at 22:56, Benjamin Peterson <benjamin(a)python.org> wrote:
> 2009/9/7 Eric Pruitt <eric.pruitt(a)gmail.com>:
>> Hello all,
>> I have been working on adding asynchronous I/O to the Python
>> subprocess module as part of my Google Summer of Code project. Now
>> that I have finished documenting and pruning the code, I present PEP
>> 3145 for its inclusion into the Python core code. Any and all feedback
>> on the PEP (http://www.python.org/dev/peps/pep-3145/) is appreciated.
> Hi Eric,
> One of the reasons you're not getting many responses is that you've not
> pasted the contents of the PEP in this message; doing so makes it much
> easier for people to comment on specific sections.
> BTW, it seems like you were trying to use reST formatting with the
> text PEP layout. Double backquotes only mean something in reST.
Python code should not depend upon the ordering of items in a dict.
Unfortunately it seems that a number of tests in the standard library do
just that.
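As a made-up illustration of the kind of dependence involved (not taken
from any of the failing tests):

    d = {'spam': 1, 'eggs': 2, 'ham': 3}

    # Fragile: the expected string encodes one particular iteration order,
    # which depends on PyDict_MINSIZE, the probing sequence and the string
    # hash -- exactly the knobs being varied here.
    expected_repr = "{'eggs': 2, 'ham': 3, 'spam': 1}"   # hypothetical value
    order_sensitive = (repr(d) == expected_repr)

    # Robust: compare contents without assuming any iteration order.
    assert sorted(d.items()) == [('eggs', 2), ('ham', 3), ('spam', 1)]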
Changing PyDict_MINSIZE from 8 to either 4 or 16 causes the following
tests to fail:
test_dis test_email test_inspect test_nntplib test_packaging
test_plistlib test_pprint test_symtable test_trace
test_sys also fails, but this is a legitimate failure in sys.getsizeof()
Changing the collision resolution function from f(n) = 5n + 1 to
f(n) = n + 1 results in the same failures, except for test_packaging and
test_symtable which pass.
Finally, changing the seed in unicode_hash() from (implicit) 0 to an
arbitrary value (12345678) causes the above tests to fail plus:
test_json test_set test_ttk_textonly test_urllib test_urlparse
I think this is a real issue as the unicode_hash() function is likely to
change soon due to http://bugs.python.org/issue13703.
So what should I do?
1. Submit one big bug report?
2. Submit a bug report for each "failing" test separately?
3. Ignore it, since the tests only fail when I start messing about?
I'm implementing the buffer API and some of memoryview for Jython. I
have read with interest, and mostly understood, the discussion in Issue
#10181 that led to the v3.3 re-implementation of memoryview and
much-improved documentation of the buffer API. Although Jython is
targeting v2.7 at the moment, and 1-D bytes (there's no Jython NumPy),
I'd like to lay a solid foundation that benefits from the recent CPython
work. I hope that some of the complexity in memoryview stems from legacy
considerations I don't have to deal with in Jython.
I am puzzled that PEP 3118 makes some specifications that seem
unnecessary and complicate the implementation. Would those who know the
API inside out answer a few questions?
My understanding is this: When a consumer requests a buffer from the
exporter it specifies using flags how it intends to navigate it. If the
buffer actually needs more apparatus than the consumer proposes, this
raises an exception. If the buffer needs less apparatus than the
consumer proposes, the exporter has to supply what was asked for. For
example, if the consumer sets PyBUF_STRIDES, and the buffer can only be
navigated by using suboffsets (PIL-style) this raises an exception.
Alternatively, if the consumer sets PyBUF_STRIDES, and the buffer is
just a simple byte array, the exporter has to supply shape and strides
arrays (with trivial values), since the consumer is going to use those.
Is there any harm in supplying shape and strides when they were not
requested? The PEP says: "PyBUF_ND ... If this is not given then shape
will be NULL". It doesn't stipulate that strides will be NULL if
PyBUF_STRIDES is not given, but the library documentation says so.
suboffsets is different since, even when requested, it will be NULL
unless the buffer actually needs suboffsets (PIL-style).
Similar, but simpler, the PEP says "PyBUF_FORMAT ... If format is not
explicitly requested then the format must be returned as NULL (which
means "B", or unsigned bytes)". What would be the harm in returning "B"?
One place where this really matters is in the implementation of
memoryview. PyMemoryView requests a buffer with the flags PyBUF_FULL_RO,
so even a simple byte buffer export will come with shape, strides and
format. A consumer (of the memoryview's buffer API) might specify
PyBUF_SIMPLE: according to the PEP I can't simply give it the original
buffer since required fields (that the consumer will presumably not
access) are not NULL. In practice, I'd like to: what could possibly go
wrong?
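For what it's worth, the Python-level view of that "fully equipped" export
of a trivial buffer looks like this under CPython 3.3 (values observed for
a plain bytes object; treat them as illustrative):

    m = memoryview(b'abc')   # memoryview asks the exporter for PyBUF_FULL_RO

    # Even though the underlying object is a flat byte string, the export
    # arrives with the full apparatus filled in with trivial values:
    print(m.format)      # 'B'  -- one unsigned byte per item
    print(m.shape)       # (3,)
    print(m.strides)     # (1,)
    print(m.suboffsets)  # ()   -- no PIL-style indirection needed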
I've drafted some edits to Metadata 1.2 with valuable feedback from
distutils-sig (special thanks to Erik Bray), which seems to have no
more comments on the issue after about six weeks. Let me know if you
have an opinion, or if you will have one during some bounded time in
the future.
Metadata 1.2 (PEP 345), a non-final PEP that has been adopted by
approximately 10 of the latest sdists on PyPI, cannot represent the
setuptools "extras" (optional dependencies) feature. This is a problem
because more than 1600 packages, or about 10% of those hosted on PyPI,
define "extras", as measured in May of this year.
The edit implements the extras feature by adding a new condition
"extra == 'name'" to the Metadata 1.2 environment markers.
Requirements with this marker are only installed when the named
optional feature is requested. Valid extras for a package must be
declared with Provides-Extra: name.
It also adds Setup-Requires-Dist as a way to specify requirements
needed during an install as opposed to during runtime.
Setup-Requires-Dist (multiple use)
Like Requires-Dist, but names dependencies needed while the
distribution's distutils / packaging `setup.py` / `setup.cfg` is run.
Provides-Extra (multiple use)
A string containing the name of an optional feature.
Requires-Dist: reportlab; extra == 'pdf'
Requires-Dist: nose; extra == 'test'
Requires-Dist: sphinx; extra == 'doc'
(full changeset on
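To make the interaction of the two new fields concrete, here is a sketch
of how they would combine in a single metadata block (the package name and
versions are invented for illustration):

    Metadata-Version: 1.2
    Name: example-reports
    Version: 1.0
    Setup-Requires-Dist: setuptools
    Provides-Extra: pdf
    Provides-Extra: test
    Requires-Dist: reportlab; extra == 'pdf'
    Requires-Dist: nose; extra == 'test'

An installer would then pull in reportlab only when the 'pdf' extra is
explicitly requested, e.g. via the familiar setuptools spelling
example-reports[pdf].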
Is there a list somewhere of the IRC nicks of the core developers that
use IRC (and who wish to be listed) alongside their real names? If
there is no such list, has there ever been discussion on python-dev of
creating such a list?
it is my understanding that the patches floating around the net to support Visual Studio 2010 for compiling the Python core and for distutils will never be accepted, and that the 2.7 line is therefore stuck with VS 2008 for the remainder of its life. Could you please confirm that?