I've received some enthusiastic emails from someone who wants to
revive restricted mode. He started out with a bunch of patches to the
CPython runtime using ctypes, which he attached to an App Engine bug.
Based on his code (the file secure.py is all you need, included in
secure.tar.gz) it seems he believes the only security leaks are
__subclasses__, gi_frame and gi_code. (I have since convinced him that
if we add "restricted" guards to these attributes, he doesn't need the
functions added to sys.)
I don't recall the exploits that Samuele once posted that caused the
death of rexec.py -- does anyone recall, or have a pointer to the thread?
--Guido van Rossum (home page: http://www.python.org/~guido/)
Alright, I will re-submit with the contents pasted. I never use double
backquotes as I think them rather ugly; that is the work of an editor
or some automated program in the chain. Plus, it also messed up my
line formatting and now I have lines with one word on them... Anyway,
the contents of PEP 3145:
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
In its present form, the subprocess.Popen implementation is prone to
deadlocking and blocking of the parent Python script while waiting on data
from the child process.
A search for "python asynchronous subprocess" will turn up numerous
accounts of people wanting to execute a child process and communicate with
it from time to time, reading only the data that is available instead of
blocking to wait for the program to produce data. The current
behavior of the subprocess module is that when a user sends or receives
data via the stdin, stderr and stdout file objects, deadlocks are common
and documented. While communicate can be used to alleviate some of
the buffering issues, it will still cause the parent process to block while
attempting to read data when none is available to be read from the child
process.
There is a documented need for asynchronous, non-blocking functionality in
subprocess.Popen. Inclusion of the code would improve the
utility of the Python standard library that can be used on Unix based and
Windows builds of Python. Practically every I/O object in Python has a
file-like wrapper of some sort. Sockets already act as such and for
strings there is StringIO. Popen can be made to act like a file by simply
using the methods attached to the subprocess.Popen.stderr, stdout and
stdin file-like objects. But when using the read and write methods of
those objects, you do not have the benefit of asynchronous I/O. In the
proposed solution the wrapper wraps the asynchronous methods to mimic a
file object.
I have been maintaining a Google Code repository that contains all of my
changes, including tests and documentation, as well as a blog detailing
the problems I have come across in the development process.
I have been working on implementing non-blocking asynchronous I/O in the
subprocess.Popen module as well as a wrapper class for subprocess.Popen
that makes it so that an executed process can take the place of a file by
duplicating all of the methods and attributes that file objects have.
There are two base functions that have been added to the subprocess.Popen
class: Popen.send and Popen._recv, each with two separate implementations,
one for Windows and one for Unix based systems. The Windows
implementation uses ctypes to access the functions needed to control pipes
in the kernel 32 DLL in an asynchronous manner. On Unix based systems,
the Python interface for file control serves the same purpose. The
different implementations of Popen.send and Popen._recv have identical
arguments to make code that uses these functions work across multiple
platforms.
Since Popen._recv requires the pipe name to be passed as an argument,
two convenience wrappers exist: Popen.recv selects stdout as the pipe for
Popen._recv by default, and Popen.recv_err selects stderr. "Popen.recv"
and "Popen.recv_err" are much easier to read and understand than
"Popen._recv('stdout' ..." and "Popen._recv('stderr' ..." respectively.
Since the Popen._recv function does not wait for data to be produced
before returning a value, it may return empty bytes. Popen.asyncread
handles this issue by returning all data read over a given time interval.
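For illustration, asyncread's behavior could be approximated like this (a
sketch under assumed names and defaults, not the proposed implementation):

    import time

    def asyncread(proc, timeout=0.25):
        # Keep polling the non-blocking recv until the time window
        # expires, accumulating whatever partial data shows up.
        deadline = time.time() + timeout
        chunks = []
        while time.time() < deadline:
            data = proc.recv()      # may return b'' when nothing is ready
            if data:
                chunks.append(data)
            else:
                time.sleep(0.01)    # avoid spinning at full speed
        return b''.join(chunks)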
The ProcessIOWrapper class uses the asyncread and asyncwrite functions to
allow a process to act like a file so that there are no blocking issues
that can arise from using the stdout and stdin file objects produced from
a subprocess.Popen call.
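Hypothetical usage of such a wrapper (the names follow the description
above, but the final API may differ):

    # Run a child process and interact with it as if it were a file,
    # without risking the usual stdin/stdout deadlocks.
    proc = ProcessIOWrapper(['grep', 'needle'])
    proc.write(b'haystack\nneedle\n')   # returns without blocking
    data = proc.read()                  # returns available data, maybe b''
    proc.close()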
References:
 [1] [ python-Feature Requests-1191964 ] asynchronous Subprocess
 [2] Daily Life in an Ivory Basement : /feb-07/problems-with-subprocess
 [3] How can I run an external command asynchronously from Python? -
     Stack Overflow
 [4] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
 [5] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
 [6] Issue 1191964: asynchronous Subprocess - Python tracker
 [7] Module to allow Asynchronous subprocess use on Windows and Posix
     platforms - ActiveState Code
 [8] subprocess.rst - subprocdev - Project Hosting on Google Code
 [9] subprocdev - Project Hosting on Google Code
 [10] Python Subprocess Dev
This PEP is licensed under the Open Publication License.
On Tue, Sep 8, 2009 at 22:56, Benjamin Peterson <benjamin(a)python.org> wrote:
> 2009/9/7 Eric Pruitt <eric.pruitt(a)gmail.com>:
>> Hello all,
>> I have been working on adding asynchronous I/O to the Python
>> subprocess module as part of my Google Summer of Code project. Now
>> that I have finished documenting and pruning the code, I present PEP
>> 3145 for its inclusion into the Python core code. Any and all feedback
>> on the PEP (http://www.python.org/dev/peps/pep-3145/) is appreciated.
> Hi Eric,
> One of the reasons you're not getting many responses is that you've not
> pasted the contents of the PEP in this message. Pasting it makes it really
> easy for people to comment on various sections.
> BTW, it seems like you were trying to use reST formatting with the
> text PEP layout. Double backquotes only mean something in reST.
Which I noticed since it's cited in the BeOpen license we still refer
to in LICENSE. Since pythonlabs.com itself is still up, it probably
isn't much work to make the logos.html URI work again, but I don't know
who maintains that page.
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less.
Four shall be the number of spaces thou shalt indent, and the number of thy
indenting shall be four. Eight shalt thou not indent, nor either indent thou
two, excepting that thou then proceed to four. Tabs are right out.
I see several problems with the two hex-conversion function pairs that
Python currently offers:
1. binascii.hexlify and binascii.unhexlify
2. bytes.fromhex and bytes.hex
bytes.hex is not implemented, although it was specified in PEP 358.
This means there is no symmetrical function to accompany bytes.fromhex.
Both pairs perform the same function, although the Zen of Python suggests
that "There should be one-- and preferably only one --obvious way to do it."
I do not understand why PEP 358 specified the bytes function pair although
it mentioned the binascii pair...
bytes.fromhex may receive spaces in the input string, whereas
binascii.unhexlify may not.
I see no good reason for these two functions to have different features.
binascii.unhexlify may receive both input types: strings or bytes, whereas
bytes.fromhex raises an exception when given a bytes parameter.
Again there is no reason for these functions to be different.
binascii.hexlify returns a bytes type - although ideally, converting to hex
should always return string types and converting from hex should always
return bytes. IMO there is no meaning of bytes as an output of hexlify,
since the output is a textual representation of other bytes.
This is also the suggested behavior of bytes.hex in PEP 358.
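To make the asymmetries concrete, here is roughly what an interactive
session shows today (behavior as described above; exact error messages may
vary by version):

    import binascii

    binascii.hexlify(b'\x01\x02')  # -> b'0102' (bytes out; problem #5)
    binascii.unhexlify('0102')     # -> b'\x01\x02' (str input accepted)
    binascii.unhexlify(b'0102')    # -> b'\x01\x02' (bytes input accepted)
    bytes.fromhex('01 02')         # -> b'\x01\x02' (spaces OK; problem #3)
    # bytes.fromhex(b'0102')       # raises an exception (problem #4)
    # b'\x01\x02'.hex()            # not implemented as of this writing (#1)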
Problems #4 and #5 call for a decision about the input and output of the
functions being discussed:
Option A: Strict input and output
unhexlify (and bytes.fromhex) may only receive strings and may only return
bytes; hexlify (and bytes.hex) may only receive bytes and may only return
strings.
Option B: Robust input and strict output
unhexlify (and bytes.fromhex) may receive bytes and strings and may only
return bytes; hexlify (and bytes.hex) may receive bytes or strings and may
only return strings.
Of course we may also consider a third option, which would allow the return
types of all functions to be robust (perhaps specified in a keyword
argument), but as I wrote in the description of problem #5, I see no sense
in that.
Note that PEP 3137 describes "... the more strict definitions of encoding
and decoding in Python 3000: encoding always takes a Unicode string and
returns a bytes sequence, and decoding always takes a bytes sequence and
returns a Unicode string." - suggesting option A.
To repeat problems #4 and #5, the current behavior does not match any of
these options:
* The return type of binascii.hexlify should be string, and this is not the
current behavior.
As for the input:
* Option A is not the current behavior because binascii.unhexlify may
receive both input types.
* Option B is not the current behavior because bytes.fromhex does not allow
bytes as input.
To fix these issues, three changes should be applied:
1. Deprecate bytes.fromhex. This fixes the following problems:
#4 (go with option B and remove the function that does not allow bytes
input)
#2 (the binascii functions will be the only way to "do it")
#1 (bytes.hex should not be implemented)
2. In order to keep the functionality that bytes.fromhex has over unhexlify,
the latter function should be able to handle spaces in its input (fix #3).
3. binascii.hexlify should return a string as its return type (fix #5).
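A small sketch of what fixes 2 and 3 would mean in practice (illustrative
helper names only, not a proposed patch):

    import binascii

    def unhexlify_robust(s):
        # Fix #3: tolerate spaces, as bytes.fromhex already does.
        if isinstance(s, str):
            s = s.replace(' ', '')
        else:
            s = s.replace(b' ', b'')
        return binascii.unhexlify(s)

    def hexlify_str(data):
        # Fix #5: return a string rather than bytes.
        return binascii.hexlify(data).decode('ascii')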
This is a follow up to PEP 3147. That PEP, already implemented in Python 3.2,
allows for Python source files from different Python versions to live together
in the same directory. It does this by putting a magic tag in the .pyc file
name and placing the .pyc file in a __pycache__ directory.
Distros such as Debian and Ubuntu will use this to greatly simplify
deploying Python, and Python applications and libraries. Debian and Ubuntu
usually ship more than one version of Python, and currently have to play
complex games with symlinks to make this work. PEP 3147 will go a long way to
eliminating the need for extra directories and symlinks.
One more thing I've found we need, though, is a way to handle shared libraries
for extension modules. Just as we can get name collisions on foo.pyc, we can
get collisions on foo.so. We obviously cannot install foo.so built for Python
3.2 and foo.so built for Python 3.3 in the same location. So symlink
nightmare's mini-me is back.
I have a fairly simple fix for this. I'd actually be surprised if this hasn't
been discussed before, but teh Googles hasn't turned up anything.
The idea is to put the Python version number in the shared library file name,
and extend .so lookup to find these extended file names. So for example, we'd
see foo.3.2.so instead, and Python would know how to dynload both that and the
traditional foo.so file too (for backward compatibility).
(On file naming: the original patch used foo.so.3.2 and that works just as
well, but I thought there might be tools that expect exactly a '.so' suffix,
so I changed it to put the Major.Minor version number to the left of the
extension. The exact naming scheme is of course open to debate.)
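As a sketch of the intended lookup order (illustrative Python only; the
actual patch does this in C, in the dynload machinery):

    import sys

    def candidate_so_names(modname):
        # Try the version-tagged name first, then fall back to the
        # traditional name for backward compatibility.
        major, minor = sys.version_info[:2]
        return ['%s.%d.%d.so' % (modname, major, minor),  # e.g. foo.3.2.so
                '%s.so' % modname]                        # legacy fallback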
This is a much simpler patch than PEP 3147, though I'm not 100% sure it's the
right approach. The way this works is by modifying the configure and
Makefile.pre.in to put the version number in the $SO make variable. Python
parses its (generated) Makefile to find $SO and it uses this deep in the
bowels of distutils to decide what suffix to use when writing shared libraries
built by 'python setup.py build_ext'.
This means the patched Python only writes versioned .so files by default. I
personally don't see that as a problem, and it does not affect the test suite,
with the exception of one easily tweaked test. I don't know if third party
tools will care. The fact that traditional foo.so shared libraries will still
satisfy the import should be enough, I think.
The patch is currently Linux only, since I need this for Debian and Ubuntu and
wanted to keep the change narrow.
Other possible approaches:
* Extend the distutils API so that the .so file extension can be passed in,
instead of being essentially hardcoded to what Python's Makefile contains.
* Keep the dynload_shlib.c change, but modify the Debian/Ubuntu build
environment to pass in $SO to make (though the configure.in warning and
sleep is a little annoying).
* Add a ./configure option to enable this, which Debuntu's build would use.
The patch is available here:
and my working branch is here:
Please let me know what you think. I'm happy to just commit this to the py3k
branch if there are no objections <wink>. I don't think a new PEP is in
order, but an update to PEP 3147 might make sense.
I have two somewhat unrelated thoughts about PEPs.
* Accepted: header
When PEP 3147 was accepted, I had a few folks ask that this be recorded in the
PEP by including a link to the BDFL pronouncement email. I realized that
there's no formal way to express this in a PEP, and many PEPs in fact don't
record more than the fact that it was accepted. I'd like to propose
officially adding an Accepted: header which should include a URL to the email
or other web resource where the PEP is accepted. I've come as close as
possible to this (without modifying the supporting scripts or PEP 1) in PEP
I'd be willing to update things if there are no objections.
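For example, the header might look like this (a hypothetical entry; the
URL is elided here, it would point at the pronouncement message in the
python-dev archives):

    Accepted: http://mail.python.org/pipermail/python-dev/...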
* EOL schedule for older releases.
We have semi-formal policies for the lifetimes of Python releases, though I'm
not sure this policy is written down in any of the existing informational
PEPs. However, we have release schedule PEPs going back to Python 1.6. It
seems reasonable to me that we include end-of-life information in those PEPs.
For example, we could state that Python 2.4 is no longer even being maintained
for security, and we could state the projected date that Python 2.6 will go
into security-only maintenance mode.
I would not mandate that we go back and update all previous PEPs for either of
these ideas. We'd adopt them moving forward and allow anyone who's motivated
to backfill information opportunistically.
Issue #5180 presented an interesting challenge: how to unpickle
instances of old-style classes when a pickle created with 2.x is
loaded in Python 3.x? The problem is that the pickle protocol requires
that unpickled instances be created without calling the __init__
method. This is necessary because the pickle file may not contain
information about how the __init__ method should be invoked. Instead,
implementations are required to bypass __init__ and populate the
instance's __dict__ directly using data found in the pickle.
The pure Python implementation uses the following trick, which happens to
work in 3.x:
class Empty: pass             # stand-in class with no __init__ of its own
pickled = Empty()
pickled.__class__ = Pickled   # Pickled is the class being unpickled
This, of course, creates a new-style class in 3.x, but if the 3.x version
of Pickled behaves similarly to its 2.x predecessor, it should work.
The cPickle implementation, on the other hand, uses 2.x C API which is
not available in 3.x: namely, the PyInstance_NewRaw function. In
order to fix the bug described in issue #5180, I had to emulate
PyInstance_NewRaw using type->tp_alloc. I considered and rejected the
idea of using tp_new instead.
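At the Python level, the equivalent of what the C code must do looks like
this (illustrative only; the actual patch works with type->tp_alloc in C,
and the class and state here are made up):

    class Pickled:
        def __init__(self):
            raise AssertionError('must not run during unpickling')

    # Allocate the instance without invoking __init__, then populate
    # its __dict__ directly from the state found in the pickle.
    obj = Pickled.__new__(Pickled)
    obj.__dict__.update({'x': 1})
    assert isinstance(obj, Pickled) and obj.x == 1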
Is this the right way to proceed? The patch is attached to the issue. 
I've been searching for a data structure like a tuple/list *but* unordered --
like a set, but duplicated elements shouldn't be removed. I have not even
found a recipe, so I'd like to write an implementation and contribute it to
the "collections" module in the standard library.
This is the situation I have: I have a data structure that represents an
arithmetic/boolean operation. Operations can be commutative, which means that
the order of their operands doesn't change the result of the operation. That
is, the following operations are equivalent:
operation(a, b, c) == operation(c, b, a) == operation(b, a, c)
operation(a, b, a) == operation(a, a, b) == operation(b, a, a)
operation(a, a) == operation(a, a)
So, I need a type to store the arguments/operands so that if two of these
collections have the same elements with the same multiplicity, they are
equivalent, regardless of the order.
A multiset is not exactly what I need: I still need to use the elements in the
order they were given. For example, the logical conjunction (aka the "and"
operator) has left and right operands; I need to evaluate the first/left one
and, if it returns True, then call the second/right one. They must not be
evaluated in a random order.
To sum up, it would behave like a tuple or a list, except when it's compared
with another object: They would be equivalent if they're both unordered
tuples/lists, and have the same elements. There can be mutable and immutable
editions (UnorderedList and UnorderedTuple, respectively).
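As a first approximation of the semantics, something along these lines (a
minimal sketch assuming hashable elements; the naming and details would be
for the PEP to settle):

    from collections import Counter

    class UnorderedTuple(tuple):
        # Iteration and indexing keep the original order; only equality
        # ignores it, comparing the elements as a multiset.
        def __eq__(self, other):
            if not isinstance(other, UnorderedTuple):
                return NotImplemented
            return Counter(self) == Counter(other)

        def __ne__(self, other):
            eq = self.__eq__(other)
            return eq if eq is NotImplemented else not eq

        def __hash__(self):
            # Equal multisets must hash equally.
            return hash(frozenset(Counter(self).items()))

    assert UnorderedTuple('abc') == UnorderedTuple('cba')
    assert UnorderedTuple('aab') != UnorderedTuple('abb')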
I will write a PEP to elaborate on this if you think it'd be nice to have. Or,
should I have written the PEP first?
Debugging a strange problem today, I got the following result:
Sockets opened by stdlib libraries are created without the "keepalive"
option, so the system default is used. The system default under Linux is
no keepalive.
So, if you are using a urllib connection, a POP3 connection, an IMAP
connection, etc. (any stdlib code that internally creates a socket) and
your server goes away suddenly (you lose network connectivity, for
instance), the library will wait FOREVER for the server. The client can't
detect that the server is no longer available.
The "keepalive" option will send a probe packet every X minutes of
inactivity, to check if the other side is still alive, even if the
connection is idle.
The issue is bad, but the solution is simple enough. Options:
1. All "client" libraries should create sockets with the "KEEPALIVE" option.
2. Modify the socket C module to create all sockets as "Keepalive" by
3. To have a global variable in the socket module to change the default
for future sockets. Something like current "socket.setdefaulttimeout()".
The default should be "keepalive".
4. Modify client libraries to accept a new optional socket-like object
as an optional parameter. This would allow things like transparent
compression or encryption, or to replace the socket connection by
anything else (read/write to shared memory or database, for example).
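Here is what option 1 amounts to in practice (a minimal sketch; the host
and port are placeholders):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Ask the kernel to probe idle connections so a vanished peer is
    # eventually detected instead of blocking forever.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    s.connect(('example.org', 110))   # e.g. a POP3 server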
This is an issue in Linux because by default the sockets are not
"keepalive". In other Unix systems, the default is "keepalive". I don't
know about MS Windows.
What do you think? The solution seems trivial, after deciding the right
way to go.
PS: "socket.setdefaulttimeout()" is not enough, because it could
shutdown a perfectly functional connection, just because it was idle for
Here are a couple of ideas I'm taking away from the bytes/string discussion.
First, it would probably be a good idea to have a String ABC.
Secondly, maybe the string situation in 2.x wasn't as broken as we
thought it was. In particular, those who deal with lots of encoded
strings seemed to find it handy, and miss it in 3.x. Perhaps strings
are more like numbers than we think. We have separate types for int,
float, Decimal, etc. But they're all numbers, and they all
cross-operate. In 2.x, it seems there were two missing features: no
encoding attribute on str, which should have been there and should have
been required, and the default encoding being "ASCII" (I can't tell you
how many times I've had to fix that issue when a non-ASCII encoded str
was passed to some output function).
So maybe having a second string type in 3.x that consists of an encoded
sequence of bytes plus the encoding, call it "estr", wouldn't have been
a bad idea. It would probably have made sense to have estr cooperate
with the str type, in the same way that two different kinds of numbers
cooperate, "promoting" the result of an operation only when necessary.
This would automatically achieve the kind of polymorphic functionality
that Guido is suggesting, but without losing the ability to do
x = e(ASCII)"bar"
a = ''.join(["foo", x])
(or whatever the syntax for such an encoded string literal would be --
I'm not claiming this is a good one), which I presume would bind "a" to a
Unicode string "foobar" -- we'd have to work out what gets promoted to what.
The language moratorium kind of makes this all theoretical, but building
a String ABC still would be a good start, and presumably isn't forbidden
by the moratorium.
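To make the first idea concrete, a String ABC might start out as small as
this (a sketch only; the right set of abstract methods would need its own
discussion):

    from abc import ABCMeta, abstractmethod

    class String(metaclass=ABCMeta):
        # The minimal common interface of text-like objects; concrete
        # string types would inherit from this or register against it.
        @abstractmethod
        def __len__(self):
            raise NotImplementedError

        @abstractmethod
        def __getitem__(self, index):
            raise NotImplementedError

        @abstractmethod
        def encode(self, encoding):
            raise NotImplementedError

    String.register(str)   # the built-in str satisfies the interface
    assert isinstance('abc', String)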