I've received some enthusiastic emails from someone who wants to
revive restricted mode. He started out with a bunch of patches to the
CPython runtime using ctypes, which he attached to an App Engine bug:
Based on his code (the file secure.py is all you need, included in
secure.tar.gz) it seems he believes the only security leaks are
__subclasses__, gi_frame and gi_code. (I have since convinced him that
if we add "restricted" guards to these attributes, he doesn't need the
functions added to sys.)
I don't recall the exploits that Samuele once posted that caused the
death of rexec.py -- does anyone recall, or have a pointer to the discussion?
--Guido van Rossum (home page: http://www.python.org/~guido/)
Alright, I will re-submit with the contents pasted. I never use double
backquotes as I think them rather ugly; that is the work of an editor
or some automated program in the chain. Plus, it also messed up my
line formatting and now I have lines with one word on them... Anyway,
the contents of PEP 3145:
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
In its present form, the subprocess.Popen implementation is prone to
deadlocking and to blocking the parent Python script while waiting on data
from the child process.
A search for "python asynchronous subprocess" will turn up numerous
accounts of people wanting to execute a child process and communicate with
it from time to time, reading only the data that is available instead of
blocking to wait for the program to produce data. The current
behavior of the subprocess module is that when a user sends or receives
data via the stdin, stderr and stdout file objects, deadlocks are common
and documented. While communicate() can be used to alleviate some of
the buffering issues, it will still cause the parent process to block while
attempting to read data when none is available to be read from the child
process.
There is a documented need for asynchronous, non-blocking functionality in
subprocess.Popen. Inclusion of the code would improve the
utility of the Python standard library on both Unix-based and
Windows builds of Python. Practically every I/O object in Python has a
file-like wrapper of some sort. Sockets already act as such, and for
strings there is StringIO. Popen can be made to act like a file by simply
using the methods attached to the subprocess.Popen.stderr, stdout and
stdin file-like objects. But when using the read and write methods of
those objects, you do not have the benefit of asynchronous I/O. In the
proposed solution the wrapper wraps the asynchronous methods to mimic a
file object.
I have been maintaining a Google Code repository that contains all of my
changes, including tests and documentation, as well as a blog detailing
the problems I have come across in the development process.
I have been working on implementing non-blocking asynchronous I/O in the
subprocess.Popen module as well as a wrapper class for subprocess.Popen
that makes it so that an executed process can take the place of a file by
duplicating all of the methods and attributes that file objects have.
There are two base functions that have been added to the subprocess.Popen
class: Popen.send and Popen._recv, each with two separate implementations,
one for Windows and one for Unix-based systems. The Windows
implementation uses ctypes to access the functions needed to control pipes
in the kernel32 DLL in an asynchronous manner. On Unix-based systems,
the Python interface for file control serves the same purpose. The
different implementations of Popen.send and Popen._recv have identical
arguments to make code that uses these functions work across multiple
platforms.
Since the Popen._recv function requires the pipe name to be passed as an
argument, the Popen.recv function exists, which selects stdout as the pipe
for Popen._recv by default; Popen.recv_err
selects stderr as the pipe by default. "Popen.recv" and "Popen.recv_err"
are much easier to read and understand than "Popen._recv('stdout' ..." and
"Popen._recv('stderr' ..." respectively.
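The Unix side can be sketched with the standard fcntl module; this is only an illustration of the non-blocking-read technique the PEP describes, not the PEP's actual code, and the names set_nonblocking and recv_some are hypothetical:

```python
import fcntl
import os
import subprocess

def set_nonblocking(fd):
    """Put a file descriptor into non-blocking mode via fcntl (POSIX only)."""
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

def recv_some(pipe, maxsize=1024):
    """Read whatever is currently available on the pipe without blocking.

    Returns b'' when no data is ready yet, mirroring the 'may return
    empty bytes' behavior described for Popen._recv."""
    set_nonblocking(pipe.fileno())
    try:
        return os.read(pipe.fileno(), maxsize)
    except BlockingIOError:  # EAGAIN: nothing to read right now
        return b''

proc = subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE)
proc.wait()                    # child has finished; its output sits in the pipe
print(recv_some(proc.stdout))  # b'hello\n'
```

A reader that has not produced data yet simply yields b'' instead of blocking the parent.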
Since the Popen._recv function does not wait on data to be produced
before returning a value, it may return empty bytes. Popen.asyncread
handles this issue by returning all data read over a given time interval.
The ProcessIOWrapper class uses the asyncread and asyncwrite functions to
allow a process to act like a file so that there are no blocking issues
that can arise from using the stdout and stdin file objects produced from
a subprocess.Popen call.
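A minimal sketch of the file-like idea (a hypothetical class for illustration, not the PEP's ProcessIOWrapper, and without the asynchronous machinery):

```python
import subprocess

class ProcessFile:
    """Expose a child process through a minimal file-like interface,
    so the process can stand in where a file object is expected."""
    def __init__(self, args):
        self._proc = subprocess.Popen(
            args, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def write(self, data):
        self._proc.stdin.write(data)
        self._proc.stdin.flush()

    def read(self, n=-1):
        return self._proc.stdout.read(n)

    def close(self):
        if not self._proc.stdin.closed:
            self._proc.stdin.close()
        self._proc.stdout.close()
        self._proc.wait()

f = ProcessFile(['cat'])    # cat echoes its input back
f.write(b'hello\n')
f._proc.stdin.close()       # send EOF so read() can complete
print(f.read())             # b'hello\n'
f.close()
```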
 [1] [ python-Feature Requests-1191964 ] asynchronous Subprocess
 [2] Daily Life in an Ivory Basement : /feb-07/problems-with-subprocess
 [3] How can I run an external command asynchronously from Python? - Stack
 [4] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
 [5] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
 [6] Issue 1191964: asynchronous Subprocess - Python tracker
 [7] Module to allow Asynchronous subprocess use on Windows and Posix
platforms - ActiveState Code
 [8] subprocess.rst - subprocdev - Project Hosting on Google Code
 [9] subprocdev - Project Hosting on Google Code
 [10] Python Subprocess Dev
This P.E.P. is licensed under the Open Publication License;
On Tue, Sep 8, 2009 at 22:56, Benjamin Peterson <benjamin(a)python.org> wrote:
> 2009/9/7 Eric Pruitt <eric.pruitt(a)gmail.com>:
>> Hello all,
>> I have been working on adding asynchronous I/O to the Python
>> subprocess module as part of my Google Summer of Code project. Now
>> that I have finished documenting and pruning the code, I present PEP
>> 3145 for its inclusion into the Python core code. Any and all feedback
>> on the PEP (http://www.python.org/dev/peps/pep-3145/) is appreciated.
> Hi Eric,
> One of the reasons you're not getting many responses is that you've not
> pasted the contents of the PEP in this message. Pasting it would make it
> really easy for people to comment on various sections.
> BTW, it seems like you were trying to use reST formatting with the
> text PEP layout. Double backquotes only mean something in reST.
Which I noticed since it's cited in the BeOpen license we still refer
to in LICENSE. Since pythonlabs.com itself is still up, it probably
isn't much work to make the logos.html URI work again, but I don't know
who maintains that page.
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less.
Four shall be the number of spaces thou shalt indent, and the number of thy
indenting shall be four. Eight shalt thou not indent, nor either indent thou
two, excepting that thou then proceed to four. Tabs are right out.
+The :mod:`zlib` extension is built using an included copy of the zlib
+sources unless the zlib version found on the system is too old to be
+used for the build::
Unless or if? Building with an included copy *if* the system one is too
old makes sense to me, not the contrary. Am I not seeing something?
Reading the description of the new LRU cache in the "What's New in 3.2"
document now, I got the impression that the hits/misses attributes and the
.clear() method aren't really well namespaced. When I read something like
func.clear() in code, it's not very obvious to me what happens, unless I
know that there actually *is* a cache involved, which simply has the same
name as the function. So this will likely encourage users to add a
half-way redundant comment like "clear the cache" to their code.
What about adding an intermediate namespace called "cache", so that the new
operations are available like func.cache.clear(), func.cache.hits and
func.cache.misses?
It's just a little more overhead, but I think it reads quite a bit better.
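For comparison, here is how the flat API reads in practice. The cache_info()/cache_clear() names below are the ones functools.lru_cache actually grew; the nested square.cache.clear() spelling proposed above remains hypothetical:

```python
import functools

@functools.lru_cache(maxsize=128)
def square(x):
    return x * x

square(2); square(2); square(3)

info = square.cache_info()   # namedtuple: (hits, misses, maxsize, currsize)
assert (info.hits, info.misses) == (1, 2)

# At the call site, "square.cache_clear()" only makes sense once you know
# there is a cache attached to the function -- the point made above.
square.cache_clear()
assert square.cache_info().currsize == 0
```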
I have started to correct quite a lot of issues I have with Python on
AIX, and since I had to test quite a lot of patches, I thought it would be
more convenient to set up a buildbot for that platform.
So I now have a buildbot environment with 2 slaves (AIX 5.3 and 6.1)
that builds and tests Python (branch py3k) with both gcc and xlc (the
native AIX compiler). I have 4 builders: "py3k-aix6-xlc",
"py3k-aix5-xlc", "py3k-aix6-gcc" and "py3k-aix5-gcc".
I expect to add 4 more builders for branch 2.7 in coming days.
I would like to share the results of this buildbot to the Python
community so that issues with AIX could be addressed more easily.
R. David Murray pointed me to the page on the Python wiki concerning
buildbot. It is stated there that it is possible to connect some slaves
to the official Python buildbot master.
Unfortunately, I don't think this solution is possible for me: I don't
think the security team in my company would appreciate that a server
inside our network runs some arbitrary shell commands provided by some
external source. Neither can I expose the buildbot master web interface.
Also I had to customize the buildbot rules in order to work with some
specificities of AIX (see attached master.cfg), and I can't guarantee
that this buildbot will run 24 hours a day; I may have to schedule it
only once at night for example if it consumes too much resources.
(And the results are very unstable at the moment.)
On the other hand, I could upload the build results with rsync or scp
somewhere or setup some MailNotifier if that can help.
How do you think I could share those results?
Le 15/09/2010 23:28, R. David Murray a écrit :
> R. David Murray added the comment:
> Sébastien, you could email Martin (tracker id loewis) about adding your buildbot to our unstable fleet (or even to stable if it is stable; that is, the tests normally pass and don't randomly fail). As long as you are around to help fix bugs it would be great to have an aix buildbot in our buildbot fleet.
> (NB: see also http://wiki.python.org/moin/BuildBot, which unfortunately is a bit out of date...)
> nosy: +r.david.murray
> Python tracker <report(a)bugs.python.org>
[I've got no response from python-ideas, so I am forwarding to python-dev.]
With the addition of the fixed-offset timezone class and the timezone.utc
instance, it is easy to get UTC time as an aware datetime:

>>> datetime.datetime.now(datetime.timezone.utc)
datetime.datetime(2010, 8, 3, 14, 16, 10, 670308, tzinfo=datetime.timezone.utc)
However, if you want to keep time in your local timezone, getting an
aware datetime is almost a catch-22. If you know your timezone's UTC
offset, you can do

>>> EDT = timezone(timedelta(hours=-4))
>>> datetime.now(EDT)
datetime.datetime(2010, 8, 3, 10, 20, 23, 769537, tzinfo=datetime.timezone(datetime.timedelta(-1, 72000)))

but the problem is that there is no obvious or even correct way to
find the local timezone's UTC offset.
In a comment on issue #5094 ("datetime lacks concrete tzinfo
implementation for UTC"), I proposed to address this problem in a
localtime([t]) function that would return current time (or time
corresponding to the optional datetime argument) as an aware datetime
object carrying local timezone information in a tzinfo set to an
appropriate timezone instance. This solution is attractive by its
simplicity, but there are several problems:
1. An aware datetime cannot carry all information that system
localtime() supplies in a time tuple. Specifically, the is_dst flag
is lost. This is not a problem for most applications as long as
timezone UTC offset and timezone name are available, but may be an
issue when interoperability with the time module is required.
2. Datetime's tzinfo interface was designed with the idea that
<2010-11-06 12:00 EDT> + <1 day> = <2010-11-07 12:00 EST>, not
<2010-11-07 12:00 EDT>. In other words, if I have lunch with someone
at noon (12:00 EDT) on the Saturday before the first Sunday in
November, and want to meet again "at the same time tomorrow", I mean
12:00 EST, not 24 hours later. With localtime() returning a datetime
with tzinfo set to a fixed-offset timezone, however, localtime() +
timedelta(1) will mean exactly 24 hours later, and the result will be
expressed in a timezone that is unusual for the given location.
An alternative approach is the one recommended in the python manual.
 One could implement a LocalTimezone class with utcoffset(),
tzname() and dst() extracting information from system mktime and
localtime calls. This approach has its own shortcomings:
1. While it is natural to expect automatic timezone adjustments when
adding an integral number of days to a datetime in a business setting,
the expectation is not as clear-cut when adding hours or minutes.
2. The tzinfo.utcoffset() interface that expects *standard* local time
as an argument is confusing to many users. Even the "official"
example in the python manual gets it wrong. 
3. datetime(..., tzinfo=LocalTimezone()) is ambiguous during the
"repeated hour" when the local clock is set back from DST to standard time.
As far as I can tell, the only way to resolve the last problem is to
add is_dst flag to the datetime object, which would also be the
only way to achieve full interoperability between datetime objects and
time tuples. 
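For concreteness, the manual's LocalTimezone approach boils down to something like this (a condensed sketch of the documentation recipe, not a verbatim copy):

```python
import time as _time
from datetime import datetime, timedelta, tzinfo

STDOFFSET = timedelta(seconds=-_time.timezone)
DSTOFFSET = timedelta(seconds=-_time.altzone) if _time.daylight else STDOFFSET

class LocalTimezone(tzinfo):
    """Derive local-timezone info from the system mktime/localtime calls."""

    def _isdst(self, dt):
        # Round-trip through mktime with tm_isdst = -1 ("unknown") and let
        # the C library decide -- the step that is ambiguous during the
        # repeated hour, as noted above.
        stamp = _time.mktime((dt.year, dt.month, dt.day, dt.hour,
                              dt.minute, dt.second, dt.weekday(), 0, -1))
        return _time.localtime(stamp).tm_isdst > 0

    def utcoffset(self, dt):
        return DSTOFFSET if self._isdst(dt) else STDOFFSET

    def dst(self, dt):
        return DSTOFFSET - STDOFFSET if self._isdst(dt) else timedelta(0)

    def tzname(self, dt):
        return _time.tzname[self._isdst(dt)]

now = datetime.now(LocalTimezone())
```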
The traditional answer to calls for improvement of timezone support in the
datetime module has been: "this is up to 3rd parties to implement."
Unfortunately, stdlib is asking 3rd parties to implement an impossible
interface without giving access to the necessary data. The
impossibility comes from the requirement that dst() method should find
out whether local time represents DST or standard time while there is
an hour each year when the same local time can be either. The missing
data is the history of system UTC offset changes. The time
module only gives access to the current UTC offset.
My preference is to implement the first alternative - localtime([t])
returning aware datetime with fixed offset timezone. This will solve
the problem of python's lack of access to the universally available
system facilities that are necessary to implement any kind of aware
local time support.
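For what it's worth, something very close to this is what Python 3.3 eventually provided: astimezone() with no argument converts to local time and attaches a fixed-offset timezone instance, e.g.:

```python
from datetime import datetime, timezone

# astimezone() with no argument (Python 3.3+) converts to local time and
# attaches a fixed-offset timezone -- essentially the localtime([t])
# behavior proposed above.
local = datetime.now(timezone.utc).astimezone()
assert local.tzinfo is not None
assert local.utcoffset() is not None
print(local.tzname())   # e.g. 'EST' or 'EDT' -- platform dependent
```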
I wonder whether the situation with relative imports in packages has
improved in Python 3k, or whether we are still doomed to a chain of hacks?
My user story:
I am currently debugging a project which consists of many modules in one
package. Each module has tests or other useful stuff for debugging in its
main section, but it is a disaster to use, because I can't just execute
the module file and expect it to find its relatives. All imports are like:
from spyderlib.config import get_icon
from spyderlib.utils.qthelpers import translate, add_actions, create_action
PEP 328 http://www.python.org/dev/peps/pep-0328/ proposes:
from ... import config
from ..utils.qthelpers import translate, add_actions, create_action
But this doesn't work, and I couldn't find any short user-level
explanation of why it is not possible to make this work, at least in
Py3k, without additional magic.
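One workaround that does exist (the spyderlib paths here are just the example from above): run the file as a module of its package with the -m switch, so the relative-import machinery has a package context:

```shell
# Executing the file directly gives it no package context, so the
# relative imports in PEP 328 style fail with an import error:
python spyderlib/utils/qthelpers.py

# Executing it as a module of its package keeps the package context,
# and the relative imports resolve:
python -m spyderlib.utils.qthelpers
```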
I'm curious as to why, with a file called "Foo.txt" on a
case-discriminating but case-insensitive filesystem,
os.path.normcase('FoO.txt') will return "foo.txt" rather than "Foo.txt"?
Yes, I know the behaviour is documented, but I'm wondering if anyone can
remember the rationale for that behaviour?
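The behavior itself is easy to demonstrate with the platform-specific path modules (ntpath implements the Windows rules, posixpath the POSIX ones):

```python
import ntpath
import posixpath

# On Windows, normcase lowercases the path and flips slashes to backslashes.
assert ntpath.normcase('FoO.txt') == 'foo.txt'
assert ntpath.normcase('Dir/FoO.txt') == 'dir\\foo.txt'

# On POSIX, normcase is the identity function -- the case is preserved.
assert posixpath.normcase('FoO.txt') == 'FoO.txt'
```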
Simplistix - Content Management, Batch Processing & Python Consulting