I've received some enthusiastic emails from someone who wants to
revive restricted mode. He started out with a bunch of patches to the
CPython runtime using ctypes, which he attached to an App Engine bug:
http://code.google.com/p/googleappengine/issues/detail?id=671
Based on his code (the file secure.py is all you need, included in
secure.tar.gz) it seems he believes the only security leaks are
__subclasses__, gi_frame and gi_code. (I have since convinced him that
if we add "restricted" guards to these attributes, he doesn't need the
functions added to sys.)
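For context, here is a minimal sketch of the kind of escape __subclasses__
enables, assuming a sandbox that merely strips the dangerous builtins from
the namespace (an illustration, not one of the historical exploits):

    # From any literal we can walk up to object and enumerate every loaded
    # class; on 2.x the file type hands back I/O even if open() was removed.
    for cls in ().__class__.__bases__[0].__subclasses__():
        if cls.__name__ == 'file':
            print(cls('/etc/passwd').read())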
I don't recall the exploits that Samuele once posted that caused the
death of rexec.py -- does anyone recall, or have a pointer to the
threads?
--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Alright, I will re-submit with the contents pasted. I never use double
backquotes as I think them rather ugly; that is the work of an editor
or some automated program in the chain. Plus, it also messed up my
line formatting and now I have lines with one word on them... Anyway,
the contents of PEP 3145:
PEP: 3145
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
Content-Type: text/plain
Created: 04-Aug-2009
Python-Version: 3.2
Abstract:
In its present form, the subprocess.Popen implementation is prone to
deadlocks and to blocking the parent Python script while waiting on data
from the child process.
Motivation:
A search for "python asynchronous subprocess" will turn up numerous
accounts of people wanting to execute a child process and communicate with
it from time to time, reading only the data that is available instead of
blocking to wait for the program to produce data [1] [2] [3]. The current
behavior of the subprocess module is that when a user sends or receives
data via the stdin, stderr and stdout file objects, deadlocks are common
and documented [4] [5]. While communicate() can be used to alleviate some of
the buffering issues, it will still cause the parent process to block while
attempting to read data when none is available to be read from the child
process.
Rationale:
There is a documented need for asynchronous, non-blocking functionality in
subprocess.Popen [6] [7] [2] [3]. Inclusion of the proposed code would improve
the utility of the Python standard library on both Unix-based and Windows
builds of Python. Practically every I/O object in Python has a file-like
wrapper of some sort. Sockets already act as such, and for strings there is
StringIO. Popen can be made to act like a file by simply using the methods
attached to the subprocess.Popen.stderr, stdout and stdin file-like objects.
But when using the read and write methods of those objects, you do not get
the benefit of asynchronous I/O. In the proposed solution, a wrapper class
exposes the asynchronous methods so that the process mimics a file object.
Reference Implementation:
I have been maintaining a Google Code repository that contains all of my
changes, including tests and documentation [9], as well as a blog detailing
the problems I have come across in the development process [10].
I have been working on implementing non-blocking asynchronous I/O in the
subprocess module, as well as a wrapper class for subprocess.Popen
that makes it so that an executed process can take the place of a file by
duplicating all of the methods and attributes that file objects have.
There are two base functions that have been added to the subprocess.Popen
class: Popen.send and Popen._recv, each with two separate implementations,
one for Windows and one for Unix-based systems. The Windows
implementation uses ctypes to access the functions needed to control pipes
in the kernel32 DLL in an asynchronous manner. On Unix-based systems,
the Python interface for file control (the fcntl module) serves the same
purpose. The
different implementations of Popen.send and Popen._recv have identical
arguments to make code that uses these functions work across multiple
platforms.
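As a rough sketch of the Unix side (an illustration of the approach, not an
excerpt from the reference code; the helper name is made up), a non-blocking
read can be performed by setting O_NONBLOCK on the pipe with fcntl before
reading:

    import fcntl, os

    def _nonblocking_read(pipe, maxsize=1024):
        # Switch the pipe to non-blocking mode, then read whatever is there.
        fd = pipe.fileno()
        flags = fcntl.fcntl(fd, fcntl.F_GETFL)
        fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
        try:
            return os.read(fd, maxsize)
        except OSError:
            # EAGAIN: the child has produced no data yet.
            return b''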
The Popen._recv function requires that the pipe name be passed as an
argument, so the Popen.recv function exists to select stdout as the pipe
for Popen._recv by default, and Popen.recv_err selects stderr as the pipe
by default. "Popen.recv" and "Popen.recv_err"
are much easier to read and understand than "Popen._recv('stdout' ..." and
"Popen._recv('stderr' ..." respectively.
Since the Popen._recv function does not wait for data to be produced
before returning a value, it may return empty bytes. Popen.asyncread
handles this issue by returning all data read over a given time
interval.
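A hedged sketch of how these calls are meant to be used (the exact
signatures are not spelled out in this text and are assumed here for
illustration):

    from subprocess import Popen, PIPE

    p = Popen(['cat'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
    p.send(b'hello\n')    # hand data to the child without blocking the parent
    chunk = p.recv()      # whatever stdout has ready right now, possibly b''
    errs = p.recv_err()   # the same for stderr
    data = p.asyncread()  # all data readable over a given time interval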
The ProcessIOWrapper class uses the asyncread and asyncwrite functions to
allow a process to act like a file so that there are no blocking issues
that can arise from using the stdout and stdin file objects produced from
a subprocess.Popen call.
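For example (the constructor arguments below are an assumption; only the
file-like behavior is described here):

    wrapper = ProcessIOWrapper(Popen(['cat'], stdin=PIPE, stdout=PIPE))
    wrapper.write(b'spam\n')   # like file.write, but does not block the parent
    print(wrapper.read())      # like file.read, returns the data available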
References:
[1] [ python-Feature Requests-1191964 ] asynchronous Subprocess
http://mail.python.org/pipermail/python-bugs-list/2006-December/036524.html
[2] Daily Life in an Ivory Basement : /feb-07/problems-with-subprocess
http://ivory.idyll.org/blog/feb-07/problems-with-subprocess
[3] How can I run an external command asynchronously from Python? - Stack
Overflow
http://stackoverflow.com/questions/636561/how-can-i-run-an-external-command-asynchronously-from-python
[4] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
http://docs.python.org/library/subprocess.html#subprocess.Popen.wait
[5] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
http://docs.python.org/library/subprocess.html#subprocess.Popen.kill
[6] Issue 1191964: asynchronous Subprocess - Python tracker
http://bugs.python.org/issue1191964
[7] Module to allow Asynchronous subprocess use on Windows and Posix
platforms - ActiveState Code
http://code.activestate.com/recipes/440554/
[8] subprocess.rst - subprocdev - Project Hosting on Google Code
http://code.google.com/p/subprocdev/source/browse/doc/subprocess.rst?spec=s…
[9] subprocdev - Project Hosting on Google Code
http://code.google.com/p/subprocdev
[10] Python Subprocess Dev
http://subdev.blogspot.com/
Copyright:
This PEP is licensed under the Open Publication License;
http://www.opencontent.org/openpub/.
On Tue, Sep 8, 2009 at 22:56, Benjamin Peterson <benjamin(a)python.org> wrote:
> 2009/9/7 Eric Pruitt <eric.pruitt(a)gmail.com>:
>> Hello all,
>>
>> I have been working on adding asynchronous I/O to the Python
>> subprocess module as part of my Google Summer of Code project. Now
>> that I have finished documenting and pruning the code, I present PEP
>> 3145 for its inclusion into the Python core code. Any and all feedback
>> on the PEP (http://www.python.org/dev/peps/pep-3145/) is appreciated.
>
> Hi Eric,
> One of the reasons you're not getting many responses is that you've not
> pasted the contents of the PEP in this message. Doing so would make it much
> easier for people to comment on various sections.
>
> BTW, it seems like you were trying to use reST formatting with the
> text PEP layout. Double backquotes only mean something in reST.
>
>
> --
> Regards,
> Benjamin
>
Hello,
There is a need for the default Python 2 install to place a symlink at
/usr/bin/python2 that points to /usr/bin/python, or for the documentation to
recommend that packagers ensure that python2 is defined. Also, all
documentation should be changed to recommend that "#!/usr/bin/env python2"
be used as the shebang for Python 2 scripts.
This is needed because some distributions (Arch Linux, in particular) point
/usr/bin/python to /usr/bin/python3, while others (including Slackware,
Debian, the BSDs, and probably more) do not even define the python2 command.
This means that a script has no way of achieving cross-platform
compatibility. The point at which many distributions will begin aliasing
/usr/bin/python to /usr/bin/python3 is coming soon, and for the next couple of
years, it would be best to use a python2 or python3 shebang in all scripts,
making no assumptions about plain python, which should only be invoked
interactively. This email from about 3 years ago seems relevant:
http://mail.python.org/pipermail/python-3000/2008-March/012421.html
Again, this issue needs to be addressed by the Python developers themselves
so that different *nix distributions will handle it consistently, allowing
Python scripts to continue to be cross-platform.
Thanks,
Kerrick Staley
I've posted a very preliminary Python 3.3 release schedule as PEP 398.
The final release is set to be about 18 months after 3.2 final, which
is in August 2012.
For 3.3, I'd like to revive the tradition of listing planned large-scale
changes in the PEP. Please let me know if you plan any such changes,
at any time. (If they aren't codified in PEP form, we should think about
whether they should be.)
The "Candidate PEPs" I listed are those open PEPs that in my opinion have
the highest chance of being accepted and implemented for 3.3. The list is by
no means binding.
cheers,
Georg
Greetings!
I'm not sure where the best place is to ask this question, so I'll start
here -- feel free to redirect me if necessary.
I would like to have some software to keep track of bugs, to-do's,
ideas, etc., etc. -- you know, an issue tracker! Naturally I thought of
the one we use to track Python. Is it available? Is it written in
Python? Are there any others that are recommended?
Thanks!
~Ethan~
Hi,
Currently the 2to3 page at http://wiki.python.org/moin/2to3 lists
http://svn.python.org/view/sandbox/trunk/2to3 as the repository for the 2to3
tool. There is also an outdated repository at http://hg.python.org/,
and the page says that the code is finally integrated into CPython 2.6
- you can see it at
http://hg.python.org/cpython/file/default/Lib/lib2to3. So, which
version is more up-to-date?
In the svn repository there is a HACKING guide advising the use of the
find_pattern.py script for writing new fixers. However, there is no
find_pattern.py in the CPython repository, no HACKING guide, and no
documentation about how to write fixers or any description of the PATTERN
format. Did I miss something?
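For reference, this is the rough shape of a fixer as far as I can tell from
reading the existing ones in Lib/lib2to3/fixes (the fixer name, pattern and
replacement below are made up for illustration):

    from lib2to3 import fixer_base
    from lib2to3.fixer_util import Name

    class FixExample(fixer_base.BaseFix):
        # PATTERN uses the pattern-matching mini-language; this one matches
        # any NAME leaf whose value is 'apply'.
        PATTERN = "'apply'"

        def transform(self, node, results):
            # The returned node replaces the matched node in the tree.
            return Name("new_apply", prefix=node.prefix)

Without find_pattern.py or a PATTERN reference, anything beyond this is
guesswork.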
--
anatoly t.
Ubuntu 11.04 added support for multiarch libraries:
https://wiki.ubuntu.com/MultiarchSpec
http://wiki.debian.org/ReleaseGoals/MultiArch
At the moment, I don't care about issue 1294959, which I think addresses
building multiarch flavors of Python:
http://bugs.python.org/issue1294959
I have a much more short-term concern, which is being able to build Python
from source *on* a multiarch Debian/Ubuntu:
http://bugs.python.org/issue11715
The problem is that without this patch (or something like it), several of the
extension modules do not build because setup.py does not search the
directories in which the third party .so files live. The patch in the tracker
is fairly straightforward and should be robust enough for platforms without
dpkg-architecture(1). It's adapted from the patch in the Ubuntu source
package.
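To make the idea concrete, here is a rough sketch of the approach (an
illustration only, not the patch from the tracker; the helper name is made
up):

    import subprocess

    def add_multiarch_paths(lib_dirs, inc_dirs):
        # Ask dpkg-architecture for the multiarch triplet, e.g.
        # 'x86_64-linux-gnu', and add the matching directories to the
        # paths that setup.py searches for libraries and headers.
        try:
            triplet = subprocess.check_output(
                ['dpkg-architecture', '-qDEB_HOST_MULTIARCH'])
        except (OSError, subprocess.CalledProcessError):
            return  # not a Debian/Ubuntu multiarch system
        triplet = triplet.strip().decode('ascii')
        lib_dirs.append('/usr/lib/' + triplet)
        inc_dirs.append('/usr/include/' + triplet)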
I would like to apply this patch (or its moral equivalent) to all active,
affected branches of Python, meaning 2.5 through 2.7, and 3.1 through 3.3, as
soon as possible. Without this, it will be very difficult for anyone on
future Ubuntu or Debian releases to build Python. Since it's not a new
feature, but just a minor fix to the build process, I think it should be okay
to backport.
Please comment here or in the tracker for issue 11715.
Cheers,
-Barry
Hi,
I'm testing my faulthandler repository on the custom buildbots; here are
some remarks and issues.
The form still refers to SVN ('Branch to build' is relative to
http://svn.python.org/projects/python.), whereas the branch is now relative
to hg.python.org/.
I cannot write "#" in the branch field to specify... the branch (only
the repository). If the branch contains "#", the request appears to be
ignored (without any warning/error). I merged my faulthandler branch
into the default branch (in my features/faulthandler branch).
I don't understand the meaning of the "project" field. Is it maybe
something specific to Subversion?
What are the 3 optional properties?
If the branch doesn't end with a slash (e.g. "features/faulthandler"), the
request is ignored (without any warning/error).
I canceled a build on a Windows buildbot during the "tests" step using
the [Cancel] button, but it failed to kill the process:
http://www.python.org/dev/buildbot/all/builders/x86%20Windows7%20custom/builds/2/steps/test/logs/stdio
-----------
command interrupted, killing pid 2168
SIGKILL failed to kill process
using fake rc=-1
program finished with exit code -1
-----------
To test my faulthandler feature branch, the correct parameters are:
--
Name: haypo
Reason: test faulthandler
Branch: features/faulthandler/
Revision: tip
Repository: features/faulthandler
(leave the project and the 6 property fields empty)
--
The repository field looks like a duplicate of the branch field. It would
be better to use "default" as the branch and "features/faulthandler" as the
repository.
It would be nice to have error messages.
Victor
Hi,
> changeset: 68921:11dc3f270594
> user: Thomas Wouters <thomas(a)python.org>
> date: Fri Mar 25 11:42:37 2011 +0100
> summary:
> Revert the Lib/test/test_bigmem.py changes from commit 17891566a478 (and a
> few other assertEqual tests that snuck in), and expand the docstrings and
> comments explaining why and how these tests are supposed to work.
Your commit message does not explain why you reverted the changes. The
specific assert* methods give more useful messages than assertEqual in
case of failure.
Regards
The tracker was recently changed so that when I click on a link to a
tracker page, the page is properly displayed, but then a fraction of a
second later it blinks and redisplays with the edit form hidden. This is so
obnoxious to me that I no longer want to visit the tracker. Then I have
to find and click the button to get back the edit form that I nearly
always want to see, as I often make changes. All this to compress the
page by half a screen, which makes almost no difference once one grabs
the scrollbar anyway.
If someone actually considers this a desired feature, after using it,
then please add a field on the profile page to select autofolding or
not. Also, there should be a button to fold as well as one to unfold.
--
Terry Jan Reedy