I've received some enthusiastic emails from someone who wants to
revive restricted mode. He started out with a bunch of patches to the
CPython runtime using ctypes, which he attached to an App Engine bug:
http://code.google.com/p/googleappengine/issues/detail?id=671
Based on his code (the file secure.py is all you need, included in
secure.tar.gz) it seems he believes the only security leaks are
__subclasses__, gi_frame and gi_code. (I have since convinced him that
if we add "restricted" guards to these attributes, he doesn't need the
functions added to sys.)
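For anyone who needs a reminder, the classic __subclasses__ escape looks
something like this (a generic illustration, not code from his patch):

    # From any harmless object, walk back up to object and enumerate
    # every class loaded in the interpreter, including classes that can
    # reach files or interpreter internals.
    for cls in ().__class__.__bases__[0].__subclasses__():
        print(cls)

gi_frame and gi_code similarly hand back frame and code objects directly.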
I don't recall the exploits that Samuele once posted that caused the
death of rexec.py -- does anyone recall, or have a pointer to the
threads?
--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Alright, I will re-submit with the contents pasted. I never use double
backquotes as I think them rather ugly; that is the work of an editor
or some automated program in the chain. Plus, it also messed up my
line formatting and now I have lines with one word on them... Anyway,
the contents of PEP 3145:
PEP: 3145
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
Content-Type: text/plain
Created: 04-Aug-2009
Python-Version: 3.2
Abstract:
In its present form, the subprocess.Popen implementation is prone to
deadlocking and to blocking the parent Python script while waiting for
data from the child process.
Motivation:
A search for "python asynchronous subprocess" will turn up numerous
accounts of people wanting to execute a child process and communicate with
it from time to time, reading only the data that is available instead of
blocking while waiting for the program to produce data [1] [2] [3]. The
current behavior of the subprocess module is that when a user sends or
receives data via the stdin, stderr and stdout file objects, deadlocks are
common and documented [4] [5]. While communicate() can be used to alleviate
some of the buffering issues, it will still cause the parent process to
block while attempting to read data when none is available to be read from
the child process.
Rationale:
There is a documented need for asynchronous, non-blocking functionality in
subprocess.Popen [6] [7] [2] [3]. Inclusion of this functionality would
improve the utility of the Python standard library on both Unix-based and
Windows builds of Python. Practically every I/O object in Python has a
file-like wrapper of some sort. Sockets already act as such, and for
strings there is StringIO. Popen can be made to act like a file by simply
using the methods attached to the subprocess.Popen.stderr, stdout and
stdin file-like objects. But when using the read and write methods of
those objects, you do not have the benefit of asynchronous I/O. In the
proposed solution, the wrapper exposes the asynchronous methods so as to
mimic a file object.
Reference Implementation:
I have been maintaining a Google Code repository that contains all of my
changes, including tests and documentation [9], as well as a blog detailing
the problems I have come across in the development process [10].
I have been working on implementing non-blocking asynchronous I/O in the
subprocess.Popen module as well as a wrapper class for subprocess.Popen
that makes it so that an executed process can take the place of a file by
duplicating all of the methods and attributes that file objects have.
There are two base functions that have been added to the subprocess.Popen
class: Popen.send and Popen._recv, each with two separate implementations,
one for Windows and one for Unix-based systems. The Windows implementation
uses ctypes to access the pipe-handling functions in the kernel32 DLL in an
asynchronous manner. On Unix-based systems, the Python interface for file
control serves the same purpose. The different implementations of
Popen.send and Popen._recv have identical arguments so that code using
these functions works across multiple platforms.
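On Unix, the file-control approach amounts to something like the following
(an illustrative sketch only; the reference implementation's exact code may
differ):

    import fcntl
    import os

    def _set_nonblocking(pipe):
        # Put the pipe's file descriptor into non-blocking mode so a
        # read returns immediately with whatever data is available.
        fd = pipe.fileno()
        flags = fcntl.fcntl(fd, fcntl.F_GETFL)
        fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)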
The Popen._recv function requires the pipe name to be passed as an
argument, so the Popen.recv function exists to select stdout as the pipe
for Popen._recv by default, and Popen.recv_err selects stderr as the pipe
by default. "Popen.recv" and "Popen.recv_err" are much easier to read and
understand than "Popen._recv('stdout' ..." and "Popen._recv('stderr' ..."
respectively.
Since the Popen._recv function does not wait on data to be produced
before returning a value, it may return empty bytes. Popen.asyncread
handles this issue by returning all data read over a given time
interval.
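For illustration, typical use of the proposed calls might look like the
following (a sketch only; the method names come from the reference
implementation, and details such as the timeout argument name are
assumptions rather than a settled API):

    import subprocess
    import time

    proc = subprocess.Popen(['some_program'],
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    proc.send(b'input\n')        # write to stdin without blocking
    time.sleep(0.5)
    out = proc.recv()            # read stdout; may return b'' if no
                                 # data is available yet
    err = proc.recv_err()        # same, but reads stderr
    more = proc.asyncread(1.0)   # collect whatever arrives within
                                 # the given interval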
The ProcessIOWrapper class uses the asyncread and asyncwrite functions to
allow a process to act like a file so that there are no blocking issues
that can arise from using the stdout and stdin file objects produced from
a subprocess.Popen call.
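The idea behind the wrapper can be sketched as follows (a simplified
illustration of the approach, not the reference implementation itself;
the asyncread/asyncwrite names come from the PEP, while the constructor
and timeout parameter are assumptions):

    class FileLikeProcess:
        # Minimal sketch: present a Popen object that has asyncread and
        # asyncwrite methods through a file-like read/write interface.
        def __init__(self, popen):
            self._popen = popen

        def read(self, timeout=0.1):
            # Return whatever the child produced within the interval,
            # without blocking indefinitely.
            return self._popen.asyncread(timeout)

        def write(self, data):
            return self._popen.asyncwrite(data)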
References:
[1] [ python-Feature Requests-1191964 ] asynchronous Subprocess
http://mail.python.org/pipermail/python-bugs-list/2006-December/036524.html
[2] Daily Life in an Ivory Basement : /feb-07/problems-with-subprocess
http://ivory.idyll.org/blog/feb-07/problems-with-subprocess
[3] How can I run an external command asynchronously from Python? - Stack Overflow
http://stackoverflow.com/questions/636561/how-can-i-run-an-external-command-asynchronously-from-python
[4] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
http://docs.python.org/library/subprocess.html#subprocess.Popen.wait
[5] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
http://docs.python.org/library/subprocess.html#subprocess.Popen.kill
[6] Issue 1191964: asynchronous Subprocess - Python tracker
http://bugs.python.org/issue1191964
[7] Module to allow Asynchronous subprocess use on Windows and Posix
platforms - ActiveState Code
http://code.activestate.com/recipes/440554/
[8] subprocess.rst - subprocdev - Project Hosting on Google Code
http://code.google.com/p/subprocdev/source/browse/doc/subprocess.rst?spec=s…
[9] subprocdev - Project Hosting on Google Code
http://code.google.com/p/subprocdev
[10] Python Subprocess Dev
http://subdev.blogspot.com/
Copyright:
This PEP is licensed under the Open Publication License:
http://www.opencontent.org/openpub/.
On Tue, Sep 8, 2009 at 22:56, Benjamin Peterson <benjamin(a)python.org> wrote:
> 2009/9/7 Eric Pruitt <eric.pruitt(a)gmail.com>:
>> Hello all,
>>
>> I have been working on adding asynchronous I/O to the Python
>> subprocess module as part of my Google Summer of Code project. Now
>> that I have finished documenting and pruning the code, I present PEP
>> 3145 for its inclusion into the Python core code. Any and all feedback
>> on the PEP (http://www.python.org/dev/peps/pep-3145/) is appreciated.
>
> Hi Eric,
> One of the reasons you're not getting many responses is that you've not
> pasted the contents of the PEP in this message. Pasting the text makes it
> really easy for people to comment on various sections.
>
> BTW, it seems like you were trying to use reST formatting with the
> text PEP layout. Double backquotes only mean something in reST.
>
>
> --
> Regards,
> Benjamin
>
Hi,
Python code should not depend upon the ordering of items in a dict.
Unfortunately it seems that a number of tests in the standard library do
just that.
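For example, a test that bakes the current iteration order or repr() of a
small dict into its expected output is fragile (an illustrative sketch, not
taken from any particular failing test):

    d = {'one': 1, 'two': 2, 'three': 3}
    # Fragile: depends on PyDict_MINSIZE, the collision resolution
    # function and the string hash, all implementation details.
    assert repr(d) == "{'one': 1, 'two': 2, 'three': 3}"
    # Robust: compare contents or a sorted view instead.
    assert d == {'three': 3, 'two': 2, 'one': 1}
    assert sorted(d) == ['one', 'three', 'two']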
Changing PyDict_MINSIZE from 8 to either 4 or 16 causes the following
tests to fail:
test_dis test_email test_inspect test_nntplib test_packaging
test_plistlib test_pprint test_symtable test_trace
test_sys also fails, but this is a legitimate failure in sys.getsizeof().
Changing the collision resolution function from f(n) = 5n + 1 to
f(n) = n + 1 results in the same failures, except for test_packaging and
test_symtable which pass.
Finally, changing the seed in unicode_hash() from (implicit) 0 to an
arbitrary value (12345678) causes the above tests to fail plus:
test_json test_set test_ttk_textonly test_urllib test_urlparse
I think this is a real issue as the unicode_hash() function is likely to
change soon due to http://bugs.python.org/issue13703.
Should I:
1. Submit one big bug report?
2. Submit a bug report for each "failing" test separately?
3. Ignore it, since the tests only fail when I start messing about?
Cheers,
Mark.
A brief status update on PEP 405 (built-in virtualenv) and the open issues:
1. As mentioned in the updated version of the language summit notes,
Nick Coghlan has agreed to pronounce on the PEP.
2. Ned Deily discovered at the PyCon sprints that the current reference
implementation does not work with an OS X framework build of Python.
We're still working to discover the reason for that and determine
possible fixes.
3. If anyone knows of a pair of packages where both need to build
compiled extensions, and the compilation of the second depends on header
files from the first, that would be helpful to me in testing the other
open issue (installation of header files). (I thought numpy and scipy
might fit this bill, but I'm currently not able to install numpy at all
under Python 3 using pysetup, easy_install, or pip.)
Thanks,
Carl
On 03/15/2012 03:02 PM, Lindberg, Van wrote:
> FYI, the location of the tcl/tk libraries does not appear to be set in
> the virtualenv, even if tkinter is installed and working in the main
> Python installation. As a result, tk-based apps will not run from a
> virtualenv.
Thanks for the report! I've added this to the list of open issues in the
PEP and I'll look into it.
Carl
In reviewing a fix for the metaclass calculation in __build_class__
[1], I realised that PEP 3115 poses a potential problem for the common
practice of using "type(name, bases, ns)" for dynamic class creation.
Specifically, if one of the base classes has a metaclass with a
significant __prepare__() method, then the current idiom will do the
wrong thing (and most likely fail as a result), since "ns" will
probably be an ordinary dictionary instead of whatever __prepare__()
would have returned.
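To make the failure mode concrete (a contrived metaclass, not code from
the tracker issue):

    class Meta(type):
        @classmethod
        def __prepare__(mcl, name, bases, **kwds):
            # Hand out a namespace seeded with extra state.
            return {'_registry_': []}
        def __new__(mcl, name, bases, ns):
            assert '_registry_' in ns   # relies on __prepare__ running
            return super().__new__(mcl, name, bases, dict(ns))

    class Base(metaclass=Meta):   # fine: the class statement machinery
        pass                      # calls Meta.__prepare__ first

    cls = type('Dynamic', (Base,), {})   # fails: __prepare__ never runs,
                                         # so Meta.__new__ sees a plain dict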
Initially I was going to suggest making __build_class__ part of the
language definition rather than a CPython implementation detail, but
then I realised that various CPython specific elements in its
signature made that a bad idea.
Instead, I'm thinking along the lines of an
"operator.prepare(metaclass, bases)" function that does the metaclass
calculation dance, invoking __prepare__() and returning the result if
it exists, otherwise returning an ordinary dict. Under the hood we
would refactor this so that operator.prepare and __build_class__ were
using a shared implementation of the functionality at the C level - it
may even be advisable to expose that implementation via the C API as
PyType_PrepareNamespace().
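In pure Python terms, the behaviour I have in mind is roughly the following
(just a sketch; the extra "name" parameter is something __prepare__ itself
needs, so the final signature would have to account for it somehow):

    def prepare(metaclass, bases, name='<dynamic>', **kwds):
        # The metaclass calculation dance: find the most derived
        # metaclass among the hint and the metaclasses of the bases.
        winner = metaclass
        for base in bases:
            base_meta = type(base)
            if issubclass(winner, base_meta):
                continue
            if issubclass(base_meta, winner):
                winner = base_meta
                continue
            raise TypeError("metaclass conflict")
        # Invoke __prepare__ if the winner defines it, otherwise fall
        # back to an ordinary dict.
        if hasattr(winner, '__prepare__'):
            return winner.__prepare__(name, bases, **kwds)
        return {}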
The correct idiom for dynamic type creation in a PEP 3115 world would then be:
from operator import prepare
cls = type(name, bases, prepare(type, bases))
Thoughts?
Cheers,
Nick.
[1] http://bugs.python.org/issue1294232
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
I'm looking into getting a RHEL6 system set up to add to the buildbot
fleet. The info already on the wiki [1] is pretty helpful, but does
anyone have any suggestions on appropriate CPU/memory/disk
allocations?
Cheers,
Nick.
[1] http://wiki.python.org/moin/BuildBot
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
Alex Leach <albl500(a)york.ac.uk> wrote:
> Can you translate Intel's suggestion into a patch for ffi64?
Well probably, but this really belongs on the bug tracker. Also, as I said,
there are many issues with higher priority.
Stefan Krah
Why does the HG cpython repo contain .{bzr,git}ignore files at all?
IMHO, all .*ignore files should be strictly repository-dependent and
they should not be mixed together.
Even worse, .{bzr,git}ignore are (understandably) poorly maintained, so
in order to get an equivalent of .hgignore in .gitignore, one has to
apply the attached patch.
Best,
Matěj
--
http://www.ceplovi.cz/matej/, Jabber: mcepl<at>ceplovi.cz
GPG Finger: 89EF 4BC6 288A BF43 1BAB 25C3 E09F EF25 D964 84AC
We understand our competition isn't with Caldera or SuSE--our
competition is with Microsoft.
-- Bob Young of Red Hat
http://www.linuxjournal.com/article/3553