I've received some enthusiastic emails from someone who wants to
revive restricted mode. He started out with a bunch of patches to the
CPython runtime using ctypes, which he attached to an App Engine bug:
http://code.google.com/p/googleappengine/issues/detail?id=671
Based on his code (the file secure.py is all you need, included in
secure.tar.gz) it seems he believes the only security leaks are
__subclasses__, gi_frame and gi_code. (I have since convinced him that
if we add "restricted" guards to these attributes, he doesn't need the
functions added to sys.)
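(For context, the classic escape through __subclasses__ goes roughly
like this; a minimal sketch, not his exact code:

    # From any innocuous object, climb to object and enumerate every
    # class in the interpreter; some of those reach file or os machinery.
    leak = ().__class__.__base__.__subclasses__()

gi_frame and gi_code are similar: a generator hands back frame and code
objects that a restricted environment never meant to expose.)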
I don't recall the exploits that Samuele once posted that caused the
death of rexec.py -- does anyone recall, or have a pointer to the
threads?
--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Alright, I will re-submit with the contents pasted. I never use double
backquotes as I think them rather ugly; that is the work of an editor
or some automated program in the chain. It also messed up my line
formatting, and now I have lines with just one word on them... Anyway,
the contents of PEP 3145:
PEP: 3145
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
Content-Type: text/plain
Created: 04-Aug-2009
Python-Version: 3.2
Abstract:
In its present form, the subprocess.Popen implementation is prone to
deadlocking, and to blocking the parent Python script while waiting on
data from the child process.
Motivation:
A search for "python asynchronous subprocess" will turn up numerous
accounts of people wanting to execute a child process and communicate with
it from time to time reading only the data that is available instead of
blocking to wait for the program to produce data [1] [2] [3]. The current
behavior of the subprocess module is that when a user sends or receives
data via the stdin, stderr and stdout file objects, dead locks are common
and documented [4] [5]. While communicate can be used to alleviate some of
the buffering issues, it will still cause the parent process to block while
attempting to read data when none is available to be read from the child
process.
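A sketch of the pattern in question, using only the existing, documented
API (illustrative only):

    import subprocess

    p = subprocess.Popen(['cat'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
    # communicate() does not return until the child exits, even when
    # the caller only wants whatever output happens to be ready now.
    out, err = p.communicate(b'some input')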
Rationale:
There is a documented need for asynchronous, non-blocking functionality in
subprocess.Popen [6] [7] [2] [3]. Inclusion of the code would improve the
utility of the Python standard library on both Unix-based and Windows
builds of Python. Practically every I/O object in Python has a file-like
wrapper of some sort. Sockets already act as such, and for strings there
is StringIO. Popen can be made to act like a file by simply using the
methods attached to the subprocess.Popen.stderr, stdout and stdin
file-like objects. But when using the read and write methods of those
objects, you do not have the benefit of asynchronous I/O. In the proposed
solution, the wrapper wraps the asynchronous methods to mimic a file
object.
Reference Implementation:
I have been maintaining a Google Code repository that contains all of my
changes, including tests and documentation [9], as well as a blog detailing
the problems I have come across in the development process [10].
I have been working on implementing non-blocking asynchronous I/O in the
subprocess.Popen module as well as a wrapper class for subprocess.Popen
that makes it so that an executed process can take the place of a file by
duplicating all of the methods and attributes that file objects have.
There are two base functions that have been added to the subprocess.Popen
class: Popen.send and Popen._recv, each with two separate implementations,
one for Windows and one for Unix-based systems. The Windows
implementation uses ctypes to access the functions needed to control pipes
in the kernel32 DLL in an asynchronous manner. On Unix-based systems,
the Python interface for file control serves the same purpose. The
different implementations of Popen.send and Popen._recv have identical
arguments to make code that uses these functions work across multiple
platforms.
The Popen._recv function requires the pipe name to be passed as an
argument, so the Popen.recv function exists to select stdout as the pipe
for Popen._recv by default; likewise, Popen.recv_err selects stderr as the
pipe by default. "Popen.recv" and "Popen.recv_err" are much easier to read
and understand than "Popen._recv('stdout' ..." and "Popen._recv('stderr'
..." respectively.
Since the Popen._recv function does not wait for data to be produced
before returning a value, it may return empty bytes. Popen.asyncread
handles this issue by returning all data read over a given time
interval.
The ProcessIOWrapper class uses the asyncread and asyncwrite functions to
allow a process to act like a file so that there are no blocking issues
that can arise from using the stdout and stdin file objects produced from
a subprocess.Popen call.
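To make the intended usage concrete, here is a rough sketch of how the
proposed calls might look (the timeout argument to asyncread is an
assumption based on the description above, not a settled signature):

    import subprocess

    p = subprocess.Popen(['some_program'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    p.send(b'input\n')                # write without blocking the parent
    data = p.recv()                   # read stdout; may return b'' at once
    errs = p.recv_err()               # read stderr; may return b'' at once
    more = p.asyncread(timeout=1.0)   # all output arriving within 1 second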
References:
[1] [ python-Feature Requests-1191964 ] asynchronous Subprocess
http://mail.python.org/pipermail/python-bugs-list/2006-December/036524.html
[2] Daily Life in an Ivory Basement : /feb-07/problems-with-subprocess
http://ivory.idyll.org/blog/feb-07/problems-with-subprocess
[3] How can I run an external command asynchronously from Python? -
Stack Overflow
http://stackoverflow.com/questions/636561/how-can-i-run-an-external-command-asynchronously-from-python
[4] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
http://docs.python.org/library/subprocess.html#subprocess.Popen.wait
[5] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
http://docs.python.org/library/subprocess.html#subprocess.Popen.kill
[6] Issue 1191964: asynchronous Subprocess - Python tracker
http://bugs.python.org/issue1191964
[7] Module to allow Asynchronous subprocess use on Windows and Posix
platforms - ActiveState Code
http://code.activestate.com/recipes/440554/
[8] subprocess.rst - subprocdev - Project Hosting on Google Code
http://code.google.com/p/subprocdev/source/browse/doc/subprocess.rst?spec=s…
[9] subprocdev - Project Hosting on Google Code
http://code.google.com/p/subprocdev
[10] Python Subprocess Dev
http://subdev.blogspot.com/
Copyright:
This PEP is licensed under the Open Publication License:
http://www.opencontent.org/openpub/.
On Tue, Sep 8, 2009 at 22:56, Benjamin Peterson <benjamin(a)python.org> wrote:
> 2009/9/7 Eric Pruitt <eric.pruitt(a)gmail.com>:
>> Hello all,
>>
>> I have been working on adding asynchronous I/O to the Python
>> subprocess module as part of my Google Summer of Code project. Now
>> that I have finished documenting and pruning the code, I present PEP
>> 3145 for its inclusion into the Python core code. Any and all feedback
>> on the PEP (http://www.python.org/dev/peps/pep-3145/) is appreciated.
>
> Hi Eric,
> One of the reasons you're not getting many responses is that you've not
> pasted the contents of the PEP in this message. Pasting them makes it
> really easy for people to comment on various sections.
>
> BTW, it seems like you were trying to use reST formatting with the
> text PEP layout. Double backquotes only mean something in reST.
>
>
> --
> Regards,
> Benjamin
>
In reviewing a fix for the metaclass calculation in __build_class__
[1], I realised that PEP 3115 poses a potential problem for the common
practice of using "type(name, bases, ns)" for dynamic class creation.
Specifically, if one of the base classes has a metaclass with a
significant __prepare__() method, then the current idiom will do the
wrong thing (and most likely fail as a result), since "ns" will
probably be an ordinary dictionary instead of whatever __prepare__()
would have returned.
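A quick sketch of the failure mode (the metaclass here is hypothetical,
purely for illustration):

    from collections import OrderedDict

    class OrderedMeta(type):
        @classmethod
        def __prepare__(mcls, name, bases, **kwds):
            return OrderedDict()
        def __new__(mcls, name, bases, ns):
            cls = super().__new__(mcls, name, bases, dict(ns))
            cls.member_order = list(ns)  # only meaningful if ns is ordered
            return cls

    class Base(metaclass=OrderedMeta):
        pass

    # The common idiom hands OrderedMeta.__new__ a plain dict rather than
    # the OrderedDict that __prepare__() would have supplied:
    Dynamic = type('Dynamic', (Base,), {'b': 1, 'a': 2})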
Initially I was going to suggest making __build_class__ part of the
language definition rather than a CPython implementation detail, but
then I realised that various CPython specific elements in its
signature made that a bad idea.
Instead, I'm thinking along the lines of an
"operator.prepare(metaclass, bases)" function that does the metaclass
calculation dance, invoking __prepare__() and returning the result if
it exists, otherwise returning an ordinary dict. Under the hood we
would refactor this so that operator.prepare and __build_class__ were
using a shared implementation of the functionality at the C level - it
may even be advisable to expose that implementation via the C API as
PyType_PrepareNamespace().
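In pure Python, the proposed function would do something along these
lines (an illustrative sketch only; how the eventual class name reaches
__prepare__() is one of the details that would need settling):

    def prepare(metaclass, bases):
        # The metaclass calculation dance: find the most derived
        # metaclass among the explicit one and those of the bases.
        winner = metaclass
        for base in bases:
            candidate = type(base)
            if issubclass(winner, candidate):
                continue
            if issubclass(candidate, winner):
                winner = candidate
            else:
                raise TypeError("metaclass conflict")
        if hasattr(winner, '__prepare__'):
            return winner.__prepare__('<dynamic>', bases)
        return {}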
The correct idiom for dynamic type creation in a PEP 3115 world would then be:
from operator import prepare
cls = type(name, bases, prepare(type, bases))
Thoughts?
Cheers,
Nick.
[1] http://bugs.python.org/issue1294232
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
Hi,
I’ve read PEP 402 and would like to offer comments.
I know a bit about the import system, but not down to the nitty-gritty
details of PEP 302 and __path__ computations and all this fun stuff (by
which I mean, not fun at all). As such, I can’t find nasty issues in
dark corners, but I can offer feedback as a user. I think it’s a very
well-written explanation of a very useful feature: +1 from me. If it is
accepted, the docs will certainly be much more concise, but the PEP as a
thought process is a useful document to read.
> When new users come to Python from other languages, they are often
> confused by Python's packaging semantics.
Minor: I would reserve “packaging” for
packaging/distribution/installation/deployment matters, not Python
modules. I suggest “Python package semantics”.
> On the negative side, however, it is non-intuitive for beginners, and
> requires a more complex step to turn a module into a package. If
> ``Foo`` begins its life as ``Foo.py``, then it must be moved and
> renamed to ``Foo/__init__.py``.
Minor: In the UNIX world, or with version control tools, moving and
renaming are one and the same thing (hg mv spam.py spam/__init__.py for
example). Also, if you turn a module into a package, you may want to
move code around, change imports, etc., so I’m not sure the renaming
part is such a big step. Anyway, if the import-sig people say that
users think it’s a complex or costly operation, I can believe it.
> (By the way, both of these additions to the import protocol (i.e. the
> dynamically-added ``__path__``, and dynamically-created modules)
> apply recursively to child packages, using the parent package's
> ``__path__`` in place of ``sys.path`` as a basis for generating a
> child ``__path__``. This means that self-contained and virtual
> packages can contain each other without limitation, with the caveat
> that if you put a virtual package inside a self-contained one, it's
> gonna have a really short ``__path__``!)
I don’t understand the caveat or its implications.
> In other words, we don't allow pure virtual packages to be imported
> directly, only modules and self-contained packages. (This is an
> acceptable limitation, because there is no *functional* value to
> importing such a package by itself. After all, the module object
> will have no *contents* until you import at least one of its
> subpackages or submodules!)
>
> Once ``zc.buildout`` has been successfully imported, though, there
> *will* be a ``zc`` module in ``sys.modules``, and trying to import it
> will of course succeed. We are only preventing an *initial* import
> from succeeding, in order to prevent false-positive import successes
> when clashing subdirectories are present on ``sys.path``.
I find that limitation acceptable. After all, there is no zc project,
and no zc module, just a zc namespace. I’ll just regret that it’s not
possible to provide a module docstring to inform that this is a
namespace package used for X and Y.
> The resulting list (whether empty or not) is then stored in a
> ``sys.virtual_package_paths`` dictionary, keyed by module name.
This was probably said on import-sig, but here I go: yet another import
artifact in the sys module! I hope we get ImportEngine in 3.3 to clean
up all this.
> * A new ``extend_virtual_paths(path_entry)`` function, to extend
> existing, already-imported virtual packages' ``__path__`` attributes
> to include any portions found in a new ``sys.path`` entry. This
> function should be called by applications extending ``sys.path``
> at runtime, e.g. when adding a plugin directory or an egg to the
> path.
Let’s imagine my application Spam has a namespace spam.ext for plugins.
To use a custom directory where plugins are stored, or a zip file with
plugins (I don’t use eggs, so let me talk about zip files here), I’d
have to call sys.path.append *and* pkgutil.extend_virtual_paths?
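If I read the PEP correctly, that would mean something like this
(extend_virtual_paths being the proposed function, not an existing one):

    import sys, pkgutil
    sys.path.append('/path/to/plugins.zip')
    pkgutil.extend_virtual_paths('/path/to/plugins.zip')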
> * ``ImpImporter.iter_modules()`` should be changed to also detect and
> yield the names of modules found in virtual packages.
Is there any value in providing an argument to get the pre-PEP behavior?
Or to look at it from a different place, how can Python code know that
some module is a virtual or pure virtual package, if that is even a
useful thing to know?
> Last, but not least, the ``imp`` module (or ``importlib``, if
> appropriate) should expose the algorithm described in the `virtual
> paths`_ section above, as a
> ``get_virtual_path(modulename, parent_path=None)`` function, so that
> creators of ``__import__`` replacements can use it.
If I’m not mistaken, the rule of thumb these days is that imp is edited
when it’s absolutely necessary, otherwise code goes into importlib (more
easily written, read and maintained).
I wonder if importlib.import_module could implement the new import
semantics all by itself, so that we can benefit from this PEP in older
Pythons (importlib is on PyPI).
> * If you are changing a currently self-contained package into a
> virtual one, it's important to note that you can no longer use its
> ``__file__`` attribute to locate data files stored in a package
> directory. Instead, you must search ``__path__`` or use the
> ``__file__`` of a submodule adjacent to the desired files, or
> of a self-contained subpackage that contains the desired files.
Wouldn’t pkgutil.get_data help here?
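For example (package and file names made up):

    import pkgutil
    # Load spam/data/template.txt without going through __file__:
    blob = pkgutil.get_data('spam', 'data/template.txt')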
Besides, putting data files in a Python package is viewed very poorly by
some (mostly people following the File Hierarchy Standard), and in
distutils2/packaging, we (will) have a resources system that’s as
convenient for users and more flexible for OS packagers. Using __file__
for more than information on the module is frowned upon for other
reasons anyway (I talked with a Debian developer about this one day but
forgot the details), so I think the limitation is okay.
> * XXX what is the __file__ of a "pure virtual" package? ``None``?
> Some arbitrary string? The path of the first directory with a
> trailing separator? No matter what we put, *some* code is
> going to break, but the last choice might allow some code to
> accidentally work. Is that good or bad?
A pure virtual package having no source file, I think it should have no
__file__ at all. I don’t know if that would break more code than using
an empty string for example, but it feels righter.
> For those implementing PEP \302 importer objects:
Minor: Here I think a link would not be a nuisance (IOW remove the
backslash).
Regards
Hi,
Three weeks ago, I posted a draft on my PEP on this mailing list. I
tried to include all remarks you made, and the PEP is now online:
http://www.python.org/dev/peps/pep-0400/
It's now unclear to me if the PEP will be accepted or rejected. I don't
know what to do to move forward.
Victor
Hi all,
I am short on time right now to finish an important task before 3.2
final is out: we need to release "packaging" as a standalone release
under Python 2.x and Python 3.1, to gather as much feedback as we can
from more people.
Doing an automated conversion turned out to be a nightmare, and I was
about to go ahead and maintain a fork of the packaging package, with
the few modules that are needed (sysconfig, etc) within a standalone
release.
I am looking for someone who has some free time and who is willing
to lead this work.
3.2 can go out without this work of course, but it would be *much*
better to have that feedback.
If you are interested, please let me know.
Cheers
Tarek
--
Tarek Ziadé | http://ziade.org
Hello all,
I'd like to propose the addition of a new module in Python 3.3. The 'lzma'
module will provide support for compression and decompression using the LZMA
algorithm, and the .xz and .lzma file formats. The matter has already been
discussed on the tracker <http://bugs.python.org/issue6715>, where there seems
to be a consensus that this is a desirable feature. What are your thoughts?
The proposed module's API will be very similar to that of the bz2 module;
the only differences will be additional keyword arguments to some functions,
for specifying container formats and detailed compressor options.
The implementation will also be similar to bz2 - basic compressor and
decompressor classes written in C, with convenience functions and a file
interface implemented on top of those in Python.
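By analogy with bz2, usage would presumably look something like this (a
sketch following bz2's naming conventions, not a finalized design):

    import lzma

    data = lzma.compress(b'example payload')
    assert lzma.decompress(data) == b'example payload'

    # File interface, mirroring bz2.BZ2File:
    with lzma.LZMAFile('archive.xz', 'w') as f:
        f.write(b'example payload')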
I've already done some work on the C parts of the module; I'll push that to my
sandbox <http://hg.python.org/sandbox/nvawda/> in the next day or two.
Cheers,
Nadeem
Hello all,
I have implemented an initial version of PEP 393 -- "Flexible String
Representation" as part of my Google Summer of Code project. My patch
is hosted as a repository on bitbucket [1] and I created a related
issue on the bug tracker [2]. I posted documentation for the current
state of the development in the wiki [3].
Current tests show a potential reduction in memory use of about 20% and
in CPU time of about 50% for a join micro-benchmark. Starting a new
interpreter still causes 3244 calls that create compatibility Py_UNICODE
representations; 263 strings are created using the old API, while 62719
are created using the new API. More measurements are on the wiki page
[3].
If there is interest, I would like to continue working on the patch
with the goal of getting it into Python 3.3. Any and all feedback is
welcome.
Regards,
Torsten
[1]: http://www.python.org/dev/peps/pep-0393
[2]: http://bugs.python.org/issue12819
[3]: http://wiki.python.org/moin/SummerOfCode/2011/PEP393