minimum install & pickling

Paul Boddie paul at
Thu Sep 18 12:20:02 CEST 2008

On 17 Sep, 22:18, "Aaron \"Castironpi\" Brady" <castiro... at> wrote:
> On Sep 17, 4:43 am, Paul Boddie <p... at> wrote:
> >
> These solutions have at least the same bugs that the bare bones
> solution in the corresponding framework has.  Malicious code has fewer
> options, but constructive code does too.  If you're running foreign
> code, what do you want it to do?  What does it want to do?  The more
> options it needs, the more code you have to trust.

As I noted, instead of just forbidding access to external resources,
what you'd want to do is to control access instead. This idea is not
exactly new: although Brett Cannon was working on a sandbox capability
for CPython, the underlying concepts involving different privilege
domains have been around since Safe-Tcl, if not longer. The advantage
of using various operating system features, potentially together with
tools like fakechroot or, I believe, Plash, is that they should work
for non-Python programs. Certainly, the chances of successfully
introducing people to such capabilities are increased if you don't
have to persuade the CPython core developers to incorporate your
changes into their code.

> The only way a Python script can return a value is with sys.exit, and
> only an integer at that.  It is going to have output; maybe there's a
> way to place a maximum limit on its consumption.  It's going to have
> input, so that the output is relative to something.  You just make
> copies to prevent it from destroying data.  Maybe command-line
> parameters are enough.  IIRC, Win32 has a way to
> examine how much time a process has owned so far, and a way to
> terminate it, which could be in Python's future.

There is support for imposing limits on processes in the Python
standard library, via the resource module.
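As a minimal sketch of that idea (assuming a POSIX system, since the
resource module is Unix-only; the limit values here are illustrative):

```python
import resource
import subprocess
import sys

def limit_resources():
    # Runs in the child between fork() and exec(): cap CPU time at
    # 5 seconds and file creation at 1 MB. The kernel then enforces
    # these limits on the untrusted process, not on the parent.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
    resource.setrlimit(resource.RLIMIT_FSIZE, (1_000_000, 1_000_000))

# Run a (stand-in for untrusted) script under those limits; it would
# be sent SIGXCPU if it exceeded its CPU allowance.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from the sandbox')"],
    preexec_fn=limit_resources,  # POSIX only
    capture_output=True, text=True, timeout=10,
)
print(result.stdout.strip())
```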

My experimental package, jailtools, relies on each process's sandbox
being set up explicitly before the process is run, so you'd definitely
want to copy data into the sandbox. Setting limits on the amount of
data produced would probably require support from the operating
system. Generally, when looking into these kinds of systems, most of
the solutions ultimately come from the operating system: process
control, resource utilisation, access control, and so on. (This is the
amusing thing about Java: Sun attempted to reproduce many things that
a decent operating system would provide, *and* insisted on their use
when deploying Java code in a controlled server environment, despite
already having a decent operating system of its own to offer.)
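The copy-in/copy-out approach can be sketched with nothing but the
standard library (a simplified illustration of the idea, not the
jailtools API; the file names and the "untrusted" script are made up):

```python
import shutil
import subprocess
import sys
import tempfile
from pathlib import Path

# Create a throwaway working directory and copy the input data in,
# so the untrusted script only ever sees the copy.
work = Path(tempfile.mkdtemp(prefix="sandbox-"))
(work / "input.txt").write_text("some data the script may read\n")

# An illustrative "untrusted" script that transforms its input.
script = ("data = open('input.txt').read(); "
          "open('output.txt', 'w').write(data.upper())")

# Run it with the sandbox directory as its working directory.
subprocess.run([sys.executable, "-c", script],
               cwd=work, check=True, timeout=10)

# Copy the result back out; the original data was never at risk.
output = (work / "output.txt").read_text()
shutil.rmtree(work)
print(output.strip())
```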

> PyPy sandbox says:  "The C code generated by PyPy is not
> segfaultable."  I find that to be a bold claim (whether it's true or
> not).
> I'm imagining in the general case, you want the foreign code to make
> changes to objects in your particular context, such as exec x in
> vars.  In that case, x can still be productive without any libraries,
> just less productive.

Defining an interface between trusted and untrusted code can be
awkward. When I looked into this kind of thing for my undergraduate
project, I ended up using something similar to CORBA, and my
conclusion was that trusted code would need to expose an interface
that untrusted "agents" would rely on to request operations outside
the sandbox. That seems restrictive, but as the situation with rexec
has shown, if you expose a broad interface to untrusted programs, it
becomes increasingly difficult to verify whether or not the solution
is actually secure.
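The narrow-interface idea can be illustrated with a toy example: the
trusted side hands the untrusted code a single entry point and nothing
else. (This is only a sketch of the shape of such an interface; as the
rexec experience showed, restricted globals alone are not a real
security boundary, and the names here are invented for illustration.)

```python
# Trusted side: expose a deliberately small API to the untrusted code.
def request_read(name):
    # The host decides which resources the agent may see.
    allowed = {"greeting": "hello"}
    if name not in allowed:
        raise PermissionError("access denied: %s" % name)
    return allowed[name]

# The untrusted code can only act through the names we hand it.
untrusted_source = "result = request_read('greeting') + ', world'"

namespace = {"__builtins__": {}, "request_read": request_read}
exec(untrusted_source, namespace)
print(namespace["result"])
```

Anything the agent needs from outside its sandbox has to go through
request_read, which is where the host's access-control policy lives.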


More information about the Python-list mailing list