constant in python?

Alex Martelli aleaxit at yahoo.com
Sun Aug 19 05:27:49 EDT 2001


"Paul Rubin" <phr-n2001 at nightsong.com> wrote in message
news:7xk800ij0d.fsf at ruckus.brouhaha.com...
> "Alex Martelli" <aleaxit at yahoo.com> writes:
> > It's un-messy, though rather rough-cut at this point.  Best
> > coverage is AMK's at http://py-howto.sourceforge.net/rexec/rexec.html.
>
> Thanks, rexec/bastion looks nice and seems to take care of the
> problem.  I didn't see that there were enough pre-existing Python
> primitives to do that.  I'll have to look at the rexec/bastion code
> since it will probably be instructive.

It simply relies on the fact that code executed in a (global)
dictionary namespace whose __builtins__ entry does not
refer to the real __builtin__ module is interpreted as being
"restricted".  Restricted code is denied access to some of
Python's normal introspection facilities -- the interpreter
sees to that.  So, for example, a BastionClass instance has
a _get_ method that's a function supplied to it by the
factory function Bastion; that function in turn holds the real
object as a default argument.  Normal Python code could
introspect that and get at the real object; restricted code
cannot, because its introspection abilities are limited.
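
Here's a rough sketch of both mechanisms at work.  It's a
much-simplified stand-in for what Bastion.py actually does, not
the real code, but it shows the default-argument trick and how
restricted mode blocks the obvious attack on it:

    class BastionSketch:
        # stand-in for Bastion.BastionClass: every attribute access
        # is funnelled through the _get_ function the factory hands us
        def __init__(self, get):
            self._get_ = get
        def __getattr__(self, name):
            return self._get_(name)

    def make_bastion(obj, filter=lambda name: name[:1] != '_'):
        # stand-in for Bastion.Bastion(): the real object is held
        # as a default argument of the accessor function
        def get(name, obj=obj, filter=filter):
            if filter(name):
                return getattr(obj, name)
            raise AttributeError(name)
        return BastionSketch(get)

    class Secret:
        def hello(self):
            return "hi from the real object"

    b = make_bastion(Secret())
    print b.hello()                 # the wrapped method works fine

    # normal code can introspect its way back to the real object:
    print b._get_.func_defaults[0]  # -> the Secret instance itself

    # but code executed with a fake __builtins__ entry is restricted,
    # and the very same probe gets slapped down by the interpreter:
    env = {'__builtins__': {}, 'b': b}
    try:
        exec "b._get_.func_defaults" in env
    except RuntimeError, e:
        print "denied:", e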


> An alternative may be to use one of the remote object schemes to put
> the restricted object in a separate Unix process (maybe even on a
> separate machine) from the caller.  Then the restricted object could
> implement its own access policy and generically defeating the control
> would require breaking Unix security.  This is probably the most
> secure possible scheme, but its limitations may not always be
> tolerable.

Yes, putting the code whose execution is being restricted
in another process (maybe on another machine) *IS* a huge
step up in security.  Denial-of-service attacks via the untrusted
code become MUCH easier to defend against, for example: a
watchdog process can monitor the untrusted process's resource
consumption and terminate it if need be -- the untrusted process
can be run with enforced priorities, under a userid with a very
limited disk-quota, etc, etc.  (It's all quite feasible under NT,
too, by the way -- just a little bit costly, because starting a new
process is heavier, but once that's done the monitoring
facilities are quite decent.)  I do agree that sometimes the
overall jump up in complexity and performance overhead
is just something you can't afford.
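
For concreteness, here's one way such a watchdog could look
under Unix -- all the numbers are made up, RLIMIT_AS isn't
available on every platform, and run_untrusted_code is just a
placeholder for however you actually run the untrusted stuff:

    import os, resource, signal, time

    pid = os.fork()
    if pid == 0:
        # child: clamp resources *before* touching untrusted code
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))  # 5 CPU-seconds
        resource.setrlimit(resource.RLIMIT_AS,
                           (50*1024*1024, 50*1024*1024)) # 50MB of memory
        os.nice(10)              # enforced (lowered) priority
        run_untrusted_code()     # placeholder for the untrusted work
        os._exit(0)

    # parent: the watchdog -- poll the child, kill it if it overstays
    deadline = time.time() + 10.0        # 10-second wall-clock budget
    while time.time() < deadline:
        done, status = os.waitpid(pid, os.WNOHANG)
        if done:
            break                        # child finished on its own
        time.sleep(0.1)
    else:
        os.kill(pid, signal.SIGKILL)     # terminate it if need be
        os.waitpid(pid, 0)               # and reap the corpse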

Something I've never done is keeping just two long-lived
processes up, the monitoring one and the untrusted one -- the
process-startup overhead would only be paid on the rare
occasions when the untrusted process is terminated.  Sort
of like an untrusted-machine approach to firewalling -- you
don't build a new machine for each network transaction in
that case:-).  But I've never thought deeply enough about
potential vulnerabilities -- could one clever piece of
untrusted code hide eggs in the untrusted process and
exploit a further unrelated piece of untrusted code loaded
later with different privileges, for example?
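
To make that worry concrete, here's a toy version of the
scenario -- purely hypothetical, and it assumes the reused
process keeps some namespace alive between runs:

    shared = {}   # any state the reused process keeps across runs

    # run 1 (low privilege) plants an innocuous-looking "helper":
    exec "shared['fmt'] = lambda s: s + ' [pwned]'" in \
         {'__builtins__': {}, 'shared': shared}

    # run 2 (later, perhaps more privileged) trustingly calls it:
    exec "print shared['fmt']('report')" in \
         {'__builtins__': {}, 'shared': shared}
    # prints: report [pwned] -- run 1's egg fired inside run 2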


Alex





