calling a function indirectly

Jim Dennis jimd at vega.starshine.org
Thu Feb 28 03:16:14 EST 2002


In article <UTAe8.9921$jC3.362611 at news1.east.cox.net>, Jeff Hinrichs wrote:

>"Jim Dennis" <jimd at vega.starshine.org> wrote in message
> news:a5cnvo$2mdu$1 at news.idiom.com...
>> In article <Vg%c8.4179$XT1.116344 at news1.east.cox.net>, Jeff Hinrichs wrote:

>>> see:
>>> http://mail.python.org/pipermail/python-list/2001-August/060976.html
>>> for some info.   Unless you have total control over what is being eval'd
>>> you are at risk.
>>>-Jeff

>>> "Rajarshi Guha" <rxg218 at psu.edu> wrote in message
>>> news:a50udo$1dou at r02n01.cac.psu.edu...
>>>> On Wednesday 20 February 2002 01:24 in comp.lang.python Jeff Hinrichs
>>> wrote:

>>>>> If you wanted to get away from the dangerous eval, you could put your
>>>>> functions inside of a class and then,

>>>> Why is eval dangerous?

>>  In THIS case it's not a problem.  The string being eval'd is a
>>  reference to a bit of code he wrote himself.  However, if it were
>>  being combined with a string from any untrusted source (esp.
>>  user input, or data from a network connection or subprocess, even
>>  filenames from an os.listdir() or some such), then eval could be
>>  quite dangerous because it could be executing arbitrary Python
>>  code (which could, in turn, execute arbitrary bits of system code).

>>  Regulars on this newsgroup display a knee jerk reaction to eval().
>>  This could be construed as a healthy wariness in some cases.  One
>>  should always ask:
>> 		where did this data come from?
>> 		how did I validate it?
>>  and especially,
>> 		what are the risks of using/trusting this data in this way?

>>  ... but those questions should not be limited merely to strings
>>  that we might be passing to eval, they must be constantly applied
>>  throughout our coding if we intend to write robust code which
>>  works across security contexts (handles any sort of foreign data)
>>  and which is capable of enforcing the most basic implicit security
>>  policy (don't get subverted to executing arbitrary code or tricked
>>  into removing, or corrupting our data).

> This is true, but you should always consider that 
> modifications/enhancements in the future might expose those 
> "shielded" yet vulnerable sections of code.  With that in mind, 
> you should always code "on the defensive."

 "those 'shielded' yet vulnerable"... which are those?  I didn't
 refer to anything as shielded or vulnerable.  I said that we
 should not curse eval() (merely the mechanism) but that the 
 questions we should be asking are:

 	where did the data come from?
	how did we validate it?
	what are the risks of using this data in this way?
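A minimal sketch of the failure mode those questions guard against (the
function name and hostile string here are hypothetical, not from the
thread): once an unvalidated string reaches eval(), it runs as an
arbitrary Python expression with our process's privileges.

```python
# Sketch of the risk: eval() executes whatever expression the
# string contains, so untrusted input becomes executable code.
def run_expression(user_input):
    # Imagine user_input arrived from a form field or a socket.
    return eval(user_input)

print(run_expression("2 + 2"))  # the benign case: prints 4
# A hostile caller could instead pass something like
#   "__import__('os').system('...')"
# and eval() would execute it just as readily.
```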

 Remember that our Python code is already relying on a long
 trusted path (from our hardware and OS vendors and distributors,
 through all of the authors and contributors of the compilers, 
 linkers, archivers, and all the way to the interpreters (and 
 shells, dynamic library loaders, kernels, etc) and modules that
 we use.  
 
 That many of us use Python (open source) compiled with
 gcc (open source) on Linux or {Free,Net,Open}BSD systems (more
 open source), and processed (linked, archived, packaged, distributed,
 installed, and configured) with more open source tools, is
 reassuring to some.  That there are wacky people out there who
 disassemble Python byte-compiled code and binaries (into assembly),
 and people who run their systems with ICEs (in-circuit emulators)
 while debugging *their* software (which depends on our tool chains)
 --- that these people exist and that a large number of them have
 ready access to the Internet (and places like slashdot especially)
 is also reassuring.

 People who've read Ken Thompson's essay "Reflections on Trusting
 Trust" will see what I'm getting at.  While the "Trusting Trust" 
 scenario might be remotely possible even with Linux and GCC, the
 chances are truly infinitesimal.


> Danger lurks not only where we expect it but where we don't.  So having a
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> "knee jerk" reaction to eval is proper.  It has a number of uses and can be
> quite handy in one shot scripts but any code with a lifetime of more than
> one run...."Danger! Will Robinson.  Danger!"

>-Jeff

 Ummm.  This is self-contradictory.  Since danger lurks where we
 don't expect it, "knee jerk" attention to eval expressions
 (where we are EXPECTING danger) necessarily detracts from
 the rest of our coding and data manipulation assumptions (where
 we aren't *expecting* to have problems).

 It would be folly to read Python code, looking for copies of eval(),
 scrutinizing those, and then waltz away with the misconception that
 you'd done a code audit.

 Attention is ignorance.  Ignorance is Attention.

 To attend to anything is to ignore others and vice versa.
 I hate to make Zen references when talking about coding and 
 especially when discussing computer security, but the fact is 
 that we must be holistic (looking at *everything*) which does
 require abandoning prejudices and questioning our own assumptions.

 You are right about one thing: bits of code which were "safe" and
 "secure" at one point (given a set of assumptions which were valid,
 even *provable*, at the point they were implemented) have frequently
 been rendered dangerous and insecure by later enhancements to their
 environment.  
 
 UNIX is rife with examples of this.  Some will say it's evidence
 of a poor security design.  I will assert that it is a natural 
 byproduct; UNIX is one of the oldest and definitely is the most
 widely ported, wildly "enhanced" and modified, operating system 
 in history.  In fact UNIX is really a set of conventions regarding
 programming APIs, tool chain and utility sets, and file system
 naming and semantics.  UNIX is not "an operating system" but a term
 that describes many vastly divergent operating systems.

 So, in the thirty year history of UNIX (over half of the history
 of electronic computing) we've seen enhancements like symlinks, 
 DNS (over /etc/hosts), networking (especially NFS, and other networking 
 filesystems), shared/dynamic libraries, longer filenames, dropped
 constraints on filename character sets (and other i18n features), 
 introduction of GUIs and countless other enhancements which have
 each resulted in new bugs --- failures in various programs
 to implement the intentions of their programmers, and many of those
 were security bugs --- failures to enforce the intended, usually
 implicit policies of their creators and/or users/owners.

 (Other OS' are also peppered with examples of how new features
 break the previously "safe" assumptions design and programming
 decisions).
 
 Telling people to avoid eval() because it's "dangerous" and might
 lead to "insecure" code is a gross oversimplification.  It is better
 to teach them about good coding practice and design theory than to
 tout empty guidelines.  (Granted, eval() might usually be the
 *wrong* approach for other reasons, such as unmaintainability, 
 lack of introspection, opaqueness to the structured exception
 handling mechanisms, etc.; those are NOT hollow reasons.
 But "because it's dangerous" is much too vague and oversimplified).
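For the question that started this thread --- calling a function
indirectly --- the usual eval-free idiom is a dispatch dictionary
mapping validated names to function objects (a sketch; spam/eggs are
hypothetical stand-ins for the poster's own functions):

```python
# Dispatch-table sketch: only functions we explicitly list are
# reachable, so an unexpected name raises KeyError instead of
# executing arbitrary code the way eval(name + "()") would.
def spam():
    return "spam"

def eggs():
    return "eggs"

dispatch = {"spam": spam, "eggs": eggs}

def call_by_name(name):
    return dispatch[name]()

print(call_by_name("spam"))  # prints: spam
```

getattr(obj, name) serves the same purpose when the functions are
methods of a class, which is the approach suggested earlier in the
thread.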
 


