obviscating python code for distribution

geremy condra debatem1 at gmail.com
Wed May 18 15:07:49 EDT 2011


On Wed, May 18, 2011 at 10:33 AM, Hans Georg Schaathun <hg at schaathun.net> wrote:
> On Wed, 18 May 2011 09:54:30 -0700, geremy condra
>  <debatem1 at gmail.com> wrote:
> :  On Wed, May 18, 2011 at 12:36 AM, Hans Georg Schaathun <hg at schaathun.net> wrote:
> : > But then, nothing is secure in any absolute sense.
> :
> :  If you're talking security and not philosophy, there is such a thing
> :  as a secure system. As a developer you should aim for it.
>
> You think so?  Please name one, and let us know how you know that it
> is secure.

I was playing around with an HSM the other day that had originally
targeted FIPS 140-3 level 5, complete with formal verification models
and active side-channel countermeasures. I'm quite confident that it
was secure in nearly any practical sense.

> : > and thereby provides some level of security.
> :
> :  The on-the-ground reality is that it doesn't. Lack of access to the
> :  source code has not kept windows or adobe acrobat or flash player
> :  secure, and they have large full-time security teams, and as you might
> :  imagine from the amount of malware floating around targeting those
> :  systems there are a lot of people who have these skills in spades.
>
> You are just demonstrating that it does not provide complete security,
> something which I never argued against.

Ah, my mistake -- when you said 'some level of security' I read that as
'some meaningful level of security'. If you were arguing that it
provided roughly as much protection to your code as the curtain of air
surrounding you does to your body, then yes, you're correct.

> : > Obviously, if your threat sources are dedicated hackers or maybe MI5,
> : > there is no point bothering with obfuscation, but if your threat source
> : > is script kiddies, then it might be quite effective.
> :
> :  On the theory that any attack model without an adversary is
> :  automatically secure?
>
> No, on the assumption that we were discussing real systems, real
> threats, and practical solutions, rather than models and theory.
> There will always be adversaries, but they have limited means, and
> limited interest in your system.  And the limits vary.  Any marginal
> control will stave off a few potential attackers who just could not
> be bothered.

Empirically this doesn't appear to be a successful gambit, and from an
attacker's point of view it's pretty easy to see why. When a system
I'm trying to break turns out to have done something stupid like this,
it really just ticks me off, and I know a lot of actual attackers who
think the same way.
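(The point about obfuscation buying essentially nothing is easy to
demonstrate for Python in particular. Even if you ship only compiled
bytecode and withhold the .py source, the standard library's dis module
recovers the program's constants and logic directly. A minimal sketch --
check_key and its "hunter2" secret are hypothetical names for
illustration:)

```python
import dis

# A "secret" check someone might hope to hide by distributing
# only the compiled .pyc, not the source.
def check_key(key):
    return key == "hunter2"

# dis reads the constants and comparison straight out of the bytecode,
# so withholding the source hides essentially nothing from an attacker.
constants = [ins.argval
             for ins in dis.Bytecode(check_key)
             if ins.opname == "LOAD_CONST"]

print(constants)  # the embedded secret string is recoverable here
```

(Any of the freely available decompilers goes further and reconstructs
readable source, which is why bytecode-only distribution deters almost
nobody.)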

> In theory, you can of course talk about absolute security.  For
> instance, one can design something like AES¹, which is secure in
> a very limited, theoretical model.  However, to be of any practical
> use, AES must be built into a system, interacting with other systems,
> and the theory and skills to prove that such a system is secure simply
> have not been developed.

This is flatly incorrect.

> Why do you think Common Criteria have not yet specified frameworks
> for the top levels of assurance?

Perhaps because the lower levels of 'assurance' don't seem to provide very much.

Geremy Condra



More information about the Python-list mailing list