Python obfuscation

Alex Martelli aleax at
Fri Nov 18 18:02:18 CET 2005

Anton Vredegoor <anton.vredegoor at> wrote:
> What was mostly on my mind (but I didn't mention it) is that for
> something to be commercially viable there should be some kind of
> pricing strategy (NB in our current economic view of the world) where a
> better paying user gets a vip interface and poor people get the
> standard treatment.

Some fields work well with such market segmentation, but others work
perfectly well without it.  iTunes songs are 99 cents (for USA
residents; there IS some segmentation by national markets, imposed on
Apple by the music industry) whoever is buying them; I personally think
it would hurt iTunes' business model if the 99-cents song was a "cheap
version" and you could choose to "upgrade" to a better-sounding one for
extra money -- giving the mass-market the perception that they're
getting inferior goods could well hurt sales and revenue.

Market segmentation strategies and tactics are of course a huge field of
study, both theoretical and pragmatic (and it's as infinitely
fascinating in the theoretical view, as potentially lucrative or ruinous
in practical application).  It's definitely wrong to assume, as in your
statement above, that uniform pricing (no segmentation, at least not
along that axis) cannot work in a perfectly satisfactory way.

> If the heuristic always gives the same answer to the same problem it
> would be easier to predict the results. Oh no, now some mathematician
> surely will prove me wrong :-)

"Easier" need not be a problem; even assuming that the heuristic uses no
aspect whatever of randomness, you may easily think of real-world cases
where ``reverse engineering'' the heuristic from its results is
computationally unfeasible anyway.  Take the problem of controlling a NC
saw to cut a given set of shapes out of a standard-sized wood plank,
which is one of the real-world cases I mentioned.  It doesn't seem to me
that trying to reverse-engineer a heuristic is any better than trying to
devise one (which may end up being better) from ingenuity and first
principles, even if you had thousands of outputs from the secret
heuristic at hand (and remember, getting each of these outputs costs you
money, which you have to pay to the webservice with the heuristic).

> Ok. Although it's a bit tricky to prove this by using an example where
> the randomness is already in the problem from the start. If one groups
> very chaotic processes in the same category as random processes of
> course.

Well, economically interesting prediction problems do tend to deal with
systems that are rich and deep and complex enough to qualify as chaotic,
if not random -- from weather to the price of oil, etc etc.  But
problems of optimization under constraint, such as the NC saw one,
hardly qualify as such, it seems to me -- no randomness nor necessarily
any chaotic qualities in the problem, just utter computational
unfeasibility of algorithmic solutions and the attendant need to look
for "decently good" heuristics instead.

> > >Deliberately giving predictions worse than I could have given, in this
> > >context, seems a deliberate self-sabotage without any return.
> Not always, for example with a gradient in user status according to how
> much they pay. Note that I don't agree at all with such practice, but
> I'm trying to explain how money is made now instead of thinking about
> how it should be made.

Money is made in many ways, essentially by creating (perceived) buyer
advantage and capturing some part of it -- market segmentation is just
one such way.  IF your predictions are ENORMOUSLY better than
those the competition can make, then offering for free "slightly
damaged" predictions, that are still better than the competition's
despite the damage, MIGHT be a way to market your wares -- under a lot
of other assumptions, e.g., that there is actual demand for the best
predictions you can make, the ones you get paid for, so that your free
service doesn't undermine your for-pay one.  It just seems unlikely that
all of these preconditions would be satisfied at the same time; better
to limit your "free" predictions along other axes, such as duration or
location, which doesn't require your predictions' accuracy advantage to
be ENORMOUS _and_ gives you a lot of control on "usefulness" of what
you're supplying for free -- damaging the quality by randomization just
seems unlikely to be the optimal strategy here, even if you had
determined (or were willing to bet the firm that) market segmentation is
really the way to go here.
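For concreteness, the two tactics being contrasted amount to something like the following sketch (the function names, the noise level, and the seven-day free horizon are all hypothetical, invented purely for illustration):

```python
import random

def degraded_prediction(true_prediction, noise=0.05, rng=random):
    """The tactic argued against: blur the paying-customer
    prediction with multiplicative noise for the free tier.
    Only plausible if the blurred value still beats competitors."""
    return true_prediction * (1 + rng.uniform(-noise, noise))

def free_tier_prediction(true_prediction, horizon_days, max_free_days=7):
    """The alternative: limit along another axis (here, forecast
    duration) and leave the prediction's accuracy untouched."""
    if horizon_days > max_free_days:
        raise PermissionError("forecast horizon requires a paid plan")
    return true_prediction
```

The second approach needs no assumption about how large your accuracy advantage is, which is the crux of the argument above.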

Analogy: say you make the best jams in the world and want to attract
customers by showing them that's the case via free samples.  Your
randomization strategy seems analogous to: damage your jam's free
samples by adding tiny quantities of ingredients that degrade their
flavor -- if your degraded samples are still much better than the
competitors' jam, and there's effective demand for really "perfect" jam,
this strategy MIGHT work... but it seems a very, very far-fetched one
indeed.  The NORMAL way to offer free samples, so as to enhance rather
than damage the demand for your product, would be to limit the samples
along completely different axes -- deliberately damaging your product's
quality seems just about the LAST thing you'd want to do; rather, you'd
offer, say, only tiny amounts for sampling, and already spread on toast
so they must be tasted right on the spot, enticing the taster to
purchase a jar so they can have as much jam as they like, at the time
and place of their choosing.

I hope this analogy clarifies why, while I don't think deliberate damage
of result quality can be entirely ruled out, I think it's extremely
unlikely to make any sense compared to other market segmentation
tactics, even if you DO grant that it's worth segmenting (free samples
are an extremely ancient and traditional tactic in all kinds of food
selling, after all, and, when well designed and promoting a product
whose taste is indeed worth a premium price, they have repeatedly been
shown to be quite effective -- so, I'm hoping there will be no debate
that the segmentation might perfectly well be appropriate for this
"analogy" case, whether or not it is in the originally discussed case of
selling predictions via webservices).
