Alex Martelli aleaxit at yahoo.com
Sun Jun 3 16:09:59 EDT 2001

"Darren New" <dnew at san.rr.com> wrote in message
news:3B1A7FEB.E466824E at san.rr.com...
> Alex Martelli wrote:
> > _Complete_ information about how the numbers are generated
> > requires complete information about the state of their generator.
> Yep. The fun thing about quantum physics is that *complete* information
> is available about the state of the generator, and you *still* don't

Except it's not -- you can't be told position AND momentum
of the particles (you can have either, but not both).  The
information isn't (and presumably can't be) complete.

> > Is your claim, then, that "randomness" MUST be used ONLY
> > for systems which, at the level of observation under discussion,
> No. The point I'm discussing is that an algorithm cannot generate
> randomness.

Maybe it can't "generate" the randomness, but it can "unearth"
it if it's hidden in the underlying "deep truth of mathematics".
Which seems to be a way in which one can read C's thesis.

Do the physical generators actually "generate" the randomness,
or do they rely on underlying "deep randomness in the deep
truth of physics", just unearthing some of it (e.g. by suitable
measurement)?

> No matter how much complexity you add to a mathematical
> formula, it's still 100% predictable if you know what the formula is.

So the _amount_ of randomness is constrained by 'program
size'.  Thus, it's finite for a finite-size program (program size
being an upper bound; a lower bound, it seems, is impossible
to establish in general).

> This is untrue of physical random number generators.

I.e., a physical RNG cannot be finitely described?  I think it
can be, depending on its design -- and no doubt with the
same as-close-to-1-as-desired-probability constraints we
need to use on algorithms when we're running them on any
physical piece of equipment.  Deck-shuffling theory is an
example: a shuffled deck of cards has long been regarded as
a typical "generator" of randomness.

Dice that are massive enough may presumably be similarly
finitized -- give me the initial state (position & momentum)
accurately, and in principle I can apply classical mechanics
and compute the positions at rest.  Even if the dice are thus
predictable-in-principle, it doesn't mean they aren't a RNG --
maybe not a 'perfect' one if you include the condition about
being given ALL information (or all AVAILABLE information
for a good-enough value of 'available':-).

No doubt better-in-principle generators can be designed,
taking advantage of the impossibility of having "complete
enough" state information.  I dunno, lava lamps, whatever.
But that doesn't mean that suddenly card decks or dice
"aren't random" any more (or that they never were).  For
all "finite-randomness generators", the interesting issue
becomes "how much randomness".

> > Chaotic effects may
> > well ensure that the number of bits of information that would
> > be needed to obtain prediction is higher than any practical
> > measure.
> In an algorithm, it's never going to be bigger than your seed.

Not sure what you mean by "seed" in the general case of
any algorithm whatsoever.  Even assuming the algorithm
has to have an input and we must call the initial input
value 'seed', what can you predict if all I tell you is that
"the seed" is 32304823408349093?  Nothing, it seems to
me, unless the information you receive also includes the
program that's going to munge this "seed".  So, why confuse
things by putting "seeds" into the picture?  Let's just say
that amount of randomness (as measured in bits) can never
be greater than the size of the program -- it's easier to
account for any literal constants as part of the program's
size than vice versa, isn't it?  Well, Chaitin's approach boils
down to defining amount of randomness as _minimal_
program size needed to generate a sequence (which turns out to
be undecidable in general, if I understand him correctly:
any actual program that does the generation gives me an
upper bound on the randomness of its output, but in
general I have no way to prove a lower bound).

> > So what is so special, that distinguishes a computer system
> > from a deck of cards -- considering both systems at a
> > macroscopic enough level that quantum effects can be
> > ignored, which is easier for the deck of cards I think but
> > is commonly done for well-designed computers too:-) --
> > so that I *could* get randomness from one and not the
> > other?
> You don't get randomness from a deck of cards. You get randomness from
> *shuffling* a deck of cards. The randomness comes from not knowing
> exactly what the fingers were doing.

Ah, from lack of information?  The information is apparently
not-available because nobody bothered to notice it?-)

So is lack of information randomness?-)

> If you shuffled the cards algorithmically (e.g., with a mechanical card
> shuffler that very reliably put them in the same order) then they
> wouldn't be random at all, *if* you knew exactly how the mechanical card
> shuffler worked. That's the equivalent of using an algorithm to generate
> pseudo-random numbers.

And it turns out to be a good approximation of what happens when
a human being shuffles cards in practice -- he or she tends to give
a finite (and small) number of riffles that are rather predictable.

Particularly when the original unshuffled deck has 'structure', it's
interesting to study to what extent that structure is broken up and
to what extent a trace of it is preserved.  For example, in classic
rubber bridge, much of the deck before shuffling has been grouped
in 'tricks' of 4 cards each which tend to be of the same suit, or 3
in a suit and 1 in another.  If it was dealt without any shuffling,
the 4 hands would tend to be rather "balanced" -- with similar
numbers of cards in the various suits.  Depending on the kind
and amount of shuffling used, this structure is gradually broken down
until the probabilities approximate "theoretical" ones (those that
would obtain if each hand of 13 cards were built by sampling
without repetition from the deck).  Measuring the _amount_ of
randomness generated by various shuffling methods and amounts
thus becomes quite interesting (for card-play theorists, at
least -- maybe to some extent for players & others too).
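For what it's worth, that gradual breakdown of structure can be
simulated (a sketch only: I'm assuming the Gilbert-Shannon-Reeds
idealization of a human riffle, a suit-sorted starting deck rather
than one gathered in tricks, and round-robin dealing of four hands):

```python
import random

rng = random.Random(2001)

def riffle(deck):
    # One Gilbert-Shannon-Reeds riffle: cut at a Binomial(52, 1/2)
    # point, then interleave, dropping from each packet with
    # probability proportional to its remaining size.
    cut = sum(rng.random() < 0.5 for _ in deck)
    left, right = deck[:cut], deck[cut:]
    out, i, j = [], 0, 0
    while i < len(left) or j < len(right):
        rem_left = len(left) - i
        if rng.random() * (rem_left + len(right) - j) < rem_left:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out

def avg_longest_suit(riffles, trials=300):
    # Riffle a suit-sorted deck `riffles` times, deal the first of
    # four round-robin hands, and average its longest-suit length.
    total = 0
    for _ in range(trials):
        deck = list(range(52))       # card c has suit c // 13
        for _ in range(riffles):
            deck = riffle(deck)
        hand = deck[0::4]            # 13 cards, dealt round-robin
        total += max(sum(1 for c in hand if c // 13 == s) for s in range(4))
    return total / trials

# Unshuffled, this structured deck deals perfectly flat 4-3-3-3 hands;
# after half a dozen or so riffles the average longest suit approaches
# the value for a fully random deal (roughly 4.9).
```

Tracking how such a statistic drifts toward its "theoretical" value as
riffles are added is exactly the kind of quantitative measurement of
shuffling I have in mind.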

Saying that the resulting deck-order "isn't random at all if you
knew" again seems to consider randomness totally congruent to
lack of information.  It isn't *perfectly* random, sure, but that's
a long way from saying it isn't random *at all*.  Measuring the
_amount_ of randomness (maybe due to the exact amount of
information you lack, if you want to frame it this way) is the
interesting part.

A shuffled deck of cards may be very far from your Platonic
ideal of "a physical RNG", but it's pretty physical, has been
used as a RNG for centuries, and (as any card-player can
attest) generates or unearths or whatever SOME amount of
randomness.  Maybe those who shuffle card decks to get
randomness are "in a state of sin" (those who call that deck
"the Devil's Picture Book" would no doubt agree:-), but they
do get (finite amounts of) randomness, not 'none at all'.

> > It seems more fruitful to me to consider the amount
> > of information, i.e., a quantitative assessment of randomness,
> > rather than claim that only if that amount is 'infinite' can
> > the word 'randomness' be properly used.
> It depends, again, on which meaning of "random" you want to use. For
> some folks, "unpredictable" and "random" are two different things.

And "random" means instead...?  Didn't you also define it as such?

> > Taking a set of states and deeming them equivalent can
> > perfectly well be done -- who ever gets an "exact" number
> > from physical measurement anyway?!  It applies just as
> > well to the finite definition of randomness.
> Is fine. If you want to call "unpredictable" and "random" the same
> thing, power to you. I thought you were trying to understand the Von
> Neuman quote, not to argue that he's using words you don't like.

I have no objections to the words he's using -- at most I may
have some to the meaning he might be attaching to them:-).

I'm not particularly interested (in this thread) in understanding JVN;
I _am_ interested in applying the word "randomness" to the
breaking of structure and *difficulty* of prediction that I can
study in finite systems, rather than have it reserved for some
kind of *impossibility* (infinite difficulty) of prediction.

> > > I think "random" here means "unpredictable". Tossing dice will lead to
> > > random/unpredictable results, even if you know everything there is to
> > > know about the dice and the table. Generating numbers between 1 and 6
> > > will not be unpredictable if you know the seed and algorithm.
> >
> > If you have COMPLETE information about the dice &c (taking
> > a HUGE number of bits, no doubt) you can predict their
> > trajectories, bounces (if allowed), rolls, AND final results.
> No. Quantum effects will screw that up.

Aren't dice (by far) massive enough to avoid that?

> > Why should these macroscopic-physics effect be inherently
> > unpredictable as you claim?
> If you start with a pooltable with perfect spheres for poolballs,
> perfectly racked, you cannot sink all 15 balls with your eyes closed.
> The probability cloud for the location of the last ball will be larger
> than the pooltable.

Interesting -- I can't refute this claim as I wouldn't know how
to go about it.  Is this result independent of the masses and
diameters of the balls, temperature, &c, or does it specifically
hold for the actual average sizes/temperatures/&c as used in
pool?  Unfortunately, pool is definitely not among my skills.

> > If you have complete information about the algorithm and
> > the relevant part of the state of the computer where it
> > runs, again you have predictability.
> And that's why VN said calling it "random" is sinful.

Aha, I knew it would get to attaching sinfulness to the way one
uses words -- and, wouldn't you know it, exactly the word-use
I care about in the context of this thread: what's now being
directly attacked (by your interpretation of VN's words -- I do
not know how demonstrable this interpretation is) is the attempt
to measure the *amount* of randomness... the amount of
information that would minimally afford prediction about the
random phenomenon.  If anything AT ALL predictable cannot
"by definition" be random AT ALL, then "amount of randomness"
becomes a meaningless, self-contradictory phrase.  Which is
exactly what I think is unsupportable...


More information about the Python-list mailing list