How many pads of paper do I have on my desk right now? How many books are on my shelves this minute?

You don’t know the answer to any of these questions, just as you don’t know if the Tigers will beat the Marlins tonight, whether IBM’s stock price will be higher at the close of today’s bell, and whether the spin of an outermost electron in my pet ytterbium atom is up or down.

You might be able to guess—predict—correctly any or all of these things, but you do not *know* them. There is, however, some information available that allows you to quantify your uncertainty.

For example, you *know* that I can have no pads, or one, or two, or some other discrete number, certainly well below infinity. The number is more likely to be small rather than large. And since we have never heard of a universal physical law of “Pads on Desks”, this is about as good as you can do.

In a severe abuse of physics language, we can say that, to you, there are *exactly* no pads, *exactly* one pad, *exactly* two pads, and so forth, where each possibility exists in a superposition until…what? Right: until you look.

I know how many pads of paper I have because, according to my sophisticated measurement, there are three. And now, according to your new information, the probability that there are three has collapsed to one. Given this observation—and accepting that the observation is without error, and granting a belief in our mental stability—the event is no longer random, but known.

The point of this tedious introduction is to prove to you that “randomness” merely means “unknown.” Probability, and its brother randomness, are measures of information. What can be random to you can be known to me.

An event could be random to both of us, but that does not mean that we have identical information that would lead us to quantify our probabilities identically. For example, the exact number of books I have on my shelf is unknown to me and you: the event is random to both of us. But I have different information because I can see the number of shelves and can gauge their crowdedness, whereas you cannot.

A marble dropping in a roulette wheel is random to most of us. But not to all. Given the initial conditions—speed of the wheel, spin and force on the marble, where the marble was released, the equations of motion, and so forth—where the marble rests can be predicted exactly. In other words, random to thee but not to me.

I am happy to say that Antonio Acin, of the Institute of Photonic Sciences in Barcelona, agrees with me. On NPR, he said, “If you are able to compute the initial position and the speed of the ball, and you have a perfect model for the roulette, then you can predict where the ball is going to finish — with certainty.” (My aunt Kayla sent me this story.)

The story continues: “[Acin] says everything that appears random in our world may just appear random because of lack of knowledge.” Amen, brother Antonio.

A Geiger counter measures ionizing radiation, such as might occur in a lump of uranium. That decay is said to be “random”, because we do not have precise information on the state of the lump: we don’t know where each atom is, where the protons and so forth are placed, etc. Thus, we cannot predict the exact times of the clicks on the counter.

But there’s a problem. “You can’t be certain that the box the counter is in doesn’t have a mechanical flaw…” In other words, information might exist that allows the clicks to be semi-predictable, in just the same way as the number of books on my shelves is to me but not to you.

So Acin and a colleague cobbled together ytterbium atoms to produce “true” randomness, by which they mean the results of an electron being “up” or “down” cannot be predicted skillfully using *any* information.

In their experiment, the information on the ytterbium atoms’ quantum (which means *discrete*!) state is not humanly accessible, so we can never do better than always guessing “up”^{1}.

It is misleading to say that they are “generating” randomness—you cannot generate “unknowness.” Instead, they have found a way to *block information*. Information is what separates the predictable from the unpredictable.

The difference is crucial: failing to appreciate it accounts for much of the nonsense written about randomness and *discrete* mechanics.

—————————————————————————————————

^{1}*Brain teaser for advanced readers.* Acin’s experiment generates an “up” or “down”, each occurring half the time unpredictably. Why is guessing “up” every time better than switching guesses between “up” and “down”?

**Update** This is what happens when you write these things at 5 in the morning. The teaser is misspecified. It should read:

Acin’s experiment generates an “up” or “down”, each occurring as they may. When is guessing “up” (or “down” as the case might be) every time better than switching guesses between “up” and “down”?

You will see that I idiotically gave away the answer in my original, badly worded version.
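The corrected teaser can be checked with a short simulation. This is my sketch, not from the post; the probability 0.6 below is an arbitrary assumption chosen only to make the two outcomes unequal:

```r
# Sketch: constant guessing vs. switching when "up" is not a 50/50 affair.
# The 0.6 is an assumption for illustration, not a value from the post.
set.seed(42)
n <- 10000
spins <- rbinom(n, 1, 0.6)                 # 1 = "up", occurring 60% of the time
constant  <- rep(1, n)                     # always guess "up"
switching <- rep(c(1, 0), length.out = n)  # alternate "up", "down", "up", ...
mean(spins == constant)    # roughly 0.6
mean(spins == switching)   # roughly 0.5
```

Constant guessing wins exactly when one outcome is more probable than the other; at a true 50/50 every guessing pattern scores the same on average.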

16 April 2010 at 10:23 am

Disclaimer: obviously not an advanced reader. But from what I remember of your previous posts, if one is concentrating on something that can go either way, isn’t it better to guess one way all the time rather than “randomly” switching guesses back and forth? Doing the latter would seem to double the opportunities for a chance result.

16 April 2010 at 11:08 am

Hmmm… If I have a 50% probability of selecting Up vs. Down and a 50% probability of exact match then I would expect to be right 25% of the time. It seems counter-intuitive that reversing my guesses would not make me correct 75% of the time.
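The 25% intuition above is easy to check directly; this is my sketch, not the commenter’s. A random guess matches a random outcome whenever both are up or both are down, i.e. 0.25 + 0.25 = half the time, and reversing every guess leaves you at half as well:

```r
# Sketch: a 50/50 guess against a 50/50 outcome matches half the time,
# because "both up" and "both down" each contribute 25% to a match.
set.seed(1)
n <- 100000
outcome <- rbinom(n, 1, 0.5)
guess   <- rbinom(n, 1, 0.5)
mean(outcome == guess)        # roughly 0.5, not 0.25
mean(outcome == (1 - guess))  # reversed guesses: still roughly 0.5
```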

16 April 2010 at 12:47 pm

Because Monty Hall showed us the goat behind door number three.

16 April 2010 at 12:50 pm

All,

Well, I didn’t proof-read what I wrote. As usual, this means I made a bone-headed mistake. The teaser has been corrected.

16 April 2010 at 1:02 pm

“You don’t know the answer to any of these questions”

How do you know that I don’t?

How’s that for informational asymmetry?

16 April 2010 at 1:03 pm

Ari,

Except that I can ask you to prove it.

16 April 2010 at 1:08 pm

I have proved it.

You may verify it with your own proof, but you may not have my methodology or my data.

If I gave it to you it would merely be replication.

16 April 2010 at 1:14 pm

“If you are able to compute the initial position and the speed of the ball, and you have a perfect model for the roulette, then you can predict where the ball is going to finish — with certainty.”

This would seem to require that every one of the universe of interactions of the ball spinning on the wheel has a predictable outcome or direction. Certainly there must be interactions of the spinning ball that have 50/50 probabilities of going in one direction or the other. If this is true, wouldn’t it be impossible to know the outcome with certainty, even with perfect knowledge? (I guess this is two questions.)

16 April 2010 at 1:21 pm

Chuckles,

You beat me to it.

16 April 2010 at 1:31 pm

Pat,

You’d think so, but no. It turns out that the physics of roulette wheels is known fairly well. There are cheap devices—banned!—that allow you to guess with high accuracy where the marble will land.

Banned because the purpose of a casino is for you to lose.

16 April 2010 at 1:32 pm

Pat,

If you assume that you can figure out the exact forces acting on the ball at every moment, there is really no randomness. Of course this involves knowing everything—including the air hitting the ball at every moment—but assuming you could measure everything from start to finish, you could always predict the outcome.

Even if there’s a situation where there’s a “50/50 possibility,” something will tip it in one direction or another. That something, if measured, gives us knowledge of what will happen.

The problem, of course, is that that is quite difficult to do. Or at least, I would think it would be hard.

16 April 2010 at 1:34 pm

Beaten again.

I’m 0 for 2 today.

16 April 2010 at 2:17 pm

Well, I might be too naive, but I still feel compelled to ask where Heisenberg’s indetermination principle would fit in this.

Complete knowledge eliminates the need for probability but ultimately, at least in the physical world, it’s not possible. Of course this does not mean that we need to resort to quantum mechanics every time (De Broglie matter-wave helps in determining when it’s negligible) but “in principle” it means that we will never be able to reach complete knowledge.

As far as entangled states go, I’ve always been confused (but I’m in good company: Bohr said “If you are not confused then you haven’t really understood it”). However, the idea that entangled states are only predictable—in the sense that only a probability of occurrence can be given, never certainty—has always sounded normal to me. If you do not know how you made them then you cannot be certain of their state.

Marco

16 April 2010 at 2:41 pm

Roulette: the pile driver building the casino across the street causes sonic vibrations that shake the ball out of its groove 1/10 of a second before it otherwise might have left, so it hits the deflector differently and lands in another part of the wheel. I don’t think it is possible to have a perfect physical model of roulette. There is still some margin of error.

When is it better to always guess up? There is an experiment that suggests pigeons understand randomness better than people do. Suppose you have two doors with the same reward behind each one. A randomizer will open one door with 60% probability and the other door with 40% probability. The pigeon learns to stand in front of the door that opens 60% of the time and stay there. The human will try to divine a pattern that does not exist and switch between doors.
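The pigeon’s edge is easy to simulate; a sketch of mine assuming the 60/40 split described above:

```r
# Sketch: "stay at the frequent door" vs. "probability matching" (guessing
# 60/40 in imitation of the doors). The 60/40 split is from the comment.
set.seed(7)
n <- 10000
door   <- rbinom(n, 1, 0.6)   # 1 = the door that opens 60% of the time
pigeon <- rep(1, n)           # always wait at the 60% door
human  <- rbinom(n, 1, 0.6)   # switch between doors, matching the frequencies
mean(door == pigeon)   # roughly 0.60
mean(door == human)    # roughly 0.6^2 + 0.4^2 = 0.52
```

Staying put earns the full 60%; probability matching earns only 0.36 + 0.16 = 52%.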

16 April 2010 at 2:44 pm

I was going to say: or always choosing down. If you change your guess every time, the chance is fifty-fifty each time, whereas the guesses are part of a series. There is a high probability that the ups and downs will not occur fifty-fifty. If the experiment is done an imaginary infinite number of times then the outcome will ‘regress’ towards fifty-fifty!

However, we cannot do better than saying the same each time, but we could do worse if we plumped for up and down occurred fifty-five percent of the time. We are stuck with our first choice! So we’ve gambled only once, effectively, where we have a fifty-fifty chance of getting the answer right more than fifty percent of the time.

Ask me anything you want to know about gambling; Dav knows nothing! I won fifteen pounds on the Grand National. (I bet on five horses though.)

When I was little I used to play at fairs on the penny roulette wheels. I noticed the machine was very lazy at times with its firing action. After spending the whole ten pence I would stand and watch the ball. They’re more predictable than they ought to be. Someone who’s stood all day watching therefore has more information than someone who’s just walked up for the first time. So randomness has to do with information. Can something be more random? It can’t be more unknown than unknown.

16 April 2010 at 3:28 pm

Doug M,

Exactly!

Joy,

Seems to me we should share in your winnings, since we contributed to your understanding of probability. I’ll take half.

16 April 2010 at 3:43 pm

Woo hoo! I got one right.

you can send my no-prize to…

16 April 2010 at 3:45 pm

I would think that the reason that humans fail where pigeons succeed is we’re hardwired to look for patterns, trends, that sort of thing. Pareidolia and seeing the Virgin Mother in a bowl of soup and all that.

I’d be interested to see how other primates respond to these experiments. I’d suspect that it’s something we share with our hairier brethren (no, I don’t mean hirsute men.)

16 April 2010 at 4:10 pm

‘Can something be more random?’

Who knows? But ’17’ is the least random number.

16 April 2010 at 4:31 pm

Doug,

Great example.

Reminds me of an episode of The Amazing Race where the contestant had to find a clue in a wall of mail slots. Instead of starting with mail slot one (or box 25, or whatever) and moving in numerical order (or some other order), they all tried to beat randomness by guessing slots. Inevitably, they all ended up picking boxes they had already picked. I was screaming at the TV, “Just go in order.” But, alas, no one listened.

16 April 2010 at 6:45 pm

indeed, i find myself telling my students on a regular basis that ‘noise is simply how we model the structure of ignorance’

16 April 2010 at 9:44 pm

Is it random, or does it only appear random? You say the former, while Acin says the latter. He does not seem to agree that random means unknown, for if it did, then there would be no difference between appearing random and being random.

16 April 2010 at 10:13 pm

When it comes to predicting radioactive decay, there is no blocking of information. The information does not exist. Within quantum electrodynamics, each particle exists surrounded by a sea of virtual particle/antiparticle pairs that appear out of nothingness and disappear back into nothingness before the Heisenberg uncertainty principle would allow them to be detected. These ghostly particles do control radioactive decay but there is no predicting them because they don’t actually exist!

17 April 2010 at 8:06 am

Heisenberg’s Uncertainty Principle is nothing more than a fudge over a temporary measurement problem.

It mistakes “don’t know” for “can’t know”.

17 April 2010 at 9:35 am

clazy,

There is no “random”, nor is there “appear random.” There is predictable or not. If you disagree, try defining exactly what you mean by “random.”

Gary P,

I would say that nature blocks the information about what will happen. Or, that is, our current understanding is that we do not have access to the information necessary to make accurate predictions.

17 April 2010 at 10:31 am

t = rbinom(10000, 1, 0.5)
g = rep(1, 10000)
h = as.numeric(gl(2, 1, 10000)) - 1
i = as.numeric(gl(2, 2, 10000)) - 1
u = rbinom(10000, 1, 0.5)

# So t is 0 or 1 'unpredictably'; g is all 1's; h is alternate 0's and 1's;
# i is two 0's followed by two 1's, repeated;
# and u is another set of 'unpredictable' 0's and 1's.

# My results:
length(which(t == g)) # 5003
length(which(t == h)) # 5031
length(which(t == i)) # 4985
length(which(t == u)) # 4937

# Strategies g, h, i and u all seem to be right about 50% of the time.
# So my guess at the answer would be "never".
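A small extension of that script (my addition, not the commenter’s): change the 0.5 to anything else, and “never” turns into “whenever the two outcomes are not equally likely.”

```r
# Extending the commenter's sketch: with unequal frequencies, the constant
# strategy pulls ahead of the alternating one. The 0.7 is an arbitrary
# illustrative choice.
set.seed(123)
t2 <- rbinom(10000, 1, 0.7)             # "1" now occurs 70% of the time
g2 <- rep(1, 10000)                     # constant guessing
h2 <- as.numeric(gl(2, 1, 10000)) - 1   # alternating guessing
length(which(t2 == g2))   # near 7000
length(which(t2 == h2))   # near 5000
```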

18 April 2010 at 4:20 am

Briggs

I surely owe you more than seven pounds fifty!

If you had a tip jar I’d put it in; however, the pleasure was in seeing the winning horse ‘Don’t Push It’ meet his public in the winners’ enclosure. He seems like a real character. His prize? A bucket of cold water! I’m afraid my statistical understanding, such as it is, did not help me win. I was drawn to the story of the fourteen-times champion jockey whom the National had eluded, and the horse, bless him, a cutie.

19 April 2010 at 1:17 am

Briggs,

“I would say that nature blocks the information about what will happen. Or, that is, our current understanding is that we do not have access to the information necessary to make accurate predictions.”

I believe that the demonstration of Bell’s Inequality proves that any such “hidden variable” theory cannot be true (unless you accept non-locality, or faster-than-light communication, which is a problem for causality).

19 April 2010 at 5:38 am

William:

.

I think that you miss a huge necessary domain when you talk about (un)predictability, namely computability.

You will surely agree that to make a prediction you need to compute.

For instance, you predict that at some time T you will have X(T) = sqrt(2).

Your prediction is not computable because sqrt(2) is not exactly computable.

What saves you is that you may compute a number A which is as arbitrarily close to the exact prediction sqrt(2) as you wish.

Your computed answer is approximate but as close as you like… or so you think.

But are there exactly known processes which are not computable with an arbitrary accuracy?

You bet!

.

A simple example: X(N+1) = 4 X(N) [1 - X(N)].

A perfectly defined, known and computable relation, and trivially, for all integer N, X(N) stays in ]0,1[ if X(0) is in ]0,1[.

Now can you predict X(K) with an arbitrary accuracy for all K, given some uncertainty on X(0)?

Make a fast test: compute X(1000) for X(0) = 0.2 and for X(0) = 0.2001.

Really not close. So try X(0) = 0.20001. Damn, still not close, and actually completely different!

If you have patience you’ll try some more X(0) and come to the conclusion that despite having a perfectly defined “law” and initial conditions as arbitrarily close as you want, you are unable to compute X(K) with an arbitrary accuracy for larger K.

So despite the common belief, you have here a system about which everything is known yet it is utterly unpredictable.
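The “fast test” above takes only a few lines of R. This is my sketch of it, using 60 iterations rather than 1000 so the growth of the gap is visible:

```r
# Sketch of the "fast test": iterate the logistic map X -> 4*X*(1 - X) from
# two starting points that differ by 1e-4 and watch them part company.
logistic <- function(x0, n) {
  x <- x0
  for (k in seq_len(n)) x <- 4 * x * (1 - x)
  x
}
gap <- sapply(1:60, function(k) abs(logistic(0.2, k) - logistic(0.2001, k)))
gap[1]     # still tiny
max(gap)   # order one: the 1e-4 uncertainty has swallowed the whole interval
```

The tiny initial uncertainty roughly doubles every iteration, so after a few dozen steps the two trajectories bear no resemblance to each other.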

.

It is actually even worse.

It can be proven that there don’t exist integers K1 and K2 such that X(K1) = X(K2); i.e., you never get the same number X(N) two or more times.

And I’ll prove to you now in four lines that any actual computation you do on this system is just an illusion.

1) Every computer has a finite accuracy. For example, it represents numbers with 256 bits.

2) Every X(K) computed with the relation above will have that representation.

3) From 1) and 2) it follows that after a finite number of iterations, whose maximal number of steps can easily be given, the computer will find two integers K1 and K2 such that X(K1) = X(K2).

4) But as that contradicts the theorem above, whatever the computer computed, it was not the real trajectory of X(N). I.e., what you computed was an illusion.

5) QED

.

Amusingly (for a statistician), it can be proven that there exists a PDF, invariant to changes in the initial conditions, which gives the probability to get X(N) = a. The PDF can be computed with an arbitrary accuracy.
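The invariant PDF in question is the arcsine density 1/(Pi sqrt(X(1 - X))) that the commenter gives in a later comment. A sketch of mine checking one consequence of it: since its CDF is (2/Pi) asin(sqrt(x)), the trajectory should spend exactly 1/3 of its time below 1/4.

```r
# Sketch: a long trajectory of the logistic map should spend about 1/3 of
# its time below 1/4, because (2/pi) * asin(sqrt(0.25)) = 1/3.
n <- 100000
x <- numeric(n)
x[1] <- 0.3                 # arbitrary starting point for illustration
for (k in 2:n) x[k] <- 4 * x[k - 1] * (1 - x[k - 1])
mean(x < 0.25)   # roughly 1/3
mean(x < 0.50)   # roughly 1/2, by the density's symmetry
```

That the statistics converge even though, per the commenter’s own argument, the computed trajectory is not the true one, is exactly the point: the PDF is computable even when the trajectory is not.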

.

And now take it on my word that a perfect roulette wheel is just another, a bit more complicated, case of the unpredictability and incomputability demonstrated above.

However, what you can compute is the theoretical maximal prediction time, equivalent to a maximum K above which you lose any ability to say anything about the simple X(N) system above.

Indeed, according to the Heisenberg uncertainty principle, we know that the initial conditions of the ball (position and velocity) can’t be known at better than h.

Sure, h is hallucinatingly small (10^-34), and everybody knows that quantum mechanics has absolutely no impact on the behaviour of macroscopic objects like roulette wheels.

Or is it so sure…?

Well, if you put two initial conditions apart by only h in the “law” (you could try it with the X(N) example too, if your computer had the ability to compute such small numbers :)), the result will diverge after some finite time T.

This time T is the ABSOLUTE predictability time limit of the system, because you can’t know the initial conditions at better than h.

Of course, actual perturbations of the initial conditions (like the underground passing below the building or the variations of air pressure), even if infinitesimal, are vastly greater than h.

That’s why the real predictability time of even a perfect roulette wheel is astoundingly short: a couple of revolutions or so.

However, like in the X(N) example above, the probability for a given result will be as arbitrarily close to 1/37 as you wish.

19 April 2010 at 6:06 am

Sorry, I clicked on “submit comment” without giving a conclusion, which every memo should have.

So this is the conclusion:

.

1) Acin is (horribly) wrong or (tragically) misrepresented.

2) Randomness in physics, as defined by PDFs, is a naturally emerging feature in both the microscopic domain (quantum mechanics) and the macroscopic domain (roulette wheels).

3) The deep reason for randomness in physics is the impossibility of knowing the initial conditions with an infinite accuracy. Its equivalent in mathematics is the regrettable uncomputability of irrational numbers.

19 April 2010 at 11:39 am

“11:15, restate my assumptions:

1. Mathematics is the language of nature.

2. Everything around us can be represented and understood through numbers.

3. If you graph these numbers, patterns emerge.

Therefore: There are patterns everywhere in nature.”

19 April 2010 at 12:56 pm

Dear Mr. Briggs, I don’t know why you ask me to define random: you already did. You also used the word. It is therefore not very sensible to claim that there is “no ‘random’ nor ‘appear random’”. I do understand that randomness refers to an observer’s relationship to a given event, not to the event alone, even if we do speak of “random events”. The problem here is the conflict between a technical usage of the term “random” and a layman’s usage of that term. My point is that it makes no sense for Acin to say something “appears random”, in the technical sense that you are highlighting, unless he intends to say the observer knows more than he realizes. You are saying that an event IS random ONLY because of the observer’s lack of knowledge. Acin is saying it is NOT in fact random, but only APPEARS to be random because of lack of knowledge. He is using “random” differently than you are. You are both, however, making the same point. Which raises the question: is all this noodling of mine beside the point?

20 April 2010 at 8:02 am

“Information is what separates the predictable from the unpredictable. ”

And when an event remains unpredictable, even with absolute information, wouldn’t that be random?

When the entire observation group, given unlimited information, cannot skillfully predict an outcome does the distinction of random and unpredictable even matter?

21 April 2010 at 5:35 am

“And when an event remains unpredictable, even with absolute information, wouldn’t that be random?

When the entire observation group, given unlimited information, cannot skillfully predict an outcome does the distinction of random and unpredictable even matter?”

.

Yes.

X(N+1) = 4 X(N) [1 - X(N)] is such an example of absolute information AND unpredictability.

Randomness is the consequence, and can even be formally proven: the probability distribution of the Xs in the ]0,1[ interval is given by 1 / (Pi sqrt[X(1 - X)]).

25 April 2010 at 7:59 pm

Mr. Briggs –

I don’t see that the answer you “gave away” by saying “why” instead of “when” is correct. If “up” or “down” each occurs half the time, unpredictably, then all 2^N possible outcomes of a set of N up/down events are equiprobable, so all guessing patterns are equally likely to achieve any given degree of success.