Tests For Randomness Aren’t What You Think

Here is a random event. When I wrote this article I either had a quarter or a nickel in my pocket; I did not have both.

Did I have a quarter in my pocket? You do not know. You have not been given enough information to say for certain. The “event” of my having a quarter (or nickel) in my pocket is random to you. This paragraph is also a perfect test for randomness. Why? Because here we have demonstrated randomness, which is to say, a state of less than certainty in your mind, and that state has been proven. Therefore the test is perfect.

Random, as I have written many times, and as is proved in Uncertainty, means unknown or unpredictable and nothing more.

Now before we leave this example, consider that I (Yours Truly) caused the “event” of the quarter or the nickel in my pocket. The event is not random to me, the causal agent. It would not be random to you, either, if you knew the cause (perhaps you were peeking in my window, you naughty reader).

A reader at Roulette 30 asked me to take a look at a “test for randomness”, which you might read first.

These tests work in the following fashion. A string of numbers has been caused to be by some process. Usually this process is known, or at least could be known, as when it is one of the many deterministic “pseudo-random number generators”. The “pseudo” is there to tell you the process is as rigorously deterministic as the sequence 1, 2, 3, …

Now in these tests for “randomness”, the cause of the number sequence is either ignored or is genuinely unknown. The cause has to be there, of course, even if the sequence has been produced by harnessing some quantum mechanical procedure; only with these, we know we cannot know the cause.

We already know the correct definition of random. When a cause is known, the event or sequence is no longer unknown, hence it is not random. A sequence can thus be “random” to one person, and non-random to another.

Naturally, if we know the cause, we can predict the sequence. But what if we could predict the sequence only imperfectly? Keep that question in mind.

All probability is conditional on assumed evidence or premises. If we have B = “A thirty-eight-sided object only one side of which is labeled ’00’ and only one side will be revealed to you”, the probability of seeing ’00’ is, given B and only B and no other evidence, 1/38. Suppose that is the only information you have. If you wanted to predict, the best you could do is say the probability is 1/38.

If we could learn something, anything, really, about the cause, or somehow come into possession of information related to the cause, but not the cause itself, we could augment our base knowledge and deduce a different prediction, larger or smaller than 1/38 depending on how probative this information is.
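Here is a minimal sketch of that point in Python, with made-up numbers: given B alone the prediction is 1/38; if we grant ourselves some hypothetical, imperfect evidence E related to the cause (the likelihoods below are pure assumptions for illustration), the prediction given B and E moves away from 1/38 in proportion to how probative E is.

```python
# Sketch only: the evidence E and its likelihoods are hypothetical,
# chosen to show how probative information shifts the prediction.

p_00_given_B = 1 / 38            # probability of '00' given B and only B

# Hypothetical evidence E related to the cause (not the cause itself):
# how probable is E if '00' will show, and how probable if it won't?
p_E_given_00 = 0.60              # assumed, for illustration
p_E_given_not00 = 0.20           # assumed, for illustration

# Bayes: Pr(00 | B, E) = Pr(E | 00) Pr(00 | B) / Pr(E | B)
p_E_given_B = p_E_given_00 * p_00_given_B + p_E_given_not00 * (1 - p_00_given_B)
p_00_given_B_and_E = p_E_given_00 * p_00_given_B / p_E_given_B

print(round(p_00_given_B, 4))        # 0.0263
print(round(p_00_given_B_and_E, 4))  # 0.075: larger than 1/38, since E is probative
```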

The event, in this sense, would be “less random”. Any time the probability, based on new evidence, moves toward the extremes (0 or 1), the event is better known, hence less random. But unless we can determine or know the cause, the probability will still be less than extreme, and the event will still be random.

Let’s add new evidence in the form of a string from the sequence, observed (or assumed) in the past. Given B, we can deduce the probability of seeing such a sequence, or the probabilities of functions of this sequence (one function is the total number of ’00’s; there are an infinite number of functions). The probabilities won’t necessarily match the observed relative frequencies of the sequence (or function), but why should they? Relative frequency is not the same as probability.
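For instance, here is a sketch (assuming, for illustration, a hypothetical string of 380 spins containing 14 ’00’s, and treating the spins as exchangeable given B): the probability of the function “total number of ’00’s” is deducible from B as a binomial, and the observed relative frequency need not match 1/38.

```python
# Sketch: one function of the sequence -- the count of '00's in n spins --
# and its probability deduced from B alone (each spin 1/38, spins exchangeable).
from math import comb

n = 380                          # hypothetical length of the observed string
p = 1 / 38                       # probability of '00' on any spin, given B

def pr_count_given_B(k: int) -> float:
    """Pr(exactly k '00's in n spins | B)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

k_observed = 14                  # hypothetical count of '00's in the string
print(pr_count_given_B(k_observed))   # a modest probability given B
print(k_observed / n)                 # relative frequency ~0.0368, not 0.0263
```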

It could be, as often is the case, that the probability of the sequence (or of some function of it), given B, is low. Since there are an infinite number of functions, we could always find one that has a low probability given B. So low probability in itself isn’t especially interesting.
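One always-available example (a sketch, continuing the hypothetical string above): the function “exactly this string, in this order” has an astronomically small probability given B no matter what string was observed, which is exactly why a small probability, by itself, signifies nothing.

```python
# Sketch: given B, the probability of the exact observed string is tiny
# for *every* possible string, so "tiny probability" alone proves nothing.
import math

n = 380                          # hypothetical string length, as above

# Each spin is one of 38 equally-weighted outcomes given B, so any
# particular string of n outcomes has probability (1/38)**n.
p_exact_string = (1 / 38) ** n
print(p_exact_string)            # underflows to 0.0 in floating point

log10_p = -n * math.log10(38)
print(log10_p)                   # about -600: a 1-in-10**600 chance
```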

The real test, then, is predictability. Suppose we augment B by the observed sequence: call this augmented evidence B+. Then it might be that the function in which we’re interested has a high probability given B+, but low probability given B.
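A sketch of that comparison, reusing the hypothetical count above: if B+ amounts to “B plus the observed string”, and we let the observed relative frequency stand in for the per-spin probability under B+, then the observed count is necessarily more probable given B+ than given B; by itself that proves nothing about the future.

```python
# Sketch: the observed count of '00's is more probable under B+ (which has
# already seen the string) than under B -- unsurprising, and not yet proof of anything.
from math import comb

n, k = 380, 14                   # hypothetical string length and '00' count, as above

def pr_count(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_B     = 1 / 38                 # per-spin probability of '00' given B
p_Bplus = k / n                  # stand-in per-spin probability given B+

print(pr_count(k, n, p_B))       # lower
print(pr_count(k, n, p_Bplus))   # higher, by construction
```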

Very well. Past sequences are not of much interest: future ones are. We could use B+, and B, to predict probabilities of new sequences (or functions). Pay close attention now.

Suppose you can find an opponent willing to take bets on the sequence, your opponent uses B to formulate his probabilities, and you use B+. If the augmented information in B+ has anything to do with the cause of the sequence (even at second hand, if you understand me), then you can make money from your opponent.

But if you were merely fooling yourself with B+, then you will lose money, since B better predicts the sequence.
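Here is a simulation sketch of that wager. Every particular number in it (the 70% signal quality, the B+ probabilities, the number of spins) is an assumption for illustration, not anything from the argument above: the opponent pays the usual casino 35-to-1 on a single number, you bet whenever your B+ probability of ’00’ beats 1/38, and the only question is whether B+ is genuinely connected to the cause or is merely noise dressed up as information.

```python
# Sketch: betting on '00' against an opponent who pays the standard casino
# 35-to-1 on a single number. You bet one unit whenever your B+ probability
# of '00' exceeds 1/38. All signal strengths below are assumed.
import random

def simulate(n_spins: int, informative: bool, seed: int = 1) -> float:
    rng = random.Random(seed)
    p_B = 1 / 38                                 # probability of '00' given B
    profit = 0.0
    for _ in range(n_spins):
        will_be_00 = rng.random() < p_B          # the outcome, caused by the wheel
        if informative:
            # B+ leans the right way 70% of the time (assumed: genuinely,
            # if imperfectly, related to the cause)
            signal = will_be_00 if rng.random() < 0.7 else not will_be_00
        else:
            signal = rng.random() < 0.5          # noise, unrelated to the cause
        p_Bplus = 0.06 if signal else 0.01       # your (assumed) B+ probabilities
        if p_Bplus > p_B:                        # then bet one unit on '00'
            profit += 35.0 if will_be_00 else -1.0
    return profit

print(simulate(200_000, informative=True))       # tends strongly positive
print(simulate(200_000, informative=False))      # tends negative: fooling yourself
```

The 35-to-1 payout matters here: it carries the house edge, which is why a B+ that is really just noise loses money rather than merely breaking even.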

The true test of “randomness”, then, is in the predictions you can make, since unless the observed sequence is impossible given B, you might be fooling yourself.

In other words, if you think you have found a system which can beat the casino (which uses B), by discovering some hidden sequence of numbers (which implies a B+), the only way to prove it is to take the casino’s money.

I do not say such a thing is impossible, for it is not. There have been reports of devices which allow discovery of “B+”s, which is why casinos ban such devices.

5 Comments

  1. imnobody00

    This is very well explained.

    But, what about the intrinsic randomness of quantum mechanics?

    You discuss pseudo-random generators but there are random generators based on quantum events.

  2. Lee

    “The cause has to be there, of course, even if the sequence has been produced by harnessing some quantum mechanical procedure; only with these, we know we cannot know the cause.”

    Our current understanding of QM is the exact opposite of this, as any physics student knows. The combination of Bell’s theorem with an abundance of experimental data forces us to abandon the notion of unknown causes in the microworld. There is only probability, and uncaused events. Of course, since this leads to metaphysical discomfort, the search for alternatives continues. But, in the absence of an alternative theory, Briggs’ statements about physics are simply incorrect.

  3. Oldavid

    [quote=Lee]The combination of Bell’s theorem with an abundance of experimental data forces us to abandon the notion of unknown causes in the microworld. There is only probability, and uncaused events.[/quote]
    [quote “Bell’s theorem”] No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.[/quote]

    A non-series of incomprehensible contradictions.

    An “uncaused event” is a self-contradiction. An event is always the result of some (maybe undefined, or even undefinable, cause(s)). Probability is only the expected result of partly defined causes. The “predictions” of QM cannot be “predictions” of anything according to your incomprehensible, self-contradictory, pontification.

  4. MosesSmellTheRoses

    So Oldavid, you can measure both the position and momentum of a sub-atomic object at the same time, with a perfect degree of accuracy?

  5. Will Janoschka

    MosesSmellTheRoses April 16, 2017 at 1:10 am

    “So Oldavid, you can measure both the position and momentum of a sub-atomic object at the same time, with a perfect degree of accuracy?”

    You are presupposing not only that your “sub-atomic object” may exist (quantum probability), but also that such does exist for sufficient interval to actually have both position and momentum. Just where is some evidence that one or both suppositions are correct? At what location in that interval does your fake photon ever exhibit both?
