Listen to the podcast at YouTube, Bitchute or Gab (to come).
From Anon comes the major announcement: “Quside unveils the world’s first Randomness Processing Unit”.
Quside today unveils its vision for the Randomness Processing Unit (RPU), a device designed to simultaneously accelerate the execution of intensive randomized workloads with reduced energy consumption and execution time savings.
Many of the most relevant simulation, optimization, and prediction workloads rely on stochastic processes. In turn, they require an ever-increasing source of high-quality, high-speed random numbers and associated processing tasks. Current approaches using pseudo-randomness generation in high-performance environments often lead to significant energy consumption and performance inefficiencies, as well as potentially introducing artifacts and co-dependencies in the statistical results.
Who needs “random” numbers? Well, it depends on what random means. Random means unknown, unpredictable, a lack of knowledge of cause.
So where might we need numbers which are unknown, unpredictable, and where the lack of knowledge of their cause is important?
I can think of only three: casinos, cryptography, and conjuring.
Casinos
Casinos are situations where preserving the lack of knowledge of the cause of numbers is paramount.
Very obviously, casinos do not want slot machines where the numbers are predictable—beyond a certain point. This is also why cards are shuffled, to conceal their order and make the sequence unpredictable—up to a point. Card counters seek ways around this randomness, and, if caught, are banned. (You are not meant to win.)
The “up to a point” means that the unpredictability is within known bounds. A deck of cards has a known composition, and the randomness is limited to that. You will not draw a “72 of Crustaceans”, for there is no such card. Same with slot machines, which only have fixed outcomes, larger and more complex than cards, to be sure.
It therefore behooves casinos to have means of producing numbers that cannot be predicted except within the known bounds of the gambles (which they call “games”). It also behooves (say behooves three or four times) casinos to boot the odd fellow who hits upon the “randomization scheme”, i.e. who has figured out, at least partially, the cause of the numbers. Such as in card counting.
Casino-like activity is ubiquitous: lotteries, sports, some elections, etc. Here’s an example I used in Uncertainty. The original twelve apostles were reduced to eleven, Judas having gone missing, and they had two equal candidates to choose from to return to a dozen. Then this happened (Acts 1: 23–26; my emphasis):
And they proposed two: Joseph called Barsabas, who was surnamed Justus, and Matthias. And they prayed and said, “You, O Lord, who know the hearts of all, show which of these two You have chosen to take part in this ministry and apostleship from which Judas by transgression fell, that he might go to his own place.” And they cast their lots, and the lot fell on Matthias. And he was numbered with the eleven apostles.
You can call this pagan superstition if you wish, but the same casting of lots occurs each time a zebra-shirted man tosses a coin to see who receives the kickoff. The idea in both cases is the same: remove a known man-made cause from the choice. That is, remove the possibility of bias from the choice.
The referee is not blamed for the result, which was unpredictable; neither were any of the eleven blamed for going with Matthias over Barsabas. Of course, there are plenty of ways to juice coin tosses and lot throwing, which are ways of introducing back those unknown causes. But you have the idea.
Cryptography
Just as obviously as for casinos, messages encoded by secret key cannot have their keys predictable. If you can guess a numerical key, even partially, you can decode a message, at least part way.
Keys must be generated by means that are at least extremely difficult to predict, if not impossible. And there is nothing more unpredictable than certain quantum phenomena, the causes of which are known to be unknown.
Which is not to say they lack cause, just that the causes are not local, in the parlance of physics, and beyond our reach. Also just like casinos, the unpredictability is within known bounds.
Hence devices like the RPU. Hardware random number generators aren’t new. Diodes near their breakdown voltage supply unpredictable voltages, which can be turned easily into numbers. Diodes, and other such devices, are not very efficient, though, and consume a lot of energy. Quside brags that they have found a newer and more efficient process, better than their competitors’, which relieves CPUs from the burden.
I don’t know whether their RPU works as advertised or not, but it is true that removing the job from CPUs is important. For most of them only produce what are called “pseudo-random” numbers, which are numbers produced by deterministic algorithms, and so are perfectly predictable—if you know their “seed”, i.e. their starting point.
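A minimal sketch of what “predictable given the seed” means, in Python; the seed value 42 is arbitrary, and the only point is that two generators started from the same seed emit identical streams.

```python
import random

# Two generators started from the same (arbitrary) seed produce the very
# same stream: "random" only to someone who doesn't know seed and algorithm.
a = random.Random(42)
b = random.Random(42)

print([a.randint(0, 9) for _ in range(8)])
print([b.randint(0, 9) for _ in range(8)])   # identical to the line above
```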
These CPU-based generators are put through innumerable tests (a pun!)—except one—to see if their outputs are in any way predictable (inside the known bounds). The exception is the known algorithm which generated them. This is a weakness, because if your enemy can guess which generator you use, he only has to uncover the seed to lay all bare.
Which threat, again, is what hardware devices would alleviate, or if they truly work as advertised, eliminate.
Conjuring
The third claimant is statistics and probability, which seek “random numbers” for simulations, and above all to approximate numerical integration.
The statistical quest exists because of two reasons: (1) lack of knowledge of straightforward analytical methods (see here and here and here), and (2) the false belief that “random” numbers have mysterious powers that replicate Nature, which some think, wrongly I say, “generates” “random” numbers which somehow, nobody knows how, guarantee the quality of statistical results.
Sometimes experiments are “randomized”—but only in situations where it is known, though not always acknowledged, there are unknown causes. This does not work. If all an observable’s causes were known, then all could be controlled, as in special physics experiments. But because all the causes aren’t known, the “randomization” cannot guarantee an equal partition of causes in “randomized” blocks of the experiments. Thus the “randomization” must be for another purpose.
The randomization should always be seen like referees casting lots: to remove the potential for human bias. To bar, or make less likely, cheating, even unconscious cheating. You don’t give the sicker patients the placebo, and so on. In some cases, the “randomization” takes part in the same statistical ritual as simulations, a sort of blessing on the results. This is wrong, and for the same reasons.
Approximating integrals using “random” numbers is sounder, but inefficient. It only happens, as the links above show, because analytical methods aren’t yet known. I never tire of quoting Jaynes:
It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought.
I go through examples in those links. The idea, that some have, that the “randomization” is needed for certain theoretical reasons, to ensure correctness of results, is wrong.
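To put a toy number on Jaynes’s point (the integrand and sample size here are mine, not from the links): estimate the integral of sin(x) on [0, 1] by averaging the function at random points, and by a plain trapezoid rule on the same number of points. The nonrandomized rule wins by several orders of magnitude for the same effort.

```python
import math
import random

# Target: integral of sin(x) on [0, 1]; exact value is 1 - cos(1).
exact = 1 - math.cos(1)
n = 1000

# Randomized estimate: average of sin at n uniform random points.
rng = random.Random(0)                        # arbitrary seed, for reproducibility
mc = sum(math.sin(rng.random()) for _ in range(n)) / n

# Nonrandomized estimate: trapezoid rule on n equally spaced points.
xs = [i / (n - 1) for i in range(n)]
trap = sum((math.sin(xs[i]) + math.sin(xs[i + 1])) / 2 * (xs[i + 1] - xs[i])
           for i in range(n - 1))

print(f"exact              {exact:.8f}")
print(f"monte carlo error  {abs(mc - exact):.2e}")    # typically around 1e-2
print(f"trapezoid error    {abs(trap - exact):.2e}")  # around 1e-7
```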
Others?
I can’t think of other applications. Which is as far from a proof that there are none as can be. Perhaps you know of others that are not equivalent to casinos or crypto or conjuring—and aren’t misguided, like simulation. Let us know.
Buy my new book and learn to argue against the regime: Everything You Believe Is Wrong.
Subscribe or donate to support this site and its wholly independent host using credit card click here. For Zelle, use my email: matt@wmbriggs.com, and please include yours so I know who to thank.
For integration, there are the “quasirandom” sequences (Sobol, e.g.), which are well-structured non-random points with some creative math on top to do integrals or sensitivity analysis. The need for randomness in integration is not there (except likely for some problem class I don’t know about).
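A sketch of what the commenter describes, assuming SciPy’s qmc module is available: Sobol points are deterministic and evenly spread over the unit interval, and a plain average over them estimates the same toy integral of sin(x) on [0, 1].

```python
import numpy as np
from scipy.stats import qmc

# Quasi-random (Sobol) points: well structured, not random, designed to
# fill the space evenly; the estimate is just the mean of sin over them.
sampler = qmc.Sobol(d=1, scramble=False)
points = sampler.random_base2(m=10)          # 2**10 = 1024 points
estimate = np.sin(points[:, 0]).mean()

print(estimate, 1 - np.cos(1))               # estimate vs exact value
```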
For simulation, it is often much easier to say that an outcome is random (such as a break rate of a key part) even though you could theoretically define every part break event explicitly. When you are stacking many such events together, it’s hard to beat the convenience of a PRNG. In those cases you want a seed as well, so you can replicate the simulation run for debug or other purposes.
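A toy sketch of that convenience; the break rate, horizon, and seed below are invented, and the only point is that re-running with the same seed reproduces the run exactly for debugging.

```python
import random

def simulate_breaks(seed, hours=10_000, break_rate=1 / 2_000):
    """Count how many times a part breaks over a run, treating each hour as
    an independent draw against the break rate (a sketch, not a real model)."""
    rng = random.Random(seed)                 # fixed seed so the run can be replayed
    return sum(rng.random() < break_rate for _ in range(hours))

print(simulate_breaks(seed=7))                # same seed, same outcome, every time
print(simulate_breaks(seed=7))
```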
… and Matthias was never mentioned again (Acts 1:26b).
It makes one wonder if that was how the Popes were originally chosen?
Maybe Quside can choose a better Pope
I suppose Dungeons and Dragons would fall under conjuring?
Consider industrial applications, where each piece of equipment involved in complex processes has been calibrated and the distribution of error is quantified by measurement. Cannot random numbers be used to investigate the uncertainty and risk of the entire process on the quality of the final product, via process simulation?
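They can, and routinely are. A rough sketch of the idea, with invented stage biases and standard deviations: push each stage’s calibrated error through the process and look at the spread of the final dimension.

```python
import random

# Propagating calibrated equipment errors through a process: each stage adds
# its own measured error distribution (the (bias, sd) pairs below are invented
# for illustration), and we inspect the spread of the final product dimension.
stage_errors = [(0.0, 0.05), (0.02, 0.10), (-0.01, 0.03)]   # (bias, sd) per stage
nominal = 25.0
rng = random.Random(1)

samples = []
for _ in range(100_000):
    value = nominal
    for bias, sd in stage_errors:
        value += rng.gauss(bias, sd)
    samples.append(value)

mean = sum(samples) / len(samples)
sd = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(f"final dimension: {mean:.3f} +/- {sd:.3f}")
```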
I’ve always found Benford’s law to be fascinating. Yet, it is often treated as some sort of statistical footnote (probably because it’s usually grounded in actual measurements).
It started with a book of logarithms being worn toward the front. That was in the pre-slide-rule days, before cheating, grade inflation, and cell phones. https://www.statisticshowto.com/benfords-law/
https://brilliant.org/wiki/benfords-law/
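A quick way to see the law in action (my own toy demonstration, not from the linked pages): the leading digits of the powers of 2 track Benford’s predicted frequencies log10(1 + 1/d) closely.

```python
import math
from collections import Counter

# First digits of 2**n for n = 1..1000, compared with Benford's prediction.
digits = Counter(int(str(2 ** n)[0]) for n in range(1, 1001))
for d in range(1, 10):
    observed = digits[d] / 1000
    benford = math.log10(1 + 1 / d)
    print(d, f"observed {observed:.3f}", f"Benford {benford:.3f}")
```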
The political types could use Benford’s Law to prove that numbers are racist, because some random numbers are favored over others. Recall when the Government decided in 1969 to pick draftees by birth dates in a lottery, where the first six months were heavily weighted.
There is still no registration for the draft for females. However, Benford’s Law is used for fraud detection, lie detection, etc., and some groups in Brazil are using it to question elections.
The entirety of human history is looking more and more like some kind of ‘misguided simulation’. Fortunately humans by their very nature are non-entropic round pegs in square holes.
random musings…
Yeah, the Poles are only sending 24 tanks; that will make a big difference. Germany proper has a total of 200. Even if you sent all those, Russia has like 2700 main battle tanks. The West has never given Ukraine enough to stop what’s coming; this is a war on the peasants worldwide, not on Russia. Russia, for all their WEF efforts, gets the richest farmland in the world and the two largest titanium surface mines in the world. They’ve already been given the uranium concession by Hillary. Putin is the favored son of dada Klaus. This war will wreck the world economy and usher in digital currency and passports as planned. All wars are domestic.
(a not so random observation)
more random musings…
It’s odd, but the preponderance of Ukrainian guards in all of the so-called ‘German death camps’ is never mentioned today. It was regular post-WW2 fare.
Seventy two of crustaceans indeed.
Random numbers are used extensively in financial modelling – especially in the simulation of asset-backed bundles – e.g. buying subprime mortgages or other loans and bundling them to reduce investor risk and fund operations. In fact I believe that prior to the 2008 crash Intel’s CPUs had a bug (actually a design issue producing greater-than-expected consistency) in the random generator which consistently biased Monte Carlo-style package simulations to suggest higher valuations (lower risk) than the same programs did when run on SPARC processors (no bug).
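For readers unfamiliar with these simulations, a deliberately crude sketch of the kind of Monte Carlo loss estimate the commenter means (the default probability, the correlation mechanism, and the loss-given-default are all invented); a biased generator would shift exactly these tail figures.

```python
import random

# Toy loss distribution for a bundle of loans: a crude common factor nudges
# all loans toward or away from default together, and we read off the mean
# and a tail percentile of the portfolio loss.
n_loans, p_default, loss_given_default = 1000, 0.05, 0.6
rng = random.Random(2008)

losses = []
for _ in range(10_000):
    common = rng.gauss(0, 0.02)               # invented "state of the economy" factor
    defaults = sum(rng.random() < p_default + common for _ in range(n_loans))
    losses.append(defaults * loss_given_default / n_loans)

losses.sort()
print("mean loss       ", sum(losses) / len(losses))
print("99th percentile ", losses[int(0.99 * len(losses))])
```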
Yes, like Jaynes says, sure, there may be nonrandom ways of doing something (no one says we have to choose between all random and all nonrandom). Yet that is the entire reason sampling theory took off like gangbusters in most all the sciences and other areas: the nonsampling errors are often larger than the sampling error. Also, nonrandom ways can fail too, for example if there is some gradient that your nonrandom method did not take into account. That is, the “requires more thought” is not well-defined and can be problematic, since man’s thought often introduces bias, makes mistakes, overlooks things, leads to nonsampling errors, etc.
Cheers,
Justin
As an archaeologist doing random stratified land surveys, I’ve often assigned numbers to the many smaller zones within the survey area, written them on bits of paper, shaken them up in a coffee can, and picked the amount I needed out one at a time with my eyes closed. No diodes were harmed during this highly technical process.
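The digital equivalent of the coffee can, for anyone who prefers diodes after all; the zone count and sample size below are made up for the example.

```python
import random

# Pick 12 of 80 numbered survey zones without replacement.
zones = range(1, 81)
picked = random.sample(zones, k=12)
print(sorted(picked))
```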
Some noise filters use random number generators. Doesn’t Reed-Solomon use something like that?
In an email exchange about my claim that evolution can’t be made rational with claims of randomness, the late great Zippy Catholic told me he could program randomness.
Of course, I am too dull to even consider it a possibility – programming randomness – since the program would necessarily exclude randomness, right?
Dang, I wish I had followed up.
The top tier security companies use true random number generators: cosmic ray detectors and video cameras pointed across a busy hallway at an enormous bank of various sizes, shapes, and colors of lava lamps. No, I’m not kidding.
You got me curious about the Quside shown in the heading. How does it work? Must be some kind of radioactive sample and a Geiger-like detector. Kind of a cool idea.
Building off the Casino idea, random numbers are of course good in a wide variety of games.
In that vein, DOOM is a good example of what is needed for “randomness.” In the original game random number calls simply take the next number from a pre-scrambled fixed list of the numbers from 0 to 255, looping to the beginning of the list when it goes past the end. It’s common for people to object that “these aren’t really random numbers!” and to say that this is a defect of the game. But in reality the numbers are still unknown during gameplay. DOOM makes random number calls a huge number of times per second (ex. having monsters aim, determine damage, determine movement, determine whether to fire or make a noise etc.) Many of these are done in response to player actions (ex. firing a weapon) and as such in actual play it becomes impossible to predict what number will come next. If you click the mouse a millisecond earlier than you did on the last playthrough then the order that the random numbers are called will change (which will in turn affect how many times it is called in the future) so beyond a second or two the numbers are effectively unknown, even if they aren’t “random” in a mystical sense.
There are many benefits to doing things this way too, chief among them the following two: First, this is an extremely quick way to get random numbers, especially in contrast to “purer” pseudo-random algorithms, so even relatively slow computers can run the game. Second, it allows players to record their inputs in a small file, and then have other players view their game recording by having the game recreate things in accordance with those inputs. If the random calls weren’t fixed then the situation in the replay would quickly diverge from what the player originally experienced, making the later inputs nonsensical. But since the random calls are fixed the recreated gameplay will match the original gameplay exactly, all in a small file size (and one easy to transfer on the dial-up internet of the time.)
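A rough sketch of the scheme described above (not DOOM’s actual table, which is a specific hard-coded list of bytes): a fixed, pre-scrambled table of 0–255, consumed in order and wrapping at the end.

```python
import random

# Build a fixed, pre-scrambled table once; the seed is hypothetical and used
# only to construct the table, not during "gameplay".
_table_rng = random.Random(1993)
RND_TABLE = list(range(256))
_table_rng.shuffle(RND_TABLE)

_index = 0

def m_random():
    """Return the next byte from the fixed table, wrapping at the end."""
    global _index
    value = RND_TABLE[_index]
    _index = (_index + 1) % 256
    return value

# During play the call pattern depends on player input timing, so the sequence
# is unpredictable in practice; resetting _index to 0 replays a recorded demo
# identically, which is what makes small replay files possible.
```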
Fast, lazy, heuristic Monte Carlo methods to get a gist of how some well-modelled process might work in a final application. I feel like much of the engineering utility of stochastic processes has to do with quickly getting to some design criterion covering all the possible ways the ocean could behave over the next 30 years, e.g.
I object, slightly, to this being lumped with conjuring, because everyone (except marketing) should understand the heuristic character of the result. And you were going to overbuild the system anyway, but are trying to shave off a few tens of tons of steel.
You are correct that intellectual laziness eventually gives too much authority to this random subsampling, leading to overconfidence. And without some rational bounding force, commonly played by an engineering oversight agency, naïve individuals would remove too much steel to get a bonus this year, merely to have an oil tanker sink 10 years from now.
Casinos, cryptography and conjuring, oh my!
I’ve made heavy use of “random numbers” in my engineering career. I’m pretty sure most of these would make our host blanch. But if there has been one theme in my professional life, it has clearly been “do more with less”, and random numbers often provide a shortcut. Sometimes the shortcut is real, sometimes the shortcut is political (i.e., useful for dealing with non-technical management).
In the distant past, I have used pseudo-random numbers to create test cases for software modules written by others, where I have no access to the inner workings of the software. The test cases were bundled together and shoved into an automated software tool that ran the test-case-suite nightly on the software team’s daily compile. The software team would arrive in the morning, and if all the test cases passed, they would release the build to the development team. The pseudo-random numbers were used when there were too many parameters for complete coverage. Instead, ‘slices’ were taken through the N-dimensional parameter space (with ‘random’ time delays as one parameter). Pseudo-random numbers were used because they repeated exactly given a starting “seed”, so the software defects could be reproduced for ‘bug cleaning’ and ‘bug regression checking’. The “random” aspect of pseudo-random was used in the belief that the sequence of numbers would be uncorrelated with any important aspect of the software module being tested. The pseudo-random numbers were produced by a standard canned procedure, but usually further randomized by skipping the first N samples and multiplying two sequences produced with two different seeds.
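A small sketch of the reproducible “slices” idea; the parameter names, ranges, and seed below are invented, and the point is that the same seed regenerates the same test suite so a failing case can be replayed.

```python
import random

def test_cases(seed, count=50):
    """Draw reproducible test-case parameter combinations from a big space."""
    rng = random.Random(seed)
    for _ in range(200):            # mimic the "skip the first N samples" habit
        rng.random()
    return [
        {
            "buffer_size": rng.choice([16, 64, 256, 1024]),
            "timeout_ms": rng.randint(1, 500),
            "retries": rng.randint(0, 5),
            "delay_ms": round(rng.uniform(0.0, 20.0), 2),
        }
        for _ in range(count)
    ]

print(test_cases(seed=20230114)[0])   # same seed, same suite, every night
```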
Another common use for random numbers was to generate a physical “random noise” signal used to drive a vibration table, to check the mechanical design of a product for failure modes such as mechanical resonances. One or more locations on the product would be fitted with an accelerometer to measure the response. Other excitation sources were used, such as swept sine waves and impulses (often by smacking the product with a hammer). Why use random noise as an excitation source? It was viewed as a reasonable substitute for the vibrations the product would be subjected to while traveling cross-country in a truck or a plane. But probably the main reason it was used was because there are some mathematical manipulations that produce seemingly intuitive results, such as the coherence function (seen as a measure of the energy in the output vibration “caused” by the input excitation; a low coherence was used as an indicator of poor measurement quality), and those mathematical manipulations required the use of averaging multiple measurements, and using a random excitation was the easiest way to justify the assumptions inherent in the analysis.
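As a rough modern sketch of the coherence check described here, assuming SciPy is available: drive a stand-in resonant filter with random noise, add measurement noise, and compute the coherence between drive and response. It sits near 1 where the response is genuinely driven by the excitation and sags where noise dominates.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1024.0
x = rng.standard_normal(100_000)                       # random excitation
b, a = signal.butter(2, [0.05, 0.15], btype="band")    # stand-in "resonance"
y = signal.lfilter(b, a, x) + 0.1 * rng.standard_normal(x.size)  # response + noise

f, Cxy = signal.coherence(x, y, fs=fs, nperseg=1024)
# Peak coherence marks the band where the output is "caused" by the input.
print(f[np.argmax(Cxy)], Cxy.max())
```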
When using random excitation in vibration testing, it was important to use a random sequence of numbers with a very long repetition period. This was because of the averaging used – the averaged results would continue to improve (higher signal-to-noise ratio, lower standard deviation of the distribution of measured values). If the random sequence was too short, the measurement would improve until the random sequence repeated, at which point the measurement would just wobble around forever with no further improvement. At first, pseudo-random sequences with very long repetition periods (measured in years, just because it was easy to make the sequences that long) were used. But then it was discovered that with nonlinear mechanical systems (which includes all mechanical systems to some extent), a subtle pattern could be discerned in the measurement results that was independent of the device being measured, but was highly correlated with the algorithm used to generate the pseudo-random sequence of numbers. To fix this problem, a “true random noise” analog noise source was built that used lightly-biased Zener diodes (a quantum effect, I’m told), which was measured and converted to a stream of numbers, which was then multiplied by the pseudo-random sequence. The analog noise source was more complicated than one might think necessary, since analog circuits pick up all kinds of interference from their environment, so multiple noise sources were used, and the differences between those sources taken as the noise output (on the assumption that the interference was highly correlated between the different noise sources and could thus be subtracted out).
At the time, very-long-period pseudo-random sequences were believed to be quite good; they passed all the randomness tests, other than being completely predictable. I think Knuth may even have stated as much in The Art of Computer Programming. As to why they fail when used in conjunction with nonlinear systems, the explanation I was given is really hand-wavy. It went something like this: the pseudo-random number generators use the equivalent of a delay line with multiplicative feedback; nonlinear systems can be modeled, at least over narrow regions, as a log function; a log function turns multiplication into addition; and a delay line with additive feedback is a linear system with effects that can be measured (e.g., echoes).
In computer graphics an element of randomness is often used to smooth out sharp transitions. The best example I can think of is dithering across a smooth gradient of one color. In this case the quality of the randomness of not very important.
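A minimal sketch of that idea: quantizing a smooth gradient to a few levels produces hard bands, while adding a little uniform noise before quantizing scatters the band edges.

```python
import numpy as np

rng = np.random.default_rng(0)
gradient = np.linspace(0.0, 1.0, 256)
levels = 4
step = 1 / (levels - 1)

# Plain quantization: hard bands with visible steps.
banded = np.round(gradient / step) * step

# Dithered quantization: small uniform noise added before rounding.
noise = rng.uniform(-0.5, 0.5, gradient.size) * step
dithered = np.clip(np.round((gradient + noise) / step) * step, 0.0, 1.0)

print(banded[40:48])     # hard step between two levels
print(dithered[40:48])   # same region alternates between the two levels
```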
Also along the lines of what Justin said above, thinking is hard. I try to do as little as possible. Of course that is why I became a programmer. Motto: I didn’t get into programming because it is easy but because I thought it would be easy.