What is randomness?

The BBC Radio 4 show ‘In Our Time’ took up the issue of randomness today. The In Our Time website has a link to the show on iPlayer, if you missed it the first time.

What is meant by randomness? Well, a truly random event is not deterministic: it is not possible to determine the next result from previous results, or from anything else.

In fact, random processes are very important in many areas of mathematics, science, and life in general, but truly random processes are notoriously difficult to achieve. Why should this be the case? Because many processes that we think of as random, like rolling a die, are in fact deterministic: in theory, you could determine the outcome of the roll if you knew the die's exact position, velocity, mass distribution, and so on.

The ancient Greek philosopher and mathematician Democritus (ca. 460 BC – ca. 370 BC) was a member of the group known as atomists. This group of ancients pioneered the concept that all matter can be subdivided into its fundamental building blocks, atoms. Democritus decreed that there was no such thing as true chance. He gave the example of two men who meet at a well, and both consider their meeting to have been pure chance. What they didn’t know is that the meeting was probably arranged in advance by their families. This can be considered an analogy for the deterministic dice roll: there are factors that determine the outcome, even if we cannot precisely measure or control them.

Epicurus (341 BC – 270 BC), a later Greek philosopher, disagreed. Although he had no idea how small atoms really were, he suggested that they swerve randomly in their paths. No matter how well we understand the laws of motion, there will always be randomness introduced by this underlying property of atoms.

Aristotle worked more on probability, but for him it remained a non-mathematical pursuit. He divided all things into certain, probable, and unknowable, writing, for example, that the result of throwing knucklebones (the astragali that preceded dice) was unknowable.

As with many other areas of mathematics, the theme of randomness and probability did not resurface in Europe until the Renaissance. The mathematician and gambler Gerolamo Cardano (September 24, 1501 – September 21, 1576) correctly worked out the probabilities of rolling a six with a single die, a double six with two dice, and a triple six with three. He was the first person to notice, or at least to record, the fact that a total of 7 is more likely to be rolled with two dice than any other number. These revelations were part of his manual for gamblers. Cardano had suffered terribly from his gambling (he sometimes pawned all his family's belongings, ended up in a poorhouse, and got into fights). The book was his way of telling fellow gamblers how much to bet and how to stay out of trouble.

In the 17th century, Fermat and Pascal collaborated and developed a more formal theory of probability, in which numbers were assigned to probabilities. Pascal developed the idea of an expected value and used a famous probabilistic argument, Pascal’s wager, to justify his belief in God and his virtuous life.
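Pascal's notion of expected value is easy to state: weight each possible outcome by its probability and add everything up. A minimal sketch, using a fair die as the example (the example is mine, not from the broadcast):

```python
from fractions import Fraction

# Expected value of a fair six-sided die:
# each face 1..6 occurs with probability 1/6.
faces = range(1, 7)
expected_value = sum(Fraction(face, 6) for face in faces)
print(expected_value)  # 7/2, i.e. 3.5
```

Using exact fractions rather than floats keeps the arithmetic free of rounding error.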

Today, there are sophisticated tests that can be performed on a sequence of numbers to determine whether or not the sequence is truly random, or whether it has been determined by a formula, by a human, or by some other means. For example, does the digit 7 appear one tenth of the time (plus or minus some allowed error)? Is the digit 1 followed by another 1 one tenth of the time?
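A rough sketch of such a frequency test (the function name and the 2% tolerance are illustrative choices, not a standard):

```python
from collections import Counter

def digit_frequency_test(digits, tolerance=0.02):
    """Check whether each digit 0-9 appears roughly one tenth of the time."""
    counts = Counter(digits)
    n = len(digits)
    return all(abs(counts.get(d, 0) / n - 0.1) <= tolerance
               for d in "0123456789")

# A sequence that is obviously not uniform fails the test...
print(digit_frequency_test("7" * 1000))          # False
# ...while a perfectly balanced one passes.
print(digit_frequency_test("0123456789" * 100))  # True
```

Note that passing says only that the digit frequencies look right; a sequence like "0123456789" repeated passes this test while being entirely predictable, which is why batteries of further tests are needed.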

An increasingly sophisticated series of tests can be applied. There is the “poker test”, which looks at digits in groups of 5 to see how often there are two pairs, three of a kind, and so on, and compares the frequency of these patterns with those expected in a truly random sequence. The chi-squared test is another favorite of statisticians: given the pattern of frequencies that has occurred, it yields a probability, and hence a confidence level, that the sequence was generated by a random process.
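A minimal version of the chi-squared test on digit frequencies might look like this (the 16.92 critical value is the standard 5% threshold for 9 degrees of freedom; everything else is an illustrative sketch):

```python
def chi_squared_statistic(digits):
    """Chi-squared goodness-of-fit statistic for digit frequencies,
    measured against a uniform expectation of n/10 per digit."""
    n = len(digits)
    expected = n / 10
    observed = {d: digits.count(d) for d in "0123456789"}
    return sum((observed[d] - expected) ** 2 / expected
               for d in "0123456789")

# With 9 degrees of freedom, a statistic above ~16.92 is rejected
# as non-uniform at the 5% significance level.
print(chi_squared_statistic("0123456789" * 100))  # 0.0 -- perfectly uniform
```

A statistic of exactly zero is itself suspicious: real random sequences fluctuate, so frequencies that match the expectation too perfectly can also indicate tampering.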

But none of these tests is perfect. There are deterministic sequences that pass every test yet are not random. For example, the digits of the irrational number π look like a random sequence and pass all the tests for randomness, but of course they are not random: π is a deterministic sequence of digits, and mathematicians can compute it to as many decimal places as they like, given powerful enough computers.

Another apparently random distribution that occurs naturally is that of the prime numbers. The Riemann hypothesis, if true, would pin down how the primes are distributed, but it remains unproven, and no one knows whether it holds for very large values. Like the digits of π, however, the distribution of the primes passes all the tests for randomness. It is deterministic, yet unpredictable.

Another useful measure of randomness is Kolmogorov complexity, named after the 20th-century Russian mathematician Andrey Kolmogorov. The Kolmogorov complexity of a sequence is the length of the shortest possible description of it. For example, the sequence 01010101… can be described simply as “Repeat 01”. This very short description signals that the sequence is certainly not random.

For a truly random sequence, however, no such shortcut exists: the shortest description is essentially as long as the sequence itself, which is exactly what we would expect of a random sequence.
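Kolmogorov complexity itself is uncomputable, but compression gives a rough, practical stand-in: a sequence with a short description compresses well, while a random one barely compresses at all. A sketch using Python's zlib (the exact byte counts will vary):

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a crude, computable proxy
    for Kolmogorov complexity (which itself is uncomputable)."""
    return len(zlib.compress(data, 9))

repetitive = b"01" * 500          # "Repeat 01" -- a very short description
unpredictable = os.urandom(1000)  # 1000 bytes from the OS entropy source

print(compressed_size(repetitive))     # a few dozen bytes
print(compressed_size(unpredictable))  # close to 1000 -- barely compresses
```

The gap between the two sizes is the point: the repetitive sequence shrinks dramatically, while the random bytes stay essentially full length.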

Over the past two centuries, scientists, mathematicians, economists, and many others have come to realize that sequences of random numbers are very important to their work. And so, in the 19th century, methods for generating random numbers were devised. Rolling dice was one, but dice may be biased. Walter Weldon and his wife spent months at their kitchen table rolling a set of 12 dice over 26,000 times, but the data were later found to be flawed because the dice were biased, which seems a terrible shame.

The first published collection of random numbers appeared in a 1927 book by L.H.C. Tippett. After that, there were many attempts, many of them failures. One of the more successful methods was pioneered by John von Neumann: the middle-square method, in which a 100-digit number is squared, the middle 100 digits of the result are taken as the next number, that number is squared in turn, and so on. This process quickly produces digits that look random, although the sequences eventually fall into short cycles, a known weakness of the method.
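Scaled down from the 100-digit version described above to 4-digit numbers for readability, one step of the middle-square method can be sketched as:

```python
def middle_square(seed: int, digits: int = 4) -> int:
    """One step of von Neumann's middle-square method: square the number
    and keep the middle `digits` digits of the result."""
    squared = str(seed ** 2).zfill(2 * digits)  # pad to exactly 2*digits digits
    mid = (len(squared) - digits) // 2
    return int(squared[mid:mid + digits])

# Generate a short stream from a 4-digit seed.
x = 5227
for _ in range(5):
    x = middle_square(x)
    print(x)
```

The zero-padding matters: without it, squares with fewer than eight digits would shift which digits count as the "middle", and the stream would not be reproducible.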

In the 1936 United States presidential election, all opinion polls pointed to a close result, with a possible victory for Republican Party candidate Alf Landon. In the event, the result was a landslide victory for Franklin D Roosevelt of the Democratic Party. The opinion pollsters had chosen poor sampling techniques. In their attempts to be high-tech, they called people to ask about their voting intentions. In the 1930s, the wealthiest people, mostly Republican voters, were much more likely to have telephones, so polling results were deeply skewed. In surveys, true randomization of the sample population is of paramount importance.

Likewise, randomization is very important in medical trials. Choosing a biased sample (e.g., too many women, or too young a group) can make a drug seem more or less effective than it really is, biasing the experiment, with possibly dangerous consequences.

One thing is certain: humans are not very good at producing random sequences, and they are not very good at detecting them either. When shown two dot patterns, people are particularly bad at deciding which one was generated at random. Similarly, when trying to write down a random sequence of digits, very few people include features like the same digit appearing three times in a row, which is in fact a prominent feature of genuinely random sequences.
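That last claim is easy to check by simulation: most random 100-digit strings do contain a digit repeated three times in a row (the trial count and seed below are arbitrary choices):

```python
import random

def has_triple(digits: str) -> bool:
    """True if any digit appears three or more times in a row."""
    return any(digits[i] == digits[i + 1] == digits[i + 2]
               for i in range(len(digits) - 2))

rng = random.Random(0)  # seeded so the experiment is reproducible
trials = 2000
hits = sum(
    has_triple("".join(rng.choice("0123456789") for _ in range(100)))
    for _ in range(trials)
)
print(hits / trials)  # roughly 0.6 of random 100-digit strings contain a triple
```

So a hand-written "random" sequence with no triples at all is, statistically, a giveaway that a human produced it.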

But is anything truly random? Going back to the dice we considered at the beginning, where knowledge of the precise initial conditions would have allowed us to predict the outcome: surely the same is true of any physical process that produces a set of numbers.

Well, so far, atomic and quantum physics have come closest to providing truly unpredictable events. To date, it is impossible to determine precisely when a radioactive atom will decay. It seems random, but perhaps we simply do not yet understand the mechanism. For the moment, it is probably the only way to generate truly random sequences.

ERNIE, the UK government's premium bond number generator, is now in its fourth incarnation. It must be random, so that every premium bond holder in the country has an equal chance of winning a prize. It contains a chip that exploits the thermal noise within its own circuitry, that is, the random motion of electrons. Government statisticians run tests on the number sequences it generates, and they do, in fact, pass the tests for randomness.

Other applications include the large random prime numbers used in Internet transactions to encrypt your credit card number. The National Lottery machines use a set of very light balls and air currents to mix them up, but, like dice, this could in theory be predictable.
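The primes behind such transactions are found by drawing random candidates and testing them for primality. Real systems rely on vetted cryptographic libraries, but a sketch of the idea, using the Miller-Rabin probabilistic test, looks like this:

```python
import random

def is_probably_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

def random_prime(bits: int = 64) -> int:
    """Draw random candidates, with top and bottom bits forced to 1,
    until one passes the primality test."""
    while True:
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probably_prime(candidate):
            return candidate

print(random_prime(64))
```

Note the double role randomness plays here: random candidates are drawn, and the test itself uses random witnesses, so each failed round only bounds the chance of a composite slipping through.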

Finally, the Met Office uses sets of random numbers for its ensemble forecasts. Weather is difficult to predict because of the well-known phenomenon of chaos: the final state of the atmosphere depends heavily on the precise initial conditions. It is impossible to measure the initial conditions with the required precision, so atmospheric scientists feed their computer models several different scenarios, with the initial conditions varied slightly in each. The result is an ensemble of different forecasts, and a weather presenter who talks about percentage chances rather than certainties.
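The sensitivity to initial conditions, and the ensemble idea, can be illustrated with a toy chaotic system standing in for a weather model (the logistic map and the perturbation sizes here are my stand-ins, not the Met Office's methods):

```python
def logistic_step(x: float, r: float = 4.0) -> float:
    """The logistic map, a textbook example of chaotic dynamics."""
    return r * x * (1 - x)

def forecast(x0: float, steps: int = 50) -> float:
    """Run the toy 'model' forward from initial condition x0."""
    x = x0
    for _ in range(steps):
        x = logistic_step(x)
    return x

# An "ensemble" of runs whose initial conditions differ by parts in 10^8.
base = 0.4
ensemble = [forecast(base + i * 1e-8) for i in range(10)]
spread = max(ensemble) - min(ensemble)
print(spread)  # large: the tiny initial differences have been amplified
```

The spread across the ensemble is the useful output: when the runs agree, the forecast is confident; when they fan out like this, the honest statement is a percentage chance.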

See also: In Our Time.
