What is a rigorous mathematical/logical definition of 'randomness'? Under what conditions can we truthfully apply the predicate 'is random'?

God does not play dice! – user237393 May 05 '15 at 06:45

What about Satan? :D – duskn May 05 '15 at 06:50

https://www.cs.auckland.ac.nz/~chaitin/sciamer.html. It can be helpful – iostream007 May 05 '15 at 06:59

There is a very nice discussion in Seminumerical Algorithms, Volume 2 of Knuth's book, The Art Of Computer Programming. – Gerry Myerson May 05 '15 at 07:09

I say, both God and Satan are sensible beings (like us) and hate randomness – Spectre Oct 04 '20 at 02:59

I hope you will have some fun with [this presentation](https://github.com/rtybase/mlaistats/blob/master/kolmogorovcomplexity/KolmogorovComplexity.pdf) I did a while ago for my colleagues explaining randomness. – rtybase Oct 08 '20 at 22:39
6 Answers
To avoid philosophical debates (I assume you are looking for the mathematical concept), one deals with random variables (they can be thought of as numerical characteristics of your experiment), which are functions defined on a probability space $(\Omega,\mathcal{F}, \mathbb{P})$: $$X: \Omega \to \mathbb{R}$$
For this construction to make sense, one requires that you can ask some 'natural' questions about the result of your numerical characteristic, such as: Is $X$ bigger than some $a$? What is the probability of this event?
You would like to consider $\{\omega \in \Omega : X(\omega) > a\} $, or briefly $[X>a]$. As $\mathcal{F}$ is the set of events, you require that $[X>a] \in \mathcal{F}$; then the probability of the event is given by $\mathbb{P}[X>a] \in [0,1]$,
that is, we require it to be a number between $0$ and $1$.
Lastly, one demands that $\mathbb{P}[X\in \mathbb{R}] = 1$.
An interesting point is the law of large numbers (a theorem), which states that as you repeat the experiment, with $X_1, \ldots, X_n, \ldots$ the independent random variables representing the repetitions, the mean value you observe converges (almost surely) to the probability of the event in question:
$$\lim_{n\to \infty} \frac{1}{n}\sum_{j=1}^n 1_{[X_j >a]}(\omega) = \mathbb{P}[X_1>a]\quad \text{a.s.}$$
This is a remarkable result; you may find a fuller discussion in Durrett's book (Probability: Theory and Examples). I started there.
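As an aside, the law of large numbers is easy to watch in action. Here is a minimal Python sketch (the uniform variable, the threshold $a = 0.5$, and the sample sizes are illustrative choices of mine, not anything from the theorem itself): it estimates $\mathbb{P}[X>a]$ by the empirical frequency over $n$ independent draws.

```python
import random

def empirical_frequency(n, a=0.5):
    """Fraction of n i.i.d. uniform(0,1) draws X_j with X_j > a,
    i.e. (1/n) * sum_j 1_[X_j > a]."""
    hits = sum(1 for _ in range(n) if random.random() > a)
    return hits / n

random.seed(0)  # fixed seed so the run is reproducible
# For X uniform on (0,1), P[X > 0.5] = 0.5; the empirical
# frequency drifts toward it as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, empirical_frequency(n))
```

The printed frequencies approach $0.5$ as $n$ increases, exactly the almost-sure convergence the theorem promises.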
Now, whether there is randomness in the world, or whether this is nothing but a useful model, is a deeper question that requires more than the formal construction we made above. You could contrast Einstein's quote "God does not play dice" with James Clerk Maxwell's: "The true logic of this world is in the calculus of probabilities."
A fuller discussion can be found at http://www.feynmanlectures.caltech.edu/I_06.html
Good luck


It's an interesting answer, alas not an answer to the question. – JeanClaude Arbaut Oct 04 '20 at 01:55
I think a serious explanation of a rigorous mathematical definition of randomness needs considerable technical machinery. Hoping for an answer from an expert, here is at least a first step, admittedly for the most part informal.
Fundamental aspects of randomness can be described and explained with AIT, algorithmic information theory, which was created and developed in large part by Gregory Chaitin.
From the preface of the first edition of Information and Randomness: An Algorithmic Perspective by C.S. Calude:
While the classical theory of information is based on Shannon's concept of entropy, AIT adopts as a primary concept the information-theoretic complexity or descriptional complexity of an individual object. ...
The classical definition of randomness as considered in probability theory and used, for instance, in quantum mechanics allows one to speak of a process (such as tossing a coin, or measuring the diagonal polarization of a horizontally-polarized photon) as being random.
It does not allow one to call a particular outcome (or string of outcomes, or sequence of outcomes) random, except in an intuitive, heuristic sense. The information-theoretic complexity of an object (independently introduced in the mid-1960s by R. J. Solomonoff, A. N. Kolmogorov and G. J. Chaitin) is a measure of the difficulty of specifying that object; it focuses the attention on the individual, allowing one to formalize the randomness intuition.
An algorithmically random string is one not producible from a description significantly shorter than itself, when a universal computer is used as the decoding apparatus.
So, AIT allows formalizing randomness of individual objects, which is a remarkable shift from the classical probability approach.
In chapter 26 of Randomness and Complexity G. Chaitin tells us informally about AIT in a nutshell. He speaks about his very first definition of randomness (1962):
 Definition of Randomness R1: A random finite binary string is one that cannot be compressed into a program smaller than itself, that is, one that is not the unique output of a program without any input, a program whose size in bits is smaller than the size in bits of its output.
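True program-size complexity is uncomputable, but an ordinary compressor gives a computable upper bound on description length, which is enough to see the idea behind R1. Here is a crude Python sketch (zlib standing in, very loosely, for the universal decoding apparatus; the string lengths are arbitrary choices of mine):

```python
import os
import zlib

def compressed_size(s: bytes) -> int:
    """Length in bytes of a zlib-compressed copy of s -- a crude,
    computable upper bound on its descriptional complexity."""
    return len(zlib.compress(s, 9))

structured = b"01" * 5000          # highly regular 10,000-byte string
unstructured = os.urandom(10_000)  # 10,000 bytes from the OS entropy pool

# The regular string has a short description ("repeat '01' 5000 times"),
# so it compresses to a tiny fraction of its length; the high-entropy
# string barely shrinks at all.
print(compressed_size(structured))
print(compressed_size(unstructured))
```

In R1's terms: the structured string is producible from a much shorter description, so it is not random; the incompressible one passes this (necessary, not sufficient) test.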
This definition enabled G. Chaitin to connect game theory, information theory and computability theory. He worked on it and developed three models of complexity theory:
Complexity Theory (A): Counting the number of states in a normal Turing machine with a fixed number of tape symbols. (Turing machine state-complexity)
Complexity Theory (B): The same as theory (A), but now there's a fixed upper bound on the size of transfers (jumps, branches) between states. You can only jump nearby. (bounded-transfer Turing machine state-complexity)
Complexity Theory (C): Counting the number of bits in a binary program, a bit string. The program starts with a self-delimiting prefix, indicating which computing machine to simulate, followed by the binary program for that machine. That's how we get what's called a universal machine.
In each case, G. Chaitin showed that most $n$-bit strings have complexity close to the maximum possible and he determined asymptotic formulas for the maximum possible complexity of an $n$-bit string. These maximum or near maximum complexity strings are defined to be random. To show that this is reasonable, he proved, for example, that these strings are normal in Borel's sense. This means that all possible blocks of bits of the same size occur in such strings approximately the same number of times, an equidistribution theory.
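Borel normality is easy to check empirically on a sample of high-entropy bits. A small Python sketch (the block length $k=3$ and the use of OS-supplied bytes as a stand-in for a random string are my own illustrative choices):

```python
import os
from collections import Counter

def block_counts(bits: str, k: int) -> Counter:
    """Count occurrences of every length-k block over all
    (overlapping) positions in a bit string."""
    return Counter(bits[i:i + k] for i in range(len(bits) - k + 1))

# One million bits from the OS entropy pool, as a '0'/'1' string.
bits = "".join(format(b, "08b") for b in os.urandom(125_000))
counts = block_counts(bits, 3)

# All 8 possible 3-bit blocks should appear roughly equally often,
# i.e. about 1/8 of the ~1e6 windows each.
for block in sorted(counts):
    print(block, counts[block])
```

Note this test is necessary but not sufficient: a normal string need not be random (the Champernowne constant 0.123456789101112... is normal yet trivially describable).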
In order to develop this theory it was necessary to refine the definition of Randomness R1:
 Definition of Randomness R2: A random $n$-bit string is one that has maximum or near maximum complexity. In other words, an $n$-bit string is random if its complexity is approximately equal to the maximum complexity of any $n$-bit string.
Nearly a decade and some breakthroughs later, in 1974, G. Chaitin had developed the theory to a point where he was able to formulate and concentrate randomness in his celebrated constant, the halting probability $\Omega$ (Chaitin's constant). It is based on a mature theory around the following:
 Complexity Theory (D): Counting the number of bits in a self-delimiting binary program, a bit string with the property that you can tell where it ends by reading it bit by bit without ever reading a blank end-marker. Now a program starts with a self-delimiting prefix as before, but the program to be simulated that follows the prefix must also be self-delimiting. So the idea is that the whole program must now have the same property the prefix already had in theory (C).
All that work culminated in the definition of the halting probability $\Omega$. He published this model in A theory of program size formally identical to information theory in 1975. There are three key ideas in this paper: self-delimiting programs, a new definition of relative complexity, and the idea of getting program-size results indirectly from probabilistic, measure-theoretic arguments involving the probability $P(x)$ that a program will calculate $x$. He calls this the algorithmic probability of $x$. Summing $P(x)$ over all possible outputs $x$ yields the halting probability $\Omega$: \begin{align*}\Omega=\sum_{x}P(x) \end{align*} Or, equivalently, as a sum over all positive integers $n$: \begin{align*} \Omega^{\prime}=\sum_{n}2^{-H(n)} \end{align*} where $H(n)$ is the size in bits of the smallest program for calculating the positive integer $n$.
A key theorem is \begin{align*} H(x)=-\log_2P(x)+\mathcal{O}(1) \end{align*} which enabled him to translate complexities into probabilities and vice versa. Here the complexity $H(x)$ of $x$ is the size in bits of the smallest program for calculating $x$, and the $\mathcal{O}(1)$ indicates that the difference between the two sides of the equation is bounded.
$\Omega$: A Famous Mathematical Constant
In order to get a glimpse of the kind of randomness which is distilled in $\Omega$ we take a look into section 11 in the first chapter of Mathematical Constants by S.R. Finch, which is devoted to Chaitin's constant. A remarkable connection of exponential diophantine equations with $\Omega$ is addressed:
An exponential diophantine equation involves a polynomial $q(x_1,x_2,\ldots,x_n)$ with integer coefficients as before, with the added freedom that there may be certain positive integers $c$ and $1\leq i<j\leq n$ for which $x_j=c^{x_i}$, and there may be certain $1\leq i\leq j<k\leq n$ for which $x_k=x_{i}^{x_j}$. That is, exponents are allowed to be variables as well.
Starting with the work of Jones and Matiyasevich, Chaitin found an exponential diophantine equation $Q(N,x_1,x_2,\ldots,x_n)=0$ with the following remarkable property. Let $E_N$ denote the set of positive integer solutions $x$ of $Q=0$ for each $N$. Define a real number $\Omega$ by its binary expansion $0.\Omega_1\Omega_2\Omega_3\ldots$ as follows: \begin{align*} \Omega_N= \begin{cases} 1\qquad\qquad \text{if }E_N\text{ is infinite},\\ 0\qquad\qquad \text{if }E_N\text{ is finite}\\ \end{cases} \end{align*}
Then $\Omega$ is not merely uncomputable, but it is random too! ... Chaitin explicitly wrote down his equation $Q=0$, which has $17\,000$ variables and requires $200$ pages for printing. The corresponding constant $\Omega$ is what we call Chaitin's constant. Other choices of the expression $Q$ are possible and thus other random $\Omega$ exist. ... So whereas Turing's fundamental result is that the halting problem is unsolvable, Chaitin's result is that the halting probability is random. It turns out that the first several bits of Chaitin's original $\Omega$ are known and all are ones thus far.
Note that in C.S. Calude's book, random objects of increasing complexity are studied. It starts with random strings and continues with random sequences and random reals, culminating in the derivation of $\Omega$ and related objects.
I'd like to finish this little trip by citing D. Zeilberger, from section 23.4 in Randomness and Complexity,
Greg Chaitin and the Limits of Mathematics:
Standing on the shoulders of Gödel, Turing (and Post, Church, Markov and others), Greg Chaitin gave the most succinct, elegant and witty expression to the limits of our mathematical knowledge. It is his immortal Chaitin's Constant, $\Omega$: \begin{align*} \Omega:=\sum_{p\ \text{halts}}2^{-|p|} \end{align*} where the sum ranges over all self-delimiting programs $p$ run on some Universal Turing Machine. As Greg puts it so eloquently, $\Omega$ is the epitome of mathematical randomness, and its digits are beautiful examples of random mathematical facts, true for no reason. It also has the charming property of being normal to all bases.

Interesting answer, thank you. I wasn't aware of these books by Steven Finch. Long ago (ca. 2000), he had a website about mathematical constants, hosted by MathSoft (at that time, the developers of MathCad). He probably made a book version of the site. Here is the page for Chaitin's Constant, thanks to the Internet Archive: https://web.archive.org/web/20040204194231/http://www.mathsoft.com/mathresources/constants/wellknown/article/0,,1984,00.html – JeanClaude Arbaut Oct 09 '20 at 05:26

@JeanClaudeArbaut: You're welcome! Many thanks for granting the bounty and providing this interesting link. You're right, this is at least one of the sources for his book. I've checked other constants and they can be found partly verbatim in the book, very nice. :) – epi163sqrt Oct 09 '20 at 08:23
The following excerpt is from the first page of the introduction of Probability1 by Albert Shiryaev. He discusses what it means to say something is random.
At the end of Probability2, the second volume, he has a section called "Development of Mathematical Theory of Probability: Historical Review" that discusses "randomness" among other things. I can post it here if desired, as long as that doesn't contravene any rules.
If this answer is not considered helpful, let me know, and I can delete it.
Philosophy is unavoidable in answering this question, even as it applies to mathematics. Randomness is that property of some things (items, events, collections, patterns, and the like) which lacks what Aristotle would call an efficient cause; i.e., there is no mechanism or algorithm that can account for the particular property being observed. Note that an object per se (including a mathematical object such as a number like $\pi$) might be caused, but some property of that object, if that property is truly random, cannot be attributed to a causal mechanism or algorithm.
In the real world, where causality is presumed, true randomness must perforce be an abstraction or ideal that can be approximated but never achieved. In an abstract world, such as mathematics, randomness can be presumed axiomatically, but not proved, because a proof would entail an algorithm that dispositively accounted for or determined that property thought to be randomness.

In the 'real' real world, there are processes that, for all we know, must be truly random; quantum decay, particularly, is the canonical example. There is strong evidence both mathematical and physical that this process is truly random and that there are no 'markers' that one can use to determine specifically when it will happen. – Steven Stadnicki Oct 04 '20 at 03:31
A definition I can give for randomness is this :
It is a condition in an experiment (mostly mathematical) such that one can't determine what comes next.
An example where this holds is in the tossing of an unbiased coin in conditions that ensure its unbiasedness. In such a toss, you can't exactly determine what would come next.
From the above example, it is clear to us that an experiment with random outcomes has equally likely outcomes when each outcome is considered separately (i.e., in the case of the rolling of an unbiased die, if you consider each number as separate and do not subject them to conditions like 'the number is greater than $x$' or anything of that sort, each number has an equal probability of appearing).
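The 'equally likely' claim is easy to check by simulation. A quick Python sketch for an unbiased die (the seed and the number of rolls are arbitrary choices of mine):

```python
import random
from collections import Counter

random.seed(2)  # fixed seed so the run is reproducible
rolls = [random.randint(1, 6) for _ in range(60_000)]
counts = Counter(rolls)

# Each face of a fair die should appear close to 1/6 of the time.
for face in range(1, 7):
    print(face, counts[face] / len(rolls))
```

Each printed proportion lands near $1/6 \approx 0.167$, as expected for an unbiased die.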
Also, randomness has a specialty: it has no cyclic pattern (ah... that's just obvious).
For better answers, refer @ConradoCosta's or @Novice's answer.

*"an experiment with random outcomes have equally likely outcomes"* suggests that tossing a slightly biased coin would not have random outcomes. Many would disagree given the uncertainty. – Henry Oct 07 '20 at 12:07

@Henry , I don't understand what you mean to say. Could you please explain it to me ? Thanks. – Spectre Oct 07 '20 at 12:11

Suppose you have a bag with three balls (two red, one blue) and you draw a ball (equally likely any of the three) and note its colour. Many people would say the colour is random (unpredictable, based on chance, etc.) even though the probability of it being red is $\frac23$ rather than $\frac12$ – Henry Oct 07 '20 at 12:19

Well, I thought only about the unbiased experiments when I wrote this answer. Thanks for the help, @Henry – Spectre Oct 07 '20 at 12:26
Food for thought: Aperiodic infinite "pattern"
Since that is an oxymoron, we can rather say:
The Logical Definition of Randomness:
An aperiodic infinite sequence over a set of values, in which each value appears in exactly the proportion it has in that set of values.
The predicate 'is random' applies to an outcome of an event that is part of that aperiodic infinite sequence, given that the trials for the event were to be continued indefinitely.
In the case of Pi, each new iteration of the calculation to further the digits of Pi is a single trial; you cannot know the next digit until you calculate it. That is the equivalent of flipping an unbiased coin. There is a reason we only know Pi to the 2 quadrillionth digit. But even now, you cannot know the next digit until you run the next iteration, ergo the next "coin toss".
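This 'one trial per digit' picture can be made concrete. Below is a Python sketch using Gibbons's unbounded spigot algorithm, a known method that emits the decimal digits of $\pi$ one at a time using exact integer arithmetic; each `next()` on the generator plays the role of one "coin toss" in the sense above (deterministic, but unknown until computed).

```python
from itertools import islice

def pi_digits():
    """Generate the decimal digits of pi one at a time using Gibbons's
    unbounded spigot algorithm (exact integer arithmetic, no rounding)."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < m * t:
            yield m  # the next digit is now certain
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)

# Each pull from the generator is one "trial": fully determined,
# yet unknown until the computation is actually carried out.
print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

Note this cuts both ways in the debate above: the stream is unpredictable without doing the work, yet perfectly reproducible by anyone who runs the same algorithm, which is exactly why the answers citing AIT would not call it random.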

The digits of $\pi$ are an aperiodic infinite pattern, but calling them random would be outrageous, given that they can all be predicted by merely... calculating $\pi$. Though an interesting start, this concept doesn't hold up to scrutiny. – ViHdzP Oct 04 '20 at 07:14

The digits of pi are random by every scrutiny. You cannot predict the sequence on the basis of the previous numbers. That is random. You can actually use pi for a random number generator if you go arbitrarily into the sequence. If no one knows your seed then it will serve the purpose perfectly. – ketenks Oct 04 '20 at 12:35

However, there is no proof showing that all the numbers appear in the same proportion within pi. You can see here: http://www.eveandersson.com/pi/precalculatedfrequencies? that it does not appear that pi follows the law of large numbers which would fail the second portion of the definition given here meaning that this definition would not say pi is random. But that is still speculative. – ketenks Oct 04 '20 at 12:50

"However, because there is no repeating pattern in the decimal portion of Pi we can assume that all numbers are equally likely to be the next number in the sequence as the length of the decimal portion goes to infinity. This in effect defines the next number in the infinite sequence as a random event. This means that each number is equally likely to be the next number so each has a 1/10 chance. Therefore, the occurence of each digit should be equal once we reach an infinite number of decimal places." eveandersson.com/pi/precalculatedfrequencies? – ketenks Oct 04 '20 at 12:54

So it could be an equal proportion of the 10 digits within pi but still speculative as I see it. – ketenks Oct 04 '20 at 12:56

Lastly, you are assuming randomness is indeterministic but it is deterministic. Just because you can't predict the rest of the sequence on the basis of the previous sequence doesn't mean you can't know what the sequence will be through deeper associations. There are 3 properties of concern when speaking about randomness: aperiodicity, impartiality and determinism. The first two are NOT in debate. The mathematics community knows it must have those two properties; the third is debated simply because of the Copenhagen Interpretation. But this definition does not actually say it can or can't be determined. – ketenks Oct 04 '20 at 13:06

Knowing the seed of any random number generator will give you the ability to predict it. Calculating Pi is simply knowing the seed for that randomness. If we knew the seed of quantum mechanics then we could predict quantum mechanics. – ketenks Oct 04 '20 at 13:15

A random sequence of digits is aperiodic and "impartial" (normal) with probability 1, alright. That doesn't mean every sequence of digits in the sample space is, so it stands to reason a random sequence isn't necessarily either of the two. Furthermore, PRNGs are called pseudorandom for a reason. Though this touches on some more philosophical points, it makes sense that if any sequence can be predicted with absolute certainty, it isn't actually random. Quantum mechanics definitely plays a part in the debate about whether unpredictable randomness actually exists, but is irrelevant here. – ViHdzP Oct 06 '20 at 23:25

So then which is it? You said, "if any sequence can be predicted with absolute certainty, it isn't actually random" then you said, "whether unpredictable randomness actually exists". So if you don't even know whether randomness is deterministic or not then at this point in time, this answer deals with the two properties: aperiodic and impartial and leaves out determinism altogether. So you're wrong. – ketenks Oct 07 '20 at 00:24

You're conflating mathematical and physical randomness. Does physical nondeterministic randomness exists? Who knows! But we can model it mathematically, and we don't model it as any old sequence that seems "random enough". Mathematical randomness isn't deterministic. – ViHdzP Oct 07 '20 at 01:07

I literally gave the definition of randomness logically. This is both mathematical and physically applicable. That's the bridge between math and physics. Randomness, mathematically speaking is deterministic. Go ahead, post your definition that shows otherwise. – ketenks Oct 07 '20 at 01:20

Again, you are sticking by your own notion that randomness should not be deterministic while also saying that we currently don't know whether it is or not...This definition simply allows for it to be deterministic and in fact there is no reason it can't be determinable. Many physicists have said that if all the factors were known in a coin toss then it could be known which side would turn up each time. Everyone actually knows it is deterministic in nature but only that we currently can't determine such things with our knowledge. This however, cannot be attributed as a property of randomness. – ketenks Oct 07 '20 at 16:49