What Is Randomness? A 3,000-Year Journey
Some twenty-five centuries before our screens, on a street in Athens, a young woman throws four small sheep bones onto the ground. They are astragali, ancestors of the die: ankle bones with six irregular faces, only four of which can bring the piece to rest. She waits for a sign. The outcome will decide whether she joins her family at the religious festival, stays home, or speaks to the man she glimpsed the day before. The gesture would seem trivial today; in her time, it was a serious act. Randomness, then, was not the absence of order: it was a channel to the invisible. To understand what we call randomness today, we must trace this thread back, because it has changed meaning three times in three thousand years.
The Origins: Randomness Before Randomness
Before the word existed, there was the gesture. Archaeologists have found dice carved from bone and stone in Iran and Mesopotamia, dating back more than five thousand years. Astragali were ubiquitous in the ancient world, serving both as children's toys and as oracular tools. In Rome, Cicero recounts how generals consulted lots before battle and how senators settled certain matters by the throw of a die.
The Lot as Divine Speech
In classical Greece, and especially in the Athenian democracy of the 5th–4th centuries BCE, drawing lots was a sacred mode of selection. Magistrates were designated by a ritual apparatus, the kleroterion: a stone slab fitted with rows of slots for citizens' bronze identity tokens and a tube down which white and black balls were released to select among them. For the Athenians, this was no neutral procedure: the draw expressed the will of the gods, who chose better than humans because they had no interest to defend. Here, randomness was the opposite of arbitrariness; it was the voice of a higher order. This sacred conception appears in similar forms in Republican Rome, in the Bible (the lot that designates the scapegoat in Leviticus), and in most ancient cultures.
When Randomness Became Suspect
With the Christianization of Europe, the status of randomness was reversed. Medieval theologians struggled to reconcile the idea of an omniscient God — who knew every blink of every eye — with the notion of truly unforeseen events. If God knows everything, then nothing happens by chance: the word became suspect. Saint Augustine wrote that what people call fortune is merely a name given to their ignorance. Dice games became a recurring target of sermons, and the very idea that an event could escape a divine plan grew theologically uncomfortable. For nearly a thousand years, randomness remained a practical category without any real theory. It would take the Renaissance, and a gambler’s problem, to tip the scales again.
1654: The Letter That Gave Birth to Probability
It is through a betting problem that everything changes. In 1654, in Paris, a nobleman with a passion for gambling, the Chevalier de Méré, presents his friend Blaise Pascal with a puzzle that has been nagging at him: if two players interrupt a dice game before it ends, how should the stakes be fairly divided based on the score so far? The question seems trivial. It is about to shift Western thought.
Pascal corresponds with Pierre de Fermat, a magistrate in Toulouse and a brilliant mathematician. Their exchange of letters, spread over the summer of 1654, lays the foundations for what was not yet called probability theory. For the first time, randomness is treated as a mathematical object: it is measured, calculated, and used to derive rules of fairness. The Chevalier de Méré’s “problem of points” becomes the birth certificate of a discipline.
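To see the logic of their solution in modern terms, here is a small sketch, ours rather than theirs: a player's fair share of the stakes is simply the probability of winning the match from the current score, counting all equally likely ways the remaining rounds could unfold. The target score and the interrupted position below are illustrative.

```typescript
// Problem of points: each round is a fair 50/50 and the match goes to whoever
// reaches the target score first. A player's fair share of the stakes is the
// probability of winning the match from the current position.
function shareOfPlayerA(aNeeds: number, bNeeds: number): number {
  if (aNeeds === 0) return 1; // A has already won the match
  if (bNeeds === 0) return 0; // B has already won the match
  // The next round goes to A or to B with equal probability.
  return 0.5 * shareOfPlayerA(aNeeds - 1, bNeeds)
       + 0.5 * shareOfPlayerA(aNeeds, bNeeds - 1);
}

// Example: first to 3 points, interrupted while A leads 2 to 1.
// A needs 1 more point, B needs 2: A's fair share is 3/4 of the stakes.
console.log(shareOfPlayerA(1, 2)); // 0.75
```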
The historian of science Ian Hacking, in The Emergence of Probability (1975), highlights just how radical this rupture was: before Pascal and Fermat, the very notion of probability in the modern sense — a number between 0 and 1 attached to an event — did not exist in learned vocabulary. Randomness ceased to be a metaphysical mystery and became, for the first time, a field of calculation. Probability theory would go on to invade demography (with John Graunt and London’s mortality tables), insurance, physics, and ultimately almost everything that can be quantified today.
Laplace’s Demon: Randomness as Ignorance
One hundred and sixty years later, the French mathematician Pierre-Simon Laplace pushed the idea to its extreme. In his Philosophical Essay on Probabilities (1814), he proposed a famous thought experiment: imagine an intelligence — later called Laplace’s Demon — that knew at a given moment the position and velocity of every particle in the universe. For this intelligence, he wrote, “nothing would be uncertain, and the future, like the past, would be present to its eyes.”
The conclusion is vertiginous: if Laplace is right, randomness does not exist. It is merely a name given to our ignorance. When you roll a die on Virtual Dice, the outcome is in principle entirely determined by the exact instant of your click, the state of the processor, and the contents of memory; it would just take knowing enough parameters to predict it. This view is now called epistemic randomness: the result is fixed, but it escapes our knowledge. Here, randomness is a gap in our understanding, not a property of the world.
Throughout the 19th century, this idea set the standard. Randomness became a computational convenience for what we cannot, in practice, predict — the weather in two weeks, the outcome of a rolling die, the disease that strikes one person rather than another. No one at the time imagined that physics would soon encounter a randomness of an entirely different nature.
Heisenberg and the Quantum Revelation
In 1927, the young German physicist Werner Heisenberg formulated a principle that shook the edifice: there are physical quantities, such as the position and velocity of a particle, that it is fundamentally impossible to know simultaneously with arbitrary precision. Not because our instruments are too crude, but because nature, at that scale, does not itself possess that information. The uncertainty principle does not describe our ignorance: it describes a feature of reality.
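In its usual symbolic form, the principle is an inequality: the product of the uncertainties in position (Δx) and momentum (Δp) can never fall below a fixed bound, Δx · Δp ≥ ħ/2, where ħ is the reduced Planck constant. The bound does not depend on the instrument; it is built into the theory itself.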
With quantum mechanics, randomness changed status. A radium atom decays — or not. No hidden cause, no additional parameter can predict the precise moment. The probability of decay over an hour is calculable to a thousandth; the exact instant of the next decay, however, does not exist before it is observed. This is what we now call ontological randomness: chance that is no longer an effect of our limited knowledge, but an intrinsic property of the physical world.
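To make "calculable" concrete: for a single unstable nucleus, the probability of decaying within a given time window follows an exponential law fixed by its half-life. Here is a minimal sketch, with a purely illustrative half-life (radium-226's own is roughly 1,600 years):

```typescript
// Probability that a single nucleus decays within `hours`, given its
// half-life in hours: P = 1 - 2^(-t / halfLife).
// The half-life used below is purely illustrative.
function decayProbability(hours: number, halfLifeHours: number): number {
  return 1 - Math.pow(2, -hours / halfLifeHours);
}

// A nucleus with a one-year half-life (about 8,766 hours): the chance that it
// decays in the next hour is tiny, yet perfectly well defined.
console.log(decayProbability(1, 8766)); // ≈ 0.000079
```

The probability is exact; the moment is not. That is the whole difference between the two kinds of randomness.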
Albert Einstein, who had himself contributed to founding quantum theory, never accepted this conclusion. In a famous letter to his colleague Max Born in December 1926, he wrote: “The theory yields much, but it hardly brings us close to the Old One’s secret. At any rate, I am convinced He does not play dice.” The phrase, often shortened to “God does not play dice,” has endured. But experiment has ruled against Einstein: a century of increasingly precise measurements has confirmed that quantum randomness is, to our best knowledge, irreducible.
This distinction between epistemic randomness (ignorance) and ontological randomness (genuine indeterminacy) remains one of the deepest questions in the philosophy of science. For a die rolling on a table, randomness is probably epistemic — predictable in principle. For a decaying particle, it is probably ontological — irreducibly unpredictable. And in both cases, what counts for calculation is the same: probability theory works either way.
Today: Randomness in Our Screens
When you click Coin Flip, your browser runs a function that produces a number. That number is neither epistemic in Laplace’s sense nor ontological in Heisenberg’s: it is pseudo-random. A deterministic algorithm produces a sequence of digits so irregular that, in practice, it cannot be distinguished from true randomness. This is a third category: a manufactured randomness that imitates the other two well enough to replace them for the vast majority of everyday uses.
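To give a feel for what "deterministic yet irregular-looking" means, here is a minimal sketch using mulberry32, a tiny generator often used in examples and chosen here only for its brevity; it is not the generator behind our tools, and the seed is arbitrary:

```typescript
// mulberry32: a tiny deterministic pseudo-random generator.
// Given the same seed, it always produces the same sequence.
function mulberry32(seed: number): () => number {
  let state = seed >>> 0;
  return () => {
    state = (state + 0x6d2b79f5) >>> 0;
    let t = state;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // a number in [0, 1)
  };
}

const rollA = mulberry32(42);
const rollB = mulberry32(42);
// The two lines print the same three "random" numbers: the sequence only
// looks unpredictable; it is entirely fixed by the seed.
console.log(rollA(), rollA(), rollA());
console.log(rollB(), rollB(), rollB());
```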
The technical details matter. A poor generator can produce detectable biases, cycles that are too short, or hidden correlations. A good generator (and current web standards impose strict quality requirements) produces sequences that no practical statistical test can distinguish from a genuine coin toss. This is precisely the mechanism we describe in detail in our article on how our draws work: the code, the functions used, and the fairness guarantees.
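As a taste of what such an analysis involves, here is a deliberately crude first check, assuming a browser environment with the Web Crypto API: flip a virtual coin many times and see whether heads stay close to one half. Real test batteries go much further, probing runs, cycles, and correlations rather than raw frequencies.

```typescript
// Crude frequency check: draw random bytes from the browser's
// cryptographically secure generator and keep one bit per byte as a coin flip.
function countHeads(flips: number): number {
  const bytes = new Uint8Array(flips);
  crypto.getRandomValues(bytes); // Web Crypto API (stays under its 65,536-byte limit per call)
  let heads = 0;
  for (const b of bytes) heads += b & 1; // lowest bit of each byte: 0 = tails, 1 = heads
  return heads;
}

const flips = 50_000;
const heads = countHeads(flips);
// For a fair coin, the proportion of heads should sit near 0.5, with a
// standard deviation of about 0.5 / sqrt(flips) ≈ 0.0022 at this sample size.
console.log(`${heads} heads out of ${flips} flips (${((heads / flips) * 100).toFixed(2)} %)`);
```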
If you want a metaphor: pseudo-random chance is to pure chance what a photograph is to a landscape. It is not the thing itself, but it is faithful enough that we use it without a second thought for decisions that require nothing more.
Three Thousand Years, One Thread
From astragalus to algorithm, randomness has passed through three statuses. For the ancients, it was the discreet voice of the gods — a channel of order, more than disorder. From Pascal onward, it became a calculable object: instead of invoking it, we began to measure it. With Heisenberg, it was written for the first time into the very fabric of reality — no longer as a limit of our knowledge, but as a property of the world. Today, in our browsers, it is a sophisticated imitation, designed to be indistinguishable.
Three thousand years, one thread: at every epoch, humanity runs up against the same question — can we predict what has not yet happened? — and answers with the tools at hand. Physics has not killed the metaphysics of randomness; it has displaced it. The next stage of this journey, perhaps the most unsettling, is no longer historical but cognitive: why does our brain, after three millennia of learning, keep making the same mistakes when faced with a simple string of heads and tails? That is precisely what we explore in our article on the gambler’s fallacy. And if you want to see what these illusions cost in concrete terms, our article on the real odds of winning the lottery translates the same mechanisms into numbers and combinations.