# How long does it take to observe that 10 is more common than 9 with three dice?

A classical problem in probability, which I learned about from Freedman, Pisani, and Purves’ book *Statistics* (pp. 238-240 in the fourth edition), is that of Galileo’s dice. Galileo was asked why, when rolling three fair dice, a sum of ten occurs more often than a sum of nine; he answered this question in *Concerning an Investigation on Dice* (available from the University of York’s history of statistics page). It had previously been argued that since

10 = 6 + 3 + 1 = 6 + 2 + 2 = 5 + 4 + 1 = 5 + 3 + 2 = 4 + 4 + 2 = 4 + 3 + 3

and

9 = 6 + 2 + 1 = 5 + 3 + 1 = 5 + 2 + 2 = 4 + 4 + 1 = 4 + 3 + 2 = 3 + 3 + 3

there are six ways to get a sum of 9 and six ways to get a sum of 10, the two sums should be equally likely. But Galileo writes:

> Nevertheless, although 9 and 12 can be made up in as many ways as 10 and 11, and therefore they should be considered as being of equal utility to these, yet it is known that long observation has made dice-players consider 10 and 11 to be more advantageous than 9 and 12.

Galileo observes that, for example, 6 + 3 + 1 (or any other sum with three different summands) corresponds to six different outcomes of the dice. If we distinguish the dice by calling them, say, red, white, and green (Galileo was Italian), then a red 6, white 3, green 1 is different from a red 1, white 6, green 3, and so on; there are six such orders. There are similarly three different ways to roll 6 + 2 + 2 (or any sum with two of one summand and one of another) and one way to roll 3 + 3 + 3. So the number of ways to roll a 10 is 6 + 3 + 6 + 6 + 3 + 3 = 27 and the number of ways to roll a 9 is 6 + 6 + 3 + 3 + 6 + 1 = 25; so 10 is slightly more likely, as had been observed.
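Galileo’s counting argument is easy to check by brute force; here is a quick sketch in Python that enumerates all $6^3 = 216$ ordered outcomes of three distinguishable dice:

```python
from itertools import product

# Count ordered outcomes (red, white, green) summing to 9 or 10
counts = {9: 0, 10: 0}
for roll in product(range(1, 7), repeat=3):
    s = sum(roll)
    if s in counts:
        counts[s] += 1

print(counts[10], counts[9])  # 27 ways to roll a 10, 25 ways to roll a 9
```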

In a review of Joseph Mazur’s What’s Luck Got to Do with It? I mention that to tell that 10 occurs more often than 9, empirically, requires thousands of rolls. I haven’t seen the calculation written out explicitly and I was teaching this problem recently; it seems like it would be a good idea to write it out.

So say that I bet on the sum of three dice coming up 10, and you bet on it coming up 9. If the sum is 10, you’ll pay me one dollar; if the sum is 9, I’ll pay you one dollar; in any other case there is no payment. Then define a random variable $X$, which is my net winnings on a single roll; we have $P(X=1) = 27/216$ (the probability of rolling 10), $P(X=-1) = 25/216$ (the probability of rolling 9), and $P(X=0) = 164/216$ (the probability of rolling something else).
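As a sanity check on this setup, one can simulate the bet directly; a minimal sketch (the seed and trial count here are arbitrary choices, not anything from the original problem):

```python
import random

random.seed(0)

def net_winnings():
    """One round of the bet: +1 if the three dice sum to 10, -1 if they sum to 9, else 0."""
    s = sum(random.randint(1, 6) for _ in range(3))
    return 1 if s == 10 else (-1 if s == 9 else 0)

trials = 100_000
avg = sum(net_winnings() for _ in range(trials)) / trials
print(avg)  # should land near 2/216, i.e. about 0.0093
```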

Then it’s not hard to compute that the mean of $X$ is $E(X) = 27/216 - 25/216 = 2/216 \approx 0.0093$, and the variance is $E(X^2)-E(X)^2 = 52/216 - (2/216)^2 \approx 0.2406$. So after $n$ rolls, for large $n$, the distribution of my net winnings is approximately normal with mean $0.0093n$ and variance $0.2406n$, by the central limit theorem.
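These two moments can be computed in exact arithmetic; a sketch using Python’s `fractions` module:

```python
from fractions import Fraction

p_win = Fraction(27, 216)   # P(X = 1): the sum is 10
p_lose = Fraction(25, 216)  # P(X = -1): the sum is 9

mean = p_win - p_lose                 # E(X) = 2/216
var = (p_win + p_lose) - mean ** 2    # E(X^2) - E(X)^2, since X^2 is 1 or 0

print(float(mean))  # ≈ 0.0093
print(float(var))   # ≈ 0.2406
```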

How large does $n$ have to be for, say, the chance of my net winnings being positive to be 95%? The chance that my net winnings are negative is about

$\Phi \left( \frac{-0.0093n}{\sqrt{0.2406n}} \right) = \Phi \left( -0.0189 \sqrt{n} \right)$

and for this to be 0.05 we need $0.0189\sqrt{n} = \Phi^{-1}(1-0.05) = 1.645$. So $n = (1.645/0.0189)^2 \approx 7600$. From this I usually conclude in class (somewhat facetiously) that the players of these dice games had too much time on their hands. (In practice I suppose that the observation that 10 comes up more often than 9 came from the tabulation of many players playing this game, not a single player playing many thousands of times.)
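The arithmetic in this last step can be reproduced directly; a sketch using the normal quantile from Python’s `statistics.NormalDist`, working from the exact mean and variance rather than the rounded values (which lands close to the $\approx 7600$ above):

```python
from math import sqrt
from statistics import NormalDist  # Python 3.8+

mu = 2 / 216                       # E(X), the edge per roll
var = 52 / 216 - (2 / 216) ** 2    # Var(X)

z = NormalDist().inv_cdf(0.95)     # ≈ 1.645, the 95th percentile of the standard normal
# Solve mu * n / sqrt(var * n) = z for n, i.e. n = (z * sqrt(var) / mu)^2
n = (z * sqrt(var) / mu) ** 2
print(round(n))  # on the order of 7600 rolls
```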