Nick Berry asks: you roll a pair of six-sided dice and sum them to get a total. Your good friend does the same. What are the chances that you both get the same total?
My instinct is to say that it should be somewhere between 1/6 and 1/11 – you’d get 1/6 if the question were about one die and 1/11 if it were about picking uniformly a number from 2, 3, …, 12.
Let’s generalize. What if you roll two n-sided dice? In that case, the probabilities of getting $2, 3, \ldots, n+1$ are $1/n^2, 2/n^2, \ldots, n/n^2$, and the probabilities of getting $n+2, n+3, \ldots, 2n$ are $(n-1)/n^2, (n-2)/n^2, \ldots, 1/n^2$ – generalizing the well-known triangular distribution. The probability that you and your friend both roll the same total, then, is

$$2 \left( \left( \frac{1}{n^2} \right)^2 + \left( \frac{2}{n^2} \right)^2 + \cdots + \left( \frac{n-1}{n^2} \right)^2 \right) + \left( \frac{n}{n^2} \right)^2$$
or, putting everything over a common denominator,

$$\frac{2 \left( 1^2 + 2^2 + \cdots + (n-1)^2 \right) + n^2}{n^4}.$$
At this point we can already see that there are order-of-$n$ terms in the numerator, each of order $n^2$, so we should get a result of order $1/n$. We can persevere and do the algebra and we get

$$\frac{2 \cdot \frac{(n-1)n(2n-1)}{6} + n^2}{n^4} = \frac{2n^2+1}{3n^3},$$
or just a hair over $2/(3n)$.
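As a sanity check, the tie probability can be computed exactly by brute-force enumeration of both dice; a minimal sketch (the function name `p_same_total` is my own, not anything standard):

```python
from fractions import Fraction

def p_same_total(n):
    """Exact probability that two players, each rolling two n-sided dice,
    get the same total: build the sum's distribution, then sum squared probs."""
    probs = {}
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            probs[a + b] = probs.get(a + b, 0) + Fraction(1, n * n)
    return sum(p * p for p in probs.values())

# agrees with (2n^2 + 1) / (3n^3) for every n
for n in range(2, 30):
    assert p_same_total(n) == Fraction(2 * n * n + 1, 3 * n**3)
```

For n = 6 this gives 73/648, indeed just over 2/18.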
Berry also asks what happens if you have more dice. For $k$ six-sided dice he gives the results of a Monte Carlo simulation, which come out to roughly $1/(6\sqrt{k})$. He writes that “the percentage chance of a match score falls off a lot slower than I would have predicted.”
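A simulation along Berry’s lines can be sketched in a few lines; the function name and trial count here are my choices, not his:

```python
import random

def tie_frequency(k, sides=6, trials=100_000):
    """Monte Carlo estimate of the chance that two players,
    each rolling k dice, get the same total."""
    roll = lambda: sum(random.randint(1, sides) for _ in range(k))
    return sum(roll() == roll() for _ in range(trials)) / trials
```

For k = 4, say, the estimate should hover near $1/(6\sqrt{4}) \approx 0.083$.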
Why does it fall off so slowly? The mean result from a single die is 7/2, with variance 35/12. So the mean result from $k$ dice is $7k/2$, with variance $35k/12$ – so only values within a few multiples of $\sqrt{35k/12}$ of the mean occur with any appreciable frequency. There are on the order of a few square roots of $k$ of these, so the answer should be $c/\sqrt{k}$ for some constant $c$. Like Berry, I am too lazy to find the constant $c$.
Edited to add: Matthew Aldridge gives in the comments an argument that for six-sided dice, with $k$ large, the probability of both getting the same total is approximately $1/\sqrt{2\pi \cdot 35k/6}$, by approximating both sums as normal and applying the central limit theorem. This is about $1/(6.05\sqrt{k})$. If we bear in mind that the variance of a single roll of an $n$-sided die is $(n^2-1)/12$, then we get $\sqrt{3/(\pi k (n^2-1))}$ for rolling $k$ $n$-sided dice.
This approximation is surprisingly good even for $k = 1$, where it has no business being good. It gives $\sqrt{3/(\pi(n^2-1))}$. This reduces to $1/n$, the correct answer, if you let $\pi = 3$ and ignore the $-1$.
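One can check how good the approximation is by computing the exact tie probability, convolving the single-die distribution $k$ times; the function names below are mine:

```python
from fractions import Fraction
from math import pi, sqrt

def tie_probability(k, n=6):
    """Exact tie probability for two players each rolling k n-sided dice."""
    dist = {0: Fraction(1)}              # distribution of the running total
    for _ in range(k):                   # convolve with one die, k times
        new = {}
        for total, p in dist.items():
            for face in range(1, n + 1):
                new[total + face] = new.get(total + face, 0) + p / n
        dist = new
    return sum(p * p for p in dist.values())

def normal_approx(k, n=6):
    """The approximation sqrt(3 / (pi * k * (n^2 - 1)))."""
    return sqrt(3 / (pi * k * (n * n - 1)))

for k in (1, 2, 5, 10):
    print(k, float(tie_probability(k)), normal_approx(k))
```

Already at $k = 2$ the exact value 73/648 and the approximation agree to within about four percent.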
3 thoughts on “Two dice problems”
Let Z = X – Y be the difference between my total dice roll and my friend’s total dice roll. Then Z has mean 0 and variance 2*35k/12 = 35k/6. We want the probability that Z = 0.
Since Z is approximately normally distributed, an appropriate continuity correction is to look for P(-1/2 < W < 1/2), where W is normal with mean 0 and variance 35k/6. The probability that a normal random variable with mean 0 and large variance sigma^2 lies between -1/2 and 1/2 is roughly 1/sqrt(2*pi*sigma^2): this is the value of the PDF at 0, the PDF is roughly flat around 0, and we're integrating over an interval of length 1.
Thus for the dice problem, we get 1/sqrt(2*pi*35k/6) = 1/sqrt(pi*35k/3). Here, sqrt(pi*35/3) = 6.054, which is pretty close to Berry's simulations suggesting 6.
For dice with n = 2 sides, we can work out the exact answer. You can think of it as each of us flipping k coins: what’s the probability that my friend and I get the same number of heads?
But the answer’s still the same if my friend swaps all their results; that is, you look for the probability that the number of heads I get equals the number of tails my friend gets. This is equivalent to getting exactly k heads and k tails out of the 2k coin tosses we perform between us. The number of ways of tossing 2k coins is 2^2k, and the number of those with k heads and k tails is (2k | k), which is how I’m writing the binomial coefficient “2k choose k”.
So the exact answer for n = 2 is (2k | k) / 2^2k.
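The swap argument can be checked against the direct sum over the number of heads (it is Vandermonde's identity in disguise); a quick sketch, with a function name of my own choosing:

```python
from fractions import Fraction
from math import comb

def coin_tie_probability(k):
    """Tie probability for n = 2, summed directly over the number of heads:
    both players get h heads with probability (C(k,h)/2^k)^2."""
    return sum(Fraction(comb(k, h), 2**k) ** 2 for h in range(k + 1))

# agrees with the closed form (2k | k) / 2^(2k)
for k in range(1, 15):
    assert coin_tie_probability(k) == Fraction(comb(2 * k, k), 2**(2 * k))
```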
Going to asymptotics, Stirling’s formula says this is asymptotically 1/sqrt(pi*k). This agrees with the answer in the “Edited to add” part of the post, upon setting n = 2.
I think I’ve seen this solution with the swaps before. My first instinct was that there must be some sort of symmetry-based solution to this problem, but that only works in the n = 2 case.