Jordan Ellenberg on how many states Nate Silver is going to get wrong, according to Nate Silver. (This refers to the elections of US Senators taking place tomorrow.) For each state Silver gives a probability that each candidate wins; the probability that Silver calls a state wrong is just his own predicted probability that the underdog wins. Summing these over the states gives an expected value of 2.5 wrong calls. Silver has been saying since the 2012 election that he got lucky in calling all fifty states correctly. In some sense it would have been more impressive if he’d missed a couple, which would have shown his predictions were correctly calibrated. (I remember trying to explain this to colleagues at my job at the time, where I’d been for a bit over a month; I think I did so successfully, but it’s a subtle point.)
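The arithmetic here is just linearity of expectation: the expected number of wrong calls is the sum, over states, of the probability assigned to the underdog. A minimal sketch, using made-up favorite-win probabilities rather than Silver's actual numbers:

```python
# Expected number of wrong calls via linearity of expectation.
# Each entry is P(favorite wins) for one state; these probabilities
# are invented for illustration, not Silver's actual forecasts.
favorite_prob = [0.95, 0.90, 0.85, 0.75, 0.60, 0.55]

# A call is "wrong" exactly when the underdog wins, which happens
# with probability 1 - P(favorite wins).
expected_wrong = sum(1 - p for p in favorite_prob)
print(expected_wrong)
```

With the real 2014 Senate probabilities plugged in, the same sum is what gives the 2.5 figure above.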
Silver’s famous 50-for-50 2012 presidential predictions are still available; according to his own predictions, he would have expected to get about 1.8 states wrong, on average. It’s hard to say just how good going 50-for-50 is, though, because the errors are correlated.
However, it almost never makes sense to look at binary outcomes when you can look at the continuous outcomes they collapse. (For example, with sports data, use the difference in points scored instead of win–loss records.) Andrew Mooney at the Boston Globe did exactly this, and found that Silver's actual margins fell within his stated one-standard-deviation margin of error 68% of the time, and within two standard deviations 96% of the time, almost exactly the 68% and 95% a well-calibrated normal model would predict.
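The check Mooney ran can be sketched as follows. The data below are invented for illustration; the real check would use Silver's predicted vote margins, his stated standard errors, and the actual margins:

```python
# Calibration check on continuous outcomes: for each race, compare the
# actual margin to the predicted margin, and ask whether the miss is
# within k stated standard deviations. All numbers here are made up.
predictions = [
    # (predicted margin, stated standard error, actual margin)
    (5.0, 3.0, 4.1),
    (-2.0, 3.5, 1.0),
    (10.0, 4.0, 3.5),
    (1.0, 3.0, 0.2),
    (-7.0, 3.0, -6.5),
]

def fraction_within_k_sd(preds, k):
    """Fraction of races whose actual margin is within k SDs of the prediction."""
    hits = sum(1 for mu, sd, actual in preds if abs(actual - mu) <= k * sd)
    return hits / len(preds)

print(fraction_within_k_sd(predictions, 1))
print(fraction_within_k_sd(predictions, 2))
```

If the stated standard errors are honest and the errors are roughly normal, these fractions should come out near 0.68 and 0.95 over many races.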