While looking for something for work (!), I came across an interesting set of PowerPoint slides, "How to predict election results", from Jan-Michael Frahm at UNC. It nicely summarizes the methodology of the 2012 iteration of Nate Silver's election forecast at FiveThirtyEight. The 2016 iteration is explained here and is largely similar.
If you like these sorts of models, there are also Sam Wang (Princeton Election Consortium), Josh Katz (New York Times / The Upshot), the Huffington Post, and Daily Kos. PredictWise is David Rothschild's site, which aggregates results from the prediction markets. Slate has been running a somewhat tongue-in-cheek feature, Trump vs. Clinton: Who's Winning Today's Forecasts of Who Will Win the Election?, which averages the other forecasts. As they tell us, "averaging is a sophisticated econometric technique that combines addition and division".
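Slate's "sophisticated econometric technique" really is just that; a minimal sketch, with entirely made-up win probabilities standing in for the real forecasts:

```python
# Made-up probabilities for illustration only -- not the sites' actual numbers.
forecasts = {
    "FiveThirtyEight": 0.70,
    "Princeton Election Consortium": 0.85,
    "NYT Upshot": 0.75,
    "PredictWise": 0.80,
}

# "combines addition and division"
average = sum(forecasts.values()) / len(forecasts)
print(f"average win probability: {average:.3f}")  # 0.775 for these numbers
```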
The slides come from a course, COMP 066: Random Thoughts at UNC, a first-year seminar in the CS department that is, as far as I can tell, intended for non-majors. The department's web page describes it as follows: it "explores the notions of 'randomness' and its antithesis, 'structure.' What does 'random' mean? How do computers generate 'random' numbers and just how random are they? Is the addition of random noise to a signal always bad? (The answer is no!)" I would have liked to teach a class like this back when I taught (or, for that matter, to take such a class back when I took classes).
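On the question of how computers generate "random" numbers: one classic answer (not necessarily what the course covers) is a linear congruential generator. The constants below are the ones traditionally associated with glibc's rand(); the point is that the whole sequence is determined by the seed, which is exactly why the scare quotes around "random" are earned:

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Yield an endless stream of pseudorandom integers in [0, m)."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen = lcg(seed=42)
print([next(gen) for _ in range(3)])
# Re-seeding with 42 reproduces the exact same sequence -- "random"
# in appearance only.
```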