An approximation of quarterly growth from monthly growth

An approximation from Justin Wolfers: “Quarterly growth ≈ 0.33 * this month’s growth + 0.67 * (t-1) + 1.0 * (t-2) + 0.67 * (t-3) + 0.33 * (t-4)”.

I stared at this one for a while. But it’s actually pretty easy to prove, assuming that we’re expressing growth as a difference and not a quotient. Say it’s month t now. The quarterly growth of some quantity f which varies with time, for the most recent quarter over the quarter before that, is f(t) + f(t-1) + f(t-2) - f(t-3) - f(t-4) - f(t-5).

The weighted sum of monthly growths there is

\[
\frac{1}{3}\bigl(f(t) - f(t-1)\bigr) + \frac{2}{3}\bigl(f(t-1) - f(t-2)\bigr) + \frac{3}{3}\bigl(f(t-2) - f(t-3)\bigr) + \frac{2}{3}\bigl(f(t-3) - f(t-4)\bigr) + \frac{1}{3}\bigl(f(t-4) - f(t-5)\bigr)
\]

and most terms here cancel, leaving

\[
\frac{1}{3}\bigl(f(t) + f(t-1) + f(t-2) - f(t-3) - f(t-4) - f(t-5)\bigr).
\]

This is one-third the quarterly growth in Wolfers’ tweet – but if both figures are annualized, as is conventional in economic data, that takes care of the factor of 3: annualizing multiplies a monthly growth by 12 and a quarterly growth by 4.
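
A quick numerical check of the cancellation, with made-up data (a sketch of mine, not from the original post):

```python
import random

# Verify: the 1/3, 2/3, 1, 2/3, 1/3 weighted sum of monthly growths equals
# one-third of the quarter-over-quarter growth, with growth as differences.
random.seed(1)
f = [random.uniform(90, 110) for _ in range(6)]  # f(t-5), ..., f(t)

monthly = [f[i + 1] - f[i] for i in range(5)]    # five monthly growths, oldest first
weights = [1/3, 2/3, 1, 2/3, 1/3]                # oldest, ..., newest
weighted = sum(w * g for w, g in zip(weights, monthly))

quarterly = (f[5] + f[4] + f[3]) - (f[2] + f[1] + f[0])
print(abs(weighted - quarterly / 3) < 1e-9)      # True
```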

Math bracketology

Jordan Ellenberg wrote a piece for Sunday’s New York Times on The Math of March Madness. It’s centered on a paper by Michael J. Lopez and Gregory J. Matthews which claims that a model combining point spreads and “possession-based team efficiency metrics” (i.e. average numbers of points scored or given up per possession) did quite well in Kaggle’s 2014 March Madness competition. (For legal reasons, Kaggle had to call it “March Machine Learning Mania”.)

Sadly, this article doesn’t include Jordan’s own contribution to bracketology, the “Math Bracket”, in which the school with the better math department is picked to win each game: here are the 2015, 2014, 2013, and 2010 (the original) brackets. If there are 2011 or 2012 math brackets, they don’t appear on his blog. In 2010 the math bracket picked Berkeley (excuse me, “Cal”, since we’re talking athletics) to win; in 2013 through 2015, Harvard.

I don’t know how well a totally random bracket (i.e. picked by coin flips) would do, but the math bracket at least starts out better than that. It usually picks the higher-seeded team in the first round – 19 of 32 games in 2015, 22 of 32 in 2014, 23 of 32 in 2013, and 23 of 32 in 2010 – because bigger schools (up to a point) tend to have both better math departments and better basketball teams. (Quality of a department is judged by how many of its people an anonymous group of number theorists and geometric group theorists can name, so it’s correlated with department size, which in turn is correlated with undergraduate enrollment, and so on.)

The math brackets seem to break down in the later rounds, though. The 2010 bracket has a final four featuring teams seeded 2, 4, 8, and 11; 2013 is 2, 6, 12, and 14; 2014 is 2, 2, 10, and 12; 2015 is 7, 11, 11, and 13. The average final four team in the math brackets is therefore an 8 seed (the average of those numbers is 127/16 = 7.9375); the average team in the tournament is of course an 8.5 seed. The very best basketball teams just aren’t at schools with the best math departments.
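
The arithmetic is easy to check from the seeds listed above:

```python
# Final-four seeds from the math brackets, as listed above.
seeds = {
    2010: [2, 4, 8, 11],
    2013: [2, 6, 12, 14],
    2014: [2, 2, 10, 12],
    2015: [7, 11, 11, 13],
}
all_seeds = [s for year in seeds.values() for s in year]
print(sum(all_seeds) / len(all_seeds))  # 7.9375, vs. 8.5 for the field at large
```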

Worst pi approximation ever

The average sinuosity of rivers – their length as the river flows, divided by the distance from source to mouth as the crow flies – is supposed to be π. But it’s actually about 1.94, and varies heavily from river to river. Via James Grime, writing for the Grauniad.
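
Sinuosity is simple to compute for any path you can discretize. A toy sketch (curves of my own choosing, not river data): a gentle sine-shaped meander comes in well under π, and even a full semicircular bend only reaches π/2.

```python
import numpy as np

def sinuosity(x, y):
    """Along-path length divided by straight-line source-to-mouth distance."""
    path = np.hypot(np.diff(x), np.diff(y)).sum()
    straight = np.hypot(x[-1] - x[0], y[-1] - y[0])
    return path / straight

t = np.linspace(0, 2 * np.pi, 10_001)
print(sinuosity(t, np.sin(t)))                  # sine meander: about 1.22

theta = np.linspace(0, np.pi, 10_001)
print(sinuosity(np.cos(theta), np.sin(theta)))  # semicircle: about pi/2
```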

That’s worse than the Indiana pi bill or 1 Kings 7:23. To be fair to the Biblical text, which refers to a “molten sea” that is “round all about”, thirty cubits around and ten cubits across, nobody there is claiming those are high-precision measurements. And if you want to try really hard to make things work out, you can argue that the thirty cubits was the inner circumference of the brim and the ten cubits the outer diameter; the thickness of the brim, which makes up the difference, is stated three verses later.

(It’s a shame that approximation isn’t in 1 Kings 7:22…)

This post is scheduled to go out at 12:26, because I am in Eastern time and Pi Day was originally invented in San Francisco. Or, actually, because I forgot to do a Pi Day post until I saw that other people had done theirs.

Time zones

It’s that day when everyone in the US pays attention to time zones, because we all lost an hour of sleep last night. And at least in my case, I’ll be a bit bitter tomorrow morning, when the sun rises at 7:56 in Atlanta – a city that really should be on Central Time, but is presumably on Eastern because, well, look at a map: Georgia is on the East Coast. (For a cheap thrill, drive to Alabama in less than an hour – as you can do on I-20 – then set your clock back, and you’ll have arrived before you left.)
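
Mean solar time makes the Atlanta point easy to sanity-check: the sun is overhead an hour later for every 15 degrees of longitude west of Greenwich. (A back-of-envelope sketch; the 84.4°W figure for Atlanta is approximate.)

```python
# Mean solar offset from UTC, in hours, for a given longitude (degrees east).
atlanta_lon = -84.4            # approximate longitude of Atlanta
solar_offset = atlanta_lon / 15
print(round(solar_offset, 2))  # -5.63: closer to Central (UTC-6) than Eastern (UTC-5)
```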

Time zones are basically a clustering problem with some extra restrictions.  You want to set times so that:

  • Almost everybody’s time differs from UTC by a whole number of hours;
  • Clock times are not too far from solar time;
  • The time where you are is the same as the time in nearby places you communicate with;
  • Time zone boundaries align with geographical boundaries.

The second of these criteria keeps time zones narrow; the third keeps them wide. The Basement Geographer has some examples where keeping time zones wide – as in countries like Brazil, Russia, India, or China – means time zone boundaries with neighboring countries where the change isn’t the standard one hour as you move east or west. And Allison Schrager at Quartz has suggested that the US should have two time zones, one hour apart (UTC-5 in the east and UTC-6 in the west). All of Western Europe being on UTC+1 is another example – although from what I understand there’s some World War II history tied up here. France, for example, was on UTC before the war – although the law called it Paris mean time, retarded by nine minutes and twenty-one seconds. Anything to avoid letting the British win.


Indian food (and wine) pairing

Scientists have figured out what makes Indian food so delicious, from Roberto A. Ferdman at Wonkblog. In Western cuisines, ingredients in a dish are more likely to share flavor compounds than ingredients picked at random; in Indian cuisines, ingredients in a dish are less likely to share flavor compounds than ingredients picked at random. (East Asian cuisines are like Indian ones in this respect.) This is a result from the paper “Spices form the basis of food pairing in Indian cuisine” by Anupam Jain, Rakhi N K, and Ganesh Bagler.
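
The statistic in question can be sketched in a few lines. Everything below – ingredient names and flavor-compound sets alike – is invented for illustration, not taken from the paper’s data:

```python
from itertools import combinations
import random

# Toy flavor-compound sets (made up for illustration).
flavor = {
    "cumin":     {"cuminaldehyde", "pinene"},
    "coriander": {"linalool", "pinene"},
    "chili":     {"capsaicin"},
    "cream":     {"diacetyl", "butanoic acid"},
    "tomato":    {"linalool", "hexanal"},
}

def mean_shared(ingredients):
    """Average number of flavor compounds shared by each pair of ingredients."""
    pairs = list(combinations(ingredients, 2))
    return sum(len(flavor[a] & flavor[b]) for a, b in pairs) / len(pairs)

recipe = ["cumin", "chili", "cream", "tomato"]  # a hypothetical dish
random.seed(0)
baseline = sum(mean_shared(random.sample(list(flavor), 4))
               for _ in range(1000)) / 1000

# "Negative food pairing": the dish shares fewer compounds than random picks.
print(mean_shared(recipe), baseline)
```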

The paper describes this sort of “negative food pairing” as possibly originating from a “copy-mutate model”, which comes from a paper called “The nonequilibrium nature of culinary evolution” by Osame Kinouchi, Rosa Diez-Garcia, Adriano Holanda, Pedro Zambiachi, and Antonio Roque. The copy-mutate model supposes that recipes (well, bags of ingredients) evolve by copying and mutation, where ingredients have an intrinsic fitness and mutations replace inferior ingredients with superior ones. I’m not convinced by this, because there’s no reason to think that Indian cuisine would be more prone to this sort of evolution than any other.
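
The copy-mutate idea itself is easy to simulate. Here’s a minimal toy version (my own implementation with invented parameters, not the authors’ code): each ingredient gets a fixed fitness, and each new recipe copies an existing one and swaps its worst ingredient for a random one when that improves fitness.

```python
import random

random.seed(42)
N_INGREDIENTS, RECIPE_SIZE, STEPS = 100, 6, 500
fitness = {i: random.random() for i in range(N_INGREDIENTS)}

recipes = [random.sample(range(N_INGREDIENTS), RECIPE_SIZE)]
for _ in range(STEPS):
    child = list(random.choice(recipes))         # copy an existing recipe
    worst = min(child, key=fitness.get)
    candidate = random.randrange(N_INGREDIENTS)  # proposed mutation
    if candidate not in child and fitness[candidate] > fitness[worst]:
        child[child.index(worst)] = candidate    # keep only improving swaps
    recipes.append(child)

# High-fitness ingredients come to dominate the recipe pool over time.
avg = sum(fitness[i] for r in recipes for i in r) / (len(recipes) * RECIPE_SIZE)
first = sum(fitness[i] for i in recipes[0]) / RECIPE_SIZE
print(round(first, 2), round(avg, 2))
```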

I learned about the first paper from my wife, a sommelier, and it raises an interesting question: how do you pair wine with Indian food? Do you pair food with a wine that contains the same flavor compounds (which is roughly the Western way of thinking about wine)? Or would it be more appropriate, on some level, to pair it with a wine that doesn’t? Here are some recommendations from Serious Eats for pairing wine with Indian food, and here are some from a wine pairing web site by the British food and wine writer Fiona Beckett. Draw your own conclusions.