Continued fraction day

Why do we have February 29 this year, and not in other years? Of course it’s because the ratio between the Earth’s orbital period and its rotational period is not an integer, but rather is about 365.242199. Let’s call this 365+α. And this is approximated well by the rational number 365{1 \over 4}, whose fractional part 1/4 is the first convergent of the continued fraction for α. The convergents of continued fractions give, in a sense, the “best possible” rational approximations to irrational numbers.

Yury Grabovsky observes that the next few convergents to α are 7/29, 8/33, 31/128, 163/673, and indeed the Iranian calendar uses 8 leap years in 33. This is a bit harder to compute in one’s head. 31/128 would be pretty easy to work with — a year is a leap year if it’s divisible by 4, except not if it’s divisible by 128.
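These convergents are quick to reproduce. Here’s a sketch in R, using the standard recurrence p_k = a_k p_{k-1} + p_{k-2} (and likewise for q) for the convergents of α = 0.242199:

```r
# Continued fraction terms and convergents of alpha = 0.242199.
alpha <- 0.242199
p <- c(1, 0)  # (p_{-1}, p_0) for the expansion [0; a_1, a_2, ...]
q <- c(0, 1)  # (q_{-1}, q_0)
x <- alpha
conv <- character(0)
for (k in 1:5) {
  x <- 1 / x
  a <- floor(x)       # next continued fraction term
  x <- x - a
  p <- c(p[2], a * p[2] + p[1])
  q <- c(q[2], a * q[2] + q[1])
  conv <- c(conv, paste0(p[2], "/", q[2]))
}
conv  # "1/4" "7/29" "8/33" "31/128" "163/673"
```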

(Implicit in the Gregorian calendar rules — no leap years in years divisible by 100, except if they’re divisible by 400 — is the rational approximation 97/400, but that’s not a convergent.)

Therefore I nominate February 29 (in years when it occurs) as a new holiday, to be observed by the consumption and/or use of things that rely on rational approximations to irrational numbers.

What are these, you ask?

1. go look at the moon. The Metonic cycle is a period of 19 years, which is very nearly 235 (synodic) lunar months. So the full moon, for example, falls on (approximately) the same day on the solar calendar in the year N and in the year N + 19. The Hebrew calendar has seven leap years in nineteen, where the leap years have 13 (lunar) months instead of twelve. The Islamic calendar has twelve lunar months in each year, with the result that its dates fall backwards through the solar year by 235 - 12 \times 19 = 7 lunar months every nineteen years. Oh, and on February 29, 1936 the phase of the moon was the same as it is today.

2. play some music. Western music theory is based on the existence of the circle of fifths, which in turn is based on the fact that $(3/2)^{12} \approx 2^7$ — that is, twelve perfect fifths is very nearly seven octaves. Taking logs this becomes $\log_2 3 \approx 19/12$. The fact that this is not the best approximation ever — it’s off by $0.0016$ — as well as the desire to incorporate other consonances into musical tuning caused lots of trouble.
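A quick numerical check of the musical claims, again in R:

```r
(3/2)^12         # 129.7463: twelve perfect fifths
2^7              # 128: seven octaves -- close, but not equal
log2(3) - 19/12  # about 0.00163: the error in the approximation
```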

3. use a computer. In computer world we use the prefix “kilo-” to stand for 1024 or $2^{10}$, while everywhere else we use “kilo-” to stand for 1000 or $10^3$. This abuse of language is only possible because $\log_{10} 2 \approx 3/10$.

4. it might be your birthday, in which case I am sad for you because your birthday comes but once every four years! But 253/365 is a convergent to \log 2 (that’s a natural log) and so e^{-253/365} is very near $1/2$ (to be precise, it’s 0.499998248 or so). And why would you care about such a thing? Well, let’s assume that nobody’s born on February 29, and all other birthdays are equally likely. Now take twenty-three people at random; the probability that their birthdays are all different is

{365 \over 365} \times {364 \over 365} \times \cdots \times {343 \over 365} = \left( 1 - {1 \over 365} \right) \left( 1 - {2 \over 365} \right) \cdots \left( 1 - {22 \over 365} \right).

But if you remember that 1-z \approx e^{-z} when z is small then this is approximately

\exp \left( - {1+2+\cdots+22 \over 365} \right) = \exp \left( -{253 \over 365}\right).

And the answer to the famous “birthday problem” — how many people do you have to have for there to be a fifty percent chance that two of them have the same birthday — is twenty-three.

(I honestly hadn’t seen this one until I was preparing for this week’s classes. It just so happens that the right day to introduce the birthday problem in one of my classes this semester is February 29… also, R has a command for this.)
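The R command in question is presumably `pbirthday`, in the stats package; we can check it against the exact product and the exponential approximation above:

```r
prod(1 - (0:22)/365)  # 0.4927: exact probability all 23 birthdays differ
exp(-253/365)         # 0.4999982: the approximation from the convergent
pbirthday(23)         # 0.5073: built-in probability of at least one match
qbirthday(0.5)        # 23: smallest group with a fifty percent chance
```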

5. if it’s your birthday, you should eat cake. If it’s not, you should eat pie. Of course the most famous rational approximation of them all is \pi \approx 22/7. It’s a shame that February 29 is so close to pi day. Perhaps this is another argument for switching to tau day. Tau day is on June 28, so you have to wait a bit. But you get to eat two pies.

(It appears I’m not the first person to mention continued fractions on February 29. Mark Dominus did it in 2008, and I linked to it. Also, here’s an online continued fraction calculator. James Grime beat me to the punch, posting this Numberphile video with astronomer Meghan Gray; he posted while it was still February 28, despite being eight time zones ahead of me.)

Why everyone thinks they’re above average, following Schelling

Thomas Schelling, in his fascinating book Micromotives and Macrobehavior (pp. 64-65 of the 2006 Norton edition) writes:

Ask people whether they consider themselves above or below average as drivers. Most people rank themselves above. When you tell them that, most of them smile sheepishly.

There are three possibilities. The average they have in mind is an arithmetic mean and if a minority drive badly enough a big majority can be “above average”. Or everybody ranks himself high in qualities he values: careful drivers give weight to care, skillful drivers give weight to skill, and those who think that, whatever else they are not, at least they are polite, give weight to courtesy, and come out high on their own scale. (This is the way that every child has the best dog on the block.) Or some of us are kidding ourselves.

I’d long heard something similar in that “75 percent of students entering [insert elite college] think they’ll be in the top quarter of their class”; “top quarter” presumably means top quarter in GPA, so everybody is working on the same scale. But as Schelling points out, people probably judge driving on different scales. And I really don’t think people think intuitively in terms of means in the driving case; that requires doing arithmetic on numbers that don’t even exist.

So naturally I wondered how strong the effect is. Let’s assume that person i has scores in two variables, x_i and y_i, drawn from independent standard normal distributions. Let z_i = (x_i + y_i)/\sqrt{2} be their “objective” score — weighting the two variables equally, and renormalizing to have variance 1.

Now consider someone with, say, x_i = 1, y_i = 0.5. Their objective score is z_i = 1.5/\sqrt{2}. Objective scores are standard normal; thus they are at the \Phi(1.5/\sqrt{2}) \times 100 = 85.56 percentile of the distribution of objective scores.

But now say this person perceives the world around them using a subjective score of the form (2x_i + y_i)/\sqrt{5} — since their x is higher than their y, they naturally assume x is the more important trait, following Schelling. Again we force the scores (from this person’s perspective) to be standard normal: 2x_i + y_i has mean 0 and variance 5, so we divide by \sqrt{5}. This person’s unnormalized score is 2(1) + 0.5 = 2.5. So they are, according to their own perception, at the \Phi(2.5/\sqrt{5}) \times 100 = 86.83 percentile. The effect here is actually particularly weak, since this person is relatively well-balanced. If x_i and y_i are much different the effect is larger. For example if x_i = 1 and y_i = -1 then the individual in question is “objectively” exactly average, but they give themselves a subjective score of 1/\sqrt{5} = 0.45 and therefore perceive themselves at the 67th percentile.

For a general person let w_i = (2 \max(x_i, y_i) + \min(x_i, y_i))/\sqrt{5} be their “subjective” score, weighting the variable in which they have the higher score double. Let obj_i = \Phi(z_i) be the “objective” percentile rank, and let subj_i = \Phi(w_i) be the “subjective” percentile rank. Then the mean subj_i from a quick simulation is about 57 percent — people think they’re better than 57 percent of others, on average, when of course the “objective” truth is 50 percent.
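The quick simulation is easy to reproduce; here’s a sketch in R, with ten thousand simulated individuals (the variable names follow the definitions above):

```r
# Simulate objective and subjective percentile ranks.
set.seed(1)
n <- 10^4
x <- rnorm(n)
y <- rnorm(n)
z <- (x + y) / sqrt(2)                        # objective score
w <- (2 * pmax(x, y) + pmin(x, y)) / sqrt(5)  # subjective score
obj  <- pnorm(z)   # objective percentile ranks
subj <- pnorm(w)   # subjective percentile ranks
mean(subj)         # about 0.57
mean(subj > 0.5)   # about 0.59: fraction seeing themselves above the median
mean(subj < obj)   # about 0.06: the "well-rounded" minority
```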

Here’s a scatterplot of z_i against w_i-z_i — that is, the objective rank against the “perception gap”.

Here’s the distribution of people’s self-perceived ranks, from a simulation of ten thousand individuals. The histogram for objective ranks is nearly flat. The piling up near the right indicates that people are thinking more highly of themselves than would be objectively true. For a very quick measure, in this simulation of 10,000 individuals, 5,937 perceive themselves as above the median.

Here’s the distribution of “perception gaps”, the difference between subjective and objective ranks.

And somewhat surprisingly, about six percent of people have a lower subjective rank than objective rank. This is the “well-rounded” group that has both x_i and y_i positive, and y_i/x_i between about 0.7 and 1.4.

One expects this effect to be stronger if there are more factors to be considered; I’ll save that for another post.

Pythagoras goes linear

Let x_i and y_i both be uniform on [0, 1]. Let w_i be the smaller of the two, and let z_i be the larger. Let h_i = \sqrt{w_i^2 + z_i^2}. So (x_i, y_i) is a random point in the unit square, and h_i is its distance from the origin. We can predict this distance using linear regression. For example, in R, we can pick 10^4 such points and execute the code

x = runif(10^4, 0, 1)
y = runif(10^4, 0, 1)
w = pmin(x, y)
z = pmax(x, y)
h = sqrt(w^2 + z^2)
lm(h ~ 0 + w + z)

to fit a linear model of the form h = aw+bz. The least-squares model here is, for this particular simulation, h = 0.4278w + 0.9339z, with R^2 = 0.9995. In other words, the formula

h = 0.4278 \min(x,y) + 0.9339 \max(x,y)

appears to predict h as a linear function of \min(x,y) and $\max(x,y)$ quite well, and so the hypotenuse of a right triangle is approximately 0.4278 times its shorter leg, plus 0.9339 times its longer leg. For a particularly famous special case, try x = 3, y = 4; then we predict the hypotenuse is 0.4278(3) + 0.9339(4) = 5.019, quite close to the true value of 5.

Andrew Gelman and Deborah Nolan, in Teaching Statistics: A Bag of Tricks, give a very similar example, with slightly different numerical parameters, and quip that “if Pythagoras knew about multiple regression, he might never have discovered his famous theorem” (p. 146). They fit a model that is allowed to have a nonzero constant term; I choose to fit a model with zero constant term. I think that our anachronistic Pythagoras would have had the sense to observe that if we double x and y, we should double the hypotenuse as well.

The natural question, to me, is to determine the “true” constants. So what constants a and b give the linear function ax+by that best approximates \sqrt{x^2+y^2}, when we restrict to 0 < x < y < 1? The reason for the triangular region is that we’re restricting to the case where $x$ is smaller and $y$ is larger. To be consistent with our Pythagoras-as-linear-regressor model, we’ll make the approximation in the least-squares sense. So we want to minimize

f(a,b) = \int_0^1 \int_0^y \left( \sqrt{x^2+y^2} - (ax+by) \right)^2 \: dx \: dy

as a function of a and b. This is a calculus problem. Expand the integrand to get

\int_0^1 \int_0^y x^2+y^2+a^2 x^2 + b^2 y^2 + 2ab xy - 2ax \sqrt{x^2+y^2} - 2by \sqrt{x^2+y^2} \: dx \: dy.

The polynomials are easy to integrate; the square-root terms somewhat less so, if it’s been a while since you’ve done freshman calculus. But after a bit of work this is

f(a,b) = {1 \over 12} a^2 + {1 \over 4} ab + {1 \over 4} b^2 + {1 \over 3} - C_1 a - C_2 b

where C_1 = (2\sqrt{2}-1)/6, C_2 = (\sqrt{2}+\sinh^{-1} 1)/4. Differentiating we get

{\partial \over \partial a} f(a,b) = {1 \over 6} a + {1 \over 4} b - C_1

and

{\partial \over \partial b} f(a,b) = {1 \over 4} a + {1 \over 2} b - C_2.

Set both of these equal to zero and solve to get
a = 24C_1 - 12C_2 = 5 \sqrt{2} - 4 - 3 \sinh^{-1} 1 = 0.4269, b = -12C_1 + 8C_2 = -2\sqrt{2} + 2 + 2 \sinh^{-1} 1 = 0.9343

which are tolerably close to the coefficients that came out of the regression. (Those coefficients had standard errors of 0.0009 and 0.0005 respectively.)
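The closed forms are easy to evaluate numerically; R’s `asinh` is \sinh^{-1}:

```r
# Evaluate the exact least-squares coefficients from the calculus above.
C1 <- (2 * sqrt(2) - 1) / 6
C2 <- (sqrt(2) + asinh(1)) / 4
a <- 24 * C1 - 12 * C2  # 0.4269
b <- -12 * C1 + 8 * C2  # 0.9343
c(a, b)
```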

Of course our hypothetical Pythagoras couldn’t have done these integrals, and would not have liked that they turn out to be irrational. Perhaps he would have just said that the length of the hypotenuse of a triangle was three-sevenths of the shorter leg, plus fourteen-fifteenths of the longer leg.

Weekly links for February 26

Robert Talbert on The origin of the nabla symbol.

Ivars Peterson observes that article titles make a difference. (This is part of a larger site on mathematical writing.)

A cute number-theoretic puzzle from John D. Cook.

The shortest route from every Hubway station to every other. Hubway is a bike-sharing system and all its stations are within the city of Boston, but not surprisingly some routes go through Cambridge or Brookline, because Boston is not convex.

From math.stackexchange, examples of apparent patterns that eventually fail.

From Social Flow, Timing, Network and Topicality: A Revealing Look at How Whitney Houston Death News Spread on Twitter. (There was a 42-minute period where the news had been tweeted but almost nobody knew about it.)

Piotr Blaszczyk, Mikhail G. Katz, David Sherry, Ten Misconceptions from the History of Analysis and Their Debunking.

Over the last decade, baby teeth are a better investment than the stock market.

Math doesn’t suck, you do. Interesting, but crass and offensive. The way to convince people that math is awesome is not to insult them.

Most 3-pixel-by-3-pixel squares in black and white photos lie near a Klein bottle in a nine-dimensional space.

The Miura-Ori map folds itself. With animated gif.

Annie Keeghan, at Salon, writes Afraid of your child’s math textbook? You should be. Some interesting comments at Hacker News.

Pascal’s Triangle revisited?

Why is the last episode of the first season of Community called “Pascal’s Triangle Revisited”? I suppose it’s because it’s one of those season finales where couples break up and get back together, but that seems a bit forced.

An earlier episode includes a statistics professor who has a poster labeled “know your graphs” in the background of her office. Screen shots here and here.

Test design and grade inflation

I gave an exam yesterday. While I was standing in the copy room making copies, one of the grad students walked by.

“Midterms?”, he said.
“Yes,” I said.
“Grading’s not fun.”
“I hope this one won’t be too bad. Lots of questions with simple answers.”
“Yeah, you think that now, but it’s amazing what they come up with.”
“Sure. My trick is this. I used to write exams to be out of a hundred, and you’d end up with things worth six, eight, ten points and you sweat over partial credit. What’s the difference between a four and a five out of six? Now I write with questions being out of two or three, and I spend a lot less time thinking about that. If it’s out of two, zero is wrong, one is sort of right, two is entirely right or almost so. The grades come out just as accurate.”

The exam in question was out of thirty-six points, with three one-point items, nine two-pointers, and five three-pointers.

I’ve given this spiel before, ever since I discovered this trick. What I hadn’t realized is that it’s a smaller-scale version of Jordan Ellenberg’s take on grade inflation. Ellenberg argues that, first, if GPAs are to be used to discriminate between stronger and weaker students, what matters is not how high the grades are but how many different grades there are, so having a scale where the only grades used are A, A-, B+, B, B- is just as good as having a scale where the only grades used are A, B, C, D, F. And he gives the results of some calculations showing that if there are only three possible grades — or even two, in some dystopian world where the only possible grades are A and A- — we still have some reasonable ability to discriminate between students over the course of an undergraduate career.

(Disclaimer: I’ve met Jordan. He’s also the cousin of a friend of mine I met through different channels. It’s a small world after all.)