A New Year’s time zone puzzle

A friend of mine got on a plane last night in San Francisco, at about 10:30 pm local time. She’s landing in Sydney at about 8:30 am local time on January 1. She asked: Does she get a New Year (i. e. is her local time ever midnight on December 31 / January 1)?

As far as I can tell, the answer is no. Here’s a map of the flight. The locations of the airports in question are:
SFO (37°37’08″N 122°22’31″W)
SYD (33°56’46″S 151°10’38″E)
The answer I gave was as follows: approximately speaking, she crosses the date line about two-thirds of the way through the flight (since SFO is roughly 60 degrees east of the line and SYD is roughly 30 degrees west of it). It’s a fifteen-hour flight, so that’s ten hours in. At that time it’ll be 8:30 AM Dec 31 in San Francisco (UTC-8), so 4:30 AM Dec 31 at UTC-12. At that moment she crosses the line to 4:30 AM Jan 01, UTC+12 (which is 3:30 AM in Sydney, which is UTC+11 this time of year). Her local time jumps straight from 4:30 AM December 31 to 4:30 AM January 1, skipping midnight entirely. So the answer to the question is no.
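The jump can be checked with a few lines of Python (a sketch, assuming a December 30 10:30 pm PST departure, a crossing ten hours in, and the date line taken to be the 180th meridian):

```python
from datetime import datetime, timedelta, timezone

# Assumed departure: Dec 30, 10:30 pm PST (UTC-8), i.e. Dec 31, 6:30 UTC.
departure_utc = datetime(2015, 12, 31, 6, 30, tzinfo=timezone.utc)
crossing_utc = departure_utc + timedelta(hours=10)  # ten hours into the flight

# Local time on either side of the 180th meridian at the moment of crossing:
east_side = crossing_utc.astimezone(timezone(timedelta(hours=-12)))
west_side = crossing_utc.astimezone(timezone(timedelta(hours=12)))
print(east_side.strftime("%b %d %I:%M %p"))  # Dec 31 04:30 AM
print(west_side.strftime("%b %d %I:%M %p"))  # Jan 01 04:30 AM
```

Both sides of the line read 4:30 AM, a day apart, so no moment of the trip is local midnight.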

(Note that I made the crude assumption that the date line is the 180th meridian, which it isn’t exactly. But the answer isn’t even close to depending on that.)

The flight in question is, I believe, United 863, which as of this writing hasn’t crossed the date line yet, but you can look at the tracking log for yesterday. The flight that left on the night of the 29th crossed the date line just around 11 AM US Eastern time (UTC-5), i. e. 4 AM UTC-12/UTC+12. The flight that left on the night of the 30th made that crossing around 12:41 PM US Eastern time, i. e. 5:41 AM at the 180th meridian. (The flight appears to have been closer to a great circle yesterday but took a more southerly route today, based on the FlightAware tracking pages here and here.) Coincidentally this is about when this post is going up.

In other Pacific Ocean time zone fun, there was no such day as December 31, 1844 in the Philippines. Their trade links had previously come from the east (the Americas) but as of that time had started coming more from the west (Asia).

Old books from Springer 

Springer has made a bunch of old texts in mathematics (up to 2005) available for free via SpringerLink.  Here is a list of the math books (mostly from the Undergraduate Texts in Mathematics and Graduate Texts in Mathematics series), from Stuart Gale.  Perhaps also relevant: lists of Springer Texts in Statistics such as Wasserman’s All of Statistics and the Statistics and Computing series, such as The Grammar of Graphics.

Christmas full moon

(A bit late, sorry!)

The moon was full this Christmas, for the first time in 38 years.  This garnered some media coverage – see for example CNN, Vox, and Forbes.  The next one will be 19 years from now.  (The available data on this seems to be from NASA and generally is based on US Eastern time – your mileage may vary in other time zones.)

What would we expect? In the long run, since there’s a full moon every 29.5 days, a full moon on any given calendar date should happen about once every 29.5 years.  But in the medium run it appears that the Metonic cycle kicks in – that is, the fact that 235 lunar months is very nearly equal to 19 solar years, so full moons fall on nearly the same calendar dates in years nineteen years apart.  Looking at this list of full moons from 1900-2100, there are full moons at (all times Eastern US):

1901 Dec 25 07:15
1920 Dec 25 07:39
1939 Dec 26 06:29
1958 Dec 25 22:54
1977 Dec 25 07:50
1996 Dec 24 15:43
2015 Dec 25 06:13
2034 Dec 25 03:56
2053 Dec 25 04:25
2072 Dec 25 02:18
2091 Dec 25 12:02

and nine of these eleven fall on Christmas.  (The pattern is a bit irregular – the number of hours between full moons isn’t quite constant.) The previous cycle was one of the times when the full moon missed Christmas.

This cycle also comes up in the computation of Easter, which is nominally on the first Sunday after the first full moon after the vernal equinox.
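The near-coincidence behind the Metonic cycle is easy to check numerically (a sketch, using a mean synodic month of 29.530589 days and a mean Gregorian year of 365.2425 days):

```python
# 235 synodic months vs. 19 solar years, in days
synodic_month = 29.530589   # mean length of a lunation
year = 365.2425             # mean Gregorian year
months = 235 * synodic_month
years = 19 * year
print(months, years, months - years)  # the two differ by about two hours
```

The difference is about 0.08 days, which is why the cycle drifts only slowly and the dates nineteen years apart line up so well.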

More if I can find some code to compute the time of full moon…

Links for December 22

Erica Klarreich at Quanta on László Babai’s new algorithm for the graph isomorphism problem in quasi-polynomial time (that’s \exp((\log n)^{O(1)}) where n is the number of vertices, for those of you who don’t remember your complexity classes).  The actual preprint is on the arXiv.

Priceonomics on the history of the Black-Scholes formula and (unrelatedly, unless you want to make some really strained argument about the financialization of everything) the invention of auto-tune.

David Austin at the AMS Samplings column on Petals, flowers, and circle packings.

FiveThirtyEight is looking for people who can predict the Oscars.

Katie Steckles has a video on the mathematics of wrapping presents.

Donald Knuth’s 21st annual Christmas lecture is on universal commafree codes.  (Here’s a list of Knuth lectures, many available online.)  I can’t find the source for this right now, but I seem to recall that these used to be called the “Christmas tree lectures” until he ran out of tree-related topics he wanted to lecture on.

Do heads of government age more quickly? From Andrew Olenski, Matthew Abola, and Anupam Jena, at the BMJ (which used to be called the British Medical Journal), via Vox.

How Frank Wilcoxon helped statisticians walk the nonparametric path, from Mario Cortina Borja and Julian Stander at Significance.

Kevin Hartnett at Quanta: Hope Rekindled for ABC Proof.


Beards and sample size

The satirical journal PNIS (the “Proceedings of the Natural Institute of Science”) has an article out on “Beards of War: Relationships between facial hair coverage and battle outcome in the U.S. Civil War”. This is of course the second in a series, because they had to build a dataset on beardedness first.

The answer: facial hair doesn’t seem to matter. Their table of overall standings by facial hair type is interesting. Looking only at the battles with clear wins and losses, the standings are as follows, sorted by win percentage:

Facial hair type              Wins  Losses  Win percentage
Muttonchops with moustache       6       0           1.000
Friendly muttonchops            14       7           0.667
French cut                      26      21           0.553
Moustache                       27      22           0.551
Chin curtain                     7       6           0.538
Van Dyke                        36      35           0.507
Long beard                      65      65           0.500
Short beard                     63      67           0.485
Muttonchops                      8       9           0.471
Clean shaven                    14      22           0.389
Goatee                           1       5           0.167

What jumps out to me immediately from this table is that the styles at the top and bottom tend to be the rarer styles. But this is exactly what you’d expect even if facial hair has no effect on battle ability, just because these are smaller samples.
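A quick way to see this: if each side of a decided battle were equally likely to win, the standard error of a win percentage over n battles would be 0.5/\sqrt{n}, so the rare styles have much wider plausible ranges. A sketch, using a few of the sample sizes from the table:

```python
import math

# Decided battles (wins + losses) per style, taken from the table above
sizes = {"Muttonchops with moustache": 6, "Goatee": 6,
         "Short beard": 130, "Long beard": 130}

for style, n in sorted(sizes.items(), key=lambda kv: kv[1]):
    se = 0.5 / math.sqrt(n)  # standard error of the win pct under a fair coin
    lo, hi = 0.5 - 2 * se, 0.5 + 2 * se
    print(f"{style}: n = {n}, two-sigma range {lo:.3f} to {hi:.3f}")
```

With n = 6 the two-sigma range is roughly 0.09 to 0.91, while with n = 130 it’s roughly 0.41 to 0.59 – which is about where the common styles actually sit.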

A few possible confounders that weren’t addressed:

  • Facial hair style could be correlated with age. Perhaps older generals win more battles. Or less.  (And maybe there’s some correlation with testosterone levels.  Who knows?)
  • Facial hair style is correlated with US region (i. e. North vs South). The first paper in the series observes that there are differences in facial hair styles (with the Union being more bearded). The Union won more battles than the Confederacy according to the National Park Service data set that was used as the list of battles (the final count is Union 197, Confederacy 123, Inconclusive 63) so this could be an issue.

Erdős is still publishing

Paul Erdős is still publishing. His newest paper, with Ron Graham and Steve Butler, shows that any natural number can be written as a sum \sum_{i=1}^\ell 1/a_i with a_1 < \ldots < a_\ell, where each denominator is the product of three distinct primes. A footnote, on the obvious generalization to expressing rational numbers in such a form, reads:
“One of the authors believes that all rational numbers can be expressed in this form, another author has doubts that every rational number can be expressed in this form, and the third author, already having looked in The BOOK at the answer, remains silent on this issue.” For more of the context on this paper, see Siobhan Roberts writing for the Simons Foundation.

“The BOOK”, of course, refers to Erdős’ frequent claim that God (who he did not believe in) had a book in heaven that had the best proof of every theorem.  A sample of this book can be found in Proofs from THE BOOK by Martin Aigner and Günter M. Ziegler.

Some Euler

William Dunham lectures at Cornell on two of Euler’s big theorems: the divergence of the sum of the reciprocals of the primes and the evaluation of the sum of the reciprocals of the squares
{1 \over 1} + {1 \over 4} + {1 \over 9} + {1 \over 16} + \cdots = {\pi^2 \over 6}.
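A quick numerical check of Euler’s evaluation (a sketch; the partial sums converge slowly, with the tail after N terms roughly 1/N):

```python
import math

N = 100_000
partial = sum(1 / k**2 for k in range(1, N + 1))
print(partial, math.pi**2 / 6)   # close, but short by roughly 1/N
print(math.pi**2 / 6 - partial)  # the tail, about 1e-5 here
```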

Dunham is the author of Euler: The Master of Us All, which focuses on selected results of Euler. If I recall correctly I first saw the evaluation of \zeta(2) in his Journey through Genius.

This focuses on the math; there’s a full-length biography of Euler coming out, by Ronald Calinger. The publisher, Princeton, describes this as the “first full-scale biography of Leonhard Euler”, which is honestly a bit surprising.

4s and 9s

James Tanton tweeted: “How many of the 100-digit numbers composed solely of the digits 4 and 9 are divisible by 2^100?”

Well, there are 2^{100} 100-digit numbers composed only of the digits 4 and 9, and the chances that any given 100-digit number is divisible by 2^{100} are one in 2^{100}, so… one?

Turns out that’s right. The key here is that a number (written in decimal) is divisible by 2^k if and only if the number made up of its last k digits is divisible by 2^k. So we can build up our number one digit at a time:

  • its last digit is divisible by 2, so it’s 4
  • its last two digits are divisible by 4. This must be 44 or 94; 44 is divisible by 4, and 94 isn’t, so it’s 44.
  • its last three digits are divisible by 8. This must be 444 or 944; 444 isn’t divisible by 8, and 944 is, so it’s 944.

And so on. But what guarantees that at every step our result is unique?

Let’s introduce some notation: N_k is the set of k-digit numbers made up of 4s and 9s that are divisible by 2^k. So N_1 = \{ 4 \}, N_2 = \{ 44 \}, N_3 = \{ 944 \}, etc. We’ll prove by induction that N_k has one element for every k \ge 1. Assume this is true for k. Given that N_k = \{ n_k \}, we want to find N_{k+1}. Every number in N_{k+1} is divisible by 2^{k+1}, hence by 2^k, and so has its last k digits divisible by 2^k – which means those last k digits form n_k itself. So the only possible elements of N_{k+1} are 4 \times 10^k + n_k and 9 \times 10^k + n_k (that is, the numbers made from n_k by writing a 4 or a 9 at the front). Now, since n_k is a multiple of 2^k, either n_k \equiv 0 \mod 2^{k+1} or n_k \equiv 2^k \mod 2^{k+1}.

Now, observe that 4 \times 10^k = (2 \times 5^k) (2^{k+1}) is a multiple of 2^{k+1}, and that 9 \times 10^k = (3^2)(2^k)(5^k) has exactly k factors of 2 in its prime factorization, so it’s divisible by 2^k but not 2^{k+1}; that is, 9 \times 10^k \equiv 2^k \mod 2^{k+1}.

So if n_k \equiv 0 \mod 2^{k+1}, then N_{k+1} = \{ 4 \times 10^k + n_k \}, and if n_k \equiv 2^k \mod 2^{k+1}, then N_{k+1} = \{ 9 \times 10^k + n_k \}. In either case N_{k+1} is a singleton, and by induction, we have:

Proposition: there exists exactly one k-digit number composed solely of the digits 4 and 9 which is divisible by 2^k.

This generalizes: we could replace 4 and 9 by any even digit and any odd digit, respectively.

Furthermore, since the proof is constructive, we can actually find the n_k with a few lines of Python. (I’ve initialized n to [0, 4] so that I could write code which starts indexing the n_k at 1.)

n = [0, 4]  # n[k] is the unique k-digit number; n[1] = 4

for k in range(1, 100):
    if n[k] % 2**(k+1) == 0:
        n.append(4 * 10**k + n[k])  # prepend a 4
    else:
        n.append(9 * 10**k + n[k])  # prepend a 9

print(n[100])

giving the result

4999999449449999994999449944944994449499994444449494944949944994494999944449444499949499449494994944

The On-Line Encyclopedia of Integer Sequences has a few sequences along these same lines: A035014 gives such numbers made of 3s and 4s.  A05333 gives such numbers made of 4s and 9s and nearby sequences starting with A053312 do the same for other pairs.
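The generalization to other digit pairs can be sketched as a short function (the name unique_multiple is mine):

```python
def unique_multiple(even_digit, odd_digit, k):
    """Build the unique k-digit number with digits in {even_digit, odd_digit}
    that is divisible by 2**k, by the prepending construction above."""
    n = even_digit  # the last digit must be the even one
    for j in range(1, k):
        # prepend whichever digit makes the last j+1 digits divisible by 2**(j+1)
        if n % 2**(j + 1) == 0:
            n += even_digit * 10**j
        else:
            n += odd_digit * 10**j
    return n

print(unique_multiple(4, 9, 3))  # 944, as in the worked example
assert unique_multiple(4, 9, 100) % 2**100 == 0
```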