Would the 2016 US presidential election result be different if the Electoral College were bigger?

No.

A quick way to see this: Trump won 306 electoral votes, Clinton 232. (There were faithless electors, but we can ignore them for the purposes of this question.) Trump won 30 states, Clinton 21. For the purposes of this post, DC is a state, which it probably also should be in reality.

Each state has as many electoral votes as it has senators and representatives combined. (DC gets 3.) Imagine breaking up the electoral votes in each state into two classes: the “senatorial” electoral votes (two per state), and the “representational” electoral votes (the other ones). Then Trump won the “senatorial” electoral votes by 60 to 42, leaving a 246-190 margin among the “representational” votes.
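As a quick sanity check of that split in R (using only the numbers quoted above; the variable names are just mine):

trump_ev = 306; clinton_ev = 232
trump_states = 30; clinton_states = 21     # counting DC as a state
# "senatorial" votes are two per state won; the rest are "representational"
c(2 * trump_states, trump_ev - 2 * trump_states)        # 60 and 246
c(2 * clinton_states, clinton_ev - 2 * clinton_states)  # 42 and 190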

In 2000, on the other hand, Bush won 271 electoral votes in 30 states, and Gore 267 in 21 states; the “representational” votes were therefore split as 211 for Bush, 225 for Gore. If there were, say, twice as many Representatives (870 instead of 435 – and let’s keep the math simple and say DC gets 6 electoral votes in this scenario), and every state had twice as many electoral votes, then Bush would have had about (60 + 211 × 2) = 482 EV, and Gore (42 + 225 × 2) = 490 EV. Also in this world Nate Silver’s web site is named nineseventytwo.com (which is available, as of this writing). In fact, Bush would have won with any number of representatives less than 491; Gore with more than 655; and in between, the lead swings back and forth, according to a 2003 analysis by Michael G. Neubauer and Joel Zeitlin.

In short: 2000 was the way it was because small states are overrepresented in the Electoral College; 2016 was the way it was because the Electoral College is winner-take-all.

Balancing the centrifuge – a Riddler puzzle

From The Riddler:

Quoc’s lab has a microcentrifuge, a piece of equipment that can separate components of a liquid by spinning around very rapidly. Liquid samples are pipetted into small tubes, which are then placed in one of the microcentrifuge’s 12 slots evenly spaced in a circle.

For the microcentrifuge to work properly, each tube must hold the same amount of liquid. Also, importantly, the center of mass of the samples must be at the very center of the circle — otherwise, the microcentrifuge will not be balanced and may break.

Quoc notices that there is no way to place exactly one tube in the microcentrifuge so that it will be balanced, but he can place two tubes (e.g., in slots 1 and 7).

Now Quoc needs to spin exactly seven samples. In which slots (numbered 1 through 12, as in the diagram above) should he place them so that the centrifuge will be balanced? Extra credit: Assuming the 12 slots are distinct, how many different balanced arrangements of seven samples are there?

https://fivethirtyeight.com/features/can-you-break-a-very-expensive-centrifuge/

I’ve seen this one before, but I’ll take it on as a programming challenge.

I’ll say that hole k is at coordinates (\sin 2\pi k/12, \cos 2\pi k/12). This is nonstandard, but the result looks like a clock, so it’s easier to visualize:

The centrifuge layout.

Now, there are {12 \choose 7} = 792 ways that we could pick 7 out of the 12 holes. To generate a list of these I can use the following R function:

combinations = function(n, k){
  if (k == 0 & n == 0){return(list())} else {
    if (k == 0){return(list(c()))} else {
      if (n == k){return(list(1:n))} else {
        without_n = combinations(n-1, k)
        with_n = lapply(combinations(n-1, k-1), function(x){c(x, n)})
        return(c(without_n, with_n))
      }
    }
  }
}

To generate the subsets of [n] of size k, I just take:

  • the subsets that don’t contain n (which are therefore subsets of [n-1] of size k), and
  • the subsets that do contain n (which are therefore subsets of [n-1] of size k-1, with n adjoined)
    (I learned this a good two decades ago, from section 3.6 of Herb Wilf’s notes East Side, West Side.)
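As a quick sanity check, the count should match the binomial coefficient computed above:

length(combinations(12, 7)) == choose(12, 7)   # TRUE – both are 792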

Then iterate over the subsets to find their centers of mass:

n = 12   # number of slots
k = 7    # number of samples
# slot coordinates, laid out like a clock face as described above
x = sin(2*pi*(1:n)/12)
y = cos(2*pi*(1:n)/12)

our_comb = combinations(n, k)
com_x = rep(NA, length(our_comb))
com_y = rep(NA, length(our_comb))
for (i in 1:length(our_comb)){
  com_x[i] = mean(x[our_comb[[i]]])
  com_y[i] = mean(y[our_comb[[i]]])
}

So our_comb runs through the combinations, and com_x and com_y are the x- and y-coordinates of the centers of mass. Plotting the centers of mass gives an attractive symmetric pattern. Here I’ve plotted transparent points, so the darker points are the ones that arise from more distinct combinations. The outermost points represent the most imbalanced centrifuges – for example, the one closest to the top represents loading slots 9, 10, 11, 12, 1, 2, and 3. The main question is: how many points are on top of each other at the very center?

Centers of mass of all subsets of 7 of the 12 centrifuge positions.

I can extract those subsets from the list our_comb and build them into a matrix for easier viewing. There’s a small problem in that we used floating-point arithmetic, so the coordinates don’t come out exactly at zero, but it’s enough to take the ones that are “close enough”:

epsilon = 10^(-6)   # tolerance for floating-point comparison with zero
balanced_indices = which(abs(com_x) < epsilon & abs(com_y) < epsilon)
balanced_combs = t(matrix(unlist(our_comb[balanced_indices]), nrow = k))

and the matrix balanced_combs, which has one row for each combination that balances the centrifuge, is

      [,1] [,2] [,3] [,4] [,5] [,6] [,7]
 [1,]    1    2    3    6    7    9   10
 [2,]    1    2    4    5    8    9   10
 [3,]    1    2    5    6    7   10   11
 [4,]    2    3    4    7    8   10   11
 [5,]    2    3    5    6    9   10   11
 [6,]    1    2    5    6    8    9   12
 [7,]    1    3    4    7    8    9   12
 [8,]    1    4    5    6    9   10   12
 [9,]    1    4    5    7    8   11   12
[10,]    2    3    6    7    8   11   12
[11,]    3    4    5    8    9   11   12
[12,]    3    4    6    7   10   11   12

(The order here is lexicographic order, but you have to read the rows backwards.)

If you can make sense of this pile of numbers without making some pictures, good for you! But I am human, though a mathematician, so let’s plot. Here big black dots represent the loaded spots, and small blue dots represent the non-loaded spots.

(And yes, it’s in base R graphics. It’s been a while since I’ve used those – at my day job it’s all ggplot all the time.)


par(mfrow = c(4, 3), mar = c(1, 1, 1, 1))    # a 4-by-3 grid of small plots
for (i in 1:nrow(balanced_combs)){
  full = balanced_combs[i,]                  # loaded slots
  empty = setdiff(1:n, balanced_combs[i,])   # empty slots
  plot(x[full], y[full], pch = 19, cex = 2,
       asp = 1, axes = FALSE, xlab = '', ylab = '',
       xlim = c(-1.25, 1.25), ylim = c(-1.25, 1.25))
  points(x[empty], y[empty], pch = 19, cex = 1, col = 'lightblue')
}

So there are 12 ways to load the centrifuge… but we can clearly see that all of them are really just rotations of a single pattern. To me the visually most convenient representative of that pattern is the third one in the third row, which has loadings at 12, 1, 4, 5, 7, 8, and 11.

We can play a little game of connect-the-dots to see why this pattern is balanced. Connect 12, 4, and 8 to form an equilateral triangle. Connect 1 and 7; connect 5 and 11. Like this:

12-hole centrifuge loaded with 7; symmetric subsets specified

Each of those three subsets has its center of mass at the center of the circle, so the whole arrangement does too.
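Here’s a quick numerical check of that decomposition, using the x and y coordinates from before; each column should be (0, 0), up to rounding:

sapply(list(c(12, 4, 8), c(1, 7), c(5, 11)),
       function(s) c(mean(x[s]), mean(y[s])))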

That decomposition turns out to be the key to the general problem of which centrifuge sizes can be loaded with which number of tubes and remain balanced. Stay tuned.

Rainbow cats considered harmful

Partner: you’re stressed! fill out this!

Me: I can’t do that, because rainbow color maps are considered harmful. And if we’re going to use one anyway, shouldn’t it be green for “go” and red for “stop”?

Partner: just shut up and color.

(The original version of this meme didn’t have the colors, so you can make whatever palette you want, as in this example.)

Twenty-nine

Evelyn Lamb has a delightful page-a-day calendar. Today (yes, a few days late) I learned that TWENTY NINE is the only number in English whose name is written with a number of straight-line strokes equal to its value. (This is in a sans-serif font; in particular, I is one stroke, not three.)

English is surprisingly rich in numbers whose names use only straight-line strokes. In the Latin alphabet the letters made entirely of straight lines are A, E, F, H, I, K, L, M, N, T, V, W, X, Y, Z. That gives the following all-straight-line words that appear in number names, with the number of strokes that make each of them up:

– having more strokes than their value: FIVE (10), NINE (11), ELEVEN (19), TWELVE (18), FIFTEEN (20), NINETEEN (24)
– having fewer strokes than their value: TEN (9), TWENTY (18), FIFTY (12), NINETY (16)

Any number whose name equals its value in strokes must combine elements from both these lists, and it’s not hard to see that TWENTY NINE is the only one that works. The full list of straight-line numbers in English is: 5, 9, 10, 11, 12, 15, 19, 20, 25, 29, 50, 55, 59, 90, 95, 99. (All the larger numbers include “HUNDRED” or “THOUSAND” or something ending in “-ION”, so we can stop there.)
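If you want to check these counts (or hunt in other languages), here’s a small R sketch. The stroke counts for the letters that don’t appear in the English examples above (A, H, K, M, X, Z) are my own sans-serif counts:

strokes = c(A = 3, E = 4, F = 3, H = 3, I = 1, K = 3, L = 2, M = 4,
            N = 3, T = 2, V = 2, W = 4, X = 2, Y = 3, Z = 3)
# total strokes in a word or phrase (spaces ignored)
word_strokes = function(w) sum(strokes[strsplit(gsub(" ", "", w), "")[[1]]])
sapply(c("NINE", "TWENTY", "TWENTY NINE"), word_strokes)   # 11, 18, 29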

Evelyn suggests this as part of how to memorize the largest known prime (it’s a Mersenne prime, and she suggests doing it in binary so every bit is 1, so the hard part is remembering where you are).

It’s hard to even find straight-line numbers in other languages, because a lot of the alphabet is missing.  They include:

  • German ZWEI (2), ZEHN (10), ELF (11)
  • Dutch EEN (1), TWEE (2), ZEVEN (7), TIEN (10), ELF (11) (edited 9/3: also TWAALF (12))
  • Norwegian has a lot: EN (1), FEM (5), ATTE (8), NI (9), TI (10), ELLEVE (11), FEMTEN (15), ATTEN (18), NITTEN (19), FEMTI (50), ATTI (80), NITTI (90) (and also 55 = FEMTIFEM, 58, 59, 85, 88, 89, 95, 98, 99)
  • As does Danish: EN, FEM, NI, TI, ELLEVE, FEMTEN, ATTEN, NITTEN, with the same meanings as in Norwegian, but then the bigger numbers are formed irregularly.
  • Swedish has fewer: EN (1), FEM (5), ATTA (8), ELVA (11) – the numbers are very similar to Norwegian, but the “-teen” ending is “-ton”, not “-ten” as in Norwegian.
  • Spanish VEINTE (20), MIL (1000), and MIL VEINTE (1020)
  • Italian VENTI (20), MILLE (1000), and MILLEVENTI (1020)
  • Portuguese VINTE (20), MIL (1000) and MIL E VINTE (1020)
  • French MILLE (1000), but not VINGT (20)

ELVA (Swedish for 11) is the only other one I could find that also has the self-referential property, and the Chinese numerals 一, 二, and 三 if you want to stray from the alphabetic world. (Edited 9/3: also Dutch TIEN = 10, which I inexplicably missed before.)

(This post was edited to add the list of numbers and to clarify that ELVA is not the only straight-line number outside of English, but the only one I could find with this self-referential property.)

The second derivative I compute every day

Every evening I compute an approximate second derivative. It comes from the data at the COVID Tracking Project. Specifically, it’s

(P(t) - P(t-7)) - (P(t-7) - P(t-14))

where P(t) is the cumulative number of positive coronavirus cases in the US through day t. So this is the number of new cases this week, minus the number of new cases last week. Every day I hope it’s negative. Lately it has been. Yesterday it was

(5884053 – 5595343) – (5595343 – 5282758) = 288710 – 312585 = -23875.

Who would have thought 288 thousand cases in a week would seem good? But derivatives matter. That would have been terrifying on June 29 (282k cases from June 22 – 29; 194k cases from June 15 – 22). But now it feels like a reason for maybe a little bit of hope.
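Here’s a minimal sketch of that computation in R, assuming a data frame us with one row per day, a date column, and a cumulative positive column (the shape of the COVID Tracking Project’s daily US data):

library(dplyr)

us %>%
  arrange(date) %>%
  mutate(new_this_week = positive - lag(positive, 7),
         new_last_week = lag(positive, 7) - lag(positive, 14),
         weekly_change = new_this_week - new_last_week) %>%   # the second difference
  tail(1)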

The most hopeless plots, I think, are the cumulative ones. The number of positive tests never goes down. Early on in all this we all wanted to “flatten the curve”… some people seemed to be referring to this curve, which could never be flat.

20200829-cumulative

What we want to know is how many new cases there are. How much should we worry right now? A lot, it seems. But less than before?

20200829-lastseven

And there is something heartening about seeing a negative number. If we plot the approximate second derivative, we can get negative numbers!

20200829-weeklychange

Here we can see, broadly, four phases: the growth of March and early April; the slow decline that lasted through mid-June; the “second wave” of late June and July; the decline of August.

But what’s not visible here is the terror of the fast growth of March, because that was growth from a low base. If we plot the growth rate

{(P(t) - P(t-7)) - (P(t-7) - P(t-14)) \over P(t-7) - P(t-14)}

then the original terror is clear.

20200829-growthrate

This was the period of doubling times, when cases were doubling multiple times a week. I’ve suppressed the growth rates over 100 percent per week – for example there were 44867 new cases in the US between March 16 and 23, compared to 5744 between March 9 and March 16, nearly three doublings in a week. Those days were, quite literally, off the charts.

How many pieces can a puzzle have?

Patrick Honner tweeted a few days ago:

Of course you could easily make a 300-piece puzzle, for example 15 by 20. And you could make a 299-piece puzzle — that factors as 13 times 23. But a 301-piece puzzle would have to be 7 by 43 (assuming that the pieces form a nice grid), and that doesn’t seem like a “reasonable” size for a puzzle.

So which numbers could actually be reasonable sizes for puzzles? This could be nice to know if, say, you want to check whether you have all the edge pieces. (Though a 500-piece puzzle could be 20 x 25, or a reasonable slightly larger size like 19 x 27 = 513 or 18 x 28 = 504.) It’s a nice puzzle nonetheless.

So let’s say that a number is a “puzzle number” if it’s of the form m \times n with m \le n \le 2m — that is, a puzzle has to have an aspect ratio between 1 (square) and 2. (The choice of 2 is arbitrary here, but any other constant would be more arbitrary.) We can easily work out the first few puzzle numbers:

2 = 1 \times 2, 4 = 2 \times 2, 6 = 2 \times 3, 8 = 2 \times 4, 9 = 3 \times 3
which is enough to find them in the OEIS: that’s A071562, defined as “Numbers n such that the sum of the middle divisors of n (A071090) is not zero.” (What’s a “middle divisor”, you ask? It’s a divisor of n that’s between \sqrt{n/2} and \sqrt{2n}.) Titling it that way seems a bit strange to me: I’d have called it “Numbers such that the number of middle divisors of n (A067742) is nonzero”.

The first 10,000 puzzle numbers have been calculated; the 10,000th is 35,956. There are 43 under 100, 336 under 1,000, and 2,932 under 10,000 — it doesn’t appear that they have constant density. It’s not hard to take this further — there are 26,870 under 10^5, 252,756 under 10^6, 2,409,731 under 10^7, and 23,169,860 under 10^8. For example, in R, you can generate the puzzle numbers as follows. (This takes a few seconds to run. Note that you don’t have to compute any prime factorizations.)

N = 10^8
a = rep(0, N)   # a[x] will count the ways to write x = m*n with m <= n <= 2*m

for (m in 1:floor(sqrt(N))){
  max_n = min(2*m, floor(N/m))
  a[m*(m:max_n)] = a[m*(m:max_n)] + 1
}

puzzle = which(a >= 1)
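For comparison with the counts quoted above:

sapply(10^(2:8), function(b) sum(puzzle < b))
# should reproduce 43, 336, 2932, ... (up to the "under" boundary convention)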

I had at first thought this sequence had a natural density, because I was only trying numbers up to a few hundred in my head, and because the number of middle divisors appears to average somewhere around \log(2). There’s a nice heuristic for this – a middle divisor of n is somewhere between \sqrt{n/2} and \sqrt{2n}; the “probability” that a number is divisible by k is 1/k, so the “expected number” of middle divisors of n is

\sum_{k=\sqrt{n/2}}^{\sqrt{2n}} {1 \over k} \approx \int_{\sqrt{n/2}}^{\sqrt{2n}} {1 \over x} \: dx = \log \sqrt{2n} - \log \sqrt{n/2} = \log \sqrt{4} = \log 2.

But there must be more numbers as you go further out that have many middle divisors, and more zeroes. This is similar to the behavior you see for the problem of “how many ways can an integer be written as a sum of two squares”. In that case the “average” is \pi/4, but an integer greater than one can be written as a sum of two squares if and only if its prime decomposition contains no prime congruent to 3 mod 4 raised to an odd power; asymptotically the number of positive integers below x which are the sum of two squares goes like bx/\sqrt{\log x} for a constant b (the constant is the Landau-Ramanujan constant). That connection might stem from the fact that both sequences are multiplicative. For sums of two squares, that follows from the factorization-based characterization or from the fact that

(a^2 + b^2) (c^2 + d^2) = (ac-bd)^2 + (ad + bc)^2.

For puzzle numbers, it follows from a remark of Franklin T. Adams-Watters in the OEIS entry. So numbers which have lots of factors both tend to be sums of two squares and to be puzzle numbers in many different ways; as we get to bigger numbers those start “crowding out” the smaller numbers.

The math internet has noticed this at least once before. John D. Cook wrote in 2014: Jigsaw puzzles that say they have 1,000 pieces have approximately 1,000 pieces, but probably not exactly 1,000. Jigsaw puzzle pieces are typically arranged in a grid, so the number of pieces along a side has to be a divisor of the total number of pieces. This means there aren’t very many ways to make a puzzle with exactly 1,000 pieces, and most have awkward aspect ratios.

1000 as 25-by-40 seems reasonable, but 27 x 37 = 999 or 28 x 36 = 1008 would also work. I assume that you wouldn’t actually see a 999-piece puzzle because the lawyers would claim calling that 1000 was false advertising. A puzzle blog indicates that most “500-piece” puzzles are actually 27 x 19 = 513 and most “1000-piece” puzzles are 38 x 27 = 1026 – both of these aspect ratios are approximations of \sqrt{2}. That is a good aspect ratio to use if you want the ability to make both “500-piece” and “1000-piece” (or more generally, “N-piece” and “2N-piece”) versions of the same puzzle. Similarly the A0, A1, … series of paper sizes used in most of the world have that aspect ratio and therefore an An piece of paper can be cut into two A(n+1) pieces.

Will the rest of the hurricane season be worse?

Isaias will be the ninth tropical storm of the season, likely to form on July 30. This is unusually early. From CNN on July 28: “Once it is given the name Isaias — pronounced (ees-ah-EE-as) — it will be the earliest storm to begin with an “I” on record. The previous record was set on August 7, 2005, part of the busiest season to date.”  Storms are named in alphabetical order, so a storm starting with “I”  is the ninth storm of the season.

This did seem early to me. The recent example that I remember is Irma, which made landfall on the mainland US on September 10, 2017. (I spent a day seeing lots of people with Florida license plates driving around Atlanta, thinking “at least I don’t live in Florida!”, and then the next day having my power out and being glad that the trees the storm took out in my backyard were ones that needed taking out anyway.)

What does this mean for the rest of the hurricane season? Should we expect a busy one?

There’s an R package, HURDAT, that compiles the NOAA hurricane database. For each year, we can count the number of storms. First let’s make a list of all the storms:

library(HURDAT)
library(lubridate)
library(dplyr)
al <- get_hurdat(basin = 'AL')
cutoff_month = 7
cutoff_day = 29
# first record of each storm at tropical storm or hurricane strength,
# tagged by whether it came before or after the cutoff date
storms = al %>% group_by(Key) %>%
  filter(Status %in% c('TS', 'HU')) %>%
  mutate(rk = rank(DateTime)) %>% filter(rk == 1) %>%
  mutate(season_part = ifelse(month(DateTime) < cutoff_month |
           (month(DateTime) == cutoff_month & day(DateTime) <= cutoff_day),
           'early', 'late')) %>%
  select(Key, Name, DateTime, season_part)

and count the storms by time in the season:

storm_counts = storms %>%
  mutate(yr = year(DateTime)) %>%
  group_by(yr, season_part) %>%
  count()

But how well do we know about past storms? Not all storms were observed in the past, because some stay out to sea. The “satellite era” began in 1966, according to this article about this year’s Tropical Storm Gonzalo (the earliest “G” storm ever). So let’s plot the number of storms before and after July 29, separately for each of the two “eras”.

library(tidyr)     # for spread()
library(ggplot2)

satellite_start = 1966
storm_counts %>% spread(season_part, n, 0) %>%
  mutate(satellite = ifelse(yr >= satellite_start, 'satellite', 'pre-satellite')) %>%
  ggplot() + facet_wrap(~satellite) + geom_jitter(aes(x = early, y = late)) +
  stat_smooth(aes(x = early, y = late, group = satellite), method = 'lm') +
  scale_x_continuous(paste0('storms before ', cutoff_month, '/', cutoff_day)) +
  scale_y_continuous(paste0('storms after ', cutoff_month, '/', cutoff_day))

20200729-storms-early-late

But 2005 is a big outlier! I remember that season. Of course it was the year of Katrina. And in December there was a hurricane named Epsilon, which I could not take seriously because epsilon is small. Indeed, if we drop 2005, the effect of early-season storms on late-season ones gets much smaller.

20200729-storms-early-late-no2005
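The model behind the prediction below isn’t spelled out in the code above, so here’s a minimal sketch of one reasonable choice – a linear fit of late-season counts on early-season counts for satellite-era seasons, with the 2005 outlier dropped (the exact choices could differ):

early_late = storm_counts %>%
  spread(season_part, n, 0) %>%
  filter(yr >= satellite_start, yr != 2005)   # drop the 2005 outlier
storm_model = lm(late ~ early, data = early_late)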

So there’s a small effect – but not as large as might be expected! So how many storms will we have this season? Calling predict(storm_model, data.frame(early = 9), interval = 'prediction', level = 0.95) gives a 95% prediction interval for the number of storms in the rest of the season: 12.5 plus or minus 7.2. So we could reasonably have as few as 5 more or as many as 20 more, for a total of 14 to 29.

A 50% prediction interval goes from 10 to 15 more storms in the remaining part of the season, for a total of 19 to 24. For comparison, the quartiles of the distribution of the number of storms after 7/29 in the satellite era are 7 and 12. So if we knew nothing about what’s happened so far, we’d give a 50% prediction interval of 16 to 21 storms for the number at the end of the season.

Let’s see what happens. And let’s hope these storms stay far out to sea, because pandemics and hurricanes don’t seem very compatible.

Double (or triple) and sort

Here’s a sequence I keep mulling over in my head:

1, 2, 4, 8, 16, 23, 46, 29, 58, 116, 223, 446, …

Does it grow without bound? First we should specify what it is. Each number is double the one before, but with the digits sorted into ascending order. The doubling makes the number larger… but the sorting makes it smaller, so perhaps this sequence tops out somewhere?
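Here’s a small R helper for playing with the map (a sketch; the multiplier argument also covers the triple-and-sort variant that comes up below):

times_and_sort = function(x, multiplier = 2) {
  digits = sort(strsplit(as.character(multiplier * x), "")[[1]])
  as.numeric(paste(digits, collapse = ""))   # any leading zeros drop out here
}

x = 1
for (i in 1:20) { cat(x, ""); x = times_and_sort(x) }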

This sequence actually becomes periodic pretty quickly. The full sequence is:

1, 2, 4, 8, 16, 23, 46, 29, 58, 116, 223, 446, 289, 578, 1156, 1223, 2446, 2489, 4789, 5789, 11578, 12356, 12247, 24449, 48889, 77789, 155578, 111356, 122227, 244445, 48889, …
and then it repeats with period 6. The key is in the final step: once we get a number that ends in 5, its double ends in 0, and that 0 sorts to the front and disappears, so the next term is actually shorter. (It is surprisingly hard to keep the mental arithmetic straight enough to do this in my head.) There are other such loops: if you start with 3 you eventually reach

69, 138, 267, 345, 69, …

(note that everything stays divisible by 3 here!). If you start with 7 you get to

167, 334, 668, 1336, 2267, 3445, 689, 1378, 2567, 1345, 269, 358, 167, …

and if you start with 9 you get to

9, 18, 36, 27, 45, 9, …

And there’s a series of these sequences. Every number in that period-6 loop that you reach if you start with 1 has a repeated digit, and you can have fewer or more of those:

49, 89, 178, 356, 127, 245, 49, …
489, 789, 1578, 1356, 1227, 2445, 489, …
4889, 7789, 15578, 11356, 12227, 24445, 4889, …

The structure here is quite a bit richer than in the triple-and-sort sequence, presumably because divisibility by three “plays well” with rearranging digits (the digit sum, and hence the residue mod 9, doesn’t change when you permute digits). There, as far as I can tell, no matter what number you start with you end up at 45. (Tripling that gives 135, and tripling again gives 405, which sorts back to 45.) The reason for this seems to be that any repeated digits, when tripled, give you repeated 3s or 6s, and tripling those gives repeated 0s. For example, consider the start number 9261632697:

  • Tripling gives 27784898091 – which incidentally has a zero – and sorting that gives 1247788899.
  • Tripling again gives 3743366697 – the repeated 7s and 8s “turn into” repeated 3s and 6s — and sorting gives 3334666779.
  • At this point everything collapses – tripling gives 10004000337, and sorting gives 13347.
  • Another collapsing step: 13347 × 3 = 40041, which sorts to 144.
  • The remainder of the sequence is uninteresting:  234, 27, 18, 45, 135, 45, 135, …

This is the “typical” behavior – but it’s a bit surprising to me that it always works. I suspect a proof would be in terms of vectors that show the number of each digit, rather than with the raw numbers themselves.

How full is the pool?

Riddler Express for July 3, 2020:

Suppose a queue of swimmers arrives at [a] pool when it opens at 9 a.m. One at a time, each person randomly picks a lane from among the lanes that are available (i.e., the lane has no swimmer already and is not adjacent to any lanes with swimmers), until no more lanes are available.

At this point, what is the expected number of swimmers in the pool?

For five lanes, the pool will be full when lanes 1, 3, and 5; 1 and 4; 2 and 5; or 2 and 4 are occupied.

If lane 2 is first to be filled then we will certainly end up with just two occupied lanes (2 and 4); similarly with 4.

If lane 3 is first we certainly end up with three occupied lanes, since 1 and 5 remain and they don’t interfere with each other.

If the first swimmer uses lane 1 we actually have to think. The remaining lanes are 3, 4, and 5. If the second swimmer fills one of the “new” edge lanes (3 or 5) first then we’ll end up with three full lanes; if the second swimmer fills up 4 we end up with two full lanes. So if the first swimmer fills lane 1 we have an expectation of 2 2/3 lanes full, and similarly for lane 5.

Putting it all together the expected number of full lanes is (2/5) (2) + (1/5) (3) + (2/5) (8/3) = 37/15.

This is sort of a discrete version of “Renyi’s parking problem”: if you have a street of length x and cars of length 1 park at random, how full is the street expected to be when nobody else can park?

In the n-lane case there’s a recursive solution. Let f(n) be the expected number of full lanes in an n-lane pool. Clearly f(1) = 1 and f(2) = 1; we already worked out f(5) = 37/15.

To find f(n), consider where the first swimmer goes.

  • If they use lane 1 or lane n then there is one remaining sub-pool of n-2 lanes.
  • If they use lane 2 or lane n-1 then there is one remaining sub-pool of n-3 lanes.
  • If they use lane k with 3 \le k \le n-2 then there are two remaining sub-pools of k-2 and n-k-1 lanes.

So this gives us, if n \ge 4,
f(n) = 1 + {2 \over n} f(n-2) + {2 \over n} f(n-3) + \sum_{k=3}^{n-2} {1 \over n} (f(k-2) + f(n-k-1))
with the base cases f(1) = 1, f(2) = 1, f(3) = 5/3.
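Here’s a direct implementation of that recursion, as a quick sketch:

f = c(1, 1, 5/3)   # base cases f(1), f(2), f(3)
for (n in 4:1000) {
  k = if (n >= 5) 3:(n-2) else integer(0)   # interior first-swimmer positions
  f[n] = 1 + (2/n)*f[n-2] + (2/n)*f[n-3] + sum(f[k-2] + f[n-k-1])/n
}
f[5]           # 37/15 = 2.4666...
f[1000]/1000   # roughly 0.4323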

If we compute this for large n we get f(n) \sim 0.4323n, which agrees with the Monte Carlo simulations in Georgiou, Kranakis, and Krizanc 2009. (Simulation isn’t necessary here, but it is necessary if you consider an analogous problem on a grid.) The constant 0.4323 is (1-e^{-2})/2. Kranakis and Krizanc have some online notes on the urinal problem as well, and the problem goes back at least to a problem posed in the SIAM Review in 1962 by Freedman and Shepp.

(2020-07-07: fixed numerical expression of constant 0.4323)

The smallest non-obvious composite

It’s been a while, I know.

My partner sent me a link to this tweet yesterday:

Testing divisibility by 17 is hard!  There’s a test: 10A + B is divisible by 17 if and only if A – 5B is.  (Proof: 10A + B is divisible by 17 if and only if 12(10A + B) is, since 12 and 17 are relatively prime; then observe 12(10A + B) – 17(7A + B) = A – 5B.) But nobody has internalized this test.
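In R, one step of that reduction applied to 51 looks like this (a tiny sketch):

reduce_17 = function(x) (x %/% 10) - 5 * (x %% 10)   # maps 10A + B to A - 5B
reduce_17(51)   # 5 - 5*1 = 0, which is divisible by 17, so 51 is too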

But 51 is also divisible by 3, and that’s easy to see if you know the usual test: a number is divisible by 3 if and only if the sum of its digits is.  5 + 1 = 6.

So assume you know this test. Also assume you know how to test for numbers that are divisible by 2 or 5 (you can look at the last digit), 11 (for two-digit numbers you can just check if the two digits are the same), and you know the squares.  Then the smallest number that looks prime but isn’t is the product of the first two distinct primes I haven’t mentioned, 7 × 13 = 91.

Back in 2016 Christian Lawson-Perfect collected data on this from a game he made that asked people which numbers were prime; 51 is the number people got wrong most often.  But 91 is the composite most often identified as prime that’s not a multiple of three.

57 also looks prime.