Twenty-nine

Evelyn Lamb has a delightful page-a-day calendar. Today (yes, a few days late) I learned that TWENTY NINE is the only word in English that is written with a number of straight-line strokes equal to its value. (This is in a sans serif font; in particular I is one stroke, not three.)

English is surprisingly rich in number words made entirely of straight-line letters. In the Latin alphabet the letters that are all straight lines are A, E, F, H, I, K, L, M, N, T, V, W, X, Y, Z. That gives the following straight-line words that appear in number names, with the number of strokes each takes to write:

– having more strokes than their value: FIVE (10), NINE (11), ELEVEN (19), TWELVE (18), FIFTEEN (20), NINETEEN (24)
– having fewer strokes than their value: TEN (9), TWENTY (18), FIFTY (12), NINETY (16)

Any word that equals its own value must combine elements from both these lists, and it’s not hard to see that TWENTY NINE is the only one that works.  The full list of straight-line numbers in English is: 5, 9, 10, 11, 12, 15, 19, 20, 25, 29, 50, 55, 59, 90, 95, 99.  (All the larger numbers include “HUNDRED” or “THOUSAND” or something ending in “-ION”, so we can stop there.)
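
If you want to check the arithmetic, here's a throwaway R sketch; the per-letter stroke counts are my reading of the post's sans-serif convention, and only the letters that actually occur in these words are included.

strokes <- c(E = 4, F = 3, I = 1, L = 2, N = 3, T = 2, V = 2, W = 4, Y = 3)
word_strokes <- function(w) sum(strokes[strsplit(w, "")[[1]]])
sapply(c("TEN", "TWENTY", "NINE", "TWENTYNINE"), word_strokes)   # 9, 18, 11, 29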

Evelyn suggests this as part of how to memorize the largest known prime (it’s a Mersenne prime, and she suggests doing it in binary so every bit is 1, so the hard part is remembering where you are).

It’s hard to even find straight-line numbers in other languages, because a lot of the alphabet is missing.  They include:

  • German ZWEI (2), ZEHN (10), ELF (11)
  • Dutch EEN (1), TWEE (2), ZEVEN (7), TIEN (10), ELF (11) (edited 9/3: also TWAALF (12))
  • Norwegian has a lot: EN (1), FEM (5), ATTE (8), NI (9), TI (10), ELLEVE (11), FEMTEN (15), ATTEN (18), NITTEN (19), FEMTI (50), ATTI (80), NITTI (90) (and also 55 = FEMTIFEM, 58, 59, 85, 88, 89, 95, 98, 99)
  • As does Danish: EN, FEM, NI, TI, ELLEVE, FEMTEN, ATTEN, NITTEN, with the same meanings as in Norwegian, but then the bigger numbers are formed irregularly.
  • Swedish has fewer: EN (1), FEM (5), ATTA (8), ELVA (11) – the numbers are very similar to Norwegian, but the “-teen” ending is “-ton” rather than Norwegian’s “-ten”, and O isn’t a straight-line letter.
  • Spanish VEINTE (20), MIL (1000), and MIL VEINTE (1020)
  • Italian VENTI (20), MILLE (1000), and MILLEVENTI (1020)
  • Portuguese VINTE (20), MIL (1000) and MIL E VINTE (1020)
  • French MILLE (1000), but not VINGT (20)

ELVA (Swedish for 11) is the only other one I could find that also has the self-referential property, and the Chinese numerals 一, 二, 三 (one, two, and three strokes for 1, 2, and 3) if you want to stray from the alphabetic world. (Edited 9/3: also Dutch TIEN = 10, which I inexplicably missed before.)

(This post was edited to add the list of numbers and to clarify that ELVA is not the only straight-line number outside of English, but the only one I could find with this self-referential property.)

The second derivative I compute every day

Every evening I compute an approximate second derivative. It comes from the data at the COVID Tracking Project. Specifically, it’s

(P(t) - P(t-7)) - (P(t-7) - P(t-14))

where P(t) is the cumulative number of positive coronavirus cases in the US through day t. So this is the number of new cases this week minus the number of new cases last week. Every day I hope it’s negative. Lately it has been. Yesterday it was

(5884053 – 5595343) – (5595343 – 5282758) = 288710 – 312585 = -23875.
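
In code this is a one-liner; here’s a minimal R sketch using the three cumulative totals quoted above (in practice P(t) comes from the COVID Tracking Project’s daily numbers):

second_diff <- function(P_t, P_t7, P_t14) (P_t - P_t7) - (P_t7 - P_t14)
second_diff(5884053, 5595343, 5282758)   # -23875: this week's new cases minus last week's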

Who would have thought 288 thousand cases in a week would seem good? But derivatives matter. That would have been terrifying on June 29 (282k cases from June 22 – 29; 194k cases from June 15 – 22). But now it feels like a reason for maybe a little bit of hope.

The most hopeless plots, I think, are the cumulative ones. The number of positive tests never goes down. Early on in all this we all wanted to “flatten the curve”… some people seemed to be referring to this curve, which could never be flat.

[Figure: 20200829-cumulative – cumulative positive tests]

What we want to know is how many new cases there are. How much should we worry right now? A lot, it seems. But less than before?

[Figure: 20200829-lastseven – positive tests in the last seven days]

And there is something heartening about seeing a negative number. If we plot the approximate second derivative, we can get negative numbers!

[Figure: 20200829-weeklychange – week-over-week change in new cases]

Here we can see, broadly, four phases: the growth of March and early April; the slow decline that lasted through mid-June; the “second wave” of late June and July; the decline of August.

But what’s not visible here is the terror of the fast growth of March, because that was growth from a low base. If we plot the growth rate

{(P(t) - P(t-7)) - (P(t-7) - P(t-14)) \over P(t-7) - P(t-14)}

then the original terror is clear.

[Figure: 20200829-growthrate – weekly growth rate of new cases]

This was the period of doubling times, when cases were doubling multiple times a week. I’ve suppressed the growth rates over 100 percent per week – for example there were 44867 new cases in the US between March 16 and 23, compared to 5744 between March 9 and March 16, nearly three doublings in a week. Those days were, quite literally, off the charts.

How many pieces can a puzzle have?

Patrick Honner tweeted a few days ago:

Of course you could easily make a 300-piece puzzle, for example 15 by 20. And you could make a 299-piece puzzle — that factors as 13 times 23. But a 301-piece puzzle would have to be 7 by 43, assuming that the pieces form a nice grid, and that doesn’t seem like a “reasonable” size for a puzzle.

So which numbers could actually be reasonable sizes for puzzles? This could be nice to know if, say, you want to check whether you have all the edge pieces, although knowing the nominal count isn’t enough: a “500-piece” puzzle could be 20 x 25, or some reasonable slightly larger size like 19 x 27 = 513 or 18 x 28 = 504. It’s a nice puzzle nonetheless.

So let’s say that a number is a “puzzle number” if it’s of the form m \times n with m \le n \le 2m — that is, a puzzle has to have an aspect ratio between 1 (square) and 2. (The choice of 2 is arbitrary here, but any other constant would be more arbitrary.) We can easily work out the first few puzzle numbers:

2 = 1 \times 2, 4 = 2 \times 2, 6 = 2 \times 3, 8 = 2 \times 4, 9 = 3 \times 3
which is enough to find them in the OEIS: that’s A071562, defined as “Numbers n such that the sum of the middle divisors of n (A071090) is not zero.” (What’s a “middle divisor”, you ask? It’s a divisor of n that’s between \sqrt{n/2} and \sqrt{2n}.) Titling it that way seems a bit strange to me: I’d have called it “Numbers such that the number of middle divisors of n (A067742) is nonzero”.

The first 10,000 puzzle numbers have been calculated; the 10,000th is 35,956. There are 43 under 100, 336 under 1,000, and 2,932 under 10,000 — it doesn’t appear that they have constant density. It’s not hard to take this further — there are 26,870 under 10^5, 252,756 under 10^6, 2,409,731 under 10^7, and 23,169,860 under 10^8. For example, in R, you can generate the puzzle numbers as follows. (This takes a few seconds to run. Note that you don’t have to compute any prime factorizations.)

N = 10^8
a = rep(0, N)   # a[k] will count the ways to write k = m*n with m <= n <= 2*m

for (m in 1:floor(sqrt(N))) {
  max_n = min(2*m, floor(N/m))            # n can't exceed 2m, and m*n can't exceed N
  a[m*(m:max_n)] = a[m*(m:max_n)] + 1
}

puzzle = which(a >= 1)
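
With the vector in hand, the counts quoted above can be read off directly; a quick usage sketch:

sapply(10^(2:5), function(x) sum(puzzle <= x))   # counts of puzzle numbers up to 10^2, ..., 10^5
puzzle[10000]                                    # the 10,000th puzzle number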

I had at first thought this sequence had a natural density, because I was only trying numbers up to a few hundred in my head, and because the number of middle divisors appears to average somewhere around \log(2). There’s a nice heuristic for this – a middle divisor of n is somewhere between \sqrt{n/2} and \sqrt{2n}; the “probability” that a number is divisible by k is 1/k, so the “expected number” of middle divisors of n is

\sum_{k=\sqrt{n/2}}^{\sqrt{2n}} {1 \over k} \approx \int_{\sqrt{n/2}}^{\sqrt{2n}} {1 \over x} \: dx = \log \sqrt{2n} - \log \sqrt{n/2} = \log \sqrt{4} = \log 2.
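
As a quick numerical check of that heuristic (a throwaway sketch in base R), tallying middle divisors for n up to 10^5 gives an average close to \log 2:

N <- 10^5
middle <- rep(0, N)
for (d in 1:floor(sqrt(2*N))) {
  n <- seq(d, N, by = d)                        # multiples of d
  ok <- (d >= sqrt(n/2)) & (d <= sqrt(2*n))     # d is a middle divisor of n
  middle[n[ok]] <- middle[n[ok]] + 1
}
mean(middle)   # roughly log(2) = 0.693...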

But there must be more numbers as you go further out that have many middle divisors, and more zeroes. This is similar to the behavior you see for the problem of “how many ways can an integer be written as a sum of two squares”. In that case the “average” is \pi/4, but an integer greater than one can be written as a sum of two squares if and only if its prime decomposition contains no prime congruent to 3 mod 4 raised to an odd power; asymptotically the number of positive integers below x which are the sum of two squares goes like bx/\sqrt{\log x} for a constant b (the constant is the Landau-Ramanujan constant). That connection might stem from the fact that both sequences are multiplicative. For sums of two squares, that follows from the factorization-based characterization or from the fact that

(a^2 + b^2) (c^2 + d^2) = (ac-bd)^2 + (ad + bc)^2.

For puzzle numbers, it follows from a remark of Franklin T. Adams-Watters in the OEIS entry. So numbers which have lots of factors both tend to be sums of two squares and to be puzzle numbers in many different ways; as we get to bigger numbers those start “crowding out” the smaller numbers.

The math internet has noticed this at least once before. John D. Cook wrote in 2014: Jigsaw puzzles that say they have 1,000 pieces have approximately 1,000 pieces, but probably not exactly 1,000. Jigsaw puzzle pieces are typically arranged in a grid, so the number of pieces along a side has to be a divisor of the total number of pieces. This means there aren’t very many ways to make a puzzle with exactly 1,000 pieces, and most have awkward aspect ratios.

1000 as 25-by-40 seems reasonable, but 27 x 37 = 999 or 28 x 36 = 1008 would also work. I assume that you wouldn’t actually see a 999-piece puzzle because the lawyers would claim calling that 1000 was false advertising. A puzzle blog indicates that most “500-piece” puzzles are actually 27 x 19 = 513 and most “1000-piece” puzzles are 38 x 27 = 1026 – both of these aspect ratios are approximations of \sqrt{2}. That is a good aspect ratio to use if you want the ability to make both “500-piece” and “1000-piece” (or more generally, “N-piece” and “2N-piece”) versions of the same puzzle. Similarly the A0, A1, … series of paper sizes used in most of the world have that aspect ratio and therefore an An piece of paper can be cut into two A(n+1) pieces.

Will the rest of the hurricane season be worse?

Isaias will be the ninth tropical storm of the season, likely to form on July 30. This is unusually early. From CNN on July 28: “Once it is given the name Isaias — pronounced (ees-ah-EE-as) — it will be the earliest storm to begin with an “I” on record. The previous record was set on August 7, 2005, part of the busiest season to date.”  Storms are named in alphabetical order, so a storm starting with “I”  is the ninth storm of the season.

This did seem early to me. The recent example that I remember is Irma, which made landfall on the mainland US on September 10, 2017. (I spent a day seeing lots of people with Florida license plates driving around Atlanta, thinking “at least I don’t live in Florida!”, and then the next day having my power out and being glad that the trees the storm took out in my backyard were ones that needed taking out anyway.)

What does this mean for the rest of the hurricane season? Should we expect a busy one?

There’s an R package, HURDAT, that compiles the NOAA hurricane database. For each year, we can count the number of storms. First let’s make a list of all the storms:

library(HURDAT)
library(dplyr)       # needed for the pipe and the verbs below
library(lubridate)

al <- get_hurdat(basin = 'AL')
cutoff_month = 7
cutoff_day = 29
storms = al %>% group_by(Key) %>% filter(Status %in% c('TS', 'HU')) %>%
  mutate(rk = rank(DateTime)) %>% filter(rk == 1) %>%   # first record of each storm at TS/HU strength
  mutate(season_part = ifelse(month(DateTime) < cutoff_month |
    (month(DateTime) == cutoff_month & day(DateTime) <= cutoff_day), 'early', 'late')) %>%
  select(Key, Name, DateTime, season_part)

and count the storms by time in the season:

storm_counts = storms %>% mutate(yr = year(DateTime)) %>%
group_by(yr, season_part) %>% count()

But how complete are these counts? Not all storms were observed in the past, because some stay out to sea. The “satellite era” began in 1966, according to this article about this year’s Tropical Storm Gonzalo (the earliest “G” storm ever). So let’s plot the number of storms before and after July 29, separately for each of the two “eras”.

library(tidyr)      # for spread()
library(ggplot2)

satellite_start = 1966
storm_counts %>% spread(season_part, n, 0) %>%
  mutate(satellite = ifelse(yr >= satellite_start, 'satellite', 'pre-satellite')) %>%
  ggplot() + facet_wrap(~satellite) + geom_jitter(aes(x = early, y = late)) +
  stat_smooth(aes(x = early, y = late, group = satellite), method = 'lm') +
  scale_x_continuous(paste0('storms before ', cutoff_month, '/', cutoff_day)) +
  scale_y_continuous(paste0('storms after ', cutoff_month, '/', cutoff_day))

[Figure: 20200729-storms-early-late – late-season vs. early-season storm counts, pre-satellite and satellite eras]

But 2005 is a big outlier! I remember that season. Of course it was the year of Katrina. And in December there was a hurricane named Epsilon, which I could not take seriously because epsilon is small. Indeed if we drop it the effect of early-season storms on late-season ones gets much smaller.

[Figure: 20200729-storms-early-late-no2005 – the same plot with 2005 removed]

So there’s a small effect – but not as large as might be expected! So how many storms will we have this season? Calling predict(storm_model, data.frame(early = 9), interval = 'prediction', level = 0.95) gives a 95% prediction interval for the number of storms in the rest of the season: 12.5 plus or minus 7.2. So we could reasonably have as few as 5 more or as many as 20 more, for a total of 14 to 29.
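
The fit behind storm_model isn’t shown above; a minimal sketch consistent with the text (satellite era only, with the 2005 outlier dropped) would be something like:

storm_wide = storm_counts %>% spread(season_part, n, 0) %>%
  filter(yr >= satellite_start, yr != 2005)
storm_model = lm(late ~ early, data = storm_wide)
predict(storm_model, data.frame(early = 9), interval = 'prediction', level = 0.95)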

A 50% prediction interval goes from 10 to 15 more storms in the remaining part of the season, for a total of 19 to 24. For comparison, the quartiles of the distribution of the number of storms after 7/29 in the satellite era are 7 and 12. So if we knew nothing about what’s happened so far, we’d give a 50% prediction interval of 16 to 21 storms for the number at the end of the season.

Let’s see what happens. And let’s hope these storms stay far out to sea, because pandemics and hurricanes don’t seem very compatible.

Double (or triple) and sort

Here’s a sequence I keep mulling over in my head:

1, 2, 4, 8, 16, 23, 46, 29, 58, 116, 223, 446, …

Does it grow without bound? First we should specify what it is. Each number is double the one before, but with the digits sorted into ascending order. The doubling makes the number larger… but the sorting makes it smaller, so perhaps this sequence tops out somewhere?
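
A few lines of R (a throwaway sketch) make it easy to iterate the map; the multiplier argument also covers the triple-and-sort variant discussed below:

multiply_sort <- function(x, k = 2) {
  digits <- sort(strsplit(as.character(k * x), "")[[1]])   # sort the digits of k*x
  as.numeric(paste(digits, collapse = ""))                 # leading zeros drop out here
}
x <- 1
for (i in 1:30) { cat(x, ""); x <- multiply_sort(x) }      # 1 2 4 8 16 23 46 29 58 116 ...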

This sequence actually becomes periodic pretty quickly. The full sequence is:

1, 2, 4, 8, 16, 23, 46, 29, 58, 116, 223, 446, 289, 578, 1156, 1223, 2446, 2489, 4789, 5789, 11578, 12356, 12247, 24449, 48889, 77789, 155578, 111356, 122227, 244445, 48889, …
and then it repeats with period 6. The key is in the final step: once we get a number that ends in 5, its double ends in 0, and that 0 sorts to the front and disappears, so the next term is actually shorter. (It is surprisingly hard to keep the mental arithmetic straight well enough to do this in my head.) There are other such loops: if you start with 3 you eventually reach

69, 138, 267, 345, 69, …

(note that everything stays divisible by 3 here!). If you start with 7 you get to

167, 334, 668, 1336, 2267, 3445, 689, 1378, 2567, 1345, 269, 358, 167, …

and if you start with 9 you get to

9, 18, 36, 27, 45, 9, …

And there’s a series of these sequences. Every number in that period-6 loop that you reach if you start with 1 has a repeated digit, and you can have fewer or more of those:

49, 89, 178, 356, 127, 245, 49, …
489, 789, 1578, 1356, 1227, 2445, 489, …
4889, 7789, 15578, 11356, 12227, 24445, 4889, …

The structure here is quite a bit richer than in the triple-and-sort sequence, presumably because divisibility by three “plays well” with rearranging digits. There, as far as I can tell, no matter what number you start with you end up at 45. (Tripling that gives 135, and tripling again gives 405, which sorts back to 45.) The reason for this seems to be that any repeated digits, when tripled, give you repeated 3s or 6s, and tripling those gives repeated 0s. For example, consider the start number 9261632697:

  • Tripling gives 27784898091 – which incidentally has a zero – and sorting that gives 1247788899.
  • Tripling again gives 3743366697 – the repeated 7s and 8s “turn into” repeated 3s and 6s — and sorting gives 3334666779.
  • At this point everything collapses – tripling gives 10004000337, and sorting gives 13347.
  • Another collapsing step: 13347 × 3 = 40041, which sorts to 144.
  • The remainder of the sequence is uninteresting:  234, 27, 18, 45, 135, 45, 135, …

This is the “typical” behavior – but it’s a bit surprising to me that it always works. I suspect a proof would be in terms of vectors that show the number of each digit, rather than with the raw numbers themselves.

How full is the pool?

Riddler Express for July 3, 2020:

Suppose a queue of swimmers arrives at [a] pool when it opens at 9 a.m. One at a time, each person randomly picks a lane from among the lanes that are available (i.e., the lane has no swimmer already and is not adjacent to any lanes with swimmers), until no more lanes are available.

At this point, what is the expected number of swimmers in the pool?

For five lanes the pool will be full when lanes 1, 3, and 5; 1 and 4; 2 and 4; or 2 and 5 are occupied.

If lane 2 is first to be filled then we will certainly end up with just two occupied lanes (either 2 and 4 or 2 and 5); similarly if lane 4 is first.

If lane 3 is first we certainly end up with three occupied lanes, since 1 and 5 remain and they don’t interfere with each other.

If the first swimmer uses lane 1 we actually have to think. The remaining lanes are 3, 4, and 5. If the second swimmer fills one of the “new” edge lanes (3 or 5) first then we’ll end up with three full lanes; if the second swimmer fills up 4 we end up with two full lanes. So if the first swimmer fills lane 1 we have an expectation of 2 2/3 lanes full, and similarly for lane 5.

Putting it all together the expected number of full lanes is (2/5) (2) + (1/5) (3) + (2/5) (8/3) = 37/15.

This is sort of a discrete version of “Rényi’s parking problem”: if you have a street of length x and cars of length 1 park at random, how full is the street expected to be when nobody else can park?

In the n-lane case there’s a recursive solution. Let f(n) be the expected number of full lanes in an n-lane pool. Clearly f(1) = 1, f(2) = 1; we already worked out f(5) = 37/15.

To find f(n), consider where the first swimmer goes.

  • If they use lane 1 or lane n then there is one remaining sub-pool of n-2 lanes.
  • If they use lane 2 or lane n-1 then there is one remaining sub-pool of n-3 lanes.
  • If they use lane k with 3 \le k \le n-2 then there are two remaining sub-pools of k-2 and n-k-1 lanes.

So this gives us, if n \ge 4,
f(n) = 1 + {2 \over n} f(n-2) + {2 \over n} f(n-3) + \sum_{k=3}^{n-2} {1 \over n} (f(k-2) + f(n-k-1))
with the base cases f(1) = 1, f(2) = 1, f(3) = 5/3.
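
Here is a short R sketch of that recursion (just a check; the sum is empty at n = 4, which needs a little care in R):

f <- numeric(2000)
f[1:3] <- c(1, 1, 5/3)
for (n in 4:2000) {
  s <- if (n >= 5) { k <- 3:(n-2); sum(f[k-2] + f[n-k-1]) } else 0
  f[n] <- 1 + (2/n)*f[n-2] + (2/n)*f[n-3] + s/n
}
f[5]             # 37/15, as computed above
f[2000] / 2000   # approaching (1 - exp(-2))/2 = 0.4323...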

If we compute this for large n we get f(n) \sim 0.4323n, which agrees with the Monte Carlo simulations in Georgiou, Kranakis, and Krizanc 2009. (Simulation isn’t necessary here but it’s necessary if you consider an analogous problem on a grid.) The constant 0.4323 is (1-e^{-2})/2. Kranakis and Krizanc have some online notes on the urinal problem as well, and the problem goes back at least to a problem posed in the SIAM Review in 1962 by Freedman and Shepp.

(2020-07-07: fixed numerical expression of constant 0.4323)

The smallest non-obvious composite

It’s been a while, I know.

My partner sent me a link to this tweet yesterday:

Testing divisibility by 17 is hard!  There’s a test: 10A + B is divisible by 17 if and only if A – 5B is.  (Proof: 10A + B is divisible by 17 if and only if 12(10A + B) is, since 12 and 17 are relatively prime; then observe 12(10A + B) – 17(7A + B) = A – 5B.) But nobody has internalized this test.
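
As a sketch, here’s what one step of the test looks like in R (the function name is just for illustration):

step17 <- function(n) (n %/% 10) - 5 * (n %% 10)   # maps 10A + B to A - 5B
step17(51)   # 0, a multiple of 17, so 51 is divisible by 17
step17(91)   # 4, not a multiple of 17, so 91 isn't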

But 51 is also divisible by 3, and that’s easy to see if you know the usual test: a number is divisible by 3 if and only if the sum of its digits is.  5 + 1 = 6.

So assume you know this test. Also assume you know how to test for numbers that are divisible by 2 or 5 (you can look at the last digit), 11 (for two-digit numbers you can just check if the two digits are the same), and you know the squares.  Then the smallest number that looks prime but isn’t is the product of the first two distinct primes I haven’t mentioned, 7 × 13 = 91.

Back in 2016 Christian Lawson-Perfect collected data on this from a game he made that asked people which numbers were prime; 51 is the number people got wrong most often.  But 91 is the composite most often identified as prime that’s not a multiple of three.

57 also looks prime.

A tale of zloty and forint

I had a wonderful vacation in Vienna and Budapest in May.  (Vienna is entirely irrelevant to this story.)

My wife every so often talks about taking a trip to Poland for no particular reason. (Readers: why do I want to visit Poland?)

So I started looking through Wikitravel to answer the above question, and every so often it mentions a price. I need to get the exchange rate to put it in context. So I type “usd to pln” into Google:

[Screenshot: Google chart for “usd to pln”]

In the chart, it looks like the zloty has been appreciating against the dollar. Is that because the dollar has been going down or because the zloty has been going up? Well, let’s see if the forint has made the same move. Type “usd to huf” into Google:

[Screenshot: Google chart for “usd to huf”]

It has! Hmm, I guess the dollar hasn’t been doing so well in the past few months. You might wonder why I checked the exchange rate for Hungarian forints and not for euros. The reason is that I know the forint was trading at around 275-280 to the dollar in May, whereas I can’t remember what the euro was trading at. It looks like it was around $1.15 US but I used “vacation math” and just treated prices in euros as if they were in US dollars.

And then because I like typing random things into Google, try “pln to huf”:

[Screenshot: Google chart for “pln to huf”]

Is there some currency arbitrage opportunity? Of course not, the math works out perfectly:

[Screenshot: the cross-rate arithmetic]

Can you even directly trade zloty for forint, or would you end up going via some major currency anyway?

Also, somewhere in Budapest you can see this:

[Photo: IMG_1444 – a Rubik’s cube display in Budapest]

and here’s a closeup of the text at the bottom:

[Photo: IMG_1447 – closeup of the text at the bottom]

A transcription of the text (sorry if I have introduced any errors):

A Rubik-kocka bármely keverés után legfeljebb 20 tekerésből kirakható, de az összetettsége miatt összesen 43252003274489856000-féle eltérő állás hozható létre. “Mindig van megoldás és nem is csak egy!” (Rubik Ernő)

I do not read Hungarian (nor any language related to it, because there are very few such languages) but I know what this says. I suspect some of you do too.

The riddle of the number chain

This week’s “Riddler Classic” puzzle, from FiveThirtyEight:

From Itay Bavly, a chain-link number problem:

You start with the integers from one to 100, inclusive, and you want to organize them into a chain. The only rules for building this chain are that you can only use each number once and that each number must be adjacent in the chain to one of its factors or multiples. For example, you might build the chain:

4, 12, 24, 6, 60, 30, 10, 100, 25, 5, 1, 97

You have no numbers left to place after 97, leaving you with a finished chain of length 12.

What is the longest chain you can build?

It turns out this question is literally as old as I am. Carl Pomerance wrote a paper on it: “On the longest simple path in the divisor graph”, Proc. Southeastern Conf. Combinatorics, Graph Theory, and Computing, Boca Raton, Florida, 1983, Cong. Num. 40 (1983), 291–304. The result of this paper is that if f(n) is the length of the longest chain in \{1, 2, \ldots, n\}, then f(n) = o(n). That is, as n gets large, we can’t “use up” a constant fraction of the integers. This paper was dedicated to Erdős on his seventieth birthday.

But the paper is not constructive. The natural thing to do is to think of this as a graph-theory problem. Construct a graph with vertices 1, 2, \ldots, n and draw an edge between a and b if a | b. But finding the longest path in a graph is NP-hard.

For example, with n = 10 this graph appears as follows:

[Figure: graph-10 – the divisor graph on 1 through 10]

and you can see that 7 is going to cause something of a problem. If we just avoid it we can find the path

9, 3, 6, 1, 4, 8, 2, 10, 5

so f(10) \ge 9. You can also see something of the structure of this problem – numbers which share prime factors cluster together in the graph, and we end up putting numbers with the same prime factors together in the sequence. Here all the multiples of three are at the beginning, all the multiples of two in the middle, and all the multiples of five at the end.

In fact f(10) = 9, which we can figure out by arguing around 7. We can’t have a path that has 7 in the middle because 7 has degree 1 in this graph, so if 7 occurs at all it must occur at the start or the end. Without loss of generality let’s say we have a path of length 10 that starts with 7. It must begin 7, 1, and then continue with a path in this graph that goes through all eight remaining vertices:

[Figure: graph-10-without-1-7 – the same graph with 1 and 7 removed]

but such a path clearly doesn’t exist – if it existed it would have to run between the two vertices of degree 1, which are 9 and 5, but one can’t navigate properly around 2: the vertices 4 and 8 are connected to the rest of the graph only through 2, so a path covering them would have to end at one of them, and the endpoints are already forced to be 9 and 5.
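
For n this small you can also just check by brute force; here’s a quick R sketch (an exhaustive search over simple paths, fine for n = 10 but hopeless for n = 100):

n <- 10
adj <- lapply(1:n, function(i) setdiff(which(i %% (1:n) == 0 | (1:n) %% i == 0), i))
longest <- 0
dfs <- function(v, visited) {
  visited[v] <- TRUE
  longest <<- max(longest, sum(visited))
  for (w in adj[[v]]) if (!visited[w]) dfs(w, visited)
}
for (v in 1:n) dfs(v, rep(FALSE, n))
longest   # 9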

So what to do? I know nothing better than trial and error. We can quickly build the following path of length 30:

64, 32, 96, 48, 16, 8, 24, 72, 36, 12, 4, 2, 6, 18, 54, 27, 9, 81, 3, 1, 5, 15, 45, 90, 30, 10, 60, 20, 40, 80

which basically zig-zags through the 3-smooth numbers (that is, numbers whose only prime factors are 2 and 3) and then the numbers you get if you multiply those by 5. Then we want to insert little excursions out along different primes. For example, between 3 and 1 we can insert

3, 51, 17, 68, 34, 1

which is essentially the path 3, 1, 4, 2 multiplied by 17. Doing a few of these insertions gives a length-49 sequence

64, 32, 96, 48, 16, 8, 24, 72, 36, 12,
4, 84, 28, 7, 14,
2, 88, 44, 22, 11, 99, 33, 66,
6, 18, 54, 27, 9, 81,
3, 51, 17, 68, 34,
1, 38, 76, 19, 95,
5, 15, 45, 90, 30, 10, 60, 20, 40, 80

where the newly inserted bits go “out along” the primes 7, 11, 17, and 19 respectively. (Don’t ask me what happened to 13.) We can keep hacking away at this, for example sticking some multiples of 25 in between 5 and 15 to get

64, 32, 96, 48, 16, 8, 24, 72, 36, 12,
4, 84, 28, 7, 14,
2, 88, 44, 22, 11, 99, 33, 66,
6, 18, 54, 27, 9, 81,
3, 51, 17, 68, 34,
1, 38, 76, 19, 95,
5, 100, 50, 25, 75, 15, 45, 90, 30, 10, 60, 20, 40, 80

Since there are no 13s in here, let’s put them in. One way to do this is in the second line, which we can replace with the second and third lines here:

64, 32, 96, 48, 16, 8, 24, 72, 36, 12,
4, 56, 14, 42, 21, 84, 28,
7, 91, 13, 39, 78, 26, 52,
2, 88, 44, 22, 11, 99, 33, 66,
6, 18, 54, 27, 9, 81,
3, 51, 17, 68, 34,
1, 38, 76, 19, 95,
5, 100, 50, 25, 75, 15, 45, 90, 30, 10, 60, 20, 40, 80

and this has length 62. Things get trickier now – the idea becomes to look for numbers that aren’t in the sequence but have all their factors small. 63 isn’t in the sequence and it feels insertable, and if we replace 27, 9, 81, 3 with 27, 81, 9, 63, 3 we can do it. A few more little moves like this get us up to length 67:

64, 32, 96, 48, 16, 8, 24, 72, 36, 12,
4, 84, 28, 56, 14, 98, 49,
7, 91, 13, 39, 78, 26, 52,
2, 88, 44, 22, 11, 99, 33, 66,
6, 18, 54, 27, 81,
9, 63, 21, 42,
3, 51, 17, 68, 34,
1, 38, 76, 19, 95,
5, 80, 40, 20, 100, 50, 25, 75,
15, 45, 90, 30, 60, 10, 70, 35

which is the solution I submitted.

The “missing” numbers now are

23 29 31 37 41 43 46 47 53 55 57 58 59 61 62 65 67 69 71 73 74 77 79 82 83 85 86 87 89 92 93 94 97

and you can see what the basic problem ends up being.  All of these have at least one prime factor that’s greater than or equal to 11. Can we squeeze any of these in? In particular 55 and 77 seem like they could be inserted along this basic route, although I can’t figure out how.

Picturing math

The Metropolitan Museum of Art (New York) has an exhibit Picturing Math: Selections from the Department of Drawings and Prints, running from January 31 to May 8.  I was in New York mid-January, and I may have reason to be there again in June or July, so this seems like they’re setting their schedules to toy with me.  Fortunately, you can “view exhibition objects” at their web site or you can look at this article about the exhibit by Allison Meier for Hyperallergic.