Sticky Notes is an excellent podcast about classical music, by the conductor Joshua Weilerstein. A recent episode was on Bruckner’s fourth symphony, a piece I wasn’t familiar with. In Bruckner’s work, there appears the “Bruckner rhythm”, which is two quarter notes followed by a triplet of three “quarter notes”. And in musical terminology, that’s entirely correct!

But if the terminology made mathematical sense, each note of that triplet would be a “sixth note” – three of them fill the time of two quarter notes, so each one lasts a sixth of a whole note – which somehow I had never realized. A few people have pointed this out (here, here, here). It seems like this is most natural to programmers creating software that has to deal with music. For example – and I can’t believe I remember this – QBasic had a PLAY command that played music through the computer speaker, specified note by note. For the main theme of the first movement of Bruckner’s fourth symphony, something like

PLAY "O3 E-4 < B-4 A-6 G6 F6 E-4"

would have played the example

(That’s a tenor clef; the fourth line from the bottom is middle C.) The - represents flat, the O3 sets the starting octave, and the < shifts down an octave. (I cannot guarantee this code actually works, because it’s 2020.)

The PLAY command mishandles the other tricky bit of notation, which is “dotted” notes. For example, C4 is a quarter-note C. C4. is a dotted-quarter-note C, that is, one and a half times as long as a quarter note. But C4.. is one and a half times as long as that, or 2.25x the original quarter note. A double-dotted note in ordinary musical notation, though, is 1.75x as long: each dot adds half of the previous dot’s value, so 1 + 1/2 + 1/4. If I remember correctly I thought it was 2.25 because I found the PLAY command before I ran across double dots in any actual music.
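The two conventions are easy to compare side by side. A minimal sketch in R (the function names are mine, and the PLAY behavior is as I remember it, not verified):

# k dots in PLAY (as I recall): each dot multiplies the length by 3/2.
play_dots = function(k) 1.5^k

# k dots in standard notation: each dot adds half of the previous value,
# so the multiplier is 1 + 1/2 + 1/4 + ... = 2 - 2^(-k).
notation_dots = function(k) 2 - 2^(-k)

play_dots(1:2)       # 1.50 2.25
notation_dots(1:2)   # 1.50 1.75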

Also, in music, the ordinary fact “2 + 2 = 4” becomes “3 + 3 = 5” – that is, two thirds make a fifth. (C up to E is a third, E up to G is a third, but C up to G is a fifth, because intervals are counted inclusively.) Every so often on Music Stack Exchange one sees people saying musical notation and nomenclature don’t make sense, which is true if you’re coming at things fresh, but is also an example of how Stack Exchange sites get weird as you go conceptually further away from Stack Overflow.

John Cook posted about the “baseball inequality”. In his formulation, if you have two lists of k positive numbers each, $n_1, \ldots, n_k$ and $d_1, \ldots, d_k$, then

$$\min_i \frac{n_i}{d_i} \le \frac{n_1 + \cdots + n_k}{d_1 + \cdots + d_k} \le \max_i \frac{n_i}{d_i}.$$

This has the interpretation that the batting average of a team is between the batting average of the best player and that of the worst player.

This is not what I expected from the headline “baseball inequality” and the list of numbers. What I was expecting was the following. If the two lists are in numerical order, $x_1 \le x_2 \le \cdots \le x_k$ and $y_1 \le y_2 \le \cdots \le y_k$, then

$$x_1 y_{\sigma(1)} + x_2 y_{\sigma(2)} + \cdots + x_k y_{\sigma(k)} \le x_1 y_1 + x_2 y_2 + \cdots + x_k y_k$$

for any permutation $\sigma$ of 1, 2, …, k. This is actually what’s called the rearrangement inequality. Its use in baseball is in setting the batting order. If you want to arrange the batting order so your team gets the most hits, then you want the players with the best batting averages (the highest y) to be earlier in the batting order and therefore get the most at-bats (the highest x). (One-line idea of proof: if any two of the $y_{\sigma(i)}$ are out of order, you can increase the sum by swapping them.)
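Here it is in action on made-up numbers (nothing below is real baseball data):

# Rearrangement inequality: pairing sorted with sorted maximizes the sum.
x = c(480, 510, 520, 550, 600)             # at-bats, ascending
y = c(0.250, 0.260, 0.270, 0.290, 0.310)   # batting averages, ascending

sum(x * y)           # aligned pairing: the maximum, 738.5
sum(x * rev(y))      # reversed pairing: the minimum, 730.1
sum(x * sample(y))   # any shuffled pairing: never more than the aligned sum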

Reality is a bit more complicated, because:

first, the goal of baseball is of course to get runs, not hits, so you want to count walks and extra-base hits as well; hence sorting players by OPS (on-base percentage plus slugging percentage) should be better than sorting by batting average. (OPS doesn’t make dimensional sense because you’re adding two fractions with different denominators, but let’s ignore that.)

second, there are interactions between the players – in order to score runs you usually need to get multiple hits in close succession. Since batting orders are cyclic, you perhaps don’t want to have your worst hitter going immediately before your best hitter, and indeed some teams have tried batting the pitcher eighth. (I’m a National League fan; don’t talk to me about designated hitters.)

These are probably problems that are best solved by simulation, and I’ve got a day job.

A quick way to see how 2016 differed from 2000: Trump won 306 electoral votes, Clinton 232. (There were faithless electors, but we can ignore them for the purposes of this question.) Trump won 30 states, Clinton 21. For the purposes of this post, DC is a state, which it probably also should be in reality.

Each state has as many electoral votes as it has senators and representatives combined. (DC gets 3.) Imagine breaking up the electoral votes in each state into two classes: the “senatorial” electoral votes (two per state), and the “representational” electoral votes (the other ones). Then Trump won the “senatorial” electoral votes by 60 to 42, leaving a 246-190 margin among the “representational” votes.
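The split is just arithmetic on the numbers above – a quick check in R:

# Strip out two "senatorial" electoral votes per state won (DC counts as a state).
306 - 2 * 30   # Trump's "representational" EV: 246
232 - 2 * 21   # Clinton's: 190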

In 2000, on the other hand, Bush won 271 electoral votes in 30 states, and Gore 267 in 21 states; the “representational” votes were therefore split as 211 for Bush, 225 for Gore. If there were, say, twice as many Representatives (870 instead of 435 – and let’s keep the math simple and say DC gets 6 electoral votes in this scenario), so that each state’s “representational” votes doubled while its “senatorial” votes stayed fixed, then Bush would have had about 60 + 211 × 2 = 482 EV, and Gore about 490 EV. Also in this world Nate Silver’s web site is named nineseventytwo.com (which is available, as of this writing). In fact, Bush would have won with any number of representatives less than 491; Gore with more than 655; and in between, the lead swings back and forth, according to a 2003 analysis by Michael G. Neubauer and Joel Zeitlin.

In short: 2000 was the way it was because small states are overrepresented in the Electoral College; 2016 was the way it was because the Electoral College is winner-take-all.

Quoc’s lab has a microcentrifuge, a piece of equipment that can separate components of a liquid by spinning around very rapidly. Liquid samples are pipetted into small tubes, which are then placed in one of the microcentrifuge’s 12 slots evenly spaced in a circle.

For the microcentrifuge to work properly, each tube must hold the same amount of liquid. Also, importantly, the center of mass of the samples must be at the very center of the circle — otherwise, the microcentrifuge will not be balanced and may break.

Quoc notices that there is no way to place exactly one tube in the microcentrifuge so that it will be balanced, but he can place two tubes (e.g., in slots 1 and 7).

Now Quoc needs to spin exactly seven samples. In which slots (numbered 1 through 12, as in the diagram above) should he place them so that the centrifuge will be balanced? Extra credit: Assuming the 12 slots are distinct, how many different balanced arrangements of seven samples are there?

To generate the subsets of [n] of size k, I just take:

the subsets that don’t contain n (which are therefore subsets of [n-1] of size k), and

the subsets that do contain n (which are therefore subsets of [n-1] of size k-1, with n adjoined).

(I learned this a good two decades ago, from section 3.6 of Herb Wilf’s notes East Side, West Side.)
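In R, that recursion takes only a few lines. This is a sketch of my own; the snippets below assume a combinations function of roughly this shape, returning a list of index vectors:

# All k-subsets of 1..n: those without n, plus those with n adjoined.
combinations = function(n, k) {
  if (k == 0) return(list(integer(0)))
  if (k > n) return(list())
  without_n = combinations(n - 1, k)
  with_n = lapply(combinations(n - 1, k - 1), function(s) c(s, n))
  c(without_n, with_n)
}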

Then iterate over the subsets to find their centers of mass:

# Coordinates of the 12 slots, evenly spaced around the unit circle
# (not shown in the original snippet, so defined here).
theta = 2 * pi * (1:12) / 12
x = cos(theta)
y = sin(theta)

our_comb = combinations(12, 7)
com_x = rep(NA, length(our_comb))
com_y = rep(NA, length(our_comb))
for (i in seq_along(our_comb)) {
  com_x[i] = mean(x[our_comb[[i]]])
  com_y[i] = mean(y[our_comb[[i]]])
}

So our_comb runs through the combinations, and com_x and com_y are the x- and y-coordinates of the centers of mass. Plotting the centers of mass gives an attractive symmetric pattern. Here I’ve plotted transparent points, so the darker points are points that arise from more distinct combinations. The outermost points represent the most imbalanced centrifuges – for example the one closest to the top represents loading slots 9, 10, 11, 12, 1, 2, and 3. The main question is: how many points are on top of each other at the very center?

I can extract those subsets from our_comb and build them into a matrix for looking at. There’s a small problem in that we used floating-point arithmetic, so the coordinates don’t come out exactly at zero, but it’s enough to take the ones that are “close enough”:
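A minimal sketch of that extraction (the tolerance of 1e-9 is my own arbitrary choice; the actual floating-point error is far smaller):

# Keep the loadings whose center of mass is within 1e-9 of the origin,
# and stack them into a matrix, one balanced arrangement per row.
eps = 1e-9
balanced = which(abs(com_x) < eps & abs(com_y) < eps)
balanced_combs = do.call(rbind, our_comb[balanced])
n = 12   # number of slots, used in the plotting code below
balanced_combs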

(The order here is lexicographic order, but you have to read the rows backwards.)

If you can make sense of this pile of numbers without making some pictures, good for you! But I am human, though a mathematician, so let’s plot. Here big black dots represent the loaded spots, and small blue dots represent the non-loaded spots.

(And yes, it’s in base R graphics. It’s been a while since I’ve used those – at my day job it’s all ggplot all the time.)

# One small panel per balanced arrangement: loaded slots as big black dots,
# empty slots as small light blue dots.
par(mfrow = c(4, 3), mar = c(1, 1, 1, 1))
for (i in 1:nrow(balanced_combs)) {
  full = balanced_combs[i, ]
  empty = setdiff(1:n, balanced_combs[i, ])
  plot(x[full], y[full], pch = 19, cex = 2,
       asp = 1, axes = FALSE, xlab = '', ylab = '',
       xlim = c(-1.25, 1.25), ylim = c(-1.25, 1.25))
  points(x[empty], y[empty], pch = 19, cex = 1, col = 'lightblue')
}

So there are 12 ways to load the centrifuge… but we can clearly see that all of them are really just rotations of a single pattern. To me the visually most convenient representative of that pattern is the third one in the third row, which has loadings at 12, 1, 4, 5, 7, 8, and 11.

We can play a little game of connect-the-dots to see why this pattern is balanced. Connect 12, 4, and 8 to form an equilateral triangle. Connect 1 and 7; connect 5 and 11. Like this:

Each of those three subsets has its center of mass at the center of the circle, so the whole arrangement does too.
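A quick numerical check of that decomposition, with x and y as defined in the code above (the means come out at zero, up to floating-point error):

# Each piece balances on its own: an equilateral triangle and two diameters.
pieces = list(c(12, 4, 8), c(1, 7), c(5, 11))
sapply(pieces, function(s) c(mean(x[s]), mean(y[s])))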

That decomposition turns out to be the key to the general problem of which centrifuge sizes can be loaded with which number of tubes and remain balanced. Stay tuned.

Me: I can’t do that, because rainbow color maps are considered harmful. This is especially true because, if we’re going to use one, shouldn’t it be green for “go” and red for “stop”?

Partner: just shut up and color.

(The original version of this meme didn’t have the colors, so you can make whatever palette you want, as in this example.)

Evelyn Lamb has a delightful page-a-day calendar. Today (yes, a few days late) I learned that TWENTY NINE is the only number name in English that is written with a number of straight-line strokes equal to its value. (This is in a sans serif font; in particular I is one stroke, not three.)

English is surprisingly rich in numbers that have all straight-line strokes. In the Latin alphabet the letters that are all straight lines are A, E, F, H, I, K, L, M, N, T, V, W, X, Y, Z. That leads to the following words that are all straight-lined and appear in number names, with the number of straight lines that make them up:

– having more strokes than their value: FIVE (10), NINE (11), ELEVEN (19), TWELVE (18), FIFTEEN (20), NINETEEN (24)
– having fewer strokes than their value: TEN (9), TWENTY (18), FIFTY (12), NINETY (16)

Any word that equals its own value must combine elements from both these lists, and it’s not hard to see that TWENTY NINE is the only one that works. The full list of straight-line numbers in English is: 5, 9, 10, 11, 12, 15, 19, 20, 25, 29, 50, 55, 59, 90, 95, 99. (All the larger numbers include “HUNDRED” or “THOUSAND” or something ending in “-ION”, so we can stop there.)
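The stroke-counting is easy to automate. The per-letter counts below are reverse-engineered from the totals quoted above (sans serif, so I = 1), so treat them as my assumption rather than anything official:

# Strokes per straight-line letter, inferred from the totals in the text.
strokes = c(A = 3, E = 4, F = 3, H = 3, I = 1, K = 3, L = 2, M = 4,
            N = 3, T = 2, V = 2, W = 4, X = 2, Y = 3, Z = 3)

word_strokes = function(w) sum(strokes[strsplit(gsub(" ", "", w), "")[[1]]])

word_strokes("TWENTY NINE")   # 29
word_strokes("ELVA")          # 11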

Evelyn suggests this as part of how to memorize the largest known prime (it’s a Mersenne prime, and she suggests doing it in binary so every bit is 1, so the hard part is remembering where you are).

It’s hard to even find straight-line numbers in other languages, because a lot of the alphabet is missing. They include:

German ZWEI (2), ZEHN (10), ELF (11)

Dutch EEN (1), TWEE (2), ZEVEN (7), TIEN (10), ELF (11) (edited 9/3: also TWAALF (12))

Norwegian has a lot: EN (1), FEM (5), ATTE (8), NI (9), TI (10), ELLEVE (11), FEMTEN (15), ATTEN (18), NITTEN (19), FEMTI (50), ATTI (80), NITTI (90) (and also 55 = FEMTIFEM, 58, 59, 85, 88, 89, 95, 98, 99)

As does Danish: EN, FEM, NI, TI, ELLEVE, FEMTEN, ATTEN, NITTEN, with the same meanings as in Norwegian, but then the bigger numbers are formed irregularly.

Swedish has fewer: EN (1), FEM (5), ATTA (8), ELVA (11) – the numbers are very similar to Norwegian, but the “-teen” ending is “-ton”, not “-ten” as in Norwegian.

Spanish VEINTE (20), MIL (1000), and MIL VEINTE (1020)

Italian VENTI (20), MILLE (1000), and MILLEVENTI (1020)

Portuguese VINTE (20), MIL (1000) and MIL E VINTE (1020)

French MILLE (1000), but not VINGT (20)

ELVA (Swedish for 11) is the only other one I could find that also has the self-referential property, and the Chinese numerals 一, 二, 三 if you want to stray from the alphabetic world. (Edited 9/3: also Dutch TIEN = 10, which I inexplicably missed before.)

(This post was edited to add the list of numbers and to clarify that ELVA is not the only straight-line number outside of English, but the only one I could find with this self-referential property.)

Every evening I compute an approximate second derivative. It comes from the data at the COVID Tracking Project. Specifically, it’s

$$(P_t - P_{t-7}) - (P_{t-7} - P_{t-14}),$$

where $P_t$ is the cumulative number of positive coronavirus cases in the US on day $t$. So this is the number of new cases last week, subtracted from the number of new cases this week. Every day I hope it’s negative. Lately it has been. Yesterday it was

Who would have thought 288 thousand cases in a week would seem good? But derivatives matter. That would have been terrifying on June 29 (282k cases from June 22 – 29; 194k cases from June 15 – 22). But now it feels like a reason for maybe a little bit of hope.
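A minimal sketch of the computation, assuming positive is a vector of daily cumulative case counts (the helper name is mine):

# Approximate second derivative at day t: this week's new cases minus last week's.
second_diff = function(positive, t, lag = 7) {
  (positive[t] - positive[t - lag]) - (positive[t - lag] - positive[t - 2 * lag])
}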

The most hopeless plots, I think, are the cumulative ones. The number of positive tests never goes down. Early on in all this we all wanted to “flatten the curve”… some people seemed to be referring to this curve, which could never be flat.

What we want to know is how many new cases there are. How much should we worry right now? A lot, it seems. But less than before?

And there is something heartening about seeing a negative number. If we plot the approximate second derivative, we can get negative numbers!

Here we can see, broadly, four phases: the growth of March and early April; the slow decline that lasted through mid-June; the “second wave” of late June and July; the decline of August.

But what’s not visible here is the terror of the fast growth of March, because that was growth from a low base. So instead let’s plot the growth rate – the percentage change in weekly new cases, $\left( \frac{P_t - P_{t-7}}{P_{t-7} - P_{t-14}} - 1 \right) \times 100\%$.

This was the period of doubling times, when cases were doubling multiple times a week. I’ve suppressed the growth rates over 100 percent per week – for example there were 44867 new cases in the US between March 16 and 23, compared to 5744 between March 9 and March 16, nearly three doublings in a week. Those days were, quite literally, off the charts.
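A quick check of that “nearly three doublings” arithmetic:

# log2 of the week-over-week ratio gives the number of doublings.
log2(44867 / 5744)   # about 2.97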

Of course you could easily make a 300-piece puzzle, for example 15 by 20. And you could make a 299-piece puzzle — that factors as 13 times 23. But a 301-piece puzzle would have to be 7 by 43, assuming that the pieces form a nice grid, and that doesn’t seem like a “reasonable” size for a puzzle.

So which numbers could actually be reasonable sizes for puzzles? This could be nice to know if, say, you want to know whether you have all the edge pieces. (But a 500-piece puzzle could be 20 x 25, or could have some slightly larger but still reasonable count like 19 x 27 = 513 or 18 x 28 = 504.) It’s a nice puzzle nonetheless.

So let’s say that a number is a “puzzle number” if it’s of the form $mn$ with $m \le n \le 2m$ — that is, a puzzle has to have an aspect ratio between 1 (square) and 2. (The choice of 2 is arbitrary here, but any other constant would be more arbitrary.) We can easily work out the first few puzzle numbers:

1, 2, 4, 6, 8, 9, 12, 15, 16, 18, 20, 24, 25, 28, …

which is enough to find them in the OEIS: that’s A071562, defined as “Numbers n such that the sum of the middle divisors of n (A071090) is not zero.” (What’s a “middle divisor”, you ask? It’s a divisor of $n$ that’s between $\sqrt{n/2}$ and $\sqrt{2n}$.) Titling it that way seems a bit strange to me: I’d have called it “Numbers such that the number of middle divisors of n (A067742) is nonzero”.
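A brute-force middle-divisor function makes the definition concrete (my own helper, fine for small n):

# Middle divisors of n: divisors d with sqrt(n/2) <= d <= sqrt(2n).
middle_divisors = function(n) {
  d = which(n %% (1:n) == 0)
  d[d >= sqrt(n / 2) & d <= sqrt(2 * n)]
}

middle_divisors(300)   # 15 20, since 300 = 15 x 20 is a puzzle number
middle_divisors(301)   # none: 301 = 7 x 43 only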

The first 10,000 puzzle numbers have been calculated; the 10,000th is 35,956. There are 43 under 100, 336 under 1,000, and 2,932 under 10,000 — it doesn’t appear that they have constant density. It’s not hard to take this further — there are 26,870 under $10^5$; 252,756 under $10^6$; 2,409,731 under $10^7$; and 23,169,860 under $10^8$. For example, in R, you can generate the puzzle numbers as follows. (This takes a few seconds to run. Note that you don’t have to compute any prime factorizations.)

N = 10^8
a = rep(0, N)   # a[n] will count pairs (m, n/m) with m <= n/m <= 2m

for (m in 1:floor(sqrt(N))){
  # mark the products m * n for n between m and min(2m, N/m)
  max_n = min(2 * m, floor(N / m))
  a[m * (m:max_n)] = a[m * (m:max_n)] + 1
}

puzzle = which(a >= 1)
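As a sanity check, the first entries match the hand-computed list above (a much smaller N suffices for this):

head(puzzle, 14)   # 1 2 4 6 8 9 12 15 16 18 20 24 25 28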

I had at first thought this sequence had a natural density, because I was only trying numbers up to a few hundred in my head, and because the number of middle divisors appears to average somewhere around $\log 2$. There’s a nice heuristic for this – a middle divisor of $n$ is somewhere between $\sqrt{n/2}$ and $\sqrt{2n}$; the “probability” that a number is divisible by $d$ is $1/d$, so the “expected number” of middle divisors of $n$ is

$$\sum_{\sqrt{n/2} \le d \le \sqrt{2n}} \frac{1}{d} \approx \log \sqrt{2n} - \log \sqrt{n/2} = \log 2.$$

But there must be more numbers as you go further out that have many middle divisors, and more zeroes. This is similar to the behavior you see for the problem of “how many ways can an integer be written as a sum of two squares”. In that case the “average” is $\pi/4$, but an integer greater than one can be written as a sum of two squares if and only if its prime decomposition contains no prime congruent to 3 mod 4 raised to an odd power; asymptotically the number of positive integers below $x$ which are the sum of two squares goes like $Kx/\sqrt{\log x}$ for a constant $K$ (the constant is the Landau-Ramanujan constant). That connection might stem from the fact that both sequences are multiplicative. For sums of two squares, that follows from the factorization-based characterization or from the fact that

$$(a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2.$$

For puzzle numbers, it follows from a remark of Franklin T. Adams-Watters in the OEIS entry. So numbers which have lots of factors both tend to be sums of two squares and to be puzzle numbers in many different ways; as we get to bigger numbers those start “crowding out” the smaller numbers.

The math internet has noticed this at least once before. John D. Cook wrote in 2014: “Jigsaw puzzles that say they have 1,000 pieces have approximately 1,000 pieces, but probably not exactly 1,000. Jigsaw puzzle pieces are typically arranged in a grid, so the number of pieces along a side has to be a divisor of the total number of pieces. This means there aren’t very many ways to make a puzzle with exactly 1,000 pieces, and most have awkward aspect ratios.”

1000 as 25-by-40 seems reasonable, but 27 x 37 = 999 or 28 x 36 = 1008 would also work. I assume that you wouldn’t actually see a 999-piece puzzle because the lawyers would claim calling that 1000 was false advertising. A puzzle blog indicates that most “500-piece” puzzles are actually 27 x 19 = 513 and most “1000-piece” puzzles are 38 x 27 = 1026 – both of these aspect ratios are approximations of $\sqrt{2}$. That is a good aspect ratio to use if you want the ability to make both “500-piece” and “1000-piece” (or more generally, “N-piece” and “2N-piece”) versions of the same puzzle. Similarly the A0, A1, … series of paper sizes used in most of the world have that aspect ratio, and therefore an An piece of paper can be cut into two A(n+1) pieces.

Isaias will be the ninth tropical storm of the season, likely to form on July 30. This is unusually early. From CNN on July 28: “Once it is given the name Isaias — pronounced (ees-ah-EE-as) — it will be the earliest storm to begin with an “I” on record. The previous record was set on August 7, 2005, part of the busiest season to date.” Storms are named in alphabetical order, so a storm starting with “I” is the ninth storm of the season.

This did seem early to me. The recent example that I remember is Irma, which made landfall on the mainland US on September 10, 2017. (I spent a day seeing lots of people with Florida license plates driving around Atlanta, thinking “at least I don’t live in Florida!”, and then the next day having my power out and being glad that the trees the storm took out in my backyard were ones that needed taking out anyway.)

What does this mean for the rest of the hurricane season? Should we expect a busy one?

But how do we know about past storms? Not all storms were observed in the past, because some stay out to sea. The “satellite era” began in 1966, according to this article about this year’s Tropical Storm Gonzalo (the earliest “G” storm ever). So let’s plot the number of storms before and after July 29, separately for each of the two “eras”.

But 2005 is a big outlier! I remember that season. Of course it was the year of Katrina. And in December there was a hurricane named Epsilon, which I could not take seriously because epsilon is small. Indeed if we drop it the effect of early-season storms on late-season ones gets much smaller.

So there’s a small effect – but not as large as might be expected! So how many storms will we have this season? Calling predict(storm_model, data.frame(early = 9), interval = 'prediction', level = 0.95) gives a 95% prediction interval for the number of storms in the rest of the season: 12.5 plus or minus 7.2. So we could reasonably have as few as 5 more or as many as 20 more, for a total of 14 to 29.
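The model behind that call is just a simple linear regression – a minimal sketch, assuming a data frame storms with one row per season and columns early and late for the storm counts before and after July 29 (the data wrangling isn’t shown here):

# Regress late-season storm count on early-season storm count,
# then predict for a season with 9 early storms.
storm_model = lm(late ~ early, data = storms)
predict(storm_model, data.frame(early = 9),
        interval = 'prediction', level = 0.95)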

A 50% prediction interval goes from 10 to 15 more storms in the remaining part of the season, for a total of 19 to 24. For comparison, the quartiles of the distribution of the number of storms after 7/29 in the satellite era are 7 and 12. So if we knew nothing about what’s happened so far, we’d give a 50% prediction interval of 16 to 21 storms for the number at the end of the season.

Let’s see what happens. And let’s hope these storms stay far out to sea, because pandemics and hurricanes don’t seem very compatible.

Does this sequence grow without bound? First we should specify what it is. Each number is double the one before, but with the digits sorted into ascending order. The doubling makes the number larger… but the sorting makes it smaller, so perhaps this sequence tops out somewhere?
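Here’s a minimal R sketch of the rule (the helper name step, and the factor argument that will be reused further down, are my own):

# One step: multiply by 'factor', then sort the digits into ascending order.
# Leading zeros disappear when the digits are pasted back into a number.
step = function(n, factor = 2) {
  digits = sort(strsplit(as.character(factor * n), "")[[1]])
  as.numeric(paste(digits, collapse = ""))
}

# Iterate from 1 to reproduce the sequence below.
n = 1
for (i in 1:30) { cat(n, ""); n = step(n) }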

This sequence actually becomes periodic pretty quickly. The full sequence is:

1, 2, 4, 8, 16, 23, 46, 29, 58, 116, 223, 446, 289, 578, 1156, 1223, 2446, 2489, 4789, 5789, 11578, 12356, 12247, 24449, 48889, 77789, 155578, 111356, 122227, 244445, 48889, …
and then it repeats with period 6. The key is in the final step: once we get a number that ends with 5, there’s a 0 in its double, so the next term is actually shorter. (It is surprisingly hard to keep the arithmetic straight when I try to do this in my head.) There are other such loops: if you start with 3 you eventually reach

69, 138, 267, 345, 69, …

(note that everything stays divisible by 3 here!). If you start with 7 you get to

689, 1378, 2567, 1345, 269, 358, 167, 334, 668, 1336, 2267, 3445, 689, …

a loop of period 12.

And there’s a series of these sequences. Every number in that period-6 loop that you reach if you start with 1 has a repeated digit, and you can have fewer or more of those repeats: for example, 4889 → 7789 → 15578 → 11356 → 12227 → 24445 → 4889 is the analogous loop with one less repeated digit, and 488889 → 777789 → 1555578 → … works the same way with one more.

The structure is quite a bit richer than the triple-and-sort sequence, presumably because divisibility by three “plays well” with rearranging digits. There, as far as I can tell, no matter what number you start with you end up with 45. (Tripling that gives 135, and tripling again gives 405, which sorts back to 45.) The reason for this seems to be that any repeated digits, when tripled, give you repeated 3s or 6s, and tripling those gives repeated 0s. For example, consider the start number 9261632697:

Tripling gives 27784898091 – which incidentally has a zero – and sorting that gives 1247788899.

Tripling again gives 3743366697 – the repeated 7s and 8s “turn into” repeated 3s and 6s — and sorting gives 3334666779.

At this point everything collapses – tripling gives 10004000337, and sorting gives 13347.

Another collapsing step: 13347 × 3 = 40041, which sorts to 144.

The remainder of the sequence is uninteresting: 234, 27, 18, 45, 135, 45, 135, …
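A quick check of that walkthrough, reusing the step function from above with factor 3:

n = 9261632697
for (i in 1:12) { n = step(n, factor = 3); cat(n, "") }
# 1247788899 3334666779 13347 144 234 27 18 45 135 45 135 45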

This is the “typical” behavior – but it’s a bit surprising to me that it always works. I suspect a proof would be in terms of vectors that show the number of each digit, rather than with the raw numbers themselves.