Is head weight a constant fraction of body weight?

Nurses are among those most frequently injured on the job, says Daniel Zwerdling of NPR (in a long piece that’s worth reading). One of the most common sources of such injuries is lifting patients, which gets worse as we Americans get heavier.

There’s a chart in the piece that is a bit perplexing from a mathematical point of view, though. It claims, for example, that the weight of your head is seven percent of your body weight – regardless of that weight. (The trunk is 43 percent, each arm is 5, and each leg is 20.) These figures are based on “body segment parameters” from, as far as I can tell, a 1996 study by Paolo de Leva building on gamma-ray scanning by earlier researchers. The major use of this sort of work seems to be in studying how the body moves.
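For concreteness, here’s what those constant fractions imply for a few total body weights – a toy calculation using only the chart’s numbers (note that the fractions sum to 100 percent once both arms and both legs are counted):

```python
# Segment weights implied by the chart's constant fractions:
# head 7%, trunk 43%, each arm 5%, each leg 20% (7 + 43 + 2*5 + 2*20 = 100).
fractions = {"head": 0.07, "trunk": 0.43, "arm (each)": 0.05, "leg (each)": 0.20}

for total in (150, 200, 250):  # body weight in pounds
    segments = {part: round(f * total, 1) for part, f in fractions.items()}
    print(total, segments)
```

Under the chart’s assumption a 250-pound patient’s head weighs 17.5 pounds – numbers of exactly the kind the lifting-injury discussion turns on.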

But I’d think, for example, that the weight of the head grows more slowly than overall weight – this is based on nothing more rigorous than looking at the heads of people of different weights – with other body parts growing faster to compensate. I don’t have a pile of cadavers or machines for scanning live subjects, though – any ideas?

Posted in Uncategorized | 1 Comment

Men and women making exactly the same (not a post about the pay gap)

From the Census Bureau via Slate, on the income gaps between opposite-sex married couples:

  • in 3.9 percent of couples, the husband earns $5,000 to $9,999 more per year than the wife;
  • in 25.4 percent of couples, the husband’s and wife’s earnings are within $4,999 of each other;
  • in 2.8 percent of couples, the wife earns $5,000 to $9,999 more per year than the husband.

(The remaining couples have a differential of $10,000 or more.)

Something seems fishy here. Call the wife’s earnings W and the husband’s earnings H; we’re interested in the distribution of the random variable W-H. (Of course it’s difficult to write out the distribution of W-H; we know W and H are correlated, by assortative mating.) The three bins above correspond to W-H falling in the intervals [-10000, -5000], [-5000, +5000], and [+5000, +10000]. The middle interval is twice as wide as the other two – so if the density of W-H is roughly flat over this range, we’d expect about twice as many couples in the middle bin as in each of the bins on either side of it.
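One way to see that the twice-as-wide argument is robust: simulate a toy model of correlated earnings. Everything here is invented for illustration (lognormal earnings, a shared “assortative mating” factor, arbitrary parameters) – it is not the Census model – but any similarly smooth joint distribution gives a middle bin roughly twice each side bin, nowhere near six to nine times:

```python
import math
import random

random.seed(0)

# Toy model: husband's and wife's earnings are correlated lognormals,
# correlation induced by a shared factor. Parameters are made up.
N = 200_000
side_lo = mid = side_hi = 0
for _ in range(N):
    shared = random.gauss(0, 0.4)  # common component of log-earnings
    h = math.exp(10.7 + shared + random.gauss(0, 0.5))  # husband
    w = math.exp(10.7 + shared + random.gauss(0, 0.5))  # wife
    d = w - h
    if -10_000 <= d < -5_000:
        side_lo += 1
    elif -5_000 <= d < 5_000:
        mid += 1
    elif 5_000 <= d < 10_000:
        side_hi += 1

print(f"husband +5-10k: {side_lo/N:.3f}, "
      f"within 5k: {mid/N:.3f}, wife +5-10k: {side_hi/N:.3f}")
```

In runs of this model the middle share comes out close to twice each side share, which is what the width argument predicts – so the 6–9× ratio in the Census figures really is the thing needing explanation.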

But instead we have six to nine times as many. Any explanations? All I can think of to explain this phenomenon – if it’s real – is that there are a surprisingly large number of cases where the husband and wife do the same job (not just working at the same place, but actually doing the same thing, for the same pay)… but how many couples like that can there be?  It seems more likely to be an artifact of how the survey works.

Posted in Uncategorized | 3 Comments

Fun with factorials

From Futility Closet, pointing to this entry in the On-Line Encyclopedia of Integer Sequences: the numbers n such that n divides the number of digits of n! are 1, 22, 23, 24, 266, 267, 268, 2712, 2713, 27175, 27176, 271819, 271820, 271821, 2718272, 2718273, 27182807, 27182808, 271828170, 271828171, 271828172, and so on. (For example, 266! has 266 \times 2 = 532 digits.) This supposedly comes from a Martin Gardner column, “Factorial Oddities”, which I don’t have.

This seems a bit mysterious at first: what’s the decimal expansion of e doing there? But there’s a simple explanation. Recall Stirling’s approximation: n! \approx (n/e)^n. Taking log base 10, we get \log_{10} n! \approx n \log_{10} (n/e). For n! to have kn digits, we need \log_{10} n! \approx kn. Thus n! has about kn digits when \log_{10} (n/e) = k; solving for n gives n \approx e \times 10^k.

This basically all follows from the approximation (n!)^{1/n} \approx (n/e).
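The small members of the sequence are easy to check by brute force, using exact integer factorials so there’s no approximation error (a quick sketch, not from the Gardner column):

```python
import math

def factorial_digits(n):
    # Exact digit count of n! (fine for small n; n! gets huge fast).
    return len(str(math.factorial(n)))

members = [n for n in range(1, 300) if factorial_digits(n) % n == 0]
print(members)  # [1, 22, 23, 24, 266, 267, 268]
```

This recovers the start of the sequence, including the well-known fact that 22, 23, and 24 are exactly the n for which n! has n digits.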

But the numbers in that sequence are actually a bit below e times a power of 10: recall e = 2.718281828\ldots^1, so if the argument above were exact we’d have 2718281 in the sequence. Instead we have 2718272 and 2718273, eight and nine less than that. This is because we should have used the more accurate version of the approximation, n! \approx \sqrt{2\pi n} (n/e)^n: since n! is slightly larger than (n/e)^n, the estimate (n!)^{1/n} \approx n/e is a slight underapproximation, and the crossover happens slightly below e \times 10^k.
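To make “a bit below” quantitative, here’s a back-of-the-envelope estimate of the shift (my own calculation, not from the Gardner column). Write n = e \cdot 10^k - \delta:

```latex
% With n = e \cdot 10^k - \delta, we have
\log_{10}(n/e) \approx k - \frac{\delta}{n \ln 10},
% so the refined Stirling formula gives
\log_{10} n! \approx n \log_{10}(n/e) + \tfrac{1}{2}\log_{10}(2\pi n)
            \approx kn - \frac{\delta}{\ln 10} + \tfrac{1}{2}\log_{10}(2\pi n).
% Setting this equal to kn and solving:
\delta \approx \frac{\ln 10}{2}\,\log_{10}(2\pi n) = \tfrac{1}{2}\ln(2\pi n).
```

For n \approx 2.7 \times 10^6 this gives \delta \approx \tfrac{1}{2}\ln(1.7\times 10^7) \approx 8.3, matching the observed gap of eight or nine; for k = 2 it gives \delta \approx 3.7, consistent with the members 266, 267, 268 just below e \times 100 \approx 271.8.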

1. no, e isn’t rational.

Posted in Uncategorized | Leave a comment

Blizzard of 2015

Nate Silver wrote an excellent blog post on how (in New York) it’s been snowing the same amount it always has, but on fewer days. If the same amount of snow falls on fewer days, those must be bigger storms. He ran the tests to show that the increase in very large storms since about 2000 is significant. As he observes, “Anthropogenic global warming, as I’ve said, is a plausible cause.”

(I was going to write this post. Now I don’t have to dig up the weather data.)

Since there was much less snow than forecast in New York, the national (i.e. New York-centric) media have been wringing their hands about the difficulties of forecasting. See for example Adam Chandler in the Atlantic on meteorologists apologizing for bad forecasts, Harry Enten at FiveThirtyEight on how the forecasts went wrong, Eric Holthaus at Slate on the same, and Zeynep Tufekci at Medium on probabilistic forecasts. The common thread is that there’s generally an incentive to forecast high, because predicting more snow than actually falls causes less harm than predicting less snow than actually falls.

Meanwhile, here in Atlanta, just about a year ago we had two inches of snow coupled with a poorly timed forecast, and people were stranded overnight. One local station is doing “Snow Jam: Then and Now” on tomorrow evening’s news. I don’t know about you, but I’d take two feet in Boston — which I saw a few times in my four years there — over two inches in Atlanta.

Posted in Uncategorized | 1 Comment

A crossword clue

48-Across in today’s New York Times crossword: “group you can rely on when it counts?”

Twelve letters.

I’ve referred to their work as our nation’s once-every-decade exercise in large-scale enumerative combinatorics. (Although of course they do more than the once-a-decade count required by the Constitution, and their mathematically trained employees are surely statisticians.)

Posted in Uncategorized | 1 Comment

Why everyone seems to have cancer

George Johnson wrote a few weeks ago in the New York Times on why everyone seems to have cancer. This is, somewhat paradoxically, good news – or at least not bad news. A larger proportion of people are dying of cancer now than in the past not because we’re getting worse at treating cancer – we’re actually getting better. But we’re getting better at treating other things (like heart disease) faster. See the graph on p. 10 of this CDC report on 2010 death statistics. Rates of death from complications of Alzheimer’s disease are actually increasing – and Alzheimer’s is even more of a disease of old age than cancer is.

This reminds me of a fact about cancer I learned from John Allen Paulos’s book Beyond Numeracy (1992), in a chapter explaining why correlation is not causation:

Nations that add fluoride to their water have a higher cancer rate than those that don’t. … Is fluoridation a plot? … [T]hose nations that add fluoride to their water are generally wealthier and more health-conscious, and thus a greater percentage of their citizens live long enough to develop cancer, which is, to a large extent, a disease of old age.

If we do cure cancer – which is a disease of old age because it is essentially due to accumulated mutations in our cells – will we just find something that’s a disease of even older age?

Posted in Uncategorized | Leave a comment

Convex hulls, the TSP, and long drives

I’ve been reading the book In Pursuit of the Traveling Salesman by William Cook lately. In Chapter 10, on “The Human Touch”, Cook mentions a paper by James MacGregor and Thomas Ormerod, “Human performance on the traveling salesman problem”. One thing this paper mentions is that humans reach good solutions to the TSP by attending to global properties such as the convex hull – there’s a theorem that an optimal tour must visit the vertices of the convex hull in their cyclic order.
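The theorem is easy to sanity-check on small instances: brute-force the optimal tour for a handful of random points, compute the convex hull, and confirm that the hull vertices appear along the tour in their cyclic hull order. A quick sketch (my own, not from Cook’s book):

```python
import itertools
import math
import random

def cross(o, a, b):
    # Cross product of vectors o->a and o->b; > 0 means a left turn.
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in counterclockwise order.
    pts = sorted(points)
    def build(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower, upper = build(pts), build(reversed(pts))
    return lower[:-1] + upper[:-1]

def tour_length(tour):
    return sum(math.dist(tour[i], tour[(i+1) % len(tour)])
               for i in range(len(tour)))

def visits_hull_in_order(tour, hull):
    # Positions of hull vertices along the tour should be cyclically
    # monotone, in either direction (a tour can be traversed both ways).
    idx = [tour.index(v) for v in hull]
    for seq in (idx, idx[::-1]):
        k = seq.index(min(seq))
        rot = seq[k:] + seq[:k]
        if rot == sorted(rot):
            return True
    return False

random.seed(1)
points = [(random.random(), random.random()) for _ in range(7)]
# Brute-force optimal tour; fixing the first point removes rotations.
best = min((points[:1] + list(rest) for rest in itertools.permutations(points[1:])),
           key=tour_length)
print(visits_hull_in_order(best, convex_hull(points)))  # the theorem says True
```

The check passes for any seed, since an optimal Euclidean tour never crosses itself, and a non-crossing closed tour has to take the hull vertices in hull order.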

This reminds me of one of my favorite travelogues, Barry Stiefel’s fifty states in a week’s vacation, in which Stiefel visited all fifty states on a week’s vacation (doing the lower 48 by driving, and flying to Alaska and Hawaii). Stiefel writes, in regard to getting to Kentucky: “Now things were going to get complicated. Until then, my route had been primarily one of following the inside perimeter of the states on the outside perimeter of the 48 states. When planning the route, I had stared at the map for over an hour before concluding that Kentucky was going to be the toughest state to get. I just couldn’t plan a route that efficiently went through it. So I had to do a several-hour out-and-back loop to get it.”

It’s not obvious that Kentucky would be so hard. Kentucky borders Virginia and Ohio, both of which are on that outside perimeter, while a bunch of states further west don’t border any of the perimeter states. (By my eyeballing, those are Kansas, Nebraska, and Missouri.) Stiefel chose to approach Kentucky from the southeast, though, and southeastern Kentucky is not exactly crawling with highways. And as you might gather from the title of Stiefel’s page, he was in search of a shortest-time route, not a shortest-distance one.

Posted in Uncategorized | 1 Comment