I’m reading John R. Taylor’s textbook An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements. This is meant for people taking introductory physics lab classes, but it never hurts to revisit these things. I actually double-majored in math and chemistry in college. It’s fun watching the contortions that chemists go through to avoid math.
Anyway, measurements come with uncertainties: that is, they have the form $x_{\rm best} \pm \delta x$, where $x_{\rm best}$ is our best estimate of the quantity and $\delta x$ is an estimate of the uncertainty. (We can think of $\delta x$ as being roughly the standard deviation of the distribution from which the measurement is drawn.) In these intro lab classes one quickly learns some rules for manipulating these uncertainties. These can be thought of as defining an arithmetic on intervals; however, this isn’t the usual interval arithmetic but actually an abbreviation of arithmetic on probability distributions.
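To make that distinction concrete before listing the rules, here’s a minimal Monte Carlo sketch in Python (my illustration, not Taylor’s; the numbers $3.0 \pm 0.4$ and $2.0 \pm 0.3$ are the ones used in the examples below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Model x = 3.0 +/- 0.4 and y = 2.0 +/- 0.3 as independent normal
# distributions whose standard deviations are the quoted uncertainties.
x = rng.normal(3.0, 0.4, size=1_000_000)
y = rng.normal(2.0, 0.3, size=1_000_000)

# Interval arithmetic would add the half-widths: 0.4 + 0.3 = 0.7.
# Arithmetic on distributions adds variances: sqrt(0.4**2 + 0.3**2) = 0.5.
print(np.std(x + y))  # ~0.5, matching the quadrature rule below
```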
- For sums, $\delta(x+y) = \sqrt{(\delta x)^2 + (\delta y)^2}$ – that is, the variances attached to the measurements add. Similarly, for differences, $\delta(x-y) = \sqrt{(\delta x)^2 + (\delta y)^2}$. For example, $(3.0 \pm 0.4) + (2.0 \pm 0.3) = 5.0 \pm 0.5$ and $(3.0 \pm 0.4) - (2.0 \pm 0.3) = 1.0 \pm 0.5$. Note that the errors for the sum and the difference are the same – but for the difference, the error is relatively much bigger.
- To find $\delta(xy)$, start by finding the fractional uncertainties $\delta x/x$ and $\delta y/y$. Then the squares of the fractional uncertainties add: the fractional uncertainty of the product is $\sqrt{(\delta x/x)^2 + (\delta y/y)^2}$. The same fractional uncertainty holds for quotients. For example, the fractional uncertainty in $3.0 \pm 0.4$ is $0.4/3.0 = 0.133$, and that in $2.0 \pm 0.3$ is $0.3/2.0 = 0.15$. So the fractional uncertainty in their product is $\sqrt{(0.133)^2 + (0.15)^2} = 0.201$. Thus we have for the product $(3.0 \pm 0.4)(2.0 \pm 0.3) = 6.0 \pm 1.2$ and for the quotient $(3.0 \pm 0.4)/(2.0 \pm 0.3) = 1.5 \pm 0.3$. (These numbers are reproduced in the first code sketch after this list.)
- Perhaps one learns rules for dealing with powers, logarithms, and the like. These are all easily derived from the rule $\delta f(x) = |f'(x)|\,\delta x$. For example, $\delta(x^2)/x^2 = 2\,\delta x/x$ – in fact, when taking $n$th powers, the fractional uncertainty is multiplied by $n$. Similarly, $\delta(\ln x) = \delta x/x$. In this case, the fractional uncertainty becomes the absolute uncertainty in the logarithm: if we know a number to within ten percent, we know its (natural) log to within 0.1 unit. (The second code sketch below illustrates this rule.)
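Here is the first sketch promised above: the sum/difference and product/quotient rules in Python (my rendering of the rules, not code from Taylor), reproducing the worked numbers.

```python
import math

def add_sub_uncertainty(dx, dy):
    """Absolute uncertainty of x + y or x - y: absolute uncertainties add in quadrature."""
    return math.hypot(dx, dy)

def mul_div_fractional(x, dx, y, dy):
    """Fractional uncertainty of x*y or x/y: fractional uncertainties add in quadrature."""
    return math.hypot(dx / x, dy / y)

x, dx = 3.0, 0.4
y, dy = 2.0, 0.3

print(add_sub_uncertainty(dx, dy))      # 0.5, for both the sum and the difference
frac = mul_div_fractional(x, dx, y, dy)
print(frac)                             # ~0.201
print(x * y, x * y * frac)              # 6.0 +/- ~1.2
print(x / y, x / y * frac)              # 1.5 +/- ~0.3
```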
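And the second sketch: the general one-variable rule $\delta f(x) = |f'(x)|\,\delta x$, with the derivative estimated numerically so it works for any smooth $f$ (again a sketch of my own, covering the power and logarithm special cases):

```python
import math

def propagate(f, x, dx, h=1e-6):
    """delta f ~= |f'(x)| * dx, with f' estimated by a central difference."""
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return abs(fprime) * dx

x, dx = 3.0, 0.3  # a 10% fractional uncertainty

# Powers: the fractional uncertainty of x**n is n times that of x.
print(propagate(lambda t: t**2, x, dx) / x**2)  # ~0.2, i.e. 2 * 10%

# Logarithms: the absolute uncertainty of ln(x) is the fractional uncertainty of x.
print(propagate(math.log, x, dx))               # ~0.1
```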
But implicit in the rules for sums, differences, products, and quotients is the idea that the errors in the measurements of $x$ and $y$ are independent! So these rules can’t be used if there’s correlation between the errors. In particular, they can’t be used if the quantity that you’re interested in is a function of several variables, some of which occur more than once. Consider for example the Atwood machine, as Taylor does in his problem 3.47. This consists of two objects, of masses $M$ and $m$ with $M > m$, connected by a cord over a pulley; the larger mass accelerates downward, with acceleration $a = g\,\frac{M-m}{M+m}$. Here $g$ is the acceleration due to gravity, which we assume is known exactly. Since $M$ and $m$ each appear in both the numerator and the denominator, there may be correlation between the errors in the numerator and the denominator.
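A quick Monte Carlo sketch makes the correlation visible. The uncertainties here ($M = 100 \pm 2$, $m = 50 \pm 1$) are my own made-up numbers rather than Taylor’s, chosen because when $\delta M = \delta m$ the covariance of $M-m$ and $M+m$, which is $(\delta M)^2 - (\delta m)^2$, happens to vanish:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Made-up uncertainties (not Taylor's): M = 100 +/- 2, m = 50 +/- 1.
M = rng.normal(100.0, 2.0, n)
m = rng.normal(50.0, 1.0, n)

num, den = M - m, M + m
print(np.corrcoef(num, den)[0, 1])  # ~0.6: numerator and denominator are correlated

q = num / den
print(np.std(q))                    # true spread of (M-m)/(M+m), ~0.0126

# Naive quotient rule, wrongly treating num and den as independent:
frac = np.hypot(np.std(num) / np.mean(num), np.std(den) / np.mean(den))
print(np.mean(q) * frac)            # ~0.0157, overstating the uncertainty
```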
So what can we do? In this particular case it’s not hard to rewrite $a = g\,\frac{M-m}{M+m}$ as $a = g\,f(m/M)$, where $f(x) = \frac{1-x}{1+x}$, and use the rules that I’ve already discussed. (But it may be hard to see that this is worth doing!) For example (I’m taking these numbers from Taylor) say $M = 100 \pm 1$ and $m = 50 \pm 1$, in grams. Then the fractional uncertainty in the quotient $m/M$ is $\sqrt{(1/100)^2 + (1/50)^2} \approx 0.022$, and we get $m/M = 0.500 \pm 0.011$. Then $f(0.500) = 0.333$, so $\delta f = |f'(0.500)| \times 0.011 = \frac{2}{(1.5)^2} \times 0.011 \approx 0.010$, and thus we have $a = (0.333 \pm 0.010)\,g$.
Alternatively, we think that $m/M$ is likely to lie in the interval $[0.489, 0.511]$; then $f(0.489) = 0.343$ and $f(0.511) = 0.324$, so we figure that $f(m/M)$ is likely to lie in the interval $[0.324, 0.343]$.
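Both routes are easy to check in a few lines of Python (my sketch, using the $f$ defined above):

```python
import math

def f(x):
    # f(x) = (1 - x) / (1 + x), so that a = g * f(m/M)
    return (1 - x) / (1 + x)

M, dM = 100.0, 1.0
m, dm = 50.0, 1.0

# Quotient rule: fractional uncertainties of m and M add in quadrature.
q = m / M
dq = q * math.hypot(dm / m, dM / M)
print(q, dq)                 # 0.500 +/- ~0.011

# Route 1: propagate through f using f'(x) = -2 / (1 + x)**2.
df = abs(-2 / (1 + q) ** 2) * dq
print(f(q), df)              # 0.333 +/- ~0.010

# Route 2: evaluate f at the ends of the likely interval for m/M.
print(f(q - dq), f(q + dq))  # ~0.343 and ~0.324
```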
But we are not guaranteed that our rewriting trick will always work. What else can we do? I’ll address that in a future post.
Allow me to toot my horn.
I taught physical chemistry laboratory, before I retired. I’d try to get my students to do an error estimate on their measurements and on the computed results. It proved to be hard to get them to make reasonable estimates of the errors in measurements.

The best method of propagation to an overall estimate of the error in the final computed result seemed to me to be the following. Compute the result, recompute with one input measurement incremented by the estimate of its error, and take the difference to get the contribution of that input error. Repeat for each input piece of data. Square the separate computed errors, add the squares, and take the square root. (Easy enough to do with a good spreadsheet program.) That gives a pretty good estimate of the final error in the computed result, if the estimates of the errors in the input measurements are good.

Of course, that was the problem. Read a thermometer with one-degree divisions marked on the stem at intervals of about 4 mm? Then obviously the error is one degree. (Never mind that one can estimate the temperature to a tenth of a degree, with perhaps a one-sigma error of one tenth of a degree.)

The advantage of this method over what is being described in the part of Taylor you are presenting is that it is justifiable for pretty much all sources of error, whether related to the output linearly, or exponentially, or in some complex way that is not easy to describe in the elementary formulas. The seemingly simple experimental determination of the heat capacity ratio of an ideal gas by the adiabatic expansion method is an example of such complexity, as the measured atmospheric pressure is used several times in the calculation.
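The commenter’s procedure translates almost line-for-line into code. Here’s a minimal sketch (my rendering, not the commenter’s spreadsheet), applied to the Atwood-machine formula from the post:

```python
import math

def propagate(f, values, errors):
    """One-at-a-time propagation: recompute f with each input bumped by its
    estimated error, take the difference from the baseline result, and add
    the separate contributions in quadrature."""
    base = f(*values)
    contributions = []
    for i, err in enumerate(errors):
        bumped = list(values)
        bumped[i] += err
        contributions.append(f(*bumped) - base)
    return base, math.sqrt(sum(c * c for c in contributions))

# Example: a = g * (M - m) / (M + m), with g = 9.81 m/s^2 taken as exact.
a, da = propagate(lambda M, m: 9.81 * (M - m) / (M + m),
                  values=[100.0, 50.0], errors=[1.0, 1.0])
print(a, da)  # ~3.27 +/- ~0.097 m/s^2
```

Because the whole formula is recomputed for each bumped input, a quantity that appears several times in the calculation, like the atmospheric pressure in the adiabatic-expansion experiment, is automatically handled consistently.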
As an excellent short introduction, with a very good section on error propagation (perhaps as a suggested textbook), you might find Hughes and Hase’s Measurements and Their Uncertainties interesting as well.