Pronoy S. answered 03/30/24
Ok, let us take a lightning tour of 'Error Analysis', as it is usually called, or 'Analysis of Uncertainties', which is conceptually less misleading than the popular name. The term 'Error' suggests something that comes about from the experimenter being careless (which may well be the case, but not always). In reality, any apparatus measuring any physical quantity (like r) always has some uncertainty; no measuring device has infinite resolution. So, when we measure r, we always estimate the mean value to be within some neighborhood δr > 0 of an estimate, r0. In other words, the estimated value of r lies in the range (r0 − δr, r0 + δr), or, more compactly, r ∈ (r0 ± δr), with possible values of r lying in an interval of length 2δr.
First, a little point regarding the phrasing of your question. You say that we have measured the radius r of a disk to be (19 ± 0.1) cm. The mean value should be specified to the least significant digit (so, 19.x instead of 19). The least (or last) significant digit is the one whose place value is comparable to the uncertainty of the measuring apparatus, which in this case is δr = 0.1 cm. That is, when we measure a distance using a mm scale, we record the value up to the mm digit, and then account for the fact that the measurement is uncertain: the scale has a smallest marking (1 mm), and any point lying between two markings has an uncertain position. The measured r could be (19.1 ± 0.1) cm, for example.
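As a small illustration, here is a Python sketch (the `fmt` helper is hypothetical, written just for this answer) that rounds the mean value so its last reported digit matches the decimal place of the uncertainty:

```python
import math

# Hypothetical helper: report the mean to the same decimal place as the
# uncertainty, so the last significant digit matches δr (0.1 cm -> 1 decimal).
def fmt(value, uncertainty):
    decimals = max(0, -int(math.floor(math.log10(uncertainty))))
    return f"({value:.{decimals}f} ± {uncertainty}) cm"

print(fmt(19.1, 0.1))   # (19.1 ± 0.1) cm
print(fmt(19.1, 1.0))   # (19 ± 1.0) cm
```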
Now, coming to the question, we wish to use the uncertainty in the measured value of r in order to estimate the uncertainty in the calculated value of area A. In order to do this, we need to perform what is known as propagation of uncertainty. The basic idea goes as follows:
Let us say that we wish to compute some R that is a function of r: R = R(r). Obviously, if we could only estimate the value of r up to some uncertainty, we can only calculate the value of R(r) up to some uncertainty as well. The simplest way to do so is to use the range of values estimated for r, (r_min, r_max) = (r0 − δr, r0 + δr), to find the range of possible values of R. If, for example, R ∼ r^n with n > 0 (for instance, R(r) = A = πr^2), then R_max = r_max^n and R_min = r_min^n.
So, δR = (r_max^n − r_min^n)/2 = (r_max − r_min)(r_max^(n−1) + r_max^(n−2) r_min + ... + r_min^(n−1))/2
= δr (r_max^(n−1) + r_max^(n−2) r_min + ... + r_min^(n−1)), since r_max − r_min = 2δr. So, δR ∝ δr.
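To make the endpoint (min/max) method concrete, here is a minimal Python sketch, assuming the measured value r = (19.1 ± 0.1) cm from above and the disk area A = πr^2:

```python
import math

r0, dr = 19.1, 0.1          # measured radius and its uncertainty, in cm

def area(r):
    return math.pi * r**2   # R(r) = A = πr^2

# Endpoint method: evaluate R at the edges of the interval (r0 - δr, r0 + δr)
A_min, A_max = area(r0 - dr), area(r0 + dr)
A0 = area(r0)
dA = (A_max - A_min) / 2    # half the spread of possible values of A

print(f"A = ({A0:.0f} ± {dA:.0f}) cm^2")   # A = (1146 ± 12) cm^2
```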
Now comes the crucial insight. In case we have a sensible measuring setup, the absolute value of the estimated mean value should be much greater than the uncertainty, that is, |r0| >> δr (if we wish to measure something a few mm in length using a mm scale, we don't have a very sensible experimental setup). When this happens, we can expand the uncertainty, viewed as a function of the two variables, δR(r0, δr), into a power series of terms of decreasing importance: δR(r0, δr) = δR0 + δR1 (δr/r0) + δR2 (δr/r0)^2 + ... .
It is clear that the constant δR0 = δR(r0, δr=0) = 0.
The terms decrease in importance because |δr/r0| << 1, so |δr/r0|^2 << |δr/r0|, and so on.
If we only wish to estimate the most dominant contribution of the measurement uncertainty δr to the calculation uncertainty δR, we should only retain the δR1 term.
So, δR ≅ δR1 (δr/r0) ∝ δr, matching δR ≅ δr (r_max^(n−1) + r_max^(n−2) r_min + ... + r_min^(n−1)) from above.
Since we are only interested in the dominant contribution of δr (i.e. the term first order in δr), we should set both r_max and r_min in the bracket equal to r0; their dependence on δr introduces additional powers of δr. Then we have: δR ≅ (n r0^(n−1)) δr. If R = c r^n for some constant c, then δR ≅ c n r0^(n−1) δr.
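As a quick numerical check of δR ≅ c n r0^(n−1) δr, a Python sketch using the same example values as before (the helper name `prop_power` is my own):

```python
import math

# Leading-order propagation for R = c * r^n: δR ≈ c * n * r0**(n-1) * δr
def prop_power(c, n, r0, dr):
    return abs(c * n * r0**(n - 1) * dr)

# Disk area: c = π, n = 2, r = (19.1 ± 0.1) cm
dA = prop_power(math.pi, 2, 19.1, 0.1)
print(f"δA ≈ {dA:.1f} cm^2")   # δA ≈ 12.0 cm^2, agreeing with the endpoint estimate
```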
Note that this is very suggestive, as it should be. Actually, we can derive the above result in a much more elegant (and illuminating) way. Essentially, when |r0| >> δr, the problem of propagating uncertainties reduces to the mathematical problem of finding the Taylor series expansion of the function R about some value r0. This is given as:
R(r0 + δr) = R(r0) + R'(r0) δr + R''(r0) (δr)^2/2 + ... + R^(m)(r0) (δr)^m/m! + ... ≅ R0 + R'(r0) δr,
where R'(r0) is the first derivative of the function R with respect to r evaluated at r0 (and R^(m) the m-th derivative), and R0 = R(r0). When R = c r^n, R' = c n r^(n−1), giving the above result. The relative uncertainty is then δR/R0 ≅ n δr/r0.
Note that the constant c is unimportant when evaluating the relative uncertainty.
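The Taylor-series view suggests a generic recipe: δR ≅ |R'(r0)| δr for any smooth R. A minimal Python sketch, approximating the derivative by a central finite difference (the function name `propagate` is hypothetical):

```python
import math

# First-order propagation δR ≈ |R'(r0)| δr, with R'(r0) approximated by a
# central finite difference; works for any smooth R, not just R = c r^n.
def propagate(R, r0, dr, h=1e-6):
    dRdr = (R(r0 + h) - R(r0 - h)) / (2 * h)
    return abs(dRdr) * dr

area = lambda r: math.pi * r**2
dA = propagate(area, 19.1, 0.1)
rel = dA / area(19.1)                  # relative uncertainty δA/A ≈ 2 δr/r0
print(f"δA ≈ {dA:.1f} cm^2, δA/A ≈ {rel:.2%}")   # δA ≈ 12.0 cm^2, δA/A ≈ 1.05%
```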
◊
BONUS: This second method actually allows us to estimate the uncertainty in a calculated quantity R that depends on several measured quantities a, b, c, .... By the same logic, we can perform a power series expansion of R; now the expansion involves partial derivatives.
R(a + δa, b + δb, c + δc, ...) ≅ R(a, b, c, ...) + (∂R/∂a) δa + (∂R/∂b) δb + (∂R/∂c) δc + ..., with each partial derivative evaluated at (a, b, c, ...).
If R = C a^n b^m c^o ... for some constant C, then it is easy to show that this method yields
δR/R0 = δR/R(a, b, c, ...) = n (δa/a) + m (δb/b) + o (δc/c) + ...
Actually, this is an overestimate unless a, b, c are correlated variables, that is, physical quantities connected in such a manner that the observed deviations from the true means are in the same direction for a, b, and c. Typically, though, a, b, c will be uncorrelated variables (like distance, time, mass, etc.); in that case, a better estimate for the relative uncertainty in R is
δR/R0 = ( (n δa/a)^2 + (m δb/b)^2 + (o δc/c)^2 + ... )^(1/2).
But how we get there is a whole other story.
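To compare the two combination rules numerically, a small Python sketch with hypothetical measurements (R = a^2 b, with a = 10 ± 0.1 and b = 5 ± 0.1; the values and helper names are made up for illustration):

```python
import math

# Each term is a pair (exponent, relative uncertainty δx/x) for R = C a^n b^m ...
def rel_linear(terms):       # linear sum: worst case / fully correlated inputs
    return sum(abs(p * r) for p, r in terms)

def rel_quadrature(terms):   # uncorrelated inputs: add in quadrature
    return math.sqrt(sum((p * r)**2 for p, r in terms))

# Hypothetical R = a^2 * b with a = 10 ± 0.1 and b = 5 ± 0.1
terms = [(2, 0.1 / 10), (1, 0.1 / 5)]
print(f"linear:     {rel_linear(terms):.4f}")      # linear:     0.0400
print(f"quadrature: {rel_quadrature(terms):.4f}")  # quadrature: 0.0283
```

The quadrature sum is never larger than the linear sum, which is why the linear formula is described above as an overestimate for uncorrelated quantities.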
Hope this helps you with your question.