Measurements and Error Analysis

"It is better to be roughly right than precisely wrong." — Alan Greenspan

The Uncertainty of Measurements

Some numerical statements are exact: Mary has three brothers, and 2 + 2 = 4. However, all measurements have some degree of uncertainty that may come from a variety of sources. The process of evaluating the uncertainty associated with a measurement result is often called uncertainty analysis or error analysis. The complete statement of a measured value should include an estimate of the level of confidence associated with the value. Properly reporting an experimental result along with its uncertainty allows other people to make judgments about the quality of the experiment, and it facilitates meaningful comparisons with other similar values or a theoretical prediction. Without an uncertainty estimate, it is impossible to answer the basic scientific question: "Does my result agree with a theoretical prediction or results from other experiments?" This question is fundamental for deciding if a scientific hypothesis is confirmed or refuted.

When we make a measurement, we generally assume that some exact or true value exists based on how we define what is being measured. While we may never know this true value exactly, we attempt to find this ideal quantity to the best of our ability with the time and resources available. As we make measurements by different methods, or even when making multiple measurements using the same method, we may obtain slightly different results. So how do we report our findings for our best estimate of this elusive true value? The most common way to show the range of values that we believe includes the true value is:

( 1 )

measurement = (best estimate ± uncertainty) units

Let's take an example. Suppose you want to find the mass of a gold ring that you would like to sell to a friend. You do not want to jeopardize your friendship, so you want to get an accurate mass of the ring in order to charge a fair market price. You estimate the mass to be between 10 and 20 grams from how heavy it feels in your hand, but this is not a very precise estimate. After some searching, you find an electronic balance that gives a mass reading of 17.43 grams. While this measurement is much more precise than the original estimate, how do you know that it is accurate, and how confident are you that this measurement represents the true value of the ring's mass? Since the digital display of the balance is limited to 2 decimal places, you could report the mass as

m = 17.43 ± 0.01 g.

Suppose you use the same electronic balance and obtain several more readings: 17.46 g, 17.42 g, 17.44 g, so that the average mass appears to be in the range of

17.44 ± 0.02 g.

By now you may feel confident that you know the mass of this ring to the nearest hundredth of a gram, but how do you know that the true value definitely lies between 17.43 g and 17.45 g? Since you want to be honest, you decide to use another balance that gives a reading of 17.22 g. This value is clearly below the range of values found on the first balance, and under normal circumstances, you might not care, but you want to be fair to your friend. So what do you do now? The answer lies in knowing something about the accuracy of each instrument. To help answer these questions, we should first define the terms accuracy and precision:

Accuracy is the closeness of agreement between a measured value and a true or accepted value. Measurement error is the amount of inaccuracy.

Precision is a measure of how well a result can be determined (without reference to a theoretical or true value). It is the degree of consistency and agreement among independent measurements of the same quantity; also the reliability or reproducibility of the result.

The uncertainty estimate associated with a measurement should account for both the accuracy and precision of the measurement.

Note: Unfortunately the terms error and uncertainty are often used interchangeably to describe both imprecision and inaccuracy. This usage is so common that it is impossible to avoid entirely. Whenever you see these terms, make sure you understand whether they refer to accuracy or precision, or both. Notice that in order to determine the accuracy of a particular measurement, we have to know the ideal, true value. Sometimes we have a "textbook" measured value, which is well known, and we assume that this is our "ideal" value, and use it to estimate the accuracy of our result. Other times we know a theoretical value, which is calculated from basic principles, and this also may be taken as an "ideal" value. But physics is an empirical science, which means that the theory must be validated by experiment, and not the other way around. We can escape these difficulties and retain a useful definition of accuracy by assuming that, even when we do not know the true value, we can rely on the best available accepted value with which to compare our experimental value.

For our example with the gold ring, there is no accepted value with which to compare, and both measured values have the same precision, so we have no reason to believe one more than the other. We could look up the accuracy specifications for each balance as provided by the manufacturer (the Appendix at the end of this lab manual contains accuracy data for most instruments you will use), but the best way to assess the accuracy of a measurement is to compare with a known standard. For this situation, it may be possible to calibrate the balances with a standard mass that is accurate within a narrow tolerance and is traceable to a primary mass standard at the National Institute of Standards and Technology (NIST). Calibrating the balances should eliminate the discrepancy between the readings and provide a more accurate mass measurement.

Precision is often reported quantitatively by using relative or fractional uncertainty:

( 2 )

Relative Uncertainty = uncertainty / measured quantity

Example:

m = 75.5 ± 0.5 g

has a fractional uncertainty of:

0.5 / 75.5 = 0.0066 = 0.7%.

Accuracy is often reported quantitatively by using relative error:

( 3 )

Relative Error = (measured value − expected value) / expected value

If the expected value for m is 80.0 g, then the relative error is:

(75.5 − 80.0) / 80.0 = −0.056 = −5.6%

Note: The minus sign indicates that the measured value is less than the expected value.
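If you like to verify such arithmetic with a short script, here is a minimal Python sketch of equations (2) and (3) applied to the mass example above; the function names are illustrative, not from any standard library.

```python
# Minimal sketch of equations (2) and (3) for m = 75.5 ± 0.5 g
# with an expected value of 80.0 g.

def relative_uncertainty(uncertainty, measured):
    """Equation (2): uncertainty / measured quantity."""
    return uncertainty / measured

def relative_error(measured, expected):
    """Equation (3): (measured - expected) / expected."""
    return (measured - expected) / expected

m, dm, expected = 75.5, 0.5, 80.0
print(f"relative uncertainty: {relative_uncertainty(dm, m):.1%}")  # 0.7%
print(f"relative error:       {relative_error(m, expected):.1%}")  # -5.6%
```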

When analyzing experimental data, it is important that you understand the difference between precision and accuracy. Precision indicates the quality of the measurement, without any guarantee that the measurement is "correct." Accuracy, on the other hand, assumes that there is an ideal value, and tells how far your answer is from that ideal, "right" answer. These concepts are directly related to random and systematic measurement errors.

Types of Errors

Measurement errors may be classified as either random or systematic, depending on how the measurement was obtained (an instrument could cause a random error in one situation and a systematic error in another).

Random errors are statistical fluctuations (in either direction) in the measured data due to the precision limitations of the measurement device. Random errors can be evaluated through statistical analysis and can be reduced by averaging over a large number of observations (see standard error).

Systematic errors are reproducible inaccuracies that are consistently in the same direction. These errors are difficult to detect and cannot be analyzed statistically. If a systematic error is identified when calibrating against a standard, applying a correction or correction factor to compensate for the effect can reduce the bias. Unlike random errors, systematic errors cannot be detected or reduced by increasing the number of observations.

When making careful measurements, our goal is to reduce as many sources of error as possible and to keep track of those errors that we cannot eliminate. It is useful to know the types of errors that may occur, so that we may recognize them when they arise. Common sources of error in physics laboratory experiments:

Incomplete definition (may be systematic or random) — One reason that it is impossible to make exact measurements is that the measurement is not always clearly defined. For example, if two different people measure the length of the same string, they would probably get different results because each person may stretch the string with a different tension. The best way to minimize definition errors is to carefully consider and specify the conditions that could affect the measurement.

Failure to account for a factor (usually systematic) — The most challenging part of designing an experiment is trying to control or account for all possible factors except the one independent variable that is being analyzed. For example, you may inadvertently ignore air resistance when measuring free-fall acceleration, or you may neglect to account for the effect of the Earth's magnetic field when measuring the field near a small magnet. The best way to account for these sources of error is to brainstorm with your peers about all the factors that could possibly affect your result. This brainstorming should be done before beginning the experiment, in order to plan for and account for the confounding factors before taking data. Sometimes a correction can be applied to a result after taking data to account for an error that was not detected earlier.

Environmental factors (systematic or random) — Be aware of errors introduced by your immediate working environment. You may need to take account for or protect your experiment from vibrations, drafts, changes in temperature, and electronic noise or other effects from nearby apparatus.

Instrument resolution (random) — All instruments have finite precision that limits the ability to resolve small measurement differences. For instance, a meter stick cannot be used to distinguish distances to a precision much better than about half of its smallest scale division (0.5 mm in this case). One of the best ways to obtain more precise measurements is to use a null difference method instead of measuring a quantity directly. Null or balance methods involve using instrumentation to measure the difference between two similar quantities, one of which is known very accurately and is adjustable. The adjustable reference quantity is varied until the difference is reduced to zero. The two quantities are then balanced, and the magnitude of the unknown quantity can be found by comparison with a measurement standard. With this method, problems of source instability are eliminated, and the measuring instrument can be very sensitive and does not even need a scale.

Calibration (systematic) — Whenever possible, the calibration of an instrument should be checked before taking data. If a calibration standard is not available, the accuracy of the instrument should be checked by comparing with another instrument that is at least as precise, or by consulting the technical data provided by the manufacturer. Calibration errors are usually linear (measured as a fraction of the full scale reading), so that larger values result in greater absolute errors.

Zero offset (systematic) — When making a measurement with a micrometer caliper, electronic balance, or electrical meter, always check the zero reading first. Re-zero the instrument if possible, or at least measure and record the zero offset so that readings can be corrected later. It is also a good idea to check the zero reading throughout the experiment. Failure to zero a device will result in a constant error that is more significant for smaller measured values than for larger ones.

Physical variations (random) — It is always wise to obtain multiple measurements over the widest range possible. Doing so often reveals variations that might otherwise go undetected. These variations may call for closer examination, or they may be combined to find an average value.

Parallax (systematic or random) — This error can occur whenever there is some distance between the measuring scale and the indicator used to obtain a measurement. If the observer's eye is not squarely aligned with the pointer and scale, the reading may be too high or too low (some analog meters have mirrors to help with this alignment).

Instrument drift (systematic) — Most electronic instruments have readings that drift over time. The amount of drift is generally not a concern, but occasionally this source of error can be significant.

Lag time and hysteresis (systematic) — Some measuring devices require time to reach equilibrium, and taking a measurement before the instrument is stable will result in a measurement that is too high or too low. A common example is taking temperature readings with a thermometer that has not reached thermal equilibrium with its environment. A similar effect is hysteresis, where the instrument readings lag behind and appear to have a "memory" effect, as data are taken sequentially moving up or down through a range of values. Hysteresis is most commonly associated with materials that become magnetized when a changing magnetic field is applied.

Personal errors — These come from carelessness, poor technique, or bias on the part of the experimenter. The experimenter may measure incorrectly, or may use poor technique in taking a measurement, or may introduce a bias into measurements by expecting (and inadvertently forcing) the results to agree with the expected outcome.

Gross personal errors, sometimes called mistakes or blunders, should be avoided and corrected if discovered. As a rule, personal errors are excluded from the error analysis discussion because it is generally assumed that the experimental result was obtained by following correct procedures. The term human error should also be avoided in error analysis discussions because it is too general to be useful.

Estimating Experimental Uncertainty for a Single Measurement

Any measurement you make will have some uncertainty associated with it, no matter the precision of your measuring tool. So how do you determine and report this uncertainty?

The uncertainty of a single measurement is limited by the precision and accuracy of the measuring instrument, along with any other factors that might affect the ability of the experimenter to make the measurement.

For example, if you are trying to use a meter stick to measure the diameter of a tennis ball, the uncertainty might be

± 5 mm,

but if you used a Vernier caliper, the uncertainty could be reduced to maybe

± 2 mm.

The limiting factor with the meter stick is parallax, while the second example is limited by ambiguity in the definition of the tennis ball's diameter (it's fuzzy!). In both of these cases, the uncertainty is greater than the smallest divisions marked on the measuring tool (likely 1 mm and 0.05 mm respectively). Unfortunately, there is no general rule for determining the uncertainty in all measurements. The experimenter is the one who can best evaluate and quantify the uncertainty of a measurement based on all the possible factors that affect the result. Therefore, the person making the measurement has the obligation to make the best judgment possible and report the uncertainty in a way that clearly explains what the uncertainty represents:

( 4 )

Measurement = (measured value ± standard uncertainty) unit of measurement

where the ± standard uncertainty indicates approximately a 68% confidence interval (see sections on Standard Deviation and Reporting Uncertainties).
Example: Diameter of tennis ball =

6.7 ± 0.2 cm.

Estimating Uncertainty in Repeated Measurements

Suppose you time the period of oscillation of a pendulum using a digital instrument (that you assume is measuring accurately) and find: T = 0.44 seconds. This single measurement of the period suggests a precision of ±0.005 s, but this instrument precision may not give a complete sense of the uncertainty. If you repeat the measurement several times and examine the variation among the measured values, you can get a better idea of the uncertainty in the period. For example, here are the results of five measurements, in seconds: 0.46, 0.44, 0.45, 0.44, 0.41.

( 5 )

Average (mean) = (x1 + x2 + ... + xN) / N

For this situation, the best estimate of the period is the average, or mean.

Whenever possible, repeat a measurement several times and average the results. This average is generally the best estimate of the "true" value (unless the data set is skewed by one or more outliers, which should be examined to determine whether they are bad data points that should be omitted from the average, or valid measurements that require further investigation). Generally, the more repetitions you make of a measurement, the better this estimate will be, but be careful to avoid wasting time taking more measurements than is necessary for the precision required.
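As a quick illustration, here is a minimal Python sketch applying equation (5) to the five pendulum readings above:

```python
# Sketch of equation (5): the mean of the five period readings above.
periods = [0.46, 0.44, 0.45, 0.44, 0.41]  # seconds

mean = sum(periods) / len(periods)  # (x1 + x2 + ... + xN) / N
print(f"best estimate of T = {mean:.2f} s")  # 0.44 s
```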

Consider, as another example, the measurement of the width of a piece of paper using a meter stick. Being careful to keep the meter stick parallel to the edge of the paper (to avoid a systematic error which would cause the measured value to be consistently higher than the correct value), the width of the paper is measured at a number of points on the sheet, and the values obtained are entered in a data table. Note that the last digit is only a rough estimate, since it is difficult to read a meter stick to the nearest tenth of a millimeter (0.01 cm).

( 6 )

Average = (sum of observed widths) / (no. of observations) = 155.96 cm / 5 = 31.19 cm

This average is the best available estimate of the width of the piece of paper, but it is certainly not exact. We would have to average an infinite number of measurements to approach the true mean value, and even then, we are not guaranteed that the mean value is accurate, because there is still some systematic error from the measuring tool, which can never be calibrated perfectly. So how do we express the uncertainty in our average value? One way to express the variation among the measurements is to use the average deviation. This statistic tells us on average (with 50% confidence) how much the individual measurements vary from the mean.

( 7 )

d = ( |x1 − x̄| + |x2 − x̄| + ... + |xN − x̄| ) / N

However, the standard deviation is the most common way to characterize the spread of a data set. The standard deviation is always slightly greater than the average deviation, and is used because of its association with the normal distribution that is frequently encountered in statistical analyses.

Standard Deviation

To calculate the standard deviation for a sample of N measurements:

1. Sum all the measurements, and divide by N to get the average, or mean.

2. Now, subtract this average from each of the N measurements to obtain N "deviations".

3. Square each of these N deviations and add them all up.

4. Divide this result by (N − 1) and take the square root.

We can write out the formula for the standard deviation as follows. Let the N measurements be called x1, x2, ..., xN. Let the average of the N values be called x̄. Then each deviation is given by

δxi = xi − x̄, for i = 1, 2, ..., N.

The standard deviation is:

( 8 )

s = √[ (δx1² + δx2² + ... + δxN²) / (N − 1) ]

In our previous example, the average width x̄ is 31.19 cm. The deviations are 0.14, 0.04, 0.07, 0.17, and 0.01 cm, and the average deviation is:

d = 0.086 cm.

The standard deviation is:

s = √[ ((0.14)² + (0.04)² + (0.07)² + (0.17)² + (0.01)²) / (5 − 1) ] = 0.12 cm.

The significance of the standard deviation is this: if you now make one more measurement using the same meter stick, you can reasonably expect (with about 68% confidence) that the new measurement will be within 0.12 cm of the estimated average of 31.19 cm. In fact, it is reasonable to use the standard deviation as the uncertainty associated with this single new measurement. However, the uncertainty of the average value is the standard deviation of the mean, which is always less than the standard deviation (see next section).

Consider an example where 100 measurements of a quantity were made. The average or mean value was 10.5 and the standard deviation was s = 1.83. The figure below is a histogram of the 100 measurements, which shows how often a certain range of values was measured. For example, in 20 of the measurements, the value was in the range 9.5 to 10.5, and most of the readings were close to the mean value of 10.5. The standard deviation s for this set of measurements is roughly how far from the average value most of the readings fell. For a large enough sample, approximately 68% of the readings will be within one standard deviation of the mean value, 95% of the readings will be in the interval x̄ ± 2s, and nearly all (99.7%) of the readings will lie within 3 standard deviations from the mean. The smooth curve superimposed on the histogram is the gaussian or normal distribution predicted by theory for measurements involving random errors. As more and more measurements are made, the histogram will more closely follow the bell-shaped gaussian curve, but the standard deviation of the distribution will remain approximately the same.

Figure 1

Standard Deviation of the Mean (Standard Error)

When we report the average value of N measurements, the uncertainty we should associate with this average value is the standard deviation of the mean, often called the standard error (SE).

( 9 )

σx̄ = s / √N

The standard error is smaller than the standard deviation by a factor of 1/√N. This reflects the fact that we expect the uncertainty of the average value to get smaller when we use a larger number of measurements, N. In the previous example, we find the standard error is 0.05 cm, where we have divided the standard deviation of 0.12 by √5.

The final result should then be reported as:

Average paper width = 31.19 ± 0.05 cm.
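The entire chain of calculations for the paper-width example, equations (7) through (9), can be reproduced with a short Python sketch using the deviations quoted above:

```python
import math

# Sketch of equations (7)-(9) using the five paper-width deviations
# quoted above; reproduces d = 0.086 cm, s = 0.12 cm, SE = 0.05 cm.
deviations = [0.14, 0.04, 0.07, 0.17, 0.01]  # |x_i - mean|, in cm
N = len(deviations)

d = sum(deviations) / N                                   # average deviation (7)
s = math.sqrt(sum(dx**2 for dx in deviations) / (N - 1))  # standard deviation (8)
se = s / math.sqrt(N)                                     # standard error (9)

print(f"d = {d:.3f} cm, s = {s:.2f} cm, SE = {se:.2f} cm")
```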

Anomalous Data

The first step you should take in analyzing data (and even while taking data) is to examine the data set as a whole to look for patterns and outliers. Anomalous data points that lie outside the general trend of the data may suggest an interesting phenomenon that could lead to a new discovery, or they may simply be the result of a mistake or random fluctuations. In any case, an outlier requires closer examination to determine the cause of the unexpected result. Extreme data should never be "thrown out" without clear justification and explanation, because you may be discarding the most significant part of the investigation! However, if you can clearly justify omitting an inconsistent data point, then you should exclude the outlier from your analysis so that the average value is not skewed from the "true" mean.

Fractional Uncertainty Revisited

When a reported value is determined by taking the average of a set of independent readings, the fractional uncertainty is given by the ratio of the uncertainty divided by the average value. For this example,

( 10 )

Fractional uncertainty = uncertainty / average value = 0.05 / 31.19 = 0.0016 ≈ 0.2%

Note that the fractional uncertainty is dimensionless, but it is often reported as a percentage or in parts per million (ppm) to emphasize the fractional nature of the value. A scientist might also make the statement that this measurement "is good to about 1 part in 500" or "precise to about 0.2%". The fractional uncertainty is also important because it is used in propagating uncertainty in calculations using the result of a measurement, as discussed in the next section.

Propagation of Uncertainty

Suppose we want to determine a quantity f, which depends on x and maybe several other variables y, z, etc. We want to know the error in f if we measure x, y, ... with errors σx, σy, ... Examples:

( 11 )

f = xy (area of a rectangle)

( 12 )

f = p cos θ (x-component of momentum)

( 13 )

f = x / t (velocity)

For a single-variable function f(x), the deviation in f can be related to the deviation in x using calculus:

( 14 )

δf = (df/dx) δx

Thus, taking the square and the average:

( 15 )

⟨(δf)²⟩ = (df/dx)² ⟨(δx)²⟩

and using the definition of σ, we get:

( 16 )

σf = |df/dx| σx

Examples:

(a) f = √x

( 17 )

df/dx = 1 / (2√x)

( 18 )

σf = σx / (2√x), or σf/f = (1/2)(σx/x)

(b) f = x²

( 19 )

df/dx = 2x

( 20 )

σf = 2x σx, or σf/f = 2(σx/x)

(c) f = cos θ

( 21 )

df/dθ = −sin θ

( 22 )

σf = |sin θ| σθ, or σf/f = |tan θ| σθ

Note: in this situation, σθ must be in radians.
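As a numerical check of example (c), here is a minimal Python sketch, assuming the values θ = 25° and σθ = 1° that appear in the upper-lower bound example later in this manual:

```python
import math

# Sketch of sigma_f = |sin(theta)| * sigma_theta for f = cos(theta),
# using theta = 25 deg and sigma_theta = 1 deg (converted to radians).
theta = math.radians(25.0)
sigma_theta = math.radians(1.0)  # sigma_theta must be in radians

sigma_f = abs(math.sin(theta)) * sigma_theta
print(f"f = {math.cos(theta):.3f} +/- {sigma_f:.4f}")  # 0.906 +/- 0.0074
```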

In the case where f depends on two or more variables, the derivation above can be repeated with minor modification. For two variables, f(x, y), we have:

( 23 )

δf = (∂f/∂x) δx + (∂f/∂y) δy

The partial derivative ∂f/∂x means differentiating f with respect to x while holding the other variables fixed. Taking the square and the average, we get the law of propagation of uncertainty:

( 24 )

σf² = (∂f/∂x)² σx² + (∂f/∂y)² σy² + 2 (∂f/∂x)(∂f/∂y) ⟨δx δy⟩

If the measurements of x and y are uncorrelated, then ⟨δx δy⟩ = 0, and we get:

( 26 )

σf = √[ (∂f/∂x)² σx² + (∂f/∂y)² σy² ]

Examples:

(a) f = x + y

( 27 )

σf = √( σx² + σy² )

When adding (or subtracting) independent measurements, the absolute uncertainty of the sum (or difference) is the root sum of squares (RSS) of the individual absolute uncertainties. When adding correlated measurements, the uncertainty in the result is simply the sum of the absolute uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Adding or subtracting a constant does not change the absolute uncertainty of the calculated value, as long as the constant is an exact value.

(b) f = xy

( 29 )

σf = √( y² σx² + x² σy² )

Dividing the previous equation by f = xy, we get:

( 30 )

σf/f = √[ (σx/x)² + (σy/y)² ]

(c) f = x / y

( 31 )

σf = √[ (1/y)² σx² + (x/y²)² σy² ]

Dividing the previous equation by f = x/y, we get:

( 32 )

σf/f = √[ (σx/x)² + (σy/y)² ]

When multiplying (or dividing) independent measurements, the relative uncertainty of the product (quotient) is the RSS of the individual relative uncertainties. When multiplying correlated measurements, the uncertainty in the result is just the sum of the relative uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Multiplying or dividing by a constant does not change the relative uncertainty of the calculated value.

Note that the relative uncertainty in f, as shown in (b) and (c) above, has the same form for multiplication and division: the relative uncertainty in a product or quotient depends on the relative uncertainty of each individual term. Example: Find the uncertainty in v, where

v = at

with a = 9.8 ± 0.1 m/s², t = 3.4 ± 0.1 s

( 34 )

σv/v = √[ (σa/a)² + (σt/t)² ] = √[ (0.010)² + (0.029)² ] = 0.031 or 3.1%

Notice that the relative uncertainty in t (2.9%) is significantly greater than the relative uncertainty for a (1.0%), and therefore the relative uncertainty in v is essentially the same as for t (about 3%). Graphically, the RSS is like the Pythagorean theorem:

Figure 2

The total uncertainty is the length of the hypotenuse of a right triangle with legs the length of each uncertainty component.
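Here is a minimal Python sketch of the RSS combination in equation (34); the variable names are illustrative:

```python
import math

# Sketch of equation (34): v = a*t with a = 9.8 ± 0.1 m/s^2 and
# t = 3.4 ± 0.1 s; relative uncertainties add in quadrature.
a, sigma_a = 9.8, 0.1
t, sigma_t = 3.4, 0.1

rel_a, rel_t = sigma_a / a, sigma_t / t  # 1.0% and 2.9%
rel_v = math.sqrt(rel_a**2 + rel_t**2)   # RSS: 3.1%
v = a * t
print(f"v = {v:.0f} m/s, relative uncertainty = {rel_v:.1%}")
```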

Timesaving approximation: "A chain is only as strong as its weakest link."
If one of the uncertainty terms is more than three times greater than the other terms, the root-sum-of-squares formula can be skipped, and the combined uncertainty is simply the largest uncertainty. This shortcut can save a lot of time without losing any accuracy in the estimate of the overall uncertainty.

The Upper-Lower Bound Method of Uncertainty Propagation

An alternative, and sometimes simpler procedure, to the tedious propagation of uncertainty law is the upper-lower bound method of uncertainty propagation. This alternative method does not yield a standard uncertainty estimate (with a 68% confidence interval), but it does give a reasonable estimate of the uncertainty for practically any situation. The basic idea of this method is to use the uncertainty ranges of each variable to calculate the maximum and minimum values of the function. You can also think of this procedure as examining the best and worst case scenarios. For example, suppose you measure an angle to be: θ = 25° ± 1° and you needed to find f = cos θ, then:

( 35 )

fmax = cos(24°) = 0.9135

( 36 )

fmin = cos(26°) = 0.8988

( 37 )

f = 0.906 ± 0.007,

where 0.007 is half the difference between fmax and fmin.

Note that even though θ was only measured to 2 significant figures, f is known to 3 figures. By using the propagation of uncertainty law:

σf = |sin θ| σθ = (0.423)(π/180) = 0.0074

(same result as above).

The uncertainty estimate from the upper-lower bound method is generally larger than the standard uncertainty estimate found from the propagation of uncertainty law, but both methods will give a reasonable estimate of the uncertainty in a calculated value.
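The upper-lower bound calculation for f = cos θ is easy to script; the sketch below simply evaluates the function at both ends of the measured range and takes the midpoint as the best estimate:

```python
import math

# Sketch of the upper-lower bound method for f = cos(theta),
# with theta = 25 ± 1 degrees.
theta, dtheta = 25.0, 1.0  # degrees

f_max = math.cos(math.radians(theta - dtheta))  # cos(24 deg) = 0.9135
f_min = math.cos(math.radians(theta + dtheta))  # cos(26 deg) = 0.8988

f_best = (f_max + f_min) / 2
df = (f_max - f_min) / 2  # half the difference between max and min
print(f"f = {f_best:.3f} +/- {df:.3f}")  # 0.906 +/- 0.007
```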

The upper-lower bound method is especially useful when the functional relationship is not clear or is incomplete. One practical application is forecasting the expected range in an expense budget. In this case, some expenses may be fixed, while others may be uncertain, and the range of these uncertain terms could be used to predict the upper and lower bounds on the total expense.

Significant Figures

The number of significant figures in a value can be defined as all the digits between and including the first non-zero digit from the left, through the last digit. For example, 0.44 has two significant figures, and the number 66.770 has five significant figures. Zeroes are significant except when used to locate the decimal point, as in the number 0.00030, which has 2 significant figures. Zeroes may or may not be significant for numbers like 1200, where it is not clear whether two, three, or four significant figures are indicated. To avoid this ambiguity, such numbers should be expressed in scientific notation (e.g. 1.20 × 10³ clearly indicates three significant figures).

When using a calculator, the display will often show many digits, only some of which are meaningful (significant in a different sense). For example, if you want to estimate the area of a circular playing field, you might pace off the radius to be 9 meters and use the formula A = πr². When you compute this area, the calculator might report a value of 254.4690049 m². It would be extremely misleading to report this number as the area of the field, because it would suggest that you know the area to an absurd degree of precision: to within a fraction of a square millimeter! Since the radius is only known to one significant figure, the final answer should also contain only one significant figure: Area = 3 × 10² m². From this example, we can see that the number of significant figures reported for a value implies a certain degree of precision. In fact, the number of significant figures suggests a rough estimate of the relative uncertainty:

The number of significant figures implies an approximate relative uncertainty:
1 significant figure suggests a relative uncertainty of about 10% to 100%
2 significant figures suggest a relative uncertainty of about 1% to 10%
3 significant figures suggest a relative uncertainty of about 0.1% to 1%

To understand this connection more clearly, consider a value with 2 significant figures, like 99, which suggests an uncertainty of ±1, or a relative uncertainty of ±1/99 = ±1%. (Actually some people might argue that the implied uncertainty in 99 is ±0.5, since the range of values that would round to 99 is 98.5 to 99.4. But since the uncertainty here is only a rough estimate, there is not much point arguing about the factor of two.) The smallest 2-significant-figure number, 10, also suggests an uncertainty of ±1, which in this case is a relative uncertainty of ±1/10 = ±10%. The ranges for other numbers of significant figures can be reasoned in a similar manner.

Use of Significant Figures for Simple Propagation of Uncertainty

By following a few simple rules, significant figures can be used to find the appropriate precision for a calculated result for the four most basic math functions, all without the use of complicated formulas for propagating uncertainties.

For multiplication and division, the number of significant figures that are reliably known in a product or quotient is the same as the smallest number of significant figures in any of the original numbers.

Example:

6.6 × 7328.7 = 48369.42 ≈ 48 × 10³
(2 significant figures) × (5 significant figures) → (2 significant figures)

For addition and subtraction, the result should be rounded off to the last decimal place reported for the least precise number.

Examples:

223.64 + 54 = 278
5560.5 + 0.008 = 5560.5

If a calculated number is to be used in further calculations, it is good practice to keep one extra digit to reduce rounding errors that may accumulate. Then the final answer should be rounded according to the above guidelines.
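If you want to apply these rounding rules programmatically, the sketch below uses a hypothetical helper, round_sig, to trim a result to a chosen number of significant figures; it is not a Python built-in:

```python
# Sketch: round a calculated result to n significant figures, as in
# the multiplication example above. round_sig is a hypothetical helper.

def round_sig(x, n):
    """Round x to n significant figures via scientific notation."""
    return float(f"{x:.{n - 1}e}")

product = 6.6 * 7328.7        # calculator display: 48369.42
print(round_sig(product, 2))  # 48000.0, i.e. 48 x 10^3 (2 sig figs)
```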

Uncertainty, Significant Figures, and Rounding

For the same reason that it is dishonest to report a result with more significant figures than are reliably known, the uncertainty value should also not be reported with excessive precision. For example, it would be unreasonable for a student to report a result like:

( 38 )

measured density = 8.93 ± 0.475328 g/cm³ WRONG!

The uncertainty in the measurement cannot possibly be known so precisely! In most experimental work, the confidence in the uncertainty estimate is not much better than about ±50% because of all the various sources of error, none of which can be known exactly. Therefore, uncertainty values should be stated to only one significant figure (or perhaps 2 sig. figs. if the first digit is a 1).

Because experimental uncertainties are inherently imprecise, they should be rounded to one, or at most two, significant figures.

To help give a sense of the amount of confidence that can be placed in the standard deviation, the following table indicates the relative uncertainty associated with the standard deviation for various sample sizes. Note that in order for an uncertainty value to be reported to 3 significant figures, more than 10,000 readings would be required to justify this degree of precision!

N        relative uncertainty of s*
2        71%
3        50%
5        35%
10       24%
100      7%
10,000   0.7%

*The relative uncertainty is given by the approximate formula:

σs/s ≈ 1 / √( 2(N − 1) )
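The table values above follow directly from this approximate formula; a short sketch that regenerates them:

```python
import math

# Sketch: relative uncertainty of the standard deviation,
# 1 / sqrt(2(N - 1)), for the sample sizes tabulated above.
for N in (2, 3, 5, 10, 100, 10_000):
    rel = 1 / math.sqrt(2 * (N - 1))
    print(f"N = {N:>6}: {rel:.1%}")
```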

When an explicit uncertainty estimate is made, the uncertainty term indicates how many significant figures should be reported in the measured value (not the other way around!). For example, the uncertainty in the density measurement above is about 0.5 g/cm³, so this tells us that the digit in the tenths place is uncertain, and should be the last one reported. The other digits in the hundredths place and beyond are insignificant, and should not be reported:

measured density = 8.9 ± 0.5 g/cm³

RIGHT!

An experimental value should be rounded to be consistent with the magnitude of its uncertainty. This generally means that the last significant figure in any reported value should be in the same decimal place as the uncertainty.

In most instances, this practice of rounding an experimental result to be consistent with the uncertainty estimate gives the same number of significant figures as the rules discussed earlier for simple propagation of uncertainties for adding, subtracting, multiplying, and dividing.
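Here is a minimal Python sketch of this rounding convention; it rounds the uncertainty to one significant figure and then rounds the value to the same decimal place (the two-significant-figure exception for a leading 1 is ignored for simplicity):

```python
import math

# Sketch: round the uncertainty to 1 significant figure, then round
# the value to match, as in the density example above.

def report(value, uncertainty):
    place = math.floor(math.log10(abs(uncertainty)))  # place of 1st digit
    u = round(uncertainty, -place)
    v = round(value, -place)
    return f"{v} +/- {u}"

print(report(8.93, 0.475328), "g/cm^3")  # 8.9 +/- 0.5 g/cm^3
```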

Caution: When conducting an experiment, it is important to keep in mind that precision is expensive (both in terms of time and material resources). Do not waste your time trying to obtain a precise result when only a rough estimate is required. The cost increases exponentially with the amount of precision required, so the potential benefit of this precision must be weighed against the extra cost.

Combining and Reporting Uncertainties

In 1993, the International Organization for Standardization (ISO) published the first official worldwide Guide to the Expression of Uncertainty in Measurement. Before this time, uncertainty estimates were evaluated and reported according to different conventions depending on the context of the measurement or the scientific discipline. Here are a few key points from this 100-page guide, which can be found in modified form on the NIST website.

When reporting a measurement, the measured value should be reported along with an estimate of the total combined standard uncertainty Uc of the value. The total uncertainty is found by combining the uncertainty components based on the two types of uncertainty analysis:
  • Type A evaluation of standard uncertainty - method of evaluation of uncertainty by the statistical analysis of a series of observations. This method primarily includes random errors.
  • Type B evaluation of standard uncertainty - method of evaluation of uncertainty by means other than the statistical analysis of a series of observations. This method includes systematic errors and any other uncertainty factors that the experimenter believes are important.

The individual uncertainty components ui should be combined using the law of propagation of uncertainties, commonly called the "root-sum-of-squares" or "RSS" method. When this is done, the combined standard uncertainty should be equivalent to the standard deviation of the result, making this uncertainty value correspond to a 68% confidence interval. If a wider confidence interval is desired, the uncertainty can be multiplied by a coverage factor (usually k = 2 or 3) to provide an uncertainty range that is believed to include the true value with a confidence of 95% (for k = 2) or 99.7% (for k = 3). If a coverage factor is used, there should be a clear explanation of its meaning, so there is no confusion for readers interpreting the significance of the uncertainty value.

You should be aware that the ± uncertainty notation may be used to indicate different confidence intervals, depending on the scientific discipline or context. For example, a public opinion poll may report that the results have a margin of error of ±3%, which means that readers can be 95% confident (not 68% confident) that the reported results are accurate within 3 percentage points. Similarly, a manufacturer's tolerance rating generally assumes a 95% or 99% level of confidence.
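As a concrete illustration, the sketch below combines one Type A and one Type B component by RSS and applies a coverage factor of k = 2; the component values are made up for illustration only:

```python
import math

# Sketch: combine standard uncertainty components by RSS and apply a
# coverage factor. The component values below are made up for illustration.
u_type_a = 0.12  # e.g., standard error from repeated readings
u_type_b = 0.05  # e.g., instrument accuracy from a manufacturer's spec

u_c = math.sqrt(u_type_a**2 + u_type_b**2)  # combined standard uncertainty
k = 2                                       # coverage factor for ~95% confidence
print(f"u_c = {u_c:.2f}, expanded uncertainty U = {k * u_c:.2f}")
```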

Conclusion: "When do measurements agree with each other?"

We now have the resources to answer the fundamental scientific question that was asked at the beginning of this error analysis discussion: "Does my result agree with a theoretical prediction or results from other experiments?" Generally speaking, a measured result agrees with a theoretical prediction if the prediction lies within the range of experimental uncertainty. Similarly, if two measured values have standard uncertainty ranges that overlap, then the measurements are said to be consistent (they agree). If the uncertainty ranges do not overlap, then the measurements are said to be discrepant (they do not agree). However, you should recognize that these overlap criteria can give two opposite answers depending on the evaluation and confidence level of the uncertainty. It would be unethical to arbitrarily inflate the uncertainty range just to make a measurement agree with an expected value. A better procedure would be to discuss the size of the difference between the measured and expected values within the context of the uncertainty, and try to discover the source of the discrepancy if the difference is truly significant. To examine your own data, you are encouraged to use the Measurement Comparison tool available on the lab website. Here are some examples using this graphical analysis tool:

Figure 3

A = 1.2 ± 0.4

B = 1.8 ± 0.4

These measurements agree within their uncertainties, despite the fact that the percent difference between their central values is 40%. However, with half the uncertainty (± 0.2), these same measurements do not agree, since their uncertainties do not overlap. Further investigation would be needed to determine the cause for the discrepancy. Perhaps the uncertainties were underestimated, there may have been a systematic error that was not considered, or there may be a true difference between these values.

Figure 4

An alternative method for determining agreement between values is to calculate the difference between the values divided by their combined standard uncertainty. This ratio gives the number of standard deviations separating the two values. If this ratio is less than 1.0, then it is reasonable to conclude that the values agree. If the ratio is more than 2.0, then it is highly unlikely (less than about 5% probability) that the values are the same.

Example from above with u = 0.4: |1.8 − 1.2| / √(0.4² + 0.4²) = 1.1. Therefore, A and B likely agree.

Example from above with u = 0.2: |1.8 − 1.2| / √(0.2² + 0.2²) = 2.1. Therefore, it is unlikely that A and B agree.
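This agreement test is easy to script; the minimal sketch below reproduces the two ratios quoted above:

```python
import math

# Sketch: number of combined standard uncertainties separating A and B.

def agreement_ratio(a, u_a, b, u_b):
    """|a - b| divided by the combined standard uncertainty."""
    return abs(a - b) / math.sqrt(u_a**2 + u_b**2)

print(f"{agreement_ratio(1.2, 0.4, 1.8, 0.4):.1f}")  # 1.1 -> values likely agree
print(f"{agreement_ratio(1.2, 0.2, 1.8, 0.2):.1f}")  # 2.1 -> unlikely to agree
```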

References

Baird, D.C. Experimentation: An Introduction to Measurement Theory and Experiment Design, 3rd ed. Prentice Hall: Englewood Cliffs, 1995.

Bevington, Phillip and Robinson, D. Data Reduction and Error Analysis for the Physical Sciences, 2nd ed. McGraw-Hill: New York, 1991.

ISO. Guide to the Expression of Uncertainty in Measurement. International Organization for Standardization (ISO) and the International Committee on Weights and Measures (CIPM): Switzerland, 1993.

Lichten, William. Data and Error Analysis, 2nd ed. Prentice Hall: Upper Saddle River, NJ, 1999.

NIST. Essentials of Expressing Measurement Uncertainty. http://physics.nist.gov/cuu/Uncertainty/

Taylor, John. An Introduction to Error Analysis, 2nd ed. University Science Books: Sausalito, 1997.
