Real-world measured values are rarely exact; there's usually some "plus or minus" in the measuring device, the ability to read analog values, and so on. Say, for instance, that the length of an American football field is measured with a yardstick. Few people would expect the measured result to be 100.00000000... yards. Instead, there will be a loss of precision due to a variety of factors: perhaps the yardstick isn't precisely 1.000... yards long, or the beginning of each measurement doesn't start exactly where the previous measurement ended, or the length measured wasn't exactly parallel to the long axis of the field, and so on.
To account for this error, results of mathematical operations on measurements should be reported with a precision appropriate to the size of the potential error. For instance, when adding two lengths, one paced off a yard at a time and the other measured using a tape measure, the amount of potential imprecision in the paced value makes reporting the sum in sixteenths of an inch a bit ridiculous. Instead, the sum might be reported in yards, plus or minus a foot.
Precision is denoted by the number of significant digits in the number. The more significant digits, the more precise, or certain, the value is. In most cases, scientific notation is the clearest way to denote significant digits (SD):
4321 = 4 x 10^3 when written with one SD, 4321 = 4.3 x 10^3 when written with 2 SD, and 4321 = 4.321 x 10^3 when written with 4 SD.
0.01234 = 1 x 10^-2 when written with one SD, 0.01234 = 1.2 x 10^-2 when written with 2 SD, and 0.01234 = 1.234 x 10^-2 when written with 4 SD.
By default, the precision of a value is assumed to be plus or minus 1 in the last significant digit. So 4 x 10^3 implies that the true value is between 3000 [(4 - 1) x 10^3] and 5000 [(4 + 1) x 10^3]. The value 4.32 x 10^3 implies that the value lies between 4310 [(4.32 - 0.01) x 10^3] and 4330 [(4.32 + 0.01) x 10^3].
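As a sketch of this interpretation (in Python rather than XL, and with function names of my own invention), the implied bounds can be computed directly from the exponent of the leading digit:

```python
import math

def implied_range(value, sd):
    """Return the (low, high) bounds implied by quoting `value`
    to `sd` significant digits: plus or minus 1 in the last
    significant digit."""
    exponent = math.floor(math.log10(abs(value)))  # power of 10 of the leading digit
    ulp = 10.0 ** (exponent - (sd - 1))            # one unit in the last significant digit
    return value - ulp, value + ulp

print(implied_range(4000, 1))  # 4 x 10^3    -> (3000.0, 5000.0)
print(implied_range(4320, 3))  # 4.32 x 10^3 -> (4310.0, 4330.0)
```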
XL has no built-in functions that either determine the number of significant digits or perform calculations that round to a particular number of significant digits. One workaround is to take the base-10 logarithm of the value and use it to locate the position of the last significant digit. Where SD is the number of significant digits desired:
=ROUND(A1, SD - 1 - INT(LOG10(ABS(A1))))
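The same logic can be sketched in Python (a translation of mine, not from the original page). Note that `math.floor` matches XL's `INT`, which truncates toward negative infinity; one caveat is that Python's `round` uses banker's rounding on ties, where XL's `ROUND` rounds halves away from zero:

```python
import math

def round_sd(x, sd):
    """Round x to sd significant digits, mirroring
    =ROUND(A1, SD - 1 - INT(LOG10(ABS(A1))))."""
    if x == 0:
        return 0.0  # LOG10 is undefined at zero; the worksheet formula errors here
    return round(x, sd - 1 - math.floor(math.log10(abs(x))))

print(round_sd(4321, 2))     # 4300
print(round_sd(0.01234, 2))  # 0.012
```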
A shorter, if more obscure, method was provided by Harlan Grove in a newsgroup posting:
=--TEXT(A1,"." & REPT("0", SD) & "E+000")
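The trick is to format the value in scientific notation with the desired number of digits, then coerce the text back to a number. A Python analogue (my own sketch) makes the round trip explicit:

```python
def round_sd_text(x, sd):
    """Round via text, analogous to =--TEXT(A1,"."&REPT("0",SD)&"E+000"):
    format in scientific notation with sd significant digits, then
    convert the string back to a number."""
    return float(f"{x:.{sd - 1}e}")

print(round_sd_text(4321, 2))     # "4.3e+03" -> 4300.0
print(round_sd_text(0.01234, 2))  # "1.2e-02" -> 0.012
```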
Recall that what is displayed in a cell has only the most tenuous relationship to what is stored in the cell. Unfortunately, display formats aren't dynamic. One can obviously use a scientific number format with 2 decimal places, but to display significant digits without scientific notation, a text function is necessary. Here's one way, again provided by Harlan Grove in a newsgroup posting: a text formula that displays 3 significant digits for all values greater than or equal to 0.100.
© Copyright 2001 - 2004 McGimpsey and Associates. Except where noted, all code on this site may be distributed under the GNU GPL. Acknowledgement is appreciated.