If “standard error” and “confidence intervals” measure the precision of a measurement, then what are the measures of accuracy?

In the book “Biostatistics for Dummies”, on page 40, I read:

The standard error (abbreviated SE) is one way to indicate how precise your
estimate or measurement of something is.

and

Confidence intervals provide another way to indicate the precision of an
estimate or measurement of something.

But nothing is written about how to indicate the accuracy of a measurement.

Question: How do I indicate how accurate the measurement of something is? Which methods are used for that?


Not to be confused with the accuracy and precision of a binary classification test: https://en.wikipedia.org/wiki/Accuracy_and_precision#In_binary_classification

Answer

Precision can be estimated directly from your data points, but accuracy is related to the experimental design. Suppose I want to find the average height of American males. Given a sample of heights, I can estimate my precision. If my sample is taken from all basketball players, however, my estimate will be biased and inaccurate, and this inaccuracy cannot be identified from the sample itself.
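A minimal simulation can make this concrete. The sketch below assumes purely made-up numbers (a “true” population mean of 175 cm and a basketball-player subgroup averaging about 198 cm, chosen only for illustration): the biased sample yields a small standard error and a narrow confidence interval, i.e., a precise estimate, yet the estimate is far from the true value, and nothing in the sample reveals that.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical values for illustration only: assume the true mean height of
# American males is 175 cm, but we sample only basketball players (~198 cm).
true_mean = 175.0
biased_sample = rng.normal(loc=198.0, scale=7.0, size=100)

est = biased_sample.mean()
se = biased_sample.std(ddof=1) / np.sqrt(len(biased_sample))  # standard error
ci = (est - 1.96 * se, est + 1.96 * se)                       # approx. 95% CI

print(f"Estimate: {est:.1f} cm,  SE: {se:.2f} cm")
print(f"95% CI: ({ci[0]:.1f}, {ci[1]:.1f}) cm")        # narrow interval -> precise
print(f"Error vs. true mean: {est - true_mean:.1f} cm")  # large -> inaccurate
# The SE and CI are computed from the sample alone and say nothing about the
# roughly 23 cm bias introduced by sampling only basketball players.
```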

One way of measuring accuracy is by performing calibration of your measurement platform. By using your platform to measure a known quantity, you can reliably test the accuracy of your method. This could help you find measurement bias, e.g., if your tape measure for the height example was missing an inch, you would recognize that all your calibration samples read an inch too short. It wouldn’t help fix your experimental design problem, though.
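As a rough sketch of that calibration idea (the reference lengths and the “missing inch” offset below are assumptions, not from the original answer): measuring standards of known length with the suspect tape measure exposes a systematic offset in the readings, which is exactly the bias that precision statistics alone would miss.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration run: certified reference lengths measured with a
# tape measure that is "missing an inch" (offset of -2.54 cm), plus small noise.
known_lengths = np.array([50.0, 100.0, 150.0, 200.0])  # cm, known standards
offset = -2.54                                         # assumed systematic bias
readings = known_lengths + offset + rng.normal(0, 0.1, size=known_lengths.size)

errors = readings - known_lengths
print(f"Mean calibration error: {errors.mean():.2f} cm")      # ~ -2.54 -> reads short
print(f"Spread of errors (SD): {errors.std(ddof=1):.2f} cm")  # small -> still precise
```

The mean error estimates the bias (accuracy problem), while the spread of the errors reflects precision; calibration can reveal the former, but, as noted above, it cannot fix a biased sampling design.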

Attribution
Source: Link, Question Author: vasili111, Answer Author: Nuclear Hoagie
