How Many Digits Do You Report? A Surprising Answer

Question from Anon (with added paragraphifications):

Message: Dear Dr Briggs, I am a layman in statistics, who has tried to avoid this kind of sport as much as possible, so I would be very grateful if you could help me to understand the following issue: I have a discussion about reporting the number of significant digits of mean values, calculated on the basis of multiple measurements of a given property.

As a biochemist, my take on this is very simplistic: if a device has a resolution of say 1 digit, mean values are reported with a significance of at best 2 digits.

For instance: if volumes are measured on a device having a 1 ml resolution (precision), mean values are reported to at best one decimal place, e.g. 1.3 ml.

On the internet, however, the common opinion is that resolution is increased by repeating a measurement many (infinitely many) times. Example: sea level is measured with a resolution of 1 cm, but by taking many measurements, it is claimed that one can get a mean value with a precision of, say, 1 mm.

My (simplistic) take on this is that a mean value is a mathematical artefact per se and as such does not necessarily have a meaning in the real world.

For instance: in the Netherlands the average number of children in elementary school classes is 30.66 (SD: 1.47), yet there are no classes with this exact number of children (as far as I am aware). I would be much obliged if you would explain to me (as if I am a labrador) what’s wrong with my line of reasoning, and what then the sense is of reporting mean values etc. with precisions that exceed the instrument’s physical capabilities. With kind regards, Anon

Thanks, Anon. Here’s my semi-contrarian take. For the busy: the answer is there is no answer; there is not a single one-size-fits-all answer.

Start with the idea that there is no such thing as probability. Therefore, there are no such things as probability distributions. And if there are no such things as probability and probability distributions, there are no such things as parameters of probability distributions.

That is, none of these things have existence in Reality. They are useful, at times, mathematical tools to help quantify uncertainty. They are epistemological aids. But they do not exist, i.e. they do not have being.

Now any set of measurements of your beaker, each capable of being measured to 1 ml, or to whatever level, will have a mean. That is, add up all measurements, divide by the number of measurements, and that is the mean. Of those measurements. It can be reported to as many digits as you care to write. Infinite, even. 10.1876156161257852715566561871371215118128712121010121012021 ml. Or whatever.

That is the mean of those measurements. By definition. This many-digit number is also verifiable, in the sense that you can go back and check your calculations and see if you made any mistake. All assuming the measurements are the measurements. I mean, the measurements might not be accurate representations of the contents of those beakers, due to error, bias, or whatever. But they are the measurements you used in calculating the mean.

Well, so far we’ve said nothing (though it took many words to say this nothing). Except that if you take an average, it is the average of those numbers. Indeed, if you do not report all the digits in that average, you are cheating, in a sense. You have shaved away information. If the mean was 10.1876156161257852715566561871371215118128712121010121012021 ml, and you report only 10.2 ml, then you have said what was not so.
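
To make that concrete, here is a minimal sketch. The readings are invented for illustration; the only point is that the calculated mean is exact arithmetic on the recorded numbers, and rounding it shaves information away:

```python
# Minimal sketch: the calculated mean is exact arithmetic on the recorded
# numbers (invented here for illustration); rounding it discards information.
from fractions import Fraction

measurements = ["10.1", "10.3", "10.2", "10.2", "10.1"]   # hypothetical readings, ml

exact_mean = sum(Fraction(m) for m in measurements) / len(measurements)
print(exact_mean)                   # 509/50: the mean of *those* numbers, exactly
print(float(exact_mean))            # 10.18
print(round(float(exact_mean), 1))  # 10.2 -- digits have been shaved away
```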

We’re done, really.

Unless we want to say something about future (or unknown) measurements. Then we can use those old measurements to inform our uncertainty in future measurements.

Any set of future measurements will have a mean, just like the old set. The one big assumption we must make, if we want to use the old measurements, is that future measurements will have the same causes in the same proportions as the old set. (I’ll stop here today with just saying that on that large subject.)

If we believe in the similarity of causes, we might model our uncertainty in that new mean using the old measurements. We can also, of course, model our uncertainty in the new measurement values, or of any function of them (the mean is one among an infinity of functions).

So then. Given the old data and old mean, what is the probability the new mean will be 10.1876156161257852715566561871371215118128712121010121012021 ml? Can’t answer, because it depends on the model you use, naturally. Though we might guess that the probability of 10.1876156161257852715566561871371215118128712121010121012022 ml (note the last digit) is not too different. And very likely, again depending on the model, the probability of one or either is weer than wee. Mighty small numbers. Negligibly small, I would guess, for almost any decision you care to make.

And it is the decision you make with your model-slash-prediction that drives everything.

For ease, let’s call our old calculated mean m. The probability the new one will be exactly equal to m is probably exceedingly small (depending on the model, of course). The probability it is m +/- 0.000000000000000000000000000000000000000000000000000000001 ml (a number an order of magnitude more precise than m; i.e. one less zero) is also likely negligibly small.

And so on up to some point. Maybe that point is +/- 0.01 ml, or maybe it is +/- 0.1 ml, or it could even be +/- 1 ml, such that we are, say, 90% sure the new mean will be in the old mean +/- some window. The window depends on the decision you would make. If you would not do anything differently for a new mean of (say) 10.18 ml or 10.13 ml or 10.25 ml, then maybe buckets of 0.25 ml are fine. Then you’d report to the nearest 0.25 ml. Or maybe you’re in some new wild physics experiment of the disappearingly small, and then you want that tight, tight window.
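
Here is a hedged sketch of that decision logic. Everything in it is an assumption made for illustration: a normal model for uncertainty in the new mean, an old mean of 10.18 ml, and a made-up spread. The question it answers is how often the new mean lands inside windows of various widths:

```python
# Sketch only: a made-up normal model for uncertainty in the mean of n new
# measurements, used to ask how wide a reporting window the decision needs.
import numpy as np

rng = np.random.default_rng(42)
old_mean, assumed_sd, n_new = 10.18, 0.12, 20          # all assumed, not data

future_means = rng.normal(old_mean, assumed_sd / np.sqrt(n_new), size=100_000)

for window in (0.001, 0.01, 0.1, 0.25, 1.0):
    p = np.mean(np.abs(future_means - old_mean) <= window)
    print(f"P(new mean within +/- {window} ml) ~ {p:.3f}")
# If your decision is the same for anything within +/- 0.25 ml, reporting to
# the nearest 0.25 ml loses nothing that matters; a physicist may need more.
```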

Of course, you cannot verify the actual future observations closer than +/- 1 ml, but you can verify the mean to greater accuracy. The calculated mean, I mean (a small pun). You can’t verify the actual contents mean to closer than 1 ml, at least not with just the measurements you take (you may be able to verify it if other things downstream in the causal chain from the contents mean can be measured). So if that’s your only goal, then there’s no reason to report other than +/- 1 ml. If that’s too confusing, ignore it for now.

And why 90%? Why indeed? Why not 80%? Or 99%? There is no window confidence number that should be picked, except the one that matches the decision you will make, and the consequences you face based on that decision. Not that I will make. That you will make. (This is analogous with betting.)

The whole point of this is that there is no answer to your question. That is, there is no one correct answer. The answer is: it depends.

It’s not only your particular data or model, it’s all of them. Everywhere and every time.

Bonus! It’s pretty easy to do math on this in some standard problems. Maybe we do this some day. But you try it first.
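
As a teaser, here is one standard version of that math, under assumptions stated plainly: treat the measurements as draws from a normal process with the old sample mean m and sample standard deviation s, treat s as known, and then the mean of n new measurements has spread about s/sqrt(n):

```python
# One standard-problem sketch (big assumptions): measurements behave like a
# normal process with the old sample mean m and sample sd s, s treated as
# known. Then the mean of n new measurements has spread ~ s/sqrt(n).
from math import sqrt

m, s = 10.18, 0.12      # assumed old sample mean and sd, in ml
z90 = 1.645             # central 90% for a normal model

for n in (5, 20, 100):
    half = z90 * s / sqrt(n)
    print(f"n = {n:3d}: 90% window ~ {m:.2f} +/- {half:.3f} ml")
# The window shrinks as n grows, but how many digits you *report* is still
# set by the decision you will make with the number, not by the arithmetic.
```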

Buy my new book and learn to argue against the regime: Everything You Believe Is Wrong.


30 Comments

  1. As I recall from engineering classes lo these many decades ago, your precision is no better than that of your least accurate measurement, and can have no more significant digits than the measurement with the fewest. This is why some measurements have leading or trailing zeroes.

    Of course, this is for engineering, not esoteric and imaginary artifacts like “climate change”. When you just make stuff up, precision doesn’t matter.

  2. Robin

    Setting precision aside, doesn’t it also depend somewhat on the nature of the data? For example, Anon states that the mean elementary school class size in the Netherlands is 30.66, but the data are of the Natural numbers, not the Real numbers (as would be assumed with laboratory measurements). So whoever is calculating this value of 30.66 is applying a continuous probability model to natural number data. Isn’t this just wrong?

    And then we have certain data that are assumed to be real numbers, but when precision is considered they actually become discrete numbers and in many cases cannot drop below zero (in the case of mass, length etc). So in the real world, there is no such thing as a real numbering system, but there is such a thing as Natural numbers and Discrete data.

    @Briggs: “You have shaved away information” Excellent take away and excellent article. Thank you Briggs.

  3. C.R. Dickson

    A long time ago, when science still existed, students used to learn about numbers and their properties in public schools. Using the basics, scientists (who are almost extinct now) developed rules for using numbers so their research work could be compared to the work of other scientists. I have listed this reference in a comment here before:
    https://crdickson.substack.com/p/numbers-in-science-what-every-journalist
    These rules are nothing special. They are somewhat like those for driving in traffic on roads and highways (stoplights, speed limits, etc.).

    It is the politicians and lawyers who intentionally create ambiguity in order to “win” their “debate.” A relatively recent paper in Surveys in Geophysics reviewed the accuracy of satellite methods used to determine global mean sea levels, and it reports (in its Fig. 3) a trend line with a negative slope of 0.0208 mm/year. Imagine that, climate practitioners can “see” bacteria by spying on the ocean surface with satellites, google earth, and a laptop.

    Notes: 1) The Survey of Geophysics paper is pay to read. It appears (don’t know for certain) that Springer has successfully blocked Sci Hub. Locking up knowledge is another sign that science is almost completely dead. 2) Also, the glassware pieces in the photo are not connected as they should be for real use. That photo is like the random incorrect equations displayed on many photos supposedly representing classroom chalkboards.

  4. Kip Hansen

    Briggs ==> If Anon had asked a Pragmatist (instead of a statistician) he would have received both the same and quite a different answer.

    “The Mean is The Mean.” True, but trivial.

    If we are talking science — the study of real physical things — then the answer can and should be quite different. Mathematical Means are often supplied by The Science as if they were actual physically meaningful results. However, with many types of properties that can and have been measured, they are simply “abstractions about abstractions” and have neither physical reality nor physical meaning.

    Thus, “too many numerals to the right of the decimal point” becomes an absurdity about an absurdity.

  5. Two thoughts:

    1) I routinely include problems in my math classes that ask students “how many of such and such would you need”. Example: How many buses? Students quickly understand that the mathematical solution might be 2.76 buses but that the real answer is 3 buses.

    2) To Robin, 30.66 is fine because it is an average. No one is claiming that any classroom has .66 of a child. This can be useful, for example “replacement birth rate”. If RBR is 2.3 children per couple it would make no sense to say “can’t have 0.3 children, therefore replacement rate is 2”.

  6. C.R.Dickson

    For Upstate:
    Yes, that is the one. I used an on-line version in Nov 2016 in my Substack essay, but it went into print in the journal in 2017. Figure 3 is the figure of interest. Also, Sci Hub does show the paper, so there’s still hope.

    Ablain, M., J. F. Legeais, P. Prandi, M. Marcos, L. Fenoglio-Marc, H. B. Dieng, J. Benveniste, and A. Cazenave (2017). Satellite Altimetry-Based Sea Level at Global and Regional Scales. Surveys in Geophysics 38:7–31. DOI 10.1007/s10712-016-9389-8. Received 19 May 2016; accepted 8 October 2016; published online 16 November 2016. © Springer Science+Business Media Dordrecht 2016.

  7. vince

    “When you just make stuff up, precision doesn’t matter.”

    “They are playing a game. They are playing at not playing a game. If I show them I see they are, I shall break the rules and they will punish me. I must play their game, of not seeing I see the game.”

    ~ R. D. Laing

  8. Hmm, way back in the dark ages, our admonition about reporting calculated properties depended on both the precision and the resolution. You couldn’t measure something finer than your resolution, which then had to limit your precision. Then you had to propagate your uncertainty or your errors or both if you wanted to do further analysis using your previous measurements. You also had to report *all* of your measurements and had to justify not including some if you had errors, or admonish your readers/peers to not take their conclusions too far. “Don’t make vast conclusions from half-vast data.”

  9. Huh? Sorry, there’s a right answer.

    If you are reporting an observable – e.g. the average amount of liquid in N beakers – you cannot report beyond the least accurate reading.

    If you are reporting a calculation without a physical reference you can report to any precision you want, but you must recognize that the result applies to the numbers, not anything measured and denoted by those numbers.

    e.g. the average of 1.1, 1.3, 1.2, and 1.5 is 1.275, but if those numbers are estimates of ml in beakers, the average beaker will have had about 1.3 ml in it.

  10. Alan Tomalty

    William, the conclusion that I get from your treatise on measurement is that you can’t lower the measurement uncertainty simply by taking an infinite number of measurements and averaging out the results. Of course this is not the same as trying to guess the ratio of black balls to white balls in an urn by taking samples of a certain size and averaging the results from an infinite number of samples. I use the word “infinite” instead of the word “large”. However, I digress; now back to the measurement uncertainty. The ~1000 page textbook called “Microwave Radar and Radiometric Remote Sensing” by Ulaby and Long states on page 183: “To reduce the uncertainty of a radar measurement of the backscatter from a terrain surface, it is necessary to average many independent samples together.” I AM ASSUMING THAT THEY ARE WRONG AND DO NOT UNDERSTAND THE MEANING OF UNCERTAINTY. Am I correct?

  11. JohnK

    In the hopes that the following will help somebody, may I lay this out slightly differently?

    0000. Using our measurement device, we are making, and getting, VARIOUS values, and WONDERING which value is “better.” THAT’S the question we are really asking.

    000. What does “better” mean? A very good question. It will turn out that we most often will not be able to say which measurement values are “more accurate,” only that, given the measurement device we actually have, some measurement values are “more likely to occur in the future.” Sometimes, we can roughly equate the two, but sometimes, we have to be very careful to note, and to know, the difference.

    00. We make an assumption that our measurement stands for something real. What we actually see is a number on a digital readout, a pointer on a dial, etc. But suppose we want to measure “phlogiston.” Will our readout — ever — measure that?

    0. VERY important: We’ve already proved to ourselves that we CANNOT find the “one true measurement,” the “real value,” with our existing set-up — since we’re getting various values. We don’t have a “one true measurement machine” available. We just have a machine that gives various values — that’s it. In physicist-speak: that’s a non-trivial point.

    1. This is where Matt’s post enters in. His first point is about means vs. our measurements. In a moment, we will see that this is about calculation precision vs. measurement precision.

    2. A mean is a calculation, not one of our measurements. Our measurements have the precision that they do — and no more.

    3. But because the mean is a calculation and not one of our measurements, it can have a “precision” as fine as we please. Suppose our measurements are 1.2 ml, 1.1 ml, etc. That is as much measurement precision as we get. This measurement precision must be kept eternally separate from calculation precision.

    4. Yes, it is crazy to imagine that some magic will manufacture out of thin air something more precise than the precision of our real measurements. When we imagine this (and many do), we are implicitly confusing calculation precision (which we can derive to any decimal place we please) with measurement precision. The two are entirely separate, and it is very often fatal to confuse the two.

    5. Here the question we ought to be asking (but hardly anybody does), is WHY are we calculating the mean of our measurements? For instance, why not calculate the 15th root of the sum of every third value (etc.)?

    6. And really: why do we calculate anything at all? After all, the only (semi) real things we have are our actual measurements, with the precision that they have. There they are: QED.

    7. Roughly speaking, the reason is the nature of density functions. Density functions are completely abstract, completely numberless. First, we must CHOOSE a particular density function (there’s many different ones). Then we need at least one actual numerical value for the density function to populate and crank out the numbers that we call a “probability distribution.”

    8. By custom, the value that we choose represents our “best guess.” We could have shut our eyes and picked one of our actual measurements; the density function doesn’t care. By further custom, we calculate the mean of our measurements, and (roughly speaking) feed that calculated number in to our chosen density function, which then cranks out numbers. (This is now invisibly done by our “statistical” software, so it looks like “we” aren’t doing anything, “choosing” anything. BUT WE ARE.)

    9. But how can we say that the calculated mean is our “best guess?” Even more fundamentally, how can we say that the mean is more accurate than any particular measurement we have made?

    10. We can’t say that. It’s not AUTOMATICALLY true that the mean is a better guess than a particular measurement we have already made. One of our particular measurements might be dead on — but which one that is, we can’t know.

    11. Thus the context is our UNCERTAINTY (seems to me, somebody wrote a book with that title) about which of our measurements is more reliable, and which are less reliable. Note that I wrote, “more reliable,” rather than “more accurate.” This was deliberate; we’ll get to that in a moment.

    12. This gets us to the other part of Matt’s post. We choose a model, with our assumptions and our collected evidence, TO PREDICT WHICH FUTURE MEASUREMENT VALUES ARE MORE LIKELY TO OCCUR.

    13. NOTA BENE: Regarding our measurement values, I have deliberately not used the words, “more accurate,” only “more likely to occur in the future.” We will not open the can of worms whose label says, “THE TRUE VALUE IS INSIDE HERE.” Our measurement devices are what they are; we can quantify our uncertainty in what values they might give in the future. AND THAT’S ALL. To repeat what was written in #0, above: that is a non-trivial point.

    SUMMARY. Measurement precision is completely separate from calculation precision. We calculate means, first because they are a number, and density functions need numbers to become probability distributions, and second because a mean’s numerical value can help to quantify our prediction as to which future measurement values are more likely to occur, given our actual measurement devices and our other evidence and assumptions. Given the real world, it’s often wiser to talk about measurement values that are “more reliable,” rather than “more accurate,” lest we commence a vain search for the “one true value.”

  12. Briggs

    Alan,

    Good question. U&L’s problem is different. Unlike Anon, they are measuring a signal that has error. We assumed Anon was measuring without error.

    U&L are probably saying they measure Y = X + N. They want X, but see Y, which is X plus “noise”, or other things added to X. They likely assume, based on experience or hope or whatever, that the probability N > 0 is 50%. If that’s so, then if you take multiple measurements Y_i, and take the mean of Y, you get mean(Y) = X + mean(N). That assumes X doesn’t change through the measurements. The hope is that mean(N) = 0, or thereabouts.

    You can go on to quantify your uncertainty in N formally; then you can measure how far mean(Y) is likely to be from X (see the sketch at the end of this comment).

    It then turns into Anon’s problem for reporting digits of that uncertainty in X (which I see some insist has a one-size-fits-all answer, which I tried to show is false).

    Hope that helps.
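
    Here is a toy sketch of that set-up; the signal, the noise spread, and the bias are all made-up numbers, only there to show that averaging helps when mean(N) really is near zero, and does not help with a bias:

```python
# Toy sketch of Y = X + N, all numbers assumed: with zero-mean noise, mean(Y)
# settles near X as n grows; with an unnoticed bias it settles near X + bias.
import numpy as np

rng = np.random.default_rng(1)
X = 3.7                                     # the signal (unknown in practice)

for n in (10, 100, 10_000):
    zero_mean_noise = rng.normal(0.0, 0.5, n)
    biased_noise    = rng.normal(0.2, 0.5, n)   # same spread, hidden bias
    print(n, round(np.mean(X + zero_mean_noise), 3),
             round(np.mean(X + biased_noise), 3))
```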

  13. Briggs

    JohnK,

    Endorsed, except to note that we needn’t use density functions; any probability model will do, such as one with a huge number of possible values but still discrete and finite.

    Maybe we should do the example after all.

  14. Alan Tomalty

    Since each measurement has the same uncertainty and the true value could be anywhere within that uncertainty range, taking a mean of an infinite number of measurements simply means that you have divided the uncertainty range in two, with the mean in the middle of that range. That mean is no more likely to be the true mean, or even close to the true mean, than any other point within the range. Uncertainty cannot be lessened by more sampling if we are talking about measurement error (which is different from the balls-in-the-urn example that I mentioned). Hoping that the probability of N > 0 is 50% and hoping that mean(N) = 0 are just that: HOPE.

  15. Alan Tomalty

    Here is a perfect example: if you take the measurements in a boat and the waves move your hand in the same direction every time, there will be a constant bias that you may not be aware of. This constant bias will definitely show up in your infinite number of measurements. If you take the mean of those measurements and assume that is the real precise measurement, you are simply assuming (because most of the measurements coagulate around that mean) that the bias doesn’t exist, when in fact it does exist but you aren’t aware of it.

  16. Cloudbuster

    Heresolong: “No one is claiming that any classroom has .66 of a child. ”

    Well, leaving out antebellum US classrooms with black slave students, should such a thing have existed. Or double amputees (how many parts do you have to be missing to qualify as .66?). 🙂

  17. Alan Tomalty

    Another example: let us say that you are running climate model simulations for projected temperature 50 years from now, and you run 10,000 of them on the same supercomputer, only varying the timestepping in the calculations. You will get 10,000 answers, but an average of them will be no closer to the truth.

  18. Alan Tomalty

    If there is no bias in your measuring device, then an average of an infinite number of measurements will approximate the true value. However, you have no way of knowing whether there is bias or not; therefore you cannot decrease uncertainty by simply ramping up the number of measurements.

  19. Milton Hathaway

    As an engineer heavily involved in measurement devices, this topic really strikes a nerve.

    When a measurement is made, the measured result can be written as the true value, call it M, plus an error term: M + e. The error term already includes some digitization error, but typically it’s negligible (i.e., there are plenty of digits). Some consider it best practice to further reduce the number of digits to a bare minimum, for various reasons. This adds in an additional error term, call it E, making our measured result M + e + E. I have run across many cases where E is as large or even larger than e, arbitrarily crippling the measurement accuracy.

    Sometimes quite a bit is known about the error. For example, it might be quite repeatable from measurement to measurement, if the measurements are made closely in time and the ambient temperature is stable (for example). When this is true, it is often possible to track small changes in a measured value to make relative measurements, assuming the instrument designer hasn’t thrown away those “inaccurate” digits that are needed to reveal the small changes.

    Sometimes the error is known to be uncorrelated (or can be forced to be uncorrelated) with the measurement, and it is known to be zero mean (or can be forced to be zero mean). In such cases, measurement accuracy can be improved, sometimes dramatically, by combining many measurements by averaging or other types of post-processing. Reducing the number of digits in measurement results can hinder or preclude such post-processing.

    Sometimes a measured result is used in a real-time feedback control system. Reducing the number of digits can cause the system to have undesirable limit-cycles with degraded performance.

    The ‘consumer’ of the measurement results is an important consideration. If the consumer is a human watching numbers on a display, there is little value in digits that change rapidly; if the consumer is a computer, generally the more digits the better, at least until data transmission bandwidth becomes an impediment.

    In my experience, the software engineers who design instrumentation firmware tend to love these rules-of-thumb for mindlessly throwing out digits, as fewer digits make their life easier in a variety of ways. As a user of these measurement results, too few digits has been a recurring problem, too many digits has seldom if ever been a problem. I have used far too many otherwise high-quality measurement devices crippled needlessly by someone’s sense of esthetics on the number of displayed digits.

    (Wow, that really turned into a rant. Like I said, sore spot.)

  20. Milton Hathaway

    Alan Tomalty: ” . . . you have no way of knowing whether there is bias or not, therefore you cannot decrease uncertainty by simply ramping up the number of measurements.”

    True enough, but often you know that any bias is short-term stable. For some measurements, you can measure a value, call it A, then invert the measurement and measure it again, -A, then compute (A - (-A))/2, and the bias averages out (see the sketch at the end of this comment). Sometimes inverting the measurement without inverting the bias is trivial, sometimes it’s challenging (or impossible).

    This is a simple concept that every engineer picks up, so it’s not patentable. However, apply this simple concept to a measurement that no one has applied it to before, and suddenly it’s patentable. There are many, many, many patents that are essentially this simple concept at their core – I myself am listed as a co-inventor on one. Literally all I did was listen to a new engineering grad describe his problem and say “why don’t you invert it and measure it again” and BAM, I get a thousand bucks as a co-inventor. Right place, right time. That’s another pet peeve of mine, that the US patent system allows so many obvious ideas to be patented, but that’s a rant for another day.

    Btw, it’s not just bias that prevents averaging from improving a measurement. The error must also be uncorrelated with any aspect of the measurement. Again, if this isn’t the case, there are sometimes tricks that can force it. One of the bizarre ideas I like to think about is a huge coordinated release of CO2 into the atmosphere on random days all over the world, then averaging some measurements, in an attempt to get a measure of the climate sensitivity to CO2. Totally impractical, but it’s fun describing it to a true-believer and watching their horrified reaction.
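
    A toy sketch of the invert-and-average trick, with the true value, bias, and noise all assumed for illustration:

```python
# Sketch of the invert-and-average trick, all numbers assumed: a short-term
# stable bias b adds to both readings but cancels in (reading1 - reading2)/2.
import numpy as np

rng = np.random.default_rng(7)
A, b = 2.5, 0.3                                # true value and stable bias

reading_plus  =  A + b + rng.normal(0, 0.05)   # measure +A
reading_minus = -A + b + rng.normal(0, 0.05)   # invert the set-up, measure -A

print((reading_plus - reading_minus) / 2)      # ~2.5: bias gone, noise remains
```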

  21. Robin

    @Milton: The practice of land surveying has routinely relied upon some of the errors you’ve mentioned and the fact that they will cancel each other out. Of course this can also be thwarted by some of Mandelbrot’s discoveries wrt fractal dimension (but that is another subject).

    Briggs indicated that Anon’s question referred to a situation where Natural number data is being mapped to Real numbers; where there is no error in the underlying data (assuming the individual counts are correct). Where the problem becomes Real number data being mapped to Real numbers (where there is error in the underlying data) I gather it changes. I hope Briggs prepares more information on this very interesting subject.

    Among other activities, I conduct computer simulations of processes related to Civil Engineering problems. Prediction types of things like “on this major project, simulate the entire construction process and identify what and where the risks are.” I wrote the software when I noted over and over again in developing markets the application of designs and specifications that were leading to construction failure or arbitration. Typically the contractor gets hammered when the real culprit is a terrible engineering design and spec of the consultant.

    The key takeaway for me (that I agree with fully) is Briggs’ statement “You have shaved away information”. This is very true for simulations. You have to keep as much information in the system as possible and when the final result comes forth then consider the correct precision or significant digits for reporting purposes.

    Personally, for Anon’s point, i.e. 30.66 students, I agree with him. My view is that, for the purposes of determining classroom student numbers, maybe a median value, rather than the arithmetic mean, would have been more useful. But that would have involved an examination of each classroom; whereas calculating the mean could have been done in 5 minutes from someone’s desk: “Let’s see, we have X students and Y classrooms, which gives us a mean of 30.66.” However, I also fully accept @Heresolong’s comment “If RBR is 2.3 children per couple it would make no sense to say ‘can’t have 0.3 children, therefore replacement rate is 2’.” This is undoubtedly true from an epidemiological perspective, but what do you tell the individual family unit should this question arise? Should they have 2 or 3 children?

  22. JH

    The following statements give different conclusions/perceptions on the distribution of the number of children in an elementary school class.

    A: The average number of children in elementary school classes is 31 (SD: 1).
    B: The average number of children in elementary school classes is 30.66 (SD: 1.47).

    For example, Chebyshev’s theorem allows one to conclude from A that at least 75% of classes have 29 to 33 students; and B 27 (theoretical calculations result in 27.72) to 34 (33.60) students. Here, I would report using integers since the number of students in a class must be an integer. However, statistics such as average, standard deviation, and skewness are characteristics of distributions and not meant to be a realized/realizable value of a variable, e.g., the number of children in a classroom.

  23. JH

    * HTML tag error… let me try again.*

    The following statements give different conclusions/perceptions on the distribution of the number of children in an elementary school class.

    A: The average number of children in elementary school classes is 31 (SD: 1).
    B: The average number of children in elementary school classes is 30.66 (SD: 1.47).

    For example, Chebyshev’s theorem allows one to conclude from A that at least 75% of classes have 29 to 33 students; and B 27 (theoretical calculations result in 27.72) to 34 (33.60) students. Here, I would report using integers since the number of students in a class must be an integer.

    However, statistics such as average, standard deviation, and skewness are characteristics of distributions and are not meant to be a realized/realizable value of a variable, e.g., the number of children in a classroom.
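
    A quick check of those bounds (k = 2 standard deviations gives the “at least 75%” figure), with the two reported means and SDs plugged in:

```python
# Checking the Chebyshev bounds quoted above: at least 1 - 1/k**2 of classes
# lie within k standard deviations of the mean; k = 2 gives "at least 75%".
for label, mean, sd in [("A", 31.0, 1.0), ("B", 30.66, 1.47)]:
    k = 2
    lo, hi = mean - k * sd, mean + k * sd
    print(f"{label}: at least {1 - 1/k**2:.0%} of classes in [{lo:.2f}, {hi:.2f}]")
# A: [29.00, 33.00]; B: [27.72, 33.60], reported above as 27 to 34 in integers.
```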

  24. Briggs

    The most interesting thing about Chebyshev was that his first name was Pafnuty. “Hi. My name is Pafnuty.”

    His model (the inequality) is a fine thing. But it is, of course, just a model. Because it is an inequality about other models (of a certain form). It’s a good model, though, for many things.

  25. JH

    Briggs,

    How English translation can assassinate the beauty of one’s name! I know.

    Chebyshev’s theorem is applicable as long as the mean and standard deviation are given.

    “There is no such thing as probability.” Do you mean that there is no such material thing? What is the point of such a statement?

    I have come across some papers inquiring into the existence of theoretical probability distributions a while ago. What’s known is that for de Finetti, probability exists only subjectively within the minds of individuals. That is, it doesn’t exist outside of our mind. (Hence, he is said to be anti-realist. Are you?)

  26. Briggs

    JH,

    Yes. Probability is not a material or physical property of anything. It does not exist. Just like logical statements. This is the Realist position.

    But I invite all those who say it is real, a real property, to demonstrate its existence, as we can, say, electromagnetic energy.

  27. spaceranger

    When I began college we were still using slide rules. Really grew an appreciation for orders of magnitude. By the time I graduated, calculators had moved in. When I was a graduate assistant, my students were turning in numbers with 7 decimal places based on data they gathered with a wooden meter stick.

    But then I moved on to the aerospace industry, and I discovered the real use for decimal places: They are there to make your wild-assed guesstimates look as if you arrived at them analytically.

  28. Anon

    Thanks all for sharing your thoughts, I understand things better now (I think/hope).
    Summing it up:
    Precision of a measured property cannot be improved by repeating the measurement (hurray!). What can be improved by taking more measurements, though, is the precision of the (calculated) characteristics (mean, SD) of the resulting distribution. As many respondents pointed out, however, these have neither physical reality nor physical meaning. (The stale joke of the statistician who drowned crossing a river comes to mind.)
    What bugs me is that mean, SD, and what-have-you are expressed in the same physical units (grams, meters) as the underlying measured variables. So what to make of that? In this regard the comment of Robin, questioning the application of a continuous model to natural numbers, is spot on. Indeed, division (calculating a mean) is not defined as an operator on the set of natural numbers (plus zero). And measurements are essentially about natural numbers, at least practically (the instrumental resolution), but maybe even fundamentally. But that is a discussion for another day.

  29. “Yes. Probability is not a material or physical property of anything. It does not exist. Just like logical statements. This is the Realist position.

    But I invite all those who say it is real, a real property, to demonstrate its existence, as we can, say, electromagnetic energy.”

    God first.

    Justin
