Christos Argyropoulos (@ChristosArgyrop) asked me to comment on his post of the same title at the blog Statistical Reflections of a Medical Doctor.
One can judge from experiment, or one can blindly accept authority. To the scientific mind, experimental proof is all important and theory is merely a convenience in description, to be junked when it no longer fits. To the academic mind, authority is everything and facts are junked when they do not fit theory laid down by authority.
From this grist, our friend Argyropoulos bakes the following pita: “[Heinlein] summarises the essence of the differences between Bayesian (scientific mind) and frequentist (academic mind) inference or at least in their application in scientific discourse.”
To this we can only respond, frequentists are people, too (right, JH?).
Heinlein’s shorthand is a false dichotomy: there are more ways to judge than from evidence or blindly accepting authority. One can infer, extrapolate, guess, induce, deduce and so on. And theories can be junked, modified, or they can even be proved true (rare, rare).
But I take his point. Tradition, collegiality, the big C (Consensus), the big G (grants), ego, prestige, hope for promotion, boredom, and politics, politics, politics drive science just as much as, or more than, any passion for uncovering truth.
For objective Bayesians, models are only convenience instruments to summarise and describe possibly multi-dimensional data without having to carry the weight of paper, disks, USB sticks etc containing the raw points. Parameters do the heavy lifting of models and the parametric form of a given model may be derived in a rigorous manner using a number of mathematical procedures (e.g. maximum entropy)…
Now consider the situation of the frequentist mind: even though one can (and most certainly will!) use a hypothesis test (aka reject the null) to falsify the model, the authoritarian can (and most certainly will!) hide behind the frequentist modus operandi and claim that only an unlikely body of data was obtained, not that an inadequate model was utilized.
Yes to this last point; double-yes with flags waving. The vast, immeasurable multitude of models is never tested. Talk about taking things on faith! Entire fields look to software as the ancients used to consult oracles. If the chicken guts are spotted, i.e. the p-value is wee, the theory is true. Almost no one ever checks the model on data never yet seen. Models are taken “as is” and trusted implicitly.
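The distinction can be sketched in a few lines. This is a toy illustration on invented data (not anything from the post): a regression is fit, the software dutifully reports a wee in-sample p-value, and then, the step almost no one takes, the fitted model is checked against held-out data it has never yet seen.

```python
# A minimal sketch, on invented data, of the difference between an
# in-sample "wee p-value" and a check on data never yet seen.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)

# Hypothetical data: a modest linear signal plus noise.
x = rng.uniform(0.0, 1.0, 200)
y = 1.5 * x + rng.normal(0.0, 1.0, 200)

# Fit on the first half; hold the second half back as "never yet seen".
x_fit, y_fit = x[:100], y[:100]
x_new, y_new = x[100:], y[100:]

res = linregress(x_fit, y_fit)

# The oracle's answer: the in-sample p-value for the slope.
print(f"in-sample p-value: {res.pvalue:.4f}")

# The scientific answer: how well does the fitted model predict fresh data?
pred_new = res.intercept + res.slope * x_new
mse_new = np.mean((y_new - pred_new) ** 2)
print(f"out-of-sample mean squared error: {mse_new:.3f}")
```

The p-value says only that the fitted data would be unlikely were the slope zero; the out-of-sample error is what measures whether the model is any good at its actual job.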
The leading sin of statistics is failing to teach this distinction: between data that are unlikely under a model and a model that is wrong.
I’ll let Argyropoulos have the last word to get the discussion started. (I’m badly distracted by the news of the day.)
[O]ur systematic failure to respond to the financial crisis or even to advance science in the last 3-4 decades can be traced to the dominating influence of academicians over scientists. Rather than systematically evaluating evidence for or against particular models in specific domains, we seem to only judge models/explanations by the authority/status of their proponents, a situation not unlike the one in the 30s when Heinlein wrote the aforementioned piece.