This one picture proves why you must distrust every new boast from every scientist. Scientists’ ideas must be assumed false until they are proven true by people other than themselves.
You would not trust your firm’s accountant to audit your books. You hire a disinterested agency. This is not because you suspect your accountant of fraud or embezzlement. It is not because you do not love him. It is because everybody makes mistakes, and the man making them is the one least likely to see them. And because business should be accomplished in a businesslike manner.
Unseen error is why publishers employ copy editors. I have one myself, but he has been held hostage by my enemies for many years (something about unpaid debt to an Emir). That is another story. Publishers do not distrust their authors, or think less of them, when hiring copy editors. But it is a truth of much weight and experience that authors simply cannot see their very own self-created typos, even though they have gone over the text in question dozens, even hundreds, of times.
Now sometimes, and I say nothing about how often, accountants play loosey-goosey with the books. Even if their acts don’t reach illegality in a technical sense, they make some shifty moves to make themselves, or their companies, look better than they are in reality. Or to make the expense account go a little farther.
That is why you hire your own accountants if you are going to invest in a company, or buy it, and you don’t use theirs. This act of outside verification is called, most appropriately, due diligence. Everybody wants to know just how things are and what the actual state of affairs is. This is prudent and sober.
This does not happen in science. And it must.
Look again at the picture. Scientists, likely drunk on power and having swallowed gallons of grant-tainted hubris, really did say that sitting in a crowd without a mask, chewing, laughing, spitting, and farting, was safer than walking alone with a mask.
No sane person believed it at the time these great scientific egos were making this asinine claim. And no reasonable person believes it now. But some did believe it, and they believed it because scientists said it.
Whatever claims scientists (including doctors) had on our assent and trust, built from early years of success and triumph, on the shoulders of giants, as it were, are now gone. Three years of idiot panic and a steady stream of preposterousities (you heard me), coupled with outright lies and obviously ridiculous theories we were made to assent to under penalty of law, have wiped away all basis for trust. Not to mention doctors are now openly killing their patients, in Canada and elsewhere, and many are convinced men are women, and vice versa.
As you and I, dear reader, have discussed many times, our rulers and elite have been trying to juice a panic over global warming, now called “climate change”. You know the theory. A fractional increase in a trace gas, added to the atmosphere in part by man, will cause a “climate crisis”. Everything good, photogenic, and delicious will be crushed, and every evil, biting, stinging, unpalatable thing will flourish, when “climate change” strikes.
When.
Which is why rulers want to control every waking, and every dreaming, moment of your life. To quell the “climate crisis.” Because of the theory that carbon dioxide beyond a certain point, which the earth has experienced before but now must not, will doom us all.
You might therefore be interested to know that the theory behind these monumental claims has never been independently assessed.
May I repeat that? The theory has never been independently assessed.
We are supposed to instead take their word for it. As we had to take their word about masks.
Scientists cannot be allowed to be the judges of their own theories. Not theories which rulers leverage to increase their power and riches. This is not good business. It is not diligent. It is not wise. It asks for—no: it begs for—trouble.
It is having your own accountant audit your books, and being shamed into allowing it because you do not have a degree in accountancy, or because you are a double-entry denier, or because you are a bad person.
Do not blindly trust those who say Trust The Science. As the man once said, trust, but verify.
Subscribe or donate to support this site and its wholly independent host using credit card: click here. Cash App: $WilliamMBriggs. For Zelle, use my email: matt@wmbriggs.com, and please include yours so I know who to thank.
Apologies… I do not like telling my favorite astrologically astounding statistician that he is wrong…
Scientists are never correct. Good scientists know this. We never prove something correct. The best we ever do is fail spectacularly over and over again to prove that the idea is wrong. Every time we fail we try to come up with a test that WILL prove us wrong.
Therein lies the conundrum. YOU MUST do this with all hypotheses. I don’t care if that hypothesis has been elevated to the lofty rank of “LAW”; you keep testing it.
I understand someone saying “That violates the laws of Physics”… We don’t have time to explain to every relative who discovers rare earth magnets that he ain’t gonna make an infinite-energy device…
But we NEVER prove the hypothesis. There are the folks who cling desperately to the Hypotheses… If Planck’s constant changes in the middle of the night, they may go insane. If you understand that nothing is proven, you realize… F(@#$ M@, we have to start the process over again.
@brad tittle, why would someone derive something from a hypothesis which cannot be correct?
Asking for a friend.
As man pushed the boundaries of science, the phrase “dazzle them with bullshit” entered the lexicon. Rube Goldberg thankfully defined the boundaries of scientific comprehension, and the first principle, dazzle and paralyze with fear, operated unmolested for over a century.
Chaeremon –
Because even when something feels right, there is the potential of misinterpretation, or the underlying causes might be due to something else altogether; things the scientist never even dreamed about. And sometimes, discoveries in one field can completely upend our understanding in another.
Also, when you have multiple competing interpretations or hypotheses for a phenomenon, and all of them seem to explain it, yet all of them contradict each other, you still have no idea which is the true explanation, especially when theories are often dogmatically held and routinely tinkered with, surrounded by ad hoc epicycles upon epicycles of glue and bandages.
So, keeping these complexities in mind, all science is usually discovery through a process of elimination. But even that is rife with fault, as theories once thought dead can often resurrect themselves.
Science ain’t easy. True practitioners understand this, grounding themselves in proper philosophy and humility under theology. Then there is the hubris of THE SCIENCE ™, which bets it all on 00 against the house on the spinning roulette wheel of life because they are EXPERTS dammit! Worship and obey them! The ball’s gotta land on that number sometime!
Dr Briggs: I had a thought about copy-editing and typos, and I offer this suggestion in a spirit of helpfulness. Have you ever thought about crowd-sourcing copy-editing to a few of your readers (the expert ones, of course!) prior to publication? It could be a trade: early access to a chapter of the book in exchange for returning a copy-edited version. There would still be incentive for people to buy the book later; to read all the other chapters. “Everything you Believe…” was a good read, especially for all the examples you provided, but it was also rough in places. I thought about sending you my copy marked with red ink, but that seemed like a rude thing to do to someone I don’t know. I realize you didn’t ask for advice, but I guess that is one of the dangers of having a comment board!
I know quite a few people who could look at that “walking vs. sitting” illustration and not laugh. They’re still testing and masking. There’s nothing you can say to such people, and they vote.
Falsifiability sounds good as a way to preserve science, but in reality it causes all sorts of problems. There are many reasons for this.
First, as our host has discussed in previous articles (and alludes to in this one), what counts as falsifiable? Or more to the point, what counts as a “falsification”? Individual results can be dismissed. “That’s just one study!” “It’s within measurement error!” “They didn’t use the same methodology that I did!” Even if there are many results which force a theory to be abandoned, it can just have epicycles added to it until things are once again consistent with the data. Theoretical physics in the last 40 or so years consists of basically nothing but doing this. If an epidemiological model fails, we can simply imagine causes. “The tests have too many false negatives!” “People who are the most likely to get sick don’t want to get tested!” “Many diagnoses for other diseases were really for my favored disease!” And of course this stuff actually does happen; for instance, there have been multiple people who claimed that all “excess” deaths in the last three years were due to COVID (thus getting the death numbers closer to the catastrophic numbers they needed for their theories).
Compounding this problem, we all know that academics are loath to criticize their peers. In a publish-or-perish environment, “I’ll scratch your back if you scratch mine” is the most effective strategy. Furthermore, papers that check the work of others offer little in the way of prestige (but a lot of risk in terms of making enemies). So even if it were simple to falsify something, most scientists wouldn’t bother.
Problem number two is that there are always infinitely many models which withstand falsification. This is most obvious in areas where a new model is created to justify new observations. Both the new model and the old model would have been consistent with the original observations, and there are many alternative models which might be used in the future to accommodate some new observations. Philosophically, if we are only leaning on “not yet proven to be false” models, then we should trust all of them. But of course, the only thing that they are sure to agree on is the observed data. Hence you can’t use all models simultaneously to make predictions; you have to choose a specific one. But what then justifies this choice of specific model? Historically it is the trends of the academic community, which very often are not based on anything “scientific.”
A focus on falsifiability also encourages sloppy logical thinking. Specifically, some results in science are chosen as philosophical axioms (usually with guidance from observations), some are fit to observations, and some are derived from the first two types of results. For example, if you believe that Newton’s laws of motion are true, then it necessarily follows that many other equations which can be derived from them must be true. The only way that these derived equations can fail to be true is if the laws of motion are wrong. Similarly, if you take the law of universal gravitation to be true and use experiments to estimate the corresponding constant of proportionality, then you can derive many other things. These derived results could be wrong if the law of universal gravitation is false or if you improperly measured the constant, but not otherwise.
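A minimal worked illustration of that last point (the numbers here are my own stock example, not anything supplied in the argument above): take the law of universal gravitation as the axiom, and the measured constant $G \approx 6.67 \times 10^{-11}\ \mathrm{N\,m^2/kg^2}$ as the quantity fit to experiment. The surface gravity of a body of mass $M$ and radius $R$ is then a derived result:

$$ F = \frac{G m_1 m_2}{r^2} \quad\Rightarrow\quad g = \frac{GM}{R^2} \approx \frac{(6.67\times 10^{-11})(5.97\times 10^{24})}{(6.37\times 10^{6})^2} \approx 9.8\ \mathrm{m/s^2}. $$

If that derived figure failed against observation, the blame would have to fall on the axiom or on the measured constant, not on the algebra in between.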
But in practice when people focus on falsifiability they group all things together. Everything might be wrong, the axioms, the observations, and the derived results. But to really believe this you would have to be so skeptical that you couldn’t get anything done. If every experiment and every formula can be flat out wrong, who would be brave enough to use them to predict anything? So things loop back around into a complete faith that every result is accurate, with some lip service being given to the idea that they could be falsified. But to do that we would need to audit everything regularly, and as our host notes here that doesn’t happen.
BRM (Broken Record Mode): “Do better!” just isn’t going to cut it. When humans are rewarded for doing undesirable things, such as practicing bad science, guess what happens? More bad science.
I offer no good solutions. There really should be a price to pay for the bad-science practitioners, at least reputationally. But the reputation-assignment machine is under the control of the axe-grinders. On a positive note, the passions of the axe-grinders of 100 years ago are treated as interesting footnotes at best; in the absence of passion, scholars strive for accuracy and understanding. The old saw about history being written by the victors is nonsense.
Perhaps a monthly journal dedicated to highlighting the worst of the worst, naming names of the bad scientists, their bad sponsors, their bad peer-reviewers, etc. This journal could be focused on the curious reader of 100 years in the future who is dispassionately interested in long-ago problems and how society went astray trying to address them. One format that might work is the one used in voter’s pamphlets in my state. An issue is stated, such as the text of an “Initiative to the People”. Representatives of the pro and con sides state their case on facing pages, with their main argument above the fold, and their rebuttal of the opposing argument below the fold. These arguments are presented in a largely dispassionate “just the facts, ma’am” tone, making it harder for the reader to dismiss the opposing arguments out of hand.
Some websites exist along these lines (Retraction Watch, PubPeer, Science-Based Medicine), but I’m talking about something intended to be more formal and lasting, something libraries want to archive, something highly respected. While this publication could only include a few critiques in each issue, the worst of the worst, fear of making it into this journal should be a motivating factor for even minor offenders. “History won’t look kindly” can be a powerful motivator.
Rereading what I just wrote … nah, that can’t happen, for a hundred reasons. Oh, well, that’s the best I got.
“copy-editing and typos”: I was once advised “Bad spelling/grammar is like bad breath – people forget what you said, but remember how badly you stunk when you said it”.
Correcting spelling and grammar as you write it can really kill the flow of thought, but I strive to always reread and attempt to correct. Yes, I still fail in that, but a whole lot less than if I didn’t try.
If you’ve read this far, you now are infected with the brain worm that infected me years ago. Sorry, but not sorry. It could have been worse – the words to “The Beverly Hillbillies” are stuck in there, too.
Disinterested audits? Formerly known as “peer review”.
The issues you’ve been on about lately all pretty much come down to the same thing: groupthink built and reinforced by ideologically controlled hiring and funding sucks. Well, duh!
Want to do something about it? There’s an emerging black market in ideas evolving – so support that via things like a public archive server for pre-pubs.
(Some people I know in a core science are members of a dark web group happily peer reviewing each other on the old model and developing new ideas – in secret, because their careers would end if admin/funders knew what they work on. This is dumb and weak, but practical – and easily spread: a sciency kind of samizdat).
p.s.
Re Jim H – a thought on copy editing. I’d be happy to volunteer.
Which is why samizdat emerges in stifling (to sound reasoning) environments. Life finds a way. Sound thinkers do also. So, if you are living in times or places where samizdat has to be, your culture is doomed.
Cmdr Briggs ==> “authors simply cannot see their very own self-created typos” — You speak the truth, sir! I have found this in all of my own work — even though I often use a professional editor — who finds most of my typos… but not all!
Briggs ==> Your enemies have been at it again in this piece — just as mine do in all my work.
Kip,
My enemies have finally hurt themselves. For they have proven my very point.
Good luck with finding someone to audit what CERN is up to. Do they even have any real idea themselves?
The bulk of my spelling errors and grammatical faults are primarily down to typing things on these awful, awful touchscreen phones that think they can autocorrect me. But sometimes that’s all you have…
By 2025 GPT-AI will be ONE MILLION TIMES MORE INTELLIGENT THAN THE SMARTEST HUMAN. This spells the end of human “authority” as we know it.
In response to Chaeremon’s question “why would someone derive something from a hypothesis which cannot be correct?”
To get familiar with the general limits of the frameworks one employs (logical, metaphysical, physical, &c.) and perhaps even accidentally furnish methods and material for unrelated — or yet unknown — theories or branches of science.
For example, in European science of approximately the 11th–14th centuries it was widely believed that a vacuum was physically impossible, in line with Aristotle’s demonstrations, though not without further development and deviations. Nonetheless, it was very common to speculate on the behavior of various bodies under these purely hypothetical conditions. In this manner, the medieval scientists were able to derive conclusions that were later discovered and confirmed by scientists who by then operated with very different methods and vocabulary. (E.g., Albert of Saxony’s speculations on the speed of movement of mixed bodies [= not elemental bodies consisting solely of one of the four elements] in a vacuum; see Grant’s “Much Ado about Nothing” for a reference.)
What in particular led the medieval natural philosophers to investigate these matters? Simple need to engage their intellectual resources and curiosity was perhaps the main factor. Yet in an interesting way, the insistence of the Church authorities that Aristotle’s “laws of nature” cannot be said to be binding on or restricting God’s omnipotence may have come into play as well. As Grant summarizes in a passage in his “God and Reason in the Middle Ages”:
Did “experts” actually claim that standing was more dangerous than sitting? I never heard that, despite knowing quite a few Covid-fanatics.