We had the fourth Broken Science Initiative event this past Saturday at the Castro Ranch in Aromas, California. Which is apparently where CrossFit got its start. Standing room only. Terrific crowd. Good questions. Tacos. And a Mariachi band!
Greg Glassman and I spoke. The talks were recorded and will be put on the BSI YouTube page when they’re available. Now you’ve heard more than enough from me, and anyway you can watch my talk later, so I’ll tell you about Greg’s instead.
My favorite part was this “slide” of his (poster boards, with his very own Vanna White revealing them one by one).
You’ll recognize some of these, like numerology and clairvoyance. Others you might not know, like catoptromancy, which is a form of divination using mirrors. Or phyllomancy, which is divination using leaves. And not just veins on leaves, but the sound their rustling makes.
Not every form of divination—the art of foretelling the future—is on the list, which is large, and ever growing. We are all incurably curious about what will happen to us. Perhaps the most prominent absence is time series. Time series is a subset of numerology, a form of divination looking at patterns in numbers.
In time series, past numbers are scrutinized, and the patterns found within them are given mathematical form, forms which are said to have a sort of causal existence or power. These forms, which may be quantified by arcane, complex, alchemical formulae, or by simpler, cruder mathematical guides, are no different in spirit from the forms thought to be heard in the rustling of leaves.
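To see what I mean by giving the patterns mathematical form, here is a toy sketch, with numbers I invented only for illustration, of the simplest version of the trick: fit a line to past values, then project the line forward as if the form itself ruled the future.

```python
# Toy time-series "divination": extract a form (a straight line) from past
# numbers and assume it keeps acting on the future. Numbers are invented.
past = [10.2, 11.1, 11.9, 13.0, 13.8]    # the scrutinized past values
n = len(past)
xbar = (n - 1) / 2                        # mean of the time indices 0..n-1
ybar = sum(past) / n
slope = sum((i - xbar) * (y - ybar) for i, y in enumerate(past)) / \
        sum((i - xbar) ** 2 for i in range(n))
intercept = ybar - slope * xbar

# The prophecy: the fitted form is presumed to govern the next value.
print(round(intercept + slope * n, 2))    # 14.73
```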
Some of you will object and say some of these divination methods are “science”, and some are “pseudo-science”. The derogatory term is favored by scientists to dismiss unfashionable, and sometimes even unproductive, ideas.
To separate “science” from “pseudo-science” was, Greg reminded us, once a popular subject in philosophy called the “demarcation problem.” Philosophers, like our beating-bag Karl Popper, and scientists argued for years over just what unique distinction there was between the two. The conclusion was that there is no conclusion. There is nothing that allows one to draw a line between what counts as science and what counts as pseudo-science. We can tell that story another day.
Yet even if there is no demarcation, we can still separate good from bad science, useful from harmful science, and accurate from inaccurate science. Here the separation, philosophically, anyway, is easy. If the science makes good predictions, it is good science. If bad, then bad. Simple!
Simple idea, that is. Not so easy in real life.
All those methods of divination are science. Or sciences. But not all are good or useful science. What’s useful? You have heard me say hundreds of times that a model or theory can be useful to one man, and useless, and even harmful, to another. Like models for masks preventing the spread of respiratory diseases. Damn useful for the longhouse, and damned painful for its inhabitants.
Point is, usefulness and goodness depend on the purposes to which the sciences are put. By science I mean models or theory, or just models, since to my mind these are the same.
The possibility of different uses for a science means it can be judged differently. The line is fuzzy.
It’s made fuzzier because of how a model’s predictions are seen.
Greg made the point over and over that a model’s (or a science’s) value is in its predictive ability. This is true. And if this truth were generally acknowledged, the very practice of modern science would shift in a most dramatic fashion. Alas, scientists are content with announcing, nearly all the time, how beautiful their models are, and not how well they predict Reality. Predict bits of Reality never before seen or used in making the models, that is.
Maybe you scoffed at catoptromancy, thinking it absurd that the future can be seen in mirrors. I too think this is not possible. But how is it that practitioners think it is?
Same as any scientist following the predictive way! They make predictions according to their model, and then verify them. And they are extraordinarily creative about what counts as verification. Or what counts as proper predictions.
Scientists often hide their predictions by calling them “scenarios” or the like. But all predictions are of the form “If X, then Y (plus or minus)”. X is the set of conditions that hold in the world. If X is true, then Y (plus or minus) is a prediction, and not a “scenario.”
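To fix the idea, here is a toy sketch (the conditions, bounds, and numbers are mine, invented purely for illustration) of what verifying such a prediction looks like when X and Y are pinned down beforehand:

```python
# A prediction has the form "If X, then Y (plus or minus)".
# X: the conditions assumed to hold; Y: the predicted value with stated bounds.
# All names and numbers below are invented for illustration.

def verify(x_held: bool, y_low: float, y_high: float, observed: float) -> str:
    """Score one pre-specified prediction after the observation comes in."""
    if not x_held:
        return "X did not obtain: the prediction is neither verified nor falsified"
    if y_low <= observed <= y_high:
        return "verified: the observation fell inside the stated bounds"
    return "falsified: the observation fell outside the stated bounds"

# "If policy P stays in place (X), there will be 100 +/- 20 widgets (Y)."
print(verify(x_held=True, y_low=80, y_high=120, observed=150))
```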
But you can imagine that often X can be quite dense, a tangled compound proposition open to interpretation—after the fact. The same is true for Y. Nuances and shades of meaning shift in the verification.
This happens with all methods of divination by scientists of all stripes anxious to retain their theories. Think of both global climate models and clairvoyance.
This is why in any true test of a science, the verification must be specified with rigorous exactitude and carried out by disinterested parties, with no after-the-fact excuses allowed.
Which has been the case in many psychic science claims. But which hasn’t been done, not so far, in “climate change” models.
More to come!
Subscribe or donate to support this site and its wholly independent host using credit card click here. Cash App: $WilliamMBriggs. For Zelle, use my email: matt@wmbriggs.com, and please include yours so I know who to thank.
Scatomancy
Bah.
https://en.wikipedia.org/wiki/Scatomancy
You paint “science” with one big, broad, brush, and talk about its problems. But, you don’t provide examples! Not all the sciences have the same issues or to the same degree. Tossing the babies out with the bathwaters? Just reading your comments, I’d get the impression that all sciences are failures, no better than superstition.
“Scatomancy” — divination by shit. Modern science in a nutshell.
Scatommentary.
“If the science makes good predictions, it is good science. If bad, then bad. Simple!”
This is exactly right! But as you emphasize it requires honesty to work. That is a strict – moral, religious – devotion to truthfulness both individually and culturally.
As a practical example of such honesty, both individual and cultural from back when science still existed as an active endeavour, here’s a story told by Richard Feynman about his time in the Manhattan Project:
“One of the first experiences that was very interesting to me in this project at Princeton was to meet great men. I had never met very many great men before. But there was an evaluation committee that had to decide which way we were going and to try to help us along, and to help us ultimately decide which way we were going to separate the uranium. This evaluation committee had men like [Richard] Tolman and [Henry DeWolf] Smyth and [Harold] Urey, [Isidor I.] Rabi and [J. Robert] Oppenheimer and so forth on it. And there was [Arthur] Compton, for example.
One of the things I saw was a terrible shock. I would sit there because I understood the theory of the process of what we were doing, and so they’d ask me questions and then we’d discuss it. Then one man would make a point and then Compton, for example, would explain a different point of view, and he would be perfectly right, and it was the right idea, and he said it should be this way. Another guy would say well, maybe, there’s this possibility we have to consider against it. There’s another possibility we have to consider. I’m jumping! He should, Compton, he should say it again, he should say it again! So everyone is disagreeing, it went all the way around the table. So finally at the end Tolman, who’s the chairman, says, well, having heard all these arguments, I guess it’s true that Compton’s argument is the best of all and now we have to go ahead.
And it was such a shock to me to see that a committee of men could present a whole lot of ideas, each one thinking of a new facet, and remembering what the other fellow said, having paid attention, and so that at the end the decision is made as to which idea is the best, summing it all together, without having to say it three times, you see? So that was a shock, and these were very great men indeed.”
Of course, the current dishonest version of scientific prediction would be: We predicted all of humanity would die of COVID if we didn’t institute an insane tyranny in all things, then we made the tyranny and all of humanity has not died. Our prediction worked! Trust the Science!
So yes, academic science is broken, basically because it is an increasingly dishonest culture increasingly full of dishonest people. If science in any meaningful form is to be resurrected it must be by something like the original Royal Society and its precursors – outside of academia. And it must be ruthlessly exclusive and intolerant of dishonesty in all forms.
“Science” is not a single “thing,” and so to imply that “science is broken” is to lump medicine with physics with chemistry with genetics with ecology with geology and paint them all with one brush. I am a scientist, and I’d say that 95% or more of the many scientists I know are honest. And, to say “We predicted all of humanity would die of COVID if we didn’t institute an insane tyranny in all things, then we made the tyranny and all of humanity has not died.” as if that was what anybody said or did is simply hyperbole and ONLY talking about medical science – and they don’t speak for everybody.
The broad ambiguity in saying science is broken because of a few things you disagree with gets nobody anywhere.
I don’t have time to read your lengthy diatribe today, but I did want to mention that Net Zero Watch just sent an email with an interesting article by some guy named William M. Briggs. Quite a coincidence his name is just like yours, eh?
Dear Mr. Roper.
Which academic “science” is independent of government grants? Which academic “science” is independent of absurd bibliometric measures of “quality”? Which one is exempt from the reproducibility crisis? Which one is exempt from the massive corruption of Diversity, Inclusion and Equity? Did you fail to notice that even mathematicians must now sign diversity statements to be hired?
Yes, all parts of “science” are subject to the same rot and broken in basically the same way for the same reasons. You fail to see the dishonesty around you, just as you failed to see the two years of COVID-tyranny justified in the name of “Science”.
The liquid flowing from current academia is not bathwater, and the things floating in it are not babies.
Morten Nielsen said: “Which academic ‘science’ is independent of government grants? Which academic ‘science’ is independent of absurd bibliometric measures of ‘quality’?”
Just because a government grant is used to fund research does not mean the researcher is dishonest or does shoddy work. And, “measures of ‘quality’” are not corrupt in and of themselves, and in many (but not all) ways the measures can be associated with true “quality.” So, these claims by themselves say nothing. But, even Da Vinci, Galileo, Copernicus and Isaac Newton had to buddy up to the filthy rich of the time to get their financial support. Nobody said that the system is perfect, and even now, there are many scientists trying to improve the system. But, simply saying it’s an imperfect system is a truism, void of useful information. And, it implies that ALL scientists are corrupt, and they’re clearly not. By the same logic, anybody that has a boss is corrupt, because they’ve got to keep their boss happy. The system is evolving, as all systems do, and will improve with time as scientists recognize the issues and work to change them (such as with Sci-hub, Z-library, and other ways to free up research so that we don’t have to pay exorbitant prices to get a copy of a paper). Ranting about it among other ranters will do little to improve things. But, pointing out clear examples to colleagues, friends and scientists might.
Morten Nielsen: you are correct, sir!
Competence, honesty, absence of ideological influence, freedom from government and corporate funding, and divorce from academic corruption must all happen at once. Not bloody likely.
JJR: “But, you don’t provide examples!”
I stumbled across a website years ago when I was looking for a critique of a study that was getting a lot of attention, but seemed pretty bogus to me:
https://www.wmbriggs.com/
Since that time, this particular website has dismantled many dozens, if not many hundreds, of scientific studies. I highly recommend that you check it out, and peruse some of the many past study analyses to be found in the archives there.
James Roper:
The Minnesota Department of Health based its COVID response on predictions from the University of Minnesota. The predictions are available here:
https://mn.gov/covid19/assets/MNmodel_PPT%205.21.20%201019AM_tcm1148-434753.pdf
You’re an honest guy, so give me your honest answer:
-Are these predictions accurate or not?
-If you believe they are, what metric did you use to determine the success of the predictions?
-If you believe they are not, what should be done about the failure of medical science here?
Keep paddling in that well-known river in Egypt, @Roper. Scientists that take government grants are corrupt. You can’t get a government grant unless you achieve a government approved result. A classic example is investigations into group differences in human intelligence. It’s impossible to do objective science in this area because you risk falling foul of laws against various -isms, quite apart from the fact you can’t get government money or published in mainstream journals. Since being published in mainstream journals is a large part of how academic scientists justify their salaries there’s no future for investigations into group differences. Actually I know someone who has carved out a very narrow niche in this area but he’s in trouble if ageism becomes the new societal crusade.
Modern science is entirely political and entirely designed towards regime support, or garnering profits for global megacorps. Essentially it’s all now Lysenkoism. Anyone who imagines different clearly has no ability to grasp phenomenal reality or is a scientist on the take.
I REJECT The Science and embrace INSTINK. I have an INSTINK for survival. Follow your INSTINKS.
John Pate said: “Scientists that take government grants are corrupt.” Now that will require some proof. Unless you do not consider the National Science Foundation a government agency. NSF funds much of the research done in the USA, and equivalents in other countries fund most of the research there. NIH ditto, for more medical research. Nearly half of all research takes place at institutions of higher education. The way the system works, if not perfect, requires that research professors get grants, and a part of that grant money always goes to the institution of higher learning. All that was set up long ago, before any researchers today were alive. To get a grant they write proposals that are evaluated by other researchers. So, exactly how are all those researchers corrupt simply by doing their job? It may not be the best system in the world, but that doesn’t mean that everybody that gets or applies for research funding is corrupt.
Mr Roper, I tend to agree with you, and I think much of the argumentation here stems from differing interpretations of the words “science” and “scientist.” I don’t think anyone is “wrong” or “right”; they are just talking about slightly different notions. It is true people can be wrong, but there is one thing that can be relied on, to quote Feynman: “Nature is never wrong.”
In general, Mr. Briggs, I think this is an excellent article. I think part of the problem is the creep that has occurred in the popular use of the term “science.” As you mentioned, many set up a binary demarcation such as science/pseudo-science, but that raises hackles in many with the value judgement implicit in “pseudo.” In some popular arenas anything is “Science” if it comes out of a computer or is spoken by a person wearing a white lab coat. There seems to be more of a continuum of what people want to include in the term “science” now. I wish there were another word or set of words that could describe different approaches to “predicting things that are observed to happen,” or sometimes folks just want to “explain things that are observed to happen.”
As a physicist, I tend to favor (likely by a process of self-selection) the very hard core, minimalist view of “science” expressed by Feynman, for example in the famous snippet from his Cornell lectures at this link https://www.youtube.com/watch?v=EYPapE-3FRw . The essential elements are 1) coming up with a mental theory, 2) expressing the theory quantitatively such that one can 3) make a non-arbitrary quantitative prediction of the outcome of a controlled experiment. The process must be such that anyone can generate the predictions (and get the same results), and anyone can perform the experiment(s) and get the same results. It doesn’t mean only some particular people can make the predictions, or they can only make the predictions if “the vibes are right” or some such. In this view, one never proves a theory “right” one only establishes that for the experiments done so far, it has not been demonstrably wrong. [This latter point can require a good bit of work making sure the experiments are not flawed in some way and that the quantitative bounds of the intrinsic uncertainties of the experiment and the prediction are understood, but often very generous bounds are enough, at least to decide between competing theories.] In this view, to paraphrase Feynman, “it doesn’t matter who makes the prediction, or who funds the guy who makes the prediction, or where he lives, if it disagrees with experiment it is wrong.”
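To make element 3) concrete, here is a toy sketch of the kind of agreement check I have in mind; the numbers, the tolerance k, and the function name are purely my own illustration, not anyone’s standard:

```python
import math

def agrees(pred: float, sigma_pred: float, meas: float, sigma_meas: float,
           k: float = 2.0) -> bool:
    """Crude check: prediction and measurement agree within k combined
    standard uncertainties (generous bounds are often enough)."""
    combined = math.sqrt(sigma_pred ** 2 + sigma_meas ** 2)
    return abs(pred - meas) <= k * combined

# Illustration only: theory predicts 9.81 +/- 0.05; experiment gives 9.79 +/- 0.03.
print(agrees(9.81, 0.05, 9.79, 0.03))  # True: not yet demonstrably wrong
```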
This view works well for people who are content to work in the relatively mundane world that can be studied via controlled experiments. For some, that world is perhaps overly restricted, and even dull, but at least it is internally consistent and well grounded and has some utility in applications. But ultimately people do this kind of “science” because they like it for its own sake. As Feynman also said, “physics is like sex, sure it has practical results, but that is not why we do it.”
A problem arises, though, because there are a lot of things we want to know about, and even predict, that simply are not amenable to controlled experiments. Even within physics, there are such areas and degrees of separation from controlled experiments. Astronomy is an example, which in some ways can be probed experimentally, but in many is simply beyond such. Cosmology is one in which controlled experiments are virtually impossible but still people generally include it within physics. In most cases, these areas seek to “explain things that are observed to happen” in terms of theories that have been experimentally checked, but predictions are usually limited to predicting future observations of things (perhaps controlled observations, but not controlled experiments in the sense that the system being observed is controlled) that have not been observed, or not observed in a specific way, before. There are many other examples, such as evolution. But most of these areas restrict themselves to making predictions in terms of non-arbitrary quantities from the experimentally based disciplines, i.e. standard units of measure (forces, energies, momenta, etc.) in non-arbitrary units.
But we seem to want to “explain” things that don’t seem to be amenable to the standard terms and units of the hard core experimentally based sciences. Climate science is a difficult example. For one, it is not possible to do controlled experiments on climates. Also, different people (i.e. people who write big computer code models) get different results for predictions of the same future climate observations. And there are quite a few concepts involved that are not well defined within the experimentally based sciences, such as “fingerprints” or “forcings” or even the notion of a temperature index as a thermodynamic quantity (it isn’t).
It gets even worse when we want to “explain” or “predict” things that are even less well expressed in relation to some experimentally grounded notions, such as psychology, or sociology.
It would be nice if there were some other, more nuanced way to describe analytical approaches to explanation or prediction than the word “science.” Maybe there just isn’t. There seems to be a trend to replace “scientific” with “intelligent,” as in the trend toward artificial intelligence, or sometimes machine learning or neural networking. There seems to be a concern, even fear, that “AI” will somehow be that generator of Truth that some have been wanting to find in the word “Science.” The fear seems mostly to be held by those who either 1) don’t understand the inner workings of such codes or 2) have a very low opinion of how well ordinary people can distinguish a confirmable fact from a merely plausible statement. Time will tell. Something big may happen with AI, or something akin to the Y2K apocalypse (i.e. nothing) may happen.
James Roper,
You wanted to talk about specific instances of scientific predictions failing. I gave you one to consider, and you have ignored it. Instead you talk about general hypotheticals, the exact sort of thing that you criticized others for earlier.
So I ask you again: were the University of Minnesota predictions correct? If so, what metric did you use? If not, what should be done about it?
Rudolph, I read the Minnesota predictions, and they were all from the year 2020. In the two graphs that showed the model predictions with the observed values of mortality and cases, the models were reasonably close. I found nothing in the article you linked that said anything about the following two years, and so I found no way to test how close they were over the long run. And, the whole point of those models was to attempt to predict the dynamics reasonably well. Everybody knows that pandemic dynamics are complicated because of the mutations in the virus, and so nobody expects a model to be perfect. But, if it can predict trends, even if only for a while, then that’s good. To sum it up, nothing in that document looked terrible nor perfect. I notice you didn’t make any claims about it but just requested that I do. And, always remember, ONE example does not prove that science in general has problems.
Why would it be a problem that the predictions are for 2020? That simply means that we actually have the data for deaths in 2020, i.e. we can actually verify if the predictions worked or not.
And you obviously didn’t do that. This is obvious enough from the fact that you didn’t cite any specific numbers, only speaking in vague generalities. So let me look at some actual numbers.
Let’s focus on the deaths after one year stat, i.e. deaths through January of 2021. The UMN models give predictions for a variety of scenarios, depending on the response taken and efficacy of testing. The absolute best case scenario is 6b, which assumed that lockdowns would continue longer than they did in reality, that there would be more tests done during that time period than happened in reality, and that tests would be more accurate than they were in reality. So if a correct prediction means anything at all, it means that the number of deaths in reality should be higher than those shown in scenario 6b.
Scenario 6b predicts 22,589 deaths after one year, with a 95% confidence interval going from 12,903 to 32,012 deaths. The actual number of deaths UNTIL TODAY is 14,835, according to the Minnesota department of health:
https://www.health.state.mn.us/diseases/coronavirus/stats/death.html
That is, the number of deaths more than THREE years later is less than the number of deaths predicted after ONE year in a scenario which should have had much fewer deaths than reality (if the models were accurate and mitigation methods actually did anything.) The MN department of health no longer allows you to easily find deaths per week in historical data, so finding the exact amount after one year is tricky. But luckily I did this at the time:
https://www.wmbriggs.com/post/33825/
After one year, there were about 4,200 deaths. About a third of the bottom point on the confidence interval in the absolute optimal scenario.
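For anyone who wants to check the arithmetic themselves, here is the comparison spelled out in a few lines, using only the figures already quoted above (the one-year count is approximate):

```python
# Figures cited above: UMN scenario 6b prediction vs. MN Dept. of Health counts.
predicted_1yr = 22_589              # scenario 6b: deaths predicted after one year
ci_low, ci_high = 12_903, 32_012    # 95% interval on that prediction
observed_1yr = 4_200                # approximate actual deaths after one year
observed_3yr_plus = 14_835          # actual deaths more than three years later

print(round(observed_1yr / ci_low, 2))    # ~0.33: about a third of the lower bound
print(observed_3yr_plus < predicted_1yr)  # True: 3+ years of deaths < one-year prediction
```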
Yeah, these models are about as busted as models can get.
But thank you for serving as an object lesson for how science gets away with bad predictions, James. You act like many scientists do. That is, you don’t actually dig into the data when it could be bad for you, but rather speak of the models working “reasonably well” or matching vague “trends.” You appeal to the whole thing being complicated, which SHOULD mean that we should be careful trusting models, but which you instead use to mean that we should excuse bad results from models. And if something ends up being truly horrendous (which will take a great deal of effort to get you to admit, since you don’t engage with the data) you simply say “well this ONE example is bad, but everything else in science is great!”
It’s people exactly like you who ruined science. And don’t go on about it being unlikely that researchers would be corrupt or sloppy. All that needs to be true for that to happen is for people to be as lazy and sloppy as you are in their reasoning. And from having spoken to many academics in many disciplines, I can verify that most of them ARE like that.
Rudolph said: “And you obviously didn’t do that. This is obvious enough from the fact that you didn’t cite any specific numbers, only speaking in vague generalities. So let me look at some actual numbers.”
Indeed, I said that the graphs that showed the model predictions and the true values were in relatively close agreement. Nobody would, or should, use those models to continue predicting, especially long-term, especially when variants showed up. Using them to predict the relatively short-term trend is okay, but they’d have to tweak the model for longer terms.
For you to expect the early 2020 model to predict the long-term means you don’t understand modeling, epidemic dynamics, or both. For you to expect me to jump just because you said to, is also unrealistic. And, pandemic dynamics is not ALL of science. It’s people like you who don’t seem to understand much who think science is ruined when it is not. It may have its ups-and-downs, but then all human endeavors do, even communication.
The model literally makes predictions for one year out. But we are supposed to ignore those because they are too long term? Why make them in the first place?
But if you dig into the historical data even the one MONTH predictions are inaccurate. Is that also too long term? What then is acceptable? One week? One day?
In what meaningful sense do models predict anything if one month is too far out?
But again, thank you for serving as a good object lesson. Your rhetoric is all about the usefulness and accuracy of science. But of course when the rubber hits the road you are willing to abandon both usefulness and accuracy, so long as the PRESTIGE of science is upheld. And in this way you are no different from the tens of thousands of academics willing to excuse a million inaccurate predictions as long as the paper counts on their CVs can be padded out.
@Roper You didn’t address any of the points I made, you simply trolled me by laying out some irrelevant nonsense about how the bureaucracy dispenses taxpayer money. Like I said, it’s not science, it’s Lysenkoism. The grants and favours are only given to regime-approved science, and the so-called scientists play along to keep their snouts in the trough. As for “not all science is ruined,” maybe you mean stuff like CERN, which nobody can challenge or replicate, and which has no practical engineering or social application to ground it in the real world where results might actually matter and mean anything. It’s all fake and gay.
A guy will spend his entire life at a university doing research on bugs. Then he retires, goes senile, dies, gets eaten by bugs. All his research gets thrown out, gets eaten by bugs. His kids throw his bug collection into the trash, gets eaten by other bugs. Science!
Rudolph Harrier complains about a model out of Minnesota that didn’t fare very well, and then seems to think that it summarizes all of science. I don’t find the model particularly disturbing, because as far as I could tell, it was abandoned in 2020, the same year it was developed. But, regardless, it was an epidemic model, and isn’t typical of all sciences or all models. And, I’d agree, a model that can’t predict a month ahead isn’t a good model. That doesn’t make all models bad, nor all sciences a problem.
On bugs. The guy who spends his life doing research on bugs has a wonderful life because he was doing what intrigued him, and learning a lot about life in the process. Died a happy scientist, and then was eaten by bugs, because no matter what, we’re all eaten by bugs in the end. And, oh, we know a lot more about bugs because he lived. Science!
@Pate. You didn’t make any points.
You really are the perfect example of why science is broken and will continue to be broken.
-You complain about examples not being provided, and use that to say that science is fine.
-I give you an example of a busted model, and you only skim through it, using an “eh, looks plausible” test to say it is fine.
-After pointing out the inaccuracy of its predictions you try to defend the model by saying that we are extrapolating too far (even though the extrapolated time period matches what the model covered, and the time period that politicians used the model to set policy in.)
-After some more prodding you finally admit that the model made bad predictions, but say that it doesn’t matter since we probably have new models.
-Furthermore you say that the majority of the rest of science is fine anyway.
Now with this attitude, how would we ever find errors in science? Your attitude is to first assume everything is going right, then to explain away apparent errors, then to say that they don’t matter. What would it take for you to say that there is a problem? If I gave you 100 more busted epidemiological models, you would just say that epidemiology has some problems or (more likely) that this specific type of epidemiological model is questionable, but other medical models are fine. You give lip service to the accuracy of science, but you do not want to really analyze the accuracy for fear of what you might find.
And the average scientist is just like you. A great deal of faith in the general effectiveness of science, but with no great desire to put anything to the test. In fact for the average scientist it is worse, since having an incorrect result could be a great blow to his personal prestige, and finding an incorrect result that a peer has published could lead to him getting an enemy. So everyone charges ahead patting themselves on the back for the “self-correcting” nature of science, while being very careful never to look for anything to correct.
Rudolph Harrier, you really are the perfect example of why it’s a pain trying to explain anything. First of all, you’re clearly already pissed off. You want your example to be more meaningful than it is. And then, you don’t like it when someone else just doesn’t extrapolate to everything as you do.
What you don’t realize is that you don’t even know what the average scientist is like, yet you make claims about them. You think they all have this faith in the effectiveness of science, and, well, scientists might tend to have a reasonable perspective on the usefulness of the scientific method, but they all know that humans are fallible. Clearly you do not interact with scientists, because then you’d know that many don’t really care if their previous study is shown to be incorrect, because that’s how science proceeds. By fits and starts, making mistakes, uncovering mistakes, doing it right, a lot, and so on.
My point about the Minnesota model is simply that it doesn’t matter. It’s not representative of the “sciences” and it’s not perfect. It was made to be useful for a while, but it couldn’t take into account new strains because it was based on how the old strains were acting. No surprise there. Models must always be “tuned” simply because population dynamics are chaotic, and there may be no real way to predict epidemics in the long term. But, we can try, because even if we can only foresee the short term, and take precautions, we can often save a lot of lives. But, because we use the model to take measures, the measures themselves, if effective, will make changes in the epidemic dynamics that will then make the model useless, or require that the model be adapted. You seem to want perfection, and since you don’t get it, science is all messed up. That’s just not the way models, or science, work.
I am a scientist, ornithologist, and I have taught statistics to grad students for more than 20 years. I’ve told them about the problems with “statistical significance,” with hypothesis testing (and formulating realistic and useful hypotheses), and how to do research so they can actually get a result. I’ve always been iconoclastic, and even had issues with editors over studies I’ve done because they didn’t seem to like my results, even if the research was carefully and correctly done.
Epidemiology is not the same thing as “science.” Talk to any epidemiologist, and they will tell you they know all about the problems that come with model making. They’ll explain how models CAN still be useful for helping to understand the dynamics of the epidemic. After all, a model CAN show us what we think is going to happen with our current understanding of the epidemic. If the near future does not follow the model, it means that something is going on that we don’t yet understand, and so gives us a clue to go look for it. Models do not have to be perfect to be useful.
@Roper. You blithely ignored mention of investigations into group differences in human intelligence. You failed to leap to the defence of CERN. It’s clear to me at this point that you’re trolling us. I worked at two universities before I retired and saw how universities have become all about the money and pandering to their customers. Science is now used by the regime to impose its faux moral order and clearly is becoming more and more disconnected from phenomenal reality. That’s why models and appeals to future catastrophes are so important to modern science. I thought ornithology was an actual science, but since you seem to be happy to throw things out of what you consider science, it’s pretty difficult to imagine what your screeds of text are actually arguing about, though your fondness for the bug man brings to my mind the worm Ouroboros.
@JamesRoper — If you watch the video of this event, you will hear wmbriggs specifically say “Science is not all broken…”
Specific examples have been given repeatedly.
Science is still science. There is good science out there. But JEDI is entering at all levels… My son just graduated from UW in Aerospace. The second speaker was touted for her contributions on the JEDI front (Justice, Equality, Diversity, Inclusion)… BLM entered the speeches, and they were not talking about getting land to launch rockets… The Keynote speaker talked about the first Woman and the first person of Color on the moon coming soon. Sustainability was a key aspect of at least one of the PhD theses…
Philosophy IS part of aviation and aerospace. Bernoulli or Angle of Attack? Arguing over this will lead to improvements in our understanding of Aerospace. Arguing about DEI, DIE, JEDI does not get us anywhere, except to avoid NOT being included in the next grant cycle.
NOT enters the picture everywhere. No matter what we do, we are stuck in the NOT.
When you gaze into the not, the destruction of science is easier to see.
Those predictions were a joke from the very beginning. Then the data started getting changed 8 months after it was published… Fun stuff…
I trust most of the people who comment on this blog to be able to handle their shit when things tap dance sideways… There are a few who jump in from time to time and completely miss what our Uncertain Host is pointing at. We do not prove that we are correct. We fail to prove that we are wrong, which runs us into the conundrum. You never get to stop coming up with ways of proving that you are wrong. Infinite loop.