Our post today is provided by Terry Oldberg, M.S.E., M.S.E.E., P.E. Engineer-Scientist, Citizen of the U.S. That’s a lot of letters, Terry! Oldberg joined our Spot the Fallacy Contest, which had been laying fallow. He says he found multiple instances of equivocation in global warming arguments. What say you?
Summary and Introduction
No statistical population underlies the models by which climatologists project the amount, if any, of global warming from greenhouse gas emissions we’ll have to endure in the future. This absence of a statistical population has dire consequences. They include:
- The inability of the models to provide policy makers with information about the outcomes from their policy decisions,
- The insusceptibility of the models to being statistically validated, and
- The inability of the government to control the climate through regulation of greenhouse gas emissions.
Rather than describe global warming climatology warts and all, the government obscures its unsavory features through repeated applications of a deceptive argument. Philosophers call this argument the equivocation fallacy.
The Equivocation Fallacy
The failure of global warming research is concealed by multiple instances of the equivocation fallacy (EF), an example of which is (Jumonville):
Major premise: A plane is a carpenter’s tool.
Minor premise: A Boeing 737 is a plane.
Conclusion: A Boeing 737 is a carpenter’s tool.
The mistake can be exposed by replacement of the first instance of “plane” by “carpenter’s plane” and by replacement of the second instance of “plane” by “airplane.”
Major premise: A carpenter’s plane is a carpenter’s tool.
Minor premise: A Boeing 737 is an airplane.
Conclusion: A Boeing 737 is a carpenter’s tool.
A term that has several meanings is said to be “polysemic.” The technique for exposing the fallaciousness of any such argument is to disambiguate all of the polysemic terms in the language in which the argument is made.
Polysemic terms in climatology
Climatologists often use polysemic terms. Some of these terms are words. Others are word pairs. The two words of a word pair sound alike and have different meanings, yet climatologists treat them as synonyms when making arguments. Examples are (Oldberg):
- model
- scientific
- project-predict
- projection-prediction
- validate-evaluate
- validation-evaluation
An example
In “Is Climate Modeling Science?,” Real Climate’s Gavin Schmidt attacks an opponent’s claim that climate models are not scientific. His argument, though, draws an improper conclusion from an equivocation.
Were climate models of the past built under the scientific method of inquiry? Schmidt argues:

> At first glance this seems like a strange question. Isn’t science precisely the quantification of observations into a theory or model and then using that to make predictions? Yes. And are those predictions in different cases then tested against observations again and again to either validate those models or generate ideas for potential improvements? Yes, again. So the fact that climate modeling was recently singled out as being somehow non-scientific seems absurd.
Dr. Schmidt’s argument appears to be:
Major premise: All scientific models are built by a process in which the predictions of these models are validated.
Minor premise: All climate models are built by a process in which the predictions of these models are validated.
Conclusion: All climate models are scientific models.
This argument contains the polysemic terms “model,” “scientific,” “prediction” and “validate.”
Disambiguating “model”
The word “model” has two meanings: a) a kind of algorithm that makes a predictive inference and b) a kind of algorithm that makes no predictive inference. For the kind of algorithm that makes no predictive inference, I’ll reserve the French word modèle. Models and modèles have remarkably different characteristics, as we’ll see.
Disambiguating “predict-project” and “prediction-projection”
To “predict” is to do something different than to “project” yet most global warming climatologists use the two terms synonymously (Green and Armstrong). The idea of a “prediction” is closely related to the idea of a “predictive inference.” This relationship follows because a “predictive inference” is a conditional prediction, like these:
Given that it is cloudy: the probability of rain in the next 24 hours is thirty percent.
Given that it is not cloudy: the probability of rain in the next 24 hours is ten percent.
A “prediction” is an unconditional predictive inference. For example, “The probability of rain in the next 24 hours is thirty percent.” Notice there is no condition.
A predictive inference is made by a model but not by a modèle. A modèle, on the other hand, is capable of making projections, which a model is not. The “projection” of global warming climatology is a mathematical function that maps time to the projected global average surface air temperature.
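The contrast between the two kinds of algorithm can be sketched in code. This is an illustrative sketch only: the probabilities come from the rain example above, and the linear temperature trend is an invented placeholder, not drawn from any real climate study.

```python
# A "model" in the article's sense: an algorithm making a predictive
# inference, i.e. mapping an observed condition of an event to the
# probability of an outcome of that event.
def model_predict(condition: str) -> float:
    predictive_inference = {
        "cloudy": 0.30,      # given cloudy: P(rain in next 24 h) = 30%
        "not cloudy": 0.10,  # given not cloudy: P(rain in next 24 h) = 10%
    }
    return predictive_inference[condition]

# A "modèle" in the article's sense: a projection, i.e. a mathematical
# function mapping time to a projected global average surface temperature.
# The slope and intercept here are hypothetical numbers for illustration.
def modele_project(year: int) -> float:
    return 14.0 + 0.02 * (year - 2000)  # degrees C

# Observing the condition discharges it, turning the conditional
# inference into an unconditional prediction for this event:
observed_condition = "cloudy"
prediction = model_predict(observed_condition)  # a probability for an event
projection = modele_project(2050)               # a number; no event, no inference
```

Note the asymmetry: the model's output is attached to an event whose outcome can later be observed, while the projection is simply a value of a function of time.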
Disambiguating “validate-evaluate” and “validation-evaluation”
As the long-time IPCC expert reviewer Vincent Gray tells the story, many years ago he complained to IPCC management that its assessment reports were claiming its modèles were validated when these modèles were insusceptible to being validated. After tacitly admitting to Dr. Gray’s charge, the IPCC established a policy of changing the term “validate” to the similar-sounding term “evaluate” and the term “validation” to the similar-sounding term “evaluation.” Thereafter, many climatologists fell into the habit of treating the words in each word pair as if they were synonyms. A consequence was the creation of the two polysemic terms validate-evaluate and validation-evaluation.
A model is said to be “validated” when the predicted relative frequencies of the outcomes of events are compared to the observed relative frequencies in a sample that is randomly drawn from the underlying statistical population, without a significant difference being found between them. As it has no underlying statistical population, a modèle is insusceptible to being validated. However, it is susceptible to being “evaluated.” In an evaluation, projected global average surface air temperatures are compared to observed global average surface air temperatures in a selected time series.
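The operational difference between the two procedures can be sketched as follows. This is a hedged illustration: the population data are synthetic, and the fixed 5-percentage-point threshold stands in for a proper statistical significance test.

```python
import random

def validate(predicted_freq: float, population: list,
             sample_size: int, seed: int = 0) -> bool:
    """Validation: compare the predicted relative frequency of an outcome
    with the observed relative frequency in a random sample drawn from the
    underlying statistical population (outcomes coded 1 = rain, 0 = no rain)."""
    rng = random.Random(seed)
    sample = rng.sample(population, sample_size)
    observed_freq = sum(sample) / sample_size
    # Stand-in for a significance test: accept if within 5 percentage points.
    return abs(observed_freq - predicted_freq) < 0.05

def evaluate(projected: list, observed: list) -> float:
    """Evaluation: compare projected to observed temperatures in a selected
    time series. This yields a misfit number, not a confirmed/denied verdict."""
    return sum(abs(p - o) for p, o in zip(projected, observed)) / len(projected)

# Synthetic population: 30% of its events have the outcome "rain".
population = [1] * 300 + [0] * 700
result = validate(0.30, population, sample_size=100)  # True or False verdict
misfit = evaluate([15.0, 15.1], [15.0, 15.3])          # a distance, no verdict
```

The design point the sketch makes is that validation terminates in a yes/no judgment about a claimed frequency, whereas evaluation only measures distance between two curves.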
Disambiguating “scientific”
According to Wikipedia, “A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of knowledge that has been repeatedly confirmed through observation and experiment.” For a model, validation serves the purpose of confirming through observation and experiment. Does evaluation serve the same purpose for a modèle?
It does not. In an evaluation, projected temperatures are compared to observed temperatures but a judgment is not made in which claims made by a modèle are confirmed or denied. Thus, “scientific” cannot legitimately be used as a modifier of “modèle.” On the other hand, “scientific” can legitimately be used as a modifier of “model.”
Translating Gavin Schmidt’s argument
With the help of the disambiguated terminology developed immediately above, Dr. Schmidt’s argument can be translated into a form free from equivocation. His argument now reads:
Major premise: All scientific models are built by a process in which the predictions of these models are validated.
Minor premise: All climate modèles are built by a process in which the projections of these modèles are evaluated.
Conclusion: (none logically possible)
No conclusion is possible because Dr. Schmidt’s argument is not of the form of a syllogism. His original conclusion that “All climate models are scientific models” is the result of drawing an improper conclusion from an equivocation.
Contrasting a model and a modèle
This contrast is illustrated in this table:
| model | modèle |
|---|---|
| makes predictive inference | makes no predictive inference |
| makes predictions | makes no predictions |
| underlying statistical population | no underlying statistical population |
| makes no projections | makes projections |
| susceptible to validation | insusceptible to validation |
| insusceptible to evaluation | susceptible to evaluation |
| product of scientific method | not product of scientific method |
| conveys information to user | conveys no information to user |
| makes climate controllable | does not make climate controllable |
The last two lines of the above table deserve amplification. If any existed, predictions from a climate model would convey information to a policy maker about the outcomes of his or her policy decisions before these outcomes happened; the availability of this information might make the climate controllable. Currently, however, we have no climate models. We do have climate modèles, but they make no predictions and hence convey no information to a policy maker. Thus, after decades of effort and the expenditure of several hundred billion U.S. dollars on global warming research, the climate remains uncontrollable. Nonetheless governments, including our federal government, persist in trying to control the climate.
The “models” of AR4
Every entity in AR4 which is referenced by the polysemic term “model” is an example of a modèle. If the language of the methodological arguments that are made in the Federal Advisory Committee Climate Assessment Report (FACCAR) were to be disambiguated, the authors of the FACCAR would be compelled to admit that the items in the above table are descriptive of the climate modèles that are currently being used in making policy on emissions of greenhouse gases by the federal government. If these admissions are not made, there will be continuing catastrophic waste of the capital of the people of the U.S. on: a) attempts at controlling the uncontrollable and b) foolishly framed, deceptively described global warming research. To make these admissions would require courage and integrity on the part of the Advisory Committee.
Mr. Briggs, Mr. Oldberg, permission to repost this article to another site (Deviantart, where I swim in a sea of hostility to climate Reason)?
“Laying fallow” – as opposed to perhaps “laying eggs”? Surely you mean “lying…”?
I am always amused when they call a computer program a model and claim it can predict the future. Many years ago I worked in the RF design group at Harris Corporation and designed waveguide filters. The design process required many pages of arithmetic and you could spend a week doing calculations. It was drudgery, so I had a computer program with about 2500 lines of Fortran to mechanize the design process. Our minicomputer could do in about two minutes the arithmetic that would take me a week. We usually couldn’t calculate the exact design on the first try, but we were in the ballpark and could hit it in the next iteration or two. I didn’t call the program a filter model and claim it could predict the future. A computer program is deterministic: you put in the specification data and the program calculates the design. It is not predicting the future. Anybody who claims they know how to write a computer program that can predict the future is a charlatan.
Was it not the UN’s intention to confuse, conflate and evade the scientific method, whilst cloaking itself in the authority of science?
The longer I examine the IPCC, UN and my government’s actions and statements with regard to the alarm over weather, the more convinced I become that science was only important to sceptics; politics and PR have dominated the official pretense of a problem.
We have been played for billions, and the science establishment people who have provided cover for this scam will be the first to be offered as scapegoats by the politicians and bureaucrats.
Otter:
Permission to repost to a different site is hereby granted.
Ray:
Provided that a computer program makes a conditional prediction or “predictive inference,” it can be programmed to make predictions. For example,
Given that it is cloudy: the probability of rain in the next 24 hours is thirty percent.
Given that it is not cloudy: the probability of rain in the next 24 hours is ten percent.
is an example of a predictive inference. With a few lines of code, a model that makes this predictive inference can be programmed to make predictions. For example, it can be programmed to predict that “the probability of rain in the next 24 hours is thirty percent.” A climate modèle makes no predictive inference thus being incapable of making predictions.
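The “few lines of code” mentioned above can be sketched in Python. This is a minimal illustration; the probabilities are those of the rain example, and the names are invented for the sketch.

```python
# Illustrative only: the conditional probabilities come from the
# rain example in this thread.
PREDICTIVE_INFERENCE = {True: 0.30, False: 0.10}  # keyed on "is it cloudy?"

def predict(is_cloudy: bool) -> str:
    # Observing the condition discharges it, yielding an
    # unconditional prediction for this event.
    p = PREDICTIVE_INFERENCE[is_cloudy]
    return f"the probability of rain in the next 24 hours is {p:.0%}"

print(predict(True))  # prints "the probability of rain in the next 24 hours is 30%"
```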
Thank you for stating this argument so clearly. I have been attempting to clarify this in the comments at the Washington Post, absent your eloquent presentation. This certainly clarifies my thinking.
I have been lurking and enjoying your posts for some time Briggs and appreciate this guest posting as well as your own.
dalyplanet:
Thanks for the kind words. I share credit for the proof provided herein with Dr. Vincent Gray and for clarity of the exposition of this proof with my editor, Dr. William Briggs.
Given that it is cloudy: the probability of rain in the next 24 hours is thirty percent.
and
the probability of rain in the next 24 hours is thirty percent.
Sorry but I don’t see the difference. If I state how I arrived at the prediction then it’s not a prediction but an inference and if I keep my reasons to myself only then I’m predicting?
I would think a qualified prediction would be:
If it is cloudy tomorrow morning then the probability of rain tomorrow afternoon will be 30%.
Ah! I think I see it. Maybe you really were saying “if”. I took “Given that it is cloudy” to mean it is NOW cloudy. As in: “Given the condition of the engine, it is a wonder that it even started (or will continue to do so).” I assume (now) you meant “is” to mean “will be”.
A very tense situation.
I don’t see the problem, DAV. Given that it is cloudy now, the chance of rain over the next 24 hours (i.e., in the future) is 30%. That is, data available now is used to make a prediction of the future. I used to make predictions like that with a milk bottle, a straw, and a rubber balloon.
Actually, thinking about it some more, the statement “The probability of rain tomorrow is 30%” is a qualified prediction in itself — if it is a prediction at all. It’s like saying it probably won’t rain, which, of course, means that it might. As predictions go, you haven’t said much. It isn’t testable. A statement like “when it’s cloudy, expect rain 30% of the time” is more testable.
I don’t see the problem, DAV. Given that it is cloudy now, the chance of rain over the next 24 hours (i.e., in the future) is 30%.
Well, then I’m confused about the distinction between making a prediction and stating the reasons for making it and making one without giving any reason for making it. Why is the latter a prediction and the former a predictive inference? And why is it a qualified prediction?
Sorry, I’ve been using qualified to mean conditional. Probably not the best use of the word.
DAV:
This is a bit complicated so please bear with me.
The following prediction is conditional:
given that it is cloudy, the probability of rain in the next 24 hrs is 30%
given that it is not cloudy, the probability of rain in the next 24 hrs is 10%
The following prediction is unconditional:
the probability of rain in the next 24 hrs is 30%
“Cloudy” and “not cloudy” are examples of conditions. “Rain in the next 24 hrs” and “no rain in the next 24 hrs” are examples of outcomes. A “prediction” is an extrapolation from a condition belonging to an event to an outcome belonging to the same event in which the condition is certain but the outcome is uncertain. For IPCC climatology, there are no events. Hence, there are no predictions.
Terry Oldberg
OK, I get that. I’m just wondering why you were dwelling on it when the topic was disambiguating “predict-project”. There were no examples of “project”, only the statement that a “modèle is capable of making projections”, followed by “The ‘projection’ of global warming climatology is a mathematical function that maps the time to the projected global average surface air temperature.”
One might argue that arriving at a 30% chance of rain in the next 24 hours is also a projection from a mathematical model. As stated, it also doesn’t predict anything to the extent where it can be verified. I gather though you actually meant given any time when it’s cloudy it will rain 30% of the time in the following 24 hours and not a specific 24 hour period like tomorrow.
One might also argue that the climate models are also making predictions in the sense that given what we (or rather, they) know about how the climate works (the model) the current conditions (rising CO2, e.g.) will lead to X. I suspect, though, what is really going on is a curve fit (there’s an amazing amount of “tweaking”) followed by observation of the goodness of fit for “verification” (but then maybe “hindcast” means something else).
Surely “Dr Schmidt’s argument” is a syllogism with an undistributed middle term (the universal premisses share the same predicate) and is therefore invalid regardless of the terms used.
It’s worse than this. My interactions with an otherwise fairly intelligent economist for the World Bank cannot dissuade him from the argument: “Both CO2 and temperature have gone up concurrently. Therefore CO2 caused it.”
This is the sum of all your fears
Does your World Banker also believe that US women going to work destroyed Detroit? Or does he believe that CO2 decreased from the 1940s to the 1970s?
DAV (12 May 2013 at 4:14 am):
In your post you inadvertently equivocate by making your arguments in a language containing ambiguous terms. That you equivocate makes it impossible for one to draw a proper conclusion from each of your arguments.
Regarding the meaning of “30%,” it is the proportion of events in the underlying statistical population in which cloudiness is followed by rain in the next 24 hours.
Rich:
I disagree. If terms in “Dr. Schmidt’s argument” are not polysemic, he has successfully proved that the set of climate models is a subset of the set of scientific models.
“the probability of rain in the next 24 hours is thirty percent.”
How is the foregoing statement a prediction in any sense of the word “prediction”? It neither says that it will rain within the next 24 hours nor that it will not rain within the next 24 hours. As written it asserts a present fact about an extant object, “the probability of rain,” whatever the metaphysical nature of such an object may be. Furthermore, the temporal reference is itself obscure: is the sentence to be read as meaning “the present probability that there will have been rain when 24 hours from now will have expired is 30%”, or that “in a great many times past, when conditions have been very similar to the way they are now, rain has ensued within 24 hours in 30% of the cases”? It’s one thing to say “It will rain within 24 hours”, which is a prediction; and another to say “Either it will rain within 24 hours or it will not rain within 24 hours”, which is not a prediction. Invoking probabilities for a single event is just a way of obfuscating so as to make what is not a prediction be mistaken for one.
JT:
Thank you for giving me the opportunity to clarify.
In my example, thirty percent is the proportion of unobserved events in the underlying statistical population in which rain in the next 24 hours follows observed cloudiness.
I never thought I’d see Briggs post a guest post insisting on statistical populations, let alone insisting that a probability in a prediction is unconditional. But never mind that – a guest post is a guest post, after all.
Terry, I’m afraid Rich is right as far as the argument as you present it. If you’d taken the first premise to be something like “A model is scientific if…”, then it would be another matter.
DAV, like you, on my first reading it seemed that Terry was trying to contrast predictive inferences and predictions, but then I realised that that emphasis was meant to use predictive inference to show how prediction is not projection. I still don’t feel that the section made clear what distinction Terry is making between predictions and projections, but it seems from the rest of the post that it is closely related to the distinction between validation and evaluation.
So, the claimed polysemy of ‘model’ depends on the relevant prediction/projection distinction, which as far as I can tell depends on the validation/evaluation distinction. The ‘scientific’ polysemy is a sidetrack – Terry’s comments don’t say that it is polysemic, just that it is used invalidly in one case, which is really just arguing the opposite of Schmidt, not helping identify the fallacy. So we’re really only interested in how evaluation is not validation. It would be interesting to go into more detail there (and I doubt that it’s only climate science that would be ‘singled’ out).
All,
My ideas on this briefly.
Predictions can certainly be unconditional, but only in the subjective sense, as in, “The chance the Tigers win tomorrow is 40%.” The conditions are there, of course, but hidden. In an objective sense, no prediction is unconditional.
There is no such thing as a “statistical population”. There are only premises which (subjective or objective) allow us to say, “The uncertainty in X is quantified in such-and-such a way.” Often the “such-and-such” are some parameterized model (like a “normal”).
There is no reason in the world, save one, for “random” samples. The one reason is to check the sinful nature of man. Nobody would trust anything but a coin flip to see who receives the kickoff because coin flips in these kinds of situations cannot be predicted. “Random”, after all, only means “unknown” or “unpredictable.”
There are surely distinctions between “projection” and “prediction”, in at least the sense that people using these terms treat them differently. There are too many distinctions to talk about briefly. However, there is another sense when these two terms are identical, and that is when they are used to describe a situation in which people are making decisions based on the projection/prediction. This is when the decision maker looks upon the projection/prediction as a forecast. And even those who separate projection from prediction want decision makers to assume there is a forecast—especially when these separators are asking for funds or for political action.
My own opinion is that climate modeling and climatology constitute a science. In some cases it is a very good science; in others, appallingly poor, especially when making projections/predictions of things which will be affected by climate change. And, thus far, it has not been able to make skillful forecasts out beyond a couple of months. But calling it a “science” is no distinction. Even astrology is a “science.” See especially Steven Goldberg on this last point.
Jonathan D:
Thanks for taking the time to respond. I’ll address each of your points as best I can.
The terms “probability” and “statistical population” are polysemic. Thus, when they are used without disambiguation there is the danger of drawing improper conclusions from equivocations.
Regarding Rich’s comment, I gather that you disagree with my claim that “If terms in ‘Dr. Schmidt’s argument’ are not polysemic, he has successfully proved that the set of climate models is a subset of the set of scientific models,” but I don’t know why you would, as it seems obvious that Schmidt has proved the set of climate models to be a subset of the set of scientific models, absent polysemy of terms that include “scientific” and “model.”
Contrary to your recollection, “scientific” and “model” appear in a list of polysemic terms from a cited paper.
In distinguishing between a prediction and a projection, I’ve tied the definition of a prediction to logic by defining a prediction as an unconditional predictive inference. A projection is a mathematical function and not an inference.
You say that “The claimed polysemy of ‘model’ depends on the relevant prediction/projection distinction…” That’s not exactly accurate. In the disambiguated terminology, a model differs from a modèle in many ways only one of which makes the prediction/projection distinction.
William Briggs:
Thanks for sharing your ideas. My comments on them follow.
Some of the terms that you use in describing your ideas are polysemic; that they are polysemic leads you to draw logically improper conclusions from equivocations. One of these terms is “unconditional.” Given that a probability is conditional and that this condition is observed, the probability that is associated with the resulting prediction is unconditional. You reach the conclusion that this probability is conditional by attaching two meanings to “conditional.”
In your usage of the term, “statistical population” is polysemic. Nearly every scientific study in which I’ve played a part has centered on a statistical population. This took the palpable form of independent observed events described in computer files and countable. Thus, when you state that “there is no such thing as a ‘statistical population'” you must be giving a different meaning to this term than I and my colleagues gave to it.
I’m surprised to hear your apparent view that randomized samples have no use in scientific research. I’ve used them on many occasions for the purpose of ensuring that the composition of the sample approximates the composition of the underlying population. Would you paint your house with unmixed paint? I don’t think so.
The modifier “scientific” is more properly applied to a methodology than to a model. Whether the methodology of global warming climatology is or is not scientific depends upon how one disambiguates the polysemic term “science.” It can be disambiguated to “demonstrable knowledge” and in this case the methodology is not scientific for in lieu of the underlying statistical population the knowledge is not demonstrable. Conversely, it can be disambiguated to “the process that is operated by people calling themselves ‘scientists’.” In this case, the methodology is scientific. Under the Daubert Standard, the federal courts disambiguate “science” to “demonstrable knowledge” and thus the methodology is not scientific. This has the significance, I think, that the EPA’s endangerment finding is illegal.
Terry, the argument as you presented it was of the form:
All cats are furry.
All dogs are furry.
Therefore, all dogs are cats.
Rewording it turns it in an argument that is valid absent equivocation, which is presumably the argument intended by Schmidt.
I see that in your linked paper you did give two meanings of the word ‘scientific’, making a distinction which can often be relevant. However, you didn’t actually refer to the second meaning in this post, and there was no reason why you should, since Schmidt is clearly not using it. Do you disagree that his argument would be valid if there were no polysemy in “model”, “prediction” and “validate”?
I’m not sure what you mean by saying models and modèles differ in many ways, only one of which makes the prediction/projection distinction. You disambiguate the ideas by means of a single distinction – the use or not of predictive inferences. You assert that the rest of the disambiguations match up with the model/modèle distinction. Maybe it is not right to say that the model/modèle difference depends on prediction/projection, but it certainly depends on the predictive inference/no predictive inference distinction, which seems to be what you are getting at with prediction/projection.
As for prediction/projection, Briggs’ comment has expressed my thoughts very well (especially ‘too many distinctions to talk about briefly’). Let me add that while a projection, as you describe it, a ‘mathematical function mapping time to projected…’, could possibly have nothing to do with prediction or predictive inferences, it is also true that such functions can be based on predictive inferences. If the important distinction is the use or lack of predictive inferences, pointing out that something is a projection (as described) is neither here nor there.
Jonathan D:
Thanks for taking the time to respond.
Under my reading of his post to RealClimate, Dr. Schmidt argues that being built by a process in which its predictions are validated confers upon a model the property of being “scientific.” Thus, models are “scientific” because they are validated and climate models are “scientific” because they are validated. As there are models that are not climate models, climate models are a subset of “scientific” models.
This argument is dissimilar to the argument that:
All cats are furry
All dogs are furry
Therefore, all dogs are cats
as the set of all dogs is not a subset of the set of all cats but the set of all climate models is a subset of the set of all scientific models. Thus, I do not agree that his argument would be valid if there were no polysemy in “model”, “prediction” and “validate”.
Regarding the model/modèle difference, in my article I show that “models” and “modèles” have features that are held in common but whose feature-values are opposed. As I show, the feature-values of a “model” are conducive to controlling the global surface temperature. The corresponding feature-values of a modèle are not conducive to controlling it. The inquiry into global warming has produced only modèles, and yet governments persist in trying to control the global surface temperature. That they persist in doing this is consistent with the hypothesis that they are dupes of instances of the equivocation fallacy.
Regarding the distinction that I make between a “prediction” and a “projection,” a “prediction” is an extrapolation from an observed state of an event to an inferred state but for the IPCC modèles there are no such events. Thus, there are no predictions from them.
Terry, I’m afraid I’ve tried to cover too many different points too quickly, and they seem to have come across mixed together.
I agree that Schmidt’s argument is that climate models are a subset of scientific models, for his definition of scientific, and is valid absent polysemy. However, the argument as you presented it in the post is not Schmidt’s argument, but is like the dogs and cats example. This was Rich’s point. (Of course, it’s also a mistake to use the truth values of the conclusions to declare that two arguments are not the same.)
My separate question, regarding your point about “scientific”, would be better asked as would you accept that a process in which model predictions are validated makes a model scientific in the sense of demonstrable knowledge as long as ‘prediction’ and ‘validated’ take on the correct meanings? It seems fairly clear to me that Schmidt is not arguing that they are scientific simply because scientists make/run them, so that meaning is a distraction.
The idea that climate ‘models’ don’t involve extrapolation from observed events to inferred events is a fairly big claim. I still think most of what you’re trying to get at is in your distinction between validation and evaluation. It would be quite interesting if you did show how this affected usefulness in controlling temperature or whatever else we’re interested in.
Jonathan D:
It sounds as though we are in at least partial agreement regarding the pertinent logical characteristics of Schmidt’s argument, but that you fault my characterization of it in that, for you, it resembles the dogs and cats example. My possibly clumsily executed aim is to present an accurate characterization of Schmidt’s argument as a fallacious but superficially valid instance of modus ponens. Let A and B designate propositions. Modus ponens is
A implies B
A
Therefore B.
Specialized to Schmidt’s argument, A is the proposition that a model is validated. B is the proposition that the methodology which produced this model is scientific. Under the usual semantics of “model,” “validation” and “scientific,” Schmidt’s conclusion is true. However, Schmidt’s argument uses the three terms in polysemic ways. In view of the polysemy, to draw a conclusion from Schmidt’s argument is logically improper.
In order for valid conclusions from further development of this topic to be reached, I need to disambiguate polysemic terms. In the course of the following remarks I adopt the disambiguation that is described in the paper under discussion.
I accept that a process in which model predictions are validated makes a model scientific in the sense of “demonstrable knowledge.” The disambiguated terms “model,” “prediction” and “validation” imply the existence of a statistical population. No such population exists for the IPCC climate modèles, for to “validate” a model is to compare the predicted to the observed relative frequencies of the outcomes of the events in the underlying statistical population, and for the IPCC modèles there is no such population, hence no such events or relative frequencies. It follows that the knowledge which is produced by global warming research is not demonstrable and thus that the methodology of this research is not scientific.
The information that has been provided by global warming research to policy makers about the outcomes from policy decisions is a function of the relative frequencies of the outcomes of events of various kinds in the underlying population. As there are no such relative frequencies, outcomes, events or population, the makers of this policy must be making it without reference to information about the outcomes from their policy decisions. To attempt to control global surface temperatures in the absence of a predictive inference regarding these temperatures is futile in principle.
Pingback: A criticism of computer science: models or modèles?
Interesting discussion. I prefer Karl Popper’s filter for what constitutes a scientific theory. A theory, to be scientific, must offer a falsifiable prediction. If the prediction fails, the theory has no skill and is falsified, so start over. It does not follow that a falsified theory was not constructed scientifically. In scientific method it is OK to have theories fail as long as the series of failures trends toward improvement in predictive skill. Ideas and constructions that do not offer a falsifiable prediction may still be compelling, but by Karl Popper’s philosophy they are not scientific. We should be primarily interested in demonstrated skill, the confidence that the theory offers in its predictions. It is quite acceptable to subscribe to a theory that has been falsified yet shows useful skill. Newton’s Law of Gravitation is an example of a useful but falsified theory.
Pingback: The Big Lie | Texanation