William M. Briggs

Statistician to the Stars!


Reasoning To Belief: Feser’s The Last Superstition: A Refutation of the New Atheism — Part I

Read Part I, Part II, Part III, Part Interlude, Part IV, Part V, Part VI, Part Last.

This begins a series of posts reviewing Ed (if I may call him that; for all I know he goes by the more elegant Edward) Feser’s The Last Superstition: A Refutation of the New Atheism. (The posts won’t be contiguous.) We’ll also make use of Feser’s Aquinas, his Philosophy of Mind, and of his paper “Existential Inertia and the Five Ways,”1 which contains a tight, crystalline summary of Aquinas’s Five Ways.

Every atheist must read this book. Every atheist who is sincerely committed to his belief, that is. Casual atheists who would rather stick with unproven, but comforting, orthodoxies had best keep away. Because this book will be rough on them. Perhaps, some claim, too rough for a book from a Christian.

It is well to dispense with certain irrelevant matters immediately. Feser gives us a manly Christianity, in muscular language. His words oft have the tone of a teacher who is exasperated by students who have, yet again, not done their homework. The exasperation is justifiable. “Aquinas,” he tells us, “as is well known, always painstakingly considered all opposing arguments, and always made a point of attacking an opponent’s position at its strongest point.” Yet most of Aquinas’s modern-day opponents do not consider him at all. Or they gleefully poke at the remnants of a straw effigy theologians set fire to long ago, all the while congratulating themselves on their brilliance.

This does not compute for Feser, who does not suffer (arrogant) fools well—or at all. This perplexes some readers who undoubtedly expect theists to be soft-spoken, meek, and humble to the point of willing to concede miles to gain an inch. Feser is more of a theological Patton: he is advancing, always advancing, and is not interested in holding on to anything except the enemy’s territory. This stance has startled some reviewers. Typical is the (self-named) Unpublishable Philosopher who ignores the meat of the book and whines about “ad hominems.”

Now if a man, a theist, says, “Richard Dawkins is a jackass and here is a proof showing his attacks on God’s existence fail utterly” and a second man, an atheist, is interested in whether this proof is valid, then it is irrelevant to the proof that the theist calls Dawkins a jackass—unless that statement forms part of the proof. In Feser’s book, which is loaded with similar phrases, such statements never do. (Feser nowhere uses the word jackass.)

However, if the atheistic Dawkins fan hears the theist, all that penetrates through to his ossicles is jackass. The word lodges deep in his auditory canal and blocks further entrance: the proof goes unheard, or it is heard but badly distorted. And this is so whether or not Dawkins is a jackass, which is an empirical and not a philosophical question. The proof is forgotten and the argument turns to whether the theist is himself a jackass for claiming Dawkins is; or, if he is not a jackass, whether he is a good Christian, because (the atheist once read) good Christians don’t call people jackasses, even if their targets demonstrably are jackasses; or to the use of the ad hominem, etc. Then comes the final fallacy, which says that because somebody who claims to be a Christian does an unChristian deed, Christianity must be false or unworthy of study. Or that Feser’s book needn’t be taken seriously.

Feser does spend a fraction of his time upbraiding his enemies for not heeding their lessons, and he isn’t shy about publicizing the “F”s he hands out. He says that Dawkins and Dennett are “ignoramuses” because of their “embarrassingly ill-informed dismissals” of proofs of God’s existence. He calls the work of Sam Harris a “disgusting spectacle.” He says that views held by eliminative materialists “are titillating and have, for obvious reasons, an emotional appeal for adolescents of all ages. But from a rational point of view, they are completely worthless; as David Stove once said, at the end of the day their proponents have little more to offer in their defense than ‘shit-eating grins.’”

He says that “smugness is half the fun of being a liberal (the other half being the tearing down of everything one’s ancestors, and one’s betters generally, worked so hard to build).” He claims the “New Atheist’s pretense that a religious view of the world can only ever be the result of wishful thinking rather than objective rational argumentation is thereby exposed as a falsehood, the product, if not of willful deception, at least of inexcusable ignorance.” “No doubt,” says Feser, a New Atheist responding to his book will be “sputtering some response,” but there is also no doubt that “the response will be superficial, ill-informed, and dogmatic, long on attitude and short on understanding.”

Dawkins’s attempt to counter the Unmoved Mover argument is a “serious lapse in scholarly competence and/or intellectual integrity.” Of the now-dead Hitchens and the other prominent New Atheists he says that one “gets the impression that the bulk of their education in Christian theology consisted of reading Elmer Gantry…supplemented with a viewing of Inherit the Wind.”

Well, gasp. Keep in mind, though, that these are all questions of fact, not metaphysics. If Feser can prove them—I say he can—this is fine. But if not, it does not imply he cannot prove his philosophy.

Warning Note: Many of the arguments to come, especially about the nature of causality, will be unfamiliar to us, and were once to Yours Truly, who was raised in the Scientific Way. If any of my summaries are suspect, defer to the book. It is vastly more probable that I have screwed it up than has Feser.

Warning Prediction: you may think you have discovered a shiny new, never-thought-of-before aha-zinger that guts classical metaphysics, leaving nothing but a greasy spot, but the chance of this is low. Philosophers have been gnawing away at these questions for hundreds to thousands of years. So while you may deliver us an argument which allows you to dismiss classical metaphysics, an argument which none of us here at the humble WMBriggs.com recognize for what it is (stale fish), this does not imply your discovery is unique, persuasive, or valid. The burden is on you to search the authorities, pro and con, and definitively prove your claim.

So today—and today only—let’s argue about whether Feser should or should not have called Dennett an ignoramus, whether Feser’s empirical claims about this or that political question are right or wrong, whether the pugilistic tone had better been left out of the book, etc., etc. Get it out of your system. Get it off your chest. Adopt the ton supérieur and educate us on just what the ad hominem is and why its use is discouraged. Because next time we start in on the arguments themselves, and we can’t be distracted by irrelevancies.

Update To newcomers unused to our ways: swearing, threats, and other idiotic behavior are not allowed. All comments which are abusive will be summarily censored.

Read Part I, Part II, Part III, Part Interlude, Part IV, Part V, Part VI, Part Last.

—————————————————————————————

1. American Catholic Philosophical Quarterly, vol. 85, no. 2 (2011).

Chick-fil-A And Bigotry: One Aspect Of What Marriage Is

“Chick-Fil-A Shattered Sales Records On ‘Chick-Fil-A Appreciation Day’,” so reads the headline at Business Insider.

Many are not so happy about this. In an unintentionally hilarious video (especially when he points to a distant group of young people and comments on them), the CFO of Vante—now ex-CFO—berates a young woman working at Chick-fil-A and calls the company a “hateful” corporation.

An employee who describes herself as a “closeted gay woman” wrote in the Daily Beast, “Customers sang ‘God Bless America’ in the dining room. They vocalized their support for ‘family values’ in a way that made me want to vomit.” And don’t miss the interesting argument of this young woman.

Many others are saying that those who oppose gay “marriage” are “bigots.” This is a false charge and is (at least) based on a misunderstanding of what marriage is.

My dear readers, marriage is not a contract between two people. It is an understanding between two people and society. And not just the society of the United States. Marriage is an understanding between two people and everybody else.

This is easy to see. Except in rare instances, a man and woman who marry do not sign a contract with one another. At best, they fill out a form which informs their local government of the union. And this is only necessary because of certain housekeeping matters, such as tax, visitation rights and the like, that differ by locality the world over, and differ in a locality by time. But the pair are not married in their eyes, or ours, by a civil contract. They commit to one another; they swear an oath; they promise before God; they unite in love.

Consider: when this pair, now married, travels far from their homes, they are not required to prove their marriage by document. The custom and naturalness of the bonding and their word of it are proof enough of the claim of marriage. Documents are only required when the couple want to make themselves subject to the housekeeping matters of the new locality.

When a married couple encounter others, here or abroad, they expect to be treated as a married couple, in virtue of the oath they swore. This is because the couple expects others will honor the understanding that a man and woman who mate are a couple. However, if it becomes known that the two people have not made the marriage oath (“We’re just living together”) then everybody treats this non-pair, now just two separate people, in a different way, even if this treatment is only a subtle change.

What those who scream “Bigot!” are asking is thus not to be allowed to join together in pairs (or in groups, etc.), because that is already allowed. What they are asking is that everybody else, especially here in the United States, but also abroad, change their behavior. This, whatever its other flaws, vacates the common argument that “If you don’t like gay marriage, don’t marry somebody gay.” The change in the definition of marriage is not only a difference in the kind of two people it joins; it must also change the way society (every society) and the couple interact. Supporters are thus not asking for the right to join, but are asking the government to force everybody else in society who does not support gay “marriage” to change their behavior.

Now many in the USA (the majority, as it stands), and certainly the majority of the rest of the world, have a natural law or a religious or other philosophical and theological understanding of what marriage is. This means that those who hold these views, if gay “marriage” is legalized, will be forced either to reject those views, or not to voice them, or not to act on them in certain situations. Of course, those who actually reject their philosophy will be small in number. The majority will continue to hold their view. Legalized gay “marriage” may force these folks to change their bookkeeping behavior, but it cannot change their fundamental behavior.

To these people, a document from the government does not make a marriage, and they will not (at least internally) treat it as such, no matter how much this is desired. The victory won in courts will not translate to a moral victory. It will also not translate to all those other places in the world which will continue to hold to tradition.

Question About Non-Normal Distributions

Thanks to everybody who sent in links, story tips, and suggestions. Because of my recent travel and pressures of work, I’m (again) way behind in answering these. I do appreciate your taking the trouble to send these in, but sometimes it takes me quite a while to get to them. I need a secretary!

First, I enjoy your site. I have a technical education (engineering) but was never required to develop an in-depth understanding of statistics.

I tend to be a natural skeptic of almost all things. One of my “hobbies” is following “bad science”. It seems that this is more common than most people realize, especially in medicine, economics, psychology, and sociology. (All systems that are non-linear and controlled by large numbers of variables.) I think climate falls into this category.

I don’t expect a “personal” response to this, but perhaps you could address it on your site someday.

I once read a story where a noted hydrologist, who was being honored at MIT, was summarizing some of his research, and he mentioned this: “precipitation is not a normal distribution”. It has fat tails. (That may be why we always complain that it is raining too much or too little—because rainfall is seldom average.)

My question is this: when a phenomenon is not a normal distribution, but is assumed to be so, how could this affect the analysis?

Precipitation does not “have” a normal distribution. Temperature does not “have” a normal distribution. No thing “has” a normal distribution. Thus it is always a mistake to say, for example, “precipitation is normally distributed,” just as it is a mistake to say “temperature is normally distributed.” It is always wrong to say “X is normally distributed” where X is some observable thing.

What we really have are actual values of precipitation, actual values of temperature, actual measurements of some X. Now, we can go back in time and collect these actual values, say for precip, and plot these. Some of these values will be low, more will be in some middle range, and a few will be high. A histogram of these values might even look vaguely “bell-shaped”.

But no matter how close this histogram of actual values resembles the curve of a normal distribution, precipitation is still not normally distributed. Nothing is.
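
To see the point concretely, here is a minimal sketch (my illustration, not anything from the original exchange): simulated, skewed, always-positive “precipitation” values can yield a vaguely bell-shaped histogram, yet a fitted normal still gets the tails wrong.

```python
# A sketch, assuming simulated gamma "precipitation"; not real rainfall data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
precip = rng.gamma(shape=2.0, scale=3.0, size=100_000)  # skewed, strictly positive

mu, sigma = precip.mean(), precip.std()
threshold = mu + 2 * sigma

# The fitted normal says about 2.3% of values should exceed mu + 2 sigma...
print("normal model P(X > mu + 2 sigma):", round(norm.sf(2), 4))
# ...but the skewed values exceed it roughly twice as often.
print("observed frequency:              ", round((precip > threshold).mean(), 4))
```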

What is proper to say, and must be understood before we can tackle your main question, is that our uncertainty in precipitation is quantified by a normal distribution. Saying instead, and wrongly, that precipitation is normally distributed leads to the mortal sin of reification. This is when we substitute a model for reality, and come to believe in the unreality more than in the truth.

Normal distributions can be used to model our uncertainty in precipitation. To the extent the predictions of this normal model are accurate, they can be useful. But in no sense is the model—this uncertainty model—the reality.

Now it will often be the case when quantifying our uncertainty in some X with a normal that the predictions are not useful, especially for large or small values of the X. For example, the normal model may say there is a 5% chance that X will be larger than Y, where Y is some large number that takes our fancy. But if we look back at these predictions we see that Y or larger occurs (for example) 10% of the time. This means the normal model is under-predicting the chance of large values.

There are other models of uncertainty for X we can use, perhaps an extreme value distribution (EVD). The EVD model may say that there is a 9% chance that X will be larger than Y. To the extent that these predictions matter to you—perhaps you are betting on stocks or making other decisions based on them—you’d rather go with a model which better represents the actual uncertainty. But it would be just as wrong to say that X is EV distributed.
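
A sketch of the comparison (my own toy example; the data and the threshold Y are invented): score the same tail event under a normal model and under a fitted extreme-value model, then check both against the observed frequency.

```python
# A sketch with invented data: yearly maxima drawn from a Gumbel distribution.
import numpy as np
from scipy.stats import norm, genextreme

rng = np.random.default_rng(1)
x = rng.gumbel(loc=10.0, scale=3.0, size=50_000)  # stand-in for observed maxima

y = 25.0  # "some large number that takes our fancy"
p_normal = norm.sf(y, loc=x.mean(), scale=x.std())  # normal model's tail chance
shape, loc, scale = genextreme.fit(x)               # fit a GEV (the EVD family)
p_gev = genextreme.sf(y, shape, loc=loc, scale=scale)
p_obs = (x > y).mean()                              # what actually happened

print(f"normal: {p_normal:.4f}  GEV: {p_gev:.4f}  observed: {p_obs:.4f}")
# The normal badly under-predicts the big values; the GEV tracks them.
```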

The central limit theorem (there are many versions) says that certain functions of X will in the long run “go” or converge to a normal distribution. Two things. One: it is still only uncertainty in these functions which converges to normal. Two: we recall Keynes who rightly said “in the long run we shall all be dead.”
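
A sketch of point one (simulated, with an exponential standing in for a skewed X): the raw values stay skewed, but the distribution of their means heads toward normal.

```python
# A sketch: the *means* of skewed draws, not the draws, approach normality.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(7)
draws = rng.exponential(scale=1.0, size=(100_000, 30))  # heavily skewed X

means = draws.mean(axis=1)  # a function of X: the sample mean of 30 draws
print("skewness of raw values:", round(skew(draws.ravel()), 2))  # about 2
print("skewness of the means: ", round(skew(means), 2))  # about 0.37, shrinking with n
```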

More Proof Music Is Growing Worse

New proof (which wasn’t really needed) that popular music has, as has long been claimed, been growing worse has arrived thanks to the diligent work of Joan Serrà and his colleagues in the Nature: Scientific Reports paper, “Measuring the Evolution of Contemporary Western Popular Music.” From the abstract:

[W]e prove important changes or trends related to the restriction of pitch transitions, the homogenization of the timbral palette, and the growing loudness levels.

The central results could not have been summarized better than in the Australian, which correctly wrote:

OLD fogeys have long proclaimed it, parents have long suspected it, and ageing rockers have long feared that even thinking it would turn them into all they used to despise.

But it seems that believing today’s music is samey, boring and, well, just too loud does not necessarily make you a miserable reactionary. Rather, it is the scientific truth.

Before continuing, let’s snap our minds back to May of 2010, when Yours Truly posited a theory of Musical Badness.

Musical Badness (MB) quantified is this: the proportion of the time a length of music is devoted to repetitiveness.

Then in September of last year, Yours Truly and his Number Two Son computed one practical measure of Musical Badness, best summarized in this picture:

For the Billboard number one song of each year, we computed the number of unique words per song from which we formed the ratio of unique words to total number of words. The idea is that—on average—a song that is more repetitive is worse than a song which is more expansive in its use of lyric—or melody, harmony, or rhythm. As we then said,

Of the three songs with the lowest proportion of unique words, two are by the Beatles: 1964’s I Want To Hold Your Hand (21%) and 1968’s Hey Jude (18%), which featured the lyric “na na na, na na na” sung 40 times. Simple to digest, no? The other worst offender was a song called Too Close by Next in 1998 (18%), which featured the subtle refrain:

Baby when we’re grinding
I get so excited
Ooh, how I like it
I try but I can’t fight it
Oh, you’re dancing real close
Cuz it’s real, real slow
You’re making it hard for me
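
For the curious, a minimal sketch (mine, using a hypothetical snippet of lyric) of the unique-words measure described above:

```python
# A sketch of the unique-word ratio: unique words / total words,
# with lower values meaning a more repetitive, "worse" song.
import re

def unique_word_ratio(lyrics: str) -> float:
    words = re.findall(r"[a-z']+", lyrics.lower())
    return len(set(words)) / len(words)

# Hypothetical snippet: 3 unique words out of 8 total.
print(unique_word_ratio("Na na na, na na na, hey Jude"))  # 0.375
```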

(Incidentally, see also proof that it is global warming which causes musical badness.)

Return to the present, where I am delighted to report that the new work from Spain confirms one aspect of the Musical Badness measure, the growing simplicity, i.e. repetitiveness, of popular music. The Australian quotes study co-author Martin Haro, who said “The complexity of the pitch transition – chords and melodies – is simplified over the years…Right now, music is full of these simple transitions. In the Fifties, new chords were tried and we were more experimental…[music today] is less an artistic expression and more a commercial product. Old music was more expressive, more experimental.”

For their work, they used a dataset which included “the year annotations and audio descriptions of 464,411 distinct music recordings (from 1955 to 2010)” in genres “rock, pop, hip hop, metal, or electronic.” They looked at loudness, pitch, and timbre. The main findings are that:

Yet, we find three important trends in the evolution of musical discourse: the restriction of pitch sequences (with metrics showing less variety in pitch progressions), the homogenization of the timbral palette (with frequent timbres becoming more frequent), and growing average loudness levels (threatening a dynamic richness that has been conserved until today).

The paper is clear and uses simple mathematical and statistical methods. The plots require some expertise in reading distributions, but all are crystalline and unambiguous: pop music has held the same structure over long periods of time, but individual songs are “one-note Johnnies” (with some hip hop offerings, this is literally true). These changes are not some subtle signal hidden where only advanced models can discover it. No. The decline of musical quality is plain.

And easy to place in time: the period of decline began in the later 1960s, which is no surprise to anybody.

Shown here is just one of their plots, proving popular music is growing—on average—louder:

[Figure: Popular music splitting more eardrums than ever.]

The authors say that the “evidence points towards an important degree of conventionalism, in the sense of blockage or no-evolution, in the creation and production of contemporary western popular music.” There is “less variety in pitch transitions, towards a consistent homogenization of the timbral palette, and towards louder and, in the end, potentially poorer volume dynamics.”

Yes, kids, you heard it right: get off my musical lawn!

Müller lite: Why Every Scientist Needs a Classical Training—Christopher Monckton of Brenchley

His Lordship sent this around to all the usual suspects asking that it be given a wide audience. I’m traveling today.

About 18 months ago, as soon as I heard of Dr. Richard Müller’s Berkeley Earth Temperature project, I sent an email to several skeptical scientists drawing their attention to his statement that he considered his team’s attempt to verify how much “global warming” had occurred since 1750 to be one of the most important pieces of research ever to be conducted in the history of science. This sounded too much like propaganda.

He was posing, I said, as a skeptical scientist; his results would broadly confirm the pre-existing temperature series; when his research ended, he would declare himself to have been converted from scepticism to the belief that merely because the world had warmed the warming must be our fault; and publication of his results would be exploited as a triumphant and final confirmation of the “global warming” orthodoxy.

My doubts about Dr. Müller’s motivation intensified after I met him at the Los Alamos Climate Conference in Santa Fe, New Mexico, late last year. We lunched. He was visibly disappointed when I said that I was happy to accept the official temperature record, at least for the sake of argument. And he subsequently seemed uninterested in getting to grips with the real divide between skeptics and true-believers, which has little to do with the accuracy of the temperature record and much to do with climate sensitivity: the question of how much warming we will cause.

In this reply to Dr. Müller’s much-touted editorials in the New York Times and the San Francisco Chronicle, I shall demonstrate by Classical methods that his principal conclusion “that global warming is real, that the prior estimates of the rate were correct, and that the cause is human” is incorrect a priori.

Yes, the world has warmed since 1750. However, even if one accepts Dr. Müller’s estimate of 1.5 C° warming since then, that rate is indeed well within the natural variability of the climate. Indeed, in the 40 years from 1695 to 1735, Central England (not a bad proxy for global temperature change) warmed naturally at 0.4 C° per decade, seven times faster than the 0.057 C° per decade he finds in the 262 years during which we are supposed to have influenced the weather.
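
The decadal arithmetic checks out (a sketch of my own, merely reproducing his figures):

```python
# A sketch verifying the quoted rates; the inputs are Monckton's figures.
total_warming = 1.5   # C° since 1750, Dr. Muller's estimate
years = 262           # 1750 through the time of writing

rate = total_warming / (years / 10)
print(f"{rate:.3f} C° per decade")               # 0.057
print(f"0.4 / {rate:.3f} = {0.4 / rate:.1f}x")   # about 7 times faster
```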

Natural variability, therefore, is sufficient to explain all of the warming since 1750. No other explanation is necessary. Accordingly, it is not legitimate to claim, as the Berkeley team claim, that in the absence of any other explanation the warming must be attributed to CO2. That claim is an instance of the argumentum ad ignorantiam, the fundamental logical fallacy of argument from ignorance. It is not sound science.

Dr. Müller’s assertion that fluctuations in solar activity are too small to have any effect on the climate is fashionable but erroneous. At the nadir of the Maunder Minimum, the 70-year period from 1645-1715, there were almost no sunspots. During that solar Grand Minimum, the Sun was less active than during any other similar period since the abrupt global warming that ended the last Ice Age 11,400 years ago. The weather was exceptionally cold both sides of the Atlantic: the Hudson in New York and the Thames in London frequently froze over in the winter.

As solar activity recovered at the end of the 70-year period of exceptionally few sunspots, global temperature recovered very rapidly in parallel. Man cannot have had any measurable influence on the rapid warming from 1695-1735. The warming, therefore, was natural. The solar recovery may have been amplified in some manner, perhaps by Dr. Svensmark’s cosmic-ray effect, so as to cause much (if not all) of the rapid natural warming over the period. Or some other natural cause may have been present. But Man cannot have been the cause.

It is worth noting, in passing, that solar activity increased quite rapidly from the Grand Minimum of 1645-1715 to the Grand Maximum of 1925-1995, peaking in 1960, during which the Sun was more active than at almost any other time in the past 11,400 years.

Yes, prior estimates of the warming rate since 1750 may have been correct, but the mere fact of that rate of warming tells us nothing of its cause. There was considerable warming in the Middle Ages: indeed, Dr. Müller concedes that the weather may have been every bit as warm then as now. Yet we were not emitting CO2 in vast quantities then. It necessarily follows that the cause of the medieval warm period must have been natural. Accordingly, there is no reason why much (perhaps nearly all) of the warming since 1750 should not also have been natural.

The greatest error in the Berkeley team’s conclusion is in Dr. Müller’s assertion that the cause of all the warming since 1750 is Man. His stated reason for this conclusion is this: “Our result is based simply on the close agreement between the shape of the observed temperature rise and the known greenhouse gas increase.”

No Classically trained scientist could ever have uttered such a lamentable sentence in good conscience. For Dr. Müller here perpetrates a spectacular instance of the ancient logical fallacy known as the argument from false cause — post hoc, ergo propter hoc. However closely the fluctuations in one dataset appear to follow the fluctuations in another, one cannot legitimately assume that either caused the other.

Dr. Müller admits elsewhere in his editorial that mere correlation between one data series and another does not imply a causative link between them. Nor, one should add, does it tell us which caused which; nor whether all possible natural influences that might have driven both data series simultaneously have been allowed for.

In logic, though correlation does not necessarily imply causation, the absence of correlation necessarily implies absence of causation. During the past 15 years, notwithstanding record increases in our CO2 emissions, there has been no global warming at all. The former, then, cannot have been the principal cause of the latter.

Dr. Müller describes the current stasis in global temperature as “the ‘flattening’ of recent temperature rise that some people claim”. Yet the failure of temperatures to warm at all over the past 15 years is plainly evident in all the principal datasets. If Dr. Müller were as “careful and objective” as he claims, he would surely concede that there has indeed been no global warming for a decade and a half. He would not have described it merely as a phenomenon “that some people claim”.

He is entitled to his opinion that “the ‘flattening’ of recent temperature rise that some people claim” is not statistically significant. However, I beg to differ. Since CO2 emissions have risen at a record rate during the past 15 years, it necessarily follows that the failure of the planet to warm at all over that period points to a natural influence strong enough to overcome — at least temporarily — the rather weak warming effect of the large additional volume of CO2.

What might that natural influence be? Step forward the Pacific Decadal Oscillation, a naturally-occurring warming and cooling cycle. In 1976, the PDO switched suddenly from its cooling to its warming phase. Global temperature rose rapidly till late in 2001, when the PDO switched just as suddenly to its cooling phase, since when there has been no global warming.

The global temperature anomalies since 1850, compiled by the Hadley Centre for Forecasting, show three periods of warming that lasted more than a decade: 1860-1880; 1910-1940; and 1976-2001. These periods coincide with the cyclical warming phases of the PDO. On any view, the first two periods could not have been much influenced by us. Only in the most recent period were our CO2 emissions sufficient to cause some warming, at least in theory.

Yet in all three periods the warming was at the same rate: just 0.17 C° per decade. The warming rate in the most recent of the three periods was, within the margin of statistical error, no greater than in the two earlier periods. This inconvenient truth vitiates Dr. Müller’s conclusion that Man is the sole cause of warming.

Dr. Müller’s claim that his results are “stronger” than those of the IPCC also needs some qualification. If he were right that all of the 1.5 C° warming of the past 250 years was our fault (or, rather, our achievement, for warmer weather is better for life on Earth than cooler), it would follow, unexcitingly, that his estimate of climate sensitivity is more or less identical to the IPCC’s own.

Here is the math. To obtain climate sensitivity, one multiplies the radiative forcing (5.35 times the natural logarithm of a given proportionate increase in CO2 concentration) by some climate-sensitivity parameter. The IPCC’s implicit value of that parameter over the 200 years to 2100, on all six emissions scenarios, is 0.5 C° per Watt per square meter. Dr. Müller’s analysis covers 260 years, so let us call it 0.6. CO2 concentration has risen from 280 ppmv in 1750 to 390 ppmv today. Note also that the IPCC increases the estimated warming from CO2 by 43% to allow for other greenhouse gases. Then the expected warming since 1750, on the assumption that we caused all of it, is simply 1.43 x 0.6 x 5.35 ln(390/280), or 1.5 C°, which is Dr. Müller’s value.
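
A sketch of that computation (all constants are those stated in the paragraph above):

```python
# A sketch reproducing the paragraph's arithmetic for warming since 1750.
import math

markup = 1.43  # IPCC's allowance for non-CO2 greenhouse gases
lam = 0.6      # climate-sensitivity parameter, C° per W/m^2, over 260 years
forcing = 5.35 * math.log(390 / 280)  # W/m^2 from the CO2 rise, 280 -> 390 ppmv

print(f"{markup * lam * forcing:.1f} C°")  # 1.5, matching Dr. Muller's value
```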

In short, the IPCC’s central climate-sensitivity estimates are already predicated on the daring assumption that all of the warming of the past 260 years was caused by us, even though they state no more than that “most” of the warming was our achievement.

What, then, is the implication of Dr. Müller’s result for global warming to 2100? That is the $64,000 question. By that year, the IPCC estimates there will be 710 ppmv CO2 in the atmosphere, compared with 390 today. Its current central estimate, as the average of all six emissions scenarios, is that there will be 2.8 C° warming, of which 0.6 C° is warming that is already in the pipeline as a result of our past sins of emission. That leaves 2.2 C° caused by the greenhouse gases we shall add to the atmosphere this century.

Calculating on the basis of Dr. Müller’s result, and taking 0.4 as a suitable climate-sensitivity parameter for a period as short as 90 years, one would expect 1.43 x 0.4 x 5.35 ln(710/390), or 1.8 C° warming. This result is not “stronger” than that of the IPCC, but just a little weaker. To reach Dr. Müller’s implicit result, one would have to assume that natural influences on their own would have caused a little cooling over the past 260 years. But that assumption would contradict the exceptionally rapid increase in solar activity from Grand Minimum to Grand Maximum over the period.
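
The same formula, with this paragraph’s inputs (again, only a check of the stated arithmetic):

```python
# A sketch of the 2100 projection using the same formula as above.
import math

lam = 0.4  # climate-sensitivity parameter for the shorter 90-year span
print(f"{1.43 * lam * 5.35 * math.log(710 / 390):.1f} C°")  # 1.8
```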

If Dr. Müller had had a Classical training, he would have been made familiar with the dozen logical fallacies first codified by Aristotle 2300 years ago. He would not have attempted to draw any firm scientific conclusions as to causality merely from a superficial and in any event inadequate and uncertain correlation; and still less from a monstrous argumentum ad ignorantiam. Perhaps it is time to ensure that every scientist receives a Classical training, as nearly all of them once did.

