William M. Briggs

Statistician to the Stars!


Free Will The Result Of ‘Background Noise’?

A visual depiction of free will?

Once again the lack of metaphysical training has led some scientists to say an incredibly silly thing: that free will “could be the result of ‘background noise’ in the brain.”

According to the UC Davis press release (which, incidentally, the Independent (linked above) so badly copy-and-pasted that they left out the author’s first name and her rank):

Our ability to make choices — and sometimes mistakes — might arise from random fluctuations in the brain’s background electrical noise, according to a recent study from the Center for Mind and Brain at the University of California, Davis.

“How do we behave independently of cause and effect?” said Jesse Bengson, a postdoctoral researcher at the center and first author on the paper. “This shows how arbitrary states in the brain can influence apparently voluntary decisions.”

The brain has a normal level of “background noise,” Bengson said, as electrical activity patterns fluctuate across the brain. In the new study, decisions could be predicted based on the pattern of brain activity immediately before a decision was made.

So. “Background” noise causes the brain to do what it does. How can “noise” organize to make our actions coherent to us? Answer: it cannot. Drop a handful of pinto beans onto the floor and they may organize into a pattern resembling, say, George Washington’s profile. But the beans don’t know that. We do. There has to be an aware us to interpret the result of the “noise.”

There is no such thing as “noise”, except in the sense that it is a pattern which, to us, makes no sense. Something caused the “noise”, and that efficient cause had a final cause, a purpose or goal, even if we don’t know it. It is thus impossible “noise” can be an explanation for the lack of free will.

Before I started writing this article I walked, nay jogged, back up the hill from College Town to the Statler. (I didn’t want to miss the start of USA v. Portugal). I did not think, not for a moment, where to place my feet, how to shift my weight from step to step, what to look at. Indeed, since I was thinking intently of a book I had read and of the upcoming match, I can barely remember the trip. I know I started at A and ended at B. But how I did it, I don’t know.

I don’t especially care, either. I don’t even care how I breathed, but I know my brain had something to do with it. Can it really be news that our minds can be occupied with other matters while the rest of the body handles itself? Well, the answer must be yes: it is news.

Bengson sat people in front of a computer and asked them to look left or right after a “cue” popped up on a screen. Dull task, much like walking. If you were to hook a scanner to my head and ask me when I decided to put my foot on that spot over there, as I was strolling along, I might say to you, “Well, right now, I guess” as I was doing it.

How goofy would it sound if a scientist then said, “Aha! I was monitoring your brain and the area responsible for walking was activated before you said you made the decision. You thus have no free will!” The only possible response is: “Dude. Too much coffee.” I activated that area of my brain when I decided to walk. That area, being well trained, did its thing so I could concentrate on more interesting things.

Bengson said she monitored the “noise” in the brain a “second or so”—or so?—before the cue appeared, and that this “noise” formed itself into patterns which allowed her to predict, with fair accuracy, which way the person would, a second or so later, say “left” or “right”.

You were given a cue a minute ago and looked left. You might think, “Now what? I answered left twice in a row. I’m getting bored of looking left. Next time I’m looking right.” The cue comes sometime later and you look right, just as you decided. But since the cue came well after you decided, it appears, just like walking, your brain handled the decision.

If the participants didn’t know when the cue was coming (Bengson emphasized this), how could “noise” in the brain activate itself before the cue was shown? Is Bengson claiming precognition? No. She says, “we know people aren’t making the decision in advance.” She doesn’t know this. She’s assuming it, as my example shows.

She’s claiming the noise is causing the choice. But “noise”, like “chance”, cannot be a cause (the states of the brain can be, of course). A coauthor said the noise “inserts a random effect that allows us to be freed from simple cause and effect.” There is no freedom from cause and effect.

Update I’ve been in contact with Bengson and now have the final paper. Stay tuned for a new review.

Final Proof Global Warming Purely Political

The Skeptic went that way!


Regular readers will have expected the next installment in our tour of Summa Contra Gentiles. This will appear next week after my class is over. I may say that the day-after effects of copious wine and sunshine are more than sufficient proof for God’s divine instruction, and therefore it follows God exists.

Have you noticed, really noticed, that the concept of proof has all but disappeared from major media stories on global warming?

Proof-stories are those that say “The science predicted this-and-such, and here is the evidence verifying the prediction.” These were common in the early days of the panic, back in the late ’90s when temperatures cooperated with climate models, but are now as rare as conservatives in Liberal Arts departments.

The reason is simple: there is little in the way of proof that the dire predictions of global warming are true, and much evidence, plain to the senses, that they are false.

Global warming stories still appear with the same frequency as before, but they have changed character. The new stories demonstrate convincingly, if there was any doubt left, that global warming “science” is purely political.

This is because people believe global warming not because of the science but because they desire its “solution.”

Take this example from the San Francisco Chronicle, “Democrats use climate change as wedge issue on Republicans”.

When President Obama stood before students in Southern California a week ago ridiculing those who deny climate science, he wasn’t just road testing a new political strategy to a friendly audience. He was trying to drive a wedge between younger voters and the Republican Party.

Democrats are convinced that climate change is the new same-sex marriage, an issue that is moving irreversibly in their favor…

Wedge issues are those in which one side believes strongly that it has the moral high ground.

In other words, the president and his party want the only acceptable argument to be “I believe”. Anybody who offers calm, logical arguments against the theory of “catastrophic” man-made global warming, such as observing that the models do not make skillful predictions, must be shouted down, shunned, driven from polite society, called evil, labeled as brutes, shamed, fired, de-funded, imprisoned.

(Remember those brave academics who called for the arrest of skeptics? Here, here, and here.)

When a True Believer meets a skeptic he sticks his fingers in his ears, stamps his feet, and screeches “Denier!” (or “Bigot!”) as if this is a knock-down devastating rebuttal. In the True Believer’s favor, a rampaging mob does earn a certain respect.

It’s rather funny in its way. Who with me recalls the academic other-way-of-knowing culture war of the 1990s which gripped the academy? Professors of literature, sociology, education, and the other soft subjects insisted that science had no special cause for respect, that scientific knowledge was just “another way of knowing”, that truth must always be accompanied by scare quotes because “truth” belonged to whoever was in power, etc., etc.

The war culminated in physicist Alan Sokal’s famous hoax, where he managed to get a prestigious other-way-of-knowing peer-reviewed journal to publish an article of scientific gibberish. Embarrassed, relativists sounded the general retreat and thereafter were sure to make themselves seen endorsing science whenever they could; they even adopted scientific techniques for their own research, even when this was clearly nonsensical.

Right after Sokal came global warming. The timing was perfect. Here was a science that accorded perfectly with the politics of the relativists. It was embraced with gusto. “We’re all scientists now!” they said. Global warming meant global, top-down “solutions.” Man-made catastrophic global warming was not “true”, but capital-T TRUE. A clear victory for Science.

Climate scientists were feted and funded, and many understandably gave in to the temptation to be pampered publicly. Adulation is a strong drug and addicting. To keep the supply steady, these scientists regularly ratcheted up their rhetoric, soon passing well beyond the evidence and venturing into wild speculation. Audiences were enraptured. Facts were long forgotten. All that could be seen were “solutions.”

The UN, knowing a good deal when it saw one, got involved. So did those politicians who saw they could use global warming as a “wedge issue” to harm their opponents. Governments which had higher things on their minds ignored or downplayed the movement, except when they could benefit from it. For instance, Uganda “will on Saturday 12th July host the first ever International Climate Change Conference for Children.” Ugandan leaders smell money.

And now, at rock bottom, we have our president acting like an addled college student attending an “awareness raising” rally calling out “Nyah nyah nyah.”

The point is this: the relativists were right all along. They should not have capitulated. Science—I mean its practice and not the facts—is just another way of knowing. Research which gets funded is that which is aligned with the reigning politics. “Truth” is what those in power say it is. Power, even voting, determines “reality.”

The Applicability Of Experiments

The lightning is a premise


Every probability problem has the form Pr(Q|E), where Q is the proposition of interest and E the evidence, premises, or “data” probative (or not) of Q. Change the evidence, change the probability of Q. That is, unless E1 is, given Q, logically equivalent to E2, Pr(Q|E1) will not equal Pr(Q|E2).

For instance, Q = ‘A 6 shows’, E1 = ‘This is an n-state machine with states labeled 1 through n, one state must show, and 6 is one of the states’, and E2 = ‘This is a 2n-state machine, etc.’. Thus Pr(Q|E1) = 1/n and Pr(Q|E2) = 1/(2n).
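A toy calculation, in Python with exact fractions, makes the point concrete. The function and its name are mine, purely illustrative:

```python
from fractions import Fraction

def pr_q_given_e(n_states: int) -> Fraction:
    """Pr(Q | E): probability that the '6' state shows, given only that
    the machine has n_states states, exactly one of which must show."""
    return Fraction(1, n_states)

n = 6
pr_e1 = pr_q_given_e(n)      # E1: an n-state machine
pr_e2 = pr_q_given_e(2 * n)  # E2: a 2n-state machine

# Same Q, different E, different probability.
print(pr_e1, pr_e2)  # 1/6 1/12
```

Nothing about Q itself changed between the two lines; only the evidence did.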

This is why there is no “probability of Q” without evidence; i.e., there is no such thing as implicit probability. If somebody says, as for example Richard Dawkins recently said, “There’s a very interesting reason why a prince could not turn into a frog — it’s statistically too improbable”, he must have assumed that he and his listener agreed on the E, the evidence used in deducing that curious probability. For there could not be a probability of Q = ‘This prince turned into a frog’ without it.

Part of the evidence, incidentally, is tacit understanding of the words and grammar used in Q and E. Presumably Dawkins did not (here) mean by frog “A Frenchman.”

The proviso “given Q” in the first paragraph is interesting. For example, let E1 = ‘This is an n-state machine with states labeled 1-n, and one state must show and this is a state’ and E2 = ‘This is an n-state machine with states labeled 1-n, and one state must show and this is a state and the machine is blue’. Given Q, these two statements of evidence are equivalent. The color of the machine is of no interest in calculating the probability of Q. But given (say) R = ‘The machine is blue’, E1 and E2 are no longer logically equivalent. In this way, the probability equation “works in both directions.”

Since all probability is conditional, every experiment you have ever heard of or ever will hear of is conditional on the premises of that experiment. Data are part of these premises, as are the characteristics of that experiment.

Unfortunately, we must also assume no errors in calculation are made, including cheating. These aren’t as rare as one hopes. But worse are mistakes in reasoning. Above I said that a tacit premise is grammar. A frequent and flagrant error is to abuse grammar, for instance in the epidemiologist fallacy. This is where the researcher says he has calculated the probability of Q given E, but where he actually gives the probability of some R given (usually some modified) E. Click that link for examples.

Let’s take a test case. A set of questions is asked of a handful of college kids at an earnest American university. Are the answers these kids give representative of every other human being, of all time? Consider that the questions are to confirm some psychological, sociological, or biological theory.

One implicit premise is that the participants are human beings. Don’t laugh. When bench scientists conduct “exposure” experiments on “animal models”, this premise, that the results apply to human beings, is much shakier than when a sociology professor asks questions. Another premise is that the kids go to this college. Another is that so many wore dark socks and so many light ones. You can think of many more: indeed, you must. Are these probative of Q or aren’t they?

If you want the results to apply for people outside this university and for people who wear socks of different color, or indeed wear no socks at all, you must say (or hope) no. If being at this university, which is a premise, is probative of Q, then moving to human beings at another university changes the premises, and thus changes the probability of Q. Which direction? Who knows?

There are a host of other attributes about the experiment and the individuals involved, like the sock color, that are all premises. Everything that was there at the experiment is a premise, down to the last quark. The announced probability of Q might be dependent on all of these, but not all of these “shift” Q in any particular direction. Remember that probability is not the language of causality. We’re not saying whether any premises caused Q, only how it relates to the probability of Q.

If we want to say how our experiment is applicable to other people, then it is our duty to clearly identify the premises thought (but usually not proven) to be most probative of Q. For instance, sex of the participants is a premise. Removing it from the list of premises might shift the probability of Q, but if the shift is trivial or minor, the premise can safely be left off the list.

This is only a sketch of this topic. More to come. Enough’s enough on a Saturday.

The Cult Of The Parameter!

Statistics students inducted into the Cult


Carla Antonaccio has written, “The term cult identifies a pattern of ritual behavior in connection with specific objects, within a framework of spatial and temporal coordinates. Rituals would include (but not necessarily be limited to) prayer, sacrifice, votive offerings, competitions, processions and construction of monuments. Some degree of recurrence in place and repetition over time of ritual action is necessary for a cult to be enacted, to be practiced.”

Generally, “researchers” in the so-called soft sciences badly misinterpret, misuse, and misunderstand parameters. The result is massive over-confidence, false beliefs, and yet more grants (see yesterday).

Most statistical models want to say something about some “y”, or outcome of interest, and how it “correlates” to various “x”s, i.e. how varying an “x” changes our uncertainty in the “y”. To work mathematically, the models must needs have unobservable parameters which are partly associated with the “x”s, but which are not of any direct interest to the main question.

The Cult of the Parameter so falls in love with the parameters that participants forget the original goal of the analysis and speak only of the parameters as if the parameters are reality. The Cult thus relies on the Deadly Sin of Reification (see earlier this week).

Here is an example. We’ve done regression too many times to count, but recall that there is some “y”, or outcome of interest, some number the uncertainty of which we want to quantify using a normal distribution.

It would be committing the sin of reification to say that “y” is “normally distributed.” Instead, we say that our uncertainty in “y” is quantified by a normal. Normals have two parameters, a central and a spread (m and s). In regression, the central parameter is modeled as a function of a bunch of “x”s, i.e. other variables. Works like this:

     m = b0 + b1x1 + … + bpxp

If, given each “x”, we knew m and we knew s, then we would know the uncertainty in “y”. That was our stated goal.

Suppose one of the “x”s is presence of male sex. The “b” associated with it, another parameter, in the presence of all the other “x”s, modifies the m, which is, I need remind us, the central parameter of the normal distribution of the “y”—which for a complete understanding also needs the spread parameter s.

Now if I were to announce to the world that male sex causes “y” to change by the value of the b associated with male sex, I would be making a huge mistake.

Both Bayesians and frequentists make this mistake, though frequentists add the mistakes of p-values and misinterpreting confidence intervals, minor transgressions given the devastating over-certainty produced by the Cult.

Given the observed data, the “b” associated with each “x” will vary (usually) around some value, and be said to have some plus-or-minus. For instance, the software will say something like, “‘b’ equals 10.2 plus or minus 2.” The Cult will then say “‘X’ causes ‘y’ to take the value 10.2, plus or minus 2.”

The first error is mistaking probability for causality, and the second is to assume the “b” has anything directly to say about “y”. It does not. We wanted to know how “x” said something about “y”; we did not want to know anything about “b”.

The uncertainty (that plus-or-minus) in the unobservable not-interesting “b” is mistaken for the uncertainty in how “x” changes our uncertainty in “y”. Because of the mathematics of parameters, this true uncertainty (of “x” informing “y”) is guaranteed to be larger than the uncertainty in each “b”.

The amount of error introduced by the Cult depends on the situation. If we quantify the error as the true plus-or-minus of the “y” as “x” varies divided by the plus-or-minus of the “b”, I have seen ratios anywhere from 2 to 20. Meaning, as promised, over-certainty, piled up paper upon peer-reviewed paper, is enormous.

At a guess, I’d say the Cult has influence over the majority of “research”, particularly in the so-called soft-sciences.

Examples? Here’s an old one. I’ll have more once my students this year present their projects. But you could just as easily generate your own.

Homework: read any paper which uses regression. Identify whether or not the Cult influenced the results.


© 2016 William M. Briggs
