William M. Briggs

Statistician to the Stars!


Daily Links & Comments

@1 Russell “My Bookie Wook” Brand (shades of Jon “I’m a Comedian” Stewart) edits The New Statesman; quotes Sophocles. Readers die of embarrassment. Link

@2 Did Hitler flee bunker with Eva to Argentina, have two daughters and live to 73? The bizarre theory that’s landed two British authors in a bitter war. Link

@3 Feser, clear as always: A First Without a Second: Understanding Divine Causality. Link

@4 The debate over “transgender” children is playing out across the country and while some say children should be encouraged to be themselves, others warn that no decision should be made before puberty. The hell with biology, you bigot! Link

@5 Pianist Maria João Pires panics as she realises the orchestra has started the wrong concerto. Watch what happens. Don’t skip the video! Link

Please prefix comments with “@X” to indicate which story you’re commenting on. I should hardly need to say that a link does not necessarily imply endorsement.



Blinks As Lie Detectors

The advice on this shirt is at odds with Science.

The holy grail of behavioral psychology is a test which determines if a person is lying. One hasn’t been discovered yet, though not because of a lack of effort.

Pretty much everything has been tried. Bumps on the head, a.k.a. phrenology, bumps in the head, a.k.a. magnetic resonance imaging, galvanic skin response, heart rate, breathing rate, stool color, sweating, nervousness, answers on proprietary questionnaires (they call them “instruments” and charge pretty pennies to view them), torture, e.g. water-boarding (yes, it’s still torture even if you undergo it for the purpose of writing about it). Even asking, “Are you lying?”

I’m guessing about the stool color.

The newest foray into fiction forecasting is eye blinking. So says Frank M. Marchak in his peer-reviewed “Detecting false intent using eye blink measures” in the aptly named journal Frontiers in Psychology.

Marchak assures us that “[s]ince being untruthful regarding both past and future acts includes the attribute of a desire to mislead”. That settled, Marchak collected via on- and off-line ads 54 Montanans as representatives of the entire human race.

In two extremely complicated experiments “of ecological validity”, he tracked his volunteers’ pupil diameter, blink, and eye movement using the “Smart Eye Pro version 5.4 remote eye tracker.” Results?

In both experiments…those with false intent showed a lower blink count difference, fewer numbers of blinks, and shorter maximum blink duration for questions related to their intent compared to questions related to another act for which they had no intent.

That’s a tangle, which only grows thicker when considering the experiment itself, which went like this (I’ll just do the first). People who saw the ad called a number, which was hooked to an answering machine which told volunteers to leave their own number. They were called back later, promised $25, and asked to venture to an “intake office” in some downtown building. Paperwork was filled out. And then—this is beginning to sound like a spy novel—participants were handed a slip of paper which directed them to walk to a second building in which was an “instruction room.”

Some of the volunteers were then instructed to don headphones which informed them that “they were to commit a mock crime by taking a ‘fuse lighter’ from a downstairs office in the building and providing it to a ‘contact’ after completing a credibility assessment test at another location”. They were given a photograph of the “contact.”

The remaining volunteers “heard instructions in which they were to remove a note from the door of a downstairs office and were not provided with the supplementary materials.”

More skulduggery:

In both conditions, participants exited the instruction room, walked around the block, and entered the building through a side door. They then proceeded downstairs to a basement office. Those in the truthful intent condition simply removed a sticker containing numbers from the door. Those in the false intent condition were required to enter the office and find and remove the fuse lighter. The office containing the fuse-lighter was furnished to resemble a working facility.

In both conditions, the participants exited the building through a third door and proceeded approximately 2 blocks to our laboratory to take a credibility assessment examination.

Then it got strange. Participants were debriefed with a fixed set of questions, the timing of which was painstakingly measured. For example, the “neutral” question “Do you live in Bozeman or a surrounding community?” took precisely 3005 milliseconds to hear. Not 3004 milliseconds, not 3006 milliseconds, but 3005 milliseconds because, one presumes, a one millisecond difference could make all the difference.

Besides the neutral queries, there were also out-of-the-blue questions about drugs (e.g. “Do you intend to transport illegal drugs today?” 2610 ms) and some about the bomb (e.g. “Do you plan to provide a fuse lighter to someone today?” 3060 ms).

During this grilling, the eye-scanner counted the number of blinks, their durations, the time between blinks, and the maximum blink time before, during, and for 10 seconds after each question.

Then it got stranger. The statistical manipulation of these numbers was so complex it would have put Merlin to shame. But skip that and consider the “raw” data. Two groups: “false” (fuse bomb) and “truthful” (note only) intent. For example, the average (standard deviation) maximum blink duration for neutral questions was 221.700 ms (136.474 ms) in the bomb group and 231.398 ms (123.879 ms) in the note group. The “8” in “231.398” represents a millionth of a second. There were similar results for drug and “explosives intent” questions.

Same sort of thing for number of blinks. The note group averaged 6.368 blinks (SD 4.438) and the bomb group 4.467 blinks (SD 2.693) on the explosives-intent queries, with similarly fewer average blinks in the bomb group for the other query types. What’s a thousandth of a blink? Never mind.

You can guess what came next: frequentist hypothesis testing (after much manipulation) and wee p-values, all of which “proved” Marchak’s theory to Marchak’s satisfaction.
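For readers who want to see what the garden-variety version of that testing looks like, here is a minimal sketch using the blink-count numbers quoted above. To be clear, this is not Marchak’s analysis (which, again, was far more elaborate), and the even 27-and-27 split of the 54 volunteers is my assumption, not a figure from the paper.

```python
# A sketch only: Welch's two-sample t-test computed from the summary
# statistics quoted above. The even 27/27 split of the 54 volunteers is an
# assumption; the text does not give the per-group counts.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=6.368, std1=4.438, nobs1=27,   # "truthful" (note) group: blink count
    mean2=4.467, std2=2.693, nobs2=27,   # "false" (bomb) group: blink count
    equal_var=False,                     # Welch's correction for unequal variances
)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```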

I have no idea what to make of this study, but I did learn this (I’ll let Marchak have the last word):

The effect of arousal on eye blink behavior has been investigated by Tanaka (1999) who examined the changes in blink rate, amplitude, and duration as a function of arousal level and found differences between a high arousal vigilance task and a low arousal counting task.

——————————————————————————-

I believe I learned of this study from Neuroskeptic @Neuro_Skeptic, but I can’t recall for sure.



Daily Links & Comments

@1 Amen! One should at least suspect [BS] when symbolism and other formal techniques that could easily be dispensed with without loss of rigor and with a great gain in readability are used anyway. Reminder: I have a sensitive spam filter. Link

@2 A plucky amateur dared to question a celebrated psychological finding. He wound up blowing the whole theory wide open. Link

@3 Under the Yeah, Sure heading. How 3,000 year age of empires was recreated by a simple equation: Scientists show how math can predict historical trends with 65% accuracy. Link

@4 Following close behind…Nearly 25 percent of Asia-Pacific men rapists: study. Link

@5 The Folly of Scientism. Link

Please prefix comments with “@X” to indicate which story you’re commenting on. I should hardly need to say that a link does not necessarily imply endorsement.



How To Properly Handle Proxy Time Series Reconstructions

This is all made-up data, so as not to hurt anybody’s feelings. Also, this is a sketch. Everything can’t be done in 700 words.

We are interested in the time series T, which represents values of some thing taken at progressive time points (these needn’t be regular). But we can’t measure T. We can, however, measure a proxy of T, something “correlated” or associated with T, something which might causally be affected by T. What’s a proxy? Something like this:

Figure 1

Imagine the proxy is some chemical measurement inside tree rings, coral reefs, or whatever, and T is temperature. Somehow we have taken simultaneous measurements where both the proxy and T were available. Step one is to model the relationship, which is shown by the over-plotted line (a simple linear regression). Pretty good fit, no?

It ought to be, because this is an oracle model, which is to say the model here is true because I picked it. In real life, the model itself is usually a guess, meaning everything that follows will paint a picture of confidence which evades us in reality.
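For the curious, here is a minimal sketch of that first step in code, with invented numbers of my own standing in for the made-up data behind the figures: pick an oracle proxy-T relationship, simulate a calibration period where both are observed, and fit the regression.

```python
# A sketch with invented numbers: an "oracle" proxy-temperature relationship,
# a calibration period where both proxy and T are observed, and an ordinary
# least-squares fit of T on the proxy (the over-plotted line in Figure 1).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

n_cal = 50                                    # calibration points (both observed)
proxy_cal = rng.uniform(0, 10, n_cal)         # proxy values (tree rings, coral, ...)
true_intercept, true_slope, noise_sd = 12.0, 0.8, 1.5   # the "oracle" model I picked
T_cal = true_intercept + true_slope * proxy_cal + rng.normal(0, noise_sd, n_cal)

X_cal = sm.add_constant(proxy_cal)            # column of ones plus the proxy
fit = sm.OLS(T_cal, X_cal).fit()              # simple linear regression of T on proxy
print(fit.params)                             # estimated intercept and slope
```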

Next thing is to guesstimate T where we have no T but where we have the proxy. Like this (the proxies aren’t shown, but I used the perfect model fitted above to predict T):

Figure 2

Very well, this looks like a reasonable prediction of T given new values of proxy (using the same regression). But every good scientist knows that error bars should accompany any prediction. Here’s what people using time series usually do:

Figure 3

The fuzziness comes from looking at the error, the plus-or-minus, of the relevant parameter inside the model (standard 95% bounds). Looks like a tight prediction, no? Even after taking into account the uncertainty of the parameter, we’re still pretty sure what T was. Right? You guessed it: wrong.

For that, we need this:

Figure 4

The wider bands show the plus-or-minus of T, the prediction interval of the real observable (same bounds). There is no use plotting the uncertainty of the parameter as above, because the parameter doesn’t exist. T exists. We want to know T. This is the best guess of T, our ostensible goal, and not of anything else.

I would like to shout that previous paragraph right up next to your ear until I see you nod.
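Here is a minimal sketch of the difference between the two kinds of bands, repeating the toy calibration fit from the sketch above. The narrow band is the uncertainty of the fitted mean (Figure 3); the wide one is the prediction interval for the observable T (Figure 4). All numbers are invented.

```python
# A self-contained sketch: refit the toy calibration model, then predict T at
# proxy values where no T was observed, comparing the narrow band for the
# fitted mean (Figure 3) with the wide prediction interval for T itself
# (Figure 4). All numbers are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
proxy_cal = rng.uniform(0, 10, 50)                        # calibration proxies
T_cal = 12.0 + 0.8 * proxy_cal + rng.normal(0, 1.5, 50)   # oracle relationship
fit = sm.OLS(T_cal, sm.add_constant(proxy_cal)).fit()

proxy_new = np.linspace(0, 10, 25)                        # proxy-only period
pred = fit.get_prediction(sm.add_constant(proxy_new))
frame = pred.summary_frame(alpha=0.05)                    # 95% bounds

mean_width = frame["mean_ci_upper"] - frame["mean_ci_lower"]   # parameter-only band (Fig. 3)
obs_width = frame["obs_ci_upper"] - frame["obs_ci_lower"]      # prediction interval for T (Fig. 4)

# The band for the observable T is always wider: it adds the scatter of T
# about the regression line to the uncertainty in the estimated parameters.
print("mean-CI width:", mean_width.mean())
print("prediction-interval width:", obs_width.mean())
```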

Notice how much, how dramatically, larger the intervals are? How much less certain we really, truly are? If you noticed that, you have done well. But don’t forget that this picture is too optimistic, because the proxy-T model was known. In real life, we won’t usually know this and so have to widen the final error bars.

By how much? Nobody knows. This is key. If we knew, then we could know the model and we wouldn’t have to widen the bars. But since we do not know the proxy-T model, we do not know how much to push out the envelope. Meaning that if we accept the numerical bounds as accurate just because they are numerical, we will be too certain. Worse, in our quantitative-induced euphoria, we’ll forget that we should be less certain. Not all probability is quantifiable.

Now another thing people like to do is to plot a straight line over the guesstimated T and speak of whether there was a “statistically significant” increase or decrease in T, or they’ll use the line to say “there has been an X average increase in T” or some such thing. This is almost always folly, not least because these judgments eschew the uncertainty we have been at pains to illuminate.

Plus there is no reason in the world to do this unless you expect that straight line to skillfully predict future values of T. How do you know if this is true? Hint: you don’t. After all, something like this can happen:

Figure 5

The new T (over the entire period and not just the time of the proxy) was generated in advance (as were the proxies, which recall have a specific known relationship with T). I picked this one (T is kinda-sorta a “long-memory” time series) because of its vague resemblance to actual time series we have all seen before.
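To see how easily a straight line can look meaningful on such a series, here is a sketch using a plain AR(1) process with heavy persistence as a crude stand-in for that kinda-sorta long-memory series (my simplification, not the series behind Figure 5). It has no trend built in, yet a naive regression of it on time will often hand you a wee p-value.

```python
# A sketch: a trendless but highly persistent AR(1) series -- a crude stand-in
# for the "long-memory" series mentioned above (my simplification) -- with an
# OLS trend line fit to it. Different seeds will often produce a "significant"
# slope in one direction or the other, though no trend was built in.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

n, phi = 300, 0.95                      # series length and AR(1) persistence
T_series = np.zeros(n)
for t in range(1, n):
    T_series[t] = phi * T_series[t - 1] + rng.normal(0, 1.0)

time = np.arange(n)
trend_fit = sm.OLS(T_series, sm.add_constant(time)).fit()
print("slope:", trend_fit.params[1])
print("naive p-value:", trend_fit.pvalues[1])  # ignores the autocorrelation entirely
```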



