
Author: Briggs


Artificial photosynthesis

I don’t normally get excited about “Advances in Science” papers, but every now and then it’s fun to let your imagination play.

Thus, I recommend this article about an MIT chemist named Nocera whose team discovered a cheap, non-caustic, room-temperature electrolysis process that resembles photosynthesis.

Then suddenly (and it was quite sudden), his postdoc discovers a catalyst that can produce oxygen from water, and can do it at room temperature, with cheap materials, in neutral water, and without using huge amounts of energy. In other words, he’s found a catalyst that can do one of the steps in photosynthesis the same way plants can do it. This was one of the biggest challenges chemists in the field had been facing, and he’d solved it.
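For reference, the step in question is the oxygen-producing half of water splitting. The textbook half-reactions (ordinary electrochemistry, not taken from the article) are: at one electrode, 2 H2O → O2 + 4 H+ + 4 e− (the hard, oxygen-evolving step the new catalyst handles); at the other, 4 H+ + 4 e− → 2 H2, yielding hydrogen.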

Be sure to read the comments, where the author of the article clarifies one or two things.

They’re still one to three orders of magnitude away from being real-life useful, but, well, it’s pleasant to think of the possibilities.

October 22, 2008 | 8 Comments

Random topics

I use the word “random” in the sense that you did not know what topics I would select today. And I use the word “know” in its logical sense.

On Polling

From Instapundit comes this link to the WizBang blog on polls and polling.

Mr Wiz Bang seeks to reassure his readers that the picture is not as bleak for McCain vis-à-vis the polls as reported in the media.

All of the obvious suspects are here. The polls are commissioned and designed by folks who have a definite stake and desire in the outcome of the election. This of course does not prove that the polls are biased, but it should increase the probability that they are.

The ordering of the questions and the exact questions used are seldom revealed, but are of obvious importance. For example, in one poll Mr WB discovered that questions about McCain came right after people were solicited for their opinion on President Bush.

You never hear about non-response. For example, pollsters ask 100 people, or try to ask 100 people, but only 20 respond. Who? Why? Are these non-responses correlated with the outcome? Usually, and in fact especially in politics, they are.

One thing Mr WB doesn’t mention is lying. People lie like dogs on surveys and polls. Sometimes, the lying is evenly spread out on both sides of Yes and No, but sometimes not. In this election, I suspect the lying is not even.

One place I did not know about is the National Council on Public Polling, a body whose purpose, inter alia1, is to provide ethical guidelines on polling. If you have ever found yourself caring about any poll, then you ought to read their “20 Questions A Journalist Should Ask About Poll Results.”

I won’t bore anybody with the technicalities, but “11. What is the sampling error for the poll results?” is based on classical statistics, and thus the typical “+/- 4 points error” you hear is wrong and should, as an extremely crude rule of thumb, be multiplied by 2. This fudge factor accounts for uncertainty in the true error, not the statistical formula error, which nobody ever really cares about. The true error is this: if a poll says 46% support M, and, in the end, actual voting reveals 58% support for M, then the error is 46% – 58% = -12%.
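For the curious, here is a minimal sketch in Python of what those numbers mean. The sample size of 600 is made up (it is roughly what produces the familiar “+/- 4 points”), and the rule-of-thumb doubling is the crude fudge described above, not an official formula.

import math

def classical_margin_of_error(p_hat, n, z=1.96):
    # Classical 95% margin of error for a simple-random-sample proportion.
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical poll: 46% support for candidate M among 600 respondents.
p_hat, n = 0.46, 600
moe = classical_margin_of_error(p_hat, n)
print(f"Reported sampling error: +/- {100 * moe:.1f} points")   # about 4 points

# Crude rule of thumb from above: double it to allow for non-response,
# question ordering, lying, and the other non-sampling errors.
print(f"Rule-of-thumb error:     +/- {200 * moe:.1f} points")   # about 8 points

# True error: the poll figure minus the eventual vote share.
actual = 0.58
print(f"True error: {100 * (p_hat - actual):.0f} points")        # -12 points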

Their take on “18. What about exit polls?” also does not account for lying. I’ve told this story 100 times, but it bears repeating. John Kerry’s exit polls had him winning, in Manhattan, by about 10 to 1. The actual result was Kerry winning by about 5 to 2. Now, it’s true that Kerry still won the city, but the actual result wasn’t even close to that predicted by the poll. People who live in Manhattan are under a lot of pressure to voice support for Democrats.

Suicides and economic downturns

This idea comes from Dave Schultz, intrepid Chief Editor of Monthly Weather Review (I am one of the many Associate Editors there; Dave, unsolicited, was kind enough to put a link to my book on his page).

Dave pointed to this article from a local New York City paper. It’s a story of how “researchers” continually find surprising and suspicious correlations with economic data.

You might have heard this one in the news last week. A “researcher” named Pettijohn supposedly found that in lean economic times, chubbier models were featured in Playboy magazine. To which I can only say: isn’t tenure a wonderful thing?

Undoubtedly still drooling—I mean reeling—from that stunning finding, Pettijohn went on to discover “that in uncertain times, people tend to prefer songs that are longer, slower, with more meaningful themes.” Which I guess explains how Barry Manilow got to be popular (From Barry: “You get what you get when you go for it”).

As insightful as Pettijohn is, he doesn’t hold a box of tissues to Leo J. Shapiro, chief executive of SAGE, a Chicago-based consulting firm. Says Shapiro: “During a recession, laxatives go up, because people are under tremendous stress, and holding themselves back.”

Now that’s research. “Bob, this recession measures a solid—and I do mean solid—7.4 on the old sphinctometer.”

A guy named Ruhm says that suicides increase when dollars decrease. But the data he uses (pictured in the article) have already been massaged and filtered, and we all know what happens when you smooth time series and then use those smoothed series as inputs to other analyses, right? A sketch of the problem follows.
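For anybody who has forgotten, here is a minimal sketch, using made-up noise series that have nothing to do with Ruhm’s actual data: smooth two independent series with a running mean and the apparent correlation between them balloons.

import numpy as np

rng = np.random.default_rng(0)

def moving_average(x, window):
    # Simple running mean, the kind of smoothing often applied before analysis.
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# Two completely independent noise series stand in for, say, an economic
# index and a suicide count. The numbers are made up for illustration only.
n, window, trials = 200, 12, 2000
raw_corr, smooth_corr = [], []
for _ in range(trials):
    a, b = rng.standard_normal(n), rng.standard_normal(n)
    raw_corr.append(abs(np.corrcoef(a, b)[0, 1]))
    sa, sb = moving_average(a, window), moving_average(b, window)
    smooth_corr.append(abs(np.corrcoef(sa, sb)[0, 1]))

# The smoothed series show far larger spurious correlations, even though the
# underlying data are independent by construction.
print("Mean |correlation|, raw series:     ", round(float(np.mean(raw_corr)), 3))
print("Mean |correlation|, smoothed series:", round(float(np.mean(smooth_corr)), 3))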

_____________________________________
1This phrase was a favorite of my intellectual grandfather, Allan Murphy. Murphy was huge in forecast verification and meteorological statistics, a love which he passed on to Dan Wilks (the mustache is real), who is half my father. Meaning: Murphy was Wilks’s advisor, and Wilks was, in part, mine.

October 21, 2008 | 11 Comments

Science is decided by committee

Scientists still do not appear to understand sufficiently that all earth sciences must contribute evidence toward unveiling the state of our planet in earlier times, and that the truth of the matter can only be reached by combining all this evidence. . . It is only by combining the information furnished by all the earth sciences that we can hope to determine ‘truth’ here, that is to say, to find the picture that sets out all the known facts in the best arrangement and that therefore has the highest degree of probability. Further, we have to be prepared always for the possibility that each new discovery, no matter what science furnishes it, may modify the conclusions we draw.

—Alfred Wegener.

We have all heard Wegener’s sad story. How all of “science” aligned against him and his bizarre, ridiculous, obviously false theory of continental drift. What happened, more or less, and certainly not formally, was that about 100 years ago all geologists got together and voted that Wegener had lost his mind. But, of course, and in fact, it was they who had lost theirs, and from Wegener arose the fascinating study of plate tectonics.

Then there is the Rene Blondlot saga. All of “science” aligned against him, too, and his weird, silly, sad, and pathetic theory of n-rays. What happened was that about 100 years ago all physicists got together and came to the consensus that poor Blondlot had lost his mind. And, of course, he had. From Blondlot came the cautionary tale of how easy it is to fool yourself, even if you happen to be a very smart man. There are no n-rays.

I don’t want to dwell on the point here, but there is no such thing as science. There are things we know and things we don’t. There are more things we think are true, and many more we think are false. And that’s it. But for the purposes of this essay, I’ll, like everybody else, use the word but leave it vague and undefined.

Now, for every Wegener, there is at least one Blondlot and certainly hordes of nameless others, each touting their own personalized, probably false theories-of-everything. What this means is that when some person touts a theory which “science” denies, it is more likely that the theory is false than true. Thus, it is usually rational, for example, to seek the opinion of Dr Smith of State U. on Joe Jones’s new theory of zero-point energy. That is, an appeal to the consensus is rational.

The opinion of a great many learned persons concentrated in one place is a good filter of nonsense and falsity. But this filter is too often applied indiscriminately and too assiduously, and it often blocks truth, particularly if the truth is new and different, or if it goes against a vogue that has taken a tight, but temporary, grip on the academic masses.

Max Planck: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

Thus, in with the new, but only after those with the old are out. It’s often not as bleak as Planck painted it, of course. Some fields, especially when they’re young and unossified, grow by leaps and bounds, and each new idea, no matter how trivial or valid, is celebrated. It’s only after a field has had time to metastasize, that is, be formally recognized by its own separate department—complete with chairs (endowed, naturally), meetings, and new journals—that the filter becomes fully functional.

Academic freedom and its opposite

It might, then, not surprise you to learn that as a professor at a university, the locus of emanations and endless chanting about academic freedom, you cannot teach what you want. You cannot even study what you want. You can still think what you like, but, as we all know by now, you cannot always speak or write it.

What I mean by this is that peer review is not confined to accepting or rejecting journal articles. The sword is also wielded inside departments. Courses, for example, are decided upon by a committee. Many committees, actually. There is a departmental one (or more than one), then usually one at the “school”, or group-department level. There is sometimes another beyond that at the university-wide level.

Each level has to vet and approve any new course so that, among other things, it fits in among other courses, that the material is aligned with the consensus, and so on. These are reasonable goals, but the constrictions lead to tremendous inertia.

The amount of innovation (in teaching method or material) allowed in a course is inversely proportional to its difficulty. Thus, very advanced courses—seminars,1 usually taught just to graduate students and other professors—are wide open. You can teach exactly what you like, require what you like, depart on any tangent. There is little consensus about what to teach or how.

But in 101-type classes, your behavior is strictly prescribed. The book2 is decided for you, the lesson plan is decided for you, and in some places even the quizzes, homeworks, and exams are decided for you. Again, usually this is not entirely bad. The closer the field is to being driven by logic or to being empirically verifiable, the more likely that the basics in that field are known, and that the optimal order and method to teach the introductory concepts have been hammered out. So, for example, all physics students should learn that F = ma, “pre-calculus” students must know that ln(exp(x)) = x, and all chemistry students should know in what way a proton, neutron, and electron are different. Which is to say, there is a consensus about what is known and what is best about the fundamentals. This obviously works against you when the fundamentals have recently changed,3 or the fundamentals are in dispute.

A within-department consensus also exists to ensure that the work professors do is limited. Most do not feel that it is a burden to toe the line; after all, most professors are hired to work on a specific sub-sub-area within a field, and it is this area in which they enjoy working. Academics attend meetings in their specialties and sub-specialties, where they discover which areas of work are popular. This group-think can lead to success of the kind we have certainly seen in many fields, but it also tends to narrow the scope of new work. We have all heard somebody tell us “You’re working on that? Nobody is interested in that.” And we have taken its meaning: work on something popular or your tenure or promotion will be more difficult.

The trend towards specialization has a built-in positive feedback. The more people work in a narrow field, the narrower that area becomes, or the more likely it is that the area splits into two or more areas which also constrict. Again, this is not always bad, as it can lead to rapid progress, but it is clearly not the best model for all people or all fields. If you are hired to do radiative cloud modeling, for example, you are not encouraged to dabble in your neighbor’s boundary layer fluid flow problem. You certainly would receive odd looks if you were to suddenly discover an interest in, say, difference equations or philosophy. You might furtively work in these areas that are “not yours”, and you might even publish in them, but you will not receive any credit for doing so, and as I said above, papers published in other areas might even work against you: “She’s not focused” is a commonly heard phrase. Which is to say, broad curiosity is not rewarded; potentially stultifying specialization is.

On being wrong

The closer a field of study is to politics, or to any area which involves human behavior, the more the consensus acts to keep people in line rather than to promote innovation. Non-consensus ideas are not welcome. Professors holding verboten thoughts are not hired, or, if they are found out, they are let go, or they even leave voluntarily, tired of the process.

Naturally, the more a field agrees on what is actually true, the more strongly the consensus deserves to be sought. The problem, as you might have guessed, is that people in these human-centered fields often feel themselves, as people in more physical fields do not, to be in the grip of enlightenment, and so always advocate the consensus stridently. The reasons for this are obvious and well known. The solution seems to be, because people in areas which involve humans are prone to ill-informed zealousness, that they should all be taught and consistently reminded that they might be wrong. This is the reason, after all, that, on average, people involved in physical areas are humbler: they have seen and verified their failures, and they have seen and acknowledged that their predictions are sometimes a bust.

Not all who work in physical areas are so lucky as to face correction. Today, there are at least two fields in which predictions are being made that either cannot be verified or cannot be verified until quite a lot of time has passed: string theory and climatology. The best these two fields can say is “Observations we have seen are consistent with our theory.” A true, or mostly true, statement. But, and I need hardly point this out, the observations can be equally, or even more, consistent with different theories, even theories which make opposite predictions. This is why making predictions is more important than explaining what we have already seen.

In fields where making predictions is more difficult, again, the human-centered or influenced ones, the local consensus is stronger, and people in those fields look more to the past to find observations which support their views. Evidence is picked over, and the best—in the sense of most agreeable—is kept, the rest discarded or explained away. The more a field is in the grip of explanation, the stronger the consensus will be, and of course the greater the chance that there will be splinter consensuses.

This is contrasted with fields in which (verifiable) prediction is king. There may be—there certainly are—splinter groups, but people can and do swear allegiance to more than one group. The consensus in these groups is more fluid and more likely to change on short notice. If there are many factions—explanations for a phenomenon—the first from which arises a correct prediction is the one that gains the most support. If that explanation can continue to make verifiable predictions, then eventually the explanation is accepted and becomes part of the consensus.

Everybody who agrees with me, raise their hands

So far we have seen that the consensus can work both for and against what is true. This should not be surprising. Research is done by people, and people have foibles. The process, on the whole, and especially in areas which do not involve human behavior, appears to be working. It is a clunky system, but it has shown results and still has promise.

The system breaks, as it always has, when people fall in love with an idea because that idea fits in with other deeply held beliefs, or when people simply want the idea to be true. When these like-minded people form a group and then a consensus, progress is halted, or even set back. These people need more experience with failure—that is, with acknowledging failure. I have no clear idea how to do this.

Naturally everything in this essay is subject to dozens of caveats and exceptions to the rule. The general theme sticks, however: people are generally too sure of themselves.

 

——————————————-
1Incidentally, these seminar courses are often taught “off the books” by the professor. Meaning they do not always count towards the professor’s official teaching load. Credit for students taking seminars is usually limited, too.

2The difference between the 101-books used in these courses is driven more by economics and fad than by fact or material.

3This happened in physics about 60-70 years ago, and is happening in statistics now.