The Future of Scientific Publications: Abandon Journals?

[Image: A paper undergoes peer review]
It took three years—or was it four?—for a paper I submitted to the august journal Annals of Statistics to wend its way through peer review. When it came back to me stamped “Rejected: And Don’t You Dare Try To Resubmit!” I originally thought it was a prank because I did not recall writing it.

However, when I saw, contained in the body of this wrongly neglected jewel, a marvelous joke—and not a few typos—the memory of authorship returned to me.

Another journal has only recently called off the bounty hunter dogging my tracks to collect page charges for an article I wrote perhaps fifteen years ago. Page charges, you ask? Why, many journals require the author to pay for the privilege of publishing. Science is the original vanity press. The economics of publishing are complicated: journals supplement what they collect from authors by also charging readers. Publishers do add value, though. Example: they provide the service of unburdening the author of his copyright.

David Banks, a statistician at Duke and an eminence of the American Statistical Association, has asked for comments about the publishing process (it was from him that I stole today’s title):

I fear our current approach to publishing does not serve us well. It takes too long, so our best scientists are driven to other journals in faster disciplines. Refereeing is noisy and often achieves only minor gains. And the median quality of reviews is deteriorating due to journal proliferation, pressure on junior faculty to amass lengthy publication lists, and the slow burnout of conscientious reviewers.

All true. So’s this: “Published research often does not replicate.” For papers that rely on statistics, this is the gravest sin, as regular readers are well aware.

Banks reminds us it was not always thus.

Today’s publication process was essentially invented by Henry Oldenburg, the first corresponding secretary of the Royal Society. He received letters from members describing their research, copied them out in summary form, and mailed those summaries to other members.

It was also the habit of pre-journalified scientists to correspond with one another; letters were passed in lieu of official publication. Yet we admit journals initially were a boon, especially when there was limited reader- and authorship.

Today, though, in statistics alone, there are dozens upon dozens of publications, with more appearing regularly. An advanced computerized statistical model predicts there will be 1.2 journals per statistician by the year 2023—none of which will or need be read. Why the increase? The depressing desire for quantification of the unquantifiable (a particularly dismal trait in statisticians).

It is publish or perish: paper count is the sine qua non of success within the university. Without it, departments would be aswim, unable to decide on promotion or hiring. Remove paper count—the statistic everybody uses even while decrying it—and there will be no objective basis to decide who stays and who is booted.

Trouble is, with an increasing multitude of outlets, anybody can achieve a pleasing sum. This causes other metrics to be sought. Like citation count, or the sounds-like-advertising “impact” factor. Trouble with the latter is that the “best” journals have limited space. And then, because of the charmingly naive view that peer review is a rigorous filter of truth, authors spend just as much time editing the work of others as they do writing their own papers. And the true definition of random is found in considering why papers are accepted or rejected.

Are there alternatives to our stultifying system? Sure. I figured the world deserved to read my jocose but rejected jottings. So instead of enduring the desultory review process again, I stuck the paper on this page and on arXiv. Where, to my delight, it was actually read.

Larry Wasserman (whose books on mathematical statistics are highly readable), commenting on Banks’s plea, agrees, saying:

I think we should abandon journals completely and just use arXiv.

We should eliminate refereeing completely and let the marketplace of ideas decide the value of a paper.

Sounds nice. But how do you get credit for a letter? Or a blog post? Or an arXiv dump? The worry is somebody suffering from latent accountancy will suggest number of downloads or the like—as if that would not be easy to manipulate.

Well, you shouldn’t get numerical credit. Each person’s work, or potential for same, should be judged on its own as a whole. This requires extra effort for reviewing committees, who would actually have to read instead of count papers, but tough.

This ploy isn’t perfect, either. No system is. For example: article popularity is a weak gauge of quality. It’s easy to write many papers quickly in “hot” areas (I once attended a conference where everybody started their talk with “Wavelets are…”). But some topics are more experimental or foundational, areas which may never pay off but which are worthy of investment. And there will be 1,000 arXiv wavelet-neural-net-“big”-data-of-the-day papers to every probability-really-means-this work.

The system of books, blogs, and backups to arXiv is probably the least worst.

Update: Corrected thanks to the ever-watchful eye of JH.

Comments

  1. Ken

    Why not just band together with a group of like-minded authors & start your own “journal(s)” with sound-alike name(s) to the [so-called] “best” journals?

    Then, publication will be timely. And anybody skimming a list of publications might be duly impressed.

    Really… have you been thinking all this time that the proliferation of journals is due to the objective proliferation of ideas & papers (vs. the subjective self-interest of those desiring to acquire a list of published papers)?

  2. Briggs

    Ken,

    To be sure, obtuse and obscure over-specialization are also problematic.

    But if you’re trying to make the case that the main reason for journal proliferation is necessary specialization and that therefore “boutique” journals are still superior to arXiv, you do not succeed. ArXiv can handle sub-sub-sub-sub-sub-sub-sub categories.

  3. Gary

    For scholarship and advancing knowledge, the immediately available digital world is the place for research papers. Whether in official journals or otherwise, the point is ease of retrieval of information. It’s becoming clear that journals add less value to research now than they did before.

    For hiring and promotion purposes, the policy should be that candidates provide their three best works with a CV. That’s enough for any committee to evaluate their worth. ‘Best’ is left up to the candidate, who can demonstrate ability just by the selection of representative work. It really is simple to short-circuit the numbers game.

  4. Speed

    Universities will have to develop a system to objectively evaluate their faculty, one which will involve determining their effectiveness as teachers as well as their real research accomplishments.

    Next … How many years before the physical library disappears?

  5. Speed

    Pragmatically, peer review refers to the work done during the screening of submitted manuscripts and funding applications. This process encourages authors to meet the accepted standards of their discipline and prevents the dissemination of irrelevant findings, unwarranted claims, unacceptable interpretations, and personal views.
    Wikipedia

    Avoiding type II errors while ignoring the possibility of type I errors.

  6. Katie

    Also see Chronicle of Higher Education, October 8, 2012 (subscription required for access): Japanese Fraud Case Highlights Weaknesses in Scientific Publishing
    Yoshitaka Fujii faked nearly 200 medical studies over two decades. How was he able to avoid detection for so long?

  7. Briggs

    Money quote from Katie’s story:

    “His work was almost complete fiction, but he kept saying that it stood up because it had been accepted by so many journals.”

    How many times have we seen that used to justify statistical shenanigans?

  8. John O’Sullivan

    Briggs,

    Great article! Your frustrations are shared among an increasing number of scientists, especially regarding a gatekeeping mentality in some fields, e.g. climatology. For this reason 50 of us last year set up Principia Scientific International (PSI). PSI promises a sympathetic viewing to all submissions, and we are happy to give internal peer review and encourage open peer review by readers via our website as per our PROM process (see http://principia-scientific.org/index.php/why-is-psi-still-developing-a-science-policy.html).

    I congratulate you on helping to raise this issue and would be delighted to discuss it further with you by email.

    Regards,
    John O’Sullivan
    Coordinator: Principia Scientific International

  9. Ken

    Briggs, My entire comment was satire, not to be taken seriously…like The Onion. However, I’m sure if one cared to conduct an audit, they’d find more than a handful of examples that would appear perilously close to, perhaps indistinguishable from, the cynical sarcastic premise I tossed out for fun. K

  10. Ken

    On a serious note: Briggs, instead of complaining all the time, why not address the hard realities driving the situation?

    Consider: when Thomas Jefferson was alive, all the known “research” & facts, as they were then understood, were sufficiently limited that a single person might be expected to “know everything.”

    Since then the population has expanded exponentially, and with it the number of researchers adding to the body of knowledge.

    Some trends are thus unavoidable.

    And, overall, are they really all that bad?

    That’s very debatable.

    For a benchmark, consider the gyrations Benjamin Franklin, printer, diplomat, and ‘experimenter of electricity’, had to go through to get his ideas published & recognized:

    http://www.amazon.com/Bolt-Of-Fate-Benjamin-Franklin/dp/1891620703

    That’s a fun little book about the history of the development of the science of electricity, centered on one of the US’ Founding Fathers. Lots of intrigues there. Included are tidbits about his research being blocked, even stolen outright, by established authorities at the most prestigious institutions…and his efforts to retaliate, which included getting a backstreet publisher to publish his work independently (i.e. fundamentally the exact same issue as publishing on blogs & arXiv)…and even setting up a very dangerous lightning experiment in the hopes that a particular idea-stealing established authority, who didn’t really know what he was doing, would claim precedence, try the experiment, and electrocute himself to death.

    For all your whining, what you’re describing is fundamentally business-as-usual consistent with interpersonal dynamics documented since scientific publications started coming into vogue. Given that humans haven’t fundamentally changed since, there’s no reason whatsoever to entertain the ‘pipe dream’ that they’d behave any differently.

    You haven’t noted any overt manipulations designed to literally kill one’s opposition (that innocent others might be inadvertently duped into trying with deadly result), so, arguably, things are markedly better…or worse [depending on one’s perspective].

    In living in and dealing with Reality, some things just are and are not going to change. The trick is to figure out how to deal with it–figure out how to “build a better mousetrap” instead of just complaining about unnecessary mice infestations.

    Recall the Serenity Prayer:

    God, grant me the serenity to accept the things I cannot change,
    The courage to change the things I can,
    And the wisdom to know the difference.

    Lately, this blog has reverted to a very sophisticated cognitive effort at doing exactly the opposite of that prayer’s simple message: whining about things that cannot (and will not) change, no indication of any effort or means to effect constructive changes anywhere, and a clear evasion of acknowledging [much less confronting] the ‘wisdom’ indicated there.

  11. Andy

    I have thought much the same for a while: every scientist has a blog and publishes there, submits books, etc. Why an old publishing model with poor peer review, which is next to worthless, persists is beyond me. I suspect it may be because academia is backward and not up to date. Do you really need to GO to university? Pay thousands to sit listening to a lecture which has been given unchanged for the past 15 years?

    Times may be a-changing… let’s start with tenure first!

  12. Briggs

    Most so-called research is next to worthless, since its value is defined in self-referential terms: good research in each field is whatever good researchers in that field do (as defined by leading journals and conferences), regardless of any benefit or lack of benefit to the wider community. This self-referential circle means that research in even the hard sciences becomes increasingly political and unrelated to reality.

    From this must-read article.

  13. JH

    Uhm… the journal Annals of Mathematical Statistics ceased to exist in the mid-1970s.

  14. Briggs

    JH,

    That explains it!

  15. Adam Gallon

    Katie
    “Yoshitaka Fujii faked nearly 200 medical studies over two decades. How was he able to avoid detection for so long?”
    Probably the same way that Jan Hendrik Schön got away with his fraud, producing “evidence” that confirms some heartfelt theory!
