(1) An academic article is only as good as the journal in which it is published
(2) A journal is only as good as the academic articles that it publishes
Of the two, (1) seems to me obviously false. Of course researchers and their research can gain kudos from publication in a high-impact-factor journal (Science, Nature, and so on), but it is that "only as good as" that sticks in the craw: factors other than where a piece of research is published contribute to its quality. Naturally there will be a high correlation between some independent assessment of "research quality" and the impact factor of the journal in which it is published, but the correlation will not be perfect: there are doubtless very good papers published, for whatever reason, in lower-impact-factor journals, and doubtless some dross published in the "good" ones.
So now let's examine statement (2). This seems to me to be entirely true, at least in the long term. If the editors of Nature, say, suddenly started to publish low-grade research, then pretty soon fewer people would read it; it would thus have less influence, and its impact factor would tumble.
But Nature, Science, Cell and the like are unlikely to start publishing rubbish, so what am I talking about? Where is this thought experiment going?
It has seemed to me for quite a while that the whole of academic publishing is the wrong way round. Having written up their experiment(s), researchers will usually strive to get the paper into the highest-impact-factor journal they can, given their discipline, topic area, methodology and the like. This 'aim high' strategy sometimes works, but often the paper will be rejected (either before or after review) and the researchers will then move down the "quality" ladder until a journal accepts the paper (or they give up!).
But this seems wrong because, as the answer to the above conundrum suggests, journals ultimately have more to gain from accepting good articles than researchers have from publishing in good journals. So it is the journals that should be soliciting high-quality articles from researchers, rather than researchers going cap in hand to the journals. (Note that I am using "should" in an ideal-world sense here, rather than a real-world one -- more of which later.)
This already happens in some scientific disciplines. I was interested to read here a story concerning the first experiments conducted on the Large Hadron Collider (LHC). The data were collected on Monday 23rd of November 2009, the paper was written up by the following Saturday, and three days after that it was accepted by the European Physical Journal C. Now, it has to be said that this journal's impact factor is modest compared with the roughly 30 of Nature and Science -- read into that what you will -- but the point I am making is that it took just over a week to go from laboratory to "in press": unbelievable if you compare it to psychology (my field), where the same process is likely to take a year or more. (And that is assuming the paper is accepted by the first journal it is sent to, with only minor revisions required by the reviewers.)
How did this happen? Well, physicists can upload papers to a server called arXiv (pronounced "archive", as the X is supposed to be the Greek letter chi), where they are moderated by other physicists, which can lead to the authors revising the manuscript (or sometimes the moderators get in on the act of revising too). In any case, the process is much more rapid than the glacial one of journal peer review. How did the journal get in on the act? Well, the article doesn't say, but the implication is that the editors visited arXiv and decided to publish the paper. Why? Because for a relatively lowly journal, picking up the first data to come out of the LHC will gain it a great deal of publicity, which may, in the long term, lead to greater influence, subscriptions, money and the like.
This is exactly the process we've seen in other industries, such as popular music. In the old days (pre-internet, I mean), a band would scrape together some cash to record a demo tape, which they would send to the A&R departments of various record companies in the hope that one of them would give it a listen, like it and sign them. This may still happen, but many artists and record companies are forgoing the process: the band puts its music on Myspace or wherever and waits for the record companies to find it.
The world has changed, but academic publishing is still in the era of cassette tapes and jiffy bags. Actually, it is worse than that. Pre-Web 2.0, musicians could submit their cassettes to as many record companies as they liked to maximise their chances of getting heard, perhaps hoping to stimulate a bidding war if more than one company was interested in signing them. When you submit an article to a journal, you have to sign a form (electronic, thankfully) stating that your manuscript has not been and will not be submitted to another journal: the journal has exclusive rights to review your paper.
Do we want bidding wars between journals? Won't that harm science in the long run? Maybe, but I suspect the future is a world without journals as we understand them today. Quite a few influential papers are 'published' on arXiv and never end up in a journal. But, you might argue, if these articles haven't been peer reviewed, how can we guarantee academic quality? Well, you can't, of course, but then you never could. I will only refer you to Alan Sokal's paper, which was accepted for publication in a high-profile discourse journal despite being deliberate nonsense, and to this interesting if occasionally borderline unhinged article by physicist Frank J. Tipler, and move on. It seems to me that the community will provide far better checks and balances on academic quality than three anonymous reviewers who (usually) only get one bite at the cherry.
ArXiv isn't perfect, and there have been claims that its administrators have blacklisted some scientists simply for expressing views that run counter to current scientific dogma. But such problems should be relatively easy to solve -- for example, by expanding the number and diversity of moderators, or by having papers submitted anonymously in the first instance, ensuring acceptance is based on the quality of the research rather than on ad hominem judgements. (My feeling is that this also happens in traditional journals, by the way, as does its opposite: low-quality articles gaining acceptance simply because they are authored by someone with a lot of intellectual clout.)
If we as social scientists want our research to be truly current, not two or more years out of date, then we need something like arXiv. Academia needs to catch up with the Web 2.0 revolution.