Thursday 28 April 2016

Where to submit your paper?

Today at Eco-Stats we discussed the PLoS ONE paper "Where Should I Send It? Optimizing the Submission Decision Process" which did some mathematical modelling to decide on an optimal approach to choosing the order of journals to send an ecological paper to.  The main factors considered were time to acceptance (a function of time to review and acceptance rate) and impact factor of the journal.  The authors wrote to the editorial boards of all - yes all - ISI-listed journals in ecology, and another six general journals (e.g. Science, PLoS ONE) that publish ecological papers.  They got responses from 61 journals, yielding an interesting dataset available as an appendix to their paper.  I've reformatted it as a comma-delimited file here.
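
If you want to play along at home, something like this will read the data into Python - a minimal sketch, assuming a hypothetical file name and column names for the reformatted CSV (adjust to whatever the actual file uses):

```python
import pandas as pd

# Hypothetical file and column names -- rename to match the actual CSV.
journals = pd.read_csv("journal_submission_data.csv")
print(journals.head())
print(journals[["impact_factor", "days_in_review", "acceptance_rate"]].describe())
```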

The authors derived a couple of metrics (e.g. to maximise expected citations) under a host of assumptions (which made me somewhat uncomfortable, as modelling papers often do).  The endpoint was a set of metrics that could be used to evaluate different publication strategies, e.g. Science then PNAS then Ecology Letters then...
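
To give a flavour of the sort of metric involved, here is a toy sketch of my own (not the paper's actual model): walk down a candidate submission sequence, accumulate time in review, and weight each journal's impact factor by the chance the paper is still unpublished when it gets there, the chance it is accepted, and the time left to accrue citations.

```python
def expected_citations(sequence, horizon_years=5.0):
    """Toy expected-citations score for a submission sequence.

    `sequence` is a list of (acceptance_rate, days_in_review, impact_factor)
    tuples in submission order. Impact factor stands in as a crude proxy
    for citations per year once published. A simplification in the spirit
    of the paper, not a reproduction of its model.
    """
    score, elapsed_years, p_unpublished = 0.0, 0.0, 1.0
    for p_accept, days, impact in sequence:
        elapsed_years += days / 365.0
        years_left = max(horizon_years - elapsed_years, 0.0)
        score += p_unpublished * p_accept * impact * years_left
        p_unpublished *= 1.0 - p_accept
    return score

# Made-up numbers for illustration only (not from the dataset):
glamour_first = [(0.05, 120, 30.0), (0.25, 60, 10.0), (0.6, 90, 3.0)]
modest_first = [(0.25, 60, 10.0), (0.6, 90, 3.0)]
print(expected_citations(glamour_first), expected_citations(modest_first))
```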

Their results I found largely unsurprising - of the journals they had data on, they highlighted a few targets, in particular Ecology Letters, Ecological Monographs and PLoS ONE, which all scored highly as compromises between impact factor and time to publication.  Interestingly, Science didn't come out smelling like roses, although this may be a function of the metrics they used and their implicit assumptions as much as anything else.  They didn't have data on all journals; e.g. I would like to know about Nature, Trends in Ecology and Evolution or Methods in Ecology and Evolution.  They expressed surprise that a pretty good strategy seemed to be submitting to journals in order of impact factor; they had expected a loss of impact due to long times spent in review, since you can end up bouncing around between journals for years.  I think in practice that strategy would do worse than their model suggested, for most of us, because the model didn't incorporate the positive correlation in outcomes when the same paper is submitted to different journals (or, more generally, any measure of how significant a given paper actually is).

Over time I've become more of a statistician than a modeller and so I was especially interested in the data behind this work, and I learnt the most just by looking at the raw data that was tucked away in an appendix.  Here are a few choice graphs which explain the main drivers behind their results.

First, Impact Factor vs time in review:

[Scatterplot: impact factor against mean time in review, one point per journal]

There is a decent negative correlation between impact factor and time in review (r=-0.5).  For those of us who have submitted a few papers to journals at each end of the spectrum this won't be news.  This is presumably one of the reasons why a journal has high impact - faster acceptance has a direct effect on citation metrics, and increases the incentive to submit good papers there.
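
For the record, here's roughly how you could reproduce the plot and the correlation - again assuming my made-up column names for the CSV:

```python
import pandas as pd
import matplotlib.pyplot as plt

journals = pd.read_csv("journal_submission_data.csv")  # assumed column names
r = journals["impact_factor"].corr(journals["days_in_review"])
print(f"Pearson r = {r:.2f}")  # about -0.5 on these data

ax = journals.plot.scatter(x="days_in_review", y="impact_factor")
ax.set_xlabel("Mean time in review (days)")
ax.set_ylabel("Impact factor")
plt.show()
```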

Science is a bit of an outlier on this graph - it has the highest impact factor but a pretty average review time, more than twice as long as Ecology Letters'.  So if you take these numbers at face value (are they measured the same way across journals?), and if 50 days means a lot to you, there is a case for having Ecology Letters as your plan A rather than Science.  Hmmm...

Good journals are towards the top left, and apart from Ecology Letters and Science we can also put Ecological Monographs on the shortlist, because it has a slightly shorter time in review than most journals with similar impact factors.  That said, I wonder how large that difference is relative to sampling error (would it come out to the left of the pack next year too?)...

The next graph is Impact Factor vs acceptance rate:

[Scatterplot: impact factor against acceptance rate, one point per journal]

There is a slightly stronger negative association this time (r=-0.6).  I vaguely remember a bulletin article a few years ago suggesting no relation between impact factor and acceptance rate - that article used a small sample size and made the classic mistake of assuming that no evidence of a relationship means no relationship.  Well, given some more data, there clearly is a relationship.
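
This is also a nice illustration of why you want an interval estimate, not just a p-value.  Recent versions of SciPy (1.9 or later) give you both - column names assumed as before:

```python
import pandas as pd
from scipy import stats

journals = pd.read_csv("journal_submission_data.csv")  # assumed column names
res = stats.pearsonr(journals["impact_factor"], journals["acceptance_rate"])
print(f"r = {res.statistic:.2f}, p = {res.pvalue:.3g}")
# The confidence interval shows how precisely r is estimated; with ~60
# journals it should exclude zero comfortably.
print(res.confidence_interval())
```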

This time we are looking for journals towards the top right.  The journal fitting the bill is by far the biggest outlier, PLoS ONE - a journal with a different editorial policy to most, reviewing largely for technical correctness rather than for novelty.  It ends up with quite a high acceptance rate, and nevertheless manages a pretty high impact factor.  But its impact factor was calculated across all disciplines; what is it when limited to just ecology papers?

So anyway, from looking at the raw data and taking it at face value, what would be your publishing strategy?  A sensible (and relatively common) strategy is to first go for a high impact journal (or two) with a relatively short turnaround time, which Ecology Letters is known for, and then, when you get tired or discouraged by lack of success, or when just trying to squeeze a paper out quickly, PLoS ONE is a good option.  This is pretty much what the paper said using fancy metrics; I guess it is reassuring to get the same sort of answer from eyeballing scatterplots of the raw data.
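
If you wanted to reduce the eyeballing to a single crude number, you could rank journals by something like impact factor times acceptance rate per expected day in review - purely illustrative, and definitely not the paper's metric:

```python
import pandas as pd

journals = pd.read_csv("journal_submission_data.csv")  # assumed column names
# Crude compromise score: expected impact per day spent in review.
journals["score"] = (journals["impact_factor"] * journals["acceptance_rate"]
                     / journals["days_in_review"])
print(journals.sort_values("score", ascending=False).head(10))
```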

There are a few simplifying assumptions in this discussion and in the paper itself - a key one is that all papers are treated as equal, when in fact some are more likely to be accepted than others, and some are more suited to particular journals than others.  There are also assumptions like citations being the be-all and end-all, and the modelling in the original paper further assumed that the citations a paper will get are a function of the journal it is published in alone, and have nothing to do with the quality of the paper itself.  But it's all good fun and there are certainly some lessons to be learnt here.

12 comments:

  1. The link for the raw data file is no longer working - would you be so kind as to re-upload it?
