February highlights from the world of scientific publishing

Some of what I learned about scientific publishing last month from Twitter: new open access journals, data release debates, paper writing tips, and lots more

New journals

Two important announcements this month, both of open access sister journals to well-established ones.

First, at the AAAS meeting it was announced that Science is to have an online-only open access sister journal, Science Advances, from early 2015. This will be selective (not a megajournal), will publish original research and review articles in science, engineering, technology, mathematics and the social sciences, and will be edited by academic editors. The journal will use a Creative Commons license, which generally allows free reuse, but AAAS hasn’t yet decided whether to allow commercial reuse, according to spokeswoman Ginger Pinholster. The author publishing charge hasn’t yet been announced either.

Second, the Royal Society announced that, in addition to their selective open access journal Open Biology, they will be launching a megajournal, Royal Society Open Science, late in 2014. It will cover the entire range of science and mathematics, will offer open peer review as an option, and will also be edited by academic editors. Its criteria for what it will publish include “all articles which are scientifically sound, leaving any judgement of importance or potential impact to the reader” and “all high quality science including articles which may usually be difficult to publish elsewhere, for example, those that include negative findings”; it thus fits the usual criteria for a megajournal in that it will not select for ‘significance’ or potential impact.

These two announcements show that a publisher without a less selective, open access journal in its stable is now unusual. Publishers are seeing that there is demand for these journals and that they can make money from them. Publishers also see that they can gain a reputation for being friendly to open access by setting up such a journal. It also means that papers rejected by their more selective journals can stay within the publisher (via cascading peer review), which, while saving time for the authors by avoiding the need to start the submission process from scratch, also turns a potential negative for the publisher (editorial time spent on papers that are not published) into a positive (author charges). The AAAS has been particularly slow to join this bandwagon; let’s see whether the strong brand of Science is enough to persuade authors to publish in Science Advances rather than in the increasingly large number of other megajournals.

PLOS data release policy

On 24 February, PLOS posted an updated version of the announcement about data release that they made in December (and which I covered last month). I didn’t pay much attention as the change had already been trailed, but then I had to sit up and take notice because I started seeing posts and tweets strongly criticising the policy. The first to appear was an angry and (in my opinion) over-the-top post by @DrugMonkeyblog entitled “PLoS is letting the inmates run the asylum and this will kill them”.  A more positive view was given by Michigan State University evolutionary geneticist @IanDworkin, and another by New Hampshire genomics researcher Matt MacManes (@PeroMHC). Some problems that the policy could cause small, underfunded labs were pointed out by Mexico-based neuroscience researcher Erin McKiernan (@emckiernan13). The debate got wider, reaching Ars Technica and Reddit – as of 3 March there have been 1045 comments on Reddit!

So what is the big problem? The main objections raised seem to me to fall into six categories:

  1. Some datasets would take too much work to get into a format that others could understand
  2. It isn’t always clear what kind of data should be published with a paper
  3. Some data files are too large to be easily hosted
  4. Others might publish reanalyses that the originators of the data were intending to publish themselves, so the originators would lose the credit for that further research
  5. Some datasets contain confidential information
  6. Some datasets are proprietary

I won’t discuss these issues in detail here, but if you’re interested it’s worth reading the comments on the posts linked above. It does appear (particularly from the update on their 24 February post and the FAQ posted on 28 February) that PLOS is very happy to discuss many of these issues with authors who have concerns, but analyses of proprietary data may have to be published elsewhere from now on.

I tend to agree with those who take a more positive view of the new policy, arguing that data publication will increase reproducibility, help researchers build on each other’s work and deter fraud. In any case, researchers who disagree are free to publish in other journals with less progressive policies. PLOS is a non-profit publisher that says access to research results, immediately and without restriction, has always been at the heart of its mission, so it is being consistent in applying this strict policy.

Writing a paper

Miscellaneous news

  • Science writer @CarlZimmer explained eloquently at the AAAS meeting why open access to research, including open peer review and preprint posting, benefits science journalists and their readers.
  • Impactstory profiles now show the proportion of a researcher’s articles that are open access and give gold, silver and bronze badges, as well as showing how highly accessed, discussed and cited their papers are.
  • A new site has appeared where authors can review their experience with journals: Journalysis. It looks promising but needs reviews before it can become a really useful resource – go add one!
  • An interesting example of post-publication peer review starting on Twitter and continuing in a journal was described by @lakens here and his coauthor @TimSmitsTim here.
  • Cuban researcher Yasset Perez-Riverol (@ypriverol) explained why researchers need Twitter and a professional blog.
  • I realised when looking at an Elsevier journal website that many Elsevier journals now display very informative journal metrics: several years of impact factors, Eigenfactor, SNIP and SJR, plus average times from submission to first decision and from acceptance to publication. An example is here.
  • PeerJ founder @P_Binfield posted a Google Docs list of standalone peer review platforms.

Submission to first decision time

Having written previously about journal acceptance to publication times, it is high time I looked at the other important interval that affects publication speed: the time from submission to first decision. As I explained in the previous post, the time from submission to publication in a peer-reviewed journal can be split into three phases: acceptance to publication (discussed previously), submission to first decision (discussed here), and the time the authors need to revise, which the journal can’t control.

A survey of submission to first decision times

I have trawled through the instructions to authors pages of the journals in the MRC frequently used journal list, which I have used in several previous posts as a handy list of relatively high-impact and well-known biomedical journals. I’ve used the list as downloaded in 2012; journals may have been added to it since. I’ve omitted the review journals, which leaves 96.

From these pages I have tried to find any indication of the actual or intended speed to first decision for each journal. Many journal websites provide no information about average or promised submission to first decision times: for example, no Nature Publishing Group, Lancet, Springer or Oxford University Press journals in this data set provide any.

However, 37 of these 96 journals did provide usable information. I have put this information in a spreadsheet on my website.

Twenty promised a first decision within 28 or 30 days of submission, and 12 others promised 20–25 days. Of the rest, two are particularly fast: Circulation Research (13 days in 2012) and Cellular Microbiology (14 days). One is particularly slow: Molecular and Cellular Biology (4 to 6 weeks, though they may just be more cautious in their promises than other journals). JAMA and Genetics are also relatively slow, at 34 and 35 days respectively. (Note that the links here are to the page that states the time, which is generally the information for authors.)

A few journals promise a particularly fast decision for selected (‘expedited’) papers, but I have only considered the speed promised for all papers here.

I conclude from this analysis that, for relatively high-impact biomedical journals, a first decision within a month of submission is the norm. Anything faster than 3 weeks is fast, and anything slower than 5 weeks is slow.

Newer journals

But what about the newer journals? PeerJ has recently been boasting on its blog about authors who are happy with their fast decision times. The decision times given on this post are 17, 18 and 19 days. These are not necessarily typical of all PeerJ authors, though, and are likely to be biased towards the shorter times, as those whose decisions took longer won’t have tweeted about it and PeerJ won’t have included them in their post.

PLOS One gives no current information on its website about decision times. However, in a comment on a PLOS One blog post in 2009, the then Publisher Pete Binfield stated that “of the 1,520 papers which received a first decision in the second quarter of 2009 (April – June), the mean time from QC completion to first decision was 33.4 days, the median was 30 days and the SD was 18.” He didn’t say how long it took from submission to ‘QC completion’, which is presumably an initial check; I expect this would be only a few days.

Kent Anderson of the Scholarly Kitchen asked last year “Is PLOS ONE Slowing Down?”. That post only looked at the time between the submission and acceptance dates displayed on all published papers, and it included no data on decision dates, so it tells us nothing about decision times. In a series of comments below the post, David Solomon of Michigan State University gives more data, which show that the submission to acceptance time went up only slightly between early 2010 and September 2011.

The star of journals in terms of decision time is undoubtedly Biology Open. It posts the average decision time in the previous month on its front page, and the figure currently given for February 2013 is 8 days. They say they aim to give a first decision within 10 days, and their tweets seem to bear this out: in June 2012 they tweeted that the average decision time in May 2012 had been 6 days, and similarly the time for April 2012 had been 9 days.

Other megajournals vary similarly to ordinary journals. Open Biology reports an average of 24 days, Cell Reports aims for 21 days, and G3 and Scientific Reports aim for 30 days. Springer Plus, the BMC series, the Frontiers journals, BMJ Open and FEBS Open Bio provided no information, though all boast of being fast.

What affects review speed?

If newer journals are faster, why might that be? One possible reason is that as the number of submitted papers goes up, the number of editors doesn’t always go up quickly enough, so the editors get overworked – whereas when a journal is new the number of papers to handle per editor may be lower.

It is important to remember that the speed of review is mainly down to the reviewers, as Andy Farke pointed out in a recent PLOS blog post. Editors can affect this by setting deadlines and chasing late reviewers, but they only have a limited amount of control over when reviewers send their reports.

But given this limitation, there could be reasons for variations in the average speed of review between journals. Reviewers might be excited by the prospect of reviewing for newer journals, so they are more likely to be fast. This could equally be true for the highest impact journals, of course, and also for open access journals if the reviewer is an open access fan. Reviewer enthusiasm not only means that those who have agreed send their reports in more quickly, but also that it is easier to get someone to agree to review in the first place. As Bob O’Hara pointed out in a comment on Andy Farke’s post, “If lots of people decline, you’re not going to have a short review time”.

A logical conclusion from this might be that the best way for a journal to speed up its time to first decision would be to cultivate enthusiasm for the journal among the pool of potential reviewers. Building a community around the journal, using social media, conferences, mascots or even free gifts, might help. PeerJ seem to be aiming to build such a community with their membership scheme, not to mention their active Twitter presence and their monkey mascot. Biology Open’s speed might be related to its sponsorship of meetings and its aim to “reduce reviewer fatigue in the community”.

Another less positive possible reason for shorter review times could be that reviewers are not being careful enough. This hypothesis was tested and refuted by the editors of Acta Neuropathologica in a 2008 editorial. (Incidentally, this journal had an average time from submission to first decision of around 17 days between 2005 and 2007, which is pretty fast.) The editorial says “Because in this journal all reviews are rated from 0 (worst) to 100 (best), we plotted speed versus quality. As reflected in Fig. 1, there is no indication that review time is related to the quality of a review.”

Your experience

I would love to find (or even do) some research into the actual submission to first decision times between different journals. Unfortunately that would mean getting the data from each publisher, and it might be difficult to persuade them to release it. (And I don’t have time to do this, alas.) Does anyone know of any research on this?

And have you experienced particularly fast or slow peer review at a particular journal? Are you a journal editor who can tell us about the actual submission to first decision times in your journal? Or do you have other theories for why some journals are quicker than others in this respect?

Crowdsourcing information about journals

Crowdsourced surveys of the experience of authors with journals are useful, but I have found only a few. For now, I propose a simpler survey of information gleaned from journal websites.

I was recently alerted by @melchivers (via @thesiswhisperer) to the existence of a blog by SUNY philosopher Andrew Cullison (@andycullison) that includes a set of journal surveys for the field. As Cullison explains in an overview post, the surveys consist of Google Docs spreadsheets, one for each journal, and a form interface that academics fill in with data on their experience of submitting to that journal. The information requested includes:

  • the time taken for initial review
  • the initial verdict of the journal (acceptance, rejection, revise and resubmit, conditional acceptance, withdrawn)
  • the number of reviewers whose comments were provided
  • an assessment of the quality of the reviewers’ comments
  • the final verdict if the paper was revised
  • the time from acceptance to publication
  • an overall rating of the experience with the editors
  • some basic demographic data

This survey covers 180 journals in philosophy. The data are collated and various statistics are calculated, such as the average review time, the average acceptance to publication time and the acceptance rate. Here are a couple of examples: the British Journal for the Philosophy of Science and Philosophy of Science.
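To make the collation step concrete, here is a minimal sketch of how summary statistics like these could be computed from crowdsourced responses for one journal. The field names and numbers are hypothetical, not Cullison’s actual spreadsheet schema.

```python
# A minimal sketch of collating crowdsourced responses for one journal
# into summary statistics. Field names and numbers are hypothetical.

responses = [
    {"initial_review_months": 3, "verdict": "reject"},
    {"initial_review_months": 6, "verdict": "revise and resubmit"},
    {"initial_review_months": 2, "verdict": "accept"},
]

avg_review = sum(r["initial_review_months"] for r in responses) / len(responses)
acceptance_rate = sum(r["verdict"] == "accept" for r in responses) / len(responses)

print(round(avg_review, 1))       # 3.7 months on average to first decision
print(round(acceptance_rate, 2))  # 0.33 - a third of reported submissions accepted
```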

This kind of survey could be a valuable resource for authors in a particular field who are trying to choose a journal. Because the surveys are crowdsourced, they do not rely on only one or a few people to gather data. They also provide real data on how fast journals are in practice, which might differ from the statistics or promises provided on journal websites. However, they have limitations: as pointed out in comments below one of Cullison’s posts, they suffer from reporting bias. This matters given that many of the journals surveyed have fewer than ten responses.

I haven’t seen any surveys like this in any other field of academia, and certainly none in biology or medicine. I would be very interested to hear if others have seen any. In biology a similar survey would probably only be useful if divided up into smaller fields, such as plant cell biology or cardiovascular medicine. Or it could focus only on the general journals that cover large areas of science, biology or medicine.

A simpler journal survey

Alternatively, or as a first step towards full surveys of journals in biomedicine, a crowdsourced survey of the information presented on journal websites could be useful. This could include information such as the promised submission to first decision time and acceptance to publication time, licensing details (copyright, Creative Commons and so on), charges, article types and length limits. This would involve only one small dataset per journal, which could fit on a single line of a spreadsheet rather than data for individual papers, so would be more manageable than Cullison’s surveys.

I have made a start on such a survey, and you can find it on Google Docs here. I have used the same set of 98 journals, derived from the UK Medical Research Council list of journals popular with MRC authors, that I used for my open access charges spreadsheet. For every journal, the spreadsheet now contains the name of the publisher, the main journal URL, the URL for the instructions for authors, whether the entire journal is open access, and whether there is an open access option. There are also columns for what the website says about acceptance to publication time, whether the accepted, unedited manuscript is published online, and what the website says about submission to first decision time. I have filled in some of these fields but haven’t yet checked all the websites for all this information.

The spreadsheet is editable by anyone. I realise that this risks someone messing up the data or adding spam text. For the columns that I don’t want you to change, I have included a partial safeguard: these columns are pulled in from a hidden, locked sheet of the spreadsheet. Please try not to delete data in any cells – just add data in empty cells. If you have any other suggestions for how to allow information to be added but not deleted, or otherwise to avoid problems, please add a comment below.

Now it’s your turn

Would you like to contribute information to this survey? If so, please go ahead and edit the spreadsheet.

If you could publicise it that would be great too.

And do you have any comments on this process, suggestions for improvement and so on?

Other questions

Have you used Cullison’s surveys and found them useful (or less useful)? Have you come across any surveys like the philosophy one for other fields? Or like my survey?

Journal metrics

Last week a new measure of the impact of a journal was launched: Google Scholar Metrics. So it seems like a good time to review the various metrics available for journals.

Below I summarise six measures of journal impact: the impact factor (IF), the 5-year IF, Google Scholar Metrics, the SCImago Journal Rank (SJR), the Source Normalized Impact per Paper (SNIP) and the Excellence in Research for Australia (ERA) ranking. As part of the research for this post I have looked up these metrics (except the 5-year IF) for a sample of 97 of the higher-impact biomedical journals and put them into a Google Docs spreadsheet, which can be viewed here (or on Google Docs directly here).

Most researchers get to know the IF fairly quickly when they start to read journals. So you probably know that an IF of 30 is high and that thousands of journals (the long tail) have IFs below 1. But fewer people have this kind of familiarity with the other metrics. So I have tried to estimate what range of numbers counts as ‘high impact’ for each metric. ‘High’ here means in the top 33% of my sample of 97 journals (already a high-impact sample).

To summarise, the number that counts as high for each metric is:

  • IF: 14
  • 5-year IF: about 15
  • Google Scholar Metrics: 101
  • SJR: 0.53
  • SNIP: 3.85
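For anyone curious how such a cut-off is derived, here is a minimal sketch of picking out the top-tertile threshold from a sample of metric values. The function name and numbers are made up for illustration; the real thresholds above come from my 97-journal spreadsheet.

```python
# Estimate the value separating the top third of a sample of metric values.
# The sample values here are invented impact factors, purely for illustration.

def top_tertile_threshold(values):
    """Value at or above which roughly a third of the sample lies."""
    ranked = sorted(values)
    cut = round(len(ranked) * 2 / 3)  # index of roughly the 67th percentile
    return ranked[cut]

sample_ifs = [2.1, 3.4, 4.0, 5.2, 6.8, 7.5, 9.9, 14.2, 31.0]
print(top_tertile_threshold(sample_ifs))  # 9.9 - journals at or above this are in the top third
```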

Note that I am only talking about journal metrics, not metrics for assessing articles or researchers. As always, anyone using these figures should make sure they are using them only to judge journals, not individual papers or their authors (as emphasised by a European Association of Science Editors statement in 2007). Also remember that citations can be gamed by editors (see my previous post on the subject or a recent Scholarly Kitchen post on a citation cartel for more details).

Impact factor

The IF is provided by Thomson Reuters as part of their Journal Citation Reports, which covers ‘more than 10,100 journals from over 2,600 publishers in approximately 238 disciplines from 84 countries’.

It is calculated by dividing the number of citations received in a given year by articles that the journal published in the previous two years by the number of citable articles the journal published in those two years.
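As a minimal sketch of that arithmetic, with made-up numbers (the real calculation uses Thomson Reuters’ citation database):

```python
# Two-year impact factor for 2011, with made-up numbers.
citations_in_2011_to_2009_2010_papers = 1200  # citations counted in the IF year
papers_published_2009_2010 = 400              # citable items from the two prior years

impact_factor_2011 = citations_in_2011_to_2009_2010_papers / papers_published_2009_2010
print(impact_factor_2011)  # 3.0
```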

What counts as big: the highest-ranked journals have IFs over about 14; middle-ranking journals have numbers between 3 and 14; many low-ranked journals have numbers around 1.

Five-year impact factor

This is similar to the standard two-year IF except that citations and articles are counted over the previous five years rather than two. It has been published only since 2007. This metric has advantages in slower-moving fields, where papers continue to gather citations well beyond a year or two after publication.

It is difficult to find lists of five-year IFs online, although some journals display them on their home pages. I did, however, find a study in the journal Cybermetrics that showed it is generally about 1.05 times the size of the two-year IF.

What counts as big: about 15, using this ratio.

Google Scholar Metrics

These were introduced on 1 April 2012 and are based on the Google Scholar database, which includes more journals and other publications than the database used for the IF. They are based on the h-index, which is defined on the Google Scholar Metrics page as follows:

The h-index of a publication is the largest number h such that at least h articles in that publication were cited at least h times each. For example, a publication with five articles cited by, respectively, 17, 9, 6, 3, and 2, has the h-index of 3.

This is a rather difficult concept to get your head around (at least it is for me). Basically, the h-index cannot be bigger than the number of papers the journal has published, and it cannot be bigger than the number of citations to its most cited paper. To find it, rank the articles by citation count: the h-index is the largest rank at which the citation count is still at least as big as the rank. In the example above, the third most cited article has 6 citations (at least 3), but the fourth has only 3 citations (fewer than 4), so the h-index is 3.

Google Scholar Metrics extends this as follows:

The h-core of a publication is a set of top cited h articles from the publication. These are the articles that the h-index is based on. For example, the publication above has the h-core with three articles, those cited by 17, 9, and 6.

The h-median of a publication is the median of the citation counts in its h-core. For example, the h-median of the publication above is 9. The h-median is a measure of the distribution of citations to the h-core articles.

Finally, the h5-index, h5-core, and h5-median of a publication are, respectively, the h-index, h-core, and h-median of only those of its articles that were published in the last five complete calendar years.

So the main metric is the h5-index, which measures citations to a journal over the last five complete calendar years (2007–2011 for the April 2012 release).
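To make these definitions concrete, here is a short sketch that reproduces the worked example from the quoted definitions (articles cited 17, 9, 6, 3 and 2 times). The function names are my own, not Google’s; restricting the input to articles from the last five complete calendar years would give the h5 variants.

```python
# Computing the h-index, h-core and h-median for the worked example
# in the quoted definitions (articles cited 17, 9, 6, 3 and 2 times).

def h_index(citations):
    """Largest h such that at least h articles have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def h_core(citations):
    """The h most-cited articles (the ones the h-index is based on)."""
    ranked = sorted(citations, reverse=True)
    return ranked[:h_index(citations)]

def h_median(citations):
    """Median citation count within the h-core."""
    core = sorted(h_core(citations))
    mid = len(core) // 2
    return core[mid] if len(core) % 2 else (core[mid - 1] + core[mid]) / 2

cites = [17, 9, 6, 3, 2]
print(h_index(cites))   # 3
print(h_core(cites))    # [17, 9, 6]
print(h_median(cites))  # 9
```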

Note that this metric doesn’t involve any division by the number of papers published by the journal (unlike the other metrics discussed here). This means that journals that publish more papers will have proportionally larger values in Google Scholar Metrics than with other metrics.

What counts as big: the highest-ranked journals have h5-indexes over about 101; many journals seem to have numbers under 50.

SCImago Journal Rank (SJR)

The SCImago Journal Rank (SJR) is a metric produced by the SCImago research group using data from Scopus (part of Elsevier). It is calculated as follows:

It expresses the average number of weighted citations received in the selected year by the documents published in the selected journal in the three previous years, — i.e. weighted citations received in year X to documents published in the journal in years X-1, X-2 and X-3.

So it is also a citation measure, similar to a three-year impact factor, but each citation is weighted according to the journal it comes from. Further information is here (pdf). The weighting depends on how many citations each journal receives: if journal A is cited a lot overall and journal B is not cited as much, then a citation of a paper in journal C by journal A is given more weight in the calculation than a citation of journal C by journal B.
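Here is a deliberately simplified sketch of that weighting idea. It is not the real SJR algorithm, which is iterative and PageRank-like over the whole Scopus citation network; it only illustrates that a citation from a heavily cited journal counts for more than one from a little-cited journal. All journal names and numbers are made up.

```python
# A deliberately over-simplified illustration of prestige-weighted citations.
# NOT the real SJR calculation; all journal names and numbers are made up.

# Crude "prestige" for each citing journal: its own total citations, normalised
prestige = {"Journal A": 50000, "Journal B": 500}
total = sum(prestige.values())
weight = {journal: cites / total for journal, cites in prestige.items()}

# Citations received by Journal C, broken down by citing journal
citations_to_C = {"Journal A": 10, "Journal B": 10}

contributions = {j: round(weight[j] * n, 2) for j, n in citations_to_C.items()}
print(contributions)
# {'Journal A': 9.9, 'Journal B': 0.1}: the ten citations from the highly
# cited Journal A carry roughly 100x the weight of the ten from Journal B
```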

What counts as big: the highest-ranked journals have SJRs over about 3; many journals seem to have numbers under 0.5.

(Note that on the SCImago website the decimal point in the SJR is given as a comma in some places, so it looks as if the top journals have SJRs of over 1000 (1,000). On the spreadsheets that are freely downloadable from the same site or from the ‘Journal Metrics’ website (also from Elsevier) the metrics are given as 1.000 etc, so I think this is the correct version.)

Source-Normalized Impact per Paper (SNIP)

The Source-Normalized Impact per Paper (SNIP) is defined as the ratio of a journal’s citation count per paper and the citation potential in its subject field. It is designed to aid comparisons between journals in fields with different patterns of citations. It is calculated as follows:

  • Raw impact per paper (RIP): the number of citations in the year of analysis to the journal’s papers published in the three preceding years, divided by the number of the journal’s papers in those three years
  • Database citation potential in the journal’s subject field: the mean number of 1–3-year-old references per paper, counted over the papers that cite the journal and are published in journals processed for the database
  • Relative database citation potential in the journal’s subject field (RDCP): the database citation potential of the journal’s subject field divided by that of the median journal in the database
  • Source-normalized impact per paper (SNIP): the ratio of the journal’s raw impact per paper (RIP) to the relative database citation potential (RDCP) in the subject field covered by the journal

So basically SNIP is a three-year impact factor, adjusted for how heavily papers in the journal’s subject field cite recent work.
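Putting the pieces above together, a minimal sketch of the arithmetic with made-up numbers (the real values come from the whole Scopus database) might look like this:

```python
# A minimal sketch of the SNIP arithmetic described above, with made-up
# numbers. The real calculation is done over the whole Scopus database.

# Raw impact per paper (RIP): citations this year to papers from the
# three preceding years, divided by the number of those papers
citations_to_journal = 600
papers_in_3_years = 200
rip = citations_to_journal / papers_in_3_years               # 3.0

# Citation potential of the journal's field: mean number of 1-3 year old
# references per paper in the papers citing this journal ...
field_citation_potential = 4.0
# ... relative to the median journal in the whole database
median_citation_potential = 2.5
rdcp = field_citation_potential / median_citation_potential  # 1.6

snip = rip / rdcp
print(round(snip, 2))  # 1.88 - the field's heavy citing habits pull it down
```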

When I looked for lists of SNIPs for 2010 I encountered a problem: two different lists gave two different answers. The list downloaded from the Journal Metrics site gives the 2010 SNIP for Cell as 1.22, but when I searched on the CWTS Journal Indicators site (which is linked from the Journal Metrics site) it was given as 9.61. So there are two answers to the question of what counts as big: either anything over 0.7 from Journal Metrics or anything over 3 from CWTS Journal Indicators. If anyone can help me resolve this discrepancy I’d be grateful.

Australian Research Council Ranking

The Excellence in Research for Australia (ERA) evaluation exercise in 2010 included a system in which journals were ranked A*, A, B or C. Details of what the rankings mean are here. Top journals (in fact many that might elsewhere be called middle ranking) are ranked A*. It is not clear how these rankings were decided. These journal rankings were controversial and are not being used for the 2012 ERA.

Comparison of journals using these metrics

I have selected 97 high-impact journals in biology and medicine and compiled the metrics for them. I put the list together by initially picking the top journals in the field by SJR, then removing all those that only publish reviews and adding a few that seemed important, were in the MRC frequently used journal list, or were ranked highly using other metrics. The result is in a Google spreadsheet here. I have added colours to show the top, middle and bottom thirds (tertiles) of this sample for each metric, for ease of visualisation, and the mean, median and percentiles are at the bottom.

Sources of data:

  • IF: various websites including this for medical journals, this for Nature journals, this for Cell Press journals, this for general and evolutionary journals, this and this for a range of other journals, and individual journal websites. Please note that no data were obtained directly from Thomson Reuters, and they have asked me to state that they do not take responsibility for the accuracy of the data I am presenting.
  • Google Scholar Metrics: Google Scholar Citations Top publications in English and searches from that page.
  • SJR and SNIP: Journal Metrics.
  • ERA: ARC.

Notes on particular journals:

A few journals have anomalous patterns, unlike most that are high or lower in all the different metrics.

  • CA: A Cancer Journal for Clinicians has a very high IF, SJR and SNIP, but comes out lower on Google Scholar Metrics. A recent post in Psychology Today includes a suggestion of why this might be:

The impact factor reflects that the American Cancer Society publishes statistics in CA that are required citations for authoritative estimates of prevalence and other statistics as they vary by cancer site and this assures a high level of citation.

  • A few journals rank relatively low (out of this selection of journals) on all the metrics except the ERA rating, where they are rated A*: Development, The Journal of Immunology, Cellular Microbiology, Journal of Biological Chemistry, Molecular Microbiology, Developmental Biology, and Genetics. I don’t know why this might be, except that the ERA ratings appear to be subjective decisions by experts rather than being based on citations.
  • Proc Natl Acad Sci USA, Blood, Nucleic Acids Research, Cancer Research, Gastroenterology and most notably the BMJ come out high in Google Scholar Metrics but not so high in IF, SJR and SNIP. Perhaps they are journals that publish many papers, which is not accounted for by Google Scholar Metrics, or they could have more citations four or five years after papers are published, which would be picked up by Google Scholar Metrics but not the other metrics.
  • Finally, Systematic Biology has a high SNIP, a medium SJR and IF and a lower Google Scholar Metric. Perhaps it is in a field in which citations per paper are usually low, which is accounted for by the SNIP.

Your comments

Do you have experience of the lesser-known metrics being used by journals or by others to evaluate journals? Can you explain any of the anomalous patterns mentioned here or for the two different values for SNIPs?

Journal news for February

News related to scientific journal publishing since 4 February.

Elsevier withdraws support for the Research Works Act

Since I covered this infamous draft US law and the associated boycott of Elsevier by academics (here and in news here), the flood of blog posts on the topic has continued, and I won’t attempt to summarise them here. But the pressure seems to have had an effect: on 27 February Elsevier announced that it is no longer supporting the act, although they ‘continue to oppose government mandates in this area’.

Meanwhile, a new act has been proposed, the Federal Research Public Access Act (FRPAA), which would mandate that all research funded by federal agencies with budgets over $100 million be made open access within 6 months of publication.

Industry group ‘threatens’ journals to delay publications

The Lancet has reported (pdf) that the Mining Awareness Resource Group (MARG) has written to several scientific journals advising them not to publish papers from a US government study of diesel exhaust and lung cancer until a court case and congressional directives are ‘resolved’. The editor of Occupational and Environmental Medicine, Dana Loomis, is quoted as saying ‘It is vague and threatening. This has a chilling effect on scientific communications—a matter of grave concern.’

New open access journal

The open access journal Biology Open has been launched by the Company of Biologists. The journal aims to provide the research community with ‘an opportunity to publish valid and well-conducted experimental work that is otherwise robbed of timeliness and impact by the delays inherent in submission to established journals with more restrictive selection criteria’.

Twitter and paper citations

An arXiv preprint has found a correlation between mentions of a paper on Twitter and its later citations.

Criteria for the UK Research Excellence Framework 2014 announced

The Higher Education Funding Council for England (HEFCE) has announced the criteria and working methods that the panels for the assessment of research using the Research Excellence Framework (REF 2014) will use. REF will use citations as part of assessment but not impact factors or other bibliometrics (see page 25 of the full report for the statement regarding citations in the biology and medicine panel). Researchers at English universities will no doubt be scrutinizing the guidelines carefully.

* * * *

I’m sorry there hasn’t been a weekly Journal News recently, as I had hoped, and that this update is rather brief. I hope that the usefulness of these news updates depends more on their content than their regularity. If you want (much) more frequent updates from the world of journals and scientific publication, do follow me on Twitter!

Choosing a journal V: impact factor

This is the fifth post in my series on choosing a journal, following posts on getting your paper published quickly, getting it noticed, practicalities, and peer review procedure.

It is all very well getting your paper seen by lots of people, but will that lead to an increase in your reputation? Will it lead to that all-important grant, promotion or university rating?

The impact factor of a journal for a given year is the average number of citations received in that year by papers the journal published in the previous two years. A very common view among academics is that getting your papers into journals with high impact factors is the most important thing you can do to secure tenure, funding, promotion and general success. And in fact the impact factors of the journals your papers appear in still carry a lot of weight with many of those whose job it is to assess scientists (as discussed recently on Michael Eisen’s blog). The impact factor is also a factor in whether librarians choose to subscribe to a journal, which will affect how widely your paper is seen.

So even if the impact factor has flaws, it is still important. However, remember the following caveats:

  • Citations are only a proxy measure of the actual impact of a paper – your paper could have an enormous influence while not being cited in academic journals
  • Impact doesn’t only occur in the two years following publication: in slow-moving fields, where seminal papers are still being cited five or ten years after publication, these late citations won’t count towards the impact factor, so the journal’s impact factor will be smaller than its real influence justifies
  • The impact factor measures the average impact of papers in the journal; some will be cited much more, some not at all
  • There are ways for journals to ‘game’ impact factors, such as manipulating article types so that the less cited ones won’t be counted in the calculation
  • The methods used for calculating the impact factor are proprietary and not published
  • Averages can be skewed by a single paper that is very highly cited (e.g. the 2009 impact factor of Acta Crystallographica A; see the sketch after this list)
  • Although impact factors are quoted to three decimal places, I haven’t seen any analysis of the error in their estimation, so a difference of half a point may be completely insignificant
  • New journals don’t get an impact factor until they have been publishing for at least three years.
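As a quick illustration of the skew point, here is a sketch with made-up citation counts showing how one blockbuster paper can drag the mean far above what a typical paper in the journal receives:

```python
# Made-up citation counts for ten papers, one of them a blockbuster
# (roughly the situation described in the list above).
from statistics import mean, median

citations = [0, 0, 1, 1, 2, 2, 3, 3, 4, 900]

print(mean(citations))    # 91.6 - the impact-factor-style average
print(median(citations))  # 2.0  - what a typical paper actually receives
```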

So although it is worth looking at the impact factor of a journal to which you are considering submitting your paper, don’t take it too seriously. Especially don’t take small differences between the impact factors of different journals as meaningful.

Other newer metrics of average journal impact are being developed, such as the Eigenfactor, the Source Normalized Impact per Paper (SNIP) and the SCImago Journal Rank (SJR). These might be worth looking at in combination with the impact factor when choosing a journal.

Your experience

How important is the impact factor of a journal in your decision to submit there? Have you taken other measures of impact into account? Do you think the impact factor of journals you have published in has affected the post-publication life of your papers?

And journal editors, how much difference does the impact factor of your journal make to how many papers are submitted to it, or to your marketing? Do you know the Eigenfactor, SNIP or SJR of your journal?