How to read journal instructions for authors

Journal editors often complain that few authors seem to read their instructions for authors. But journals don’t make it easy to read these instructions. Every publisher has its own way of displaying the instructions, with differences in the wording for the same thing, in the order in which information is presented and in how the information is split over web pages.

I’m going to attempt to bring some order to the chaos by picking out the points that really matter. These are:

  • Subject areas
  • Threshold for significance
  • Article types
  • Policies
  • Article format for submission
  • Length limits

There are also some things that nearly all journals require, which I’ll summarise at the end.

Scope

The most important thing to check when you are considering whether to submit to a particular journal is what subject areas it covers. This aspect is fairly straightforward, and it is the only one covered by most commercially available tools for choosing a journal, such as Edanz’s Journal Selector and JANE.

One important aspect to consider, however, is how broad a subject area you would like the journal to cover. If your study will be of interest to readers in more than one field, you will probably want an interdisciplinary journal that covers both fields.

Threshold

There is generally some statement in the instructions for authors or elsewhere in the journal information about the impact, significance or interest threshold. This can be written in all sorts of ways. For example:

  • Nature requires that articles “are of outstanding scientific importance” and “reach a conclusion of interest to an interdisciplinary readership”
  • Blood takes into account “the originality and importance of the observations or investigations, the quality of the work and validity of the evidence”
  • Cell says “The basic criterion for considering papers is whether the results provide significant conceptual advances into, or raise provocative questions and hypotheses regarding, an interesting biological question.”

‘Megajournals’, by contrast, state that they do not select on the basis of perceived impact or significance. For example:

  • PLOS ONE says “PLOS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership”
  • Frontiers says “Review editors focus on certifying the accuracy and validity of articles, not on evaluating their significance”
  • Scientific Reports says “Referees and Editorial Board Members will determine whether a paper is scientifically sound, rather than making judgements on novelty or whether the paper represents a conceptual advance.”
  • Biology Open focuses on “publication of good-quality sound research without a requirement for perceived impact”.

If you choose a selective journal rather than a megajournal, it is important to consider carefully whether your study is likely to reach its stated threshold. Get a colleague in another field to read your title and abstract and give an honest view of how groundbreaking they think it is compared with papers in various possible target journals.

You are likely to be biased towards finding your own work fascinating; never forget that editors and reviewers won’t share this view.

Article types

The instructions always include a list of the types of article that the journal publishes. Your paper must fit one of the article types and must follow the instructions for that type (especially regarding length limits).

What I call a research paper can be called by a variety of different names:

  • Original article
  • Original research
  • Research report
  • Primary research
  • Article
  • Letter

The word ‘Letter’ is used for a full (short) research paper in some journals (such as Nature journals) but for something much shorter in others, akin to the more colloquial meaning of the word letter.

Journals have a variety of criteria to distinguish between different article types. Sometimes the main difference is simply length, but often there is a difference in ‘significance’ or ‘completeness of the story’. These can be rather subjective judgements. Read a range of papers in the journal to get a feeling for the differences.

If your article isn’t a research paper, it is equally important to check whether the journal publishes articles like it. Journals usually commission review and comment articles, but some also consider unsolicited ones. Always send an email first describing your proposed review or comment, rather than just submitting it.

Policies

The policies section will vary a lot depending on the field. It will cover things like:

  • requirements for making data, software and materials available
  • ethics for animal experiments or human studies
  • adherence to subject-specific guidelines such as MIAME or CONSORT
  • adherence to authorship criteria, such as regarding ghostwriting and guest authorship (see the criteria laid out by the ICMJE)
  • whether they will accept papers that have previously been published on a preprint server or presented at a conference
  • policies on discussing the research with the media before publication.

It is crucial that your research follows all the guidelines for the journal. Violations can lead to immediate rejection.

Journals vary in how strict they are. However, if your study follows the highest possible ethical standards you are unlikely to find major differences between them. The exception to this is in journal policies on previous publication; newer journals are often less strict on this, and there is ongoing debate about the issue so instructions might change.

Format for submission

Some instructions aren’t to do with the manuscript content itself but rather with its file format and other aspects of how it is uploaded to the journal’s submission system. Publishers vary in what they require in terms of:

  • File formats allowed (commonly allowed formats for text are doc, docx, odt and rtf; TeX files may or may not be accepted)
  • Whether the text and figures should be in a single file or separate files
  • Whether the figure legends should be under each figure or at the end of the text
  • Whether a cover letter is required and what it should contain
  • Whether page or line numbers should be included
  • Whether the manuscript should be double spaced
  • Whether suggestions or exclusions of reviewers are allowed or encouraged
  • Whether submission has to be through the online system or whether post or email is allowed

Following these instructions is advisable, as online submission systems can be inflexible. If you don’t follow them, there may be a delay before the manuscript is looked at by the editors or sent for review.

Length limits

All print journals and many online-only journals have length limits. It is best to keep to them at first submission, if only to avoid annoying the editors and reviewers and to avoid having to shorten your paper later if it is accepted. Some journals will reject any paper that is too long without considering it.

There are usually also length limits on the title and abstract, and sometimes on other sections too. Limits on the numbers of figures, tables and references are also common.

Formatting within the manuscript

Then there are the details of how the manuscript is laid out. In general these instructions are not quite as important at the submission stage as those listed above, as any problems can be fixed once the article is accepted. However, some journals are strict about this kind of thing being done properly on first submission. And it isn’t always clear from the instructions to authors how strict they are. See my previous post about formatting for initial submission for more.

The kinds of things that journals care about in this category include:

  • Whether the abstract is subdivided into sections
  • What sections are required in the main text (usually Introduction, Methods, Results, Discussion or similar)
  • What order the sections should be in (whether the Methods come before the Results or after the Discussion)
  • Whether citations are allowed in the abstract
  • Whether the reference citations should be numbered in order or given in the form “(Author et al., 2009)”

Non-varying instructions

Finally, there are the requirements that practically all journals have, although they can be worded in a variety of ways. These include:

  • Use SI units
  • Define all abbreviations and special symbols on first use
  • Cite all figures, tables and references in the text
  • Set gene symbols in italic and protein names in roman (upright) type.

For more on what most journals tend to have in their instructions, see the generic set of instructions provided by the International Committee of Medical Journal Editors (ICMJE).

There are companies and freelance editors, including me, who can help you to comply with instructions for authors for your target journal.

Submission to first decision time

Having written previously about journal acceptance to publication times, it is high time I looked at the other important time that affects publication speed: submission to first decision time. As I explained in the previous post, the time from submission to publication in a peer-reviewed journal can be split into three phases: the two discussed in that post and in this one, plus the time needed for the authors to revise, which the journal can’t control.

A survey of submission to first decision times

I have trawled through the instructions to authors pages of the journals in the MRC frequently used journal list, which I have used in several previous posts as a handy list of relatively high-impact and well-known biomedical journals. I’ve used the list as downloaded in 2012, so new journals may have been added to it since. I’ve omitted the review journals, which leaves 96.

From these pages I have tried to find any indication of the actual or intended speed to first decision for each journal. For many journals, no information was provided on the journal website about average or promised submission to first decision times. For example, no Nature Publishing Group, Lancet, Springer or Oxford University Press journals in this data set provide any information.

However, of these 96 journals 37 did provide usable information. I have put this information in a spreadsheet on my website.

Twenty promised a first decision within 28 or 30 days of submission, and 12 others promised 20-25 days. Of the rest, two are particularly fast, Circulation Research (13 days in 2012) and Cellular Microbiology (14 days); and one is particularly slow, Molecular and Cellular Biology (4 to 6 weeks, though they may just be more cautious in their promises than other journals). JAMA and Genetics are also relatively slow, with 34 and 35 days, respectively. (Note that the links here are to the page that states the time, which is generally the information for authors.)

A few journals promise particularly fast handling for selected (‘expedited’) papers, but I have only considered the speed promised for all papers here.

I conclude from this analysis that, for relatively high-impact biomedical journals, a first decision within a month of submission is the norm. Anything faster than 3 weeks is fast, and anything slower than 5 weeks is slow.

Newer journals

But what about the newer journals? PeerJ has recently been boasting on its blog about authors who are happy with their fast decision times. The decision times given on this post are 17, 18 and 19 days. These are not necessarily typical of all PeerJ authors, though, and are likely to be biased towards the shorter times, as those whose decisions took longer won’t have tweeted about it and PeerJ won’t have included them in their post.

PLOS ONE gives no current information on its website about decision times. However, in a comment on a PLOS ONE blog post in 2009, the then Publisher Pete Binfield stated that “of the 1,520 papers which received a first decision in the second quarter of 2009 (April – June), the mean time from QC completion to first decision was 33.4 days, the median was 30 days and the SD was 18.” He didn’t say how long it took from submission to ‘QC completion’, which is presumably an initial check; I expect this would be only a few days.

Kent Anderson of the Scholarly Kitchen asked last year “Is PLOS ONE Slowing Down?”. This post only looked at the time between the submission and acceptance dates that are displayed on all published papers, and it included no data on decision dates, so the data tell us nothing about decision times. In a series of comments below the post, David Solomon of Michigan State University gives more data, which show that the submission to acceptance time went up only slightly between early 2010 and September 2011.

The star of journals in terms of decision time is undoubtedly Biology Open. It posts the average decision time in the previous month on its front page, and the figure currently given for February 2013 is 8 days. They say they aim to give a first decision within 10 days, and their tweets seem to bear this out: in June 2012 they tweeted that the average decision time in May 2012 had been 6 days, and similarly the time for April 2012 had been 9 days.

Other megajournals vary similarly to ordinary journals. Open Biology reports an average of 24 days, Cell Reports aims for 21 days, and G3 and Scientific Reports aim for 30 days. Springer Plus, the BMC series, the Frontiers journals, BMJ Open and FEBS Open Bio provided no information, though all boast of being fast.

What affects review speed?

If newer journals are faster, why might that be? One possible reason is that as the number of submitted papers goes up, the number of editors doesn’t always go up quickly enough, so the editors get overworked – whereas when a journal is new the number of papers to handle per editor may be lower.

It is important to remember that the speed of review is mainly down to the reviewers, as Andy Farke pointed out in a recent PLOS blog post. Editors can affect this by setting deadlines and chasing late reviewers, but they only have a limited amount of control over when reviewers send their reports.

But even given this limitation, there could be reasons for variations in the average speed of review between journals. Reviewers might be excited by the prospect of reviewing for newer journals, making them more likely to be fast. This could equally be true for the highest-impact journals, of course, and also for open access journals if the reviewer is an open access fan. Enthusiastic reviewers not only send their reports in more quickly once they have agreed, but are also easier to recruit in the first place. As Bob O’Hara pointed out in a comment on Andy Farke’s post, “If lots of people decline, you’re not going to have a short review time”.

A logical conclusion from this might be that the best way for a journal to speed up its time to first decision would be to cultivate enthusiasm for the journal among the pool of potential reviewers. Building a community around the journal, using social media, conferences, mascots or even free gifts, might help. PeerJ seem to be aiming to build such a community with their membership scheme, not to mention their active Twitter presence and their monkey mascot. Biology Open’s speed might be related to its sponsorship of meetings and its aim to “reduce reviewer fatigue in the community”.

Another less positive possible reason for shorter review times could be that reviewers are not being careful enough. This hypothesis was tested and refuted by the editors of Acta Neuropathologica in a 2008 editorial. (Incidentally, this journal had an average time from submission to first decision of around 17 days between 2005 and 2007, which is pretty fast.) The editorial says “Because in this journal all reviews are rated from 0 (worst) to 100 (best), we plotted speed versus quality. As reflected in Fig. 1, there is no indication that review time is related to the quality of a review.”

Your experience

I would love to find (or even do) some research into the actual submission to first decision times between different journals. Unfortunately that would mean getting the data from each publisher, and it might be difficult to persuade them to release it. (And I don’t have time to do this, alas.) Does anyone know of any research on this?

And have you experienced particularly fast or slow peer review at a particular journal? Are you a journal editor who can tell us about the actual submission to first decision times in your journal? Or do you have other theories for why some journals are quicker than others in this respect?

SpotOn London session: The journal is dead, long live the journal

I’m co-hosting a workshop at SpotOn London next week on the future of journals.

It’s time to end a long blogging hiatus to tell you about an exciting event coming up on Sunday 11 and Monday 12 November. SpotOn London (formerly called Science Online London) is a community event hosted by Nature Publishing Group for the discussion of how science is carried out and communicated online. There will be workshops on three broad topic areas – science communication and outreach, online tools and digital publishing, and science policy – and I am involved in one of the ‘online tools and digital publishing’ ones. This has the title ‘The journal is dead, long live the journal‘ and it will focus on current and future innovations in journal publishing. If you’re interested in how journals could or should change to better meet the needs of science, this is for you!

In this one-hour session we will have very short introductions from four representatives from different parts of the journal publishing world:

  • Matias Piipari (@mz2), part of the team behind Papers software for finding and organising academic papers
  • Damian Pattinson (@damianpattinson), Executive Editor of PLOS ONE
  • Davina Quarterman, Web Publishing Manager at Wiley-Blackwell
  • Ethan Perlstein (@eperlste) of Princeton University

We will then open the floor to contributions from participants, both in the room and online. We hope to cover three themes:

  1. Megajournals: their impact on journal publishing and on how papers are organised into journals. Will megajournals lead to a two-tier marketplace of high-end journals and a few megajournals, with mid-tier journals disappearing from the market altogether?
  2. How do we find the papers of interest, in a world where journal brand doesn’t help? In a world where issues disappear, and researchers’ main point of contact with the literature is through aggregation points such as Google Scholar and Pubmed, what are the signifiers that we can build or support that will enable researchers to find the content that they need?
  3. Once you get down to the paper, are there any innovations that we should be using now, at the individual paper level, and what are the barriers to us doing this?

Science Online events have a tradition of being more than just conferences – they aim to involve lots of people outside the room via the SpotOn website and Twitter as well as those in the room. So although the conference itself is sold out (though there is a waiting list for tickets), you can still follow along and get involved before, during and after the event itself. This session is at 4.30pm on Sunday 11 November, so look out on the Twitter hashtag #solo12journals around then. Beforehand, you can comment on co-host Ian Mulvany’s blog post introducing the session, look at the Google Doc that shows the thought processes the organisers went through in planning the session, check for tweets on the hashtag, and follow me (@sharmanedit), Ian (@ianmulvany) and co-host Bob O’Hara (@bobohara) and/or the speakers on Twitter for updates.

On the day, comments from Twitter will be moderated and introduced into the discussion in the room by Bob, who will be doing this remotely from Germany. The whole session (and all other SpotOn London sessions) will be live-streamed (probably here) and the video will be available afterwards; there will also be a Storify page collecting tweets using the #solo12journals hashtag.

This interaction with those outside the room is important because with only an hour there is a limit to the depth with which we will be able to cover the range of issues around journals. With online discussion as well we hope that more points can be discussed in more detail than would otherwise be possible. It might get a little confusing! I am new to this format, so I am slightly apprehensive but also excited about the possibilities.

Thoughts on megajournals

I am particularly interested in the aspect of the session on megajournals and how they are changing journal publishing. By megajournals we mean all the journals that have been set up to publish papers after peer review that assesses whether the research is sound but doesn’t attempt to second-guess the potential impact of the work. Some, like PLOS ONE, are truly mega – they published over 13,000 papers in 2011. Others, like the BMC series from BioMed Central, probably publish a similar number of papers but divided into many journals in different subject areas. Others have been set up to be sister journals to better-known selective journals – for example, Scientific Reports from Nature Publishing Group and Biology Open from The Company of Biologists. All are open access and online only.

Some of these journals are now showing themselves not to be the dumping ground for boring, incremental research that they might have been expected to be. When PLOS ONE’s first impact factor was revealed to be over 4, there was surprise among many commentators. The question now is whether papers that are unlikely to be accepted by the top journals (roughly speaking, those with impact factors over about 10, though I know that impact factor is a flawed measure) will gradually be submitted not to specialist journals but to megajournals. The opportunity to get your paper seen by many people, which open access publishing provides, could often outweigh the benefits of publishing in a journal specific to your specialist community, where your paper will be seen by only that community. I will be very interested to hear people’s thoughts on this issue in the session.

Get involved

So do comment using one of the channels mentioned above. Have you recently made a decision about where to send a paper that you knew wasn’t one for the top-flight journals, and did you decide on a specialist journal, a megajournal or some other route to publication? Regarding the other two themes of the session, how do you find papers in your field, and what do you want research papers to look like?

Crowdsourcing information about journals

Crowdsourced surveys of the experience of authors with journals are useful, but I have found only a few. For now, I propose a simpler survey of information gleaned from journal websites.

I was recently alerted by @melchivers (via @thesiswhisperer) to the existence of a blog by SUNY philosopher Andrew Cullison (@andycullison) that includes a set of journal surveys for the field. As Cullison explains in an overview post, the surveys consist of Google Docs spreadsheets, one for each journal, and a form interface that academics fill in with data on their experience of submitting to that journal. The information requested includes:

  • the time taken for initial review
  • the initial verdict of the journal (acceptance, rejection, revise and resubmit, conditional acceptance, withdrawn)
  • the number of reviewers whose comments were provided
  • an assessment of the quality of the reviewers’ comments
  • the final verdict if the paper was revised
  • the time from acceptance to publication
  • an overall rating of the experience with the editors
  • some basic demographic data

This survey covers 180 journals in philosophy. The data are collated and various statistics are calculated, such as the average review time, the average acceptance to publication time and the average acceptance rate. Here are a couple of examples: the British Journal for the Philosophy of Science and Philosophy of Science.

This kind of survey could be a valuable resource for authors in a particular field who are trying to choose a journal. Because the surveys are crowdsourced, they do not rely on only one or a few people to gather data. They also provide real data on how fast journals are in practice, which might differ from the statistics or promises provided on journal websites. However, they have limitations: as pointed out in comments below one of Cullison’s posts, they suffer from reporting bias. This is important given that for many of the journals surveyed there are fewer than ten responses.

I haven’t seen any surveys like this in any other field of academia, and certainly none in biology or medicine. I would be very interested to hear if others have seen any. In biology a similar survey would probably only be useful if divided up into smaller fields, such as plant cell biology or cardiovascular medicine. Or it could focus only on the general journals that cover large areas of science, biology or medicine.

A simpler journal survey

Alternatively, or as a first step towards full surveys of journals in biomedicine, a crowdsourced survey of the information presented on journal websites could be useful. This could include information such as the promised submission to first decision time and acceptance to publication time, licensing details (copyright, Creative Commons and so on), charges, article types and length limits. This would involve only one small dataset per journal, which could fit on a single line of a spreadsheet rather than data for individual papers, so would be more manageable than Cullison’s surveys.

I have made a start on such a survey, and you can find it on Google Docs here. I have used the same set of 98 journals, derived from the UK Medical Research Council list of journals popular with MRC authors, that I used for my open access charges spreadsheet. For every journal, the spreadsheet now contains the name of the publisher, the main journal URL, the URL for the instructions for authors, whether the entire journal is open access or not, and whether there is an open access option. There are also columns for the following information: what the website says about acceptance to publication time; whether the accepted, non-edited manuscript is published online; and what the website says about submission to first decision time. I have filled in some of these fields but haven’t yet checked all the websites for all this information.

The spreadsheet is editable by anyone. I realise that this risks someone messing up the data or adding spam text. For the columns that I don’t want you to change, I have included a partial safeguard: these columns are pulled in from a hidden, locked sheet of the spreadsheet. Please try not to delete data in any cells – just add data in empty cells. If you have any other suggestions for how to allow information to be added but not deleted, or otherwise to avoid problems, please add a comment below.

Now it’s your turn

Would you like to contribute information to this survey? If so, please go ahead and edit the spreadsheet.

If you could publicise it that would be great too.

And do you have any comments on this process, suggestions for improvement and so on?

Other questions

Have you used Cullison’s surveys and found them useful (or less useful)? Have you come across any surveys like the philosophy one for other fields? Or like my survey?

Acceptance to publication time

Journals vary a lot in how long they take to publish accepted papers.

Publication speed is one factor that many authors take into account when choosing a journal. The time from submission to publication in a peer reviewed journal can be split into three phases:

  1. The time from submission to the first decision
  2. The time needed for the authors to revise
  3. The time from acceptance to publication

The second of these cannot generally be controlled by the journal, because different papers need different amounts of time to revise and the personal circumstances of the authors can affect the time needed. So only the first and third phases should be used to judge the journal. I will cover submission to first decision time in a future post and will focus on post-acceptance speed here. By ‘publication’ I mean the first time the paper is made publicly available, whether online or in print.

What happens after a paper is accepted?

Most journals have variations on a standard procedure: copyediting, typesetting, sending proofs to the authors, checking the proofs, and conversion to various formats (such as XML, HTML and pdf). For print journals, there are extra steps of compiling the pdf files into an issue and preparing them for printing – these steps don’t usually affect the time to online publication, but see below for exceptions to this.

Copyediting involves a professional editor (sometimes employed by the journal, but very often a freelancer like me), who reads the paper carefully and ensures that it is accurate, clear, readable, in correct English and in the journal’s house style. Typesetting involves laying out the paper in the journal’s format for print or pdf, with the correct fonts and symbols and with the figures at their final sizes. Some journals use the figures as the authors provide them, others edit or even redraw, and most at least check that the figures fit with the accompanying text.

After typesetting (or sometimes before), the author is sent the proof to check, along with any queries from the copyeditor. Some journals use professional proofreaders to check the proofs after typesetting and after the author has sent their corrections, but nowadays this step is skipped by many journals. But someone still needs to incorporate the author’s corrections into the article and do final checks before publication.

In my experience, copyediting, typesetting and proof checking a typical research paper usually takes a few weeks. So, if the process starts immediately after acceptance and isn’t delayed, and if there is no delay between a paper reaching its final form and its publication, a corrected paper can be published online a few weeks after acceptance. However, delays can occur at any stage.

Some journals display a typical or promised time from acceptance to publication on their websites. I have trawled through lots, and below is a selection. If you find more, please do add them in a comment. Note that these times are neither maximum nor minimum times – they are probably what the editors feel is a typical time, allowing for some papers to be published more quickly and some more slowly.

You can see from this list that journals from the same publisher vary in their promised times and even in whether they promise a time or not.

Factors that affect publication speed

There are many things that can affect how quickly papers are published once they are accepted.

Publication in issues

Scheduling of issues is one of the commonest reasons for delays. Although most journals now publish articles online before print, there are still some that hold accepted papers in a queue until there is space for them in an issue. Elsevier changed to article-based publication in 2010, and their press release at the time claimed that this could shorten acceptance to publication time by up to seven weeks, to only a few weeks.

Some journals have backlogs of accepted papers that lead to delays in publication of months or even years. Others have got rid of these backlogs by changing to publishing online as soon as possible after acceptance and only later assembling papers into issues (I have been involved in helping one publisher with this transition).

Journals that publish only in issues can also delay particular papers for other reasons than space: if they aim for a balance of article types in each issue they may hold a paper over if there are too many of that type in the current issue; or if they want to publicise several papers on the same topic together, they may hold some of them until all are ready.

It is difficult to work out from journal websites whether they publish in issues or not. The best way to check for any particular journal is probably to look at the acceptance dates for articles in a particular issue and see whether they are spread out (in which case publication probably happens by article) or whether they are all a similar time before the issue date (in which case publication is probably by issue).
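To make that check concrete, here is a rough sketch in Python (the dates are invented, not taken from any real journal): work out how long before the issue date each paper in one issue was accepted, and look at the spread.

```python
# A rough sketch of the check described above, with invented dates:
# how long before the issue date was each paper in the issue accepted?
from datetime import date

issue_date = date(2013, 3, 1)
acceptance_dates = [date(2012, 11, 5), date(2012, 12, 20), date(2013, 1, 28)]

gaps = [(issue_date - d).days for d in acceptance_dates]
print(gaps, max(gaps) - min(gaps))   # [116, 71, 32] 84

# A wide spread like this suggests article-by-article online publication;
# if all the gaps were similar, publication is probably by issue.
```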

Copyediting first or later

The most common system is to copyedit, typeset, send proofs to the authors and perhaps proofread before online publication. Some journals, however, now publish the accepted version almost immediately after acceptance, and do any copyediting and typesetting later, replacing the accepted version when the edited and typeset version is ready. The latter journals can therefore boast acceptance to publication times of a few days or even hours rather than weeks.

I have been able to establish that the following publishers post accepted articles online before editing or typesetting for some or all journals:

  • Wiley (‘OnlineAccepted’ option offered by some journals)
  • Elsevier (Gastroenterology, publication within 5-7 business days)
  • American Chemical Society (all journals, ‘usually within 30 minutes to 24 hours of acceptance’)
  • Genetics Society of America (Genetics)
  • BioMed Central (all journals, ‘publication occurs at the moment of acceptance’)

Fast track articles

Some journals have a fast track that offers faster publication for selected articles. This can speed up publication of these articles, but it can result in slower publication for all the non-fast-track articles if staff time is taken up with the fast-track ones. The editors make the decision on which papers are fast-tracked, but authors can usually request it and their request may be honoured if their reasons are judged to be good enough.

The following publishers offer fast-track publication for some or all journals:

Acceptance date issues

When looking at journal acceptance to publication times, it is worth bearing in mind that the acceptance date is the date when the final formal letter of acceptance is sent to the author. In reality, the decision to publish in principle is often made earlier, and the authors receive an email saying that the paper will be accepted as long as they make some final minor changes. Authors often feel at this point that the paper has been accepted, and it is usually safe to celebrate. But it is not a final acceptance, and acceptance to publication times are measured only from the formal acceptance date.

How to estimate how fast a journal will publish after acceptance

I suggest following these steps to work out how fast your target journal is likely to publish your accepted paper.

1. Check if it publishes accepted versions before any editing or typesetting. If so, publication time is likely to be 0–3 days.

2. Check if it publishes papers online as soon as possible after acceptance, rather than waiting for an issue (print or online). Check whether this happens to all papers or just when the author requests, and request it if needed. If your paper is in this system, publication time is likely to be about 3–8 weeks.

3. Check what the journal’s website says about the acceptance to publication times they aim for, and multiply by about 1.5 to get a maximum probable time. If this time has elapsed after acceptance, you can justifiably email the editors requesting an update.

4. Look at some recent papers: most journals give the dates of acceptance and online publication on the paper, and often on the page containing the online abstract, so you can get a feel for how much time elapses between these events (there is a rough sketch of this check after these steps).

5. If it publishes only in print, be prepared for a long wait!
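To make steps 3 and 4 a little more concrete, here is a minimal sketch (all the numbers and dates are invented, and the 1.5 multiplier is just the rule of thumb from step 3):

```python
# A rough sketch for steps 3 and 4; all values are invented for illustration.
from datetime import date

# Step 3: take the acceptance to publication time stated on the journal
# website and multiply by about 1.5 to get a maximum probable time.
stated_weeks = 6
max_probable_weeks = stated_weeks * 1.5   # 9.0 weeks before it is fair to chase the editors

# Step 4: for a few recent papers, work out the days between the acceptance
# and online publication dates printed on the paper or abstract page.
papers = [
    (date(2012, 9, 3), date(2012, 10, 15)),   # (accepted, published online)
    (date(2012, 10, 1), date(2012, 11, 20)),
]
gaps = [(published - accepted).days for accepted, published in papers]
print(max_probable_weeks, gaps)   # 9.0 [42, 50]
```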

Your experience

Researchers, how important is publication speed after acceptance to you? Do you know some particularly fast journals or publishers, or can you recommend avoiding others that are very slow? Or can you point us to journal websites that promise a certain time to publication?

Journal editors, could you tell us how quickly your journal publishes papers after acceptance? Have you considered publishing your target times on the journal website, and why did you decide to do this or not?

Journal metrics

Last week a new measure of the impact of a journal was launched: Google Scholar Metrics. So it seems like a good time to review the various metrics available for journals.

Below I summarise six measures of journal impact: the impact factor (IF), 5-year IF, Google Scholar Metrics, SCImago Journal Rank (SJR), Source Normalized Impact per Paper (SNIP) and Excellence in Research for Australia (ERA) ranking. As part of the research for this post I have found out the metrics (except 5-year IF) for a sample of 97 of the higher-impact biomedical journals and put them into a Google Docs spreadsheet, which can be viewed here (or on Google Docs directly here).

Most researchers get to know the IF fairly quickly when they start to read journals. So you probably know that an IF of 30 is high and that thousands of journals (the long tail) have IFs below 1. But fewer people have this kind of familiarity with the other metrics. So I have tried to estimate what range of numbers counts as ‘high impact’ for each metric. ‘High’ here means in the top 33% of my sample of 97 journals (already a high-impact sample).
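If you want to reproduce this kind of cut-off for your own sample of journals, it is simply the value at the boundary of the top third of the sorted metric values, along these lines (the numbers below are invented, not my actual data):

```python
# A minimal sketch of a 'top 33%' cut-off; the values below are invented.
ifs = [31.4, 14.1, 9.7, 8.2, 5.5, 3.1]     # hypothetical impact factors for a sample
cutoff = sorted(ifs)[len(ifs) * 2 // 3]    # roughly the smallest value in the top third
print(cutoff)   # 14.1 -- journals at or above this count as 'high' in this sample
```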

To summarise, the number that counts as high for each metric is:

  • IF: 14
  • 5-year IF: about 15
  • Google Scholar Metrics (h5-index): 101
  • SJR: 0.53
  • SNIP: 3.85

Note that I am only talking about journal metrics, not metrics for assessing articles or researchers. As always, anyone using these figures should make sure they are using them only to judge journals, not individual papers or their authors (as emphasised by a European Association of Science Editors statement in 2007). Also remember that citations can be gamed by editors (see my previous post on the subject or a recent Scholarly Kitchen post on a citation cartel for more details).

Impact factor

The IF is provided by Thomson Reuters as part of their Journal Citation Reports, which covers ‘more than 10,100 journals from over 2,600 publishers in approximately 238 disciplines from 84 countries’.

It is calculated by dividing the number of citations received in one year by articles published in the journal during the previous two years by the number of articles the journal published in those two years.
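As a toy example (with made-up numbers, not real journal data), the 2012 IF of a journal would be worked out like this:

```python
# Toy example of the two-year impact factor calculation; all numbers are invented.
articles_2010 = 120        # articles the journal published in 2010
articles_2011 = 130        # articles the journal published in 2011
citations_in_2012 = 2875   # citations made in 2012 to those 2010-2011 articles

impact_factor_2012 = citations_in_2012 / (articles_2010 + articles_2011)
print(round(impact_factor_2012, 1))   # 11.5
```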

What counts as big: the highest-ranked journals have IFs over about 14; middle-ranking journals have numbers between 3 and 14; many low-ranked journals have numbers around 1.

Five-year impact factor

This is similar to the standard two-year IF except that citations and articles are calculated over the previous five years rather than two. It has been published only since 2007. This metric has advantages in slower-moving fields, where papers gather citations more slowly than a year or two after publication.

It is difficult to find lists of five-year IFs online, although some journals display them on their home pages. I did, however, find a study in the journal Cybermetrics that showed it is generally about 1.05 times the size of the two-year IF.

What counts as big: 15 using this figure.

Google Scholar Metrics

These were introduced on 1 April 2012 and are based on the Google Scholar database, which includes more journals and other publications than that used for the IFs. They are based on the h-index, which is defined on the Google Scholar Metrics page as follows:

The h-index of a publication is the largest number h such that at least h articles in that publication were cited at least h times each. For example, a publication with five articles cited by, respectively, 17, 9, 6, 3, and 2, has the h-index of 3.

This is a rather difficult concept to get your head around (at least it is for me). Basically the number cannot be bigger than the number of papers a journal has published, and it cannot be bigger than the highest number of times any one paper has been cited. In the above example the h-index cannot be greater than 5 because there are only 5 articles; at least 3 of the articles have been cited at least 3 times each, but fewer than 4 articles have been cited at least 4 times each, so the h-index is 3.

Google Scholar Metrics extends this as follows:

The h-core of a publication is a set of top cited h articles from the publication. These are the articles that the h-index is based on. For example, the publication above has the h-core with three articles, those cited by 17, 9, and 6.

The h-median of a publication is the median of the citation counts in its h-core. For example, the h-median of the publication above is 9. The h-median is a measure of the distribution of citations to the h-core articles.

Finally, the h5-index, h5-core, and h5-median of a publication are, respectively, the h-index, h-core, and h-median of only those of its articles that were published in the last five complete calendar years.

So the main metric is the h5-index, which is a measure of citations to a journal over 5 years to April 2012.
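To check my own understanding, here is a small Python sketch of these definitions, using the same example citation counts as the Google Scholar Metrics page (17, 9, 6, 3, 2); the function name is mine, not anything official.

```python
# A small sketch of the h-index, h-core and h-median as defined above.
from statistics import median

def h_metrics(citations):
    counts = sorted(citations, reverse=True)
    # h is the largest h such that at least h articles have >= h citations each
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    h_core = counts[:h]                  # the h most highly cited articles
    h_median = median(h_core) if h_core else 0
    return h, h_core, h_median

print(h_metrics([17, 9, 6, 3, 2]))   # (3, [17, 9, 6], 9)
```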

Note that this metric doesn’t involve any division by the number of papers published by the journal (unlike the other metrics discussed here). This means that journals that publish more papers will have proportionally larger values in Google Scholar Metrics than with other metrics.

What counts as big: the highest-ranked journals have h5-indexes over about 101; many journals seem to have numbers under 50.

SCImago Journal Rank (SJR)

The SCImago Journal Rank (SJR) is a metric produced by the SCImago research group using data from Scopus (which is owned by Elsevier). It is calculated as follows:

It expresses the average number of weighted citations received in the selected year by the documents published in the selected journal in the three previous years, — i.e. weighted citations received in year X to documents published in the journal in years X-1, X-2 and X-3.

So it is also a measure of citations, similar to a three-year impact factor, but each citation is weighted according to where it appeared. Further information is here (pdf). The weighting depends on how many citations the citing journal itself receives. So if journal A is cited a lot overall and journal B is not cited as much, and a paper in journal C is cited in journal A, that citation is given more weight in the calculation than a citation of journal C in journal B.
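The real SJR is computed iteratively over the whole citation network, rather like Google’s PageRank, so the sketch below is only an illustration of the weighting idea in that paragraph; the ‘prestige’ weights and all the counts are invented.

```python
# Illustration only: not the real SJR algorithm (which is iterative and
# network-wide), just the idea that citations from highly cited journals
# count for more. All numbers are invented.
prestige = {"journal_A": 2.0, "journal_B": 0.4}     # hypothetical prestige weights

# citations received this year by journal C, keyed by the citing journal
citations_to_C = {"journal_A": 50, "journal_B": 50}

docs_in_C_prev_3_years = 90   # documents journal C published in the previous 3 years

weighted_citations = sum(prestige[j] * n for j, n in citations_to_C.items())
sjr_like = weighted_citations / docs_in_C_prev_3_years
print(round(sjr_like, 2))   # 1.33 -- the citations from A count five times as much as those from B
```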

What counts as big: the highest-ranked journals have SJRs over about 3; many journals seem to have numbers under 0.5.

(Note that on the SCImago website the decimal point in the SJR is given as a comma in some places, so it looks as if the top journals have SJRs of over 1000 (1,000). On the spreadsheets that are freely downloadable from the same site or from the ‘Journal Metrics’ website (also from Elsevier) the metrics are given as 1.000 etc, so I think this is the correct version.)

Source-Normalized Impact per Paper (SNIP)

The Source-Normalized Impact per Paper (SNIP) is defined as the ratio of a journal’s citation count per paper and the citation potential in its subject field. It is designed to aid comparisons between journals in fields with different patterns of citations. It is calculated as follows:

Raw impact per paper (RIP)
Number of citations in year of analysis to a journal’s papers published in 3 preceding years, divided by the number of a journal’s papers in these three years

Database citation potential in a journal’s subject field
Mean number of 1-3 year old references per paper citing the journal and published in journals processed for the database

Relative database citation potential in a journal’s subject field (RDCP)
Database citation potential of a journal’s subject field divided by that for the median journal in the database

Source-normalized impact per paper (SNIP)
Ratio of a journal’s raw impact per paper (RIP) to the relative database citation potential (RDCP) in the subject field covered by the journal

So basically a three-year impact factor is weighted according to how much papers in other journals in the same field are cited.
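Here is a toy worked example of those four steps (all the numbers are invented):

```python
# Toy worked example of the SNIP steps listed above; all numbers are invented.

# Raw impact per paper (RIP): citations this year to the journal's papers from
# the previous three years, divided by the number of those papers.
citations_received = 600
papers_prev_3_years = 200
rip = citations_received / papers_prev_3_years                        # 3.0

# Database citation potential of the journal's field: mean number of 1-3 year
# old references per paper among the papers citing this journal.
field_citation_potential = 4.0

# Relative database citation potential (RDCP): the field's citation potential
# divided by that of the median journal in the database.
median_journal_citation_potential = 2.5
rdcp = field_citation_potential / median_journal_citation_potential   # 1.6

# SNIP: RIP divided by RDCP.
snip = rip / rdcp
print(snip)   # 1.875 -- the raw impact of 3.0 is discounted because this field cites heavily
```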

When I looked for lists of SNIPs for 2010 I encountered a problem: two different lists gave two different answers. The list downloaded from the Journal Metrics site gives the 2010 SNIP for Cell as 1.22, but when I searched on the CWTS Journal Indicators site (which is linked from the Journal Metrics site) it was given as 9.61. So there are two answers to the question of what counts as big: either anything over 0.7 from Journal Metrics or anything over 3 from CWTS Journal Indicators. If anyone can help me resolve this discrepancy I’d be grateful.

Australian Research Council Ranking

The Excellence in Research for Australia (ERA) evaluation exercise in 2010 included a system in which journals were ranked A*, A, B or C. Details of what the rankings mean are here. Top journals (in fact many that might elsewhere be called middle-ranking) are ranked A*. It is not clear how these rankings were decided. These journal rankings were controversial and are not being used for the 2012 ERA.

Comparison of journals using these metrics

I have selected 97 high-impact journals in biology and medicine and compiled the metrics for them. I put the list of journals together by initially picking the top journals in the field by SJR, then removing all those that only publish reviews and adding a few that seemed important, or were in the MRC frequently used journal list, or were ranked highly using other metrics. The result is in a Google spreadsheet here. I have added colours to show the top, middle and bottom 33% (tertile) in this sample of each metric for ease of visualisation, and the mean, median and percentiles are at the bottom.

Sources of data:

  • IF: various websites including this for medical journals, this for Nature journals, this for Cell Press journals, this for general and evolutionary journals, this and this for a range of other journals, and individual journal websites. Please note that no data were obtained directly from Thomson Reuters, and they have asked me to state that they do not take responsibility for the accuracy of the data I am presenting.
  • Google Scholar Metrics: Google Scholar Citations Top publications in English and searches from that page.
  • SJR and SNIP: Journal Metrics.
  • ERA: ARC.

Notes on particular journals:

A few journals have anomalous patterns, unlike most that are high or lower in all the different metrics.

  • CA: A Cancer Journal for Clinicians has a very high IF, SJR and SNIP, but comes out lower on Google Scholar Metrics. A recent post in Psychology Today includes a suggestion of why this might be:

The impact factor reflects that the American Cancer Society publishes statistics in CA that are required citations for authoritative estimates of prevalence and other statistics as they vary by cancer site and this assures a high level of citation.

  • A few journals rank relatively low (out of this selection of journals) on all the metrics except the ERA rating, where they are rated A*: Development, The Journal of Immunology, Cellular Microbiology, Journal of Biological Chemistry, Molecular Microbiology, Developmental Biology, and Genetics. I don’t know why this might be, except that the ERA ratings appear to be subjective decisions by experts rather than being based on citations.
  • Proc Natl Acad Sci USA, Blood, Nucleic Acids Research, Cancer Research, Gastroenterology and most notably the BMJ come out high in Google Scholar Metrics but not so high in IF, SJR and SNIP. Perhaps they are journals that publish many papers, which inflates Google Scholar Metrics (which, unlike the other metrics, do not divide by the number of papers published), or they could have more citations four or five years after papers are published, which would be picked up by Google Scholar Metrics but not the other metrics.
  • Finally, Systematic Biology has a high SNIP, a medium SJR and IF and a lower Google Scholar Metric. Perhaps it is in a field in which citations per paper are usually low, which is accounted for by the SNIP.

Your comments

Do you have experience of the lesser-known metrics being used by journals or by others to evaluate journals? Can you explain any of the anomalous patterns mentioned here or for the two different values for SNIPs?

A comparison of open access publication charges

Having covered submission fees and other charges, it is about time I covered the main event, isn’t it? I’m talking about open access publication fees, also known as article processing charges (APCs) and by many other names. This is a fee for making your article free for readers to read, and usually for them to download, distribute and do whatever they like with as well.

What do you get for your money?

Before agreeing to pay a fee to make your article open access, make sure you check the licence. True open access (as defined by the Budapest Open Access Initiative) means that it should be equivalent to the Creative Commons Attribution licence (CC:BY), which allows others to copy, distribute and make derivative works, including for commercial purposes, as long as they attribute it to you. Some publishers have a similar licence but with a non-commercial clause (CC:BY-NC) – there is debate about whether the NC clause stops something being open access. Others allow reading for free but restrict other uses, which really can’t be called open access at all. If the rights of readers and re-users are restricted, you are getting less open access for your APC than if there is a CC:BY licence.

Surveys of APCs

To find journal open access charges you usually need to look on the website of the individual journal. However, several organisations have usefully put together summaries of licences and charges. The Wellcome Trust and the UK Medical Research Council (MRC) both have mandates that all research they fund must be made freely available within 6 months of publication, and they both have lists on their websites of journals that do and don’t comply with this mandate. The following lists are available:

  • The MRC has a downloadable spreadsheet listing licences and charges of the most popular couple of hundred journals in which their authors publish
  • The Wellcome Trust has a list of the top 200 journals used by their authors showing which are compliant with their mandate, but not including charges
  • BioMed Central has a page comparing their APCs and licence with those of other publishers
  • The University of California, Berkeley library collections have a similar comparison page covering many subject areas
  • The Directory of Open Access Journals (DOAJ) provides information on charges, but this doesn’t seem to be searchable
  • SHERPA/RoMEO has a list of charges by publisher

I have taken the MRC spreadsheet, converted the currencies and calculated some statistics, and the result is in a Google Docs spreadsheet here. Some journals have two different fees depending on whether the author is a member of the society that runs the journal (or has a discount for some other reason). Of the 209 journals that allowed open access publication of some kind (ie gold open access, not just allowing deposition in a repository, which is called green open access), the mean fee was US$2845.08 (£1793.60) for members or US$2881.93 (£1816.21) for non-members. The standard deviation of the fee was $729.17 (£459.71) for members or $687.09 (£438.60) for non-members. The median is $3000 (£1891.33).

Some notes on these figures. Firstly, they are from the MRC document last updated April 2011, with currencies converted using xe.com on 23 March 2012. Secondly, they cover journals in medicine and related fields, particularly biology. Thirdly, they include 8 BioMed Central journals, 10 BMJ journals, 52 Elsevier journals (including 11 Cell Press journals), 14 Nature Publishing Group journals, 17 OUP journals, 4 PLoS journals, 12 Springer journals and 48 Wiley/Wiley-Blackwell journals. The median charge is $3000 because non-Cell-Press Elsevier journals charge this amount and there are lots of Elsevier journals in the list.

The MRC list also includes journals in the Lancet stable (The Lancet, The Lancet Neurology and The Lancet Oncology, published by Elsevier), which charge £400 ($634.47) per page. I’m not sure how many pages the average research paper is, but at 6 pages this would be £2400 ($3808.46) and at 10 pages it would be £4000 ($6344.70).

Waivers

These fees may seem very high to some. Don’t forget that many publishers have waivers for those who cannot afford to pay the APC. PLoS offers a waiver for anyone who does not have funds to cover the fee (and I’ve heard informally that they ask no questions). BioMed Central gives an automatic waiver to authors from a WHO list of developing countries, and also considers waivers and discounts on a case by case basis. I haven’t researched all publishers to find out their policy on waivers – perhaps that’s for a future post! If you can’t afford the APC for a journal to which you would like to submit your paper, I suggest explaining this when you submit and asking for a waiver or discount.

Your experience

Do you know of other sources of information on APCs for different journals or on average APCs? Have you spotted any errors in my spreadsheet?

Journal submission fees: why are they so rare?

In a previous post I discussed fees that journals charge for colour printing, per page or for supplementary material. All those fees are charged only to authors whose papers are accepted. Here I’ll look at fees that are charged to the authors of all submissions, including those that are rejected.

In 2010 a report on submission fees by Mark Ware was published by the Knowledge Exchange, a collaboration of the UK Joint Information Systems Committee (JISC) with similar organisations in Denmark, Germany and the Netherlands. This followed a study investigating whether submission fees could play a role in a business model for open access journals. They concluded that for journals with a high rejection rate in particular, submission fees can help to make the open access publication fee more reasonable and could thus make the transition to open access easier.

Although the report focuses on submission fees in the transition to open access, they also noted:

In certain disciplines, notably economic and finance journals and in some areas of the experimental life sciences, submission fees are already common.

Which journals charge a submission fee?

The Knowledge Exchange report includes a table of journals that already charge a submission fee. For biology journals, these fees are listed as mostly being around US$50-75.

I’ve checked on the journal websites for a selection of those listed in this report, and some seem to no longer charge for submission – in particular, the US$400 submission fee that Ideas in Ecology & Evolution charged when it launched in 2008 seems to have now been dropped, and I can’t find any mention of submission fees on the websites of Journal of Biological Chemistry or FASEB Journal.

The journals that I could verify as charging submission fees are:

  • Journal of Neuroscience (Society for Neuroscience) has a submission fee of US$125 (as well as the page charges and colour printing charges mentioned in the previous post)
  • Hereditas (an open access Wiley-Blackwell journal) charges 100 euros (US$133)
  • Stem Cells (Wiley-Blackwell, with an open access option) charges $90
  • Journal of Clinical Investigation (American Society for Clinical Investigation) and Cancer Research (American Association for Cancer Research) charge US$75
  • several other journals mentioned in the Knowledge Exchange report charge around US$50.

Elsevier say in their FAQ that you need to look in each journal’s guide to authors to find out if they charge submission fees (as with other charges).

All the above except Hereditas are subscription journals.

Why submission fees, or why not?

The Knowledge Exchange report interviewed publishers about the pros and cons of submission fees. Unfortunately, they don’t give any details of who was interviewed, except that they were ‘stakeholders including publishers, libraries, research funders, research institutions and individual researchers’, or the text of the interviews, so it is difficult to interpret the results. However, from these interviews the report identified the following advantages:

  • The costs of publication are spread over more authors
  • The fee may deter authors from submitting ‘on spec’ to a journal where they know their paper has only a tiny chance of being accepted, thus saving work for the journal.

The disadvantages mentioned included:

  • The fee might put off authors and thus make the journal less competitive
  • It was unclear whether funders would cover the charge (though interviews with funders for the study suggested that they would)
  • It would require administration.

Given the findings of this report, I'm surprised that more journals don't charge a submission fee. I would be surprised if a charge at the level of US$50–100 put off speculative submissions (the time it takes for a paper to be reviewed is surely a bigger cost to the authors). But for open access journals with high rejection rates, as the report says, it seems particularly appropriate. Is the risk of seeming uncompetitive with other journals the only reason why these fees aren't being widely tried?

This is interesting in the context of the statements by Nature Publishing Group that Nature couldn’t go open access because they would have to charge a very high publication fee. I’ve heard this most recently from Alison Mitchell at the debate ‘Evolution of Science’ in Oxford in February: she said that the publication fee would need to be about £10,000 (US$15,850) for Nature research journals and £30,000 (US$47,550) for Nature (see the video of the debate – this statement is at 17 minutes 30 seconds).

A conversation on Twitter about this NPG statement with Heather Piwowar (@researchremix, a postdoc with Dryad studying data use among researchers) and Ethan Perlstein (@eperlste, an evolutionary pharmacologist at Princeton University) led me to Jan Velterop (@Villavelius, a director of Aqcknowledge.com and a former colleague of mine at BioMed Central), who has written about submission fees several times on his blog. He kindly emailed me with further thoughts.

Jan’s most recent blog post summarises his reasons for liking submission fees:

The basic reason I am in favour of submission fees is that it makes scientific publishing really the service industry that it is, its main task nowadays having nothing to do with publishing per se, but mainly with arranging peer review and quality assurance of one sort or another.

Of course, this might not be what publishers want their main task to be…

Another argument in their favour that he lists is:

It removes the suspicion that OA journals might be tempted to accept more than they should just because of the money that accepted articles bring

And what about the disadvantages? Jan tells me that journal publishers are wary of introducing new fees that other journals don’t charge (see the ‘competitiveness’ point above). They are particularly wary because of a bit of history I didn’t know about:

One of the reasons why commercial journals dominate STM these days is the fact that society journals, still mostly independent in the 1960s, charged page charges. Commercial journals made much of the fact that (then) they didn't, and so attracted a growing percentage of authors, who could publish with them for free…

Among the reasons publishers are not too keen are:

1) The risk that authors ‘defect’ to journals without charges. After all, that happened before.

I can see that, given this history, journals might be more cautious than they would otherwise be.

Jan goes on to mention a reason I hadn’t heard before:

2) The risk that authors might expect transparency with regard to the speed, peer-review, and acceptance/rejection procedure. If you only have to pay when accepted (as is the case for the current author-side payment OA journals), you may not care too much about the speed, quality of the peer review, and acceptance processes, but if you have to pay even if you are rejected, then that becomes a very different story. Publishers know that they cannot guarantee any quality in that regard – with a few exceptions, perhaps – and fear the pressure of quality requirements on them if they were to move in that direction.

This is a very good point. It is certainly difficult to give guarantees about the speed or quality of peer review, which relies on voluntary work by researchers. It is related to a disadvantage listed in Jan’s recent blog post:

The need to be able to justify rejections properly, particularly if challenged (after all, submitters have paid for an assessment)

Jan also gives a third reason that intrigues me: that the level of submission fees might reveal information about a journal's rejection rate that it would rather keep quiet:

if they reject only about a tenth of the submissions, then obviously the submission charge cannot be very much lower than 9/10th of the publication charge for the same revenue to be achieved

So a journal might want to be seen as very selective, rejecting a high proportion of submitted articles, but it might actually have a much lower rejection rate than this. For example, say a journal with a rejection rate of 90% was considering a submission fee of $50 and a publication fee of $1000 (and all authors pay the submission fee, whether accepted or not). Then for every 9 articles accepted, the journal would receive $9,000 in publication fees, plus $4,500 for the 90 articles submitted, making $13,500. But if the same fees were applied to a journal that rejects only about 10%, then for every 9 articles accepted, it would get only $9,000 plus $500 for the 10 submitted articles ($9,500). The number of articles accepted is public, whereas the number rejected isn't. To get the level of fees it would receive if it had a 90% rejection rate, it would need to charge a submission fee of ($13,500 – $9,000)/10 = $450. This level of submission fee is unlikely to be acceptable to authors.

(My calculation comes out with a submission fee half what Jan estimates, which I think is because I am assuming both a submission fee and a publication fee are charged, whereas he is assuming only a submission fee.)
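If you want to play with these numbers yourself, here is a minimal sketch of the calculation in Python (the function name and the fee levels are mine, purely for illustration):

    def batch_revenue(accepted, submitted, submission_fee, publication_fee):
        # Total fee revenue when every submitting author pays the submission fee
        # and only the accepted papers pay the publication fee.
        return accepted * publication_fee + submitted * submission_fee

    high_rejection = batch_revenue(9, 90, 50, 1000)  # 90% rejection rate: 13500
    low_rejection = batch_revenue(9, 10, 50, 1000)   # 10% rejection rate: 9500

    # Submission fee the 10%-rejection journal would need (keeping the $1000
    # publication fee) to match the 90%-rejection journal's revenue:
    needed_fee = (high_rejection - 9 * 1000) / 10    # (13500 - 9000) / 10 = 450.0
    print(high_rejection, low_rejection, needed_fee)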

In conclusion, the main advantage of submission fees in some circumstances is also their main disadvantage in others: they reduce the number of submissions. So if a journal has a high rejection rate, it makes sense to charge a submission fee, but otherwise it doesn't. This applies to subscription and open access journals equally – in both cases a submission fee provides extra revenue, which could be used to reduce other charges, including subscriptions, page charges or publication fees (or to increase profits, of course). The main reason why high-rejection-rate journals aren't currently charging submission fees seems to be that it would make them less competitive, but given that these journals are by definition the places where people most want to publish, this doesn't seem a very strong argument. I wouldn't be surprised if one journal tried submission fees and others then followed suit in the next few years.

Your experience

Have you paid a submission fee to a journal? Would you consider it if it meant a lower level of other charges, such as page charges or fees for open access publication?

Journal editors: has your journal considered a submission fee? If you don’t have one, why not? If you do, why?

Journals that charge authors (and not for open access publication)

Amid the recent discussion of open access, there have been a few comments about the level of charges for open access publication. But of course many journals charge authors even without making their articles freely available. I think these charges are worth highlighting so that you can make an informed choice of journal.

Frequently these charges are to cover the cost of colour printing, which seems reasonable given that printed journal articles are nowadays a bonus rather than the norm. But not all of them: some journals have submission fees (which I'll cover in a future post), others have page charges, and I found two that even charge for supplementary material.

I’m not going to comment here on whether I think these charges are justified. But I suggest you take the charges into account when choosing a journal, and think about whether they represent value for money. If they go towards supporting a scientific society that you would like to donate to, for example, or if you feel that your paper will have its full impact only if printed in colour, you might be happy to pay. Also, if you can afford these charges, why not consider spending the money on making your article freely available instead?

Colour charges

In the past, print journals often charged authors for printing their article in colour, as colour printing was (and still is) more expensive than printing in black and white. With online publication there is no difference in cost, so it doesn’t make sense for journals to charge authors for colour for the online version of an article. But some journals are still charging for colour printing.

A few examples (with links to the relevant page) are:

  • The Journal of Neuroscience (Society for Neuroscience) charges US$1000 per colour figure, but offers free colour when it is judged essential by the editors and when the first and last authors are members of the society.
  • J Biol Chem charges US$150 per colour figure (with discounts for society members).
  • Evolution (Wiley-Blackwell) charges US$500 per printed figure. FEMS Microbiology Letters (also Wiley-Blackwell) offers free colour provided that the colour is deemed essential for interpretation of the figure, whereas another Wiley-Blackwell journal, Proteomics, charges from €500 for one colour figure up to €1664 for four.
  • FASEB Journal charges US$350 per colour figure.
  • BMJ Journals all seem to charge £250 per article for colour printing, but the BMJ itself (pdf) does not.
  • Of Oxford University Press journals, Bioinformatics and Human Molecular Genetics charge £350/US$600/€525 per colour figure, whereas Journal of Experimental Botany charges £100/US$190/€150.
  • Some Springer journals charge for colour printing, but I wasn’t able to find out which ones.
  • Similarly, some Nature Publishing Group journals charge for colour printing, but I wasn’t able to find out which ones. As far as I can tell, Nature and its sister journals with the word ‘Nature’ in the title have no charges.
  • Elsevier’s author site seems to imply that all their journals have colour charges.

Journals that do not charge for colour printing include:

Page charges

Page charges seem to be almost as common as colour charges, but there isn't much logic as to which journals charge for what. Only one journal that I could find, the Journal of Neuroscience, has a publication fee per article (US$980, or US$490 for Brief Communications) – all others charge per page, sometimes only above a certain page limit. For example:

  • FASEB Journal charges US$80 per printed page for the first 8 pages and $160 per page thereafter. Articles containing eight or more figures and/or tables cost an additional $150 per figure or table (a rough calculation of this schedule is sketched after this list).
  • J Biol Chem charges US$80 per page for the first nine pages and $160 per page thereafter (with discounts for society members).
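To make the arithmetic of these sliding scales concrete, here is a minimal sketch in Python of the FASEB Journal schedule quoted above (the function name is mine, and the figures are only those quoted in this post, so check the journal's own pages before relying on them):

    def faseb_page_charge(pages, figures_and_tables=0):
        # US$80 per printed page for the first 8 pages, US$160 per page thereafter
        charge = 80 * min(pages, 8) + 160 * max(pages - 8, 0)
        # An extra US$150 per figure or table applies once there are eight or more
        if figures_and_tables >= 8:
            charge += 150 * figures_and_tables
        return charge

    # A 10-page article with 9 figures/tables:
    # 8*80 + 2*160 + 9*150 = US$2310
    print(faseb_page_charge(10, 9))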

The charges don’t seem to be consistent within each publisher.

One publisher is consistent – none of the BMJ Journals, nor the BMJ itself (pdf), has any page charges.

Fees for supplementary material

I had never heard of the idea of charges for supplementary material until I was researching for this post. But FASEB Journal charges for supplemental ‘units’ (presumably files) at $160 each (up to four units are allowed), and Proc Natl Acad Sci USA charges US$250 per article for up to five pages of SI (US$500 over six pages). I haven’t come across any other journal that does this.

Your experience

Have I missed any important biomedical journals that have particularly striking charging policies (not including open access charges)? What do you think about these fees? Journal editors, what is the rationale for how much your journal charges, and for what? Do also let me know if you can expand on any of the incomplete parts of this post.

Journal news for February

News related to scientific journal publishing since 4 February.

Elsevier withdraws support for the Research Works Act

Since I covered this infamous draft US law and the associated boycott of Elsevier by academics (here and in news here) the flood of blog posts on the topic has continued, and I won’t attempt to summarise them here. But the pressure seems to have had an effect: on 27 February Elsevier announced that it is no longer supporting the act, although they ‘continue to oppose government mandates in this area’.

Meanwhile, a new act has been proposed, the Federal Research Public Access Act (FRPAA), which would mandate that all research funded by every federal funder with a budget over $100 million should be made open access 6 months after publication.

Industry group ‘threatens’ journals to delay publications

The Lancet has reported (pdf) that the Mining Awareness Resource Group (MARG) has written to several scientific journals advising them not to publish papers from a US government study of diesel exhaust and lung cancer until a court case and congressional directives are 'resolved'. The editor of Occupational and Environmental Medicine, Dana Loomis, is quoted as saying 'It is vague and threatening. This has a chilling effect on scientific communications—a matter of grave concern.'

New open access journal

The open access journal Biology Open has been launched by the Company of Biologists. The journal aims to provide the research community with 'an opportunity to publish valid and well-conducted experimental work that is otherwise robbed of timeliness and impact by the delays inherent in submission to established journals with more restrictive selection criteria'.

Twitter and paper citations

An arXiv preprint has found a correlation between mentions of a paper on Twitter and its later citations.

Criteria for the UK Research Excellence Framework 2014 announced

The Higher Education Funding Council for England (HEFCE) has announced the criteria and working methods that the panels for the assessment of research using the Research Excellence Framework (REF 2014) will use. REF will use citations as part of assessment but not impact factors or other bibliometrics (see page 25 of the full report for the statement regarding citations in the biology and medicine panel). Researchers at English universities will no doubt be scrutinizing the guidelines carefully.

* * * *

I’m sorry there hasn’t been a weekly Journal News recently, as I had hoped, and that this update is rather brief. I hope that the usefulness of these news updates depends more on their content than their regularity. If you want (much) more frequent updates from the world of journals and scientific publication, do follow me on Twitter!
