February highlights from the world of scientific publishing

Some of what I learned about scientific publishing last month from Twitter: new open access journals, data release debates, paper writing tips, and lots more

New journals

Two important announcements this month, both of open access sister journals to well-established ones.

First, at the AAAS meeting it was announced that Science will have an online-only open access sister journal, called Science Advances, from early 2015. This will be selective (not a megajournal), will publish original research and review articles in science, engineering, technology, mathematics and social sciences, and will be edited by academic editors. The journal will use a Creative Commons license, which generally allows free use, but AAAS has not yet decided whether to allow commercial reuse, according to spokeswoman Ginger Pinholster. The author publishing charge has also not yet been announced.

Second, the Royal Society announced that, in addition to their selective open access journal Open Biology, they will be launching a megajournal, Royal Society Open Science, late in 2014. It will cover the entire range of science and mathematics, will offer open peer review as an option, and will also be edited by academic editors. Its criteria for what it will publish include “all articles which are scientifically sound, leaving any judgement of importance or potential impact to the reader” and “all high quality science including articles which may usually be difficult to publish elsewhere, for example, those that include negative findings”; it thus fits the usual criteria for a megajournal in that it will not select for ‘significance’ or potential impact.

These two announcements show that a publisher without a less selective, open access journal in its stable is now unusual. Publishers have seen that there is demand for these journals and that they can make money from them. They have also seen that setting one up earns them a reputation for being friendly to open access. It also means that papers rejected by their more selective journals can stay within the publisher (via cascading peer review), which saves authors the time of starting the submission process from scratch and turns a potential negative for the publisher (editorial time spent on papers that are not published) into a positive (author charges). The AAAS has been slow to join this bandwagon; let’s see whether the strong brand of Science is enough to persuade authors to publish in Science Advances rather than in the increasingly large number of other megajournals.

PLOS data release policy

On 24 February, PLOS posted an updated version of the announcement about data release that they made in December (and which I covered last month). I didn’t pay much attention as the change had already been trailed, but then I had to sit up and take notice because I started seeing posts and tweets strongly criticising the policy. The first to appear was an angry and (in my opinion) over-the-top post by @DrugMonkeyblog entitled “PLoS is letting the inmates run the asylum and this will kill them”. A more positive view was given by Michigan State University evolutionary geneticist @IanDworkin, and another by New Hampshire genomics researcher Matt MacManes (@PeroMHC). Some problems that the policy could cause small, underfunded labs were pointed out by Mexico-based neuroscience researcher Erin McKiernan (@emckiernan13). The debate got wider, reaching Ars Technica and Reddit – as of 3 March there have been 1045 comments on Reddit!

So what is the big problem? The main objections raised seem to me to fall into six categories:

  1. Some datasets would take too much work to get into a format that others could understand
  2. It isn’t always clear what kind of data should be published with a paper
  3. Some data files are too large to be easily hosted
  4. Others might publish reanalyses that the originators of the data were intending to publish themselves, so the originators would lose the credit for that further research
  5. Some datasets contain confidential information
  6. Some datasets are proprietary

I won’t discuss these issues in detail here, but if you’re interested it’s worth reading the comments on the posts linked above. It does appear, however (particularly from the update on their 24 February post and the FAQ posted on 28 February), that PLOS is very happy to discuss many of these issues with authors who have concerns, though analyses of proprietary data may have to be published elsewhere from now on.

I tend to agree with those who take a more positive view of this new policy, who argue that data publication will help increase reproducibility, help researchers to build on each other’s work and prevent fraud. In any case, researchers who disagree are free to publish in other journals with less progressive policies. PLOS is a non-profit publisher that says access to research results, immediately and without restriction, has always been at the heart of its mission, so it is being consistent in applying this strict policy.

Miscellaneous news

  • Science writer @CarlZimmer explained eloquently at the AAAS meeting why open access to research, including open peer review and preprint posting, benefits science journalists and their readers.
  • Impactstory profiles now show the proportion of a researcher’s articles that are open access and give gold, silver and bronze badges, as well as showing how highly accessed, discussed and cited their papers are.
  • A new site has appeared where authors can review their experience with journals: Journalysis. It looks promising but needs reviews before it can become a really useful resource – go add one!
  • An interesting example of post-publication peer review starting on Twitter and continuing in a journal was described by @lakens here and by his coauthor @TimSmitsTim here.
  • Cuban researcher Yasset Perez-Riverol (@ypriverol) explained why researchers need Twitter and a professional blog.
  • I realised when looking at an Elsevier journal website that many Elsevier journals now have very informative journal metrics, such as impact factors, Eigenfactor, SNIP and SJR for several years and average times from submission to first decision and from acceptance to publication. An example is here.
  • PeerJ founder @P_Binfield posted a Google Docs list of standalone peer review platforms.

Submission to first decision time

I have written previously about journal acceptance to publication times, so it is high time I looked at the other important interval that affects publication speed: submission to first decision time. As I explained in the previous post, the time from submission to publication in a peer reviewed journal can be split into three phases: the two discussed in that post and this one, plus the time the authors need to revise, which the journal can’t control.

A survey of submission to first decision times

I have trawled through the instructions to authors pages of the journals in the MRC frequently used journal list, which I have used in several previous posts as a handy list of relatively high-impact and well known biomedical journals. I’ve used the list as downloaded in 2012, so any journals added since then are not included. I’ve omitted the review journals, which leaves 96.

From these pages I have tried to find any indication of the actual or intended speed to first decision for each journal. For many journals, no information was provided on the journal website about average or promised submission to first decision times. For example, no Nature Publishing Group, Lancet, Springer or Oxford University Press journals in this data set provide any information.

However, of these 96 journals 37 did provide usable information. I have put this information in a spreadsheet on my website.

Twenty promised a first decision within 28 or 30 days of submission, and twelve others promised 20–25 days. Of the rest, two are particularly fast: Circulation Research (13 days in 2012) and Cellular Microbiology (14 days). One is particularly slow: Molecular and Cellular Biology (4 to 6 weeks, though they may just be more cautious in their promises than other journals). JAMA and Genetics are also relatively slow, at 34 and 35 days, respectively. (Note that the links here are to the page that states the time, which is generally the information for authors.)

A few journals promise a particularly fast decision for selected (‘expedited’) papers, but I have only considered the speed promised for all papers here.

I conclude from this analysis that, for relatively high-impact biomedical journals, a first decision within a month of submission is the norm. Anything faster than 3 weeks is fast, and anything slower than 5 weeks is slow.
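
If you want to repeat this tally on a newer version of the list, the arithmetic is simple enough to script. Here is a minimal sketch, assuming a hypothetical CSV file decision_times.csv with columns journal and promised_days (not the exact format of my spreadsheet):

```python
# Minimal sketch of the tally above. Assumes a hypothetical CSV
# 'decision_times.csv' with columns 'journal' and 'promised_days',
# one row per journal that states a promised time to first decision.
import pandas as pd

df = pd.read_csv("decision_times.csv")

print(f"{len(df)} journals state a time to first decision")
print(f"median promise: {df['promised_days'].median():.0f} days")

# The rough bands used above: under 3 weeks is fast, over 5 weeks is slow.
fast = df[df["promised_days"] < 21]
slow = df[df["promised_days"] > 35]
print("fast:", ", ".join(fast["journal"]))
print("slow:", ", ".join(slow["journal"]))
```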

Newer journals

But what about the newer journals? PeerJ has recently been boasting on its blog about authors who are happy with their fast decision times. The decision times given in this post are 17, 18 and 19 days. These are not necessarily typical of all PeerJ authors, though, and are likely to be biased towards the shorter times: those whose decisions took longer won’t have tweeted about it, and PeerJ won’t have included them in the post.

PLOS One gives no current information on its website about decision times. However, in a comment on a PLOS One blog post in 2009, the then Publisher Pete Binfield stated that “of the 1,520 papers which received a first decision in the second quarter of 2009 (April – June), the mean time from QC completion to first decision was 33.4 days, the median was 30 days and the SD was 18.” He didn’t say how long it took from submission to ‘QC completion’, which is presumably an initial check; I expect this would be only a few days.

Kent Anderson of the Scholarly Kitchen asked last year “Is PLOS ONE Slowing Down?”. This post only looked at the time between the submission and acceptance dates that are displayed on all published papers, and it included no data on decision dates, so it tells us nothing about decision times. In a series of comments below the post, David Solomon of Michigan State University gives more data, which show that the submission to acceptance time went up only slightly between early 2010 and September 2011.

The star of journals in terms of decision time is undoubtedly Biology Open. It posts the average decision time in the previous month on its front page, and the figure currently given for February 2013 is 8 days. They say they aim to give a first decision within 10 days, and their tweets seem to bear this out: in June 2012 they tweeted that the average decision time in May 2012 had been 6 days, and similarly the time for April 2012 had been 9 days.

Other megajournals vary similarly to ordinary journals. Open Biology reports an average of 24 days, Cell Reports aims for 21 days, and G3 and Scientific Reports aim for 30 days. SpringerPlus, the BMC series, the Frontiers journals, BMJ Open and FEBS Open Bio provided no information, though all boast of being fast.

What affects review speed?

If newer journals are faster, why might that be? One possible reason is that as the number of submitted papers goes up, the number of editors doesn’t always go up quickly enough, so the editors get overworked – whereas when a journal is new the number of papers to handle per editor may be lower.

It is important to remember that the speed of review is mainly down to the reviewers, as Andy Farke pointed out in a recent PLOS blog post. Editors can affect this by setting deadlines and chasing late reviewers, but they only have a limited amount of control over when reviewers send their reports.

But given this limitation, there could still be reasons for variation in the average speed of review between journals. Reviewers might be excited by the prospect of reviewing for newer journals, making them more likely to be fast. This could equally be true for the highest impact journals, of course, and also for open access journals if the reviewer is an open access fan. Reviewer enthusiasm not only means that those who have agreed send their reports in more quickly, but also makes it easier to get someone to agree to review in the first place. As Bob O’Hara pointed out in a comment on Andy Farke’s post, “If lots of people decline, you’re not going to have a short review time”.

A logical conclusion from this might be that the best way for a journal to speed up its time to first decision is to cultivate enthusiasm for the journal among the pool of potential reviewers. Building a community around the journal, using social media, conferences, mascots or even free gifts, might help. PeerJ seem to be aiming to build such a community with their membership scheme, not to mention their active Twitter presence and their monkey mascot. Biology Open’s speed might be related to its sponsorship of meetings and its aim to “reduce reviewer fatigue in the community”.

Another less positive possible reason for shorter review times could be that reviewers are not being careful enough. This hypothesis was tested and refuted by the editors of Acta Neuropathologica in a 2008 editorial. (Incidentally, this journal had an average time from submission to first decision of around 17 days between 2005 and 2007, which is pretty fast.) The editorial says “Because in this journal all reviews are rated from 0 (worst) to 100 (best), we plotted speed versus quality. As reflected in Fig. 1, there is no indication that review time is related to the quality of a review.”
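
For any journal that scores its reviews like this, the check is straightforward to repeat. Here is a minimal sketch, assuming a hypothetical CSV file reviews.csv with columns days_to_review and quality_score (0–100) – not the journal’s actual data:

```python
# Sketch of the speed-versus-quality check described in the editorial.
# Assumes a hypothetical CSV 'reviews.csv' with columns 'days_to_review'
# and 'quality_score' (0 = worst, 100 = best); not the journal's real data.
import pandas as pd
from scipy.stats import pearsonr

reviews = pd.read_csv("reviews.csv")
r, p = pearsonr(reviews["days_to_review"], reviews["quality_score"])
print(f"review time vs quality: r = {r:.2f}, p = {p:.3f}")
# An r close to zero, as the editors report, would mean faster reviews
# are not systematically lower in quality.
```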

Your experience

I would love to find (or even do) some research comparing the actual submission to first decision times of different journals. Unfortunately that would mean getting the data from each publisher, and it might be difficult to persuade them to release it. (And I don’t have time to do this, alas.) Does anyone know of any research on this?

And have you experienced particularly fast or slow peer review at a particular journal? Are you a journal editor who can tell us about the actual submission to first decision times in your journal? Or do you have other theories for why some journals are quicker than others in this respect?

SpotOn London session: The journal is dead, long live the journal

I’m co-hosting a workshop at SpotOn London next week on the future of journals.

It’s time to end a long blogging hiatus to tell you about an exciting event coming up on Sunday 11 and Monday 12 November. SpotOn London (formerly called Science Online London) is a community event hosted by Nature Publishing Group for the discussion of how science is carried out and communicated online. There will be workshops on three broad topic areas – science communication and outreach, online tools and digital publishing, and science policy – and I am involved in one of the ‘online tools and digital publishing’ ones. This has the title ‘The journal is dead, long live the journal‘ and it will focus on current and future innovations in journal publishing. If you’re interested in how journals could or should change to better meet the needs of science, this is for you!

In this one-hour session we will have very short introductions from four representatives from different parts of the journal publishing world:

  • Matias Piipari (@mz2), part of the team behind Papers software for finding and organising academic papers
  • Damian Pattinson (@damianpattinson), Executive Editor of PLOS ONE
  • Davina Quarterman, Web Publishing Manager at Wiley-Blackwell
  • Ethan Perlstein (@eperlste) of Princeton University

We will then open the floor to contributions from participants, both in the room and online. We hope to cover three themes:

  1. Megajournals: their impact on journal publishing and on how papers are organised into journals. Will megajournals lead to a two-tier marketplace of high-end journals and a few megajournals, with mid-tier journals disappearing from the market altogether?
  2. How do we find the papers of interest in a world where journal brand doesn’t help? In a world where issues disappear, and researchers’ main point of contact with the literature is through aggregation points such as Google Scholar and PubMed, what are the signifiers that we can build or support to enable researchers to find the content that they need?
  3. Once you get down to the paper, are there any innovations that we should be using now, at the individual paper level, and what are the barriers to us doing this?

Science Online events have a tradition of being more than just conferences – they aim to involve lots of people outside the room via the SpotOn website and Twitter as well as those in the room. So although the conference itself is sold out (though there is a waiting list for tickets), you can still follow along and get involved before, during and after the event itself. This session is at 4.30pm on Sunday 11 November, so look out on the Twitter hashtag #solo12journals around then. Beforehand, you can comment on co-host Ian Mulvany’s blog post introducing the session, look at the Google Doc that shows the thought processes the organisers went through in planning the session, check for tweets on the hashtag, and follow me (@sharmanedit), Ian (@ianmulvany) and co-host Bob O’Hara (@bobohara) and/or the speakers on Twitter for updates.

On the day, comments from Twitter will be moderated and introduced into the discussion in the room by Bob, who will be doing this remotely from Germany. The whole session (and all other SpotOn London sessions) will be live-streamed (probably here) and the video will be available afterwards; there will also be a Storify page collecting tweets using the #solo12journals hashtag.

This interaction with those outside the room is important because with only an hour there is a limit to the depth with which we will be able to cover the range of issues around journals. With online discussion as well we hope that more points can be discussed in more detail than would otherwise be possible. It might get a little confusing! I am new to this format, so I am slightly apprehensive but also excited about the possibilities.

Thoughts on megajournals

I am particularly interested in the part of the session on megajournals and how they are changing journal publishing. By megajournals we mean all the journals that have been set up to publish papers after peer review that assesses whether the research is sound but doesn’t attempt to second-guess the potential impact of the work. Some, like PLOS ONE, are truly mega – it published over 13,000 papers in 2011. Others, like the BMC series from BioMed Central, probably publish a similar number of papers in total but divided into many journals in different subject areas. Others have been set up to be sister journals to better known selective journals – for example, Scientific Reports from Nature Publishing Group and Biology Open from The Company of Biologists. All are open access and online only.

Some of these journals are now showing themselves not to be the dumping grounds for boring, incremental research that they might have been expected to be. When PLOS ONE’s first impact factor was revealed to be over 4, there was surprise among many commentators. The question now is whether papers that are unlikely to be accepted by the top journals (roughly speaking, those with impact factors over about 10, though I know that impact factor is a flawed measure) will gradually be submitted not to specialist journals but to megajournals. The opportunity to get your paper seen by many people, which open access publishing provides, could often outweigh the benefit of publishing in a journal specific to your specialist community, where your paper will be seen by only that community. I will be very interested to hear people’s thoughts on this issue during the session.

Get involved

So do comment using one of the channels mentioned above. Have you recently made a decision about where to send a paper that you knew wasn’t one for the top-flight journals, and did you decide on a specialist journal, a megajournal or some other route to publication? Regarding the other two themes of the session, how do you find papers in your field, and what do you want research papers to look like?

Journal news for February

News related to scientific journal publishing since 4 February.

Elsevier withdraws support for the Research Works Act

Since I covered this infamous draft US law and the associated boycott of Elsevier by academics (here and in news here), the flood of blog posts on the topic has continued, and I won’t attempt to summarise them here. But the pressure seems to have had an effect: on 27 February Elsevier announced that it is no longer supporting the act, although they ‘continue to oppose government mandates in this area’.

Meanwhile, a new act has been proposed, the Federal Research Public Access Act (FRPAA), which would mandate that all research funded by every federal funder with a budget over $100 million should be made open access 6 months after publication.

Industry group ‘threatens’ journals to delay publications

The Lancet has reported (pdf) that the Mining Awareness Resource Group (MARG) has written to several scientific journals advising them not to publish papers from a US government study of diesel exhaust and lung cancer until a court case and congressional directives are ‘resolved’. The editor of Occupational and Environmental Medicine, Dana Loomis, is quoted as saying ‘It is vague and threatening. This has a chilling effect on scientific communications—a matter of grave concern.’

New open access journal

The open access journal Biology Open has been launched by the Company of Biologists. The journal aims to provide the research community with ‘an opportunity to publish valid and well-conducted experimental work that is otherwise robbed of timeliness and impact by the delays inherent in submission to established journals with more restrictive selection criteria’.

Twitter and paper citations

An arXiv preprint has found a correlation between mentions of a paper on Twitter and its later citations.

Criteria for the UK Research Excellence Framework 2014 announced

The Higher Education Funding Council for England (HEFCE) has announced the criteria and working methods that the panels assessing research under the Research Excellence Framework (REF 2014) will use. The REF will use citations as part of assessment but not impact factors or other bibliometrics (see page 25 of the full report for the statement regarding citations in the biology and medicine panel). Researchers at English universities will no doubt be scrutinising the guidelines carefully.

* * * *

I’m sorry that there hasn’t been a weekly Journal News recently, as I had hoped there would be, and that this update is rather brief. I hope that the usefulness of these news updates depends more on their content than their regularity. If you want (much) more frequent updates from the world of journals and scientific publication, do follow me on Twitter!

Journal news for 20-27 January

A brief summary of recent news related to journals and scientific publishing.

Datasets International

The open access publisher Hindawi has launched Datasets International, which “aims at helping researchers in all academic disciplines archive, document, and distribute the datasets produced in their research to the entire academic community.” For a processing charge of $300 authors can upload an apparently unlimited amount of data under a Creative Commons CC0 licence (and associated dataset papers under an Attribution licence), according to comments on Scott Edmunds’ Gigablog. The new journals currently associated with this initiative are Dataset Papers in: Cell Biology, Optics, Atmospheric Sciences and Materials Science, though no doubt more will follow. (Heard via @ScottEdmunds.)

Peerage of Science

A company run by three Finnish scientists has this week launched a new take on improving peer review. Peerage of Science is a community of scientists (‘Peers’), formed initially by invitation, who review each other’s papers anonymously before submission to journals. Reviews are themselves subjected to review, which means that reviewers receive recognition and ratings for their work. The reviews can even be published in a special journal, Proceedings of the Peerage of Science. Journals can offer to publish manuscripts at any point, for a fee – this is how the company aims to make a profit. (Heard via chemistryworldblog, via @adametkin.)

Peer review by curated social media

Science writer Carl Zimmer (@carlzimmer) reported last week in the New York Times on a recent (open access) study in Proc Natl Acad Sci USA about the generation of multicellular yeast by artificial selection in the lab. He has now posted a follow-up article on his Discover blog, in which he presents the conversation that followed on Twitter about this paper (using Storify) and invites the author to respond, which the author does. The comments on the latter post continue the conversation, and the author continues to respond. It’s an interesting example of the author of a controversial paper engaging constructively in post-publication peer review. (Heard via @DavidDobbs.)

Research Objects

Tom Scott (@derivadow, who works for Nature Publishing Group) has published a detailed blog post outlining a proposal for a new kind of scientific publication: the Research Object. This would be a collection of material, linked by a Uniform Resource Identifier (URI), including an article, raw data, protocols, links to news about the research published elsewhere, links to the authors and their institutions, and more. He credits the Force11 (‘Future of Research Communications and e-Scholarship’) community for the idea, which is developed in greater detail here (pdf). These elements may or may not be open access, although the sophisticated searches Scott envisages will be difficult if they are not. (Heard via @SpringerPlus.)

Analysis of F1000 Journal Rankings

Phil Davis of The Scholarly Kitchen has done an analysis of the journal ranking system announced by Faculty of 1000 (F1000) in October. The analysis includes nearly 800 journals that were given a provisional F1000 Journal Factor (called FFj by F1000) for 2010. Plotting the FFj of each journal against the number of articles from it that were evaluated by F1000 shows that the two numbers are closely related; in fact, the number of articles evaluated explains over 91% of the variation in FFj. The ranking is therefore biased against journals from which only a few articles were evaluated, and also against interdisciplinary and physical science journals that publish little biology. It seems to me that these biases could easily be addressed by taking into account (a) the number of articles evaluated from each journal and (b) the proportion of biology articles published in it when calculating the FFj. F1000 would be wise to study this useful analysis when reviewing their ranking system, as they plan to do regularly, according to the original announcement. (Heard via @ScholarlyKitchn.)
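
The headline figure – the number of articles evaluated explaining over 91% of the variation in FFj – is the R² of a simple regression, which anyone with the journal-level numbers could verify. Here is a minimal sketch, assuming a hypothetical CSV file f1000_journals.csv with columns ffj and n_evaluated (not Phil Davis’s actual dataset or exact method):

```python
# Sketch of the variance-explained check. Assumes a hypothetical CSV
# 'f1000_journals.csv' with columns 'ffj' and 'n_evaluated', one row
# per journal; not Phil Davis's actual dataset or exact method.
import pandas as pd
from scipy.stats import linregress

jf = pd.read_csv("f1000_journals.csv")
fit = linregress(jf["n_evaluated"], jf["ffj"])
print(f"R^2 = {fit.rvalue ** 2:.2f}")  # ~0.91 would match the reported figure
```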

Choosing a journal II: getting your paper noticed

This is the second in a series of posts on factors to consider when choosing which journal to submit your paper to. Here, I will look at how your choice of journal can affect the extent to which your work is noticed.

Part one of the series, on getting your paper published quickly, is here.

How well known is the journal?

It goes without saying that papers in very well known journals like Nature, Science, Proc Natl Acad Sci USA and Cell will be seen by more people than those in other journals. The more specialist journals may, however, be the ones that are seen by the people in your field, who you mainly want to reach.

How well do they publicise their papers?

Most journals send out their tables of contents to interested readers by email. This is really a minimum, and any journal that doesn’t do it doesn’t deserve your paper. But there are other ways that journals can publicise papers as well.

  • Press releases – do they have an active press office that writes attractive press releases and sends them to relevant media together with the embargoed article before publication? Do they also send press releases to bloggers (who are often more interested in science news than are the mainstream media)?
  • Do they have a blog highlighting recent papers?
  • Do they have a Twitter account with plenty of followers?
  • Do they have a Facebook page ‘liked’ by plenty of people?
  • Do they give awards for top papers of the year or similar?

Do they publish articles highlighting research papers?

Nature, for example, publishes News and Views articles, which are short pieces highlighting the most interesting research papers in the current issue. Many other journals publish similar ‘minireviews’ about recent papers.

Some journals have an editorial in each issue summarising the papers in that issue (such as in the current issue of Gut).

Some have news sections in print or online (e.g. Science) where more accessible pieces about recent papers from that journal or elsewhere can reach a general readership.

Open access versus subscription

There is some evidence that open access articles are cited more than those in subscription journals; there is also some evidence against a citation advantage. It is certainly worth considering whether you want your paper to be read just by potential readers in an institute that subscribes to the journal, or also by independent researchers, those in less well funded institutes, journalists and members of the public.

If you don’t go for a full open access journal (called ‘gold’ open access), will the journal allow you to post a version on your own website or in a repository (‘green’ open access; definitions here)? Does the paper become free to read online 6 months or 12 months after publication, or never? Journals vary a lot in their restrictions on how an author can distribute their paper (see the SHERPA/RoMEO directory for details).

Is the journal indexed?

When a journal is set up it takes a while for it to be indexed by services such as PubMed, ISI Web of Science and AGRICOLA. New journals will therefore not yet be indexed, so articles won’t be findable in searches of these databases. This makes it slightly more risky to publish in very new journals.

Your experience

Have you had a published paper promoted in any of these ways, or others? Did you choose open access to get your paper seen by more people? What difference do you think these choices made?