June highlights from the world of scientific publishing

The launch of Cofactor and what I learnt from Twitter in June: an unusual journal, a very large journal, a new journal and a very slow journal, plus accusations of publisher censorship and more.

The biggest highlight of the month for me was (of course) the launch of Cofactor, my company that aims to help researchers publish their work more effectively. The launch event in London was a great success – over 50 people from science and publishing came to hear about the company and from four speakers on the theme ‘What difference will changes in peer review make to authors and journals?’. Do have a look at the Cofactor website and see if we can help you improve your papers, learn about scientific publishing or decide on your publishing strategy.

On Twitter there was lots of news too!

Journals

Nature published a feature by Peter Aldhous (@paldhous) on ‘contributed’ papers in Proceedings of the National Academy of Sciences USA (PNAS). Members of the National Academy of Sciences can submit up to four papers per year using this track, and they are not peer reviewed after submission – rather, authors must obtain reviews themselves and submit them along with the paper, and the paper and reviews are assessed by members of the editorial board. In 2013, more than 98% of contributed papers were published, compared with only 18% of direct submissions (for which the review process is like that of most conventional journals).

Aldhous analysed papers from the contributed and directly submitted tracks and compared their citation rates:

…the difference between citation rates for directly submitted and contributed papers was not large — controlling for other factors such as discipline, contributed papers garnered about 4.5% fewer citations — but it was statistically significant. Nature’s analysis also suggests that the gap in citation rates between directly submitted and contributed papers has been narrowing, and this does not seem to be because more-recent papers have yet to acquire enough citations for the difference to show.

The analysis is described in a supplementary information file, but the full dataset is available only on request. I asked whether it would be better to release the full dataset so that others could reanalyse it. Others agreed, and I have collated the resulting Twitter discussion using Storify. In response, Peter Aldhous kindly put the full dataset on his website. This is a great example of the power of Twitter to get rapid responses to questions.

Meanwhile, a new journal was launching and another was celebrating an incredible milestone. The new journal is The Winnower (@TheWinnower), which describes itself as an open access online science publishing platform that uses open post-publication peer review. There is a charge of US$100 to publish an article, which is not editorially checked but is published straight away, after which anyone can add a review. There aren’t many papers there at the moment but The Winnower is an interesting experiment. It differs from the recently launched ScienceOpen (@Science_Open) in that the latter has editorial checks and only scientists can review papers.

The largest journal in the world is of course PLOS ONE, and this month it published its 100,000th article. There is no doubt that this megajournal is changing scientific publishing and will continue to do so.

Finally, Texas biologist Zen Faulkes (@DoctorZen) posted his experience of publishing in a small regional journal, The Southwestern Naturalist. He submitted the paper in 2011, got reviews back 9 months later, submitted a revised version within a week and it was finally accepted for the December 2013 issue… which actually came out in early June 2014. Although Zen wasn’t in a hurry, he says:

But after this experience, I think I would have been much better off submitting this paper to PLOS ONE or PeerJ or a similar venue.
Except… wait, PeerJ didn’t exist when I submitted this paper. With publications like PeerJ, journals like The Southwestern Naturalist are going to be in trouble soon.

I too wonder why authors would choose a small journal like this, especially if it is closed access, over one that can reach a much broader audience.

Difficulties with publishing criticism of the publishing industry

In late May an article entitled ‘Publisher, be damned! From price gouging to the open road’ was published by four Leicester University management researchers in a fairly obscure Taylor & Francis journal, Prometheus: Critical Studies in Innovation. I would not have noticed it had it not been for coverage by Paul Jump (@PaulJump) in Times Higher Education telling the extraordinary story behind the publication of the article. A senior manager at the publisher demanded that the article be cut, the issue containing it was delayed by 8 months, and the editorial board threatened to resign in protest at the publisher’s actions. There is more detail and analysis in this Library Journal article. Eventually the article was published with the names of particular publishers removed. It seems to me that the publisher’s attempt at suppressing the article had the opposite effect and is a good example of the Streisand Effect. (Via @LSEImpactBlog, @RickyPo and @Stephen_Curry)

Paper writing resources

Two very useful resources came to my attention this month. The first (via @digiwonk) was a blog post on The Chronicle of Higher Education’s Careers hub by Kirsten Bell, a research associate from the University of British Columbia. The post, entitled ‘The Really Obvious (but All-Too-Often-Ignored) Guide to Getting Published’, gives five simple tips for getting your article published. They are:

  1. Familiarize yourself with the journal you want to submit to
  2. Make sure you nominate reviewers, if the journal gives you the option to do so
  3. Don’t make it glaringly obvious that your paper has been rejected by another journal
  4. Learn how to write a paper before actually submitting one
  5. Be persistent (when it’s warranted)

The second was actually a guide to reading a scientific paper, but it is well worth reading when you are writing too, so as to understand how your paper is likely to be read. The article, in the Huffington Post by @JenniferRaff, a postdoc at the University of Texas, is entitled ‘How to Read and Understand a Scientific Paper: A Step-by-Step Guide for Non-Scientists’. She tells readers to identify the ‘big question’ and the ‘specific question’ that the authors are trying to answer with their research, and then, by reading the methods and results, to determine whether the results answer the specific question. If your paper is written so that the specific question and the answer to it are easy to find, you will get more readers and probably more citations. (Via @kirkenglehardt)

Altmetrics

You’ve seen the Altmetric donut – now here’s the Plum Analytics PlumPrint! (via @PlumAnalytics).

ImpactStory (@ImpactStory) have a good roundup of news about altmetrics from June, mentioning the PlumPrint, responses to a Higher Education Funding Council for England (HEFCE) consultation on the use of metrics in assessment, and highlights from a recent altmetrics conference.

Miscellany

A post by Manchester University computational biologist Casey Bergman (@CaseyBergman) was popular among the scientists I follow: ‘The Logistics of Scientific Growth in the 21st Century‘ argued that the exponential growth in academic research is over, and explored what that might mean for those at different stages in their careers.

On the BioMed Central blog, David Stern describes his unsuccessful efforts to replicate a published effect and the realisation that it was an artefact of data binning. He recommends not just providing the full data but displaying it too. (Via @emckiernan13 and @mbeisen)

There was a great post on the blog ‘The Rest of the Iceberg’ (@ECRPublishing) entitled ‘What To Do When You Are Rejected’, focusing on what you can learn from receiving peer reviews.

Launching a new venture with a debate on peer review

The big day has arrived: this evening about 60 people will gather in Kings Cross, London, to launch my new company, Cofactor. Hopefully lots more will follow along online using the hashtag #PeerRevFactors, because this will not just be a launch, it will also be an evening of short talks and discussion about peer review. The theme is ‘What difference will changes in peer review make to authors and journals?’ and we have four great speakers:

I will also give a brief introduction to Cofactor and to the theme, and after the talks the audience of science, publishing and communications people will join in to discuss what they’ve heard. The talks will start at about 19:00 BST. I will post a summary of the event here afterwards, including a Storify of the tweets.

What is Cofactor?

Regular readers of this blog will know that I know quite a lot about journals. For a while I have been looking for the best way in which I can use this knowledge to help researchers. The solution is a company offering editorial help, consultancy and workshops to researchers. It consists of me and a growing team of freelance editors and editorial consultants covering a wide area of science.

Having these expert editors to call on means that Cofactor can check and improve many more research papers than I could on my own. At the same time, clients still benefit from my expertise on every paper, as I check all the editing done by my freelancers. My time will also (hopefully!) be freed up to offer more specialised consultancy and to give workshops to groups of researchers.

I also hope to be able to get involved in more projects around scientific publishing, open science and so on. The most popular posts on this blog by far have been the surveys of journals with respect to their speed (of review and publication), impact metrics and charges (for open access and other things), so I will be doing updated surveys on these and other features of journals before long. One project that is already under way is an innovative Journal Selector tool, which will help researchers to choose a journal based on these kinds of factors.

Cofactor is offering several kinds of help with scientific papers: substantive editing, a quick check called the Cofactor Summary and an abstract check. We can also help researchers choose a journal, negotiate the peer review process or decide on a publishing strategy. And our workshops can help junior or more experienced researchers to understand the big changes in scientific publishing and how these affect them.

Do get in touch for help with publishing your papers, to book a workshop, or to talk about working for Cofactor.

What difference will changes in peer review make?

So, tonight’s theme is new forms of peer review and what difference they are making already and will make in the future.

What kinds of peer review are we talking about?

  • Open peer review
  • Post-publication peer review
  • Peer review that is independent of journals
  • Crowdsourced peer review
  • Innovative review processes involving discussion between reviewers and authors

Another relatively new kind of peer review is that practised by journals such as PLOS ONE and PeerJ (‘megajournals’), in which reviewers are asked to comment only on whether the science is sound and not whether the conclusions are interesting or significant.

My take is that anyone who writes scientific papers should start rethinking how they do this in the light of these changes. If your paper is reviewed in the open, everyone will be able to see the comments of the reviewers, and often the original submitted version too. So you can’t rely on reviewers or journal staff to quietly correct any errors. Unless you ensure errors are corrected before submission, they will be publicly visible when the paper is published.

If you think you can escape this public scrutiny by avoiding journals that have open review, think again. Services such as PubMed Commons and PubPeer are gaining in popularity, and papers that are seen to have major problems are being discussed at length in these and other forums.

So your best defence against criticism of your paper online is to ensure that it has no major errors when you first submit it to a journal. And the best way to do that is to get it checked by a professional editor before submission, someone with experience of editing presubmission journal papers and who knows the kinds of errors to look out for. And guess what: Cofactor has editors like this ready and waiting to check your paper!

Let’s get talking

So please do join the discussion today using the hashtag #PeerRevFactors, or in the comments here, and tell us what you think the effect of these new kinds of peer review will be. Have you commented on someone else’s paper or written a published review (I have)? Have you experienced open review or had comments on your paper after publication, and how did you feel about that? Have you changed the way you prepare your papers?

March highlights from the world of scientific publishing

An update on what I learnt from Twitter last month: dodgy citation metrics, mislabelled papers and journals and more.

Metrics

A wonderful Perspective piece appeared in the open access journal mBio entitled Causes for the Persistence of Impact Factor Mania. Here, Arturo Casadevall (Editor in Chief of the journal) and Ferric C. Fang treat the misuse of the journal impact factor as if it were a disease and suggest possible causes and treatments. They diagnose the main problem as: “Publication in prestigious journals has a disproportionately high payoff that translates into a greater likelihood of academic success”, and say that these disproportionate rewards “create compelling incentives for investigators to have their work published in such journals.” Their solutions are not new but are worth reading. (via @PeppeGanga)

A less useful post was a widely shared news feature in the Pacific Standard: Killing Pigs and Weed Maps: The Mostly Unread World of Academic Papers. This gave an interesting look at citation analysis, but it started with a rather dodgy statistic:

A study at Indiana University found that “as many as 50% of papers are never read by anyone other than their authors, referees and journal editors.” That same study concluded that “some 90% of papers that have been published in academic journals are never cited.”

This ‘study’ turns out to be a feature in Physics World from 2007 by Indiana University librarian Lokman I. Meho, in which these numbers are simply asserted, with no citation and no data to back them up. Yoni Appelbaum (@YAppelbaum) pointed out a paper by Vincent Larivière and Yves Gingras on arXiv that effectively debunks these numbers. I also found a paper from 2008 whose Discussion section cites various studies on the proportion of uncited papers – which ranged from 15% to 26% for scientific and mathematical research papers but was much higher in the social sciences (48% uncited) and humanities (93% uncited). So the situation isn’t as bad as the Pacific Standard made out, unless you are in the humanities.

Open access

The Wellcome Trust, the UK’s largest provider of non-governmental funding for scientific research, released a dataset on Figshare of the fees paid in the 2012-13 financial year for open access publication (APCs). @ernestopriego posted an initial analysis, @CameronNeylon posted a tidied up version of the dataset and @petermurrayrust and Michelle Brook (@MLBrook) initiated a crowdsourced attempt to check whether all the articles paid for were actually made open access by their publishers. The resulting spreadsheet will continue to be used for checking whether any paid open access papers are being wrongly marked as copyright of the publisher, or being put behind a paywall, or being given a link to payment for a licence to reproduce or reuse (anyone can help with this if they wish). Peter Murray-Rust has identified some examples where these errors have been made, which seem to be mostly from Elsevier, and this prompted Elsevier to post an explanation of why this is taking so long to fix (they were alerted to the problem two years ago, as Mike Taylor has explained).

Richard Poynder (@RickyPo) pointed me to a post on Google+ by David Roberts about changes in the APCs of Elsevier maths journals. Some have been pegged to small annual increases, others have gone up 6-8%, while one has had its APC reduced by 30%. The latter just happens to be the journal whose editorial board threatened to quit in protest at Elsevier’s continuing lack of sufficient support for open access. The APCs are generally between US$500 and US$5000. In response to this, Ross Mounce (@rmounce) pointed out that Ubiquity Press (@ubiquitypress), whose APCs are US$390, have given a full breakdown of what the APC pays for. @HansZauner asked why all publishers can’t do the same, but this seems unlikely to happen.

It was also Richard Poynder who tweeted a very useful guide to choosing an open access journal, produced by Ryerson University Library & Archives in Canada. This gives a series of tests to see whether a journal is likely to be reputable rather than a ‘predatory’ journal, including membership of OASPA, journal metrics, peer review procedure and editorial board membership. @BMJ_Open pointed out that the page implied that double blind peer review was the most widely accepted standard. The page has now been changed, perhaps in response to this comment, to say “Take into consideration that blind peer review and open peer review are both considered a credible standard for scientific publishing.”

Other open access and open data news:

  • @WoWter posted an analysis of how much it would cost the Netherlands to convert completely to gold open access.
  • The Directory of Open Access Journals (@DOAJplus) published a new application form that all journals must fill in to apply to be in the database. This includes a ‘DOAJ Seal’ that indicates the openness, indexability and discoverability of the journal. (via @MikeTaylor).
  • PLOS published an update and clarification of their open data policy, following the debates that I covered last month.
  • David Crotty wrote a good summary of the debate about PLOS’s open data policy for the @ScholarlyKitchn.
  • A new service called JournalClick was announced, which gives recommendations for open access papers to read based on what you have read (via @RickyPo).
  • A German court has ruled that the Creative Commons non-commercial (CC:NC) clause means that the material is only for personal use, so even state-owned radio stations with no advertisements, for example, are not permitted to use CC:NC material without permission (via @petermurrayrust).
  • Duke University Scholarly Communications Officer Kevin Smith (@klsmith4906) posted about two problems with Nature Publishing Group licensing: they have recently started to require Duke authors to request a formal waiver of their faculty open access policy, and their licence to publish requires the author to waive or agree not to assert their moral rights. @grace_baynes of Nature responded in a comment.

  • @damianpattinson of PLOS posted a report of an interesting talk entitled ‘The future is open: opportunities for publishers and institutions’ that he and his colleague Catriona MacCallum (@catmacOA) gave at the UKSG conference ‘Open Access Realities’ in London in November 2013.

New journals

The IEEE launched its new journal, IEEE Access, which claims to be an open access megajournal and was listed as one that was ‘coming soon’ in Pete Binfield (@p_binfield)’s December 2013 post on megajournals. However, the FAQ makes clear that in fact the authors are required to sign over copyright to the publisher, and reuse is not allowed, although the papers are free to read online. A discussion with @MattJHodgkinson and @BenMudrak clarified the situation for me. Matt pointed out that the Budapest Open Access Initiative FAQ says “Open access journals will either let authors retain copyright or ask authors to transfer copyright to the publisher”. So copyright transfer is allowed within open access, but restricting all reuse means that this journal should not be called an open access journal. IEEE Access also doesn’t conform to the standard definition of a megajournal, as the FAQ states “IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented.” Megajournals do not select on the basis of perceived ‘interest’, so this is not a megajournal.

Other developments

  • I haven’t kept up fully with the controversy surrounding a new method (called STAP) for producing stem cells that was published in Nature in January. Paul Knoepfler’s stem cell blog (and @pknoepfler) is the place to go for full updates, but I was concerned to read that Nature has declined to publish a ‘Brief Communication Arising’ reporting that the method does not work. It seems important to me that such follow-ups should be published in the same journal as the original paper.
  • Jocelyn Sze (@jocelynesze) pointed me to a series of 2012 articles in Frontiers in Computational Neuroscience on visions for the future of scientific publishing. This editorial by Nikolaus Kriegeskorte introduces the series.

February highlights from the world of scientific publishing

Some of what I learned about scientific publishing last month from Twitter: new open access journals, data release debates, paper writing tips, and lots more

New journals

Two important announcements this month, both of open access sister journals to well established ones.

First, at the AAAS meeting it was announced that Science is going to have an online-only open access sister journal, called Science Advances, from early 2015. This will be selective (not a megajournal), will publish original research and review articles in science, engineering, technology, mathematics and social sciences, and will be edited by academic editors. The journal will use a Creative Commons license, which generally allows for free use, but it has not yet been decided whether to allow commercial reuse, according to AAAS spokeswoman Ginger Pinholster. The author publishing charge hasn’t yet been announced.

Second, the Royal Society announced that, in addition to their selective open access journal Open Biology, they will be launching a megajournal, Royal Society Open Science, late in 2014. It will cover the entire range of science and mathematics, will offer open peer review as an option, and will also be edited by academic editors. Its criteria for what it will publish include “all articles which are scientifically sound, leaving any judgement of importance or potential impact to the reader” and “all high quality science including articles which may usually be difficult to publish elsewhere, for example, those that include negative findings”; it thus fits the usual criteria for a megajournal in that it will not select for ‘significance’ or potential impact.

These two announcements show that publishers without an open access, less selective journal in their stable are now unusual. Publishers are seeing that there is a demand for these journals and that they can make money from them. Publishers also see that they can gain a reputation for being friendly to open access by setting up such a journal. This also means that papers rejected by their more selective journals can stay within the publisher (via cascading peer review), which, while saving time for the authors by avoiding the need to start the submission process from scratch, also turns a potential negative for the publisher (editorial time spent on papers that are not published) into a positive (author charges). The AAAS has been slow to join this particular bandwagon; let’s see if the strong brand of Science is enough to persuade authors to publish in Science Advances rather than the increasingly large number of other megajournals.

PLOS data release policy

On 24 February, PLOS posted an updated version of the announcement about data release that they made in December (and which I covered last month). I didn’t pay much attention as the change had already been trailed, but then I had to sit up and take notice because I started seeing posts and tweets strongly criticising the policy. The first to appear was an angry and (in my opinion) over-the-top post by @DrugMonkeyblog entitled “PLoS is letting the inmates run the asylum and this will kill them”.  A more positive view was given by Michigan State University evolutionary geneticist @IanDworkin, and another by New Hampshire genomics researcher Matt MacManes (@PeroMHC). Some problems that the policy could cause small, underfunded labs were pointed out by Mexico-based neuroscience researcher Erin McKiernan (@emckiernan13). The debate got wider, reaching Ars Technica and Reddit – as of 3 March there have been 1045 comments on Reddit!

So what is the big problem? The main objections raised seem to me to fall into six categories:

  1. Some datasets would take too much work to get into a format that others could understand
  2. It isn’t always clear what kind of data should be published with a paper
  3. Some data files are too large to be easily hosted
  4. Others might publish reanalyses that the originators of the data were intending to publish themselves, so the originators would lose the credit for that further research
  5. Some datasets contain confidential information
  6. Some datasets are proprietary

I won’t discuss these issues in detail here, but if you’re interested it’s worth reading the comments on the posts linked above. It does appear, however (particularly from the update on their 24 February post and the FAQ posted on 28 February), that PLOS is very happy to discuss many of these issues with authors who have concerns, but analyses of proprietary data may have to be published elsewhere from now on.

I tend to agree with those who take a more positive view of this new policy, arguing that data publication will help increase reproducibility, help researchers to build on each other’s work and prevent fraud. In any case, researchers who disagree are free to publish in other journals with less progressive policies. PLOS is a non-profit publisher that says access to research results, immediately and without restriction, has always been at the heart of its mission, so it is being consistent in applying this strict policy.

Writing a paper

Miscellaneous news

  • Science writer @CarlZimmer explained eloquently at the AAAS meeting why open access to research, including open peer review and preprint posting, benefit science journalists and their readers.
  • Impactstory profiles now show the proportion of a researcher’s articles that are open access and give gold, silver and bronze badges, as well as showing how highly accessed, discussed and cited their papers are.
  • A new site has appeared where authors can review their experience with journals: Journalysis. It looks promising but needs reviews before it can become a really useful resource – go add one!
  • An interesting example of post-publication peer review starting on Twitter and continuing in a journal was described by @lakens here and his coauthor @TimSmitsTim here.
  • Cuban researcher Yasset Perez-Riverol (@ypriverol) explained why researchers need Twitter and a professional blog.
  • I realised when looking at an Elsevier journal website that many Elsevier journals now have very informative journal metrics, such as impact factors, Eigenfactor, SNIP and SJR for several years and average times from submission to first decision and from acceptance to publication. An example is here.
  • PeerJ founder @P_Binfield posted a Google Docs list of standalone peer review platforms.

January highlights from the world of scientific publishing

Some of what I learned last month from Twitter: new journals, new policies and post-publication reviews at PLOS, and some suggestions for how journals should work.

New journals

Three new journals have been announced that find new and very different ways to publish research. The most conventional is the Journal of Biomedical Publishing, a journal aiming to publish articles about publishing. It will be open access (with a low fee of 100 Euros) and promises only 2-4 days between acceptance and online publication. The journal has been set up by four Danish researchers and is published by the Danish Medical Association. At the forthcoming conference of the European Association of Science Editors, one of the four, Jacob Rosenberg, will present a study of where articles about publishing were published in 2012.

A journal that goes further from the conventional model is Proceedings of Peerage of Science, a journal for commentaries associated with the journal-independent peer review service Peerage of Science. The journal will publish commentaries on published research, mostly based on open reviews of papers that have been generated as part of Peerage of Science. These will be free to read [edited from ‘open access’ following comments below], but there is no fee to the author – on the contrary, the authors of these commentaries will potentially receive royalties! Anyone who values a particular commentary or the journal as a whole can become a ‘public patron’ and donate money, some of which will go to the author of that commentary. I will be watching this innovative business model with interest.

Finally, it is difficult to tell whether @TwournalOf will be a serious journal, but it certainly claims to be: a journal in which the papers each consist of a single tweet. ‘Papers’ are submitted by direct message, and the journal is run by Andy Miah (@andymiah), professor in ethics and emerging technologies at the University of the West of Scotland. I wondered (on Twitter of course) how this would work given that you can only send someone a direct message if they follow you. The answer came immediately: the journal will follow everyone who follows it. One to watch!

Developments at PLOS

Two announcements by Public Library of Science caught my eye this month. The first was actually in December but I missed it at the time and was alerted to it recently by @Alexis_Verger: PLOS have released a revised data policy (coming into effect in March) in which authors will be required to include a ‘data availability statement’ in all research articles published by PLOS journals. This statement will describe the paper’s compliance with the PLOS data policy, which will mean making all data underlying the findings described in their article fully available without restriction (though exceptions will be made, for example when patient confidentiality is an issue). This is another step in the movement towards all journals requiring the full dataset to be available. I hope other journals will follow suit.

The other announcement was about a post-publication review system called PLOS Open Evaluation. This is currently in a closed pilot stage, but it sounds like it will finally provide the evaluation of impact that the founders promised when they set up PLOS ONE to publish all scientifically sound research. Users will be able to rate an article by their interest in it, the article’s significance, the quality of the research, and the clarity of the writing. There is also the opportunity to go into more detail about any of these aspects.

How journals should work

The New Year started off with an open letter from Oxford psychology professor Dorothy Bishop (@deevybee) to academic publishers. She points out a big change that has happened because of open access:

In the past, the top journals had no incentive to be accommodating to authors. There were too many of us chasing scarce page space. But there are now some new boys on the open access block, and some of them have recognised that if they want to attract people to publish with them, they should listen to what authors want. And if they want academics to continue to referee papers for no reward, then they had better treat them well too.

Bishop urges journal publishers to make things easier for authors and reviewers, such as by not forcing them through pointless hoops when submitting a paper that might still be rejected (a choice quote: “…cutting my toenails is considerably more interesting than reformatting references”). She calls out eLife and PeerJ as two new journals that are doing well at avoiding most of the bad practices she outlines.

Later in the month Jure Triglav (@juretriglav), the creator of ScienceGist, showed what amazing things can be done with scientific figures using modern internet tools. He shows a ‘living figure’ based on tweets about the weather, and the figure continuously updates as it receives new data. Just imagine what journals would be like if this kind of thing were widely used!

Finally, this month’s big hashtag in science was #SixWordPeerReview. Researchers posted short versions of peer reviews they have received (or perhaps imagined). Most of the tweets were a caricature of what people think peer review involves (perhaps understandably for a humorous hashtag), and a few people (such as @clathrin) pointed out that real peer review can be very constructive.

F1000Research did a Storify of a selection, taking the opportunity to point out the advantages of open peer review at the same time. Some of my favourites were:

@paulcoxon: “Please checked Engilsh and grammar thoroughly” (actually happened)

@girlscientist: Didn’t even get journal name right. #SixWordEditorReview

@McDawg: Data not shown? No thank you

November highlights from the world of scientific publishing

Some of what I learned this month from Twitter: new preprint server, Google Scholar Library, papers on citations and p-values, and the most networked science conference ever

BioRxiv

In what could be a major development in the culture of publishing, a preprint server for biology, BioRxiv, was launched this month. It is based on the long-running arXiv preprint server used by physicists (and increasingly quantitative biologists). Nature News had a good summary.

Google Scholar Library

Google Scholar have launched a new service, Google Scholar Library (h/t @phylogenomics). This is meant to be a way to organize papers you read or cite, so it could be a competitor to reference managers such as Mendeley and Zotero. However, it doesn’t seem to be fully set up for citing papers yet: you can export citations in BibTeX, EndNote, RefMan or RefWorks format (but not to Mendeley or Zotero), or get a single citation in just MLA, APA or Chicago style.

“Top researchers” and their citations

Two papers of particular interest this month: the first actually came out in late October and is entitled “A list of highly influential biomedical researchers, 1996–2011” (European Journal of Clinical Investigation; h/t @TheWinnower). The paper, by  John Ioannidis and colleagues (who also published the influential “Why Most Published Research Findings Are False” paper), sorted biomedical authors in Scopus by their h-index and total citations and listed various pieces of information for the top 400 by this measure. I found this interesting for several reasons, including:
  • It gives a feeling for what makes a high h-index: of over 15 million authors, about 1% had an h-index of over 20, about 5000 over 50 and only 281 over 80.
  • It shows how different sources of citation data can give different h-indices for the same author (see Table 3 in the paper; as pointed out by @girlscientist)

The paper is limited by its reliance on citation data and the h-index alone, so should not be taken too seriously, but it is worth a look if you haven’t already seen it.
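For readers unfamiliar with the metric, here is a minimal sketch (my own illustration, not code from the paper) of how an h-index is computed from a list of per-paper citation counts: an author has index h if h of their papers each have at least h citations.

```python
# Minimal sketch of computing an h-index from per-paper citation counts.
# The citation numbers below are hypothetical, purely for illustration.
def h_index(citations):
    """Return the largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 10, 8, 5, 3, 3, 1, 0]))  # prints 4
```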

p-values vs Bayes factors

The second was a paper in PNAS by Valen Johnson (covered by Erika Check Hayden in Nature News) suggesting that the commonly used statistical standard of a p-value less than 0.05 is not good enough – in fact, around a quarter of findings that are significant at that level may be false. This conclusion was reached by developing a method to make the p-value directly comparable with the Bayes factor, which many statisticians prefer. As I’m not a statistician I’m not in a position to comment on the Bayesian/frequentist debate, but it is worth noting that this paper recommends a p-value threshold of less than 0.005 to be really sure of a result. A critical comment by a statistician is here (via @hildabast).
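For a rough sense of why a p-value of 0.05 can correspond to surprisingly weak evidence, the often-quoted Sellke–Bayarri–Berger calibration (a related bound, not Johnson’s own method) limits how strongly a given p-value can ever count against the null hypothesis:

```latex
% Upper bound on the Bayes factor against the null hypothesis,
% valid for p < 1/e (Sellke, Bayarri & Berger 2001):
\mathrm{BF}_{10} \;\le\; \frac{1}{-\,e\,p\,\ln p}
% p = 0.05  gives a bound of about 2.5 (at best weak evidence);
% p = 0.005 gives a bound of about 14, closer in spirit to the
% stricter threshold recommended in the PNAS paper.
```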

SpotOn London

Finally,  the main event of November for me was SpotOn London (#solo13), a two-day conference on science communication: policy, outreach and tools. This is one of the most connected conferences you can imagine: every session was live-streamed, the Twitter backchat was a major part of the proceedings, and many people followed along and joined in from afar. The session videos can all be viewed here.
For me four sessions were particular highlights:
  • The keynote talk by Salvatore Mele of CERN. This was not only an accessible explanation of the search for the Higgs Boson, and of the importance of open access and preprint publishing in high energy physics, but also a masterclass in giving an entertaining and informative presentation.
  • The discussion session Open, Portable, Decoupled – How should Peer Review change? (Storify of the tweets here)
  • The discussion session Altmetrics – The Opportunities and the Challenges (summary and some related links from Martin Fenner here)
  • A workshop I helped with, on rewriting scientific text using only the thousand most commonly used words in the English language (report by the organiser, Alex Brown, here)

Highlights from the scientific publishing world in October

A summary of the key things I have learned this month via Twitter: stings, harassment and post-publication peer review.

You may have noticed that this blog is not updated very often, but that my Twitter feed is updated several (sometimes many) times a day. I have decided to bring some highlights of this Twitter activity to my blog, so that those of you who (for some strange reason) aren’t on Twitter can get the benefit of all the interesting things I learn there every day. Of course, this summary will focus on scientific publishing and related fields. This may become a regular blog feature.

The biggest news early in October was the ‘sting’ published in Science by John Bohannon, which showed that some disreputable journals will accept even an obviously bad fake paper. There have been many, many posts and articles about this, collected in Zen Faulkes (@doctorzen)’s list. A few I found most insightful are:

  • A pair of posts by @neurobonkers, the first giving a good overview and the second classifying the journals included in the sting into those that accepted or rejected the fake paper with or without peer review.
  • This post by journal editor Gunther Eysenbach, who rejected the paper. He says “It is foolish to extrapolate these findings of a few black sheep publishers and scammers… to an entire industry. This would be as logical as concluding from Nigerian wire fraud emails that all lawyers who take a fee-for-service are scammers!”
  • The suggestion by Zen Faulkes that the fake paper could be a good resource for teaching how to write a paper.

Then there was the big scandal around sexual harassment in the science writing community, which has now led to the resignation of Scientific American’s blog editor, Bora Zivkovic. An overview in the Guardian science blog by Alice Bell (@alicebell) gives the low-down and this post by Jennifer Ouellette (@JenLucPiquant) is one of the more insightful on the issues.

And then PubMed launched a commenting system, PubMed Commons. This is so important that I am going to blog about it separately.

A few other interesting things:

  • The Economist published a special series of articles on science, including a long overview of the issues, covering the problem of reproducing results and publishing replications, important statistical issues, fraud, retractions and peer review (including the Science sting).
  • This post by Pat Thomson (@thomsonpat) drives home the importance of the ‘take home message’ in your paper.
  • @deevybee and others directed me to this amazingly comprehensive guide to making a conference poster by Colin Purrington.
  • Open Access Week ran from 21 to 27 October. The most notable related article was in the Guardian by Peter Suber: Open access: six myths to put to rest.
  • The chemistry journal ACS Nano published an editorial suggesting that allegations of fraud in a paper should be dealt with in private by the journal concerned, not discussed openly on blogs. Blogger Paul Bracher (@ChemBark) disagrees.

How to read journal instructions for authors

Journal editors often complain that few authors seem to read their instructions for authors. But journals don’t make it easy to read these instructions. Every publisher has its own way of displaying the instructions, with differences in the wording for the same thing, in the order in which information is presented and in how the information is split over web pages.

I’m going to attempt to bring some order to the chaos by picking out the points that really matter. These are:

  • Subject areas
  • Threshold for significance
  • Article types
  • Policies
  • Length limits
  • Article format for submission

There are also some things that nearly all journals require, which I’ll summarise at the end.

Scope

The most important thing to read when you are considering whether to submit to a particular journal is what subject areas it covers. This aspect is pretty straightforward, although it is the only area covered by most commercially available tools for choosing a journal, such as Edanz’s Journal Selector and JANE.

One important aspect to consider, however, is how broad a subject area you would like the journal to cover. If your study will be of interest to readers in more than one field, you will probably want an interdisciplinary journal that covers both fields.

Threshold

There is generally some statement in the instructions for authors or elsewhere in the journal information about the impact, significance or interest threshold. This can be written in all sorts of ways. For example:

  • Nature requires that articles “are of outstanding scientific importance” and “reach a conclusion of interest to an interdisciplinary readership”
  • Blood takes into account “the originality and importance of the observations or investigations, the quality of the work and validity of the evidence”
  • Cell says “The basic criterion for considering papers is whether the results provide significant conceptual advances into, or raise provocative questions and hypotheses regarding, an interesting biological question.”

‘Megajournals’ include a statement that the journal does not select on the basis of perceived impact or significance. For example:

  • PLOS ONE says “PLOS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership”
  • Frontiers says “Review editors focus on certifying the accuracy and validity of articles, not on evaluating their significance”
  • Scientific Reports says “Referees and Editorial Board Members will determine whether a paper is scientifically sound, rather than making judgements on novelty or whether the paper represents a conceptual advance.”
  • Biology Open focuses on “publication of good-quality sound research without a requirement for perceived impact”.

If you choose a selective journal rather than a megajournal, it is important to consider carefully whether your study is likely to reach their stated threshold. Get a colleague in another field to read your title and abstract and give an honest view of how groundbreaking they think it is compared with papers in various possible target journals.

You are likely to be biased towards finding your own work fascinating; never forget that editors and reviewers won’t share this view.

Article types

The instructions always include a list of the types of article that the journal publishes. Your paper must fit one of the article types and must follow the instructions for that type (especially regarding length limits).

What I call a research paper can be called by a variety of different names:

  • Original article
  • Original research
  • Research report
  • Primary research
  • Article
  • Letter

The word ‘Letter’ is used for a full (short) research paper in some journals (such as Nature journals) but for something much shorter in others, akin to the more colloquial meaning of the word letter.

Journals have a variety of criteria to distinguish between different article types. Sometimes the main difference is simply length, but often there is a difference in ‘significance’ or ‘completeness of the story’. These can be rather subjective judgements. Read a range of papers in the journal to get a feeling for the differences.

If your article isn’t a research paper, it is equally important to check whether the journal publishes articles like it. Usually journals invite review and comment articles, but some also accept unsolicited offers. Always send an email first describing your proposed review or comment, rather than just submitting it.

Policies

The policies section will vary a lot depending on the field. It will cover things like:

  • requirements for making data, software and materials available
  • ethics for animal experiments or human studies
  • adherence to subject-specific guidelines such as MIAME or CONSORT
  • adherence to authorship criteria, such as regarding ghostwriting and guest authorship (see the criteria laid out by the ICMJE)
  • whether they will accept papers that have previously been published on a preprint server or presented at a conference
  • policies on discussing the research with the media before publication.

It is crucial that your research follows all the guidelines for the journal. Violations can lead to immediate rejection.

Journals vary in how strict they are. However, if your study follows the highest possible ethical standards you are unlikely to find major differences between them. The exception to this is in journal policies on previous publication; newer journals are often less strict on this, and there is ongoing debate about the issue so instructions might change.

Format for submission

Some instructions aren’t to do with the manuscript content itself but rather its file format and other aspects of how it is uploaded to the journal’s submission system. Publishers vary in what they require in terms of:

  • File formats allowed (commonly allowed formats for text are doc, docx, odt and rtf; TeX files may or may not be accepted)
  • Whether the text and figures should be in a single file or separate files
  • Whether the figure legends should be under each figure or at the end of the text
  • Whether a cover letter is required and what it should contain
  • Whether page or line numbers should be included
  • Whether the manuscript should be double spaced
  • Whether suggestions or exclusions of reviewers are allowed or encouraged
  • Whether submission has to be through the online system or whether post or email is allowed

Following these instructions is advisable, as online submissions systems can be inflexible. If you don’t follow the instructions there may be a delay before the manuscript is looked at by the editors or sent to review.

Length limits

All print journals and many online-only journals have length limits. It is best to keep to them at first submission, if only to avoid annoying the editors and reviewers and to avoid having to shorten your paper later if it is accepted. Some journals will reject any paper that is too long without considering it.

There are usually also length limits on the title and abstract, and sometimes on other sections too. Limits on the numbers of figures, tables and references are also common.

Formatting within the manuscript

Then there are the details of how the manuscript is laid out. In general these instructions are not quite as important at the submission stage as those listed above, as any problems can be fixed once the article is accepted. However, some journals are strict about this kind of thing being done properly on first submission. And it isn’t always clear from the instructions to authors how strict they are. See my previous post about formatting for initial submission for more.

The kinds of things that journals care about in this category are:

  • Whether the abstract is subdivided into sections
  • What sections are required in the main text (usually Introduction, Methods, Results, Discussion or similar)
  • What order the sections should be in (whether the Methods come before the Results or after the Discussion)
  • Whether citations are allowed in the abstract
  • Whether the reference citations should be numbered in order or given in the form “(Author et al., 2009)”

Non-varying instructions

Finally, there are the requirements that practically all journals have, although they can be worded in a variety of ways. These include:

  • Use SI units
  • Define all abbreviations and special symbols on first use
  • Cite all figures, tables and references in the text
  • Gene symbols should be italic; protein names should be Roman.

For more on what most journals tend to have in their instructions, see the generic set of instructions provided by the International Committee of Medical Journal Editors (ICMJE).

There are companies and freelance editors, including me, who can help you to comply with instructions for authors for your target journal.

Submission to first decision time

Having written previously about journal acceptance to publication times, I think it is high time I looked at the other important interval that affects publication speed: the time from submission to first decision. As I explained in the previous post, the time from submission to publication in a peer-reviewed journal can be split into three phases: the two discussed in that post and in this one, plus the time the authors need to revise, which the journal can’t control.

A survey of submission to first decision times

I have trawled through the instructions to authors pages of the journals in the MRC frequently used journal list, which I have used in several previous posts as a handy list of relatively high-impact and well known biomedical journals. I’ve used the list as downloaded in 2012, and there may be new journals added to it now. I’ve omitted the review journals, which leaves 96.

From these pages I have tried to find any indication of the actual or intended speed to first decision for each journal. For many journals, no information was provided on the journal website about average or promised submission to first decision times. For example, no Nature Publishing Group, Lancet, Springer or Oxford University Press journals in this data set provide any information.

However, of these 96 journals 37 did provide usable information. I have put this information in a spreadsheet on my website.

Twenty promised a first decision within 28 or 30 days of submission, and 12 others promised 20-25 days. Of the rest, two are particularly fast, Circulation Research (13 days in 2012) and Cellular Microbiology (14 days); and one is particularly slow, Molecular and Cellular Biology (4 to 6 weeks, though they may just be more cautious in their promises than other journals). JAMA and Genetics are also relatively slow, with 34 and 35 days, respectively. (Note that the links here are to the page that states the time, which is generally the information for authors.)

A few journals promise a particularly fast decision for selected (‘expedited’) papers, but I have only considered the speed promised for all papers here.

I conclude from this analysis that, for relatively high-impact biomedical journals, a first decision within a month of submission is the norm. Anything faster than 3 weeks is fast, and anything slower than 5 weeks is slow.

Newer journals

But what about the newer journals? PeerJ has recently been boasting on its blog about authors who are happy with their fast decision times. The decision times given in this post are 17, 18 and 19 days. These are not necessarily typical of all PeerJ authors, though, and are likely to be biased towards the shorter times, as those whose decisions took longer won’t have tweeted about it and PeerJ won’t have included them in their post.

PLOS ONE gives no current information on its website about decision times. However, in a comment on a PLOS ONE blog post in 2009, the then Publisher Pete Binfield stated that “of the 1,520 papers which received a first decision in the second quarter of 2009 (April – June), the mean time from QC completion to first decision was 33.4 days, the median was 30 days and the SD was 18.” He didn’t say how long it took from submission to ‘QC completion’, which is presumably an initial check; I expect this would be only a few days.

Kent Anderson of the Scholarly Kitchen asked last year “Is PLOS ONE Slowing Down?“. This post only looked at the time between the submission and acceptance dates that are displayed on all published papers, and it included no data on decision dates, so the data tell us nothing about decision times. In a series of comments below the post David Solomon of Michigan State University gives more data, which shows that the submission to acceptance time went up only slightly between early 2010 and September 2011.

The star of journals in terms of decision time is undoubtedly Biology Open. It posts the average decision time in the previous month on its front page, and the figure currently given for February 2013 is 8 days. They say they aim to give a first decision within 10 days, and their tweets seem to bear this out: in June 2012 they tweeted that the average decision time in May 2012 had been 6 days, and similarly the time for April 2012 had been 9 days.

Other megajournals vary similarly to ordinary journals. Open Biology reports an average of 24 days, Cell Reports aims for 21 days, and G3 and Scientific Reports aim for 30 days. Springer Plus, the BMC series, the Frontiers journals, BMJ Open and FEBS Open Bio provided no information, though all boast of being fast.

What affects review speed?

If newer journals are faster, why might that be? One possible reason is that as the number of submitted papers goes up, the number of editors doesn’t always go up quickly enough, so the editors get overworked – whereas when a journal is new the number of papers to handle per editor may be lower.

It is important to remember that the speed of review is mainly down to the reviewers, as Andy Farke pointed out in a recent PLOS blog post. Editors can affect this by setting deadlines and chasing late reviewers, but they only have a limited amount of control over when reviewers send their reports.

But given this limitation, there could be reasons for variations in the average speed of review between journals. Reviewers might be excited by the prospect of reviewing for newer journals, so they are more likely to be fast. This could equally be true for the highest impact journals, of course, and also for open access journals if the reviewer is an open access fan. Enthusiastic reviewers not only mean that the reviewers who have agreed send their reports in more quickly, but also that it will be easier to get someone to agree to review in the first place. As Bob O’Hara pointed out in a comment on Andy Farke’s post, “If lots of people decline, you’re not going to have a short review time”.

A logical conclusion from this might be that the best way in which a journal could speed up its time to first decision would be to cultivate enthusiasm for their journal among the pool of potential reviewers. Building a community around the journal, using social media, conferences,  mascots or even free gifts might help. PeerJ seem to be aiming to build such a community with their membership scheme, not to mention their active Twitter presence and their monkey mascot. Biology Open‘s speed might be related to its sponsorship of meetings and its aim to “reduce reviewer fatigue in the community”.

Another less positive possible reason for shorter review times could be that reviewers are not being careful enough. This hypothesis was tested and refuted by the editors of Acta Neuropathologica in a 2008 editorial. (Incidentally, this journal had an average time from submission to first decision of around 17 days between 2005 and 2007, which is pretty fast.) The editorial says “Because in this journal all reviews are rated from 0 (worst) to 100 (best), we plotted speed versus quality. As reflected in Fig. 1, there is no indication that review time is related to the quality of a review.”

Your experience

I would love to find (or even do) some research into the actual submission to first decision times between different journals. Unfortunately that would mean getting the data from each publisher, and it might be difficult to persuade them to release it. (And I don’t have time to do this, alas.) Does anyone know of any research on this?

And have you experienced particularly fast or slow peer review at a particular journal? Are you a journal editor who can tell us about the actual submission to first decision times in your journal? Or do you have other theories for why some journals are quicker than others in this respect?

SpotOn London session: The journal is dead, long live the journal

I’m co-hosting a workshop at SpotOn London next week on the future of journals.

It’s time to end a long blogging hiatus to tell you about an exciting event coming up on Sunday 11 and Monday 12 November. SpotOn London (formerly called Science Online London) is a community event hosted by Nature Publishing Group for the discussion of how science is carried out and communicated online. There will be workshops on three broad topic areas – science communication and outreach, online tools and digital publishing, and science policy – and I am involved in one of the ‘online tools and digital publishing’ ones. This has the title ‘The journal is dead, long live the journal‘ and it will focus on current and future innovations in journal publishing. If you’re interested in how journals could or should change to better meet the needs of science, this is for you!

In this one-hour session we will have very short introductions from four representatives from different parts of the journal publishing world:

  • Matias Piipari (@mz2), part of the team behind Papers software for finding and organising academic papers
  • Damian Pattinson (@damianpattinson), Executive Editor of PLOS ONE
  • Davina Quarterman, Web Publishing Manager at Wiley-Blackwell
  • Ethan Perlstein (@eperlste) of Princeton University

We will then open the floor to contributions from participants, both in the room and online. We hope to cover three themes:

  1. Megajournals: their impact on the journal and on how papers are going to be organised into journals. Will megajournals lead to a two-tier marketplace of high-end journals and a few megajournals, with mid-tier journals disappearing from the market altogether?
  2. How do we find the papers of interest, in a world where journal brand doesn’t help? In a world where issues disappear, and researchers’ main point of contact with the literature is through aggregation points such as Google Scholar and PubMed, what are the signifiers that we can build or support that will enable researchers to find the content that they need?
  3. Once you get down to the paper, are there any innovations that we should be using now, at the individual paper level, and what are the barriers to us doing this?

Science Online events have a tradition of being more than just conferences – they aim to involve lots of people outside the room via the SpotOn website and Twitter as well as those in the room. So although the conference itself is sold out (though there is a waiting list for tickets), you can still follow along and get involved before, during and after the event itself. This session is at 4.30pm on Sunday 11 November, so look out on the Twitter hashtag #solo12journals around then. Beforehand, you can comment on co-host Ian Mulvany’s blog post introducing the session, look at the Google Doc that shows the thought processes the organisers went through in planning the session, check for tweets on the hashtag, and follow me (@sharmanedit), Ian (@ianmulvany) and co-host Bob O’Hara (@bobohara) and/or the speakers on Twitter for updates.

On the day, comments from Twitter will be moderated and introduced into the discussion in the room by Bob, who will be doing this remotely from Germany. The whole session (and all other SpotOn London sessions) will be live-streamed (probably here) and the video will be available afterwards; there will also be a Storify page collecting tweets using the #solo12journals hashtag.

This interaction with those outside the room is important because with only an hour there is a limit to the depth with which we will be able to cover the range of issues around journals. With online discussion as well we hope that more points can be discussed in more detail than would otherwise be possible. It might get a little confusing! I am new to this format, so I am slightly apprehensive but also excited about the possibilities.

Thoughts on megajournals

I am particularly interested in the aspect of the session on megajournals and how they are changing journal publishing. By megajournals we mean all the journals that have been set up to publish papers after peer review that assesses whether the research is sound but doesn’t attempt to second-guess the potential impact of the work. Some, like PLOS ONE, are truly mega – they published over 13,000 papers in 2011. Others, like the BMC series from BioMed Central, probably publish a similar number of papers but divided into many journals in different subject areas. Others have been set up to be sister journals to better known selective journals – for example, Scientific Reports from Nature Publishing Group and Biology Open from The Company of Biologists. All are open access and online only.

Some of these journals are now showing themselves not to be the dumping ground for boring, incremental research that they might have been expected to be. When PLOS ONE’s first impact factor was revealed to be over 4, there was surprise among many commentators. The question now is whether papers that are unlikely to be accepted by the top journals (roughly speaking, those with impact factors over about 10, though I know that impact factor is a flawed measure) will gradually be submitted not to specialist journals but to megajournals. The opportunity to get your paper seen by many people, which open access publishing provides, could often outweigh the benefits of publishing in a journal specific to your specialist community where your paper will be seen by only that community. I will be very interested to hear people’s thoughts on the issues raised by this session.

Get involved

So do comment using one of the channels mentioned above. Have you recently made a decision about where to send a paper that you knew wasn’t one for the top-flight journals, and did you decide on a specialist journal, a megajournal or some other route to publication? Regarding the other two themes of the session, how do you find papers in your field, and what do you want research papers to look like?