February highlights from the world of scientific publishing

Some of what I learned about scientific publishing last month from Twitter: new open access journals, data release debates, paper writing tips, and lots more

New journals

There were two important announcements this month, both of open access sister journals to well-established ones.

First, at the AAAS meeting it was announced that Science is to have an online-only open access sister journal, called Science Advances, from early 2015. This will be selective (not a megajournal), will publish original research and review articles in science, engineering, technology, mathematics and social sciences, and will be edited by academic editors. The journal will use a Creative Commons license, which generally allows free reuse, but it hasn’t yet been decided whether to allow commercial reuse, according to AAAS spokeswoman Ginger Pinholster. The author publishing charge also hasn’t been announced.

Second, the Royal Society announced that, in addition to their selective open access journal Open Biology, they will be launching a megajournal, Royal Society Open Science, late in 2014. It will cover the entire range of science and mathematics, will offer open peer review as an option, and will also be edited by academic editors. Its criteria for what it will publish include “all articles which are scientifically sound, leaving any judgement of importance or potential impact to the reader” and “all high quality science including articles which may usually be difficult to publish elsewhere, for example, those that include negative findings”; it thus fits the usual definition of a megajournal in that it will not select for ‘significance’ or potential impact.

These two announcements show that it is now unusual for a publisher not to have a less selective, open access journal in its stable. Publishers have seen that there is demand for such journals, that they can make money, and that launching one earns a reputation for being friendly to open access. It also means that papers rejected by their more selective journals can stay within the publisher (via cascading peer review), which saves authors the trouble of restarting the submission process from scratch and turns a potential negative for the publisher (editorial time spent on papers that are not published) into a positive (author charges). AAAS has been slow to join this particular bandwagon; let’s see whether the strong brand of Science is enough to persuade authors to publish in Science Advances rather than in the growing number of other megajournals.

PLOS data release policy

On 24 February, PLOS posted an updated version of the announcement about data release that they made in December (and which I covered last month). I didn’t pay much attention at first, as the change had already been trailed, but then I had to sit up and take notice because I started seeing posts and tweets strongly criticising the policy. The first to appear was an angry and (in my opinion) over-the-top post by @DrugMonkeyblog entitled “PLoS is letting the inmates run the asylum and this will kill them”. A more positive view was given by Michigan State University evolutionary geneticist @IanDworkin, and another by New Hampshire genomics researcher Matt MacManes (@PeroMHC). Some problems that the policy could cause small, underfunded labs were pointed out by Mexico-based neuroscience researcher Erin McKiernan (@emckiernan13). The debate widened, reaching Ars Technica and Reddit – as of 3 March there were 1045 comments on Reddit!

So what is the big problem? The main objections raised seem to me to fall into six categories:

  1. Some datasets would take too much work to get into a format that others could understand
  2. It isn’t always clear what kind of data should be published with a paper
  3. Some data files are too large to be easily hosted
  4. Others might publish reanalyses that the originators of the data were intending to publish themselves, so the originators would lose the credit from that further research
  5. Some datasets contain confidential information
  6. Some datasets are proprietary

I won’t discuss these issues in detail here; if you’re interested, the comments on the posts linked above are worth reading. It does appear (particularly from the update to their 24 February post and the FAQ posted on 28 February) that PLOS is very happy to discuss many of these issues with authors who have concerns, but analyses of proprietary data may have to be published elsewhere from now on.

I tend to agree with those taking a more positive view of this new policy, who argue that data publication will increase reproducibility, help researchers build on each other’s work and prevent fraud. In any case, researchers who disagree are free to publish in other journals with less progressive policies. PLOS is a non-profit publisher that says access to research results, immediately and without restriction, has always been at the heart of its mission, so it is being consistent in applying this strict policy.

Writing a paper

Miscellaneous news

  • Science writer @CarlZimmer explained eloquently at the AAAS meeting why open access to research, including open peer review and preprint posting, benefits science journalists and their readers.
  • Impactstory profiles now show the proportion of a researcher’s articles that are open access and give gold, silver and bronze badges, as well as showing how highly accessed, discussed and cited their papers are.
  • A new site has appeared where authors can review their experience with journals: Journalysis. It looks promising but needs reviews before it can become a really useful resource – go add one!
  • An interesting example of post-publication peer review starting on Twitter and continuing in a journal was described by @lakens here and by his coauthor @TimSmitsTim here.
  • Cuban researcher Yasset Perez-Riverol (@ypriverol) explained why researchers need Twitter and a professional blog.
  • I realised when looking at an Elsevier journal website that many Elsevier journals now publish very informative journal metrics, such as several years of impact factor, Eigenfactor, SNIP and SJR values, plus average times from submission to first decision and from acceptance to publication. An example is here.
  • PeerJ founder @P_Binfield posted a Google Docs list of standalone peer review platforms.

November highlights from the world of scientific publishing

Some of what I learned this month from Twitter: new preprint server, Google Scholar Library, papers on citations and p-values, and the most networked science conference ever

BioRxiv

In what could be a major development in the culture of publishing, a preprint server for biology, BioRxiv, was launched this month. It is modelled on the long-running arXiv preprint server used by physicists (and, increasingly, quantitative biologists). Nature News had a good summary.

Google Scholar Library

Google Scholar have launched a new service, Google Scholar Library (h/t @phylogenomics). This is meant to be a way to organise papers you read or cite, so it could be a competitor to reference managers such as Mendeley and Zotero. However, it doesn’t seem to be fully set up for citing papers yet: you can export references in BibTeX, EndNote, RefMan and RefWorks formats (but not to Mendeley or Zotero) or get a single citation in just MLA, APA or Chicago style.

“Top researchers” and their citations

Two papers of particular interest this month: the first actually came out in late October and is entitled “A list of highly influential biomedical researchers, 1996–2011” (European Journal of Clinical Investigation; h/t @TheWinnower). The paper, by John Ioannidis and colleagues (Ioannidis also wrote the influential “Why Most Published Research Findings Are False” paper), sorted biomedical authors in Scopus by their h-index and total citations and listed various pieces of information for the top 400 by these measures. I found this interesting for several reasons, including:
  • It gives a feeling for what makes a high h-index: of over 15 million authors, about 1% had an h-index of over 20, about 5000 had one over 50 and only 281 had one over 80 (a minimal h-index calculation is sketched after this list).
  • It shows how different sources of citation data can give different h-indices for the same author (see Table 3 in the paper, as pointed out by @girlscientist).
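
For anyone unfamiliar with the h-index, here is a minimal sketch of the standard definition – an author has index h if h of their papers have each been cited at least h times. The function name and example numbers below are mine, purely for illustration:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each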

The paper is limited by its reliance on citation data and the h-index alone, so should not be taken too seriously, but it is worth a look if you haven’t already seen it.

p-values vs Bayes factors

The second is a paper in PNAS by Valen Johnson (covered by Erika Check Hayden in Nature News), which suggested that the commonly used statistical standard of a p-value less than 0.05 is not good enough – in fact, around a quarter of findings that are significant at that level may be false. Johnson reached this conclusion by developing a method that makes the p-value directly comparable with the Bayes factor, which many statisticians prefer. As I’m not a statistician I’m not in a position to comment on the Bayesian/frequentist debate, but it is worth noting that this paper recommends a p-value threshold of less than 0.005 to be really sure of a result. A critical comment by a statistician is here (via @hildabast).
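
Johnson’s method involves “uniformly most powerful Bayesian tests” and is beyond me, but a simpler, older calibration – the Sellke–Bayarri–Berger lower bound on the Bayes factor implied by a p-value – gives a feel for where “around a quarter” comes from. This sketch is my own back-of-envelope illustration, not the method used in the paper, and the 1:1 prior odds are an assumption chosen only for illustration:

    import math

    def min_bayes_factor(p):
        """Sellke-Bayarri-Berger lower bound on the Bayes factor for the
        null versus the alternative, valid for p < 1/e."""
        return -math.e * p * math.log(p)

    def prob_null(p, prior_odds=1.0):
        """Posterior probability that the null is true, assuming (by
        default) 1:1 prior odds -- an illustrative assumption."""
        posterior_odds = min_bayes_factor(p) * prior_odds
        return posterior_odds / (1 + posterior_odds)

    print(round(min_bayes_factor(0.05), 2))  # ~0.41: evidence odds only ~2.5:1
    print(round(prob_null(0.05), 2))         # ~0.29: roughly a quarter false
    print(round(prob_null(0.005), 2))        # ~0.07 at the stricter threshold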

SpotOn London

Finally, the main event of November for me was SpotOn London (#solo13), a two-day conference on science communication: policy, outreach and tools. It was one of the most connected conferences you can imagine: every session was live-streamed, the Twitter backchat was a major part of the proceedings, and many people followed along and joined in from afar. The session videos can all be viewed here.
For me, four sessions were particular highlights:
  • The keynote talk by Salvatore Mele of CERN. This was not only an accessible explanation of the search for the Higgs boson, and of the importance of open access and preprint publishing in high energy physics, but also a masterclass in giving an entertaining and informative presentation.
  • The discussion session Open, Portable, Decoupled – How should Peer Review change? (Storify of the tweets here)
  • The discussion session Altmetrics – The Opportunities and the Challenges (summary and some related links from Martin Fenner here)
  • A workshop I helped with, on rewriting scientific text using only the thousand most commonly used words in the English language (report by the organiser, Alex Brown, here)

Choosing a journal V: impact factor

This is the fifth post in my series on choosing a journal, following posts on getting your paper published quickly, getting it noticed, practicalities, and peer review procedure.

It is all very well getting your paper seen by lots of people, but will that lead to an increase in your reputation? Will it lead to that all-important grant, promotion or university rating?

The impact factor of a journal for a given year is the average number of citations received that year by the papers the journal published in the previous two years. A very common view among academics is that getting published in a journal with a high impact factor is the most important thing they can do to ensure tenure, funding, promotion and general success. And in fact the impact factor of the journals your papers appear in still has a big influence on many of those whose job it is to assess scientists (as discussed recently on Michael Eisen’s blog). It is also a factor in whether librarians choose to subscribe to a journal, which will affect how widely your paper is seen.
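
Concretely, taking 2014 as the measurement year, the standard two-year calculation is:

    \mathrm{IF}_{2014} = \frac{\text{citations received in 2014 by items the journal published in 2012 and 2013}}{\text{number of citable items the journal published in 2012 and 2013}}

Note that the numerator counts citations to everything the journal published in those two years, while the denominator counts only ‘citable items’ (typically research articles and reviews) – an asymmetry that enables some of the gaming mentioned in the list below.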

So even if the impact factor has flaws, it is still important. However, remember the following caveats:

  • Citations are only a proxy measure of the actual impact of a paper – your paper could have an enormous influence while not being cited in academic journals
  • Impact doesn’t only occur in the two years following publication: in slow-moving fields, where seminal papers are still being cited five or ten years after publication, those late citations won’t count towards the impact factor, so the journal’s impact factor will be smaller than justified
  • The impact factor measures the average impact of papers in the journal; some will be cited much more, some not at all
  • There are ways for journals to ‘game’ impact factors, such as manipulating article types so that less cited ones won’t be counted in the calculation
  • The methods used for calculating the impact factor are proprietary and not published
  • Averages can be skewed by a single paper that is very highly cited (e.g. the 2009 impact factor of Acta Crystallographica A; see the sketch after this list)
  • Although impact factors are quoted to three decimal places, I haven’t seen any analysis of the error in their estimation, so a difference of half a point may be completely insignificant
  • New journals don’t get an impact factor until they have been publishing for at least three years.
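
To see how a single runaway paper can dominate this average, here is a toy calculation with purely invented citation counts (nothing to do with the real Acta Crystallographica A numbers):

    # Toy example: citation counts for the ten items a hypothetical journal
    # published in its two-year window (numbers invented for illustration).
    citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 6000]  # one runaway hit

    mean = sum(citations) / len(citations)  # the impact factor is this mean
    print(mean)  # 602.1 -- dominated by the single highly cited paper

    ordered = sorted(citations)
    median = (ordered[4] + ordered[5]) / 2  # middle of the 10 sorted values
    print(median)  # 2.5 -- closer to what a typical paper achieved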

So although it is worth looking at the impact factor of a journal to which you are considering submitting your paper, don’t take it too seriously. Especially don’t take small differences between the impact factors of different journals as meaningful.

Other new metrics are being developed to measure the average impact of journals, such as the Eigenfactor, Source Normalized Impact per Paper (SNIP) and SCImago Journal Rank (SJR). These might be worth looking at in combination with the impact factor when choosing a journal.

Your experience

How important is the impact factor of a journal in your decision to submit there? Have you taken other measures of impact into account? Do you think the impact factor of journals you have published in has affected the post-publication life of your papers?

And journal editors, how much difference does the impact factor of your journal make to how many papers are submitted to it, or to your marketing? Do you know the Eigenfactor, SNIP or SJR of your journal?

Journal news for 20-27 January

A brief summary of recent news related to journals and scientific publishing.

Datasets International

The open access publisher Hindawi has launched Datasets International, which “aims at helping researchers in all academic disciplines archive, document, and distribute the datasets produced in their research to the entire academic community.” For a processing charge of $300 authors can upload an apparently unlimited amount of data under a Creative Commons CC0 licence (and associated dataset papers under an Attribution licence), according to comments on Scott Edmunds’ Gigablog. The new journals currently associated with this initiative are Dataset Papers in: Cell Biology, Optics, Atmospheric Sciences and Materials Science, though no doubt more will follow. (Heard via @ScottEdmunds.)

Peerage of Science

A company run by three Finnish scientists launched a new take on improving peer review this week. Peerage of Science is a community of scientists (‘Peers’), formed initially by invitation, who review each other’s papers anonymously before submission to journals. Reviews are themselves subjected to review, which means that reviewers receive recognition and ratings for their work. The reviews can even be published in a special journal, Proceedings of the Peerage of Science. Journals can offer to publish manuscripts at any point, for a fee – this is how the company aims to make a profit. (Heard via chemistryworldblog, via @adametkin.)

Peer review by curated social media

Science writer Carl Zimmer (@carlzimmer) reported last week in the New York Times on a recent (open access) study in Proc Natl Acad Sci USA about the generation of multicellular yeast by artificial selection in the lab. He has now posted a follow-up article on his Discover blog, in which he presents the conversation that followed on Twitter about this paper (using Storify) and invites the author to respond, which the author does. The comments on the latter post continue the conversation, and the author continues to respond. It’s an interesting example of the author of a controversial paper engaging constructively in post-publication peer review. (Heard via @DavidDobbs.)

Research Objects

Tom Scott (@derivadow, who works for Nature Publishing Group) has published a detailed blog post outlining a proposal for a new kind of scientific publication: the Research Object. This would be a collection of material, linked by a Uniform Resource Identifier (URI), including an article, raw data, protocols, links to news about the research published elsewhere, links to the authors and their institutions, and more. He credits the Force11 (‘Future of Research Communications and e-Scholarship’) community for the idea, which is developed in greater detail here (pdf). These elements may or may not be open access, although the sophisticated searches Scott envisages will be difficult if they are not. (Heard via @SpringerPlus.)
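
Scott’s post describes the idea at the conceptual level rather than specifying a format, but to make it concrete, here is a purely hypothetical sketch of such a bundle; every field name and URI below is invented for illustration and is not part of his proposal:

    # A hypothetical Research Object: one URI identifying a bundle of
    # linked resources. All names and URIs here are invented.
    research_object = {
        "uri": "https://example.org/research-objects/42",
        "article": "https://doi.org/10.1234/example-article",
        "raw_data": ["https://example.org/data/experiment-1.csv"],
        "protocols": ["https://example.org/protocols/assay.txt"],
        "authors": ["https://example.org/people/a-researcher"],
        "institutions": ["https://example.org/orgs/a-university"],
        "news_coverage": ["https://example.org/news/a-write-up"],
    }

    # Linking by URI means each element can live anywhere on the web and
    # still be discovered by following the bundle.
    for field, value in research_object.items():
        print(field, value)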

Analysis of F1000 Journal Rankings

Phil Davis of The Scholarly Kitchen has done an analysis of the journal ranking system announced by Faculty of 1000 (F1000) in October. The analysis includes nearly 800 journals that were given a provisional F1000 Journal Factor (called FFj by F1000) for 2010. Plotting the FFj of each journal against the number of its articles evaluated by F1000 shows that the two numbers are closely related; in fact, the number of articles evaluated explains over 91% of the variation in FFj (the sketch below shows how such a figure is computed). This biases the ranking in favour of journals from which many articles were evaluated; there is also a bias against interdisciplinary and physical science journals that publish little biology. It seems to me that these biases could easily be addressed by taking into account (a) the number of articles evaluated from each journal and (b) the proportion of biology articles published in it when calculating the FFj. F1000 would be wise to study this useful analysis when reviewing their ranking system, as they plan to do regularly, according to the original announcement. (Heard via @ScholarlyKitchn.)
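
“Explains over 91% of the variation” refers to the R² of a regression of FFj on the number of articles evaluated. As a minimal sketch of how such a figure is computed – the (x, y) pairs below are invented for illustration, not Davis’s data:

    # R-squared of a simple linear regression, from first principles.
    articles_evaluated = [2, 5, 10, 20, 40, 80]
    ffj = [1.1, 2.0, 3.9, 8.2, 15.8, 33.0]

    n = len(articles_evaluated)
    mean_x = sum(articles_evaluated) / n
    mean_y = sum(ffj) / n
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(articles_evaluated, ffj))
    sxx = sum((x - mean_x) ** 2 for x in articles_evaluated)
    syy = sum((y - mean_y) ** 2 for y in ffj)

    r_squared = sxy ** 2 / (sxx * syy)  # for one predictor, R^2 = r^2
    print(round(r_squared, 3))  # near 1: articles evaluated predicts FFj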

Journal News

A brief summary of recent news related to journals and scientific publishing.

Journal of Errology

A new venture that came to my notice this week aims to provide “an experimental online research repository that enables sharing and discussions on those unpublished futile hypothesis, errors, iterations, negative results, false starts and other original stumbles that are part of a larger successful research in biological sciences.” It is not clear whether the Journal of Errology will succeed, but it is an interesting development that might fill a gap that journals currently neglect.

Figshare

Another place to send your miscellaneous data is figshare, which relaunched this week. This “allows researchers to publish all of their research outputs in seconds in an easily citable, sharable and discoverable manner”. They are encouraging researchers to upload negative data, supplementary material that is too large for journal limits, and miscellaneous figures that aren’t likely to get written up as a paper.

The Research Works Act

You’ll probably have heard about the Research Works Act (RWA) being proposed in the US, which would prohibit the NIH or other federal bodies from mandating (as the NIH currently does) that taxpayer-funded research should be freely accessible online. A summary for UK readers by Mike Taylor (@SauropodMike) is here. The act is supported by the Association of American Publishers, and Twitter has been full of scientists lobbying journal publishers to come out against it. So far, the AAAS (publisher of Science) and Nature Publishing Group have been among the journal publishers opposing the RWA.

An open peer review experiment

AJ Cann (@AJCann) is inviting comments on a research paper (entitled “An efficient and effective system for interactive student feedback using Google+ to enhance an institutional virtual learning environment”) on his blog, as a form of open peer review. He’s received several reviews so far, as well as comments on the process.

A journal using WordPress

Andrés Guadamuz, the technical editor of SCRIPTed, the open access journal of Law and Technology, has written a blog post “Confessions of an open access editor” that mentions that the journal is now one of the few hosted on WordPress. Given the recent launch of Annotum, the WordPress add-on for authoring scholarly publications, it looks like WordPress is going to become a more important publishing platform in the future.

A survey on attitudes to open access

The International Journal of Clinical Practice (IJCP), published by Wiley, has launched a survey on what authors think about the idea of the journal going completely open access (rather than having it as an option as at present). They will be asking all submitting authors for the next six months and are also inviting others to write a Letter to the Editor with their thoughts. They seem to be genuinely interested in authors’ views and not pushing either for or against open access.

The ‘academic dollar’ altmetric

A post by Sabine Hossenfelder on the BackReaction blog (which I heard about via @ScholarlyKitchn; http://backreaction.blogspot.com/2012/01/academic-dollar.html) discusses a 2010 paper entitled “An Auction Market for Journal Articles” that suggests an ‘academic dollar’ “that would be traded among editors, authors, and reviewers and create incentives for each involved party to improve the quality of articles”. She is scathing about this proposal, describing it as an example of “Verschlimmbesserung”, defined by Urban Dictionary as “an attempted improvement that makes things worse than they already were”. Altmetrics may be on the rise, but it looks like this one won’t be taking off.
