March highlights from the world of scientific publishing

An update on what I learnt from Twitter last month: dodgy citation metrics, mislabelled papers and journals, and more.

Metrics

A wonderful Perspective piece appeared in the open access journal mBio entitled Causes for the Persistence of Impact Factor Mania. Here, Arturo Casadevall (Editor-in-Chief of the journal) and Ferric C. Fang treat the misuse of the journal impact factor as if it were a disease and suggest possible causes and treatments. They diagnose the main problem as: “Publication in prestigious journals has a disproportionately high payoff that translates into a greater likelihood of academic success”, and say that these disproportionate rewards “create compelling incentives for investigators to have their work published in such journals.” Their solutions are not new but are worth reading. (via @PeppeGanga)

A less useful post was a widely shared news feature in the Pacific Standard: Killing Pigs and Weed Maps: The Mostly Unread World of Academic Papers. This gave an interesting look at citation analysis, but it started with a rather dodgy statistic:

A study at Indiana University found that “as many as 50% of papers are never read by anyone other than their authors, referees and journal editors.” That same study concluded that “some 90% of papers that have been published in academic journals are never cited.”

This ‘study’ turns out to be a feature in Physics World from 2007 by Indiana University librarian Lokman I. Meho, in which these numbers are simply asserted, with no citation and no data to back them up. Yoni Appelbaum (@YAppelbaum) pointed out a paper by Vincent Larivière and Yves Gingras on arXiv that effectively debunks these numbers. I also found a paper from 2008 whose Discussion section cites various studies on the proportion of uncited papers – which ranges from 15% to 26% for scientific and mathematical research papers but is much higher in the social sciences (48% uncited) and humanities (93% uncited). So the situation isn’t as bad as the Pacific Standard made out, unless you are in the humanities.

Open access

The Wellcome Trust, the UK’s largest provider of non-governmental funding for scientific research, released a dataset on Figshare of the fees paid in the 2012-13 financial year for open access publication (APCs). @ernestopriego posted an initial analysis, @CameronNeylon posted a tidied-up version of the dataset, and @petermurrayrust and Michelle Brook (@MLBrook) initiated a crowdsourced attempt to check whether all the articles paid for were actually made open access by their publishers. The resulting spreadsheet will continue to be used for checking whether any paid open access papers are being wrongly marked as copyright of the publisher, being put behind a paywall, or being given a link to payment for a licence to reproduce or reuse (anyone can help with this if they wish). Peter Murray-Rust has identified some examples where these errors have been made, mostly by Elsevier, and this prompted Elsevier to post an explanation of why this is taking so long to fix (they were alerted to the problem two years ago, as Mike Taylor has explained).

Richard Poynder (@RickyPo) pointed me to a post on Google+ by David Roberts about changes in the APCs of Elsevier maths journals. Some have been pegged to small annual increases, others have gone up 6-8%, while one has had its APC reduced by 30%. The latter just happens to be the journal for which the editorial board threatened to quit in protest at Elsevier’s continuing lack of sufficient support for open access. The APCs are generally between US$500 and US$5000. In response to this, Ross Mounce (@rmounce) pointed out that Ubiquity Press (@ubiquitypress), whose APCs are US$390, have given a full breakdown of what the APC pays for. @HansZauner asked why all publishers can’t do the same, but this seems unlikely to happen.

It was also Richard Poynder who tweeted a very useful guide to choosing an open access journal, produced by Ryerson University Library & Archives in Canada. This gives a series of tests to see whether a journal is likely to be reputable rather than a ‘predatory’ journal, including membership of OASPA, journal metrics, peer review procedure and editorial board membership. @BMJ_Open pointed out that the page implied that double blind peer review was the most widely accepted standard. The page has now been changed, perhaps in response to this comment, to say “Take into consideration that blind peer review and open peer review are both considered a credible standard for scientific publishing.”

Other open access and open data news:

  • @WoWter posted an analysis of how much it would cost the Netherlands to convert completely to gold open access.
  • The Directory of Open Access Journals (@DOAJplus) published a new application form that all journals must fill in to apply to be in the database. This includes a ‘DOAJ Seal’ that indicates the openness, indexability and discoverability of the journal. (via @MikeTaylor).
  • PLOS published an update and clarification of their open data policy, following the debates that I covered last month.
  • David Crotty wrote a good summary of the debate about PLOS’s open data policy for the @ScholarlyKitchn.
  • A new service called JournalClick was announced, which gives recommendations for open access papers to read based on what you have read (via @RickyPo).
  • A German court has ruled that the Creative Commons non-commercial (CC:NC) clause means that the material is only for personal use, so even state-owned radio stations with no advertisements, for example, are not permitted to use CC:NC material without permission (via @petermurrayrust).
  • Duke University Scholarly Communications Officer Kevin Smith (@klsmith4906) posted about two problems with Nature Publishing Group licensing: they have recently started to require Duke authors to request a formal waiver of their faculty open access policy, and their licence to publish requires the author to waive or agree not to assert their moral rights. @grace_baynes of Nature responded in a comment.
  • @damianpattinson of PLOS posted a report of an interesting talk entitled ‘The future is open: opportunities for publishers and institutions’ that he and his colleague Catriona MacCallum (@catmacOA) gave at the UKSG conference ‘Open Access Realities’ in London in November 2013.

New journals

The IEEE launched its new journal, IEEE Access, which claims to be an open access megajournal and was listed as one that was ‘coming soon’ in Pete Binfield (@p_binfield)’s December 2013 post on megajournals. However, the FAQ makes clear that in fact the authors are required to sign over copyright to the publisher, and reuse is not allowed, although the papers are free to read online. A discussion with @MattJHodgkinson and @BenMudrak clarified the situation for me. Matt pointed out that the Budapest Open Access Initiative FAQ says “Open access journals will either let authors retain copyright or ask authors to transfer copyright to the publisher”. So copyright transfer is allowed within open access, but restricting all reuse means that this journal should not be called an open access journal. IEEE Access also doesn’t conform to the standard definition of a megajournal, as the FAQ states “IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented.” Megajournals do not select on the basis of perceived ‘interest’, so this is not a megajournal.

Other developments

  • I haven’t kept up fully with the controversy surrounding the publication of a new method (called STAP) to produce stem cells that was published in Nature in January. Paul Knoepfler’s stem cell blog (and @pknoepfler) is the place to go for full updates, but I was concerned to read that Nature has declined to publish a ‘Brief Communication Arising’ reporting that the method does not work. It seems important to me that such follow-ups should be published in the same journal as the original paper.
  • Jocelyn Sze (@jocelynesze) pointed me to a series of 2012 articles in Frontiers in Computational Neuroscience on visions for the future of scientific publishing. This editorial by Nikolaus Kriegeskorte introduces the series.

February highlights from the world of scientific publishing

Some of what I learned about scientific publishing last month from Twitter: new open access journals, data release debates, paper writing tips, and lots more

New journals

Two important announcements this month, both of open access sister journals to well-established ones.

First, at the AAAS meeting it was announced that Science is going to have an online-only open access sister journal, called Science Advances, from early 2015. This will be selective (not a megajournal), will publish original research and review articles in science, engineering, technology, mathematics and social sciences, and will be edited by academic editors. The journal will use a Creative Commons license, which generally allows free use, but according to AAAS spokeswoman Ginger Pinholster it has not yet been decided whether to allow commercial reuse. The author publishing charge hasn’t yet been announced.

Second, the Royal Society announced that, in addition to their selective open access journal Open Biology, they will be launching a megajournal, Royal Society Open Science, late in 2014. It will cover the entire range of science and mathematics, will offer open peer review as an option, and will also be edited by academic editors. Its criteria for what it will publish include “all articles which are scientifically sound, leaving any judgement of importance or potential impact to the reader” and “all high quality science including articles which may usually be difficult to publish elsewhere, for example, those that include negative findings”; it thus fits the usual criteria for a megajournal in that it will not select for ‘significance’ or potential impact.

These two announcements show that publishers without an open access, less selective journal in their stable are now unusual. Publishers are seeing that there is demand for these journals and that they can make money from them; they also see that setting up such a journal earns them a reputation for being friendly to open access. It also means that papers rejected by their more selective journals can stay within the publisher (via cascading peer review), which, while saving time for the authors by avoiding the need to start the submission process from scratch, also turns a potential negative for the publisher (editorial time spent on papers that are not published) into a positive (author charges). The AAAS has been particularly slow to join this bandwagon; let’s see if the strong brand of Science is enough to persuade authors to publish in Science Advances rather than in the increasingly large number of other megajournals.

PLOS data release policy

On 24 February, PLOS posted an updated version of the announcement about data release that they made in December (and which I covered last month). I didn’t pay much attention as the change had already been trailed, but then I had to sit up and take notice because I started seeing posts and tweets strongly criticising the policy. The first to appear was an angry and (in my opinion) over-the-top post by @DrugMonkeyblog entitled “PLoS is letting the inmates run the asylum and this will kill them”.  A more positive view was given by Michigan State University evolutionary geneticist @IanDworkin, and another by New Hampshire genomics researcher Matt MacManes (@PeroMHC). Some problems that the policy could cause small, underfunded labs were pointed out by Mexico-based neuroscience researcher Erin McKiernan (@emckiernan13). The debate got wider, reaching Ars Technica and Reddit – as of 3 March there have been 1045 comments on Reddit!

So what is the big problem? The main objections raised seem to me to fall into six categories:

  1. Some datasets would take too much work to get into a format that others could understand
  2. It isn’t always clear what kind of data should be published with a paper
  3. Some data files are too large to be easily hosted
  4. The concern that others might publish reanalyses that the originators of the data were intending to publish, so they would lose the credit from that further research
  5. Some datasets contain confidential information
  6. Some datasets are proprietary

I won’t discuss these issues in detail here, but if you’re interested it’s worth reading the comments on the posts linked above. It does appear (particularly from the update on their 24 February post and the FAQ posted on 28 February) that PLOS is very happy to discuss many of these issues with authors who have concerns, but analyses of proprietary data may have to be published elsewhere from now on.

I tend to agree with those who take a more positive view of the new policy, arguing that data publication will help increase reproducibility, help researchers build on each other’s work and prevent fraud. In any case, researchers who disagree are free to publish in other journals with less progressive policies. PLOS is a non-profit publisher that says access to research results, immediately and without restriction, has always been at the heart of its mission, so it is being consistent in applying this strict policy.

Writing a paper

Miscellaneous news

  • Science writer @CarlZimmer explained eloquently at the AAAS meeting why open access to research, including open peer review and preprint posting, benefits science journalists and their readers.
  • Impactstory profiles now show the proportion of a researcher’s articles that are open access, awarding gold, silver and bronze badges, as well as showing how highly accessed, discussed and cited their papers are.
  • A new site has appeared where authors can review their experience with journals: Journalysis. It looks promising but needs reviews before it can become a really useful resource – go add one!
  • An interesting example of post-publication peer review starting on Twitter and continuing in a journal was described by @lakens here and his coauthor @TimSmitsTim here.
  • Cuban researcher Yasset Perez-Riverol (@ypriverol) explained why researchers need Twitter and a professional blog.
  • I realised when looking at an Elsevier journal website that many Elsevier journals now have very informative journal metrics, such as impact factors, Eigenfactor, SNIP and SJR for several years and average times from submission to first decision and from acceptance to publication. An example is here.
  • PeerJ founder @P_Binfield posted a Google Docs list of standalone peer review platforms.

January highlights from the world of scientific publishing

Some of what I learned last month from Twitter: new journals, new policies and post-publication reviews at PLOS, and some suggestions for how journals should work.

New journals

Three new journals have been announced that find new and very different ways to publish research. The most conventional is the Journal of Biomedical Publishing, a journal aiming to publish articles about publishing. It will be open access (with a low fee of 100 Euros) and promises only 2-4 days between acceptance and online publication. The journal has been set up by four Danish researchers and is published by the Danish Medical Association. One of them, Jacob Rosenberg, will present a study of where articles about publishing were published in 2012 at the forthcoming conference of the European Association of Science Editors.

A journal that goes further from the conventional model is Proceedings of Peerage of Science, a journal for commentaries associated with the journal-independent peer review service Peerage of Science. The journal will publish commentaries on published research, mostly based on open reviews of papers that have been generated as part of Peerage of Science. These will be free to read [edited from 'open access' following comments below], but there is no fee to the author – on the contrary, the authors of these commentaries will potentially receive royalties! Anyone who values a particular commentary or the journal as a whole can become a ‘public patron’ and donate money, some of which will go to the author of that commentary. I will be watching this innovative business model with interest.

Finally, it is difficult to tell whether @TwournalOf will be a serious journal, but it certainly claims to be: a journal in which the papers each consist of a single tweet. ‘Papers’ are submitted by direct message, and the journal is run by Andy Miah (@andymiah), professor in ethics and emerging technologies at the University of the West of Scotland. I wondered (on Twitter of course) how this would work given that you can only send someone a direct message if they follow you. The answer came immediately: the journal will follow everyone who follows it. One to watch!

Developments at PLOS

Two announcements by Public Library of Science caught my eye this month. The first was actually in December but I missed it at the time and was alerted to it recently by @Alexis_Verger: PLOS have released a revised data policy (coming into effect in March) in which authors will be required to include a ‘data availability statement’ in all research articles published by PLOS journals. This statement will describe the paper’s compliance with the PLOS data policy, which will mean making all data underlying the findings described in their article fully available without restriction (though exceptions will be made, for example when patient confidentiality is an issue). This is another step in the movement towards all journals requiring the full dataset to be available. I hope other journals will follow suit.

The other announcement was about a post-publication review system called PLOS Open Evaluation. This is currently in a closed pilot stage, but it sounds like it will finally provide the evaluation of impact that the founders promised when they set up PLOS ONE to publish all scientifically sound research. Users will be able to rate an article by their interest in it, the article’s significance, the quality of the research, and the clarity of the writing. There is also the opportunity to go into more detail about any of these aspects.

How journals should work

The New Year started off with an open letter from Oxford psychology professor Dorothy Bishop (@deevybee) to academic publishers. She points out a big change that has happened because of open access:

In the past, the top journals had no incentive to be accommodating to authors. There were too many of us chasing scarce page space. But there are now some new boys on the open access block, and some of them have recognised that if they want to attract people to publish with them, they should listen to what authors want. And if they want academics to continue to referee papers for no reward, then they had better treat them well too.

Bishop urges journal publishers to make things easier for authors and reviewers, such as by not forcing them through pointless hoops when submitting a paper that might still be rejected (a choice quote: “…cutting my toenails is considerably more interesting than reformatting references”). She calls out eLife and PeerJ as two new journals that are doing well at avoiding most of the bad practices she outlines.

Later in the month Jure Triglav (@juretriglav), the creator of ScienceGist, showed what amazing things can be done with scientific figures using modern internet tools. He shows a ‘living figure’ based on tweets about the weather, and the figure continuously updates as it receives new data. Just imagine what journals would be like if this kind of thing was widely used!

Finally, this month’s big hashtag in science was #SixWordPeerReview. Researchers posted short versions of peer reviews they have received (or perhaps imagined). Most of the tweets were a caricature of what people think peer review involves (perhaps understandably for a humorous hashtag), and a few people (such as @clathrin) pointed out that real peer review can be very constructive.

F1000Research did a Storify of a selection, taking the opportunity to point out the advantages of open peer review at the same time. Some of my favourites were:

@paulcoxon: “Please checked Engilsh and grammar thoroughly” (actually happened)

@girlscientist: Didn’t even get journal name right. #SixWordEditorReview

@McDawg: Data not shown? No thank you

December highlights from the world of scientific publishing

Some of what I learned last month from Twitter: takedowns, luxury journals, moves in peer review services and more.

‘Luxury journals’

A big talking point on my Twitter feed in December was the provocative comments about journal publishing made by Randy Schekman as he received his Nobel Prize in Physiology or Medicine. Writing in the Guardian on 9 December, he criticised the culture of science that rewards publications in ‘luxury journals’ (which he identified as Nature, Cell and Science). He said “I have now committed my lab to avoiding luxury journals, and I encourage others to do likewise.” Although many applauded this, some pointed out that more junior researchers may not have the freedom to do likewise, and also mentioned Schekman’s potential conflict of interest as Editor-in-Chief of the new, highly selective journal eLife, which aims to compete for the best research with these journals (some responses are summarized by Stephen Curry). Schekman responded to the criticisms in a post on The Conversation, and suggested four ways in which the research community could improve the situation.

Elsevier steps up takedown notices

Subscription journals generally require the author to sign a copyright transfer agreement that, among other things, commits them not to share their paper widely before any embargo period has passed. It appears that in December Elsevier decided to increase their enforcement of this by sending takedown notices to sites where Elsevier papers were posted. Guy Leonard described what happened to him and the reaction on Twitter and elsewhere.

Various peer review developments

Jeremy Fox (@DynamicEcology), a population ecologist at the University of Calgary, explained why he likes the journal-independent peer review service Axios Review (and is joining their editorial board).

Publons, which gives researchers credit for post-publication peer review, announced that researchers can now also get credit for their pre-publication peer reviews for journals.

Wiley announced a pilot of transferable peer review for their neuroscience journals, in which reviews for papers rejected from one journal can be transferred to another journal in the scheme, thus saving time.

F1000Research announced that its peer reviewed articles are now visible in PubMed and PubMed Central, together with their peer reviews and datasets. Articles on F1000Research are published after a quick check and then peer reviewed, and indexing by PubMed and PubMed Central happens once an article has a sufficient number of positive reviews.

Jennifer Raff, an Anthropology Research Fellow at the University of Texas at Austin, published “How to become good at peer review: A guide for young scientists”, a very useful and comprehensive guide that she intends to keep updated as she receives comments on it.

Miscellaneous news

Jeffrey Beall, a librarian who has been curating a useful list of ‘predatory’ open access journals for several years, revealed his antagonism to open access as a whole in an article that surprised many with its misconceptions about the motivations of open access advocates. PLOS founder Mike Eisen has rebutted the article point by point. Although I feel that Beall’s list is still useful for checking out a new journal, it should be taken only as a starting point, together with a detailed look at the journal’s website, what it has already published, and its membership of the Open Access Scholarly Publishers Association (OASPA).

James Hayton (@JamesHaytonPhD) wrote a great post on his 3 Month Thesis blog on the seven deadly sins of thesis writing, which should also all be avoided in paper writing: lies, bullshit, plagiarism, misrepresentation, getting the basics wrong, ignorance and lack of insight.

Finally, I was alerted (by @sciencegoddess) to a new site called Lolmythesis, where students summarize their thesis in one (not always serious) sentence. Worth a look for a laugh – why not add your own?

November highlights from the world of scientific publishing

Some of what I learned this month from Twitter: new preprint server, Google Scholar Library, papers on citations and p-values, and the most networked science conference ever

BioRxiv

In what could be a major development in the culture of publishing, a preprint server for biology, BioRxiv, was launched this month. It is modelled on the long-running arXiv preprint server used by physicists (and increasingly by quantitative biologists). Nature News had a good summary.

Google Scholar Library

Google Scholar have launched a new service, Google Scholar Library (h/t @phylogenomics). This is meant to be a way to organize papers you read or cite, so it could be a competitor to reference managers such as Mendeley and Zotero. However, it doesn’t seem to be fully set up for citing papers yet: you can export citations in BibTeX, EndNote, RefMan and RefWorks formats (but not for Mendeley or Zotero) or get a single citation in just MLA, APA or Chicago style.

“Top researchers” and their citations

Two papers of particular interest this month: the first actually came out in late October and is entitled “A list of highly influential biomedical researchers, 1996–2011” (European Journal of Clinical Investigation; h/t @TheWinnower). The paper, by John Ioannidis and colleagues (Ioannidis also wrote the influential “Why Most Published Research Findings Are False” paper), sorted biomedical authors in Scopus by their h-index and total citations and listed various pieces of information for the top 400 by this measure. I found this interesting for several reasons, including:
  • It gives a feeling for what makes a high h-index: of over 15 million authors, about 1% had an h-index of over 20, about 5000 over 50 and only 281 over 80.
  • It shows how different sources of citation data can give different h-indices for the same author (see Table 3 in the paper; as pointed out by @girlscientist)

The paper is limited by its reliance on citation data and the h-index alone, so should not be taken too seriously, but it is worth a look if you haven’t already seen it.
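For readers unfamiliar with the metric, the h-index is simple to compute: it is the largest h such that the author has at least h papers with h or more citations each. A minimal sketch (illustrative only – the citation counts here are made up, not taken from the paper):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    # Rank papers from most to least cited; the h-index is the last
    # rank at which the citation count still meets or exceeds the rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers have ≥ 4 citations)
```

This also makes clear why different citation databases give different h-indices for the same author (the point made in Table 3): the input list of citation counts differs between sources.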

p-values vs Bayes factors

The second is a paper in PNAS by Valen Johnson (covered by Erika Check Hayden in Nature News) suggesting that the commonly used statistical standard of a p-value less than 0.05 is not good enough – in fact, around a quarter of findings that are significant at that level may be false. This conclusion was reached by developing a method to make the p-value directly comparable with the Bayes factor, which many statisticians prefer. As I’m not a statistician I’m not in a position to comment on the Bayesian/frequentist debate, but it is worth noting that this paper recommends a p-value threshold of less than 0.005 to be really sure of a result. A critical comment by a statistician is here (via @hildabast).
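The flavour of this argument can be sketched with a simpler, older calibration – the Sellke–Bayarri–Berger bound −e·p·ln(p) on the Bayes factor, not Johnson’s own uniformly-most-powerful Bayesian test method – which gives numbers of the same order:

```python
import math

def min_bayes_factor(p):
    """Sellke-Bayarri-Berger lower bound on the Bayes factor in favour
    of the null hypothesis; valid for p < 1/e."""
    return -math.e * p * math.log(p)

def false_finding_bound(p, prior_null=0.5):
    """Lower bound on P(null is true | p-value), assuming equal prior
    odds for the null and the alternative."""
    bf = min_bayes_factor(p)
    prior_odds = prior_null / (1 - prior_null)
    return (bf * prior_odds) / (1 + bf * prior_odds)

for p in (0.05, 0.005):
    print(f"p = {p}: at least {false_finding_bound(p):.0%} of such findings may be false")
```

Under these (assumed) fifty-fifty prior odds, p = 0.05 still leaves roughly a one-in-four chance that the null is true, while p = 0.005 brings it down to well under one in ten – consistent with Johnson’s recommendation of the stricter threshold.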

SpotOn London

Finally, the main event of November for me was SpotOn London (#solo13), a two-day conference on science communication: policy, outreach and tools. This is one of the most connected conferences you can imagine: every session was live-streamed, the Twitter backchat was a major part of the proceedings, and many people followed along and joined in from afar. The session videos can all be viewed here.
For me four sessions were particular highlights:
  • The keynote talk by Salvatore Mele of CERN. This was not only an accessible explanation of the search for the Higgs Boson, and of the importance of open access and preprint publishing in high energy physics, but also a masterclass in giving an entertaining and informative presentation.
  • The discussion session Open, Portable, Decoupled – How should Peer Review change? (Storify of the tweets here)
  • The discussion session Altmetrics – The Opportunities and the Challenges (summary and some related links from Martin Fenner here)
  • A workshop I helped with, on rewriting scientific text using only the thousand mostly commonly used words in the English language (report by the organiser, Alex Brown, here)

Highlights from the scientific publishing world in October

A summary of the key things I have learned this month via Twitter: stings, harassment and post-publication peer review.

You may have noticed that this blog is not updated very often, but that my Twitter feed is updated several (sometimes many) times a day. I have decided to bring some highlights of this Twitter activity to my blog, so that those of you who (for some strange reason) aren’t on Twitter can get the benefit of all the interesting things I learn there every day. Of course, this summary will focus on scientific publishing and related fields. This may become a regular blog feature.

The biggest news early in October was the ‘sting’ published in Science by John Bohannon, which showed that some disreputable journals will accept even an obviously bad fake paper. There have been many, many posts and articles about this, which are listed in Zen Faulkes (@doctorzen)’s list. A few I found most insightful are:

  • A pair of posts by @neurobonkers, the first giving a good overview and the second classifying the journals included in the sting into those that accepted or rejected the fake paper with or without peer review.
  • This post by journal editor Gunther Eysenbach, who rejected the paper. He says “It is foolish to extrapolate these findings of a few black sheep publishers and scammers… to an entire industry. This would be as logical as concluding from Nigerian wire fraud emails that all lawyers who take a fee-for-service are scammers!”
  • The suggestion by Zen Faulkes that the fake paper could be a good resource for teaching how to write a paper.

Then there was the big scandal around sexual harassment in the science writing community, which has now led to the resignation of Scientific American’s blog editor, Bora Zivkovic. An overview in the Guardian science blog by Alice Bell (@alicebell) gives the low-down, and this post by Jennifer Ouellette (@JenLucPiquant) is one of the more insightful on the issues.

And then PubMed launched a commenting system, PubMed Commons. This is so important that I am going to blog about it separately.

A few other interesting things:

  • The Economist published a special series of articles on science, including a long overview of the issues: the problem of reproducing results and publishing replications, important statistical issues, fraud, retractions and peer review (including the Science sting).
  • This post by Pat Thomson (@thomsonpat) drives home the importance of the ‘take home message’ in your paper.
  • @deevybee and others directed me to Colin Purrington’s amazingly comprehensive guide to making a conference poster.
  • Open Access Week ran from 21 to 27 October. The most notable related article was in the Guardian by Peter Suber: Open access: six myths to put to rest.
  • The chemistry journal ACS Nano published an editorial suggesting that allegations of fraud in a paper should be dealt with in private by the journal concerned, not discussed openly on blogs. Blogger Paul Bracher (@ChemBark) disagrees.

Journal news for February

News related to scientific journal publishing since 4 February.

Elsevier withdraws support for the Research Works Act

Since I covered this infamous draft US law and the associated boycott of Elsevier by academics (here and in news here) the flood of blog posts on the topic has continued, and I won’t attempt to summarise them here. But the pressure seems to have had an effect: on 27 February Elsevier announced that it is no longer supporting the act, although they ‘continue to oppose government mandates in this area’.

Meanwhile, a new act has been proposed, the Federal Research Public Access Act (FRPAA), which would mandate that all research funded by every federal funder with a budget over $100 million should be made open access 6 months after publication.

Industry group ‘threatens’ journals to delay publications

The Lancet has reported (pdf) that the Mining Awareness Resource Group (MARG) has written to several scientific journals advising them not to publish papers from a US government study of diesel exhaust and lung cancer until a court case and congressional directives are ‘resolved’. The editor of Occupational and Environmental Medicine, Dana Loomis, is quoted as saying ‘It is vague and threatening. This has a chilling effect on scientific communications—a matter of grave concern.’

New open access journal

The open access journal Biology Open has been launched by the Company of Biologists. The journal aims to provide the research community with ‘an opportunity to publish valid and well-conducted experimental work that is otherwise robbed of timeliness and impact by the delays inherent in submission to established journals with more restrictive selection criteria.’

Twitter and paper citations

An arXiv preprint reports a correlation between mentions of a paper on Twitter and its later citations.

Criteria for the UK Research Excellence Framework 2014 announced

The Higher Education Funding Council for England (HEFCE) has announced the criteria and working methods that the panels for the assessment of research using the Research Excellence Framework (REF 2014) will use. REF will use citations as part of assessment but not impact factors or other bibliometrics (see page 25 of the full report for the statement regarding citations in the biology and medicine panel). Researchers at English universities will no doubt be scrutinizing the guidelines carefully.

* * * *

I’m sorry there hasn’t been a weekly Journal News recently, as I had hoped, and that this update is rather brief. I hope that the usefulness of these news updates depends more on their content than their regularity. If you want (much) more frequent updates from the world of journals and scientific publication, do follow me on Twitter!

Journal news 28 January to 3 February

Your journal-related news for the week.

F1000Research

Faculty of 1000 (F1000), the well established post-publication peer review service, has announced a new service that will publish original research papers. According to the initial announcement, this will differ from traditional journals in that all papers will be published immediately, before peer review (as long as they pass a ‘sanity check’), and peer review will happen openly after that. Publication of datasets will also be encouraged. Fees are still under discussion. Retraction Watch discussed the proposal and received many comments, including from two members of F1000 staff, Rebecca Lawrence and Sarah Greene, who thanked commenters for helping them to develop the idea further. This looks like an experiment worth watching – if it takes off it could herald a big change in publication of peer-reviewed papers. (Via @F1000Research.)

Arsenic Life (or not) in arXiv

Microbiologist Rosie Redfield has been trying to replicate the experiments presented by Felisa Wolfe-Simon et al. in late 2010 about a bacterium (called GFAJ-1) that could apparently grow using arsenic instead of phosphorus. Redfield has now submitted a manuscript to Science and at the same time uploaded it to the preprint server arXiv. She has found that there is no arsenate in the DNA of arsenate-grown GFAJ-1 cells. She is inviting comments on the manuscript on her blog, as an experiment in open peer review. (Via @RosieRedfield.)

More on the boycott of Elsevier

See my post earlier this week for the background on this. The list of researchers who have pledged not to support Elsevier journals has now reached over 3800. An article by Josh Fischman in The Chronicle of Higher Education on Tuesday included responses from Elsevier (Alicia Wise and Tom Reller). Kent Anderson of the Scholarly Kitchen criticised the boycott, saying that other publishers have prices as high as Elsevier, bundling of journal subscriptions is useful rather than being wrong, and other publishers also support the Research Works Act. Elsevier also put their case in this blog post by Chrysanne Lowe. Meanwhile, Michael Eisen, one of the founders of the Public Library of Science, gives some historical context in his blog. (Various sources.)

Launch of Cell Reports

In an announcement that got rather lost in the furore about the boycott, Cell Press (part of Elsevier) launched a new open access journal, Cell Reports. According to its information for authors (pdf), it publishes ‘thought-provoking, cutting-edge research, with a focus on a shorter single-point story… in addition to a longer article format’ and also ‘significant technical advances’ and ‘major informational data sets’. Authors can choose between two Creative Commons licences for their papers: Attribution (CC BY) and Attribution-Noncommercial-No Derivative Works (CC BY-NC). Are there now any major scientific publishers left that don’t have any open access journals? Possibly not. (Via @WiseAlic.)

Gyrations in Life

A controversial paper was published this week in the little known open access journal Life, apparently after peer review, that claims to explain just about everything using a simple geometric figure, the gyre. It has been taken to pieces by John Timmer in Ars Technica and PZ Myers in Pharyngula. Following these and other criticisms, the editor has now responded, saying that peer review was thorough, and Retraction Watch has discussed the response. (Via @tdechant and @leonidkruglyak.)

Mind your Editorial Board

Evolutionary biologist Jonathan Eisen pointed out something not quite right about the biography of an editorial board member on a journal called Molecular Biology, published by OMICS Publishing Group. The expert in ‘oximological microbiology, non-linear submorphological endosaccharomorphosis, applied endoplutomomics’ turned out to be a fictional creation of the German satirical magazine Titanic. How the journal administrators could take him seriously when his biography says that he ‘has successfully completed his Administrative responsibilities as vice president of the universe for scientific publication ethics and spamology’, we may never know. His name is still on the editorial board page as I write. It may be relevant that OMICS has been described as a ‘predatory open access publisher‘. (Via @phylogenomics.)

The Research Works Act, open access and publisher boycotts

The open access movement has been around for decades, gradually building up, but this month there seems to have been an acceleration in the pace of change. I will try in this post to summarise the current situation as I see it.

The initial driver of this recent change was the Research Works Act (RWA), a draft law proposed in the US that would prohibit federal bodies from mandating that taxpayer-funded research be made freely accessible online (as the NIH currently does). The two Representatives who are sponsoring the RWA, Darrell Issa and Carolyn Maloney, have received considerable amounts of money from the publisher Elsevier, which publishes many journals and is against open access (as reported on Michael Eisen’s blog).

The second important event was the decision of Cambridge mathematics professor and Fields Medal winner Timothy Gowers to publish a blog post on 21 January entitled ‘Elsevier — my part in its downfall‘ (after the late Spike Milligan’s book ‘Adolf Hitler: My Part in His Downfall‘). (Gowers was the initiator of the Polymath Project, an experiment in open collaboration online between thousands of mathematicians, which Michael Nielsen lauded highly in his TED talk on open science.) Gowers summarised the criticisms of Elsevier:

  1. Their very high prices
  2. Their practice of ‘bundling’ journals into collections that libraries have to subscribe to together
  3. Their ‘ruthless’ negotiation tactics with libraries
  4. Their support of the RWA, and of the related acts SOPA and PIPA (both now postponed).

He was already quietly avoiding publishing in Elsevier journals and avoiding reviewing for them. But he decided that this quiet approach wasn’t enough: he called for coordinated action by academics. He comments that ‘Elsevier is not the only publisher to behave in an objectionable way. However, it seems to be the worst’.

This led mathematician Tyler Neylon to set up ‘The cost of knowledge‘, a page where researchers could publicly declare that they ‘will not support any Elsevier journal unless they radically change how they operate’. As of writing, this has over 2300 signatures.

In the past week the usual trickle of blog posts about open access and Elsevier has turned into a flood. I’ll pick out a few here:

Elsevier and their allies have responded:

But The Lancet, which is published by Elsevier, has said it ‘strongly opposes‘ the RWA, saying: ‘This short and hastily put together legislation is not in the interests of either science or the public’.

and others have criticised these responses (e.g. Mike Eisen, Drug Monkey).

The coverage is now reaching the mainstream:

It will be interesting to see what Elsevier says in a statement that was expected today, according to the Chronicle of Higher Education.

*  *  *  *

So, where do I stand? I am a freelance editor, working directly or indirectly for scientists and for publishers, on both open access and closed access journals. I worked for two years for Elsevier and then five years for BioMed Central, one of the leading open access publishers, and part of my job at BMC was to advocate for open access. I’m not a great fan of Elsevier, partly for the reasons that others give as described above, and partly because I think they (like many other publishers) are too keen on cutting costs and not keen enough on ensuring quality in their publications.

All this means that I am sympathetic to the open access movement but am not an active advocate of it. I’m not currently in a position to refuse to work for closed access publishers, nor would that have much effect on their policies. When helping scientists choose where to submit their papers, I try to dispassionately present the arguments for different types of journals and encourage them to investigate open-access options, but the decision is up to them.

What I’d like to do is think through what effect a boycott would have on each affected journal. The first people to suffer will be the editors who handle manuscripts. Usually they have to ask several people before they get two reviewers to agree to look at a paper – with the boycott, they will get more noes before they get enough yeses. If the editors are in-house staff, will this filter up to their managers, and to their managers’ managers, up to the top of the company? Maybe, but only if the proportion of people saying no to reviewing for the journal is big enough. And in the meantime the editors, who have no say in the policies of their company, will be having a hard time.

One way the boycott could perhaps be more effective would be if it focused on a few journals in well-defined, small fields where there is a limited pool of potential reviewers. In a small field, it might be possible for a sizeable proportion of researchers to refuse to review for a particular journal, so this would have a bigger effect.

I would hope that those refusing make their reasons clear (as in this example letter) so that in-house staff aren’t left wondering what is going on. The boycotters will also need to make it clear to the staff that it is their employers they have a problem with, not the editors and editorial assistants themselves. Extreme politeness and chocolate might go down well!

I hope everyone will also remember that there are many researchers who need to publish to keep their jobs or get funding and tenure. Not everyone has a free choice of where to submit their paper. Those who do not join the boycott should not be assumed to be enemies of it.

So if you are boycotting any particular publisher, spare a thought for both the in-house staff who have to put up with it and for the researchers who can’t join in.

Journal news for 20-27 January

A brief summary of recent news related to journals and scientific publishing.

Datasets International

The open access publisher Hindawi has launched Datasets International, which “aims at helping researchers in all academic disciplines archive, document, and distribute the datasets produced in their research to the entire academic community.” For a processing charge of $300 authors can upload an apparently unlimited amount of data under a Creative Commons CC0 licence (and associated dataset papers under an Attribution licence), according to comments on Scott Edmunds’ Gigablog. The new journals currently associated with this initiative are Dataset Papers in: Cell Biology, Optics, Atmospheric Sciences and Materials Science, though no doubt more will follow. (Heard via @ScottEdmunds.)

Peerage of Science

A company run by three Finnish scientists launched a new take on improving peer review this week. Peerage of Science is a community of scientists (‘Peers’), formed initially by invitation, who review each other’s papers anonymously before submission to journals. Reviews are themselves subjected to review, which means that reviewers receive recognition and ratings for their work. The reviews can even be published in a special journal, Proceedings of the Peerage of Science. Journals can offer to publish manuscripts at any point, for a fee – this is how the company aims to make a profit. (Heard via chemistryworldblog, via @adametkin.)

Peer review by curated social media

Science writer Carl Zimmer (@carlzimmer) reported last week in the New York Times on a recent (open access) study in Proc Natl Acad Sci USA about the generation of multicellular yeast by artificial selection in the lab. He has now posted a follow-up article on his Discover blog, in which he presents the conversation that followed on Twitter about this paper (using Storify) and invites the author to respond, which the author does. The comments on the latter post continue the conversation, and the author continues to respond. It’s an interesting example of the author of a controversial paper engaging constructively in post-publication peer review. (Heard via @DavidDobbs.)

Research Objects

Tom Scott (@derivadow, who works for Nature Publishing Group) has published a detailed blog post outlining a proposal for a new kind of scientific publication: the Research Object. This would be a collection of material, linked by a Uniform Resource Identifier (URI), including an article, raw data, protocols, links to news about the research published elsewhere, links to the authors and their institutions, and more. He credits the Force11 (‘Future of Research Communications and e-Scholarship’) community for the idea, which is developed in greater detail here (pdf). These elements may or may not be open access, although the sophisticated searches Scott envisages will be difficult if they are not. (Heard via @SpringerPlus.)

Analysis of F1000 Journal Rankings

Phil Davis of The Scholarly Kitchen has analysed the journal ranking system announced by Faculty of 1000 (F1000) in October. The analysis includes nearly 800 journals that were given a provisional F1000 Journal Factor (called FFj by F1000) for 2010. Plotting the FFj of each journal against the number of its articles that F1000 evaluated shows that the two are closely related; in fact, the number of articles evaluated explains over 91% of the variation in FFj. This biases the ranking against journals from which only a few articles were evaluated, and also against interdisciplinary and physical science journals that publish little biology. It seems to me that these biases could easily be addressed by taking into account (a) the number of articles evaluated from each journal and (b) the proportion of biology articles it publishes when calculating the FFj. F1000 would be wise to study this useful analysis when reviewing their ranking system, as they plan to do regularly, according to the original announcement. (Heard via @ScholarlyKitchn.)
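For readers unfamiliar with the ‘variance explained’ statistic, here is a minimal sketch of how such a figure is typically computed. The numbers below are entirely made up for illustration; this is not Davis’s actual data or method, just the standard squared-correlation calculation under an assumed relationship between article counts and scores.

```python
import numpy as np

# Synthetic stand-in data: 100 hypothetical journals, each with a count of
# articles evaluated and a made-up score loosely dependent on that count.
rng = np.random.default_rng(0)
n_articles = rng.integers(1, 200, size=100)               # articles evaluated per journal
score = 2.5 * np.log(n_articles) + rng.normal(0, 0.5, 100)  # fabricated journal score

# Pearson correlation between score and log(article count);
# its square is the proportion of variance explained.
r = np.corrcoef(np.log(n_articles), score)[0, 1]
print(f"variance explained: {r**2:.0%}")
```

If the score is largely determined by the number of articles evaluated, as Davis found for the FFj, the squared correlation comes out close to 1, meaning the score adds little information beyond a simple article count.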
