June highlights from the world of scientific publishing

The launch of Cofactor and what I learnt from Twitter in June: an unusual journal, a very large journal, a new journal and a very slow journal, plus accusations of publisher censorship and more.

The biggest highlight of the month for me was (of course) the launch of Cofactor, my company that aims to help researchers publish their work more effectively. The launch event in London was a great success – over 50 people from science and publishing came to hear about the company and to hear from four speakers on the theme ‘What difference will changes in peer review make to authors and journals?’. Do have a look at the Cofactor website and see if we can help you improve your papers, learn about scientific publishing or decide on your publishing strategy.

On Twitter there was lots of news too!

Journals

Nature published a feature by Peter Aldhous (@paldhous) on ‘contributed’ papers in Proceedings of the National Academy of Sciences USA (PNAS). Members of the National Academy of Sciences can submit up to four papers per year using this track, and these papers are not peer reviewed after submission – rather, the authors must obtain reviews themselves and submit them along with the paper, and the paper and reviews are assessed by members of the editorial board. In 2013, more than 98% of contributed papers were published, compared with only 18% of direct submissions (for which the review process is like that of most conventional journals).

Aldhous analysed papers from the contributed and directly submitted tracks and compared their citation rates:

…the difference between citation rates for directly submitted and contributed papers was not large — controlling for other factors such as discipline, contributed papers garnered about 4.5% fewer citations — but it was statistically significant. Nature’s analysis also suggests that the gap in citation rates between directly submitted and contributed papers has been narrowing, and this does not seem to be because more-recent papers have yet to acquire enough citations for the difference to show.

The analysis is described in a supplementary information file, but the full dataset is available only on request. I questioned whether it would be better to release the full dataset so that others could reanalyse it. Others agreed, and I have collated the resulting Twitter discussion using Storify. In response, Peter Aldhous kindly put the full dataset on his website. This is a great example of the power of Twitter to get rapid responses to questions.

Meanwhile, a new journal was launching and another was celebrating an incredible milestone. The new journal is The Winnower (@TheWinnower), which describes itself as an open access online science publishing platform that uses open post-publication peer review. There is a charge of US$100 to publish an article, which is not editorially checked but is published straight away, after which anyone can add a review. There aren’t many papers there at the moment but The Winnower is an interesting experiment. It differs from the recently launched ScienceOpen (@Science_Open) in that the latter has editorial checks and only scientists can review papers.

The largest journal in the world is of course PLOS ONE, and this month it published its 100,000th article. There is no doubt that this megajournal is changing scientific publishing and will continue to do so.

Finally, Texas biologist Zen Faulkes (@DoctorZen) posted his experience of publishing in a small regional journal, The Southwestern Naturalist. He submitted the paper in 2011, got reviews back 9 months later, submitted a revised version within a week and it was finally accepted for the December 2013 issue… which actually came out in early June 2014. Although Zen wasn’t in a hurry, he says

But after this experience, I think I would have been much better off submitting this paper to PLOS ONE or PeerJ or a similar venue.
Except… wait, PeerJ didn’t exist when I submitted this paper. With publications like PeerJ, journals like The Southwestern Naturalist are going to be in trouble soon.

I too wonder why authors would choose a small journal like this, especially if it is closed access, over one that can reach a much broader audience.

Difficulties with publishing criticism of the publishing industry

In late May an article entitled ‘Publisher, be damned! From price gouging to the open road’ was published by four Leicester University management researchers in a fairly obscure Taylor & Francis journal, Prometheus: Critical Studies in Innovation. I would not have noticed it had it not been for coverage by Paul Jump (@PaulJump) in Times Higher Education telling the extraordinary story behind the publication of the article. A senior manager at the publisher demanded that the article be cut, the issue containing it was delayed by 8 months, and the editorial board threatened to resign in protest at the publisher’s actions. There is more detail and analysis in this Library Journal article. Eventually the article was published with the names of particular publishers removed. It seems to me that the publisher’s attempt to suppress the article had the opposite effect and is a good example of the Streisand Effect. (Via @LSEImpactBlog, @RickyPo and @Stephen_Curry)

Paper writing resources

Two very useful resources came to my attention this month. The first (via @digiwonk) was a blog post on The Chronicle of Higher Education’s Careers hub by Kirsten Bell, a research associate from the University of British Columbia. The post, entitled ‘The Really Obvious (but All-Too-Often-Ignored) Guide to Getting Published’, gives five simple tips for getting your article published. They are:

  1. Familiarize yourself with the journal you want to submit to
  2. Make sure you nominate reviewers, if the journal gives you the option to do so
  3. Don’t make it glaringly obvious that your paper has been rejected by another journal
  4. Learn how to write a paper before actually submitting one
  5. Be persistent (when it’s warranted)

The second was actually a guide to reading a scientific paper, but it is well worth reading when you are writing too, so as to understand how your paper is likely to be read. The article, in the Huffington Post by @JenniferRaff, a postdoc at the University of Texas, is entitled ‘How to Read and Understand a Scientific Paper: A Step-by-Step Guide for Non-Scientists’. She tells readers to identify the ‘big question’ and the ‘specific question’ that the authors are trying to answer with their research, and then, by reading the methods and results, to determine whether the results answer the specific question. If your paper is written so that the specific question and its answer are easy to find, you will get more readers and probably more citations. (Via @kirkenglehardt)

Altmetrics

You’ve seen the Altmetric donut – now here’s the Plum Analytics PlumPrint! (via @PlumAnalytics).

ImpactStory (@ImpactStory) have a good roundup of news about altmetrics from June, mentioning the PlumPrint, responses to a Higher Education Funding Council for England (HEFCE) consultation on the use of metrics in UK research assessment, and highlights from a recent altmetrics conference.

Miscellany

A post by Manchester University computational biologist Casey Bergman (@CaseyBergman) was popular among the scientists I follow: ‘The Logistics of Scientific Growth in the 21st Century‘ argued that the exponential growth in academic research is over, and explored what that might mean for those at different stages in their careers.

On the BioMed Central blog, David Stern describes his unsuccessful efforts to replicate a published effect and the realisation that it was an artefact of data binning. He recommends not just providing the full data but displaying it too. (Via @emckiernan13 and @mbeisen)

There was a great post on the blog ‘The Rest of the Iceberg’ (@ECRPublishing) entitled ‘What To Do When You Are Rejected’, focusing on what you can learn from receiving peer reviews.

May highlights in scientific publishing

News gleaned from Twitter in May: debates about replication and data sharing, articles about peer review and more.

Replication

The debate about replication in science has been fired up by a special issue of the journal Social Psychology consisting entirely of replications (explained here by editor Chris Chambers, @Chrisdc77). One author of a study that was chosen for a replication attempt wrote about her difficulties with the experience. A lot of discussion later, I particularly liked Rolf Zwaan’s attempt to summarise both sides of the debate. He contrasts the view of ‘replicators’ that original research is a public good with that of ‘replication critics’ who seem to view it as a work of art.

A related debate concerns what happens when questions are raised about a paper and how the authors should react. Palaeontologist @JohnHutchinson posted a long and thoughtful consideration of this based on his experience with a 2011 paper on the growth rates of Tyrannosaurus, which led to a correction. He says that going over all the data again takes a huge amount of time and energy, but the process is what science is meant to be about. (via @Protohedgehog)

The attempts to replicate the STAP stem cell experiments (as covered here in March) seem to be coming to a head, and open access and open peer review have helped to resolve the issue. F1000Research published a non-replication by @ProfessorKenLee that contained the full dataset, and the paper was then made available for open review. A couple of weeks later it had two positive peer reviews, which means that it is now indexed in PubMed. All authors of the original STAP study have now agreed to retract it.

Data sharing

The polar bear genome was published in Cell after the dataset was released by @Gigascience. This is a step forward for open data, as Cell Press have previously said that they might treat the publication of data with a DOI as prior publication, which could preclude publication of a paper on those data. (via @GrantDenkinson)

Dorothy Bishop (@deevybee) posted about her first experience of sharing data, describing it as exciting but scary. She discovered some errors in the process, and says “The best way to flush out … errors is to make the data public.”

In PLOS, Theo Bloom and Jennifer Lin summarised how the publisher’s new data sharing policy has gone down with authors. The short answer is ‘very well’, but there are still concerns, which the post lists and responds to.

In the meantime, the European Medicines Agency (EMA) has announced (see p8 of the linked pdf) that clinical trial data will be made available, but researchers and other interested parties will only be allowed to view the data on screen. Unbelievably, they will not be allowed to download it, print it, or do anything else but look at it. The German Institute for Quality and Efficiency in Health Care (@iqwig) published some reactions of researchers to this, which are well worth looking at. (via @trished)

Peer review

I was alerted by editor Carlotta Shearson (@CShearson) to an editorial in the Journal of Physical Chemistry Letters entitled ‘Overcoming the Myths of the Review Process and Getting Your Paper Ready for Publication’. The process it describes is similar to what I’ve seen in many selective journals, so it will be useful to authors in many fields as well as physical chemistry. It also includes a table of the ‘Top Ten Unproductive Author Responses’ to reviewer comments.

Another journal editorial of interest was published in Administrative Science Quarterly, entitled ‘Why Do We Still Have Journals?’ This focuses more on social science and concludes that, for now, journals are still indispensable. (via @SciPubLab)

Miscellaneous news

A survey of Canadian journal authors was discussed by Phil Davis (@ScholarlyChickn) in the Scholarly Kitchen. Peer review, journal reputation, and fast publication were the top three factors cited in deciding where to submit their manuscripts, above open access, article-level metrics and mobile access. (via @MikeTaylor)

Following the Freedom of Information requests by Tim Gowers on Elsevier subscription pricing covered last month, Australian mathematician Scott Morrison has found out a bit about pricing and contracts for Australian universities. This may lead to FOI requests there. In the meantime, Gowers has posted updates on four more UK universities. (via @yvonnenobis)

And I will be announcing my own news very soon (though you might already have heard about my new company on Twitter). Watch out for the next post!

February highlights from the world of scientific publishing

Some of what I learned about scientific publishing last month from Twitter: new open access journals, data release debates, paper writing tips, and lots more

New journals

Two important announcements this month, both of open access sister journals to well established ones.

First, at the AAAS meeting it was announced that Science is going to have an online-only open access sister journal, called Science Advances, from early 2015. This will be selective (not a megajournal), will publish original research and review articles in science, engineering, technology, mathematics and social sciences, and will be edited by academic editors. The journal will use a Creative Commons licence, which generally allows free use, but according to AAAS spokeswoman Ginger Pinholster it has not yet decided whether to allow commercial reuse. The author publishing charge hasn’t yet been announced.

Second, the Royal Society announced that, in addition to their selective open access journal Open Biology, they will be launching a megajournal, Royal Society Open Science, late in 2014. It will cover the entire range of science and mathematics, will offer open peer review as an option, and will also be edited by academic editors. Its criteria for what it will publish include “all articles which are scientifically sound, leaving any judgement of importance or potential impact to the reader” and “all high quality science including articles which may usually be difficult to publish elsewhere, for example, those that include negative findings”; it thus fits the usual criteria for a megajournal in that it will not select for ‘significance’ or potential impact.

These two announcements show that publishers without an open access, less selective journal in their stable are now unusual. Publishers are seeing that there is demand for these journals and that they can make money; they can also gain a reputation for being friendly to open access by setting one up. In addition, papers rejected by their more selective journals can stay within the publisher (via cascading peer review), which, while saving time for the authors by avoiding the need to start the submission process from scratch, also turns a potential negative for the publisher (editorial time spent on papers that are not published) into a positive (author charges). The AAAS has been particularly slow to join this bandwagon; let’s see if the strong brand of Science is enough to persuade authors to publish in Science Advances rather than the increasingly large number of other megajournals.

PLOS data release policy

On 24 February, PLOS posted an updated version of the announcement about data release that they made in December (and which I covered last month). I didn’t pay much attention as the change had already been trailed, but then I had to sit up and take notice because I started seeing posts and tweets strongly criticising the policy. The first to appear was an angry and (in my opinion) over-the-top post by @DrugMonkeyblog entitled “PLoS is letting the inmates run the asylum and this will kill them”.  A more positive view was given by Michigan State University evolutionary geneticist @IanDworkin, and another by New Hampshire genomics researcher Matt MacManes (@PeroMHC). Some problems that the policy could cause small, underfunded labs were pointed out by Mexico-based neuroscience researcher Erin McKiernan (@emckiernan13). The debate got wider, reaching Ars Technica and Reddit – as of 3 March there have been 1045 comments on Reddit!

So what is the big problem? The main objections raised seem to me to fall into six categories:

  1. Some datasets would take too much work to get into a format that others could understand
  2. It isn’t always clear what kind of data should be published with a paper
  3. Some data files are too large to be easily hosted
  4. Others might publish reanalyses that the originators of the data intended to publish themselves, so the originators would lose the credit from that further research
  5. Some datasets contain confidential information
  6. Some datasets are proprietary

I won’t discuss these issues in detail here, but if you’re interested it’s worth reading the comments on the posts linked above. It does appear, though (particularly from the update on their 24 February post and the FAQ posted on 28 February), that PLOS is very happy to discuss many of these issues with authors who have concerns, although analyses of proprietary data may have to be published elsewhere from now on.

I tend to agree with those who take a more positive view of this new policy, arguing that data publication will help increase reproducibility, help researchers to build on each other’s work and prevent fraud. In any case, researchers who disagree are free to publish in other journals with less progressive policies. PLOS is a non-profit publisher that says access to research results, immediately and without restriction, has always been at the heart of its mission, so it is being consistent in applying this strict policy.

Miscellaneous news

  • Science writer @CarlZimmer explained eloquently at the AAAS meeting why open access to research, including open peer review and preprint posting, benefits science journalists and their readers.
  • Impactstory profiles now show the proportion of a researcher’s articles that are open access and give gold, silver and bronze badges, as well as showing how highly accessed, discussed and cited their papers are.
  • A new site has appeared where authors can review their experience with journals: Journalysis. It looks promising but needs reviews before it can become a really useful resource – go add one!
  • An interesting example of post-publication peer review starting on Twitter and continuing in a journal was described by @lakens here and his coauthor @TimSmitsTim here.
  • Cuban researcher Yasset Perez-Riverol (@ypriverol) explained why researchers need Twitter and a professional blog.
  • I realised when looking at an Elsevier journal website that many Elsevier journals now have very informative journal metrics, such as impact factors, Eigenfactor, SNIP and SJR for several years, as well as average times from submission to first decision and from acceptance to publication. An example is here.
  • PeerJ founder @P_Binfield posted a Google Docs list of standalone peer review platforms.

January highlights from the world of scientific publishing

Some of what I learned last month from Twitter: new journals, new policies and post-publication reviews at PLOS, and some suggestions for how journals should work.

New journals

Three new journals have been announced that find new and very different ways to publish research. The most conventional is the Journal of Biomedical Publishing, a journal aiming to publish articles about publishing. It will be open access (with a low fee of 100 Euros) and promises only 2-4 days between acceptance and online publication. The journal has been set up by four Danish researchers and is published by the Danish Medical Association. One of them, Jacob Rosenberg, will present a study of where articles about publishing were published in 2012 at the forthcoming conference of the European Association of Science Editors.

A journal that departs further from the conventional model is Proceedings of Peerage of Science, a journal for commentaries associated with the journal-independent peer review service Peerage of Science. The journal will publish commentaries on published research, mostly based on open reviews of papers that have been generated as part of Peerage of Science. These will be free to read [edited from ‘open access’ following comments below], but there is no fee to the author – on the contrary, the authors of these commentaries will potentially receive royalties! Anyone who values a particular commentary or the journal as a whole can become a ‘public patron’ and donate money, some of which will go to the author of that commentary. I will be watching this innovative business model with interest.

Finally, it is difficult to tell whether @TwournalOf will be a serious journal, but it certainly claims to be: a journal in which the papers each consist of a single tweet. ‘Papers’ are submitted by direct message, and the journal is run by Andy Miah (@andymiah), professor in ethics and emerging technologies at the University of the West of Scotland. I wondered (on Twitter of course) how this would work given that you can only send someone a direct message if they follow you. The answer came immediately: the journal will follow everyone who follows it. One to watch!

Developments at PLOS

Two announcements by Public Library of Science caught my eye this month. The first was actually in December but I missed it at the time and was alerted to it recently by @Alexis_Verger: PLOS have released a revised data policy (coming into effect in March) in which authors will be required to include a ‘data availability statement’ in all research articles published by PLOS journals. This statement will describe the paper’s compliance with the PLOS data policy, which will mean making all data underlying the findings described in their article fully available without restriction (though exceptions will be made, for example when patient confidentiality is an issue). This is another step in the movement towards all journals requiring the full dataset to be available. I hope other journals will follow suit.

The other announcement was about a post-publication review system called PLOS Open Evaluation. This is currently in a closed pilot stage, but it sounds like it will finally provide the evaluation of impact that the founders promised when they set up PLOS ONE to publish all scientifically sound research. Users will be able to rate an article by their interest in it, its significance, the quality of the research, and the clarity of the writing. There is also the opportunity to go into more detail about any of these aspects.

How journals should work

The New Year started off with an open letter from Oxford psychology professor Dorothy Bishop (@deevybee) to academic publishers. She points out a big change that has happened because of open access:

In the past, the top journals had no incentive to be accommodating to authors. There were too many of us chasing scarce page space. But there are now some new boys on the open access block, and some of them have recognised that if they want to attract people to publish with them, they should listen to what authors want. And if they want academics to continue to referee papers for no reward, then they had better treat them well too.

Bishop urges journal publishers to make things easier for authors and reviewers, such as by not forcing them through pointless hoops when submitting a paper that might still be rejected (a choice quote: “…cutting my toenails is considerably more interesting than reformatting references”). She singles out eLife and PeerJ as two new journals that are doing well at avoiding most of the bad practices she outlines.

Later in the month Jure Triglav (@juretriglav), the creator of ScienceGist, showed what amazing things can be done with scientific figures using modern internet tools. He shows a ‘living figure’ based on tweets about the weather, which continuously updates as it receives new data. Just imagine what journals would be like if this kind of thing were widely used!

Finally, this month’s big hashtag in science was #SixWordPeerReview. Researchers posted short versions of peer reviews they have received (or perhaps imagined). Most of the tweets were a caricature of what people think peer review involves (perhaps understandably for a humorous hashtag), and a few people (such as @clathrin) pointed out that real peer review can be very constructive.

F1000Research did a Storify of a selection, taking the opportunity to point out the advantages of open peer review at the same time. Some of my favourites were:

@paulcoxon: “Please checked Engilsh and grammar thoroughly” (actually happened)

@girlscientist: Didn’t even get journal name right. #SixWordEditorReview

@McDawg: Data not shown? No thank you

Why do journals insist that data ‘are’?

Given the controversy over this grammatical point, I argue that journal style guides should allow both ‘data is’ and ‘data are’.

I was recently directed (via @blefurgy and @deb_lavoy on Twitter) to an old blog post on something that frequently bugs me: the question of whether the word ‘data’ is singular or plural. The post, by Norman Gray, an astronomical data management researcher at Glasgow University, UK, dates from 2005 but I haven’t seen a better one on the topic. Gray argues that:

…the word ‘data’, in english, is a singular mass noun. It is thus a grammatical and stylistic error to use it as a plural.

Plural use is barbaric: amongst other crimes, it is a deliberate archaism, and thus a symptom of bad writing.

Strong stuff.

An alternative view is given by Peter Coles (@telescoper), another astronomer at Cardiff University, UK, who also explains the issue clearly:

For those of you who aren’t up with such things, English nouns can be of two forms: “count” and “non-count” (or “mass”). Count nouns are those that can be enumerated and therefore have both plural and singular forms: one eye, two eyes, etc. Non-count nouns (which is a better term than “mass nouns”) are those which describe something which is not enumerable, such as “furniture” or “cutlery”. Such things can’t be counted and they don’t have a different singular and plural forms. You can have two chairs (count noun) but can’t have two furnitures (non-count noun)…

…Norman Gray asserts that (a) “data” is a non-count noun and that (b) it should therefore be singular.

I tend to look and listen out for instances of ‘data’, and I have very rarely heard someone say ‘the data are’ in natural speech. As Gray says:

The majority of writers who would dutifully pluralise ‘data’ in writing naturally and consistently use it as a mass noun in conversation: they ask how much data an instrument produces, not how many; they talk of how data is archived, not how they are archived; they talk of less data rather than fewer; and they always talk of data with units, saying they have a megabyte of data, or 10 CDs, or three nights, and never saying ‘I have 1000 data’ and expecting to be understood.

You may wonder why this matters at all. Well, practically every scientific paper contains the word ‘data’ somewhere, and all the journals I edit for insist that it is made plural every time. I spend a ridiculous amount of my editing time looking out for instances of ‘the data is’ and similar. And they can’t be found automatically using a macro, either, because the subject and verb can be separated by other words, or the verb may be something else like ‘shows’ or ‘illustrates’ rather than ‘is’. This is a ‘mistake’ that a lot of authors make.
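To illustrate why automation falls short here, below is a minimal sketch (in Python; purely hypothetical, not a tool any journal actually uses) of the kind of heuristic search a copyeditor might run to flag candidate sentences for manual review, allowing a few words to intervene between ‘data’ and the verb:

```python
import re

# Heuristic only: flag 'data' followed within a few words by a singular verb.
# It misses constructions it wasn't told about and produces false positives
# (e.g. 'data analysis is'), so every hit still needs human judgement.
SINGULAR_AFTER_DATA = re.compile(
    r"\bdata\b(?:\s+\w+){0,3}?\s+(?:is|was|has|shows|illustrates|suggests)\b",
    re.IGNORECASE,
)

def flag_candidates(text):
    """Return matched phrases so a copyeditor can inspect each in context."""
    return [m.group(0) for m in SINGULAR_AFTER_DATA.finditer(text)]

print(flag_candidates("The data presented here is consistent with the model."))
# -> ['data presented here is']
```

Even a pattern like this only narrows the search; it cannot decide which hits are genuine, which is why the checking still eats editing time.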

So is ‘data is’ really a mistake? Are the journals right to insist on this change?

The argument from etymology

The main argument used for ‘data are’ is that the word is derived from a plural Latin word. Gray dismantles this thoroughly by showing that it never was a simple plural in Latin. It is:

…the neuter plural past participle of the first conjugation verb dare, ‘to give’ (it’s actually also the feminine singular past participle, but that really, really, doesn’t matter).

…there was almost certainly no latin word for the concept that we now identify by the english word ‘data’….

…Put another way, that means that the word ‘data’, as a technical term referring to the ore of observations, which can be painstakingly reduced to extract knowledge, is not a latin word at all. It’s a native english word with a latin past, which means, bluntly, that we get to choose how to use it, and if its meaning changes over time – as it has – then its grammatical analysis can reasonably and properly migrate also.

I find this a convincing argument. It reminds me of the pedants who don’t like split infinitives (‘to boldly go’) because Latin infinitive verbs couldn’t be split, which is pretty irrelevant to how we should treat them in English (see Wikipedia for current views on this issue).

Gray goes on to compare ‘data’ with other similar Latin-derived words, such as ‘agenda’, ‘stamina’, ‘media’ and ‘phenomena’. ‘Stamina’ is at one end of a spectrum: it is never used in the singular (‘stamen’) except in a specialist botanical sense, and it is a singular noun. ‘Phenomena’ is at the other end – the singular ‘phenomenon’ is frequently used and ‘phenomena’ is a plural noun. ‘Agenda’ is almost the same as ‘stamina’ but the singular ‘agendum’ just about makes sense (although ‘agenda item’ would be more usual). ‘Media’ is moving from being a plural of ‘medium’ to being a separate singular noun in its own right. Gray says:

In this spectrum (not ‘spectra’, of course), ‘data’ is clearly located near ‘agenda’.

I would agree with this assessment on the whole, though I disagree with Gray that ‘datum’ is ‘certainly not one of the things that makes up data’. But like ‘agenda item’, a more commonly used term would be ‘data point’.

In fact, there is a technical use of the word ‘datum’, which Gray has dug out: it is a surveying term. But the plural of this usage of ‘datum’ is ‘datums’, not ‘data’.

Peter Coles doesn’t in fact completely agree with the journal publishers’ stipulation that ‘data’ is never singular – rather, he argues that there are contexts in which the plural use makes sense, and others in which singular use is better:

“If I had less data my disk would have more free space on it.” (Non-count)

“If I had fewer data I would not be able to obtain an astrometric solution.” (Count)

I’m fine with this distinction if people want to use it. But why, then, should journals insist that the singular use is incorrect?

A proposal: stop being prescriptive about data

You may or may not agree with Norman Gray (and me) that ‘data are’ is incorrect. But you can surely agree that there is controversy about the issue. The reasons to insist on plural data are hotly contested, to say the least.

So I propose that publishers remove the stipulation in their style guides that ‘data is’ is incorrect and should be changed to ‘data are’. In fact there is no need to be prescriptive on the issue at all: if the author writes ‘data are’, it can stay, but if they write ‘data is’, that can stay too. This would save a not insignificant amount of time for copyeditors, in searching and replacing ‘data is’ and in arguing the point with authors. It would probably save authors some time and annoyance too. And it would also make journals look more modern in this age of terabytes of data.

Who is going to be the first publisher to take a leap into the unknown? You have nothing to lose but your fuddy-duddy reputation.

Your opinions

Grammatical issues like this usually generate more heat than light, so I expect there will be comments on this post. I would particularly like to hear from journal editors who have been involved in discussions about this issue for their style guides, and from authors who have railed against the ‘data are’ rule imposed by a journal. I reserve the right to remove comments that simply rehash old arguments or only say that one or other construction is ‘ugly’ or ‘just wrong’.

Journal news for 20-27 January

A brief summary of recent news related to journals and scientific publishing.

Datasets International

The open access publisher Hindawi has launched Datasets International, which “aims at helping researchers in all academic disciplines archive, document, and distribute the datasets produced in their research to the entire academic community.” For a processing charge of $300 authors can upload an apparently unlimited amount of data under a Creative Commons CC0 licence (and associated dataset papers under an Attribution licence), according to comments on Scott Edmunds’ Gigablog. The new journals currently associated with this initiative are Dataset Papers in: Cell Biology, Optics, Atmospheric Sciences and Materials Science, though no doubt more will follow. (Heard via @ScottEdmunds.)

Peerage of Science

This week a company run by three Finnish scientists unveiled a new take on improving peer review. Peerage of Science is a community of scientists (‘Peers’), formed initially by invitation, who review each other’s papers anonymously before submission to journals. Reviews are themselves subjected to review, which means that reviewers receive recognition and ratings for their work. The reviews can even be published in a special journal, Proceedings of the Peerage of Science. Journals can offer to publish manuscripts at any point, for a fee – this is how the company aims to make a profit. (Heard via chemistryworldblog, via @adametkin.)

Peer review by curated social media

Science writer Carl Zimmer (@carlzimmer) reported last week in the New York Times on a recent (open access) study in Proc Natl Acad Sci USA about the generation of multicellular yeast by artificial selection in the lab. He has now posted a follow-up article on his Discover blog, in which he presents the conversation that followed on Twitter about this paper (using Storify) and invites the author to respond, which the author does. The comments on the latter post continue the conversation, and the author continues to respond. It’s an interesting example of the author of a controversial paper engaging constructively in post-publication peer review. (Heard via @DavidDobbs.)

Research Objects

Tom Scott (@derivadow, who works for Nature Publishing Group) has published a detailed blog post outlining a proposal for a new kind of scientific publication: the Research Object. This would be a collection of material, linked by a Uniform Resource Identifier (URI), including an article, raw data, protocols, links to news about the research published elsewhere, links to the authors and their institutions, and more. He credits the Force11 (‘Future of Research Communications and e-Scholarship’) community for the idea, which is developed in greater detail here (pdf). These elements may or may not be open access, although the sophisticated searches Scott envisages will be difficult if they are not. (Heard via @SpringerPlus.)
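To make the proposal concrete, here is a hypothetical sketch of what such a bundle might look like as a simple data structure. The field names are my own, for illustration only – they are not Force11’s or NPG’s actual schema:

```python
# Hypothetical Research Object: a single URI ties together the article,
# its data, protocols, news coverage and people. All names are illustrative.
research_object = {
    "uri": "https://example.org/research-objects/12345",
    "article": "https://example.org/articles/12345",
    "raw_data": ["https://example.org/data/12345/measurements.csv"],
    "protocols": ["https://example.org/protocols/12345-sample-prep"],
    "news_coverage": ["https://example.org/news/story-about-12345"],
    "authors": [
        {
            "name": "A. Researcher",
            "institution": "https://example.org/institutions/42",
        }
    ],
}
```

Because everything hangs off one identifier, a search engine could in principle answer queries such as ‘find all raw datasets produced by this group’ – provided, as noted above, that the elements are openly accessible.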

Analysis of F1000 Journal Rankings

Phil Davis of The Scholarly Kitchen has done an analysis of the journal ranking system announced by Faculty of 1000 (F1000) in October. The analysis includes nearly 800 journals that were given a provisional F1000 Journal Factor (called FFj by F1000) for 2010. Plotting the FFj of each journal against the number of articles from it that were evaluated by F1000 shows that the two numbers are closely related; in fact, the number of articles evaluated explains over 91% of the variation in FFj. Journals from which only a few articles were evaluated suffer not only from this bias, but also from a bias against interdisciplinary and physical science journals that publish little biology. It seems to me that these biases could easily be addressed by taking into account (a) the number of articles evaluated from each journal and (b) the proportion of biology articles published in it when calculating the FFj. F1000 would be wise to study this useful analysis when reviewing their ranking system, as they plan to do regularly, according to the original announcement. (Heard via @ScholarlyKitchn.)
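For readers who want to see how a figure like ‘explains over 91% of the variation’ is arrived at, here is a minimal sketch with made-up numbers (not Davis’s actual dataset): fit a least-squares line of FFj against the number of articles evaluated, then report R², the proportion of variance explained.

```python
import numpy as np

# Made-up illustrative values, NOT the real F1000 data.
articles_evaluated = np.array([2.0, 5.0, 10.0, 40.0, 120.0, 300.0, 700.0])
ffj = np.array([0.5, 1.1, 1.8, 4.2, 9.5, 18.0, 35.0])

# Least-squares fit: FFj ~ a * n + b
a, b = np.polyfit(articles_evaluated, ffj, 1)
predicted = a * articles_evaluated + b

# R^2 = 1 - SS_res / SS_tot: the share of FFj variance explained by counts.
ss_res = np.sum((ffj - predicted) ** 2)
ss_tot = np.sum((ffj - ffj.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.3f}")
```

An R² above 0.91, as Davis found, means the ranking largely reproduces how many articles were evaluated rather than measuring per-article quality – which is why correcting for article counts, as suggested above, would be a sensible fix.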

Choosing a journal III: practicalities

In this series I am looking at various aspects of choosing a journal: so far I have covered getting your paper published quickly and getting it noticed. In this third post I look at a few practical issues that might affect your choice of journal.

Do they copyedit?

If your paper is read by lots of people, any errors in it will be noticed and will reflect badly on you. Most journals use copyeditors (freelance or employed) to edit papers after they are accepted. They ensure that papers are clearly and grammatically written and query obvious potential errors with the authors; they also ensure that there is a consistent house style. Some, but not all, journals also use proofreaders for a further quality check after the authors’ corrections have been made; others rely on the authors for this.

Notable examples of journals that do not use copyeditors for their research papers are PLOS ONE and the BMC series journals; instead, they recommend that authors use an editing service.

You may feel that you won’t make any errors, so your paper will be the one that needs no editing or proofreading. In my long experience of editing, however, I have not once found a paper that needed no changes. Because you know what you are talking about, it is easy to miss the omitted explanation without which your methods will be incomprehensible to some readers. Everyone needs someone else to edit their writing, even professional writers.

So if you do choose a journal that doesn’t copyedit their papers, for your reputation’s sake make sure you hire an editor (perhaps me!) to check it first.

Policies on data publication and supplementary material

If you have a large dataset, what mechanisms does the journal have for publishing it? Do they encourage supplementary material?

I know of one journal, Journal of Neuroscience, that does not allow supplementary material. Another journal, GigaScience, is set up specifically to publish very large datasets. And there are many journals with policies between these two extremes.

Also, does the supplementary material get checked or copyedited? For many journals it does not. Bear this in mind when preparing it.

Costs of publication

If the journal is open access, how much does it charge authors?

Some ‘hybrid’ journals allow authors to choose whether their article is open access or not: a list of the author charges for such journals is on SHERPA/RoMEO (updated July 2011 when I viewed it). The average charge is around US$2500.

I haven’t been able to find an up-to-date table of author charges for journals that are completely open access, but a table from 2009 is at openwetware. The average charge for these journals then was about $2350, but the difference from the hybrid average may simply reflect the two years between the surveys.

Some closed access journals have page charges or charge for colour printing.

If the journal doesn’t copyedit papers after acceptance, you will also need to factor in the cost of getting your paper edited.

Ease of use of online submission system

Nowadays it is unusual for a journal not to have an online submission system, but the systems in use vary a lot. Check out other authors’ experiences to find out whether a journal’s system is easy to use. Given the many other factors to consider, however, an online submission system would probably have to be really bad to make a difference to whether you would submit your paper there.

Previous experience with the publisher/journal

If you have published with the journal before, or with others from the same publisher, you might be tempted to stick with what you know. In particular, if you know an editor on a journal this might make you more confident in submitting there. I would recommend investigating alternatives first, however.

Recommendations from other authors

Do you know anyone who has published with the journal or with others owned by the same publisher? Whether or not you have a personal recommendation, search online for comments (good or bad) from others who have published there. Some publishers (e.g. BioMed Central) have surveyed their authors to see how satisfied they are.

Your experience

How important is each of these factors in your choice of journal? Do you know of any journal publishers that have particularly good or bad online submission systems or supplementary material policies? Do you know which other journals do or do not copyedit papers?