The Research Works Act, open access and publisher boycotts

The open access movement has been around for decades, gradually building up, but this month there seems to have been an acceleration in the pace of change. I will try in this post to summarise the current situation as I see it.

The initial driver of this recent change was the Research Works Act (RWA), a draft law proposed in the US that would prohibit federal bodies from mandating that taxpayer-funded research be made freely accessible online (as the NIH currently does). The two Representatives sponsoring the RWA, Darrell Issa and Carolyn Maloney, have received considerable amounts of money from the publisher Elsevier, which publishes many journals and opposes open access (as reported on Michael Eisen’s blog).

The second important event was the decision of Cambridge mathematics professor and Fields Medal winner Timothy Gowers to publish a blog post on 21 January entitled ‘Elsevier — my part in its downfall’ (after the late Spike Milligan’s book ‘Adolf Hitler: My Part in His Downfall’). (Gowers was the initiator of the Polymath Project, an experiment in open online collaboration between thousands of mathematicians, which Michael Nielsen lauded in his TED talk on open science.) Gowers summarised the criticisms of Elsevier:

  1. Their very high prices
  2. Their practice of ‘bundling’ journals into collections that libraries have to subscribe to together
  3. Their ‘ruthless’ negotiation tactics with libraries
  4. Their support of the RWA, and of the related acts SOPA and PIPA (both now postponed).

He was already quietly avoiding publishing in Elsevier journals and avoiding reviewing for them. But he decided that this quiet approach wasn’t enough: he called for coordinated action by academics. He comments that ‘Elsevier is not the only publisher to behave in an objectionable way. However, it seems to be the worst’.

This led mathematician Tyler Neylon to set up ‘The cost of knowledge’, a page where researchers could publicly declare that they ‘will not support any Elsevier journal unless they radically change how they operate’. At the time of writing, it has over 2300 signatures.

In the past week the usual trickle of blog posts about open access and Elsevier has turned into a flood. I’ll pick out a few here:

Elsevier and their allies have responded:

But The Lancet, which is published by Elsevier, has said it ‘strongly opposes’ the RWA, saying: ‘This short and hastily put together legislation is not in the interests of either science or the public’.

Others have criticised these responses (e.g. Mike Eisen, Drug Monkey).

The coverage is now reaching the mainstream:

It will be interesting to see what Elsevier says in a statement that was expected today, according to the Chronicle of Higher Education.

*  *  *  *

So, where do I stand? I am a freelance editor, working directly or indirectly for scientists and for publishers, on both open access and closed access journals. I worked for two years for Elsevier and then five years for BioMed Central, one of the leading open access publishers, and part of my job at BMC was to advocate for open access. I’m not a great fan of Elsevier, partly for the reasons that others give as described above, and partly because I think they (like many other publishers) are too keen on cutting costs and not keen enough on ensuring quality in their publications.

All this means that I am sympathetic to the open access movement but am not an active advocate of it. I’m not currently in a position to refuse to work for closed access publishers, nor would that have much effect on their policies. When helping scientists choose where to submit their papers, I try to dispassionately present the arguments for different types of journals and encourage them to investigate open-access options, but the decision is up to them.

What I’d like to do is think through what effect a boycott would have on each affected journal. The first people to suffer will be the editors who handle manuscripts. Usually they have to ask several people before two reviewers agree to look at a paper – with the boycott, they will get more noes before they get enough yeses. If the editors are in-house staff, will this filter up to their managers, and to their managers’ managers, up to the top of the company? Maybe, but only if the proportion of people refusing to review for the journal is big enough. And in the meantime the editors, who have no say in the policies of their company, will be having a hard time.

One way the boycott could perhaps be more effective would be if it focused on a few journals in well-defined, small fields where there is a limited pool of potential reviewers. In a small field, it might be possible for a sizeable proportion of researchers to refuse to review for a particular journal, so this would have a bigger effect.

I would hope that those who refuse will make their reasons clear (as in this example letter) so that in-house staff aren’t left wondering what is going on. The boycotters will also need to make it clear to the staff that it is their employers they have a problem with, not the editors and editorial assistants themselves. Extreme politeness and chocolate might go down well!

I hope everyone will also remember that there are many researchers who need to publish to keep their jobs or get funding and tenure. Not everyone has a free choice of where to submit their paper. Those who do not join the boycott should not be assumed to be enemies of it.

So if you are boycotting any particular publisher, spare a thought for both the in-house staff who have to put up with it and for the researchers who can’t join in.

Journal news for 20-27 January

A brief summary of recent news related to journals and scientific publishing.

Datasets International

The open access publisher Hindawi has launched Datasets International, which “aims at helping researchers in all academic disciplines archive, document, and distribute the datasets produced in their research to the entire academic community.” For a processing charge of $300, authors can upload an apparently unlimited amount of data under a Creative Commons CC0 licence (and associated dataset papers under an Attribution licence), according to comments on Scott Edmunds’ Gigablog. The new journals currently associated with this initiative are Dataset Papers in: Cell Biology, Optics, Atmospheric Sciences and Materials Science, though no doubt more will follow. (Heard via @ScottEdmunds.)

Peerage of Science

Peerage of Science, a company run by three Finnish scientists, launched this week with a new take on improving peer review. It is a community of scientists (‘Peers’), formed initially by invitation, who review each other’s papers anonymously before submission to journals. Reviews are themselves subjected to review, which means that reviewers receive recognition and ratings for their work. The reviews can even be published in a special journal, Proceedings of the Peerage of Science. Journals can offer to publish manuscripts at any point, for a fee – this is how the company aims to make a profit. (Heard via chemistryworldblog, via @adametkin.)

Peer review by curated social media

Science writer Carl Zimmer (@carlzimmer) reported last week in the New York Times on a recent (open access) study in Proc Natl Acad Sci USA about the generation of multicellular yeast by artificial selection in the lab. He has now posted a follow-up article on his Discover blog, in which he presents the conversation about this paper that followed on Twitter (using Storify) and invites the author to respond, which the author does. The comments on the latter post continue the conversation, and the author continues to respond. It’s an interesting example of the author of a controversial paper engaging constructively in post-publication peer review. (Heard via @DavidDobbs.)

Research Objects

Tom Scott (@derivadow, who works for Nature Publishing Group) has published a detailed blog post outlining a proposal for a new kind of scientific publication: the Research Object. This would be a collection of material, linked by a Uniform Resource Identifier (URI), including an article, raw data, protocols, links to news about the research published elsewhere, links to the authors and their institutions, and more. He credits the Force11 (‘Future of Research Communications and e-Scholarship’) community for the idea, which is developed in greater detail here (pdf). These elements may or may not be open access, although the sophisticated searches Scott envisages will be difficult if they are not. (Heard via @SpringerPlus.)
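To make the idea more concrete, here is a minimal sketch in Python of the kind of bundle Scott describes, with every element identified by a URI. The structure, field names and URIs are all invented for illustration and do not follow any actual Force11 or Nature Publishing Group specification.

```python
# Purely illustrative: one possible way to represent a 'Research Object' as a
# bundle of linked URIs. Field names and URIs are invented for this sketch.
research_object = {
    "id": "https://example.org/research-objects/2012/0001",
    "article": "https://example.org/articles/doi/10.0000/example.0001",
    "raw_data": [
        "https://example.org/datasets/sequencing-run-1",
        "https://example.org/datasets/sequencing-run-2",
    ],
    "protocols": ["https://example.org/protocols/library-prep-v2"],
    "authors": ["https://example.org/people/jane-smith"],
    "institutions": ["https://example.org/orgs/example-university"],
    "related_coverage": ["https://example.org/news/story-about-the-paper"],
}

# Because every element is a URI, a crawler could follow links from the article
# to its data, protocols and authors, which is what would make richer searches
# across research outputs possible.
for key, value in research_object.items():
    print(key, value)
```

The point of the sketch is simply that the Research Object is a container of links rather than a single document; whether each linked element is itself openly accessible is a separate question, as noted above.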

Analysis of F1000 Journal Rankings

Phil Davis of The Scholarly Kitchen has done an analysis of the journal ranking system announced by Faculty of 1000 (F1000) in October. The analysis includes nearly 800 journals that were given a provisional F1000 Journal Factor (called FFj by F1000) for 2010. Plotting the FFj of each journal against the number of articles from it that were evaluated by F1000 shows that the two numbers are closely related; in fact, the number of articles evaluated explains over 91% of the variation in FFj. Journals from which only a few articles were evaluated suffer not only from this bias, but also from a bias against interdisciplinary and physical science journals that publish little biology. It seems to me that these biases could easily be addressed by taking into account (a) the number of articles evaluated from each journal and (b) the proportion of biology articles published in it when calculating the FFj. F1000 would be wise to study this useful analysis when reviewing their ranking system, as they plan to do regularly, according to the original announcement. (Heard via @ScholarlyKitchn.)
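To illustrate the kind of adjustment I have in mind, here is a minimal sketch with made-up numbers: fit the FFj against the number of articles evaluated and use the residual as a size-adjusted score. This is my own illustrative approach, not F1000’s or Davis’s method.

```python
# Illustrative only: one way a journal score could be adjusted for the number
# of articles evaluated. The numbers are invented; this is not F1000's method.
import numpy as np

# Hypothetical per-journal data: provisional FFj and number of articles evaluated
ffj = np.array([12.5, 8.0, 3.2, 1.1, 0.6])
n_evaluated = np.array([450, 220, 60, 12, 4])

# Fit FFj against log(number of articles evaluated); the strong correlation
# Davis reports means the fit will be close for most journals.
slope, intercept = np.polyfit(np.log(n_evaluated), ffj, 1)
expected = slope * np.log(n_evaluated) + intercept

# The residual is one possible size-adjusted score: how much better or worse a
# journal does than expected given how many of its articles were evaluated.
adjusted = ffj - expected
print(adjusted)
```

A similar term for the proportion of biology articles could be added to the fit to address the second bias.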

Choosing a journal IV: peer review procedure

This is the fourth post in my series on choosing a journal, following posts on getting your paper published quickly, getting it noticed, and practicalities.

Most journals use the usual procedure for peer review:

  • The editors first decide whether to reject the manuscript immediately or send it to peer reviewers
  • Unless the manuscript is rejected, the editors send the manuscript to 2-3 reviewers
  • The reviewers provide reports on the manuscript
  • The editors decide, using the reports, whether to reject or invite revision
  • Unless the manuscript is rejected, the authors revise it
  • The editors decide whether to send the revised version back to reviewers
  • … and so on until final rejection or final acceptance.

A few journals, however, have variations on this, which are worth knowing about before you decide where to submit your paper.

Some examples of different peer review procedures are:

  • A small but increasing number of journals have open peer review, in which the reports, sometimes with the reviewers’ names, are published with the paper (e.g. Biology Direct, BMJ Open, medical BMC journals)
  • If you are a member of the US National Academy of Sciences, you can ‘contribute’ a paper to PNAS, together with expert reviews by researchers you have chosen
  • The newly announced journal SpringerPlus promises that “we will either accept your manuscript for publication or not, our editors will not ask for additional research”.

There are also differences in the questions the reviewers are asked about the paper. Many journals ask whether the research is interesting or important enough for the journal, and consider only those papers whose importance is judged to be over a certain threshold. A few, however, explicitly do not ask this question and have no such threshold.

The latter journals publish all research that is within the scope of the journal that reviewers find to be scientifically sound, regardless of how important or interesting they judge it to be. Some examples of such journals are:

Your experience

Do you know of other variations on the usual peer review procedure? Has a journal’s peer review process been a factor in choosing to submit your paper to it?

Journal News

A brief summary of recent news related to journals and scientific publishing.

Journal of Errology

A new venture came to my notice this week that aims to provide “an experimental online research repository that enables sharing and discussions on those unpublished futile hypothesis, errors, iterations, negative results, false starts and other original stumbles that are part of a larger successful research in biological sciences.” It is not clear whether the Journal of Errology will succeed, but it is an interesting development that might fill a gap that journals are currently neglecting.

Figshare

Another place to send your miscellaneous data is figshare, which relaunched this week. This “allows researchers to publish all of their research outputs in seconds in an easily citable, sharable and discoverable manner”. They are encouraging researchers to upload negative data, supplementary material that is too large for journal limits, and miscellaneous figures that aren’t likely to get written up as a paper.

The Research Works Act

You’ll probably have heard about the Research Works Act (RWA) being proposed in the US, which would prohibit the NIH or other federal bodies from mandating (as the NIH currently does) that taxpayer-funded research should be freely accessible online. A summary for UK readers by Mike Taylor (@SauropodMike) is here. The act is supported by the Association of American Publishers, and Twitter has been full of scientists lobbying journal publishers to come out against it. So far, the AAAS (publisher of Science) and Nature Publishing Group have been among the journal publishers opposing the RWA.

An open peer review experiment

AJ Cann (@AJCann) is inviting comments on a research paper (entitled “An efficient and effective system for interactive student feedback using Google+ to enhance an institutional virtual learning environment”) on his blog, as a form of open peer review. He’s received several reviews so far, as well as comments on the process.

A journal using WordPress

Andrés Guadamuz, the technical editor of SCRIPTed, the open access journal of Law and Technology, has written a blog post “Confessions of an open access editor” that mentions that the journal is now one of the few hosted by WordPress. Given the recent launch of Annotum, the WordPress add-on for authoring scholarly publications, it looks like WordPress is going to become more important as a platform in the future.

A survey on attitudes to open access

The International Journal of Clinical Practice (IJCP), published by Wiley, has launched a survey on what authors think about the idea of the journal going completely open access (rather than having it as an option as at present). They will be asking all submitting authors for the next six months and are also inviting others to write a Letter to the Editor with their thoughts. They seem to be genuinely interested in authors’ views and not pushing either for or against open access.

The ‘academic dollar’ altmetric

A post by Sabine Hossenfelder on the BackReaction blog (which I heard about via @ScholarlyKitchn) discusses a 2010 paper entitled “An Auction Market for Journal Articles” that suggests an ‘academic dollar’ “that would be traded among editors, authors, and reviewers and create incentives for each involved party to improve the quality of articles”. The post is scathing about this proposal, describing it as an example of “Verschlimmbesserung”, defined by Urban Dictionary as “an attempted improvement that makes things worse than they already were”. Altmetrics may be on the rise, but it looks like this one won’t be taking off.

http://backreaction.blogspot.com/2012/01/academic-dollar.html

Choosing a journal II: getting your paper noticed

This is the second in a series of posts on factors to consider when choosing which journal to submit your paper to. Here, I will look at how your choice of journal can affect the extent to which your work is noticed.

Part one of the series, on getting your paper published quickly, is here.

How well known is the journal?

It goes without saying that papers in very well known journals like Nature, Science, Proc Natl Acad Sci USA and Cell will be seen by more people than those in other journals. The more specialist journals may, however, be the ones that are seen by the people in your field, whom you mainly want to reach.

How well do they publicise their papers?

Most journals send out their tables of contents to interested readers by email. This is really a minimum, and any journal that doesn’t do it doesn’t deserve your paper. But there are other ways that journals can publicise papers.

  • Press releases – do they have an active press office that writes attractive press releases and sends them to relevant media together with the embargoed article before publication? Do they also send press releases to bloggers (who are often more interested in science news than are the mainstream media)?
  • Do they have a blog highlighting recent papers?
  • Do they have a Twitter account with plenty of followers?
  • Do they have a Facebook page ‘liked’ by plenty of people?
  • Do they give awards for top papers of the year or similar?

Do they publish articles highlighting research papers?

Nature, for example, publishes News and Views articles, which are short pieces highlighting the most interesting research papers in the current issue. Many other journals publish similar ‘minireviews’ about recent papers.

Some journals have an editorial in each issue summarising the papers in that issue (such as in the current issue of Gut).

Some have news sections in print or online (e.g. Science) where more accessible pieces about recent papers from that journal or elsewhere can reach a general readership.

Open access versus subscription

There is some evidence that open access articles are cited more than those in subscription journals; there is also some evidence against a citation advantage. It is certainly worth considering whether you want your paper to be read just by potential readers in an institute that subscribes to the journal, or also by independent researchers, those in less well funded institutes, journalists and members of the public.

If you don’t go for a fully open access journal (‘gold’ open access), will the journal allow you to post a version on your own website or in a repository (‘green’ open access; definitions here)? Does the paper become free to read online 6 or 12 months after publication, or never? Journals vary a lot in their restrictions on how an author can distribute their paper (see the SHERPA/RoMEO directory for details).

Is the journal indexed by indexing services?

When a journal is set up it takes a while for it to be indexed by services such as PubMed, ISI Web of Science and AGRICOLA. New journals will therefore not yet be indexed, so articles won’t be findable in searches of these databases. This makes it slightly more risky to publish in very new journals.

Your experience

Have you had a published paper promoted in any of these ways, or others? Did you choose open access to get your paper seen by more people? What difference do you think these choices made?

Choosing a journal I: getting your paper published quickly

So, you’ve got a great result from your experiments or data analysis and are starting to write it up as a paper. Now is the time to think about which journal to submit your paper to. How do you decide?

In this series of posts I will discuss various factors worth considering. In this first post, I will look at differences between journals that can have a big effect on how long it is before your paper is finally published.

Speed of peer review and publication

Many journals publish, on their website, either statistics on their speed of peer review or a statement of their target time from submission to first decision. The submission and acceptance dates printed on published articles aren’t very useful for this: the time between submission and acceptance includes the time the authors took to revise, and you can’t tell how many rounds of peer review took place.

You should also check, if this information is available, how long the journal takes to reject papers at the editorial stage, without sending them to reviewers. If a journal is going to reject your paper, you want this to happen as soon as possible so that you can try elsewhere without a long delay. If you’re not sure whether your paper will be of interest to a journal, check whether they will look at presubmission enquiries; you might be able to get a quick answer by just sending the abstract.

Journals sometimes publish statistics on the speed of publication after acceptance. If they don’t, look at the acceptance and publication dates on papers in recent issues to get an idea. Do they publish the accepted version quickly, with a copyedited, formatted and proofed version going online later, or do they wait until the final version is ready before publishing? If you can’t find out this information from the journal’s website, email the editors to check or ask colleagues who have published there.
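If the dates are available, a quick back-of-the-envelope calculation is easy. Here is a minimal sketch, assuming you have copied the received, accepted and published dates from a few recent papers by hand (the dates below are made up).

```python
# Rough check of a journal's speed using dates copied from a few recent papers.
# All dates here are invented for illustration.
from datetime import date
from statistics import median

papers = [
    # (received, accepted, published online)
    (date(2011, 9, 14), date(2011, 12, 2), date(2011, 12, 20)),
    (date(2011, 10, 3), date(2012, 1, 9), date(2012, 1, 24)),
    (date(2011, 11, 21), date(2012, 1, 30), date(2012, 2, 10)),
]

review_days = [(acc - rec).days for rec, acc, pub in papers]
production_days = [(pub - acc).days for rec, acc, pub in papers]

# Remember: received-to-accepted includes the authors' own revision time,
# so it overstates how long the journal itself took.
print("Median received-to-accepted:", median(review_days), "days")
print("Median accepted-to-published:", median(production_days), "days")
```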

Cascading peer review

The bigger publishers have a system whereby if a paper is rejected from one of their journals, the reviewers’ reports can be passed to another journal within the same publisher. This means that you don’t have to start from scratch after a rejection, so the process of getting published isn’t delayed.

The following are some examples that I know of:

Print/online or online only

Should you choose a journal that has a print issue or one that is online only? Nowadays many prestigious journals do not produce a print issue, so a print copy is no longer widely considered to be essential for ‘proper’ publication.

But the choice isn’t simply between print/online and online-only journals when you are thinking about getting your paper out as soon as possible. If the journal has a print issue, does it publish papers online soon after acceptance or in batches corresponding to a print issue? If online publication waits for print publication, that slows things down considerably. Fortunately, the journals I know of that used to do this (which will remain nameless) now publish soon after acceptance.

Other factors

Some other things affect speed of publication that may be harder to allow for.

If the journal is very new the editors may not yet be overloaded with papers and might be able to push yours through peer review quickly. On the other hand, they may be too busy telling people that the journal exists.

If the editors are academics themselves, they could be busy with research and teaching, which might take priority over dealing quickly with your paper. On the other hand, in-house editors may also have other tasks that slow things down, such as going to conferences or managing other editors.

Your experience

Do you know of journals or publishers that have particularly short peer review or publication times, or that have processes that speed things up? Are there other factors that can affect speed besides those I’ve listed here? Please do leave a comment.

Introductory post

Hi! Welcome to my new blog. I hope you will find it interesting and useful.

This blog will focus mainly on scientific journals: how they work, differences between them, ways in which they are changing, and what researchers can do to make the best of them. I will look at journal policies, their peer review systems, their editing and production processes, their speed, and the economics and politics of journal publishing. And no doubt other related topics too.

The people who will find the blog of most interest are researchers and journal editors in biology and medicine. I hope that editors working on a range of journals will add their comments so that they can learn from each other and so that researchers can learn from them. And I hope researchers will comment so that they can feed back to journals what they like and don’t like, and so that they too can learn from each other’s experience.

And of course I will offer my experience. I have been a PhD student, a postdoc, an in-house journal editor and a freelance editor, proofreader and scientific publishing consultant, and over those years I have learnt a lot about how journals work. I have worked for most of the main UK biomedical journal publishers, including Elsevier, BioMed Central, BMJ Group, Public Library of Science and Nature Publishing Group. I have edited papers in many areas of biology and medicine, and even proofread a few in physics and chemistry too. And, not least, I have learnt from people I follow on Twitter (@sharmanedit) about the current state of scientific publishing. I think I have some useful insights, and you might find some of what I say interesting.

A few formalities: I don’t speak for any journal or research institute and the opinions expressed here are my own. Many of the journals mentioned are among my clients, except for Nature, for whom I work as an employee on a casual temporary contract. The material here by me is licensed under a Creative Commons Attribution licence (CC:BY); material in comments is copyright the commenter unless they say otherwise. You are free to reproduce material by me elsewhere as long as you credit me for it (though I would appreciate being informed and ask that you also link to this blog).

I look forward to your comments!

Anna

Choosing a journal III: practicalities

In this series I am looking at various aspects of choosing a journal: so far I have covered getting your paper published quickly and getting it noticed. In this third post I look at a few practical issues that might affect your choice of journal.

Do they copyedit?

If your paper is read by lots of people, any errors in it will be noticed and will reflect badly on you. Most journals use copyeditors (freelance or employed) to edit papers after they are accepted. They ensure that papers are clearly and grammatically written and query obvious potential errors with the authors; they also ensure that there is a consistent house style. Some, but not all, journals also use proofreaders for a further quality check after the authors’ corrections have been made; others rely on the authors for this.

Notable examples of journals that do not use copyeditors for their research papers are PLoS One and BMC series journals; instead, they recommend that authors use an editing service.

You may feel that you won’t make any errors, so your paper will be the one that doesn’t need to be edited or proofread. In my long experience of editing, however, I have not once found a paper that needed no changes. Because you know what you are talking about, it is easy to miss an omitted explanation without which your methods will be incomprehensible to some readers. Everyone needs someone else to edit their writing, even professional writers.

So if you do choose a journal that doesn’t copyedit their papers, for your reputation’s sake make sure you hire an editor (perhaps me!) to check it first.

Policies on data publication and supplementary material

If you have a large dataset, what mechanisms does the journal have for publishing it? Do they encourage supplementary material?

I know of one journal, Journal of Neuroscience, that does not allow supplementary material. Another journal, GigaScience, is set up specifically to publish very large datasets. And there are many journals with policies between these two extremes.

Also, does the supplementary material get checked or copyedited? For many journals it does not. Bear this in mind when preparing it.

Costs of publication

If the journal is open access, how much does it charge authors?

Some ‘hybrid’ journals allow authors to choose whether their article is open access or not: a list of the author charges for such journals is on SHERPA/RoMEO (updated July 2011 when I viewed it). The average charge is around US$2500.

I haven’t been able to find an up-to-date table of author charges for journals that are completely open access, but a table from 2009 is at openwetware. The average charge for those journals then was about $2350, but the difference from the hybrid figure may simply reflect the different dates of the two surveys.

Some closed access journals have page charges or charge for colour printing.

If the journal doesn’t copyedit papers after acceptance, you will also need to factor in the cost of getting your paper edited.

Ease of use of online submission system

Nowadays it is unusual for a journal not to have an online submission system, but the systems in use vary a lot. Check out the experiences of other authors and find out whether the system is easy to use. Given the many other factors to consider, however, an online submission system would probably have to be really bad to make a difference to whether you submit your paper there.

Previous experience with the publisher/journal

If you have published with the journal before, or with others from the same publisher, you might be tempted to stick with what you know. In particular, if you know an editor on a journal this might make you more confident in submitting there. I would recommend investigating alternatives first, however.

Recommendations from other authors

Do you know anyone who has published with the journal or with others owned by the same publisher? Whether or not you have a personal recommendation, search online for comments (good or bad) from others who have published there. Some publishers (e.g. BioMed Central) have surveyed their authors to see how satisfied they are.

Your experience

How important are each of these factors in your choice of journal? Do you know of any journal publishers that have particularly good or bad online submission systems or supplementary material policies? Do you know which other journals copyedit papers or do not?
