I profoundly disagree with the current system of academic publishing, and so I have decided that I will no longer give any voluntary labour to support it. I believe it no longer serves science well for us to maintain this unhealthy system. Instead, I will spend that time building alternatives.

Originally, the purpose of a journal was the simple dissemination of knowledge. The internet has made this redundant. Only relatively recently did journals acquire the secondary purposes of organising peer review and editorial selection. Peer review was not common until the mid-20th century, with Nature only starting to use it systematically in 1973 and The Lancet in 1976. It has several functions: it can evaluate the technical correctness of a study, give feedback that helps authors communicate their work better, and give an opinion on the significance of work to help editors make publishing decisions. Normally when we think about peer review we consider all these functions and their value to science bundled together, but they are separate, and I believe each can be done better in a different way.

Evaluating the technical correctness of work has clear value and an important role in science; however, peer review as currently managed is not reliable enough. Firstly, having typically only two reviewers means there are relatively few opportunities to catch errors, so it is unsurprising that many errors are found in published papers. Errors found after publication are rarely corrected. Secondly, the quality of peer reviews is very uneven: some reviewers give careful, detailed analysis that can transform a paper, while others give a quick opinion based on a skim read. The latter is not a moral failing of reviewers but an unavoidable consequence of the excessive demands on scientists’ time, and of the fact that time spent on peer review is rarely, if ever, counted or valued by the people making decisions about funding, hiring and promotion. We are expected to do it, but not rewarded for doing it well. Thirdly, the process by which reviewers are selected is not transparent and cannot guarantee that appropriate reviewers are chosen. Indeed, it seems unlikely that we are finding the best reviewers for a paper, given how difficult it can be for editors to find enough reviewers at all. In practice, then, peer review as currently constituted fails in its role of giving confidence in the technical correctness of published work.

We need to move to a system where reviews are given on a rolling basis to work that is published immediately on submission (post-publication peer review). This will increase the chance that errors are found, because more eyes will be on the paper, including those of people who are more invested in the results. Some papers cannot be adequately reviewed by just two reviewers because they use a broad range of techniques; post-publication peer review addresses this by hugely widening the pool of potential reviewers. Papers that are very widely used and cited should be subject to much more stringent review, because the consequences of an error are much graver, and post-publication peer review makes this happen organically.

The second function of peer review is giving feedback to authors to improve their work or how it’s communicated. This is laudable, but I see no reason why this should be a required step for publication rather than an optional service available to authors. Making a response to reviewers’ comments non-optional (unavoidable when the feedback role is integrated with the selection role of peer review) sometimes improves a paper and sometimes makes it worse. It should be the authors’ choice how to write their paper.

The third function is giving an opinion on significance. The potential value of this to science is that the journal in which a paper is published provides a signal to scientists about its likely importance. This comes with a risk of bias, because those decisions are taken by a small group of mostly senior scientists who cannot be representative of the community as a whole. The bias is then compounded by the fact that future career success depends on journal track record. Despite these issues, curation of a selection of papers by a small group of field experts can provide some valuable information, but this information should be provided separately from publication and non-exclusively. We should have a variety of ways in which papers are recommended, including group curation, individual curation, social-network-driven (“likes”), and purely algorithmic (topic modelling); scientists should use whatever works for them. Singling out one such mechanism as more important than the others hugely amplifies its significance and sends a distorted signal, both within the community and outside it, that a selected paper is objectively good and important.
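
To make the purely algorithmic option concrete, here is a toy sketch, not any real service’s method: it ranks papers by TF-IDF similarity of their abstracts, a crude stand-in for a proper topic model such as LDA. The paper identifiers and texts are invented for illustration.

```python
# Toy sketch of purely algorithmic paper recommendation: rank papers
# by textual similarity of their abstracts. A real system would use a
# topic model or learned embeddings; plain TF-IDF stands in here.
import math
import re
from collections import Counter

# Hypothetical abstracts; any corpus of paper texts would do.
papers = {
    "paper_a": "spiking neural network simulation of cortical dynamics",
    "paper_b": "clustering algorithms for spike sorting of neural data",
    "paper_c": "economic analysis of open access publishing costs",
}

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

# Document frequency of each term across the corpus.
df = Counter()
for text in papers.values():
    df.update(set(tokens(text)))

def tfidf(text):
    """TF-IDF vector for one document, as a dict term -> weight."""
    tf = Counter(tokens(text))
    n = len(papers)
    # Terms appearing in every document carry no signal (idf = 0).
    return {t: c * math.log(n / df[t]) for t, c in tf.items() if df[t] < n}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * \
           math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

def recommend(query_id, k=2):
    """Papers most similar to the one a reader just finished."""
    q = tfidf(papers[query_id])
    scores = {p: cosine(q, tfidf(t)) for p, t in papers.items() if p != query_id}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("paper_a"))  # paper_b ranks first via shared "neural" vocabulary
```

A real recommender would work over full texts or embeddings, and the point of the paragraph above is that this signal should sit alongside the social and curated ones, not replace them.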

Integrating all these functions into a single system of peer review and journal publishing rather than keeping them separate introduces additional problems. Since evaluation of technical correctness is considered together with opinions on significance that determine future career success, authors are highly incentivised to write their papers in a less transparent way that makes it harder to find errors, and to overstate the significance of their findings. This leads to a situation where the most prestigious journals with the highest competition also have the lowest reliability and highest rates of retraction.

The current system is incredibly wasteful in terms of time, effort and money. Competition for inclusion in journals means that papers often go through multiple rounds of peer review, being rejected by a series of journals after many hours of work by authors, editors and reviewers. This huge effort contributes to a culture of overwork in science that excludes people with caring duties and damages mental health; many scientists, for example, do their reviewing and editorial work in the evenings and at weekends.

Inefficient publisher processes waste huge amounts of time in submission, formatting and reformatting of papers, and publisher monopolies mean they have little reason to improve these antiquated systems. Pre-publication peer review delays dissemination by months or even years, slowing the rate of scientific progress. The financial costs can be eye-watering: thousands to tens of thousands of dollars per paper, much of it coming from tight science budgets and going straight to the huge profit margins of scientific publishers (some of the highest in the world; in 2010 Elsevier posted a 36% profit margin, higher than Google, Apple or Amazon). Exclusive publishing and copyright transfer mean that the results of (often publicly funded) work are not freely available to view or re-use, leading to slower progress and time wasted duplicating work.

Journals were historically important in disseminating work and in organising peer review, feedback and curation. These are important functions, and the hard work we put into them is not wasted, but it is spent inefficiently. We do not need journals as they exist now. With preprint servers, publication and dissemination are a solved problem. There are already multiple solutions for post-publication peer review and paper recommendation, and many active projects exploring alternatives. We need to find a way to keep the good things about the current system while getting rid of the harmful aspects.

I am, therefore, resigning from all my editorial roles. I will no longer review for any profit-driven journal. I will no longer write pre-publication reviews for any journal, but will happily provide feedback to authors or post-publication reviews for technical correctness in cases where this is necessary. I am particularly sad to leave eLife, a journal that is not only publishing some of the most interesting science, but that is also doing a huge amount to move us forwards. However, this role still required me to make editorial judgements that I do not believe we should be making.

With regret, I will continue to submit some papers to legacy journals. For the moment, this is a necessity if I wish to continue in research and for my trainees’ careers. I hope to change that, but it won’t happen overnight. Some will say that it is hypocritical to refuse to review others’ work but expect them to review mine. I respect and understand this point of view but I do not agree. Firstly, I encourage others not to review my work for these journals, or indeed anyone else’s. Please join me in refusing to do this! Forcing a crisis will be painful, but it’s how we change this broken system. Secondly, I’m not doing this because I only want to take from the community and give nothing back. I’ve spent the majority of my career building freely accessible tools to help other scientists (Brian, KlustaKwik, Neuromatch), and will continue to do so. I simply choose to give back in a different way.

Reviewing and editorial work is sometimes considered part of academic “service”, but I have come to believe that we do not serve the scientific community well by maintaining institutions that hold us back from moving to a better system; we serve it better by opposing them. I want to be clear that it is the institutions (and particularly the profit-driven ones) that I oppose, not the majority of people working hard within them. I did not come to this decision easily, and I make no judgement on anyone who chooses to continue working within the current system. For myself, I believe I can be of greater service to the scientific community by building a viable alternative to the current system. I hope that you will join me.