I’ve written before about why the current system of peer review is failing science. Now I’d like to set out a few ideas about what I think could replace it. This is an opinionated list: it’s what I’d like to happen. However, different people and fields will want different things, and it’s vital to come up with a flexible system that allows for this diversity.

It must be ongoing, open, post-publication peer review, for the reasons set out in the previous article: pre-publication peer review systematically fails as a check on technical correctness, and it creates perverse incentives and problematic biases in the system as a whole. Making review an ongoing and open process allows us to discover problems that would be missed in a closed process with a small number of opaquely selected reviewers. It lets us concentrate more review effort on influential work that ought to be more closely scrutinised than it is at the moment. And of course, instant publication speeds up science, allowing people to start making use of new work immediately rather than waiting months or years for it to become available.

Reviews should be broken down into separate components: technical correctness, relevance for different audiences, judgements of likely importance, and so on. Reviewers should not need to contribute all of these, and readers or other evaluators should be free to weight the components in whatever way suits them (a sketch of how this might work follows below).
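To make the idea concrete, here is a minimal TypeScript sketch of what component-wise reviews and reader-chosen weightings could look like. All of the type names, components, and scales are hypothetical; this is one possible encoding, not a proposal for a standard.

```typescript
// Hypothetical review components; a real system would let fields define their own.
type ReviewComponent = "technicalCorrectness" | "relevance" | "likelyImportance";

const COMPONENTS: ReviewComponent[] = [
  "technicalCorrectness",
  "relevance",
  "likelyImportance",
];

interface Review {
  reviewer: string;
  // A reviewer scores only the components they feel qualified to judge.
  scores: Partial<Record<ReviewComponent, number>>; // e.g. on a 0–1 scale
}

// Aggregation happens on the reader's side: each reader supplies their own
// weighting, so the same reviews can be combined differently per audience.
function weightedScore(
  reviews: Review[],
  weights: Record<ReviewComponent, number>
): number {
  let total = 0;
  let weightSum = 0;
  for (const review of reviews) {
    for (const component of COMPONENTS) {
      const score = review.scores[component];
      if (score === undefined) continue; // reviewer skipped this component
      total += weights[component] * score;
      weightSum += weights[component];
    }
  }
  return weightSum > 0 ? total / weightSum : 0;
}

// Example: a methods-focused reader weights technical correctness heavily.
const score = weightedScore(
  [
    { reviewer: "A", scores: { technicalCorrectness: 0.9, relevance: 0.4 } },
    { reviewer: "B", scores: { likelyImportance: 0.7 } },
  ],
  { technicalCorrectness: 3, relevance: 1, likelyImportance: 1 }
);
```

The point of this design is that no single editorial judgement is baked in: the same set of reviews can produce different rankings for a methods-focused reader and an application-focused one.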

We need a better user interface and experience for navigating and contributing reviews. Every time you look at a paper, whether just the abstract or the full text, you should be presented with an up-to-date set of indicators like 3✅ 2❌ 1⚠, meaning three positive reviews, two reviews with unresolved major issues, and one with minor issues. Clicking on these would pop up the relevant reviews and let the reader quickly drill down to more detailed information. Similarly, contributing reviews or commentary should be frictionless: while reading a paper you should be able to highlight text and add a review or comment with virtually no effort. A huge amount of evaluation of papers is already done by individual readers and journal clubs, but all that work is lost because there’s no easy way to contribute it. Realising all this requires shifting away from the PDF to a more dynamic format, and abandoning the outmoded notion of the version of record.
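As a rough illustration, the indicator badge could be derived from review statuses along these lines. The status names are invented for this sketch and are not the format of any existing platform.

```typescript
// Hypothetical review statuses behind the badge shown next to a paper's title.
type ReviewStatus = "positive" | "majorIssues" | "minorIssues";

// Collapse a paper's reviews into the compact indicator string,
// e.g. three positive, two major-issue, one minor-issue review => "3✅ 2❌ 1⚠".
function summarise(statuses: ReviewStatus[]): string {
  const counts = { positive: 0, majorIssues: 0, minorIssues: 0 };
  for (const s of statuses) counts[s] += 1;
  return `${counts.positive}✅ ${counts.majorIssues}❌ ${counts.minorIssues}⚠`;
}

// summarise(["positive", "positive", "positive",
//            "majorIssues", "majorIssues", "minorIssues"])
// => "3✅ 2❌ 1⚠"
```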

There needs to be a way to integrate all the different sources of feedback, so that you don’t have to visit a bunch of different websites to find out what people think about a paper; instead, the feedback should appear automatically when you open it. That will require standardised ways for the different organisations collecting this sort of feedback to share information with each other.
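Here is a minimal sketch of what such a standard might look like: each organisation exposes its feedback in a shared shape, and the reader’s client merges the feeds. Every field name and URL convention below is invented for illustration (and it assumes a runtime with a global fetch, such as a browser or Node 18+); the real work is in getting the organisations to agree on something like it.

```typescript
// A hypothetical standardised interchange format for paper feedback.
interface FeedbackItem {
  paperDoi: string; // which paper the feedback is about
  source: string;   // e.g. "elife", "hypothesis", "journal-club-x"
  kind: "review" | "annotation" | "endorsement";
  body: string;
  date: string;     // ISO 8601 timestamp
}

// Each participating service exposes its feedback in the shared format;
// the reader's client merges the feeds so everything about a paper
// appears in one place the moment it is opened.
async function collectFeedback(
  doi: string,
  endpoints: string[]
): Promise<FeedbackItem[]> {
  const feeds = await Promise.all(
    endpoints.map(async (url) => {
      const response = await fetch(`${url}?doi=${encodeURIComponent(doi)}`);
      return (await response.json()) as FeedbackItem[];
    })
  );
  // Flatten the per-service feeds and show the newest feedback first.
  return feeds.flat().sort((a, b) => b.date.localeCompare(a.date));
}
```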

None of these are particularly new ideas. The new model of eLife is built around structured post-publication reviews with a standardised vocabulary, and their nascent sciety platform is an attempt to integrate all the different sources of feedback. hypothes.is is a great first step towards improving the interface for review (although their recent move to for-profit status is worrying). The key will be to find a way to put all of these together and make the result frictionless and pleasant to engage with. That will require a revolutionary change, because making these things work together means everything has to be open and free, and legacy publishers will fight that.

Finally, with Neuromatch Open Publishing, we are working towards building a non-profit infrastructure that will enable us to collectively run all these experiments and more.