Science is stuck in a vicious cycle that is hard to escape.

The decision to publish a scientific paper is based on an evaluation of its likely importance and technical correctness. Scientists are then evaluated on these publication decisions, and resources (jobs, grants and promotions) are distributed accordingly.

The current system distorts scientific priorities.

Science is incredibly competitive, resources are allocated on a short-term basis, and the primary metric used to evaluate scientists is their publication record. As a consequence, there is an unavoidable pressure to select problems and design studies likely to yield results that will be favourably evaluated and published in the short term. This often runs counter to long-term scientific value, a fact that appears to be widely acknowledged by working scientists.

The current system is a vicious cycle and a stable equilibrium.

In principle, we could choose to evaluate scientists and their work in a better way. However, no individual or small group can do this alone. If an institution chooses to hire scientists whose work it believes will be of enduring scientific value, despite being unlikely to win short-term grant funding, it will take a huge financial hit. Public research is under such severe resource constraints that this is simply not feasible for most institutions, even if they wished to do so. Similarly, a public funding body that makes decisions based on long-term scientific value rather than short-term publishability will likely have fewer high-profile papers to count in its output, and will appear to be underperforming compared to other funding bodies when reviewed at the government level. Individual scientists have even less flexibility than these institutions.

Journal prestige cements this problem.

It is the widespread availability of an easily calculated metric based on journal prestige that makes this cycle so hard to break. If there were no such metric, different groups could try different approaches, and the effect would not be so obvious in the short term. The availability of the metric forces all institutions to follow the same strategy and makes it hard for any of them to deviate.

The majority of the big publishers' commercial value rests on journal prestige.

If there were no funding implications to publishing in one journal rather than another, scientists would be free to choose based on price or features. Widely available alternatives already offer better features at virtually no cost. Consequently, without the journal prestige signal, the entire business model of these publishers would collapse.

Big publishers therefore cannot be part of the needed reforms.

The success of these reforms would untie the evaluation of scientific work from the journal it is published in, and this would destroy the business model of these publishers. They will therefore do everything in their power to resist such reform.

Breaking away from the big publishers will not be enough.

Journal prestige is the cement of the current negative stable equilibrium, but eliminating it alone will not guarantee a globally better system. We need systems for publishing and evaluating science that are diverse and under the control of researchers. This is what we intend to build with Neuromatch Open Publishing.