I was asked on Mastodon what I think of scientific editors; this article is my reply.

I think they’re almost universally very public-spirited people with a strong sense of duty, willing to do a thankless task to make science better.

With that said - and this is the bit that will get me in trouble - I think they’re wrong that this work makes science better.

I see two possible roles for an academic editor. For both of them, the journal structure with pre-publication peer review is the wrong way to achieve those ends, and it leads to systematic biases that distort science.

The first role of an editor is to find and, ideally, fix errors. Scientists all know that in practice this doesn’t work, and the evidence bears that out: most errors are not picked up by pre-publication peer review, and ongoing post-publication peer review does a much better job. We should bite the bullet and switch to that immediately.

The second role is to curate good science, and I want to divide it into two parts. The first part is picking work that would be of interest to a particular community. This is great, but it doesn’t have to be - and shouldn’t be - tied to publication. I love it when individuals or groups put together curated weekly or monthly reading lists of papers and preprints, for example.

The second part of the role is the problematic one: selecting work for publication based on predictions of its likely impact. I think this is an impossible task. Or rather, it’s impossible to predict what will have meaningful impact. It’s probably rather easy to predict what will get highly cited; I’d guess a fairly simple machine learning model, using only word frequencies in the abstract, could do as well as or better than most of us. But predicting what will have meaningful, lasting impact on a field is - to me - obviously impossible. And pretending it’s possible leads to bias.

Here’s why. If your judgements can be factorised as signal + bias + noise, and there is no reliable signal, then your judgements are either random, if noise dominates (the best case), or biased, if it doesn’t (the worst case). Consistent decisions just mean the noise term is small - and with no signal, what remains is bias. So consistency is almost certainly an indication of bias, not of picking up on signal.
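To make that concrete, here is a minimal simulation (the names and numbers are mine, purely illustrative): two hypothetical editors rate the same set of papers, where each rating is a shared bias plus private noise and carries no signal about true impact. The editors agree with each other strongly, yet neither rating predicts impact at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n_papers = 10_000

# True (unobservable) lasting impact of each paper.
impact = rng.normal(size=n_papers)

# A shared bias, e.g. a preference for fashionable topics or famous labs.
# Crucially, it is independent of true impact.
bias = rng.normal(size=n_papers)

# Each editor's judgement = 0 * signal + shared bias + private noise.
editor_a = bias + 0.3 * rng.normal(size=n_papers)
editor_b = bias + 0.3 * rng.normal(size=n_papers)

print(np.corrcoef(editor_a, editor_b)[0, 1])  # ~0.9: the editors "agree"
print(np.corrcoef(editor_a, impact)[0, 1])    # ~0.0: agreement is not signal
```

The high inter-editor correlation here comes entirely from the shared bias term; agreement between judgements tells you nothing about whether those judgements track anything real.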

So, to get back to the question: I think editors are trying to do the right thing, but inadvertently they are reinforcing structural biases that are present throughout science.

And if you find that sort of thing interesting, check out the other articles on this blog.