Turning to AI to help with peer review

A handful of academic publishers are piloting AI tools for tasks ranging from selecting reviewers to checking statistics and summarizing a paper’s findings.

In June, Aries Systems, a peer-review management system owned by Amsterdam-based publishing giant Elsevier, adopted StatReviewer, software that checks that the statistics and methods in manuscripts are sound.

And ScholarOne, a peer-review platform used by many journals, is teaming up with UNSILO of Aarhus, Denmark, which uses natural language processing and machine learning to analyse manuscripts. UNSILO automatically pulls out key concepts to summarize what the paper is about.

Crucially, in all cases, the job of ruling on what to do with a manuscript remains with the editor.

“It doesn’t replace editorial judgement but, by God, it makes it easier,” says David Worlock, a UK-based publishing consultant who saw the UNSILO demonstration at the Frankfurt Book Fair in Germany last month.

UNSILO uses semantic analysis of the manuscript text to extract what it identifies as the main statements. This gives a better overview of a paper than the keywords typically submitted by authors, says Neil Christensen, sales director at UNSILO. “We find the important phrases in what they have actually written,” he says, “instead of just taking what they’ve come up with five minutes before submission.”

UNSILO then identifies which of these key phrases are most likely to be claims or findings, giving editors an at-a-glance summary of a study’s results. It also highlights whether the claims resemble those in previously published papers, which could be used to detect plagiarism or simply to place the manuscript in the context of related work in the wider literature.
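The workflow described above, pulling out claim-like statements and then comparing them against the published record, can be sketched in miniature. The cue phrases, bag-of-words vectors and cosine scoring below are illustrative assumptions for the sketch, not UNSILO’s actual method:

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Toy stand-in for semantic analysis: bag-of-words term frequencies."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical cue phrases that often signal a claim or finding.
CLAIM_CUES = ("we show", "we find", "results indicate", "demonstrates")

def extract_claims(manuscript):
    """Flag sentences that contain a claim-like cue phrase."""
    sentences = re.split(r"(?<=[.!?])\s+", manuscript)
    return [s for s in sentences
            if any(cue in s.lower() for cue in CLAIM_CUES)]

def rank_against_literature(claims, published):
    """For each claim, find the most similar previously published text."""
    results = []
    for claim in claims:
        v = vectorize(claim)
        best = max(published, key=lambda p: cosine(v, vectorize(p)))
        results.append((claim, best, cosine(v, vectorize(best))))
    return results
```

The point of the design is the one Christensen makes: the tool only surfaces claim sentences alongside their nearest published neighbours and a similarity score; deciding whether an overlap is plagiarism or simply related work stays with the editor.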

“The tool’s not making a decision,” says Christensen. “It’s just saying: ‘Here are some things that stand out when comparing this manuscript with everything that’s been published before. You be the judge.’”