Survey reveals split views on AI in academic peer review
Physicists are divided as AI enters peer review: supporters praise the efficiency it adds to slow processes, while critics fear it weakens expert judgement.
Growing use of generative AI within peer review is creating a sharp divide among physicists, according to a new survey by the Institute of Physics Publishing.
Researchers appear more informed and more willing to express firm views than in earlier surveys, with a notable rise in those who see a positive effect and a large group voicing strong reservations. Many believe AI tools accelerate early reading and help reviewers concentrate on novelty rather than routine work.
Others fear that reviewers might replace careful evaluation with automated text generation, undermining the value of expert judgement.
A sizeable proportion of researchers would be unhappy if AI shaped assessments of their own papers, even though many quietly rely on such tools when reviewing for journals. Publishers are now revisiting their policies, while aiming to respect authors who expect human-led scrutiny.
Editors also report that AI-generated reports often lack depth and fail to reflect domain expertise. Concerns extend to confidentiality, with organisations such as the American Physical Society warning that uploading manuscripts to chatbots can breach author trust.
Legal disputes about training data add further uncertainty, pushing publishers to approach policy changes with caution.
Despite these disagreements, many researchers accept that AI will remain part of peer review as workloads increase and scientific output grows. The debate now centres on how to integrate new tools in a way that supports researchers rather than weakening the foundations of scholarly communication.