Publication Overload and the Case for AI-Assisted Scholarly Review

Matt Martin

Essay / viewpoint article

Prepared as a revised manuscript-style web page with a restrained academic tone.

Abstract

The volume of scientific publishing has increased to the point that purely manual review and editorial workflows are under growing strain. Delays in review, difficulty identifying appropriate reviewers, and inconsistent evaluation across journals are widely discussed features of the contemporary publishing environment. At the same time, the use of artificial intelligence in scholarly publishing remains contentious, particularly where it appears to blur the boundary between assistance and judgment.

This article argues for a narrower and more defensible role for AI in academic publishing: not as an author, not as a substitute reviewer, and not as an autonomous decision-maker, but as an organisational layer that may help human editors and reviewers navigate large volumes of material more efficiently. Under this view, AI is most useful when it supports triage, clustering, manuscript screening, reviewer matching, synthesis of reviewer comments, and other information-management tasks, while responsibility for interpretation, critique, and editorial decisions remains with people.

To formalise this distinction, the article outlines an augmented review model in which AI tools operate only under explicit human supervision. A proposed “shadow review” approach is also described as a way of exploring such systems without allowing them to determine editorial outcomes. The purpose of this article is not to claim that AI resolves the difficulties of peer review, but to offer a conceptual framework for thinking about how human scholarly judgment may be supported under conditions of publication overload.

1. Introduction

Scholarly publishing now operates at a scale that places increasing pressure on traditional editorial and peer review systems. Across many disciplines, editors, reviewers, and authors are working within an environment defined by high submission volume, uneven review timelines, and growing difficulty sustaining careful manuscript assessment at scale. These pressures are widely recognised, even if their magnitude varies by journal and field.

This article begins from a simple observation: the problem may no longer be understood only as one of quality control, but also as one of information management. Human review remains central to scientific judgment, but the volume of material to be screened, sorted, matched, checked, and synthesised increasingly exceeds what can be handled comfortably through manual processes alone. In that setting, artificial intelligence becomes relevant not because it can replace scholarly judgment, but because it may help organise the conditions under which that judgment is exercised.

That distinction matters. Discussion of AI in publishing often collapses several very different roles into one category. AI may be used to generate text, to evaluate text, or to organise text for later human evaluation. These are not equivalent. The present article is concerned only with the third role. It does not argue that AI should replace reviewers, assume editorial responsibility, or determine what counts as valid science. It instead considers whether AI may function as an organisational aid within scholarly publishing while leaving judgment, accountability, and final decisions to human editors and reviewers.

Seen this way, the most plausible use of AI in review is comparatively modest. AI systems may help identify administrative non-compliance, cluster manuscripts by topic or method, support reviewer matching, summarise areas of agreement and disagreement in reviewer reports, and surface patterns that deserve closer human inspection. These uses do not eliminate the need for expertise. They are intended to conserve it.

The purpose of this article is therefore not to announce a solved system for peer review, nor to claim that AI can independently improve research quality. It is to formalise a way of thinking about AI-assisted scholarly review under conditions of publication overload: AI as an organising instrument, humans as the responsible evaluators, and governance as the condition of legitimate use.

2. Publication overload as an organisational problem

Debate about peer review often focuses on fairness, rigour, delay, or inconsistency. All of those are real concerns. Yet one of the most basic pressures in the modern publication environment is simply volume. More manuscripts are submitted, more journals compete for reviewer attention, and more specialised domains require increasingly narrow expertise. The result is a workflow burden distributed across editors and reviewers who must sort, interpret, and prioritise more material than previous systems were designed to absorb.

In that setting, overload is not merely an inconvenience. It changes the conditions under which judgment occurs. Editors are forced to triage under time pressure. Reviewers are harder to secure. Reports vary in depth and comparability. Administrative screening, plagiarism checks, methodological assessment, and reviewer matching all consume time before substantive scientific evaluation has even begun. The publishing system therefore faces a compounded problem: not only how to assess quality, but how to manage the increasing amount of material that must pass through the assessment process.

This is why publication overload should be understood partly as an organisational problem. A significant portion of editorial labour is spent not on making final judgments, but on preparing the conditions under which final judgments become possible. Manuscripts must be routed, classified, checked for completeness, paired with suitable expertise, and compared against existing submissions and standards. These are tasks that remain consequential, but they are not identical to scientific reasoning itself.

Once that distinction is made, a narrower role for AI becomes easier to define. If the problem includes excessive information-management burden, then AI may be relevant as an information-management tool. That does not mean the deeper epistemic and ethical functions of peer review can be outsourced. It means that some of the surrounding workflow may be structured more efficiently if computational tools are used to organise what humans later inspect and decide upon.

3. Distinguishing organisation from judgment

The central conceptual claim of this article is that AI should be discussed in scholarly publishing according to function, not as a single undifferentiated category. At least three roles must be kept separate: AI generating research-related content, AI evaluating research-related content, and AI organising research-related content for human assessment. Conflating these roles produces confusion and weakens debate.

The strongest objections to AI in publishing usually arise when AI is imagined as replacing expert reasoning or exercising judgment without accountability. Those objections are often justified. Scientific review is not a simple pattern-recognition exercise. It involves contextual interpretation, methodological understanding, disciplinary nuance, ethical awareness, and the ability to recognise what falls outside existing expectations. These are not functions that can be reduced neatly to automated scoring.

By contrast, AI used as an organisational layer occupies a narrower and more defensible position. It may help sort manuscripts into topic clusters, flag missing reporting components, match papers to reviewer pools, summarise overlapping reviewer comments, identify duplicate submissions, or assist editors in tracking where disagreements among reviewers actually lie. In each of these uses, the system influences workflow, but responsibility for evaluation remains human.
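One of these organisational functions, reviewer matching, can be made concrete with a minimal sketch. The example below assumes each manuscript and each reviewer can be described by a set of topic keywords and ranks reviewers by simple set overlap; all names, keywords, and the function interface are illustrative, not any editorial system's actual API. The point is that the output is a ranked suggestion list for an editor, not a selection.

```python
# Minimal sketch of expertise-based reviewer matching. Assumes manuscripts
# and reviewers are described by keyword sets; all names are illustrative.

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two keyword sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_reviewers(manuscript_keywords, reviewers, top_n=3):
    """Rank reviewers by keyword overlap; a human editor makes the choice."""
    scored = [(name, jaccard(set(manuscript_keywords), set(keywords)))
              for name, keywords in reviewers.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_n]

reviewers = {
    "Reviewer A": {"peer review", "bibliometrics", "research policy"},
    "Reviewer B": {"machine learning", "natural language processing"},
    "Reviewer C": {"peer review", "machine learning", "meta-science"},
}
ranked = suggest_reviewers({"peer review", "machine learning"}, reviewers)
# ranked is an ordered list of (name, score) pairs for the editor to inspect
```

A real system would use richer signals than keyword overlap (publication history, conflicts of interest, workload), but the structural point is the same: the tool proposes an ordering, and the decision remains with the editor.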

This is the distinction that future debate will likely need to preserve. AI can shape the order in which information appears, the categories into which material is placed, and the way complexity is summarised for human readers. That is already significant. But it is still different from assigning AI the authority to determine validity, significance, or publishability.

AI may be useful in scholarly review not because it can replace scientific judgment, but because it may help organise the conditions under which scientific judgment is exercised.

4. A human-supervised model of AI-assisted review

Once AI is framed as an organisational aid rather than a substitute reviewer, a more modest workflow model becomes possible. In such a model, AI tools would sit around the editorial process rather than above it. Their purpose would be to reduce friction in the movement of information, not to declare outcomes.

A human-supervised review model might include several bounded functions. Manuscripts could first be screened for basic completeness, formatting, and adherence to submission requirements. They could then be clustered according to topic, study design, methods, or clinical domain in order to aid routing and reviewer selection. Reviewer suggestions might be generated from structured expertise matching rather than informal editorial memory alone. Reviewer reports, once received, might be synthesised into comparable summaries that help editors see where concerns converge and where they conflict.
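The first of these bounded functions, completeness screening, is deliberately the least ambitious, and a sketch makes its limits visible. The required fields below are assumptions invented for illustration, not any journal's actual checklist; the function only flags gaps for a human editor and rejects nothing on its own.

```python
# Illustrative completeness screening: flag administrative gaps for a
# human editor rather than rejecting anything automatically. The required
# fields are assumptions, not any journal's actual submission checklist.

REQUIRED_FIELDS = ["title", "abstract", "ethics_statement", "data_availability"]

def screen(manuscript: dict) -> list[str]:
    """Return the missing or empty items; an empty list means nothing flagged."""
    return [field for field in REQUIRED_FIELDS
            if not manuscript.get(field, "").strip()]

submission = {"title": "Example title", "abstract": "Example abstract",
              "ethics_statement": ""}
flags = screen(submission)
# flags lists 'ethics_statement' and 'data_availability' for editorial follow-up
```

Nothing in this sketch evaluates the science; it only surfaces administrative omissions that would otherwise consume editorial time later in the process.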

None of these tasks removes the need for editors or expert reviewers. On the contrary, the rationale for such a model is to preserve scarce human attention for the parts of the process that most require it. If routine organisation can be made more efficient, then human expertise can be spent on interpretation, critique, contextualisation, and decision-making rather than being consumed by avoidable workflow drag.

This is also the context in which a phrase such as "AI Board Chair" can be understood more soberly. In this article, the term is used only as shorthand for a supervised synthesis layer that organises inputs for editors. It is not meant to imply independent authority, machine adjudication, or the replacement of editorial responsibility. The metaphor is useful only if it remains clearly bounded.

5. The case for shadow review

If AI is to be introduced into scholarly workflows at all, it should not first appear in a decision-making role. A more careful approach would be shadow review: a parallel process in which AI-assisted organisational outputs are generated alongside conventional editorial workflows without influencing live publication outcomes until they are properly assessed.

The value of shadow review is methodological as much as practical. It allows journals to observe whether AI-supported triage, reviewer matching, or synthesis functions are actually helpful before they are trusted. It also makes it easier to identify where errors arise, what kinds of manuscripts are mishandled, how different fields respond, and whether supposedly helpful summaries distort rather than clarify the issues under discussion.
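In operational terms, shadow review is largely bookkeeping: the tool's suggestions are logged alongside the decisions editors actually made, and the two are compared afterwards. The sketch below assumes a simple routing task; the field names, desk labels, and manuscript identifiers are invented for illustration.

```python
# Sketch of shadow-review bookkeeping: AI-assisted routing suggestions are
# recorded next to the routes editors actually chose, without influencing
# them, and agreement is measured afterwards. All values are illustrative.

shadow_log = [
    {"ms": "MS-101", "ai_route": "methods desk",  "editor_route": "methods desk"},
    {"ms": "MS-102", "ai_route": "clinical desk", "editor_route": "methods desk"},
    {"ms": "MS-103", "ai_route": "clinical desk", "editor_route": "clinical desk"},
]

def agreement_rate(log: list[dict]) -> float:
    """Fraction of manuscripts where the shadow suggestion matched practice."""
    matches = sum(1 for entry in log
                  if entry["ai_route"] == entry["editor_route"])
    return matches / len(log)

rate = agreement_rate(shadow_log)
disagreements = [entry["ms"] for entry in shadow_log
                 if entry["ai_route"] != entry["editor_route"]]
# the disagreements, not the agreement rate, are where human scrutiny belongs
```

The design choice worth noting is that the disagreement list is the primary output: a high agreement rate says little on its own, whereas the mismatched cases show concretely where and how the tool fails.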

Shadow review also has a governance advantage. It keeps experimental tools visible and bounded. Editors can compare machine-supported workflow outputs with ordinary editorial practice, rather than quietly absorbing automated recommendations into a black box. If AI is to be used responsibly in review, it should first be made legible.

For an essay such as this one, the significance of shadow review is not that it proves a system works. It is that it provides a plausible way of thinking about responsible experimentation. The introduction of AI into publishing should not begin with trust. It should begin with observation.

6. Governance, responsibility, and the limits of assistance

Even a narrow organisational role for AI raises governance questions that cannot be treated as afterthoughts. A system that influences what editors see first, what is flagged for attention, how reviewer disagreement is summarised, or which manuscripts are grouped together still shapes outcomes indirectly. Assistance is not neutral simply because it stops short of the final decision.

For that reason, a human-supervised model requires clear boundaries. Editors must remain accountable for decisions. Reviewers must remain accountable for the substance of their critique. Authors must be able to understand the general conditions under which their manuscripts are handled. Journals must know what tools are used, where their outputs enter the workflow, how errors are detected, and how staff may override or disregard automated suggestions.

Transparency is therefore not a cosmetic virtue in this context. It is part of the legitimacy of the system. If an AI tool is introduced as an organisational layer, then its role should be explicit, bounded, and open to audit. The more deeply it shapes workflow, the stronger the requirement for oversight becomes.

There are also confidentiality and data stewardship issues that deserve caution. Manuscripts under review are not generic datasets. They are part of a protected editorial process and often contain unpublished arguments, analyses, or findings. Any proposal for AI use in review must therefore be attentive not only to efficiency, but also to the conditions under which manuscript material is handled, processed, and stored.

7. Risks of formalising old bias in new systems

It is tempting to assume that organisational tools are safer than evaluative tools because they do not openly pronounce judgment. That assumption should be resisted. A system can still encode bias through the categories it uses, the benchmarks it prefers, the language patterns it rewards, or the cases it prioritises for attention.

This problem becomes acute when computational systems are trained on historical publishing outcomes and then presented as aids to future review. What looks like a neutral model of scholarly quality may instead reproduce prestige hierarchies, citation-driven visibility, disciplinary conservatism, or English-language advantage. In that case, AI would not be relieving the strain on the current system. It would be stabilising its inherited preferences.

For a human-supervised organisational model, this means that the design problem is not simply whether the system is accurate in a narrow technical sense. It is whether the system quietly channels attention in ways that privilege familiar institutions, established forms of argument, or dominant methodological styles. A tool that helps manage overload but narrows intellectual range may still damage the review process.

Any serious future development in this area would therefore need to examine not only performance, but also whose work is surfaced, whose work is deprioritised, and how those patterns differ across field, geography, language, methodology, and career stage.
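Such an examination can begin with very simple arithmetic: comparing how often a triage tool surfaces manuscripts from different groups. The sketch below uses invented groups and counts purely for illustration; a real audit would draw on the journal's own records and a more careful definition of the groups being compared.

```python
# Minimal audit sketch: per-group rates at which a triage tool marked
# manuscripts as priority. Groups and outcomes here are invented data.

def surfacing_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Per-group fraction of manuscripts the tool flagged as priority."""
    totals: dict[str, int] = {}
    flagged: dict[str, int] = {}
    for group, was_surfaced in records:
        totals[group] = totals.get(group, 0) + 1
        flagged[group] = flagged.get(group, 0) + (1 if was_surfaced else 0)
    return {group: flagged[group] / totals[group] for group in totals}

records = [("region A", True), ("region A", True), ("region A", False),
           ("region B", True), ("region B", False), ("region B", False)]
rates = surfacing_rates(records)
# a gap between groups is a prompt for human investigation, not proof of bias
```

As the final comment notes, a rate gap on its own establishes nothing; its role is to direct human attention to patterns that deserve explanation.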

8. Why this matters now

The reason to formalise these distinctions now is not that AI has already solved the publishing problem. It clearly has not. The reason is that publication overload is unlikely to recede, while pressure to adopt computational tools will continue. In such an environment, failure to think clearly about the role of AI increases the risk that poorly bounded systems will be adopted by convenience rather than by design.

A useful public discussion of AI in scholarly publishing therefore needs more precision than broad statements of enthusiasm or fear. The relevant question is not whether AI is “good” or “bad” for peer review in the abstract. It is which functions are legitimate, which are unsafe, and which should remain non-delegable because they are inseparable from human scholarly responsibility.

If that conversation is not structured, then the distinction between organisation and judgment will continue to blur. Once blurred, it becomes harder to challenge weak implementations because the language used to describe them remains vague. Formalising the organisational role of AI is therefore also a way of defending the parts of scholarly review that should remain human.

9. Discussion

The central claim of this article is deliberately limited. It is not that peer review should be turned over to AI, nor that editorial judgment can be reduced to a machine-readable procedure. It is that the scale of modern publication creates a growing organisational problem, and that AI may have a legitimate role in helping humans manage that problem.

This framing is important because many objections to AI in publishing are strongest when AI is imagined as replacing scholarly reasoning. Those objections are often justified. Questions of accountability, confidentiality, bias, and overreliance become much harder when AI systems are given authority they cannot properly bear. By contrast, a more constrained role for AI, focused on sorting, screening, summarising, and routing information, may be easier to justify both practically and ethically.

Even under that narrower model, caution remains necessary. Organisational tools are not neutral simply because they stop short of making final decisions. Any system that shapes what editors and reviewers see first, what gets flagged, what gets grouped together, or what is presented as a priority can still influence outcomes. For that reason, the use of AI in review should be understood as a governance question as much as a technical one.

The value of a human-supervised approach is therefore not that it removes risk, but that it keeps responsibility located where it belongs. Human editors remain accountable for judgment. Reviewers remain accountable for critique. AI, in this model, is not a scientific arbiter. It is an instrument for organising complexity.

If publication overload is one of the defining structural problems of contemporary research communication, then it is reasonable to ask what kinds of tools may help address it. The argument advanced here is that AI may be useful, but only when its role is clearly bounded and when the distinction between organisation and judgment is preserved.

10. Conclusion

Publication overload has made scholarly review not only a problem of evaluation, but also a problem of organisation. That does not diminish the importance of human peer review. It makes the surrounding workflow more difficult to sustain through manual effort alone.

This article has argued for a narrow and human-supervised role for AI in that environment. The relevant question is not whether AI should replace reviewers or editors, but whether it may help them manage increasing volume by organising information more efficiently. Used in this way, AI may have value in screening, routing, summarising, and structuring large bodies of material without assuming responsibility for scientific judgment.

The aim of this essay is not to claim that such systems are already validated or that they resolve the deeper limitations of peer review. It is to make explicit a distinction that is likely to matter increasingly in future scholarly publishing: AI may assist with organisation, while humans remain responsible for interpretation, critique, and decision-making.

Declarations

Article type: Conceptual essay / viewpoint article.

Author responsibility: The author is responsible for the final framing and content of this document.

Use of AI tools: AI-assisted drafting tools were used during development and revision. The final text was reviewed and substantially rewritten to align with the intended argument and genre.