
When AI Meets Systematic Reviews and Shapes the Future of Evidence Synthesis

Written by Tanja Fens | 17 October 2025

Imagine a researcher at their desk, surrounded by hundreds of open articles, spreadsheets filled with study data, and the constant ticking of deadlines. Systematic literature reviews (SLRs) are essential for understanding the state of scientific evidence, yet the process can feel like wading through an endless sea of studies. Every title must be screened, every abstract evaluated, and every finding carefully extracted. It is meticulous, exhausting work—and for many, the sheer volume can feel overwhelming.

Systematic reviews are the backbone of evidence-based research—but conducting them can feel like an uphill battle against time and data.

Amid this challenge, artificial intelligence (AI) offers a tantalizing promise: tools that can help screen studies faster, reduce repetitive workload, and assist researchers without replacing their expertise. But how do researchers actually perceive AI in this context, and what drives their willingness to adopt it?

Exploring Researchers’ Preferences

A recent study, "The introduction and adoption of artificial intelligence in systematic literature reviews: a discrete choice experiment," tackled this question through a discrete choice experiment involving 187 participants with diverse experience in SLRs and varying familiarity with AI. Participants were presented with hypothetical AI tools that differed in key characteristics: the AI's role in screening, the sensitivity of its outputs, the level of user proficiency required, and the effort needed for training.

The aim was to understand the trade-offs researchers make when deciding whether to adopt AI tools in their work.
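For readers curious about the mechanics, discrete choice experiments of this kind are commonly analyzed with a conditional logit model: each hypothetical tool receives a utility score built from its attributes, and the estimated coefficients (the "part-worths") indicate how strongly each attribute drives the choice. The sketch below simulates and fits such a model in Python. The four dummy-coded attributes, the two-alternative choice tasks, the number of tasks, and the part-worth values are all illustrative assumptions, not the study's actual design or results.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical design: 200 choice tasks, 2 alternatives each, 4 dummy-coded
# attributes (e.g., workload reduction, output sensitivity, proficiency
# required, training effort). This coding is illustrative, not the paper's.
n_tasks, n_alts, n_attrs = 200, 2, 4
X = rng.integers(0, 2, size=(n_tasks, n_alts, n_attrs)).astype(float)

# Assumed "true" part-worths used only to simulate responses.
true_beta = np.array([1.2, 0.8, -0.5, -0.3])
utilities = X @ true_beta + rng.gumbel(size=(n_tasks, n_alts))
choices = utilities.argmax(axis=1)  # each respondent picks the highest-utility tool

def neg_log_lik(beta):
    v = X @ beta                                  # systematic utility per alternative
    v = v - v.max(axis=1, keepdims=True)          # shift for numerical stability
    log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))  # logit choice probabilities
    return -log_p[np.arange(n_tasks), choices].sum()

res = minimize(neg_log_lik, np.zeros(n_attrs), method="BFGS")
print("estimated part-worths:", res.x.round(2))
```

In a real analysis, standard errors and trade-off ratios between attributes (how much of one attribute respondents would give up for another) would be derived from the fitted coefficients.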

Insights from the Study

The findings revealed a nuanced landscape. Workload reduction emerged as an important motivator, but it was not the only consideration. Researchers preferred tools that complemented human judgment rather than replaced it. Tools requiring moderate proficiency were more likely to be adopted, while AI solutions demanding expert-level skills sometimes discouraged even highly experienced scientists.

Interestingly, researchers with extensive experience in conducting SLRs were more open to adopting AI, whereas those with broader scientific expertise tended to hesitate if the tools seemed too complex.

Implications for the Future

The study highlights a clear path for AI in systematic reviews: tools must be reliable, sensitive, and supportive, reducing effort without compromising judgment. When designed thoughtfully, AI can transform SLRs from a daunting, resource-heavy process into a more manageable, efficient workflow—allowing researchers to focus on analysis, insights, and decision-making rather than repetitive screening.

The research was conducted by Seye Abogunrin, Bart P H Slob, Marie Lane, Sajad Emamipour, Piotr Twardowski, Cornelis Boersma, and Jurjen van der Schans, and offers valuable guidance for developers and research teams working to integrate AI into evidence synthesis.