Cochrane Evidence Synthesis and Methods
Special Issue: AI in Evidence Synthesis
Call for Papers: https://onlinelibrary.wiley.com/page/journal/28329023/homepage/call-for-papers/si-2024-000889
Submission deadline: Friday 28 February 2025
AI promises to transform the way we practise evidence synthesis (ES) (Coiera & Liu, 2022). Given the growing volume of primary research, making ES more efficient and reliable would allow the ES community to better meet the many demands for evidence-based decision-making across healthcare.
Possibilities are expanding with the development of generative AI, and large language models (LLMs) in particular. Recent research has explored the use of LLMs for search, screening, data extraction, critical appraisal, and summarisation (Gartlehner et al., 2024; Hasan et al., 2024; Li et al., 2024; Ovelman et al., 2024; Wang et al., 2023; Zhang et al., 2024).
However, the application of AI to automate steps in the ES process is not new. Moreover, the nearly two-decade history of machine learning (ML) in ES is in part a history of unfulfilled promises. For example, nearly two decades after ML-prioritised screening (the most common of all applications of AI for ES) was first proposed as a way to save screening labour (Cohen et al., 2006), the practice is still not recommended in the Cochrane Handbook because reliable processes to manage the risk of missing studies are lacking (Lefebvre et al., 2023).
This is in part explained by a research focus on demonstrating large potential efficiency gains through retrospective evaluation, rather than on implementation and on developing processes that enable the responsible use of AI in active reviews, where validation data is unavailable, or is scarce and only produced during the review itself (O'Connor et al., 2019). In other words, we lack research on how to design human-in-the-loop processes that let us evaluate as we go when applying AI to new ES projects, and on how to better quantify the risks of using AI so that they can be weighed appropriately against its benefits.
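As an illustration only, the sketch below shows one possible human-in-the-loop check during ML-prioritised screening: reviewers screen a random sample of the records the model has deprioritised, and the sample is used to bound how many relevant studies may remain unscreened. The function names, the default sample size, and the rule-of-three bound are assumptions made for this example, not an established stopping criterion.

```python
import random
from statistics import NormalDist

def estimate_missed_includes(unscreened_ids, screen_record,
                             sample_size=200, confidence=0.95):
    """Estimate how many relevant records remain in the unscreened pool.

    screen_record is a human-in-the-loop callback: given a record id,
    a reviewer judges the record and the callback returns True if it
    is relevant. All names here are illustrative.
    """
    sample = random.sample(list(unscreened_ids),
                           min(sample_size, len(unscreened_ids)))
    hits = sum(screen_record(record_id) for record_id in sample)
    p_hat = hits / len(sample)
    if hits == 0:
        # "Rule of three": approximate one-sided 95% upper bound
        # on prevalence when no relevant records are found
        upper = 3.0 / len(sample)
    else:
        z = NormalDist().inv_cdf(confidence)  # one-sided normal bound
        se = (p_hat * (1 - p_hat) / len(sample)) ** 0.5
        upper = min(1.0, p_hat + z * se)
    remaining = len(unscreened_ids)
    return {
        "sampled": len(sample),
        "relevant_in_sample": hits,
        "estimated_missed": p_hat * remaining,
        "upper_bound_missed": upper * remaining,
    }
```

A review team might repeat such a check at intervals and stop prioritised screening only once the upper bound falls below a tolerance agreed in the protocol.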
The enthusiasm around LLMs means that available evaluations vary in robustness and quality. For example, "prompt engineering", whether explicit or implicit, is frequently practised on the same data used to evaluate the system, making the resulting validation scores unreliable predictors of performance in future settings where labelled data is not available. We must therefore prioritise, and encourage, better validation practices, especially where there is excitement about a new technology. We also need to synthesise results across validation studies, lest we fall victim to the very risk that ES itself was designed to mitigate: making decisions on biased subsets of the evidence.
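To make the point about prompt engineering concrete, the sketch below shows one simple safeguard: split the labelled records once, iterate on prompts using only the development split, and score the held-out test split a single time after the prompt is frozen. This is a minimal illustration in Python; the function names, record format, and 50/50 split are assumptions for the example, not a prescribed protocol.

```python
import random

def split_for_prompt_development(labelled_records, dev_fraction=0.5, seed=42):
    """Split labelled records once, before any prompt engineering begins.

    The development split may be inspected freely while iterating on
    prompts; the held-out test split must be scored exactly once, after
    the prompt is frozen.
    """
    rng = random.Random(seed)
    records = list(labelled_records)
    rng.shuffle(records)
    cut = int(len(records) * dev_fraction)
    return records[:cut], records[cut:]

def evaluate_frozen_prompt(classify, test_records):
    """Score the frozen prompt-based classifier on the untouched test split."""
    tp = fp = tn = fn = 0
    for record in test_records:
        predicted_include = classify(record["text"])  # True means "include"
        if record["label"]:  # gold label: record is relevant
            tp += predicted_include
            fn += not predicted_include
        else:
            fp += predicted_include
            tn += not predicted_include
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else None,
        "specificity": tn / (tn + fp) if (tn + fp) else None,
    }
```

Reporting which split each score comes from, and how many times the held-out split was scored, would make published validation results considerably easier to interpret.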
This call aims to bring together papers that bridge the gap between demonstrating AI's potential and implementing it responsibly, whether by collecting, appraising, and synthesising evidence on the use of AI across ES tasks to guide decisions about its use, or through empirical or theoretical research that shows how AI can be used in active reviews, where pre-annotated validation datasets are not available.
This is a joint call between the Collaboration for Environmental Evidence (CEE), Campbell, and Cochrane for papers to be considered for publication in CEE's Environmental Evidence journal (https://www.biomedcentral.com/collections/AISESEM), the Campbell Systematic Reviews journal (https://onlinelibrary.wiley.com/journal/18911803), and the Cochrane Evidence Synthesis and Methods journal. Working together reflects our shared recognition that stronger collaboration between the fields of AI and ES should be built on shared interests. We want to curate a collection of papers from our respective journals that will increase the discoverability of research in this area, foster innovation across our disciplines, and support the generation of knowledge for future policy-making, all through the responsible use of AI.
Topics in AI in evidence synthesis include, but are not limited to:
- Evaluation of benefits and risks of AI for evidence synthesis
- Validation methods for AI in evidence synthesis
- Exploration of new types of evidence synthesis enabled by AI
- Critical studies of social implications of AI in evidence synthesis
- Studies within a review (SWAR) involving AI implementation
- Evidence syntheses on AI in evidence synthesis
- Tutorials on using AI in evidence synthesis
Guest Editor:
Gregory Laynor (gregory.laynor@nyulangone.org)
NYU Grossman School of Medicine
New York, USA