Introduction

This page provides a brief introduction to evidence synthesis automation, including but not limited to AI-based automation tools.

Evidence synthesis automation has been a significant area of research for over two decades, and recent developments in artificial intelligence have enabled considerable progress in the field. AI automation tools hold great promise for increasing the efficiency, speed, and accuracy of evidence syntheses. However, they also pose significant risks to the transparency, reproducibility, and accuracy of a project.

Given the importance of transparency, reproducibility, and, above all, accuracy to evidence synthesis methodologies, research organizations including Cochrane, Campbell, JBI, and others have begun to define best practices for AI use in evidence synthesis. The information on this page derives from the following resources.

Recommendations for authors

The following recommendations derive from Responsible AI in Evidence Synthesis (RAISE): guidance and recommendations. RAISE 1 provides more detailed recommendations for authors ("evidence synthesists"), as well as for evidence synthesis methodologists and other roles.

  1. Evidence synthesis authors are ultimately responsible for the evidence synthesis.
  2. Evidence synthesis authors have a duty to verify that any AI tool they intend to use does what it claims to do and is appropriate for their use case. Authors must provide justification and point to the evidence base.
  3. Evidence synthesis authors must report AI use in the manuscript transparently and in detail.
  4. Evidence synthesis authors must ensure that ethical, legal, and regulatory standards are adhered to when using AI.
  5. Evidence synthesis authors should contribute to the evidence synthesis automation ecosystem to help it grow and develop.

Deciding whether or not to use an automation tool

Consider the following questions when deciding whether or not to use an automation tool:

  • Is there good quality evidence that the tool does what it claims to do?
  • Is there good quality evidence that the tool is appropriate for your specific use case?
  • Does your team have the money to purchase access to the tool?
  • Does your team have the time and expertise to develop or adapt an existing tool?
  • Will the time spent learning (and possibly training) the tool outweigh the time spent doing the task manually?
  • Can you use the tool while following ethical and legal codes?
  • Will using the tool undermine the trustworthiness or reliability of your project?

See page 9 of Responsible AI in Evidence Synthesis (RAISE) 3 for a detailed framework for selecting and using AI automation tools.

Methods guidance and reporting of automation tools

The Cochrane Handbook for Systematic Reviews of Interventions provides clarification on the acceptability and feasibility of automation at several points:



PRISMA 2020, the reporting standard* for systematic reviews, has several items that cover required reporting of automation tools, including:

  • #7 search strategy
  • #8 selection process
  • #9 data collection process
  • #11 study risk of bias assessment
  • #14 reporting bias assessment

*See here for clarification on the difference between methodological standards and reporting standards.

Spotlighted automation tools

Below you will find a small selection of tools, organized by the stage of the evidence synthesis process for which they are most relevant.

More comprehensive lists of evidence synthesis automation tools can be found:


Searching

Litsearchr is an R package that facilitates semi-automated search strategy development.

Polyglot Search translates the syntax of database search strings from one database to another (e.g. PubMed to Web of Science).
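As a rough illustration of what this kind of syntax translation involves, the sketch below rewrites PubMed-style field tags into Web of Science-style field codes. The mapping and function name here are hypothetical and greatly simplified; Polyglot Search itself handles much more of each database's syntax (truncation, proximity operators, subject headings).

```python
import re

# Hypothetical, minimal mapping of PubMed field tags to Web of Science
# field codes -- for illustration only; real translation covers far more.
TAG_MAP = {"Title": "TI", "Abstract": "AB"}

def translate_pubmed_to_wos(query):
    """Rewrite `term[Title]`-style PubMed tags as `TI=(term)`-style codes."""
    def repl(match):
        term, tag = match.group(1), match.group(2)
        # Leave any tag we do not recognize untouched.
        if tag not in TAG_MAP:
            return match.group(0)
        return f"{TAG_MAP[tag]}=({term})"
    return re.sub(r"(\w+)\[(\w+)\]", repl, query)
```

For example, `translate_pubmed_to_wos("cancer[Title] AND therapy[Abstract]")` yields `"TI=(cancer) AND AB=(therapy)"`, while Boolean operators pass through unchanged.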

Yale MeSH Analyzer examines MeSH terms attached to groups of PubMed citations.
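As a rough sketch of that idea, the function below tallies how often each MeSH term appears across a set of citation records. The record format here is hypothetical; the actual Yale MeSH Analyzer retrieves PubMed records by PMID and presents the terms in a comparison grid.

```python
from collections import Counter

def mesh_frequency(records):
    """Tally MeSH term frequencies across a group of citations.

    `records` is a list of dicts, each with a 'mesh' list of term strings --
    a simplified stand-in for parsed PubMed records.
    """
    counts = Counter()
    for rec in records:
        counts.update(rec.get("mesh", []))
    return counts
```

Terms shared by most records in a known-relevant set are good candidates for a search strategy; terms appearing only once may point to inconsistent indexing.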

Screening

Covidence is a tool which aids with deduplication, screening, data extraction, and critical appraisal. It has a selection of automation features.

Rayyan is a tool which aids with deduplication and screening. It has a selection of automation features.

Revtools is an R package that can support article screening.
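To illustrate the kind of rule-based deduplication such tools automate, here is a minimal sketch that keys records on DOI where available and on a normalized title otherwise. The record format and function names are hypothetical; real deduplication pipelines also apply fuzzy matching on authors, year, and journal.

```python
import re

def _norm_title(title):
    # Lowercase and strip non-alphanumerics so trivial formatting
    # differences (case, punctuation, spacing) do not hide duplicates.
    return re.sub(r"[^a-z0-9]", "", title.lower())

def deduplicate(citations):
    """Keep the first record for each DOI (preferred) or normalized title.

    `citations` is a list of dicts with optional 'doi' and 'title' keys --
    a simplified stand-in for a reference-manager export.
    """
    seen = set()
    unique = []
    for c in citations:
        if c.get("doi"):
            key = ("doi", c["doi"].lower())
        else:
            key = ("title", _norm_title(c.get("title", "")))
        if key not in seen:
            seen.add(key)
            unique.append(c)
    return unique
```

Keying on DOI first avoids collapsing two distinct papers that happen to share a short title, while the normalized-title fallback still catches duplicates in records that lack a DOI.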

Critical appraisal

RobotReviewer is a machine learning system that lets users upload RCT articles and see automatically extracted information about the trial's conduct (the PICO elements, study design, and whether there is a risk of bias).

Selected resources for learning about evidence synthesis automation