4 September 2023
Researchers conducting living systematic reviews could benefit from automation tools that make their workflow more efficient. Tools using artificial intelligence and machine learning could improve the screening process but still have limitations, according to a study published in Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen.
The team behind this research used a case study from their own work to highlight some of the practical challenges and barriers to implementing and using reliable living review tools. Their case study was a living review of the impact of COVID-19 on self-harm and suicidal behaviour, which involved daily monitoring of evidence for two years at the height of the pandemic.
The team identified 11 tools with automation capabilities that could make updating living systematic reviews more efficient. Automation approaches such as machine learning and text mining were found to be particularly helpful for screening references.
However, none of the tools covered the whole living review workflow while at the same time providing a complete, transparent, and published validation of their automation methods. In particular, researchers identified two areas where the tools needed improving: linking related studies and synthesising supporting evidence between updates.
They found that developing tools to address these two issues could make the reviewing process more efficient. They also found that preprint servers and websites dedicated to sharing updated results could improve how findings are communicated between formal publication updates. This is especially important for living systematic reviews, given the usually time-sensitive nature of their results.
Living systematic reviews are most useful when there is a need for up-to-date evidence on a given topic, such as during a public health emergency following earthquakes, floods or pandemics.
Working on a living systematic review involves continuously updating it with the latest research on a specific topic. This type of work can be time consuming and resource intensive because of the volume of new research and the possibility of new developments in a particular field.
Four of the reviewed tools covered multiple workflow steps like reference screening, data extraction, and visualization. The other tools focused on specific tasks like scoping, reference retrieval, automated extraction, and dissemination.
Julian P. T. Higgins, Professor of Evidence Synthesis at Bristol Medical School, said:
“There are several ways to make living reviews more feasible with most (but not all) focusing on automation and efficiency of the review process.
“Despite offering basic living review functionalities, the tools we looked at need to be developed further. Areas with research potential are broader evidence retrieval from additional data sources, developing automation beyond active-learning for screening, grouping multiple papers from the same study, matching pre-prints with final publications of the same article, and ensuring interoperability across tools to improve data sharing and dissemination pipelines.”