EasySLR Blog

Systematic Literature Reviews in HEOR: The Inevitability of AI

Dev Chandan

#Blog

Introduction

In Health Economics and Outcomes Research (HEOR), systematic literature reviews are critical for evidence-based decision-making. PubMed, with over 36 million citations and nearly 2 million added annually, highlights the explosive growth in clinical research (U.S. National Library of Medicine, 2023). From 1991 to 2020, primary literature grew at an average annual rate of 10.28%, led by pragmatic clinical trials at 83.68%. Secondary literature grew at 10.57%, with network meta-analyses increasing by 48.97%. Michelson and Reuter (2019) quantify the scale of this challenge: over 90% of clinical-trial compounds fail to demonstrate sufficient efficacy and safety, with systematic reviews and meta-analyses representing a significant portion of the research effort, and they estimate that each systematic literature review costs around $141,194 on average. This surge in data volume and complexity makes AI integration essential for efficiency, accuracy, and consistency in systematic reviews, and increasingly indispensable for managing HEOR research demands.

The Burden of Traditional Systematic Reviews

The time and resource burden of conducting systematic reviews is substantial. On average, a systematic review takes 67 weeks to complete, from protocol registration to publication (Borah et al., 2017). The median number of citations screened for a systematic review is 1,781, with some reviews screening over 50,000 citations (Bannach-Brown et al., 2019). This screening burden makes manual screening impractical, especially given the increasing demand for rapid reviews with timelines of 1-6 months compared to 12-24 months for full systematic reviews (Tricco et al., 2015).

At EasySLR, we have developed an intuitive web-based application designed to automate and enhance every step of the SLR process. Our platform harnesses advanced Large Language Models (LLMs) to handle tasks that previously consumed countless hours and resources. From study screening to data extraction, EasySLR is fine-tuned to meet the rigorous demands of HEOR research.

The Role of AI in Screening

AI can quickly process and screen vast amounts of data, allowing researchers to handle larger volumes without sacrificing thoroughness. AI-powered tools such as the one described by Chai et al. (2021) offer a promising solution: by semi-automating abstract screening, their tool delivered workload savings of 60-96% across a sample of systematic reviews and scoping reviews, and demonstrated significant time savings over manual screening in a real-world interactive analysis. With EasySLR, we have seen that transitioning from Excel to AI tools significantly improves screening rates, with up to a 1.5x increase at the TiAB (Title and Abstract) stage. AI-assisted screening can reduce workload by 30-70% compared to manual screening (O'Mara-Eves et al., 2015), enabling much faster turnaround times for rapid evidence synthesis (Clark et al., 2020). This shift is crucial because traditional tools often lack the collaborative features and advanced capabilities that modern systematic reviews require.
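To make the semi-automated TiAB step concrete, here is a minimal sketch of how an LLM-based screening prompt might be assembled from predefined inclusion and exclusion criteria. This is an illustration only, not EasySLR's actual implementation: the criteria are hypothetical, and the final prompt would be sent to whichever LLM the platform uses, with uncertain records routed back to a human reviewer.

```python
# Illustrative sketch of LLM-assisted title/abstract (TiAB) screening.
# Criteria below are hypothetical examples, not a real review protocol.

INCLUSION = [
    "Adult patients with type 2 diabetes",       # hypothetical Population
    "Reports a randomised controlled trial",     # hypothetical Study design
]
EXCLUSION = [
    "Animal or in vitro studies",
    "Conference abstracts without full text",
]

def build_screening_prompt(title: str, abstract: str) -> str:
    """Assemble a structured prompt asking the model for an include/exclude vote."""
    criteria = "\n".join(f"- INCLUDE if: {c}" for c in INCLUSION)
    criteria += "\n" + "\n".join(f"- EXCLUDE if: {c}" for c in EXCLUSION)
    return (
        "You are screening citations for a systematic review.\n"
        f"Criteria:\n{criteria}\n\n"
        f"Title: {title}\nAbstract: {abstract}\n\n"
        "Answer with one word: INCLUDE, EXCLUDE, or UNSURE."
    )

prompt = build_screening_prompt(
    "Metformin vs placebo in adults with T2DM",
    "A randomised controlled trial of 400 adults...",
)
# The prompt is then sent to an LLM; UNSURE or borderline records are
# escalated to a human reviewer rather than auto-excluded.
```

Applying the same criteria text to every citation is what gives the semi-automated step its consistency advantage over ad hoc manual judgement.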

Boosting Accuracy and Precision

AI enhances the accuracy of systematic reviews by minimising human error during full-text screening. By consistently applying predefined inclusion and exclusion criteria, AI tools reduce the risk of overlooking pertinent studies. Features like AI-extracted PICOS (Population, Intervention, Comparison, Outcome, and Study design) ensure comprehensive and accurate data capture, with each citation and source meticulously highlighted to maintain transparency and reliability (Jonnalagadda et al., 2015).
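One way to keep AI-extracted PICOS fields transparent is to store each extracted value alongside the verbatim quote and location it came from, so a reviewer can verify every claim against the source. The sketch below shows such a record shape; the class and field names are illustrative assumptions, not EasySLR's actual schema.

```python
# Sketch of a PICOS extraction record with per-field source citations.
# Names (SourcedValue, PicosRecord) are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class SourcedValue:
    value: str     # the extracted data point
    quote: str     # verbatim sentence the value was taken from
    location: str  # e.g. "Methods, paragraph 2"

@dataclass
class PicosRecord:
    population: SourcedValue
    intervention: SourcedValue
    comparison: SourcedValue
    outcome: SourcedValue
    study_design: SourcedValue

    def missing_sources(self) -> list[str]:
        """Flag fields with no supporting quote, for human review."""
        return [
            name for name, sv in vars(self).items() if not sv.quote.strip()
        ]
```

Because every field carries its own quote and location, a human reviewer can audit each extracted value in seconds, and fields with missing provenance can be flagged automatically instead of slipping through.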

EasySLR AI ensures high precision during data extraction by systematically capturing relevant data points from studies along with source citations. This reduces the likelihood of missing critical information and enhances the reliability of the review findings. Data extraction errors occur in up to 30% of included studies in systematic reviews (Mathes et al., 2017), but AI-assisted extraction can reduce extraction time by 30-50% (Marshall et al., 2019). AI-assisted prioritisation has achieved recall rates of 95% while roughly doubling productivity compared to manual methods (Przybyła et al., 2018).

The Future of AI in HEOR

Systematic literature reviews often involve complex analyses of diverse study designs and outcomes. AI can manage this complexity, handling multi-layered data extraction and synthesis. The integration of AI into systematic reviews is a game-changer, improving the efficiency of the entire workflow. However, it is crucial to recognise that AI should supplement rather than replace human expertise. As Chai et al. (2021) emphasise, researchers must still apply their domain knowledge to verify the information provided by AI tools.

Transitioning to AI-based systematic literature reviews is not just a technological upgrade; it is a necessary evolution to keep pace with the growing demands of clinical research. AI-based software not only streamlines the screening process but also ensures more precise and consistent data extraction, ultimately improving the quality and speed of reviews. The future of evidence-based decision-making in HEOR is undeniably intertwined with AI.

References

  • Bannach-Brown, A., Przybyła, P., Thomas, J., Rice, A. S., Ananiadou, S., Liao, J., & Macleod, M. R. (2019). Machine learning algorithms for systematic review: reducing workload in a preclinical review of animal studies and reducing human screening error. Systematic Reviews, 8(1), 23. https://doi.org/10.1186/s13643-019-0942-7

  • Borah, R., Brown, A. W., Capers, P. L., & Kaiser, K. A. (2017). Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the PROSPERO registry. BMJ Open, 7(2), e012545. https://doi.org/10.1136/bmjopen-2016-012545

  • Chai, K. E. K., Lines, R. L. J., Gucciardi, D. F., & Ng, L. (2021). Research screener: a machine learning tool to semi-automate abstract screening for systematic reviews. Systematic Reviews, 10(1), 93. https://doi.org/10.1186/s13643-021-01635-3

  • Clark, J., Glasziou, P., Del Mar, C., Bannach-Brown, A., Stehlik, P., & Scott, A. M. (2020). A full systematic review was completed in 2 weeks using automation tools: a case study. Journal of Clinical Epidemiology, 121, 81-90. https://doi.org/10.1016/j.jclinepi.2020.01.008

  • Jonnalagadda, S. R., Goyal, P., & Huffman, M. D. (2015). Automating data extraction in systematic reviews: a systematic review. Systematic Reviews, 4(1), 78. https://doi.org/10.1186/s13643-015-0066-7

  • Marshall, I. J., Wallace, B. C., & Brassey, J. (2019). Rapid reviews may produce different results to systematic reviews: a meta-epidemiological study. Journal of Clinical Epidemiology, 109, 30-41. https://doi.org/10.1016/j.jclinepi.2019.01.014

  • Mathes, T., Klaßen, P., & Pieper, D. (2017). Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Medical Research Methodology, 17(1), 152. https://doi.org/10.1186/s12874-017-0431-4

  • Michelson, M., & Reuter, K. (2019). The significant cost of systematic reviews and meta-analyses: A call for greater involvement of machine learning to assess the promise of clinical trials. Contemporary Clinical Trials Communications, 16, 100443. https://doi.org/10.1016/j.conctc.2019.100443

  • O'Mara-Eves, A., Thomas, J., McNaught, J., Miwa, M., & Ananiadou, S. (2015). Using text mining for study identification in systematic reviews: a systematic review of current approaches. Systematic Reviews, 4(1), 5. https://doi.org/10.1186/2046-4053-4-5

  • Przybyła, P., Brockmeier, A. J., Kontonatsios, G., Le Pogam, M.-A., McNaught, J., von Elm, E., Nolan, K., & Ananiadou, S. (2018). Prioritising references for systematic reviews with RobotAnalyst: a user study. Research Synthesis Methods, 9(3), 470–488. https://doi.org/10.1002/jrsm.1311

  • Tricco, A. C., Antony, J., Zarin, W., Strifler, L., Ghassemi, M., Ivory, J., Perrier, L., Hutton, B., Moher, D., & Straus, S. E. (2015). A scoping review of rapid review methods. BMC Medicine, 13(1), 224. https://doi.org/10.1186/s12916-015-0465-6

  • U.S. National Library of Medicine. (2023). PubMed Overview. https://pubmed.ncbi.nlm.nih.gov/about/