

Transparency, accountability, and human oversight in evidence synthesis.


EasySLR is an AI-assisted platform for systematic and targeted literature reviews. It is designed to augment evidence synthesis teams and can be configured anywhere from fully human workflows to heavily AI-assisted ones. In all cases, review teams remain accountable for protocols, decisions and conclusions.
This page explains how we use AI. Our approach is aligned with the principles set out in Responsible use of AI in evidence synthesis (RAISE), which provides guidance on the responsible development, evaluation, selection and use of AI tools in evidence synthesis, and with the 2025 position statement from Cochrane, the Campbell Collaboration, JBI and the Collaboration for Environmental Evidence. It also reflects more recent task-based guidance on how different AI tool classes are used across review tasks.
References: RAISE guidance | 2025 Position Statement
Note: EasySLR is not affiliated with or endorsed by these organisations. RAISE is guidance, not a certification.
EasySLR supports the main stages of an SLR or targeted review, including protocol development, search strategy, de-duplication, screening, data extraction, quality appraisal and report generation.
Projects can be run with varying levels of AI involvement:
| Mode | Description |
|---|---|
| No AI | All screening and decisions performed by human reviewers |
| AI as Assistant | AI suggests decisions and supporting rationale; humans make final decisions |
| AI as one reviewer | AI acts alongside a human reviewer; a human conflict resolver adjudicates disagreements |
| AI Only + Human QC | AI completes initial screening, with human sampling, review and override through QC workflows |
All modes retain human accountability, audit trails and configurable governance controls.
Separately from AI usage, projects can be configured to require one or two independent reviews per article.
These design choices aim to be consistent with the core RAISE themes and the 2025 joint position statement: human accountability, transparent use, function-specific evaluation and appropriate oversight.
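The dual-review pattern above can be sketched in a few lines. This is a hypothetical illustration of the general idea, not EasySLR's data model or API:

```python
def conflicts(reviewer_a: dict, reviewer_b: dict) -> list:
    """Return record IDs where two independent reviews disagree,
    queued for a third-party (human) conflict resolver."""
    return sorted(rid for rid in reviewer_a
                  if rid in reviewer_b and reviewer_a[rid] != reviewer_b[rid])

# Two independent passes over the same records, e.g. one human and one AI
human = {"r1": "include", "r2": "exclude", "r3": "include"}
ai = {"r1": "include", "r2": "include", "r3": "include"}
queue = conflicts(human, ai)  # only the disagreement on r2 needs adjudication
```

In the "AI as one reviewer" mode described above, only the records in this conflict queue would reach the human adjudicator; agreements pass through directly.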
EasySLR uses large language models (LLMs) via API from established providers. These models are trained by the vendor and are not fine-tuned on customer data. For full-text screening, data extraction and checklist support, the platform uses retrieval-augmented generation (RAG), converting PDF documents into structured text and retrieving relevant passages. This is designed to ground outputs in source passages from your PDFs. Human reviewers still verify and approve outputs before use.
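The retrieve-then-generate pattern described above can be illustrated with a toy example. The chunking and keyword-overlap scoring below are deliberately simplified stand-ins (production RAG pipelines typically use embedding similarity), not EasySLR's implementation:

```python
def chunk_text(text: str, size: int = 40) -> list[str]:
    """Split extracted PDF text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(chunk: str, query: str) -> int:
    """Toy relevance score: how many query terms appear in the chunk."""
    terms = {t.lower() for t in query.split()}
    return sum(1 for w in chunk.lower().split() if w in terms)

def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(c, query), reverse=True)[:k]

# The retrieved passages are then sent to the LLM with the question,
# so the answer is grounded in, and citable to, the source text.
document = ("Patients were randomised 1:1 to drug A or placebo. "
            "The primary endpoint was overall survival at 12 months.")
passages = retrieve(chunk_text(document, size=8), "primary endpoint")
```

Grounding outputs in retrieved passages is what allows each suggestion to be traced back to highlighted source text for human verification.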
RAISE groups tools into five broad classes: 1. Rule-based algorithms, 2. Unsupervised classifiers, 3. Supervised classifiers, 4. Generative LLMs, and 5. Agentic AI. The summary below shows how EasySLR maps to the classes relevant to our workflow. See the RAISE guidance.
| Review task | RAISE tool class | How EasySLR uses it | Human role |
|---|---|---|---|
| Protocol generation | 4. Generative LLMs | AI generates structured protocol drafts from a research objective and flags gaps or missing elements for review. | Human reviewers verify, edit and approve the protocol before use. |
| Search strategy | 4. Generative LLMs | AI generates draft search queries and supports iterative refinement. | Review teams review and refine the strategy before final use. |
| De-duplication | 1. Rule-based algorithms | Configurable rule-based matching across 10 fields identifies likely duplicates before screening; no ML model is involved. | Review teams configure the matching rules and review the final duplicate set. |
| Screening | 4. Generative LLMs | Title and abstract screening provides inclusion or exclusion suggestions with rationales. Full-text screening uses RAG over PDFs to ground suggestions in retrieved passages. | Teams choose Assistant, Reviewer or AI Only + Human QC modes according to project governance. |
| Data extraction | 4. Generative LLMs | AI pre-populates extraction forms using RAG from full PDF content, with each value linked to highlighted source text. | Human reviewers verify, edit and approve extracted data. |
| Risk of bias and appraisal checklists | 4. Generative LLMs | AI can pre-populate responses across built-in checklists, with supporting references shown alongside checklist items. | All final responses are reviewed and approved by humans. |
| Report generation | 4. Generative LLMs | AI generates structured review reports and supporting tables from project outputs. | Teams edit and approve the final report before use. |
| AI agent for targeted reviews | 5. Agentic AI | Connected workflow for targeted reviews, from research question to draft report, with optional manual approval between stages or auto-progression. | Human checkpoints, notifications and audit records are available throughout. |
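As a rough illustration of the rule-based de-duplication class in the table above, a simplified matcher might compare normalised fields like this. The specific rules, fields and thresholds here are hypothetical, not EasySLR's configuration:

```python
import re

def normalise(value: str) -> str:
    """Lowercase, strip punctuation and collapse whitespace for comparison."""
    value = re.sub(r"[^a-z0-9 ]", "", value.lower())
    return re.sub(r"\s+", " ", value).strip()

def likely_duplicate(a: dict, b: dict) -> bool:
    """Flag a pair as a likely duplicate using simple field rules:
    a shared DOI, or an identical normalised title in the same year."""
    if a.get("doi") and a.get("doi") == b.get("doi"):
        return True
    return (normalise(a["title"]) == normalise(b["title"])
            and a.get("year") == b.get("year"))

rec1 = {"title": "Outcomes of Therapy X: a trial.", "year": 2023, "doi": ""}
rec2 = {"title": "outcomes of therapy x  a trial", "year": 2023, "doi": ""}
```

Because the logic is deterministic, teams can inspect exactly why a pair was flagged, which is what makes this class of tool auditable without any ML model involved.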
Multiple organisations have carried out evaluations and reported their findings through conference abstracts, posters and manuscripts. A consolidated list is available on our Publications page.
| Study | Venue | Key findings |
|---|---|---|
| Radotra et al | ISPOR 2025, Montreal | Replicated five published SLRs; AI recall ranged from 73% to 100% |
| Rathi et al | ISPOR 2024, Atlanta | LLM-assisted full-text screening matched human decisions, with oversight for complex eligibility |
| Rathi et al | Global Evidence Summit 2024 | High recall with human-in-the-loop configuration |
| Povsic & Armitage | World EPA Congress 2025 | Independent evaluation: ~40% time reduction in a complex breast cancer review |
AI-assisted workflows can deliver high recall and meaningful time savings, though performance varies by topic and human oversight remains essential.
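Recall, the headline metric in the studies above, is straightforward to compute when piloting on a subset of records with known (human-labelled) inclusion decisions. A minimal sketch:

```python
def recall(ai_decisions: dict, gold_includes: set) -> float:
    """Recall = truly included records the AI also flagged / all truly included.
    High recall means few relevant studies are missed at screening."""
    flagged = {rid for rid, d in ai_decisions.items() if d == "include"}
    true_positives = len(flagged & gold_includes)
    return true_positives / len(gold_includes)

decisions = {"r1": "include", "r2": "exclude", "r3": "include", "r4": "exclude"}
pilot_recall = recall(decisions, gold_includes={"r1", "r3", "r4"})  # 2 of 3 found
```

For screening, recall matters more than precision: a missed relevant study (a false negative) is harder to recover than an over-inclusion that a human later excludes.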
We recommend validating performance on a subset of records for each new protocol and topic before relying on AI-assisted decisions.
Additional studies are in progress and will be added to our Publications page.
EasySLR is built on the principle that review teams remain accountable for their reviews.
EasySLR is provided as a secure cloud service hosted on Amazon Web Services (AWS). We follow recognised security and privacy standards:
| Framework | Status |
|---|---|
| SOC 2 Type II | Compliant |
| ISO 27001:2022 | Compliant |
| GDPR | Aligned |
Full details are in our Privacy Policy, Terms of Service and Trust Center. Security documentation and audit reports are available under Non-Disclosure Agreement for regulators and data protection teams.
We're transparent about where EasySLR works best and where it has limitations.
EasySLR is optimised for English-language systematic reviews in clinical, economic, humanistic and epidemiological domains. Performance may be weaker outside these languages and domains.
Users should pilot EasySLR on a subset of records for each new protocol and topic. That said, researchers have used the platform successfully beyond typical domains, including road safety, implementation science and governance.
AI reflects the published literature and underlying model training data, which can under-represent some regions, languages and study types. Protocols and search strategies should still be designed with equity and inclusion in mind. EasySLR's explanations and reviewer statistics can help identify patterns in AI behaviour.
Third-party models evolve over time. We validate new versions internally before deployment and run regression tests on benchmark projects.
RAISE provides guidance on the responsible use of AI in evidence synthesis. More recent task-based guidance adds task-specific recommendations by tool class. EasySLR is designed to align with the main themes across both.
| RAISE Theme | How EasySLR Addresses It |
|---|---|
| Human accountability | Review teams and organisations remain responsible. AI modes are deliberately enabled, and QC workflows allow override of any decision. |
| Transparent purpose | We state which review tasks EasySLR supports, how human verification fits, and where limitations remain. AI rationales and linked source passages are surfaced where applicable. |
| Evaluation and monitoring | Published studies, internal regression testing, reviewer statistics and conflict reporting support local evaluation and ongoing monitoring. |
| Data protection | SOC 2 Type II and ISO 27001:2022 compliant. Customer data are not used to train EasySLR models. |
| Human oversight | Assistant, Reviewer, and AI Only + Human QC modes let teams choose the level of automation that fits their governance model. |
For procurement, legal, compliance, ethics committees or institutional review boards that need a fuller description, we can provide supporting documentation on request.
Contact: support@easyslr.com
We'll route procurement, legal and compliance requests to the right team. Some materials are shared under Non-Disclosure Agreement where appropriate.
Last updated: April 2026