
    How We Use AI

    Transparency, accountability, and human oversight in evidence synthesis.


    Key Points

    • ✓ AI assists, humans decide
    • ✓ Zero data retention with our AI model provider
    • ✓ SOC 2 Type II & ISO 27001:2022 compliant
    • ✓ AI decisions labelled and exportable in Excel
    • ✓ Customer data not used to train EasySLR models
    • ✓ Configurable AI modes, QC workflows and audit trails

    EasySLR is an AI-assisted platform for systematic and targeted literature reviews. It's designed to augment evidence synthesis teams and can be configured from human-only workflows to AI-assisted workflows. In all cases, review teams remain accountable for protocols, decisions and conclusions.

    This page explains how we use AI. Our approach is aligned with the principles set out in Responsible use of AI in evidence synthesis (RAISE), which provides guidance on the responsible development, evaluation, selection and use of AI tools in evidence synthesis. It is also informed by the 2025 position statement from Cochrane, the Campbell Collaboration, JBI and the Collaboration for Environmental Evidence, and reflects more recent task-based guidance on how different AI tool classes are used across review tasks.

    References: RAISE guidance | 2025 Position Statement

    Note: EasySLR is not affiliated with or endorsed by these organisations. RAISE is guidance, not a certification.

    1. How EasySLR uses AI

    EasySLR supports the main stages of an SLR or targeted review, including:

    • Search strategy support
    • Protocol support
    • De-duplication
    • Title and abstract screening
    • Full-text screening
    • Conflict analysis (human-human and human-AI)
    • Data extraction during screening and full data extraction
    • Risk of bias and appraisal checklists (14 built-in checklists supported; AI may pre-populate responses, but all outputs require human verification and approval)
    • Report generation
    • AI agent for targeted reviews

    AI modes

    Projects can be run with varying levels of AI involvement:

    • No AI: All screening and decisions performed by human reviewers
    • AI as Assistant: AI suggests decisions and supporting rationale; humans make final decisions
    • AI as one reviewer: AI acts alongside a human reviewer; a human conflict resolver adjudicates disagreements
    • AI Only + Human QC: AI completes initial screening, with human sampling, review and override through QC workflows

    All modes retain human accountability, audit trails and configurable governance controls.

    Review configurations

    Separately from AI usage, projects can require either one or two independent reviews per article:

    • Single reviewer: Suitable for targeted literature reviews, rapid reviews, scoping reviews, etc. QC workflows allow sampling, review and override of decisions.
    • Dual reviewer: Suitable for systematic literature reviews or reviews with AI as one of the reviewers. Reviews can be by two humans or one human and one AI. Decisions are blinded, with a human conflict resolver for disagreements.

    Governance and controls

    • QC workflows: Project owners can sample, review and override any decision, human or AI
    • Audit trails: All decisions are recorded with clear identification of which reviewer, human or AI, made each decision
    • Governance settings: Organisation administrators can enable or restrict AI by project and stage, with usage limits where required
    • Stage approvals: The AI agent can run with manual approval between stages or auto-progression, depending on the team's workflow
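    The audit-trail control above records who made each decision and why. As a purely illustrative sketch of what such a record could look like (the field names and values below are invented for this example, not EasySLR's actual schema):

```python
# Illustrative shape of an audit-trail record. Field names are invented
# for the example; they are not EasySLR's actual data model.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Decision:
    article_id: str
    stage: str       # e.g. "title_abstract" or "full_text"
    decision: str    # "include" or "exclude"
    reviewer: str    # a human user ID, or "ai" for model decisions
    rationale: str   # surfaced for verification when reviewer == "ai"
    timestamp: str   # UTC, so records from different regions line up

entry = Decision(
    article_id="rec-0042",
    stage="title_abstract",
    decision="exclude",
    reviewer="ai",
    rationale="Population does not match protocol (pediatric only).",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# asdict() gives a flat dict, ready to serialise for export or audit.
record = asdict(entry)
```

    Keeping the reviewer identity on every record is what makes "AI decisions labelled and exportable" possible downstream.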

    Key design choices

    • Humans remain responsible for protocols, final decisions and interpretation
    • AI outputs are traceable to specific articles, with suggested values linked to highlighted text in source PDFs
    • For screening decisions and extracted fields, AI rationales are surfaced for reviewer verification rather than treating the system as a black box
    • All AI decisions are labelled and exportable in Excel for external audit

    These design choices aim to be consistent with the core RAISE themes and the 2025 joint position statement: human accountability, transparent use, function-specific evaluation and appropriate oversight.

    AI technology

    EasySLR uses large language models (LLMs) via API from established providers. These models are trained by the vendor and are not fine-tuned on customer data. For full-text screening, data extraction and checklist support, the platform uses retrieval-augmented generation (RAG), converting PDF documents into structured text and retrieving relevant passages. This is designed to ground outputs in source passages from your PDFs. Human reviewers still verify and approve outputs before use.
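    To make the retrieval step above concrete, here is a deliberately minimal sketch of RAG-style passage retrieval. It is not EasySLR's implementation: the chunk size, the word-overlap scoring and the example text are all simplified stand-ins for real chunking and semantic retrieval.

```python
# Minimal sketch of the retrieval step in a RAG pipeline: split extracted
# PDF text into passages, then rank passages against the question.
# Word-overlap scoring stands in for real embedding-based retrieval.

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into overlapping windows of `size` words."""
    words = text.split()
    step = size // 2
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - step, 1), step)]

def retrieve(question: str, passages: list[str], k: int = 3) -> list[str]:
    """Rank passages by shared words with the question; keep the top k."""
    q = set(question.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

doc = ("Patients receiving drug A showed a 30 percent reduction in events. " * 3
       + "The study excluded patients under 18 years of age.")
top = retrieve("Which patients were excluded from the study?", chunk(doc))
# A grounded prompt would combine the question with `top`, so the model
# answers from retrieved source text rather than from memory.
```

    Grounding the prompt in retrieved passages is what lets suggested values link back to highlighted text in the source PDF.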

    How EasySLR maps to RAISE tool classes

    RAISE groups tools into five broad classes: 1. Rule-based algorithms, 2. Unsupervised classifiers, 3. Supervised classifiers, 4. Generative LLMs, and 5. Agentic AI. The summary below shows how EasySLR maps to the classes relevant to our workflow. See the RAISE guidance.

    • Protocol generation (class 4, Generative LLMs): AI generates structured protocol drafts from a research objective and flags gaps or missing elements for review. Human role: reviewers verify, edit and approve the protocol before use.
    • Search strategy (class 4, Generative LLMs): AI generates draft search queries and supports iterative refinement. Human role: review teams review and refine the strategy before final use.
    • De-duplication (class 1, Rule-based algorithms): Configurable rule-based matching across 10 fields identifies likely duplicates before screening; no ML model is involved. Human role: review teams configure the matching rules and review the final duplicate set.
    • Screening (class 4, Generative LLMs): Title and abstract screening provides inclusion or exclusion suggestions with rationales; full-text screening uses RAG over PDFs to ground suggestions in retrieved passages. Human role: teams choose Assistant, Reviewer or AI Only + Human QC modes according to project governance.
    • Data extraction (class 4, Generative LLMs): AI pre-populates extraction forms using RAG from full PDF content, with each value linked to highlighted source text. Human role: reviewers verify, edit and approve extracted data.
    • Risk of bias and appraisal checklists (class 4, Generative LLMs): AI can pre-populate responses across built-in checklists, with supporting references shown alongside checklist items. Human role: all final responses are reviewed and approved by humans.
    • Report generation (class 4, Generative LLMs): AI generates structured review reports and supporting tables from project outputs. Human role: teams edit and approve the final report before use.
    • AI agent for targeted reviews (class 5, Agentic AI): Connected workflow for targeted reviews, from research question to draft report, with optional manual approval between stages or auto-progression. Human role: checkpoints, notifications and audit records are available throughout.
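    The rule-based de-duplication described above lends itself to a simple illustration. The fields, normalisation rules and matching logic below are invented for the example (EasySLR's actual 10 configurable fields are not listed on this page), but they show the general shape of matching without any ML model:

```python
# Hypothetical sketch of rule-based duplicate matching. The fields and
# rules are invented for illustration, not EasySLR's configuration.
import re

def normalise(value: str) -> str:
    """Lowercase, replace punctuation with spaces, collapse whitespace,
    so formatting noise doesn't block an exact match."""
    cleaned = re.sub(r"[^a-z0-9 ]", " ", value.lower())
    return re.sub(r"\s+", " ", cleaned).strip()

def is_duplicate(a: dict, b: dict) -> bool:
    """Match on DOI when both records have one; otherwise fall back
    to normalised title plus publication year."""
    if a.get("doi") and b.get("doi"):
        return normalise(a["doi"]) == normalise(b["doi"])
    return (normalise(a["title"]) == normalise(b["title"])
            and a.get("year") == b.get("year"))

# Same article exported from two databases with different formatting:
rec1 = {"title": "AI in Evidence Synthesis: A Review", "year": 2024, "doi": ""}
rec2 = {"title": "AI in evidence synthesis - a review.", "year": 2024, "doi": ""}
```

    Because every rule is explicit, reviewers can inspect exactly why two records were flagged as duplicates, which is harder with learned similarity models.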

    2. Evidence on performance

    Multiple organisations have carried out evaluations and reported their findings through conference abstracts, posters and manuscripts. A consolidated list is available on our Publications page.

    Published evaluations

    • Radotra et al. (ISPOR 2025, Montreal): Replicated five published SLRs; AI recall ranged from 73% to 100%
    • Rathi et al. (ISPOR 2024, Atlanta): LLM-assisted full-text screening matched human decisions, with oversight for complex eligibility
    • Rathi et al. (Global Evidence Summit 2024): High recall with human-in-the-loop configuration
    • Povsic M & Armitage E (World EPA Congress 2025): Independent evaluation; ~40% time reduction in a complex breast cancer review

    What this means

    AI-assisted workflows can deliver high recall and meaningful time savings, though performance varies by topic and human oversight remains essential.

    We recommend:

    • Running a pilot project before large-scale adoption
    • Setting project-specific targets for recall and agreement
    • Using EasySLR's reviewer quality statistics and conflict reports to monitor AI performance
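    The project-specific targets recommended above boil down to two simple numbers from a pilot sample. A hedged sketch of computing them (the labels are invented, not real project data; EasySLR's built-in reviewer statistics compute such measures for you):

```python
# Pilot metrics from paired human and AI screening decisions.
# 1 = include, 0 = exclude. Labels below are invented for the example.

human = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]   # gold-standard human decisions
ai    = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]   # AI suggestions, same records

# Recall: share of human "include" decisions the AI also included.
# This is the key safety number: missed includes are lost evidence.
tp = sum(1 for h, a in zip(human, ai) if h == 1 and a == 1)
recall = tp / sum(human)

# Cohen's kappa: overall agreement, corrected for chance agreement.
n = len(human)
observed = sum(1 for h, a in zip(human, ai) if h == a) / n
p_include = (sum(human) / n) * (sum(ai) / n)
p_exclude = ((n - sum(human)) / n) * ((n - sum(ai)) / n)
expected = p_include + p_exclude
kappa = (observed - expected) / (1 - expected)
```

    On this toy sample, recall is 0.8 and kappa is 0.6; a real pilot would set thresholds in advance (for example, a minimum recall) before deciding which AI mode to enable.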

    Additional studies are in progress and will be added to our Publications page.

    3. Human oversight

    EasySLR is built on the principle that review teams remain accountable for their reviews. Here's how that works in practice:

    • Project owners configure protocols, AI modes and decision rules
    • Leads can see who made each decision, including AI decisions
    • AI outputs are visible, attributable and reviewable within the workflow
    • QC mode allows sampling and override of any decision
    • Credit limits allow governance teams to control AI usage
    • For higher-stakes reviews, we recommend dual human screening or AI Assistant mode until local pilots validate broader automation modes
    • We recommend documenting AI configuration (mode, stages, model identifiers) in the protocol or methods section
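    The sampling-and-override idea behind QC mode can be illustrated with a few lines. This is a hypothetical sketch, not EasySLR's mechanism; the 10% rate, fixed seed and record IDs are invented:

```python
# Hypothetical QC sampling step: draw a reproducible random sample of
# screening decisions for human re-review. Rate and IDs are invented.
import random

def qc_sample(decision_ids: list[int], rate: float = 0.10,
              seed: int = 42) -> list[int]:
    """Pick roughly `rate` of decisions for human review. A fixed seed
    makes the sample reproducible for the audit trail."""
    k = max(1, round(len(decision_ids) * rate))
    return sorted(random.Random(seed).sample(decision_ids, k))

# 200 AI screening decisions -> a 20-record sample for human review.
sample = qc_sample(list(range(1, 201)))
```

    Recording the seed and rate alongside the sample is what keeps the QC step itself auditable.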

    4. Data protection and privacy

    EasySLR is provided as a secure cloud service hosted on Amazon Web Services (AWS). We follow recognised security and privacy standards:

    • SOC 2 Type II: Compliant (audit reports available under Non-Disclosure Agreement)
    • ISO 27001:2022: Compliant (audit reports available under Non-Disclosure Agreement)
    • GDPR: Aligned

    Key data protection points

    • Ownership: You retain ownership of your uploaded content and results
    • No training on customer data: Customer data are not used to train EasySLR models
    • Zero data retention with our AI model provider: For customer content sent for AI processing, we use zero data retention
    • LLM provider controls: We use provider data controls and contractual terms intended to prevent submitted content being used for provider training
    • Role-based access: Access controls restrict access to authorised users, typically the project team; organisation administrators have configurable permissions
    • Regional deployment: Enterprise customers requiring EU or US data residency can request separate deployment regions
    • DPAs available: Data Processing Agreements provided to institutional customers

    Full details are in our Privacy Policy, Terms of Service and Trust Center. Security documentation and audit reports are available under Non-Disclosure Agreement for regulators and data protection teams.

    5. Limitations

    We're transparent about where EasySLR works best and where it has limitations.

    Language and domain coverage

    EasySLR is optimised for English-language systematic reviews in clinical, economic, humanistic and epidemiological domains. Performance may be weaker for:

    • Non-English texts
    • Very recent terminology
    • Highly specialised subdomains poorly represented in LLM training data
    • Some forms of grey literature

    Users should pilot EasySLR on a subset of records for each new protocol and topic. That said, researchers have used the platform successfully beyond typical domains, including road safety, implementation science and governance.

    Bias in the evidence base

    AI reflects the published literature and underlying model training data, which can under-represent some regions, languages and study types. Protocols and search strategies should still be designed with equity and inclusion in mind. EasySLR's explanations and reviewer statistics can help identify patterns in AI behaviour.

    Model updates

    Third-party models evolve over time. We validate new versions internally before deployment and run regression tests on benchmark projects.

    6. RAISE alignment

    RAISE provides guidance on the responsible use of AI in evidence synthesis. More recent task-based guidance adds task-specific recommendations by tool class. EasySLR is designed to align with the main themes across both.

    • Human accountability: Review teams and organisations remain responsible. AI modes are deliberately enabled, and QC workflows allow override of any decision.
    • Transparent purpose: We state which review tasks EasySLR supports, how human verification fits, and where limitations remain. AI rationales and linked source passages are surfaced where applicable.
    • Evaluation and monitoring: Published studies, internal regression testing, reviewer statistics and conflict reporting support local evaluation and ongoing monitoring.
    • Data protection: SOC 2 Type II and ISO 27001:2022 compliant. Customer data are not used to train EasySLR models.
    • Human oversight: Assistant, Reviewer, and AI Only + Human QC modes let teams choose the level of automation that fits their governance model.

    7. Resources

    • Research & Publications
    • Knowledge Base
    • Video Tutorials
    • Live chat available within the platform

    8. Enterprise review materials

    For procurement, legal, compliance, ethics committees or institutional review boards that need a fuller description, we can provide:

    • RAISE-aligned governance pack: Documentation describing EasySLR's AI use, human oversight model, governance controls and review workflow for enterprise assessment
    • Technical architecture overview
    • Security and compliance documentation (SOC 2 and ISO 27001 audit reports)
    • Supporting evaluation references on request

    Contact: support@easyslr.com

    We'll route procurement, legal and compliance requests to the right team. Some materials are shared under Non-Disclosure Agreement where appropriate.

    Last updated: April 2026


    © 2026 EasySLR All rights reserved.