EasySLR now aligns with the latest RAISE 3 Guidelines, enhancing how users select, evaluate, and use AI tools responsibly within systematic review workflows.
Overview
The RAISE 3 framework focuses on responsible AI adoption in evidence synthesis, ensuring that AI usage is transparent, validated, and methodologically sound.
With this update, EasySLR integrates RAISE 3 principles directly into the workflow, complementing its existing methodology framework outlined here: https://www.easyslr.com/methods.
This ensures that both methodological rigor and AI governance are aligned throughout the review lifecycle.
What’s New
1. AI Usage Transparency
- Visibility into AI usage across different stages (screening, extraction, reporting)
- Ability to review how AI contributes to decisions
- Supports documentation for audits and publications
2. AI Decision Validation
- Compare AI outputs with human decisions
- Built-in conflict analysis for human vs. AI decisions
- Helps assess the accuracy, reliability, and trustworthiness of AI outputs
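The comparison described above boils down to measuring how often AI and human screening decisions agree, beyond what chance alone would produce. The sketch below is illustrative only (the function name and sample decisions are assumptions, not EasySLR's actual implementation); it computes observed agreement, Cohen's kappa, and the list of conflicting records:

```python
from collections import Counter

def agreement_stats(human, ai):
    """Observed agreement, Cohen's kappa, and conflict indices for paired
    include/exclude screening decisions (hypothetical helper)."""
    n = len(human)
    # Observed agreement: fraction of records where human and AI agree.
    po = sum(h == a for h, a in zip(human, ai)) / n
    # Chance agreement, from each rater's label frequencies.
    h_counts, a_counts = Counter(human), Counter(ai)
    pe = sum(h_counts[lbl] * a_counts[lbl] for lbl in set(human) | set(ai)) / (n * n)
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
    # Conflicts: record positions that need human adjudication.
    conflicts = [i for i, (h, a) in enumerate(zip(human, ai)) if h != a]
    return po, kappa, conflicts

human = ["include", "exclude", "exclude", "include", "exclude"]
ai = ["include", "exclude", "include", "include", "exclude"]
po, kappa, conflicts = agreement_stats(human, ai)
# po = 0.8 (4 of 5 decisions match); conflicts = [2], the one record to adjudicate
```

Kappa is a common choice here because raw agreement can look high purely by chance when most records are excluded.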
3. Tool Selection & Evaluation Support
- Enables users to evaluate:
  - Whether AI is appropriate for their review
  - Whether outputs meet required quality standards
- Aligns with RAISE 3 guidance on fit-for-purpose AI usage
4. Structured AI Workflow Integration
- AI is embedded across:
  - Title–abstract screening
  - Full-text screening
  - Data extraction
  - Report generation
- Ensures AI usage remains consistent and traceable
5. Auditability & Reporting
- Track the AI models used at each stage
- Maintain logs for:
  - Decisions
  - Outputs
  - Workflow steps
- Supports regulatory, publication, and governance requirements
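An audit trail of this kind can be pictured as an append-only log where each entry captures the stage, the model, and the AI and human decisions for a record. The field names, stage label, and helper below are assumptions for illustration, not EasySLR's actual log schema:

```python
from datetime import datetime, timezone

def log_ai_event(log, stage, model, record_id, ai_decision, human_decision=None):
    """Append one audit entry recording which model acted at which stage,
    what it decided, and (once available) the confirming human decision.
    Hypothetical sketch, not EasySLR's real API."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "stage": stage,                    # e.g. "title_abstract_screening"
        "model": model,                    # identifier of the AI model used
        "record_id": record_id,
        "ai_decision": ai_decision,
        "human_decision": human_decision,  # None until a reviewer signs off
    }
    log.append(entry)
    return entry

audit_log = []
log_ai_event(audit_log, "title_abstract_screening", "screening-model-v1",
             "rec-001", "exclude")
```

Keeping the human decision as a separate field, initially empty, makes it easy to report both what the AI proposed and what a reviewer ultimately decided.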
6. Ethical & Responsible AI Use
- Encourages human oversight in all AI-assisted stages
- Ensures:
  - AI outputs are reviewed, not blindly accepted
  - Decisions remain human-controlled
Why This Matters
This update helps teams:
- Use AI tools responsibly and confidently
- Ensure methodological rigor and transparency
- Meet emerging global standards for AI in evidence synthesis
- Improve trust, reproducibility, and audit readiness