Works with: Claude Code, Codex CLI, OpenCode, Gemini CLI, pi-agent, and more.
Getting Started | Usage Guide | Handbook - Skills, Agents, Templates

ace-review runs focused, repeatable reviews with configurable presets and parallel model execution via ace-llm. Findings are captured as feedback items with a verify, apply, and resolve lifecycle so review outcomes stay actionable.
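The preset, target, and model inputs described above can be pictured as a small configuration record. The sketch below is illustrative only, assuming hypothetical names (`ReviewRequest` and its fields are not ace-review's actual API):

```python
from dataclasses import dataclass, field

# Hypothetical sketch: ReviewRequest and its field names are illustrative,
# not ace-review's real internal types.
@dataclass
class ReviewRequest:
    preset: str   # "code", "security", "docs", "pr", or a custom preset name
    target: str   # a diff, a file set, or a PR number
    models: list[str] = field(default_factory=lambda: ["default"])  # ace-llm providers

# Example: a security review of a diff, fanned out to two providers.
req = ReviewRequest(preset="security", target="working-tree-diff",
                    models=["provider-a", "provider-b"])
```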
How It Works
- Select a review preset (code, security, docs, PR, or custom) and target (diff, file set, or PR number) via ace-review.
- The review engine executes the prompt across one or more models through ace-llm, loading context from ace-bundle and diffs from ace-git.
- Findings are synthesized into feedback items with a tracked lifecycle (draft, verified, pending, resolved, skipped) and saved as session artifacts.
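The fan-out-and-synthesize flow above can be sketched as follows. This is a shape sketch under stated assumptions, not ace-review's engine: `run_model` stands in for a call through ace-llm, and the deduplication step is one plausible reading of "synthesized".

```python
from concurrent.futures import ThreadPoolExecutor

def run_model(model: str, prompt: str) -> list[str]:
    # Placeholder: in the real tool this call would go through ace-llm.
    return [f"{model}: finding about {prompt}"]

def review(prompt: str, models: list[str]) -> list[str]:
    # Fan the same prompt out to every model in parallel.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda m: run_model(m, prompt), models)
    # Synthesize: drop findings whose content overlaps an earlier model's,
    # keeping the first occurrence and a stable order.
    seen, merged = set(), []
    for findings in results:
        for f in findings:
            key = f.split(": ", 1)[-1]
            if key not in seen:
                seen.add(key)
                merged.append(f)
    return merged
```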
Use Cases
- Review pull requests with consistent quality gates: use /as-review-pr or ace-review --pr to run preset-driven reviews over PR diffs with optional GitHub comment publication.
- Run multi-model analysis in parallel: execute the same review prompt across multiple ace-llm providers, then synthesize overlapping and conflicting findings.
- Manage feedback as tracked work: use /as-review-verify-feedback and /as-review-apply-feedback to move findings through the draft, verified, pending, resolved, and skipped states, or use ace-review-feedback from the CLI to list, verify, apply, and resolve feedback items directly.
- Scope reviews to packages or tasks: use /as-review-package for package-level analysis, or connect reviews to ace-task workflows for task-scoped quality checks.
- Audit review history through session artifacts: keep saved review sessions under .ace-local/ for traceability, comparison, and handoff across contributors.
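The feedback lifecycle referenced above can be modeled as a small state machine. The transition table below is an assumption inferred from the state names alone; ace-review's actual transition rules may differ.

```python
# Assumed lifecycle: draft -> verified -> pending -> resolved, with
# skipped available as an exit from any non-terminal state.
TRANSITIONS = {
    "draft":    {"verified", "skipped"},
    "verified": {"pending", "skipped"},
    "pending":  {"resolved", "skipped"},
    "resolved": set(),   # terminal
    "skipped":  set(),   # terminal
}

def advance(state: str, new_state: str) -> str:
    """Move a feedback item to new_state, rejecting invalid transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"cannot move feedback from {state!r} to {new_state!r}")
    return new_state
```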
Testing
Run the package's deterministic checks with:
ace-test ace-review
ace-test ace-review feat
ace-test ace-review all
Run retained workflow scenarios with:
ace-test-e2e ace-review