AI-Powered Sherpa Feedback Processing Loop

1. Overview

When your Sherpa reviews planning specs, their feedback now gets processed by AI in under 20 seconds - critically evaluating suggestions, surgically applying validated improvements, and showing you before/after quality scores. You see exactly what changed, why it changed, and the measurable impact on spec quality before approving. This eliminates hours of back-and-forth iterations and gives you objective proof that changes improve your spec.

2. Step-by-Step Guide

  1. Submit your feedback - On the Sherpa Planning Review page, enter your feedback in the text field (minimum 10 characters). Click Save. The system creates an iteration record and shows an estimated completion time of 15-20 seconds.
  2. Wait for AI processing - The system runs three phases automatically: evaluating your feedback for quality impact, applying only validated suggestions surgically without rewriting the entire spec, and re-scoring the enhanced spec across 5 dimensions (story completeness, requirement testability, success criteria measurability, entity coverage, scope clarity).
  3. Review the results - Once processing completes, the FeedbackIterationView displays your iteration number, processing time, cost, and tokens used. Scroll through four key sections: AI Evaluation (valid suggestions with green badges, invalid suggestions with orange badges), Quality Impact (before/after grades like C 72% to B 85% with dimension breakdowns), Changes Applied (expandable list showing before/after text with reasoning), and Spec Comparison (side-by-side markdown with syntax-highlighted diffs).
  4. Make your decision - If the quality improved and changes look good, click Approve Changes to lock in the enhanced spec. If you want to provide additional guidance, click Add More Feedback to create a new iteration. If quality degraded or changes missed the mark, click Revert Changes to restore the original spec.
  5. Track iteration history - Access the complete audit trail showing all iterations with their feedback text, status, score changes, and decisions. Each iteration record includes processing metrics, cost breakdown, and the full reasoning behind AI decisions.
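The iteration record tracked in step 5 can be sketched as a simple data structure. This is an illustrative sketch only; the field names and types are assumptions, not the system's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackIteration:
    """Illustrative iteration record; field names are assumptions."""
    iteration_number: int
    feedback_text: str
    status: str = "pending"             # pending -> processing -> completed / failed
    score_before: Optional[int] = None  # e.g. 72 (grade C)
    score_after: Optional[int] = None   # e.g. 85 (grade B)
    processing_seconds: float = 0.0
    cost_usd: float = 0.0
    tokens_used: int = 0
    decision: Optional[str] = None      # approved / more_feedback / reverted

    @property
    def score_delta(self) -> Optional[int]:
        """Score change for this iteration, once both scores are known."""
        if self.score_before is None or self.score_after is None:
            return None
        return self.score_after - self.score_before
```

Keeping before/after scores on the record is what lets the history view show quality progression per iteration.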

3. Common Questions

Q: What happens if I submit feedback that’s completely irrelevant to spec quality?
A: The AI will still process it, but the evaluation may return 0 valid suggestions. No changes get applied, and your score remains unchanged. You’ll see this clearly in the AI Evaluation section with reasoning explaining why suggestions weren’t actionable.
Q: Can I submit new feedback while an iteration is processing?
A: No. The system blocks concurrent processing for the same review. If you try to submit while another iteration is running, you’ll see an error: “Processing already in progress.” Wait for the current iteration to complete, then submit your next round of feedback.
Q: What if the enhanced spec has a worse quality score than the original?
A: The system detects quality degradation when the score drops by more than 5 points. You’ll see a warning toast and the score delta displayed in red. The system recommends reverting to the original spec, and you can do so with one click.
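The degradation rule above (a drop of more than 5 points) is simple enough to express directly. A minimal sketch, assuming integer scores on the 0-100 scale shown in the UI:

```python
DEGRADATION_THRESHOLD = 5  # points, per the rule described above

def is_degraded(score_before: int, score_after: int) -> bool:
    """True when the enhanced spec's score dropped by more than 5 points."""
    return (score_before - score_after) > DEGRADATION_THRESHOLD
```

Note the strict inequality: a drop of exactly 5 points does not trigger the warning.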
Q: How much does AI processing cost?
A: Each iteration displays the exact cost calculated from Anthropic token usage: input tokens at $0.003 per 1,000 and output tokens at $0.015 per 1,000. You’ll see the total cost rounded to 4 decimal places in the iteration summary.
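The cost calculation follows directly from those rates. A minimal sketch (the function name is illustrative):

```python
INPUT_RATE = 0.003   # USD per 1,000 input tokens
OUTPUT_RATE = 0.015  # USD per 1,000 output tokens

def iteration_cost(input_tokens: int, output_tokens: int) -> float:
    """Total iteration cost, rounded to 4 decimal places as in the summary."""
    cost = (input_tokens / 1000) * INPUT_RATE + (output_tokens / 1000) * OUTPUT_RATE
    return round(cost, 4)
```

For example, an iteration using 2,000 input tokens and 1,000 output tokens costs $0.021.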
Q: Can I see what the AI rejected and why?
A: Yes. The AI Evaluation section shows both valid suggestions (green badges) and invalid suggestions (orange badges). Each invalid suggestion includes reasoning - for example, “Over-engineering for feature scope” or “Architecture mismatch” - so you understand why the AI declined to apply it.

4. Troubleshooting

Issue: My feedback submission fails with a validation error
Solution: Check that your feedback text is at least 10 characters and under 10,000 characters. The system requires this minimum to ensure meaningful input for AI evaluation.
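The validation bounds above are easy to check before submitting. A minimal sketch, assuming a plain character count (the function name is illustrative):

```python
MIN_FEEDBACK_CHARS = 10
MAX_FEEDBACK_CHARS = 10_000

def feedback_is_valid(text: str) -> bool:
    """True when feedback length falls within the documented bounds."""
    return MIN_FEEDBACK_CHARS <= len(text) <= MAX_FEEDBACK_CHARS
```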
Issue: Processing fails with “Enhancement timeout” error
Solution: This happens when AI processing takes longer than 60 seconds in any phase. The system fails gracefully and stores the error in the iteration record. Try submitting more focused, specific feedback in smaller chunks rather than broad, sweeping suggestions.
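The per-phase 60-second limit can be sketched with a worker thread and a timed wait. This is an assumption about the mechanism, not the system's actual implementation; the phase functions themselves are not shown here.

```python
import concurrent.futures

PHASE_TIMEOUT_SECONDS = 60  # per-phase limit described above

def run_phase_with_timeout(phase_fn, *args):
    """Run one processing phase, failing gracefully past the timeout.

    Sketch only: raises the documented "Enhancement timeout" error so the
    caller can store it on the iteration record.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(phase_fn, *args)
        try:
            return future.result(timeout=PHASE_TIMEOUT_SECONDS)
        except concurrent.futures.TimeoutError:
            raise RuntimeError("Enhancement timeout") from None
```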
Issue: I accidentally approved changes but want to undo it
Solution: Once approved, you cannot revert that specific iteration. However, you can submit new feedback asking the AI to restore specific aspects of the original spec. Each new iteration starts from the current approved version.
Issue: The side-by-side diff is hard to read
Solution: The Spec Comparison section uses syntax highlighting with green for additions, red for deletions, and white for unchanged content. If sections are long, use the expandable Changes Applied list instead - it breaks down modifications section by section with clear before/after snippets.
5. Related Features

Planning Artifact Management - After approving feedback iterations, access your enhanced planning specs through the Planning Artifact pages. Download finalized versions, view complete review history, and share specs with stakeholders using secure tokens.

Sherpa Review Results - Track the full lifecycle of Sherpa reviews on the Sherpa Review Results page. See all iterations, approval decisions, and quality progression over time. This connects your feedback loop to the broader review workflow.

Quality Certification - Once your planning spec reaches high quality scores through iterative feedback, trigger quality certification to get official validation. The AI feedback loop builds the foundation for certification by systematically improving spec quality across all 5 dimensions.