Project 5: The “Corrector” Agent with Retry Logic

Build a PydanticAI agent that detects its own output errors, applies fixes, and retries safely.

Quick Reference

Attribute      Value
Difficulty     Level 3: Advanced
Time Estimate  10-16 hours
Language       Python
Prerequisites  Validation workflows, retry logic
Key Topics     Self-correction, retries, validation loops

1. Learning Objectives

By completing this project, you will:

  1. Detect invalid outputs with schema validation.
  2. Generate fix instructions based on errors.
  3. Retry with corrected prompts.
  4. Limit retries with budgets.
  5. Measure improvement rate from corrections.

2. Theoretical Foundation

2.1 Self-Correction Loops

Validation-driven retries turn unreliable model outputs into schema-compliant data: each output is checked against a schema, any validation errors are fed back verbatim into a corrective prompt, and the model retries until the output validates or a retry budget is exhausted.
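A minimal sketch of such a loop, assuming hypothetical `generate`, `validate`, and `make_fix_prompt` callables (these names are illustrative stand-ins, not PydanticAI APIs):

```python
# Self-correction loop: validate, derive a fix prompt, retry within a budget.
def correct_with_retries(prompt, generate, validate, make_fix_prompt, max_retries=3):
    attempt_prompt = prompt
    errors = []
    for attempt in range(max_retries + 1):
        output = generate(attempt_prompt)
        errors = validate(output)            # empty list means the output is valid
        if not errors:
            return output, attempt           # success: output plus retries used
        # Fold the exact errors back into the next prompt.
        attempt_prompt = make_fix_prompt(prompt, output, errors)
    raise ValueError(f"still invalid after {max_retries} retries: {errors}")
```

Returning the retry count alongside the output makes the improvement rate (objective 5) measurable for free.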


3. Project Specification

3.1 What You Will Build

An agent that validates its output, identifies errors, and retries with targeted fixes.

3.2 Functional Requirements

  1. Validation step on each output.
  2. Fix prompt derived from error messages.
  3. Retry budget to prevent infinite loops.
  4. Metrics for correction success.
  5. Trace logs for each retry.
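For requirement 2, a fix prompt can be derived mechanically from the error messages. The wording and layout below are illustrative assumptions, not a prescribed template:

```python
# Build a targeted repair prompt from validation error messages.
def build_fix_prompt(original_prompt: str, bad_output: str, errors: list[str]) -> str:
    error_lines = "\n".join(f"- {e}" for e in errors)
    return (
        f"{original_prompt}\n\n"
        f"Your previous answer was invalid:\n{bad_output}\n\n"
        f"Validation errors:\n{error_lines}\n\n"
        f"Return corrected output that fixes every error above."
    )
```

Including the exact error text, rather than a generic "try again", is what makes the retry targeted (see the "Vague fixes" pitfall in section 7).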

3.3 Non-Functional Requirements

  • Deterministic mode for testing.
  • Clear error reporting per retry.
  • Safe fallback when retries fail.

4. Solution Architecture

4.1 Components

Component         Responsibility
Validator         Check each output against the schema
Fix Generator     Create repair instructions from validation errors
Retry Controller  Manage attempts within the retry budget
Metrics           Track correction success over time

5. Implementation Guide

5.1 Project Structure

LEARN_PYDANTIC_AI/P05-corrector-agent/
├── src/
│   ├── validate.py
│   ├── fix.py
│   ├── retry.py
│   └── metrics.py

5.2 Implementation Phases

Phase 1: Validation (3-4h)

  • Validate outputs against schema.
  • Checkpoint: errors are captured.
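One way to hit the Phase 1 checkpoint is to validate against a simple required-field/type schema and collect errors instead of raising. A real project would use Pydantic models and `ValidationError`; this stdlib-only version is a sketch that keeps the checkpoint testable in isolation:

```python
# Validate a dict against {field: expected_type} and capture errors as strings.
def validate_output(data: dict, schema: dict[str, type]) -> list[str]:
    errors = []
    for field, expected in schema.items():
        if field not in data:
            errors.append(f"{field}: missing required field")
        elif not isinstance(data[field], expected):
            errors.append(f"{field}: expected {expected.__name__}, "
                          f"got {type(data[field]).__name__}")
    return errors
```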

Phase 2: Fix + retry (4-6h)

  • Generate fix instructions.
  • Retry within budget.
  • Checkpoint: corrected output validates.
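The Phase 2 checkpoint can be wrapped in a retry controller that enforces the budget and records one trace entry per attempt (functional requirement 5). All names here are assumptions to be adapted to your own `retry.py`:

```python
# Retry controller: enforces the budget, records a trace, falls back safely.
class RetryController:
    def __init__(self, max_retries: int = 3):
        self.max_retries = max_retries
        self.trace: list[dict] = []          # one entry per attempt

    def run(self, generate, validate, fix, prompt):
        current = prompt
        for attempt in range(self.max_retries + 1):
            output = generate(current)
            errors = validate(output)
            self.trace.append({"attempt": attempt, "errors": list(errors)})
            if not errors:
                return output                # stop retrying once valid
            current = fix(prompt, output, errors)
        return None                          # safe fallback: caller decides
```

Returning `None` rather than raising gives the caller a hook for the "safe fallback when retries fail" non-functional requirement.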

Phase 3: Metrics (3-4h)

  • Track correction success rate.
  • Checkpoint: report shows improvement.
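The Phase 3 checkpoint only needs per-run records of whether the run succeeded and how many retries it used. The `(succeeded, retries_used)` record shape below is an assumption for illustration:

```python
# Aggregate correction metrics from (succeeded, retries_used) records.
def correction_metrics(runs: list[tuple[bool, int]]) -> dict:
    total = len(runs)
    first_try = sum(1 for ok, retries in runs if ok and retries == 0)
    corrected = sum(1 for ok, retries in runs if ok and retries > 0)
    failed = sum(1 for ok, _ in runs if not ok)
    return {
        "total": total,
        "first_try_rate": first_try / total if total else 0.0,
        "correction_rate": corrected / total if total else 0.0,  # fixed by retrying
        "failure_rate": failed / total if total else 0.0,
    }
```

A non-zero `correction_rate` is the "report shows improvement" signal: those runs would have failed without the retry loop.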

6. Testing Strategy

6.1 Test Categories

Category     Purpose                Examples
Unit         Validator logic        Missing field fails validation
Integration  End-to-end retry       Retry fixes the output on the second attempt
Regression   Budget enforcement     Loop stops after N retries

6.2 Critical Test Cases

  1. Invalid output triggers fix prompt.
  2. Retry budget stops after limit.
  3. Success rate improves with corrections.
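Test case 2 can be written against an always-failing generator, counting calls to prove the budget holds. `run_with_budget` is a hypothetical helper; adapt the test to your own retry module:

```python
# Hypothetical loop under test: retry while invalid, up to max_retries.
def run_with_budget(generate, validate, max_retries):
    attempts = 0
    output = generate()
    while validate(output) and attempts < max_retries:
        attempts += 1
        output = generate()
    return output, attempts

def test_budget_stops_after_limit():
    calls = {"n": 0}
    def always_bad():
        calls["n"] += 1
        return "bad"
    _, attempts = run_with_budget(always_bad, lambda o: ["invalid"], max_retries=3)
    assert attempts == 3       # stopped exactly at the budget
    assert calls["n"] == 4     # 1 initial attempt + 3 retries
```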

7. Common Pitfalls & Debugging

Pitfall          Symptom               Fix
Endless retries  Cost spikes           Enforce a hard retry budget
Vague fixes      Same error repeats    Include the exact error text in the fix prompt
Over-correction  Valid output altered  Stop retrying as soon as the output validates

8. Extensions & Challenges

Beginner

  • Add a manual approve step.
  • Add retry backoff.

Intermediate

  • Add confidence scoring for success.
  • Add per-error metrics.

Advanced

  • Add multiple correction strategies.
  • Add A/B testing of fix prompts.

9. Real-World Connections

  • Data pipelines rely on validation + retries.
  • Enterprise agents need safe correction loops.

10. Resources

  • PydanticAI docs
  • Validation and retry best practices

11. Self-Assessment Checklist

  • I can validate outputs and retry safely.
  • I can generate fix prompts from errors.
  • I can measure correction success.

12. Submission / Completion Criteria

Minimum Completion:

  • Validation + retry loop
  • Error logs

Full Completion:

  • Retry budget and metrics
  • Safe fallback mode

Excellence:

  • Multiple correction strategies
  • A/B tested prompts

This guide was generated from project_based_ideas/AI_AGENTS_LLM_RAG/LEARN_PYDANTIC_AI.md.