# Project 12: Code Review Skill with Specialized Subagents
Build a code review skill that spawns specialized subagents for security, performance, style, and testing analysis.
## Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Advanced |
| Time Estimate | 1-2 weeks |
| Language | Markdown |
| Prerequisites | Projects 9-11 completed, code review experience |
| Key Topics | Subagent architecture, Task tool, parallel execution, result aggregation |
| Knowledge Area | Skills / Subagents / Code Quality |
| Main Book | "Code Complete" by Steve McConnell |
## 1. Learning Objectives
By completing this project, you will:
- Master subagent orchestration: Learn how to spawn and coordinate multiple specialized agents
- Use the Task tool effectively: Understand subagent types, prompts, and result handling
- Design specialized agents: Create focused agents that excel at specific analysis types
- Aggregate multi-agent results: Combine findings into coherent, prioritized reports
- Handle agent failures: Build fault tolerance into multi-agent systems
- Apply code review best practices: Cover security, performance, style, and testing
## 2. Theoretical Foundation

### 2.1 Multi-Agent Architecture
Instead of one agent trying to do everything, specialized agents each focus on one aspect:
```
SINGLE AGENT vs MULTI-AGENT

SINGLE AGENT (limited)            MULTI-AGENT (powerful)

┌──────────────────┐        ┌─────────────────────────────────┐
│ Generalist       │        │          ORCHESTRATOR           │
│ agent            │        │    coordinates, aggregates      │
│                  │        └───────────────┬─────────────────┘
│ Tries to do      │            ┌───────────┼───────────┐
│ security AND     │            ▼           ▼           ▼
│ performance AND  │       ┌─────────┐ ┌─────────┐ ┌─────────┐
│ style AND        │       │Security │ │  Perf.  │ │  Style  │
│ testing          │       │Reviewer │ │Reviewer │ │Reviewer │
│                  │       └─────────┘ └─────────┘ └─────────┘
│ Jack of all      │       Deep        Perf         Formatting
│ trades, master   │       security    patterns,    conventions
│ of none          │       knowledge   N+1 detection
└──────────────────┘

Benefits of multi-agent:
- Each agent can be deeply specialized
- Parallel execution (faster)
- Easier to update/improve individual aspects
- Better coverage of each domain
```
### 2.2 The Task Tool for Subagents
The Task tool spawns subagents with specific instructions:
```
TASK TOOL ANATOMY

Task tool call:
  prompt: "Review this code for security vulnerabilities.
           Focus on:
           - SQL injection
           - XSS vulnerabilities
           - Hardcoded secrets
           - Auth/authz issues

           Code to review:
           [code here]

           Return findings in this format:
           - severity (critical/warning/info)
           - location (file:line)
           - description
           - recommendation"

  model: "claude-haiku-4-..." (fast, for parallelism)

Subagent response:
  {
    "findings": [
      {
        "severity": "critical",
        "location": "query.ts:45",
        "description": "SQL injection via string concatenation",
        "recommendation": "Use parameterized queries"
      }
    ]
  }
```
### 2.3 Code Review Aspects
A comprehensive code review covers multiple dimensions:
| Aspect | Focus Areas | Examples |
|---|---|---|
| Security | Vulnerabilities, secrets, auth | SQL injection, XSS, hardcoded API keys |
| Performance | Efficiency, scaling, resources | N+1 queries, memory leaks, O(n^2) loops |
| Style | Formatting, naming, documentation | Inconsistent casing, missing JSDoc |
| Testing | Coverage, assertions, edge cases | Missing tests, weak assertions |
### 2.4 The OWASP Top 10
Security reviewers should know these common vulnerabilities:
The OWASP Top 10 (2021):

1. **Broken Access Control**: missing authorization checks, insecure direct object references
2. **Cryptographic Failures**: sensitive data not encrypted, weak algorithms
3. **Injection**: SQL, NoSQL, OS command injection; XSS (Cross-Site Scripting)
4. **Insecure Design**: missing security controls, business logic flaws
5. **Security Misconfiguration**: default credentials, unnecessary features enabled
6. **Vulnerable Components**: outdated dependencies, known CVEs
7. **Authentication Failures**: weak passwords allowed, missing brute-force protection
8. **Software Integrity Failures**: unsigned updates, compromised CI/CD
9. **Logging/Monitoring Failures**: missing audit logs, no alerting
10. **Server-Side Request Forgery (SSRF)**: unrestricted URL fetching
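Item 3 (Injection) is the category a SecurityReviewer flags most often. A minimal, runnable sketch of the difference between string concatenation and a parameterized query, using Python's built-in `sqlite3` (the table and input are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

user_input = "1 OR 1=1"  # attacker-controlled value

# VULNERABLE: string concatenation lets the input rewrite the query
vulnerable_rows = conn.execute(
    "SELECT * FROM users WHERE id = " + user_input
).fetchall()
print(len(vulnerable_rows))  # 2 -- "OR 1=1" matched every user

# SAFE: a parameterized query treats the input as data, never as SQL
safe_rows = conn.execute(
    "SELECT * FROM users WHERE id = ?", (user_input,)
).fetchall()
print(len(safe_rows))  # 0 -- no user has the literal id "1 OR 1=1"
```

The same pattern ("build SQL by concatenating user input") is exactly what the evidence field of a security finding should quote.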
### 2.5 Performance Anti-Patterns
Performance reviewers should catch:
| Pattern | Problem | Solution |
|---|---|---|
| N+1 Queries | Query per item in a loop | Use eager loading, JOIN |
| Missing Indexes | Slow queries on large tables | Add database indexes |
| Memory Leaks | Growing memory usage | Clean up event listeners, close connections |
| Synchronous I/O | Blocking the event loop | Use async/await |
| Excessive Re-renders | Slow UI updates | Memoization, useMemo |
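The N+1 row is worth internalizing, since it is among the most common findings. A small sketch using Python's built-in `sqlite3` and its `set_trace_callback` hook to count how many statements each approach actually issues (table names are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY);
    CREATE TABLE items (order_id INTEGER, sku TEXT);
    INSERT INTO orders VALUES (1), (2), (3);
    INSERT INTO items VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")

queries_run = []
conn.set_trace_callback(queries_run.append)  # record every SQL statement

# N+1 pattern: one query for the orders, then one more per order
orders = conn.execute("SELECT id FROM orders").fetchall()
for (order_id,) in orders:
    conn.execute("SELECT sku FROM items WHERE order_id = ?", (order_id,)).fetchall()
n_plus_one = len(queries_run)  # 4 queries for 3 orders

# Fix: fetch orders and items together with a single JOIN
queries_run.clear()
conn.execute(
    "SELECT o.id, i.sku FROM orders o JOIN items i ON i.order_id = o.id"
).fetchall()
joined = len(queries_run)  # 1 query, regardless of order count
```

The loop version scales linearly with the number of orders; the JOIN stays at one round-trip, which is what "use eager loading" amounts to in ORM terms.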
### 2.6 Result Aggregation
Combining multiple agent outputs requires:
- Normalization: Convert different formats to a common structure
- Deduplication: Remove overlapping findings
- Prioritization: Sort by severity and actionability
- Grouping: Organize by category or file
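Assuming each reviewer returns the JSON shape shown in section 2.2, the steps above can be sketched in a few lines (the reviewer data here is made up for illustration):

```python
# Hypothetical parsed reviewer outputs
security = {"reviewer": "security", "findings": [
    {"severity": "critical", "location": "query.ts:45", "issue": "SQL injection"},
]}
performance = {"reviewer": "performance", "findings": [
    {"severity": "warning", "location": "orders.ts:23", "issue": "N+1 query"},
    {"severity": "critical", "location": "query.ts:45", "issue": "SQL injection"},  # overlap
]}

SEVERITY_RANK = {"critical": 0, "warning": 1, "info": 2}

def aggregate(*reports):
    seen, merged = set(), []
    for report in reports:
        for finding in report["findings"]:
            finding = {**finding, "reviewer": report["reviewer"]}  # normalize: tag the source
            key = (finding["location"], finding["issue"])          # dedup key: location + issue
            if key in seen:
                continue
            seen.add(key)
            merged.append(finding)
    # prioritize: critical first, then warning, then info
    merged.sort(key=lambda f: SEVERITY_RANK[f["severity"]])
    return merged

report = aggregate(security, performance)  # 2 findings, critical first
```

Grouping by category then falls out of the `reviewer` tag added during normalization.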
## 3. Project Specification

### 3.1 What You Will Build
A code review skill that:
- Accepts a PR, file, or directory for review
- Spawns specialized subagents in parallel
- Each subagent focuses on one aspect (security, performance, style, testing)
- Aggregates all findings into a comprehensive report
- Prioritizes issues by severity
### 3.2 Functional Requirements
- File Identification: Determine what files to review:
- PR diff (via gh CLI)
- Specific files
- Entire directories
- Subagent Spawning: Create specialized reviewers:
- SecurityReviewer: OWASP checks, secrets detection
- PerformanceReviewer: N+1, complexity, memory
- StyleReviewer: Formatting, naming, documentation
- TestReviewer: Coverage, assertions, edge cases
- Parallel Execution: Run all reviewers simultaneously
- Result Aggregation: Combine and format findings:
- Group by category
- Sort by severity
- Deduplicate overlapping issues
- Report Generation: Produce actionable output:
- Summary with counts
- Detailed findings with locations
- Recommendations
### 3.3 Non-Functional Requirements
- Speed: Parallel execution for faster results
- Accuracy: Specialized agents for better detection
- Completeness: Cover all major code quality aspects
- Actionability: Clear recommendations for each finding
## 4. Real World Outcome

When you complete this project, here's exactly what you'll experience:
You: Review the PR for the payment module
Claude: [Invokes code-review skill]
I'll perform a comprehensive code review using specialized reviewers...
Identifying files to review...
- src/payment/processor.ts (modified)
- src/payment/gateway.ts (new file)
- tests/payment.test.ts (modified)
Spawning review agents...
[SecurityReviewer] Analyzing for vulnerabilities...
[PerformanceReviewer] Checking for efficiency issues...
[StyleReviewer] Reviewing code style...
[TestReviewer] Evaluating test coverage...
All agents complete. Aggregating results...
```
============================================================
 CODE REVIEW REPORT
 payment-module PR #142
============================================================

SECURITY (by SecurityReviewer)
├── [CRITICAL] SQL injection risk in query.ts:45
│   ├── Use parameterized queries instead of string concatenation
│   └── Fix: db.query('SELECT * FROM users WHERE id = ?', [userId])
│
├── [WARNING] Missing input validation in payment.ts:78
│   ├── Validate amount is positive before processing
│   └── Fix: Add validation: if (amount <= 0) throw new Error(...)
│
└── [OK] No hardcoded secrets detected

PERFORMANCE (by PerformanceReviewer)
├── [WARNING] N+1 query in fetchOrders():23
│   ├── 10 queries for 10 orders, should be 1
│   └── Fix: Use eager loading: include('items')
│
├── [OK] Database indexes are appropriate
│
└── [OK] No memory leaks detected

STYLE (by StyleReviewer)
├── [SUGGESTION] Inconsistent naming in utils.ts
│   ├── Mix of camelCase and snake_case
│   └── Fix: Use camelCase for all function names
│
├── [SUGGESTION] Missing JSDoc on 3 public functions
│   └── processPayment, validateCard, getReceipt
│
└── [OK] Formatting is consistent

TESTS (by TestReviewer)
├── [WARNING] Payment edge cases not covered
│   ├── Missing tests for: zero amount, negative, overflow
│   └── Add: test('rejects zero amount', ...)
│
├── [OK] Happy path is tested
│
└── [INFO] Coverage: 72% (target: 80%)

============================================================
SUMMARY
------------------------------------------------------------
Critical:    1
Warnings:    3
Suggestions: 2
Info:        1

RECOMMENDATION: Request changes before merge
Address critical SQL injection issue first
============================================================
```
Would you like me to explain any finding in more detail,
or help fix any of these issues?
## 5. The Core Question You're Answering

"How can I create a skill that orchestrates multiple specialized subagents to perform comprehensive, parallel analysis?"
This project teaches you:
- When and why to use multi-agent architectures
- How to spawn and coordinate subagents with the Task tool
- How to aggregate results from multiple sources
- Trade-offs between specialized and generalist agents
## 6. Concepts You Must Understand First

### 6.1 Task Tool for Subagents
| Concept | Questions to Answer | Reference |
|---|---|---|
| Spawning subagents | How do you create a subagent? | Claude Code Docs - Task tool |
| Subagent types | What types are available? | "general-purpose", etc. |
| Model selection | Which model for subagents? | Haiku for speed, Sonnet for depth |
| Getting results | How do you retrieve subagent output? | Task tool returns the response |
### 6.2 Code Review Aspects
| Concept | Questions to Answer | Reference |
|---|---|---|
| Security review | What vulnerabilities to check? | OWASP Top 10 |
| Performance review | What patterns indicate issues? | "Code Complete" Ch. 25 |
| Style review | What standards to enforce? | Team style guides |
| Test review | What makes tests effective? | "Growing Object-Oriented Software" |
### 6.3 Result Aggregation
| Concept | Questions to Answer | Reference |
|---|---|---|
| Normalization | How to unify different formats? | Common output schema |
| Deduplication | How to identify overlapping findings? | Location + type matching |
| Prioritization | How to rank findings? | Severity levels |
## 7. Questions to Guide Your Design

### 7.1 What Subagents Do You Need?
Define each specialized agent:
| Agent | Focus Area | Tools Needed |
|---|---|---|
| SecurityReviewer | OWASP, secrets, injection | Grep, Read |
| PerformanceReviewer | Complexity, queries, memory | Read, Bash (linters) |
| StyleReviewer | Formatting, naming, docs | Read, Bash (prettier) |
| TestReviewer | Coverage, assertions, cases | Read, Bash (test runner) |
### 7.2 How to Configure Subagents?
Each subagent needs:
- Specific prompt: Focused on their domain
- Output format: Structured for aggregation
- Model choice: Haiku for speed, Sonnet for complex analysis
### 7.3 How to Handle Failures?
Build fault tolerance:
- What if a subagent times out?
- What if one finds nothing (is that success or failure)?
- Should you continue if one fails?
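One reasonable answer to these questions, sketched outside Claude Code with Python's `concurrent.futures` (the reviewer functions are invented stand-ins for Task-tool calls): keep partial results, record each failure or timeout as a gap, and never let one crash abort the whole review.

```python
import concurrent.futures

# Hypothetical stand-ins for subagent calls; each returns a findings dict
def security_review():
    return {"reviewer": "security", "findings": []}

def style_review():
    raise RuntimeError("reviewer crashed")  # simulate one agent failing

def collect(reviewers, timeout=30):
    """Run every reviewer, keep partial results, and record gaps."""
    results, gaps = [], []
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(fn): name for name, fn in reviewers.items()}
        try:
            for future in concurrent.futures.as_completed(futures, timeout=timeout):
                name = futures[future]
                try:
                    results.append(future.result())
                except Exception as exc:
                    gaps.append((name, str(exc)))  # note the gap, keep going
        except concurrent.futures.TimeoutError:
            for future, name in futures.items():
                if not future.done():
                    gaps.append((name, "timed out"))
    return results, gaps

results, gaps = collect({"security": security_review, "style": style_review})
# results has the security findings; gaps records the style failure
```

A reviewer that finds nothing returns an empty findings array, so "no findings" is a success, not a failure; only exceptions and timeouts become gaps in the report.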
## 8. Thinking Exercise

### 8.1 Design the Orchestration Flow
```
CODE REVIEW ORCHESTRATOR

User: "Review this PR"
        │
        ▼
1. Parse input           Identify files to review:
   - PR number?          → gh pr diff
   - File path?          → read directly
   - Directory?          → list and read
        │
        ▼
2. Spawn subagents (parallel)
   ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
   │  Security   │ │ Performance │ │    Style    │ │    Test     │
   │  Reviewer   │ │  Reviewer   │ │  Reviewer   │ │  Reviewer   │
   │             │ │             │ │             │ │             │
   │ OWASP       │ │ N+1 queries │ │ Formatting  │ │ Coverage    │
   │ Secrets     │ │ Complexity  │ │ Naming      │ │ Assertions  │
   │ Injection   │ │ Memory      │ │ Docs        │ │ Edge cases  │
   └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘
        │
        ▼
3. Collect results       Wait for all agents to complete
                         Handle any failures gracefully
        │
        ▼
4. Aggregate             Normalize formats
                         Deduplicate findings
                         Prioritize by severity
        │
        ▼
5. Generate report       Create formatted report
                         Include summary + details
                         Provide recommendations
```
Questions:
- Should subagents run in parallel or sequence? (Parallel for speed)
- How do you prevent duplicate findings? (Check location + issue type)
- What's the format for subagent results? (JSON or structured markdown)
## 9. The Interview Questions They'll Ask

- "How would you design a multi-agent system for code review?"
  - Expected: Specialized agents for each aspect, orchestrator for coordination
  - Bonus: Discuss trade-offs of granularity (too many agents = overhead)
- "What are the trade-offs of specialized vs generalist agents?"
  - Expected: Specialized = deeper analysis, generalist = simpler architecture
  - Bonus: When to use each approach
- "How do you handle agent coordination and result aggregation?"
  - Expected: Common output format, deduplication, prioritization
  - Bonus: Error handling, timeout management
- "What's the right granularity for agent specialization?"
  - Expected: Balance depth with overhead; 4-6 agents is manageable
  - Bonus: Discuss when to combine or split agents
- "How do you ensure consistency across multiple agent outputs?"
  - Expected: Structured output format, clear instructions
  - Bonus: Post-processing for normalization
## 10. Solution Architecture

### 10.1 Skill Component Diagram
```
CODE-REVIEW SKILL ARCHITECTURE

SKILL.md
  Orchestrator: coordinates subagents, aggregates results
        │
   ┌────┴──────────────┬────────────────────┐
   ▼                   ▼                    ▼
agent_prompts/     REFERENCES.md        output_format/
  security.md        OWASP Top 10         finding.md
  performance.md     Perf patterns        report.md
  style.md           Style guides
  testing.md

TASK TOOL (used to spawn specialized subagents)
  SecurityReviewer    ─┬─▶ returns security findings
  PerformanceReviewer ─┤
  StyleReviewer       ─┤─▶ (all run in parallel)
  TestReviewer        ─┘
```
### 10.2 Subagent Communication Flow
```
SUBAGENT COMMUNICATION

ORCHESTRATOR
  Task(prompt: "Security review...",    model: "haiku")
  Task(prompt: "Performance review...", model: "haiku")
  Task(prompt: "Style review...",       model: "haiku")
  Task(prompt: "Test review...",        model: "haiku")

  (All four run in parallel - spawned in the same message)
        │
        ▼
  WAIT FOR ALL RESPONSES
        │
        ▼
  AGGREGATE RESULTS
        │
        ▼
  GENERATE REPORT
```
## 11. Implementation Guide

### 11.1 Phase 1: Create Skill Structure

```bash
# Create skill directory
mkdir -p ~/.claude/skills/code-review/{agent_prompts,output_format}

# Create files
touch ~/.claude/skills/code-review/SKILL.md
touch ~/.claude/skills/code-review/REFERENCES.md
touch ~/.claude/skills/code-review/agent_prompts/{security,performance,style,testing}.md
touch ~/.claude/skills/code-review/output_format/{finding,report}.md
```
### 11.2 Phase 2: Define Agent Prompts

**agent_prompts/security.md:**

````markdown
# Security Reviewer Prompt

You are a security-focused code reviewer. Analyze the provided code for:

## Focus Areas

1. **Injection Vulnerabilities**
   - SQL injection (string concatenation in queries)
   - NoSQL injection
   - Command injection
   - XSS (unsanitized user input in HTML)

2. **Authentication/Authorization**
   - Missing auth checks
   - Weak password policies
   - Insecure session handling

3. **Sensitive Data**
   - Hardcoded secrets, API keys, passwords
   - Credentials in logs
   - Unencrypted sensitive data

4. **Input Validation**
   - Missing validation
   - Insufficient validation
   - Type confusion

## Output Format

Return findings as JSON:

```json
{
  "reviewer": "security",
  "findings": [
    {
      "severity": "critical|warning|info",
      "location": "file:line",
      "issue": "Brief description",
      "evidence": "The problematic code",
      "recommendation": "How to fix it"
    }
  ]
}
```

If no issues found, return an empty findings array.
````

**agent_prompts/performance.md:**

````markdown
# Performance Reviewer Prompt

You are a performance-focused code reviewer. Analyze the provided code for:

## Focus Areas

1. **Database Issues**
   - N+1 queries (query per item in a loop)
   - Missing indexes (large table scans)
   - Unbounded queries (no LIMIT)

2. **Algorithmic Complexity**
   - O(n^2) or worse in critical paths
   - Repeated calculations
   - Inefficient data structures

3. **Memory Issues**
   - Memory leaks (unclosed resources)
   - Excessive allocations
   - Large object retention

4. **I/O Patterns**
   - Synchronous I/O blocking
   - Missing caching
   - Redundant network calls

## Output Format

Return findings as JSON:

```json
{
  "reviewer": "performance",
  "findings": [
    {
      "severity": "critical|warning|info",
      "location": "file:line",
      "issue": "Brief description",
      "evidence": "The problematic code",
      "recommendation": "How to fix it"
    }
  ]
}
```

If no issues found, return an empty findings array.
````
### 11.3 Phase 3: Write the SKILL.md

````markdown
---
name: code-review
description: Comprehensive code review using specialized reviewers for security, performance, style, and testing. Use when the user wants a thorough review of code, a PR, or files.
---

# Code Review Orchestrator

Perform comprehensive code reviews by spawning specialized subagents.

## Review Process

### 1. Identify Files to Review

Determine what to review based on user input:

- PR number → use `gh pr diff <number>` to get changed files
- File path → read the specified file
- Directory → list and read relevant source files

### 2. Prepare Code for Review

For each file:

1. Read the file contents
2. Note the file path and language
3. Prepare a code snippet for each reviewer

### 3. Spawn Specialized Reviewers

Use the Task tool to spawn four specialized agents in parallel:

#### SecurityReviewer
Prompt: Read agent_prompts/security.md and include the code to review.
Focus on: SQL injection, XSS, hardcoded secrets, auth issues.
Model: use a fast model for parallelism.

#### PerformanceReviewer
Prompt: Read agent_prompts/performance.md and include the code to review.
Focus on: N+1 queries, complexity, memory leaks.
Model: use a fast model for parallelism.

#### StyleReviewer
Prompt: Review code style - formatting, naming, documentation.
Focus on: consistent naming, missing docs, code organization.
Model: use a fast model for parallelism.

#### TestReviewer
Prompt: Review test quality - coverage, assertions, edge cases.
Focus on: missing tests, weak assertions, edge cases.
Model: use a fast model for parallelism.

**Important**: Spawn all four in the same message for parallel execution.

### 4. Aggregate Results

After all reviewers complete:

1. Parse each reviewer's JSON output
2. Combine all findings into one list
3. Deduplicate (same location + similar issue)
4. Sort by severity (critical > warning > info)

### 5. Generate Report

Format the report with:

- Summary section with counts
- Findings grouped by category (Security, Performance, Style, Testing)
- Each finding shows severity, location, description, recommendation
- Final recommendation (Approve, Request Changes)

## Severity Definitions

| Level | Meaning | Action |
|-------|---------|--------|
| Critical | Must fix before merge | Request changes |
| Warning | Should fix | Request changes or note |
| Info | Consider fixing | Note for later |

## Error Handling

If a reviewer fails:

- Log the failure
- Continue with other reviewers
- Note the gap in the report

## Report Format

Use this structure for the final report:

```
============================================================
CODE REVIEW REPORT
[context]
============================================================

[CATEGORY] (by [Reviewer])
├── [SEVERITY] [Issue title] in [location]
│   ├── [Description]
│   └── Fix: [Recommendation]
…

============================================================
SUMMARY
------------------------------------------------------------
Critical:    [count]
Warnings:    [count]
Suggestions: [count]

RECOMMENDATION: [Approve / Request Changes]
============================================================
```
````
### 11.4 Phase 4: Create Output Format Templates

**output_format/finding.md:**

```markdown
# Finding Format

Each finding should include:

├── [SEVERITY] [Short title] in [file:line]
│   ├── [Detailed description of the issue]
│   ├── Evidence: [The problematic code snippet]
│   └── Fix: [Specific recommendation]

Severity icons:
- Critical: [CRITICAL] or red indicator
- Warning: [WARNING] or yellow indicator
- Info: [INFO] or blue indicator
- OK: [OK] or green indicator
```
## 12. Hints in Layers

### Hint 1: Define Agent Prompts

Create a specific prompt for each reviewer type:

```
You are a security reviewer. Focus ONLY on:
1. SQL injection
2. XSS vulnerabilities
3. Hardcoded secrets
4. Missing auth checks

Code to review:
[paste code here]

Return findings as JSON with severity, location, issue, recommendation.
```

### Hint 2: Use the Task Tool

In your skill, tell Claude to use Task like this:

```
Use the Task tool to spawn a subagent with:
- A focused prompt for security review
- Include the code to review in the prompt
- Request JSON output format

Spawn all agents in one message for parallelism.
```

### Hint 3: Parallel Execution

To run agents in parallel, spawn all in the same tool call block:

```
[First Task call for SecurityReviewer]
[Second Task call for PerformanceReviewer]
[Third Task call for StyleReviewer]
[Fourth Task call for TestReviewer]
```

All four run simultaneously.

### Hint 4: Result Format

Ask each agent to return JSON:

```json
{
  "reviewer": "security",
  "findings": [
    {
      "severity": "critical",
      "location": "file.ts:42",
      "issue": "SQL injection",
      "recommendation": "Use parameterized queries"
    }
  ]
}
```

This makes aggregation straightforward.
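Agents do not always return clean JSON; sometimes the object arrives wrapped in prose or a code fence. A defensive parsing sketch in Python (a best-effort heuristic, not the only approach):

```python
import json
import re

def parse_agent_output(text):
    """Best-effort extraction of the findings JSON from an agent reply."""
    try:
        return json.loads(text)  # happy path: the reply is pure JSON
    except json.JSONDecodeError:
        # fall back to grabbing the outermost {...} block from the reply
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match:
            try:
                return json.loads(match.group(0))
            except json.JSONDecodeError:
                pass
    # degrade gracefully so aggregation never crashes on one bad reply
    return {"reviewer": "unknown", "findings": []}

clean = parse_agent_output('{"reviewer": "security", "findings": []}')
wrapped = parse_agent_output('Here are my findings:\n{"reviewer": "style", "findings": []}')
```

Returning an empty-findings placeholder on failure keeps the aggregation step simple: every reviewer contributes an object of the same shape, valid output or not.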
## 13. Common Pitfalls & Debugging

### 13.1 Frequent Mistakes
| Pitfall | Symptom | Solution |
|---|---|---|
| Sequential spawning | Slow reviews | Spawn all agents in one message |
| Inconsistent output | Hard to aggregate | Define strict JSON format |
| Missing context | Agents lack info | Include file path + full code |
| No error handling | Crashes on agent failure | Check for and handle failures |
| Duplicate findings | Redundant report | Deduplicate by location + type |
### 13.2 Debugging Steps
- Test one agent first: Verify SecurityReviewer works before adding others
- Check JSON parsing: Ensure agent output is valid JSON
- Log agent prompts: Verify each agent gets appropriate context
- Time execution: Confirm parallel execution is actually parallel
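For the last step, a quick way to convince yourself parallelism is real is to simulate four reviewers with sleeps and compare wall time (the 0.2 s delay is an arbitrary stand-in for an agent's round-trip):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_reviewer(name, seconds=0.2):
    time.sleep(seconds)  # stand-in for a subagent's round-trip
    return name

names = ["security", "performance", "style", "testing"]

start = time.perf_counter()
for n in names:
    fake_reviewer(n)
sequential = time.perf_counter() - start  # ~0.8 s: delays add up

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(fake_reviewer, names))
parallel = time.perf_counter() - start  # ~0.2 s: bounded by the slowest agent
```

If your measured times look sequential (roughly the sum of the per-agent times rather than the max), the agents were spawned one message at a time.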
## 14. Extensions & Challenges

### 14.1 Beginner Extensions
- Add severity icons/colors to the report
- Include line numbers with code snippets
- Add a "quick fixes" section with copy-paste solutions
### 14.2 Intermediate Extensions
- Integrate with GitHub PR comments
- Add architecture reviewer (design patterns, coupling)
- Support for multiple programming languages
### 14.3 Advanced Extensions
- Learning from feedback (improve based on accepted/rejected findings)
- Custom rules configuration
- Integration with static analysis tools (ESLint, Pylint)
## 15. Books That Will Help

| Topic | Book/Resource | Chapter/Section |
|---|---|---|
| Code review | "Code Complete" by McConnell | Chapter 21: Collaborative Construction |
| Security | OWASP Testing Guide | All sections |
| Multi-agent | "Multi-Agent Systems" by Wooldridge | Chapters 1-3 |
| Performance | "High Performance Browser Networking" | Patterns sections |
## 16. Self-Assessment Checklist

**Understanding**

- [ ] I can explain why multi-agent is better than single agent for reviews
- [ ] I understand how the Task tool spawns subagents
- [ ] I know how to aggregate results from multiple sources
- [ ] I can describe the OWASP Top 10

**Implementation**

- [ ] All four reviewers spawn and return results
- [ ] Results are aggregated into a unified report
- [ ] Report is organized by category with severity levels
- [ ] Error handling works for agent failures

**Growth**

- [ ] I can add new reviewer types
- [ ] I understand trade-offs in agent granularity
- [ ] I can debug multi-agent coordination issues
## 17. Learning Milestones
| Milestone | Indicator |
|---|---|
| Subagents spawn correctly | You understand Task tool |
| Parallel execution works | All agents run simultaneously |
| Results are aggregated | You can coordinate multiple agents |
| Report is comprehensive | Youโve built a useful review system |
This guide was expanded from CLAUDE_CODE_MASTERY_40_PROJECTS.md. For the complete learning path, see the project index.