Project 22: Test Generator Hook (Auto-Test on Write)
Build a PostToolUse hook that automatically generates or updates tests when Kiro writes production code - maintaining test coverage without manual effort.
Learning Objectives
By completing this project, you will:
- Master Kiro’s hook system including PostToolUse triggers, matchers, and exit codes
- Understand test file mapping conventions and how to maintain test/source correspondence
- Implement recursive AI invocation by calling Kiro from within a hook
- Design test generation strategies that produce meaningful, runnable tests
- Build quality gates that validate generated tests before accepting them
Deep Theoretical Foundation
The Testing Paradox
Developers know tests are important, yet test coverage often lags behind production code. This creates a paradox:
The Testing Debt Cycle:
┌──────────────────────────────────────────────────────┐
│ │
▼ │
┌─────────────┐ │
│ Write Code │ │
└─────────────┘ │
│ │
▼ │
┌─────────────┐ "I'll add tests later" ┌─────────────┐ │
│ Ship Fast │─────────────────────────────►│ Test Debt │─┘
└─────────────┘ │ Accumulates │
└─────────────┘
│
▼
┌─────────────┐
│ Refactoring │
│ Becomes │
│ Terrifying │
└─────────────┘
The Solution: Automatic test generation as code is written - not later.
Kiro Hooks Architecture
Hooks are the extension points that let you customize Kiro’s behavior. Think of them as middleware for AI operations:
Hook Execution Flow:
User Request: "Create a user service"
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ Kiro Agent │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐│
│ │ preToolUse Hook (optional) ││
│ │ • Inspect planned operation ││
│ │ • Can BLOCK with exit code 2 ││
│ │ • Can MODIFY with JSON output ││
│ └─────────────────────────────────────────────────────────────┘│
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐│
│ │ Tool Execution ││
│ │ • write: Create/modify files ││
│ │ • bash: Run commands ││
│ │ • etc. ││
│ └─────────────────────────────────────────────────────────────┘│
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐│
│ │ postToolUse Hook ◄──── YOUR HOOK RUNS HERE ││
│ │ • Observe completed operation ││
│ │ • Trigger side effects (test generation!) ││
│ │ • Can provide feedback to agent ││
│ └─────────────────────────────────────────────────────────────┘│
│ │ │
│ ▼ │
│ Continue to next tool │
└─────────────────────────────────────────────────────────────────┘
Hook Input/Output Protocol
Hooks communicate with Kiro through stdin/stdout and exit codes:
Hook Input (JSON via stdin):
┌─────────────────────────────────────────────────────────────────┐
│ { │
│ "hook_event_name": "postToolUse", │
│ "cwd": "/path/to/project", │
│ "tool_name": "write", │
│ "tool_input": { │
│ "file_path": "src/services/user.ts", │
│ "content": "export class UserService { ... }" │
│ }, │
│ "tool_response": { │
│ "success": true, │
│ "result": "File written successfully" │
│ } │
│ } │
└─────────────────────────────────────────────────────────────────┘
Hook Output (exit codes):
┌────────────┬────────────────────────────────────────────────────┐
│ Exit Code │ Meaning │
├────────────┼────────────────────────────────────────────────────┤
│ 0 │ Success - continue normally │
│ 1 │ Error - but continue (logged as warning) │
│ 2 │ BLOCK - stop and send stdout to agent as feedback │
└────────────┴────────────────────────────────────────────────────┘
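The exit-code contract above can be sketched as a small decision function. Field names follow the stdin JSON shown earlier; the `secret` check is only an illustrative blocking policy, not part of the real protocol:

```typescript
// Decide a hook exit code from a postToolUse event. Field names follow
// the stdin JSON shown above; the "secret" check is an illustrative
// policy, not part of the real protocol.
function decideExitCode(event: { tool_input?: { file_path?: string } }): number {
  const filePath = event.tool_input?.file_path;
  if (!filePath) return 0;                    // nothing to inspect: continue
  if (filePath.includes('secret')) return 2;  // BLOCK: stdout becomes agent feedback
  return 0;                                   // success: continue normally
}

// In a real hook you would read the event from stdin and exit with it:
//   const event = JSON.parse(readFileSync(0, 'utf-8'));
//   process.exit(decideExitCode(event));
```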
Test File Mapping Conventions
Different projects use different conventions for mapping source files to test files:
Common Mapping Patterns:
Pattern 1: __tests__ directories (Jest default)
┌─────────────────────────────────────────┐
│ src/ │
│ ├── services/ │
│ │ ├── user.ts ─────────┐ │
│ │ └── __tests__/ │ │
│ │ └── user.test.ts ◄─────────┘ │
│ └── utils/ │
│ ├── format.ts ─────────┐ │
│ └── __tests__/ │ │
│ └── format.test.ts◄─────────┘ │
└─────────────────────────────────────────┘
Pattern 2: Parallel test directory
┌─────────────────────────────────────────┐
│ src/ │
│ └── services/ │
│ └── user.ts ─────────────┐│
│ ││
│ test/ ││
│ └── services/ ││
│ └── user.test.ts ◄─────────────┘│
└─────────────────────────────────────────┘
Pattern 3: Co-located tests
┌─────────────────────────────────────────┐
│ src/ │
│ └── services/ │
│ ├── user.ts ─────────┐ │
│ └── user.test.ts ◄─────────┘ │
└─────────────────────────────────────────┘
Test Generation Strategies
AI can generate tests in multiple ways, each with trade-offs:
Strategy Comparison:
1. EXAMPLE-BASED TESTING
┌─────────────────────────────────────────────────────────────┐
│ describe('add', () => { │
│ it('adds 2 + 2 to equal 4', () => { │
│ expect(add(2, 2)).toBe(4); │
│ }); │
│ }); │
└─────────────────────────────────────────────────────────────┘
Pros: Simple, readable, fast
Cons: Limited coverage, may miss edge cases
2. BOUNDARY-BASED TESTING
┌─────────────────────────────────────────────────────────────┐
│ describe('divide', () => { │
│ it('handles division by zero', () => { │
│ expect(() => divide(1, 0)).toThrow(); │
│ }); │
│ it('handles MAX_SAFE_INTEGER', () => { │
│ expect(divide(Number.MAX_SAFE_INTEGER, 1)).toBe(...); │
│ }); │
│ }); │
└─────────────────────────────────────────────────────────────┘
Pros: Catches edge cases, tests limits
Cons: Requires understanding of domain
3. BEHAVIOR-BASED TESTING
┌─────────────────────────────────────────────────────────────┐
│ describe('UserService', () => { │
│ it('creates user with valid data', async () => { │
│ const user = await service.create({ name: 'Alice' }); │
│ expect(user.id).toBeDefined(); │
│ }); │
│ it('rejects duplicate email', async () => { │
│ await service.create({ email: 'a@b.com' }); │
│ await expect(service.create({ email: 'a@b.com' })) │
│ .rejects.toThrow('Email already exists'); │
│ }); │
│ }); │
└─────────────────────────────────────────────────────────────┘
Pros: Tests real behavior, maintainable
Cons: Requires understanding of requirements
Recursive AI Invocation Pattern
Your hook can call Kiro itself to generate tests. This creates a powerful meta-pattern:
Recursive AI Invocation:
┌─────────────────────────────────────────────────────────────────┐
│ Kiro Agent (Main Session) │
│ │
│ User: "Create a UserService class" │
│ │ │
│ ▼ │
│ [Writes src/services/user.ts] │
│ │ │
│ ▼ │
│ postToolUse Hook Fires │
│ │ │
│ ▼ │
│ ┌────────────────────────────────────────────────────────┐ │
│ │ Test Generator Hook (subprocess) │ │
│ │ │ │
│ │ 1. Read the new file content │ │
│ │ 2. Spawn Kiro in non-interactive mode │ │
│ │ 3. Prompt: "Write tests for this code: ..." │ │
│ │ │ │
│ │ ┌────────────────────────────────────────────────┐ │ │
│ │ │ Kiro (Subprocess) │ │ │
│ │ │ • Analyzes code │ │ │
│ │ │ • Generates test file │ │ │
│ │ │ • Returns test content │ │ │
│ │ └────────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ 4. Validate generated tests │ │
│ │ 5. Write test file │ │
│ │ 6. Run tests to verify │ │
│ │ │ │
│ └────────────────────────────────────────────────────────┘ │
│ │
│ [Continues with main conversation] │
└─────────────────────────────────────────────────────────────────┘
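Steps 2–3 in the diagram can be sketched as follows. The `--print` flag comes from this project's own examples (verify it against your installed CLI); the runner is injectable so the function can be tested without a real subprocess:

```typescript
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

type Runner = (cmd: string, args: string[]) => Promise<{ stdout: string }>;

// Default runner: shell out with a hard timeout so a hung subprocess
// can never block the main session.
const defaultRun: Runner = (cmd, args) =>
  promisify(execFile)(cmd, args, { timeout: 30_000 });

// Build the prompt and invoke Kiro in single-response mode.
async function generateTests(sourceCode: string, run: Runner = defaultRun): Promise<string> {
  const prompt = [
    'Write Vitest unit tests for the following code.',
    'Output ONLY the test file content, no explanations.',
    '',
    sourceCode,
  ].join('\n');
  const { stdout } = await run('kiro-cli', ['--print', prompt]);
  return stdout.trim();
}
```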
The Testing Pyramid and AI Generation
Understanding where AI-generated tests fit in the testing pyramid:
Testing Pyramid with AI Generation:
┌─────────────┐
╱ E2E Tests ╲ AI can help
╱ (Few, Slow) ╲ but human
╱─────────────────╲ review needed
╱ ╲
╱ Integration ╲ AI good at
╱ Tests ╲ mocking setup
╱ (Some, Medium) ╲
╱───────────────────────────╲
╱ ╲
╱ Unit Tests ╲ AI EXCELS
╱ (Many, Fast) ╲ HERE
╱───────────────────────────────────╲
AI-Generated Test Distribution:
┌────────────────────────────────────────────────────────────────┐
│ Unit Tests (90%) Integration (9%) E2E (1%) │
│ ████████████████████████ ██ █ │
└────────────────────────────────────────────────────────────────┘
Real-World Analogy: The Automated Proofreader
Think of this hook like having a proofreader who automatically writes a quiz for every chapter you write:
- Author writes Chapter 5 (Developer writes UserService)
- Proofreader reads it (Hook detects file write)
- Creates comprehension questions (Generates test cases)
- Runs questions on chapter (Executes tests)
- Verifies answers are correct (Tests pass)
If the quiz fails, either:
- The chapter has errors (bugs in code)
- The quiz is wrong (bad test generation)
Historical Context
Test generation has evolved significantly:
Evolution of Test Generation:
1970s: Manual Testing Only
└─► Testers write all test cases by hand
1990s: Code Coverage Tools
└─► Tools show WHAT to test, humans write HOW
2000s: Test Frameworks (JUnit, pytest)
└─► Structured test organization, still manual
2010s: Property-Based Testing
└─► Tools generate inputs, humans define properties
2020s: AI-Powered Generation ◄─── YOU ARE HERE
└─► AI understands code, generates meaningful tests
Book References
For deeper understanding:
- “Test Driven Development” by Kent Beck - The foundational TDD text
- “Growing Object-Oriented Software, Guided by Tests” by Freeman & Pryce - Testing as design
- “Working Effectively with Legacy Code” by Michael Feathers - Testing untested code
- “xUnit Test Patterns” by Gerard Meszaros - Test organization and patterns
Complete Project Specification
What You Are Building
A PostToolUse hook that:
- Triggers on file writes to production code (src/*)
- Maps source to test files using project conventions
- Invokes Kiro to generate appropriate tests
- Validates generated tests by running them
- Reports results back to the main Kiro session
Functional Requirements
| Feature | Behavior |
|---|---|
| Detection | Trigger on any write to src/**/*.ts or src/**/*.js |
| Mapping | Find or create corresponding test file |
| Generation | Generate tests covering all exported functions |
| Validation | Run generated tests with Vitest/Jest |
| Reporting | Output summary of generated tests |
Non-Functional Requirements
- Latency: Complete test generation within 30 seconds
- Reliability: Handle failures gracefully, never block main workflow
- Quality: Generated tests should pass and provide real coverage
- Configurability: Support different test conventions and frameworks
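The reliability requirement can be enforced with a top-level guard: whatever goes wrong inside the hook, exit 0 so the main workflow is never blocked. A minimal sketch, in which `main` is a hypothetical stand-in for the hook's real logic:

```typescript
// Never block the main workflow: catch every failure, log it, and
// still return the "continue" exit code. `main` is a placeholder for
// the hook's real logic.
async function safeMain(main: () => Promise<void>): Promise<number> {
  try {
    await main();
    return 0;
  } catch (err) {
    // Log for debugging, but do not propagate a failure exit code.
    console.error(`test-generator hook failed: ${err}`);
    return 0;
  }
}
```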
Solution Architecture
High-Level Component Diagram
┌─────────────────────────────────────────────────────────────────────┐
│ Kiro CLI │
│ ┌─────────────────────────────────────────────────────────────────┐│
│ │ Main Agent Session ││
│ │ ││
│ │ User: "Create a UserService with CRUD operations" ││
│ │ │ ││
│ │ ▼ ││
│ │ [write tool: src/services/user.ts] ││
│ │ │ ││
│ └─────────┼────────────────────────────────────────────────────────┘│
└────────────┼────────────────────────────────────────────────────────┘
│
│ postToolUse event
▼
┌─────────────────────────────────────────────────────────────────────┐
│ Test Generator Hook │
│ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐ │
│ │ File Analyzer │ │ Test Mapper │ │ Kiro Invoker │ │
│ │ • Parse TS/JS │ │ • Find test │ │ • Spawn Kiro │ │
│ │ • Extract │ │ file path │ │ • Send prompt │ │
│ │ exports │ │ • Apply │ │ • Get tests │ │
│ │ │ │ conventions │ │ │ │
│ └───────────────┘ └───────────────┘ └───────────────┘ │
│ │ │ │ │
│ └──────────────────┼──────────────────┘ │
│ ▼ │
│ ┌─────────────────────┐ │
│ │ Test Validator │ │
│ │ • Write test file │ │
│ │ • Run vitest/jest │ │
│ │ • Parse results │ │
│ └─────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────┐ │
│ │ Result Reporter │ │
│ │ • Format output │ │
│ │ • Exit code │ │
│ └─────────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘
Data Flow: End-to-End
1. File Write Detected
┌─────────────────────────────────────────────────────────────────┐
│ stdin: { │
│ "tool_name": "write", │
│ "tool_input": { │
│ "file_path": "src/services/user.ts", │
│ "content": "export class UserService {\n create()..." │
│ } │
│ } │
└─────────────────────────────────────────────────────────────────┘
│
▼
2. Analyze Source File
┌─────────────────────────────────────────────────────────────────┐
│ Exports found: │
│ • class UserService │
│ - method: create(data: CreateUserDto): Promise<User> │
│ - method: findById(id: string): Promise<User | null> │
│ - method: update(id: string, data: UpdateDto): Promise<User> │
│ - method: delete(id: string): Promise<void> │
└─────────────────────────────────────────────────────────────────┘
│
▼
3. Map to Test File
┌─────────────────────────────────────────────────────────────────┐
│ Source: src/services/user.ts │
│ Test: src/services/__tests__/user.test.ts │
│ Status: Does not exist (CREATE) │
└─────────────────────────────────────────────────────────────────┘
│
▼
4. Generate Tests via Kiro
┌─────────────────────────────────────────────────────────────────┐
│ kiro-cli --print " │
│ Write comprehensive tests for this TypeScript class. │
│ Use Vitest. Mock external dependencies. │
│ │
│ Source code: │
│ \`\`\`typescript │
│ ${sourceCode} │
│ \`\`\` │
│ │
│ Output ONLY the test file content, no explanations. │
│ " │
└─────────────────────────────────────────────────────────────────┘
│
▼
5. Write and Validate Tests
┌─────────────────────────────────────────────────────────────────┐
│ $ vitest run src/services/__tests__/user.test.ts │
│ │
│ ✓ UserService │
│ ✓ create() should create user with valid data │
│ ✓ create() should throw on invalid email │
│ ✓ findById() should return user │
│ ✓ findById() should return null for missing │
│ ✓ update() should update existing user │
│ ✓ delete() should remove user │
│ │
│ 6 passed │
└─────────────────────────────────────────────────────────────────┘
│
▼
6. Report to Main Session
┌─────────────────────────────────────────────────────────────────┐
│ stdout: │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ TEST GENERATION COMPLETE │ │
│ ├─────────────────────────────────────────────────────────────┤ │
│ │ Source: src/services/user.ts │ │
│ │ Tests: src/services/__tests__/user.test.ts │ │
│ │ │ │
│ │ Generated: 6 test cases │ │
│ │ Passed: 6/6 │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
│ exit code: 0 │
└─────────────────────────────────────────────────────────────────┘
Key Interfaces
// Hook input structure (from Kiro)
interface HookInput {
hook_event_name: 'postToolUse';
cwd: string;
tool_name: string;
tool_input: {
file_path: string;
content: string;
};
tool_response: {
success: boolean;
result: string;
};
}
// Parsed export information
interface ExportInfo {
type: 'function' | 'class' | 'const';
name: string;
parameters?: Parameter[];
returnType?: string;
methods?: MethodInfo[];
}
// Test generation result
interface TestGenerationResult {
sourcePath: string;
testPath: string;
testContent: string;
testCount: number;
passed: number;
failed: number;
errors: string[];
}
// Configuration
interface TestGeneratorConfig {
testFramework: 'vitest' | 'jest';
testPattern: '__tests__' | 'parallel' | 'colocated';
testSuffix: '.test.ts' | '.spec.ts';
excludePaths: string[];
}
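For reference, a plausible default instance of `TestGeneratorConfig` (the interface is repeated so the snippet stands alone; the exclude globs are illustrative, not prescribed by Kiro):

```typescript
// TestGeneratorConfig as specified above; a reasonable default for a
// Vitest project using __tests__ directories.
interface TestGeneratorConfig {
  testFramework: 'vitest' | 'jest';
  testPattern: '__tests__' | 'parallel' | 'colocated';
  testSuffix: '.test.ts' | '.spec.ts';
  excludePaths: string[];
}

const defaultConfig: TestGeneratorConfig = {
  testFramework: 'vitest',
  testPattern: '__tests__',
  testSuffix: '.test.ts',
  // Illustrative globs: never regenerate tests for test files themselves.
  excludePaths: ['**/*.test.ts', '**/__tests__/**'],
};
```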
Technology Choices
| Component | Technology | Rationale |
|---|---|---|
| Hook Runtime | Bun | Fast startup, native TS |
| AST Parsing | TypeScript Compiler API | Accurate export detection |
| Test Framework | Vitest | Modern, fast, ESM-native |
| Kiro Invocation | Child process | Clean separation |
| Config | JSON in .kiro/ | Standard Kiro pattern |
Phased Implementation Guide
Phase 1: Hook Foundation (Days 1-2)
Goal: Create a hook that triggers on file writes and logs events.
Tasks:
- Create the hook script file
- Configure the hook in `.kiro/settings.json`
- Parse stdin JSON input
- Filter for relevant file writes (`src/**/*.ts`)
- Log detection to a file for debugging
Hints:
- Hooks receive JSON on stdin, not as arguments
- Use a matcher pattern to only trigger on the `write` tool
- Start with exit code 0 so you never block Kiro
Configuration (.kiro/settings.json):
{
"hooks": {
"postToolUse": [
{
"matcher": "write",
"command": "bun run /path/to/hooks/test-generator.ts"
}
]
}
}
Starter Code:
#!/usr/bin/env bun
import { readFileSync } from 'fs';
// Read all of stdin
const input = readFileSync(0, 'utf-8');
const event = JSON.parse(input);
// Only process TypeScript/JavaScript files in src/
const filePath = event.tool_input?.file_path;
if (!filePath?.match(/^src\/.*\.(ts|js)$/)) {
process.exit(0);
}
console.log(`Detected write to: ${filePath}`);
process.exit(0);
Phase 2: Test File Mapping (Days 3-4)
Goal: Determine the correct test file path for any source file.
Tasks:
- Detect project test convention (check for existing patterns)
- Implement path transformation logic
- Handle edge cases (index files, nested directories)
- Check if test file already exists
Hints:
- Look for existing `__tests__` directories or `*.test.ts` files
- The `package.json` may have jest/vitest config with `testMatch`
- Create missing directories as needed
Path Mapping Logic:
import path from 'node:path';

function mapSourceToTest(sourcePath: string, config: TestGeneratorConfig): string {
  // Example: src/services/user.ts
  const dir = path.dirname(sourcePath);                             // src/services
  const base = path.basename(sourcePath, path.extname(sourcePath)); // user
  const ext = path.extname(sourcePath);                             // .ts

  switch (config.testPattern) {
    case '__tests__':
      // src/services/__tests__/user.test.ts
      return path.join(dir, '__tests__', `${base}${config.testSuffix}`);
    case 'parallel':
      // test/services/user.test.ts
      return sourcePath
        .replace(/^src\//, 'test/')
        .replace(ext, config.testSuffix);
    case 'colocated':
      // src/services/user.test.ts
      return sourcePath.replace(ext, config.testSuffix);
  }
}
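The "detect project test convention" task can start from a simple heuristic. This sketch is pure over a list of paths (in the hook you would feed it the result of a directory walk) and defaults to co-located tests when there is no signal:

```typescript
// Infer the project's test convention from existing file paths.
// Pure over an array so it is easy to test; feed it a directory walk.
function detectTestPattern(paths: string[]): '__tests__' | 'parallel' | 'colocated' {
  if (paths.some((p) => p.includes('/__tests__/'))) return '__tests__';
  if (paths.some((p) => p.startsWith('test/'))) return 'parallel';
  return 'colocated'; // default when no existing tests give a signal
}
```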
Phase 3: AI Test Generation (Days 5-8)
Goal: Invoke Kiro to generate test content.
Tasks:
- Extract exports from source file (using regex or TS compiler)
- Build a prompt describing what tests to generate
- Invoke Kiro CLI in non-interactive mode
- Parse and clean the generated test code
- Handle generation failures gracefully
Hints:
- Use `kiro-cli --print "prompt"` for single-response mode
- Ask for specific test patterns (describe/it/expect)
- Request no markdown code fences in output
Prompt Template:
const prompt = `
Generate comprehensive unit tests for the following TypeScript code.
Requirements:
- Use Vitest (import { describe, it, expect } from 'vitest')
- Mock any external dependencies
- Test both success and error cases
- Include edge cases (null, undefined, empty arrays)
- Use descriptive test names
Source code to test:
\`\`\`typescript
${sourceCode}
\`\`\`
Output ONLY the complete test file. No explanations or markdown.
`;
Phase 4: Test Validation and Polish (Days 9-14)
Goal: Validate generated tests and provide quality feedback.
Tasks:
- Write generated tests to test file
- Run tests with Vitest/Jest
- Parse test runner output
- Handle test failures (regenerate or report)
- Format final output for Kiro session
Hints:
- Use `vitest run --reporter=json` for parseable output
- If tests fail, you might try regenerating once
- Consider a “dry run” mode that shows but doesn’t write tests
- Cache successful generations to avoid redundant work
Test Execution:
import { $ } from 'bun';

async function validateTests(testPath: string): Promise<TestResult> {
  // .nothrow(): vitest exits non-zero when tests fail, which would
  // otherwise make the shell call throw before we can parse the report.
  const result = await $`vitest run ${testPath} --reporter=json`.quiet().nothrow();
  const report = JSON.parse(result.stdout.toString());
  return {
    passed: report.numPassedTests,
    failed: report.numFailedTests,
    errors: report.testResults
      .flatMap((r: any) => r.assertionResults)
      .filter((a: any) => a.status === 'failed')
      .flatMap((a: any) => a.failureMessages),
  };
}
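The "regenerate once" hint can be wrapped around validation like this. `generate` and `validate` are injected stand-ins for the functions sketched in this phase; the failure messages are fed back into the second attempt:

```typescript
interface TestResult {
  passed: number;
  failed: number;
  errors: string[];
}

// Retry once on failure: if the first generation produces failing
// tests, regenerate with the failure messages as extra feedback.
async function generateWithRetry(
  generate: (feedback?: string) => Promise<string>,
  validate: (testContent: string) => Promise<TestResult>,
): Promise<{ content: string; result: TestResult }> {
  let content = await generate();
  let result = await validate(content);
  if (result.failed > 0) {
    content = await generate(result.errors.join('\n'));
    result = await validate(content);
  }
  return { content, result };
}
```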
Testing Strategy
Unit Tests for the Hook
import { describe, it, expect } from 'vitest';

describe('TestGenerator Hook', () => {
describe('shouldProcessFile', () => {
it('returns true for src TypeScript files', () => {
expect(shouldProcessFile('src/services/user.ts')).toBe(true);
});
it('returns false for test files', () => {
expect(shouldProcessFile('src/services/user.test.ts')).toBe(false);
});
it('returns false for non-src files', () => {
expect(shouldProcessFile('node_modules/pkg/index.ts')).toBe(false);
});
});
describe('mapSourceToTest', () => {
it('maps to __tests__ directory', () => {
const config = { testPattern: '__tests__', testSuffix: '.test.ts' };
expect(mapSourceToTest('src/services/user.ts', config))
.toBe('src/services/__tests__/user.test.ts');
});
});
});
Integration Tests
import { describe, it, expect } from 'vitest';
import { writeFile } from 'node:fs/promises';
import { $ } from 'bun';

describe('Test Generation Integration', () => {
it('generates valid tests for a simple function', async () => {
const source = `
export function add(a: number, b: number): number {
return a + b;
}
`;
const testContent = await generateTests(source);
// Write and run
await writeFile('test-output.test.ts', testContent);
const result = await $`vitest run test-output.test.ts`.quiet();
expect(result.exitCode).toBe(0);
});
});
End-to-End Validation
# 1. Start Kiro with the hook configured
kiro-cli chat
# 2. Ask Kiro to write some code
> "Create a simple StringUtils class with capitalize and reverse methods"
# 3. Observe hook output
# Should see test generation summary after file is written
# 4. Verify test file exists
ls src/**/__tests__/*.test.ts
# 5. Run tests manually
npm test
Common Pitfalls and Debugging
Pitfall 1: Hook Not Triggering
Symptom: No output when Kiro writes files
Debugging:
# Check hook is configured
cat .kiro/settings.json | jq '.hooks.postToolUse'
# Test hook manually
echo '{"hook_event_name":"postToolUse","tool_name":"write","tool_input":{"file_path":"src/test.ts","content":"test"}}' | bun run hooks/test-generator.ts
# Check for errors in hook script
bunx tsc --noEmit hooks/test-generator.ts
Pitfall 2: Kiro Subprocess Hangs
Symptom: Hook never completes, blocks main session
Cause: Kiro waiting for interactive input
Solution:
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

// --print runs Kiro non-interactively; the timeout option kills a hung
// subprocess instead of letting it block the session forever.
// Consider --no-mcp to avoid nested MCP issues.
const result = await promisify(execFile)('kiro-cli', ['--print', prompt], { timeout: 30_000 });
Pitfall 3: Generated Tests Have Syntax Errors
Symptom: Tests fail to parse/compile
Debugging:
# Save raw AI output for inspection
echo "$generatedCode" > /tmp/raw-test-output.ts
# Check for common issues
# - Markdown code fences in output
# - Missing imports
# - Wrong test framework syntax
Prevention:
// Clean markdown artifacts
function cleanGeneratedCode(code: string): string {
return code
.replace(/```typescript\n?/g, '')
.replace(/```\n?/g, '')
.trim();
}
Pitfall 4: Tests Pass But Don’t Test Anything
Symptom: 100% passing but no assertions
Cause: AI generated empty test bodies
Solution:
// Validate test quality
function hasRealAssertions(testCode: string): boolean {
const expectCount = (testCode.match(/expect\(/g) || []).length;
const testCount = (testCode.match(/it\(/g) || []).length;
// At least one expect per test
return expectCount >= testCount;
}
Extensions and Challenges
Extension 1: Incremental Test Updates
When code changes, update only affected tests instead of regenerating all:
interface TestDiff {
added: string[]; // New functions to test
modified: string[]; // Changed signatures
removed: string[]; // Deleted functions
}
function computeTestDiff(oldCode: string, newCode: string): TestDiff { ... }
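One way to flesh out the stub above: map each exported name to its declaration line with a regex and compare the two maps. This is only a sketch; a production version would use the TypeScript compiler API for accurate signature comparison:

```typescript
interface TestDiff {
  added: string[];    // new functions to test
  modified: string[]; // changed declarations
  removed: string[];  // deleted functions
}

// Map each exported name to its full declaration line.
function extractExports(code: string): Map<string, string> {
  const out = new Map<string, string>();
  const re = /^export\s+(?:async\s+)?(?:function|class|const)\s+(\w+).*$/gm;
  for (const m of code.matchAll(re)) out.set(m[1], m[0]);
  return out;
}

function computeTestDiff(oldCode: string, newCode: string): TestDiff {
  const before = extractExports(oldCode);
  const after = extractExports(newCode);
  return {
    added: [...after.keys()].filter((n) => !before.has(n)),
    removed: [...before.keys()].filter((n) => !after.has(n)),
    // "Modified" here means the declaration line changed for a name
    // present in both versions -- a crude proxy for a signature change.
    modified: [...after.keys()].filter(
      (n) => before.has(n) && before.get(n) !== after.get(n),
    ),
  };
}
```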
Extension 2: Test Coverage Targeting
Generate tests specifically for uncovered lines:
# Get coverage report
vitest run --coverage --reporter=json
# Parse uncovered lines
# Generate targeted tests for those lines
Extension 3: Mock Generation
Automatically generate mocks for dependencies:
// Detect imports
import { db } from '../database';
import { logger } from '../utils/logger';
// Generate mock file
// __mocks__/database.ts
Extension 4: Mutation Testing Integration
Use mutation testing to verify test quality:
# After generating tests
stryker run --mutate="src/services/user.ts"
# Report mutation score
Extension 5: Test Style Configuration
Support different testing styles per project:
{
"testGenerator": {
"style": "bdd", // or "tdd", "given-when-then"
"assertions": "chai", // or "expect", "assert"
"mocking": "vitest", // or "jest", "sinon"
"coverage": 80 // minimum target
}
}
Real-World Connections
Industry Adoption
AI-assisted test generation is being adopted by:
- GitHub Copilot: Suggests test completions in IDE
- Tabnine: Generates test snippets from comments
- Amazon CodeWhisperer: Creates test cases from function signatures
- Diffblue Cover: Enterprise Java test generation
Production Patterns
| Pattern | Use Case |
|---|---|
| Pre-commit | Generate tests before commit, block if tests fail |
| PR Validation | Generate tests for changed files in CI |
| Coverage Gate | Require minimum coverage for new code |
| Mutation Score | Validate test quality, not just coverage |
Limitations to Acknowledge
AI-generated tests are not perfect:
- May miss business logic edge cases - AI doesn’t know your domain
- Can create brittle tests - Testing implementation, not behavior
- Might mock incorrectly - Dependencies need human review
- Coverage != Quality - High coverage with weak assertions
Self-Assessment Checklist
Knowledge Verification
- Can you explain the difference between preToolUse and postToolUse hooks?
- What exit codes can a hook return and what do they mean?
- How does the hook receive information about the tool operation?
- What is the recursive AI invocation pattern?
- Why might you want to validate generated tests before accepting them?
Implementation Verification
- Your hook triggers when Kiro writes TypeScript files
- Test files are created in the correct location per project convention
- Generated tests import the correct testing framework
- Tests actually pass when run
- The hook completes within the timeout period
Quality Verification
- Generated tests have meaningful assertions (not just empty it() blocks)
- Edge cases are tested (null, undefined, empty, boundaries)
- Mocks are properly set up for external dependencies
- Test names describe the expected behavior
Integration Verification
- The hook works seamlessly during normal Kiro usage
- Failures are reported helpfully, not silently swallowed
- The hook can be disabled when not wanted
- Configuration is flexible enough for different projects
Summary
Building a test generator hook teaches you:
- Hook System Mastery: How to extend Kiro’s behavior at runtime
- Test Strategy: Different approaches to test generation and validation
- AI Orchestration: Using AI to call AI (recursive invocation)
- Quality Gates: Validating AI output before accepting it
The pattern you have learned here - triggering AI-powered automation on file system events - applies far beyond testing. You could use the same approach for documentation, linting, formatting, or any other post-write workflow.
Next Project: P23-documentation-generator.md - Automatic documentation generation on code changes