
Deep Learning Claude Code Capabilities

Master Claude Code’s advanced features through hands-on projects that produce real, verifiable outcomes.


Core Concept Analysis

To deeply understand Claude Code, you need to grasp four interconnected capability areas. Each represents a different facet of how to extend and automate Claude’s behavior:

1. Agent Skills (Progressive Loading & Domain Packaging)

  • Progressive disclosure: Metadata (~100 tokens) → Instructions (<5k) → Resources (unlimited)
  • SKILL.md structure: YAML frontmatter + markdown instructions
  • Code vs. context tradeoff: Scripts execute without entering context window
  • Trigger design: How descriptions match to user intent
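These four ideas can be anchored with a minimal SKILL.md sketch. The `name` and `description` frontmatter fields follow the documented format; the instruction body and the referenced helper files (CONVENTIONS.md, scripts/analyze_diff.py) are illustrative assumptions:

```markdown
---
name: commit-helper
description: Generates Conventional Commits messages. Use when the user asks to commit, stage changes, or write a commit message.
---

1. Read CONVENTIONS.md for team-specific rules (loaded only when needed).
2. Run scripts/analyze_diff.py on the staged changes; the script executes
   without its source entering the context window.
3. Draft a message in type(scope): summary form and show it for approval.
```

The description is what matters for trigger design: only the frontmatter is loaded up front, so it must name the user intents that should activate the skill.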

2. Subagents (Task Delegation & Context Isolation)

  • Context windows: Each subagent operates in isolation
  • Tool restrictions: Different capabilities per agent type
  • Model selection: Choosing haiku/sonnet/opus for cost vs. capability
  • Handoff patterns: Coordinating multiple agents on a single task
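These properties show up directly in a subagent's configuration file. Here is a sketch of a read-only diagnostic agent, assuming the documented frontmatter fields (name, description, tools, model); the prompt body is illustrative:

```markdown
---
name: debugger
description: Diagnoses failing tests and runtime errors. Use proactively whenever an error appears.
tools: Read, Grep, Glob, Bash
model: haiku
---

You diagnose problems in an isolated context: locate the root cause and
report the file, line, and explanation. You never edit files; a separate
agent applies fixes.
```

Restricting `tools` enforces the agent's role, and `model: haiku` trades capability for speed on the cheap diagnostic pass.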

3. Output Styles (System Prompt Engineering)

  • Prompt modification: What gets replaced vs. appended
  • Behavior shaping: How instructions translate to Claude’s responses
  • Default retention: keep-coding-instructions tradeoffs
  • Domain adaptation: Transforming Claude’s interaction mode entirely

4. Headless Mode (Automation & CI/CD Integration)

  • Non-interactive execution: -p flag and output formats
  • Session management: --resume and --continue for multi-turn
  • JSON streaming: Parsing stream-json for real-time feedback
  • Tool permissions: --allowedTools and --disallowedTools in automated contexts
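A sketch of what these flags look like in practice; the flag names below match the documentation at time of writing, but confirm against `claude --help` on your installed version before scripting around them:

```shell
# Non-interactive run with restricted tool access
claude -p "Summarize the staged changes" \
  --output-format json \
  --allowedTools "Read,Grep,Glob"

# Multi-turn: resume a specific session by id, or continue the latest one
claude -p "Now elaborate on the risky parts" --resume "$SESSION_ID"
claude -p "And suggest a fix" --continue
```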

Project 1: Build a Git Commit Message Generator Skill

  • File: CLAUDE_CODE_CAPABILITIES_LEARNING_PROJECTS.md
  • Programming Language: Python
  • Coolness Level: Level 2: Practical but Forgettable
  • Business Potential: 2. The “Micro-SaaS / Pro Tool”
  • Difficulty: Level 1: Beginner
  • Knowledge Area: AI Engineering / DevTools
  • Software or Tool: Claude Code / Git
  • Main Book: “Pragmatic Thinking and Learning” by Andy Hunt

What you’ll build: An Agent Skill that analyzes staged changes, understands your team’s commit conventions, and generates contextual commit messages that follow Conventional Commits format—visible in your terminal as formatted output.

Why it teaches Agent Skills: Forces you to understand progressive loading (metadata triggers → instructions load → scripts analyze diffs), the boundary between instructions and executable code, and how to package reusable capabilities.

Core challenges you’ll face:

  • Writing a SKILL.md description that triggers when user says “commit”, “stage”, or “write commit message” (maps to trigger design)
  • Deciding whether diff analysis belongs in instructions or a Python script (maps to code vs. context tradeoff)
  • Structuring multi-file skills with CONVENTIONS.md and templates (maps to progressive disclosure)

Difficulty: Beginner-Intermediate
Time estimate: Weekend
Prerequisites: Basic git knowledge, understanding of commit message conventions

Real world outcome: Type git add . then ask Claude “write a commit message” and watch it:

  1. Run your skill’s diff analysis script
  2. Output a formatted commit message like: feat(auth): add OAuth2 flow with refresh token support
  3. Optionally execute the commit with your approval

You’ll know it works when your git log shows consistently formatted commits that accurately describe changes.

Learning milestones:

  1. Create SKILL.md that Claude triggers on “commit” keyword → see your skill instructions load in Claude’s output
  2. Add analyze_diff.py script that extracts file changes without context cost → verify script runs via bash output
  3. Integrate CONVENTIONS.md with your team’s specific rules → generated messages match your standards

Project 2: Create a Debugger-Fixer Subagent Duo

  • File: CLAUDE_CODE_CAPABILITIES_LEARNING_PROJECTS.md
  • Programming Language: Markdown/YAML (Config) + Python (Scripts)
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 2. The “Micro-SaaS / Pro Tool”
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: AI Engineering / Agents
  • Software or Tool: Claude Code / Subagents
  • Main Book: “The Art of Debugging” by Norman Matloff

What you’ll build: Two coordinated subagents—debugger (diagnoses issues, adds logging, identifies root cause) and fixer (implements minimal fixes, verifies solutions)—that Claude chains automatically when you encounter errors.

Why it teaches Subagents: You’ll master subagent configuration files, tool restrictions (debugger gets read-only access, fixer gets Edit), model selection (haiku for fast diagnosis, sonnet for fixes), and context handoffs between specialized agents.

Core challenges you’ll face:

  • Defining clear boundaries: debugger finds the problem, fixer solves it (maps to responsibility boundaries)
  • Restricting debugger to Read, Grep, Glob, Bash(read-only) while fixer gets Edit (maps to tool restrictions)
  • Writing prompts that make Claude proactively invoke debugger on any error (maps to proactive invocation)
  • Passing diagnosis context from debugger output to fixer input (maps to handoff patterns)
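The fixer's side of that tool split might look like the sketch below; the frontmatter fields follow the documented subagent format, while the prompt body is an assumption:

```markdown
---
name: fixer
description: Implements minimal fixes from a debugger diagnosis. Use after the debugger reports a root cause.
tools: Read, Edit, Bash
model: sonnet
---

You receive a diagnosis (file, line, root cause). Make the smallest edit
that resolves it, then run the tests to confirm the fix. Do not refactor
beyond what the diagnosis requires.
```

Granting `Edit` here but not in the debugger's `tools` list is what makes the responsibility boundary enforceable rather than merely suggested.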

Difficulty: Intermediate
Time estimate: 1-2 weeks
Prerequisites: Understanding of debugging workflows, familiarity with error patterns in your stack

Real world outcome: Run your code, hit a TypeError, and watch Claude:

  1. Automatically invoke debugger subagent
  2. See debugger output: “Root cause: null check missing at src/api/users.ts:47”
  3. Watch Claude invoke fixer with the diagnosis
  4. See the fix applied and tests pass

You’ll know it works when errors get diagnosed and fixed without you having to explain the debugging process each time.

Learning milestones:

  1. Create single debugger.md subagent that diagnoses one error type → see diagnosis output in isolated context
  2. Add fixer.md with appropriate tool permissions → verify Edit tool is available to fixer but not debugger
  3. Achieve automatic chaining: “fix this error” → debugger → fixer → working code

Project 3: Build a Learning Tutor Output Style

  • File: CLAUDE_CODE_CAPABILITIES_LEARNING_PROJECTS.md
  • Programming Language: Markdown (Prompt Engineering)
  • Coolness Level: Level 2: Practical but Forgettable
  • Business Potential: 2. The “Micro-SaaS / Pro Tool”
  • Difficulty: Level 1: Beginner
  • Knowledge Area: AI Engineering / Prompting
  • Software or Tool: Claude Code / System Prompts
  • Main Book: “Pragmatic Thinking and Learning” by Andy Hunt

What you’ll build: A custom output style that transforms Claude Code into an interactive programming tutor—explaining concepts before implementing, quizzing you on key decisions, and adapting difficulty based on your responses.

Why it teaches Output Styles: Output styles directly replace portions of Claude’s system prompt, so you’ll see exactly how instructions shape Claude’s behavior. You’ll learn the difference between replacing defaults vs. keeping coding-instructions.

Core challenges you’ll face:

  • Writing comprehensive instructions that cover all interaction patterns without being verbose (maps to prompt engineering)
  • Deciding whether to keep coding instructions or replace them entirely (maps to keep-coding-instructions tradeoff)
  • Testing that explanatory mode doesn’t break actual code generation (maps to behavior verification)
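A starting point for the style file itself; the `name` and `description` frontmatter fields follow the documented output-style format, and the body text is an assumed first draft to iterate on:

```markdown
---
name: tutor
description: Interactive programming tutor that explains concepts before implementing them.
---

Before writing any code, explain the underlying concept in two or three
sentences. After each explanation, ask one multiple-choice comprehension
question and wait for an answer before continuing. If the answer is wrong,
re-explain more simply before implementing. Keep generated code correct
and complete; the tutoring wraps around the code, it never replaces it.
```

The last line addresses the behavior-verification challenge above: without it, heavily explanatory styles tend to drift toward prose at the expense of working code.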

Difficulty: Beginner-Intermediate
Time estimate: Weekend
Prerequisites: Basic understanding of how system prompts work, interest in learning workflows

Real world outcome: Activate your output style with /output-style tutor, then ask “implement a binary search”:

  1. Claude explains binary search concept before coding
  2. Asks: “What’s the time complexity? a) O(n) b) O(log n) c) O(n²)”
  3. You answer, Claude confirms/corrects
  4. Claude implements with inline explanations
  5. Offers follow-up: “Want to try implementing it yourself first?”

You’ll know it works when Claude’s responses always include explanations before implementations and periodically check your understanding.

Learning milestones:

  1. Create basic tutor style that adds explanations before code → see “Let me explain…” prefix on all responses
  2. Add quiz generation that asks conceptual questions → receive questions after explanations
  3. Implement adaptive difficulty: easier explanations on wrong answers → track your progress across a session

Project 4: Build a PR Review Bot with Headless Mode

  • File: pr_review_bot_headless.md
  • Main Programming Language: TypeScript
  • Alternative Programming Languages: Python, Bash
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: Level 3: The “Service & Support” Model
  • Difficulty: Level 3: Advanced (The Engineer)
  • Knowledge Area: CI/CD, Code Review Automation
  • Software or Tool: GitHub Actions, Claude Code, jq
  • Main Book: “Continuous Delivery” by Jez Humble and David Farley

What you’ll build: A GitHub Actions workflow that uses Claude Code’s headless mode to automatically analyze pull requests, post review comments, and generate structured JSON reports—all visible in your PR’s conversation thread.

Why it teaches Headless Mode: Forces you to master non-interactive execution (-p), JSON output parsing, session management (--resume), tool permissions for CI, and error handling in automated contexts.

Core challenges you’ll face:

  • Parsing stream-json output reliably in bash (maps to JSON streaming)
  • Managing sessions for multi-turn reviews: initial analysis → follow-up questions (maps to session management)
  • Handling timeouts and API errors gracefully without breaking CI (maps to error handling)
  • Configuring --allowedTools for safe CI execution (maps to tool permissions)
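Reliable stream parsing is easier in Python than in raw bash. A sketch of collecting newline-delimited JSON events; the field names ("type", "result", "session_id") reflect the documented stream-json shape at time of writing, so verify them against your CLI version:

```python
import json

def parse_stream(lines):
    """Collect events from newline-delimited JSON output.

    Returns (all_events, final_result_event). Malformed lines are
    skipped rather than allowed to crash a CI job.
    """
    events, final = [], None
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # partial or garbled line; ignore it
        events.append(event)
        if event.get("type") == "result":
            final = event  # the terminal event carries the full answer
    return events, final

# Hypothetical sample output, shaped like three stream-json events
sample = [
    '{"type": "system", "subtype": "init", "session_id": "abc123"}',
    '{"type": "assistant", "message": {"content": "Looking at the diff..."}}',
    '{"type": "result", "result": "Review complete", "session_id": "abc123"}',
]
events, final = parse_stream(sample)
```

Capturing `session_id` from the final event is what makes the later `--resume` follow-up turn possible.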

Difficulty: Advanced
Time estimate: 1-2 weeks
Prerequisites: GitHub Actions experience, bash scripting, JSON handling with jq

Real world outcome: Push a branch, open a PR, and see:

  1. GitHub Action triggers automatically
  2. Comment appears: “🤖 Claude is reviewing your PR…”
  3. Structured review posts with sections: Security, Performance, and Code Quality
  4. Each section has specific line comments linking to code
  5. JSON artifact uploaded with full analysis for downstream tooling

You’ll know it works when every PR gets automated review comments within 3-5 minutes of opening.

Learning milestones:

  1. Basic headless call that outputs PR diff summary → see text output in Actions logs
  2. --output-format json parsing with jq → extract structured review sections
  3. Multi-turn with --resume: initial review → “elaborate on security concerns” → detailed follow-up
  4. Full pipeline: trigger on PR → review → post comments → upload JSON artifact
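The full pipeline can be sketched as a workflow file. The trigger and artifact steps use standard GitHub Actions syntax; how Claude Code is installed in the runner, and the exact review prompt, are assumptions to adapt:

```yaml
name: claude-pr-review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumes the claude CLI is installed on the runner and the API key
      # is stored as a repository secret.
      - name: Run headless review
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          claude -p "Review this PR diff for security, performance, and code quality" \
            --output-format json \
            --allowedTools "Read,Grep,Glob" > review.json
          jq -r '.result' review.json
      - uses: actions/upload-artifact@v4
        with:
          name: review-json
          path: review.json
```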

Project 5: Build a Documentation Generator Skill with MCP Integration

  • File: documentation_generator_mcp.md
  • Main Programming Language: TypeScript
  • Alternative Programming Languages: Python, JavaScript
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: Level 2: The “Micro-SaaS / Pro Tool”
  • Difficulty: Level 2: Intermediate (The Developer)
  • Knowledge Area: Documentation, MCP Protocol
  • Software or Tool: Claude Code, MCP Servers, Notion
  • Main Book: “Code Complete, 2nd Edition” by Steve McConnell

What you’ll build: An Agent Skill that analyzes your codebase, generates comprehensive documentation (README, API docs, architecture diagrams as Mermaid), and optionally publishes to your wiki—using MCP servers for external integrations.

Why it teaches Skills + MCP: Combines skill architecture with MCP server integration, showing how skills can leverage external tools (Notion MCP, GitHub MCP) while maintaining the progressive loading pattern.

Core challenges you’ll face:

  • Structuring instructions that work with or without MCP servers available (maps to graceful degradation)
  • Writing scripts that generate Mermaid diagrams from code structure (maps to code analysis)
  • Coordinating skill instructions with MCP tool calls (maps to tool orchestration)
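The diagram-generation script stays simple if codebase analysis is first reduced to dependency pairs. A sketch, where the (importer, imported) edge format is an assumed intermediate that your analyze_structure.py would produce:

```python
def to_mermaid(edges):
    """Render module-dependency pairs as Mermaid flowchart text.

    `edges` is a list of (importer, imported) module-name pairs, an
    assumed intermediate format produced by a separate analysis pass.
    """
    lines = ["graph TD"]
    for src, dst in edges:
        # Mermaid node ids cannot contain dots; labels keep the real names
        a, b = src.replace(".", "_"), dst.replace(".", "_")
        lines.append(f'    {a}["{src}"] --> {b}["{dst}"]')
    return "\n".join(lines)

if __name__ == "__main__":
    diagram = to_mermaid([("app.main", "app.db"), ("app.main", "app.auth")])
    print(diagram)
```

Keeping rendering separate from analysis means the same Mermaid text can go into docs/ARCHITECTURE.md or be pushed through an MCP server unchanged.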

Key Concepts:

  • MCP Integration: Model Context Protocol - Official MCP Docs
  • Documentation Patterns: “Code Complete, 2nd Edition” by Steve McConnell - Chapter 32 (Self-Documenting Code)
  • Architecture Documentation: “Fundamentals of Software Architecture” by Mark Richards - Chapter 24 (Documenting Architecture)

Difficulty: Intermediate-Advanced
Time estimate: 1-2 weeks
Prerequisites: Understanding of documentation practices, optional MCP server setup experience

Real world outcome: Invoke with “document this project”:

  1. Skill loads, analyzes codebase structure
  2. Generates README.md with project overview, setup, usage
  3. Creates docs/API.md with endpoint documentation
  4. Outputs Mermaid architecture diagram in docs/ARCHITECTURE.md
  5. (If MCP configured) Pushes to Notion/Confluence automatically

You’ll know it works when your undocumented project has a complete docs folder with accurate, generated documentation.

Learning milestones:

  1. Create doc-generator skill that produces README.md → see generated file match project structure
  2. Add analyze_structure.py script for architecture diagrams → Mermaid renders correctly in GitHub
  3. Integrate MCP server for wiki publishing → documentation appears in Notion/Confluence

Project Comparison

| Project | Difficulty | Time | Concepts Learned | Fun Factor |
|---|---|---|---|---|
| Git Commit Skill | Beginner-Int | Weekend | Progressive loading, script execution | ⭐⭐⭐ |
| Debugger-Fixer Duo | Intermediate | 1-2 weeks | Subagent coordination, tool restrictions | ⭐⭐⭐⭐ |
| Learning Tutor Style | Beginner-Int | Weekend | System prompt engineering | ⭐⭐⭐⭐⭐ |
| PR Review Bot | Advanced | 1-2 weeks | Headless mode, CI/CD integration | ⭐⭐⭐⭐ |
| Doc Generator + MCP | Int-Advanced | 1-2 weeks | Skills + MCP orchestration | ⭐⭐⭐⭐ |

Recommendation

Start with Project 3 (Learning Tutor Style) if you want the fastest path to understanding how Claude Code can be customized. Output styles are the simplest to implement (single markdown file) but directly show you how system prompts shape behavior.

Start with Project 1 (Git Commit Skill) if you want immediate practical value. You’ll use this skill every day, and it teaches the core skill architecture you’ll need for more complex skills.

Start with Project 2 (Debugger-Fixer Duo) if you’re already comfortable with Claude Code basics and want to understand the power of specialized agents working together.

Whichever entry point you choose, a natural overall sequence is:

  1. Learning Tutor Style → Understand prompt engineering
  2. Git Commit Skill → Learn skill architecture
  3. Debugger-Fixer Duo → Master subagent coordination
  4. PR Review Bot → Apply everything to automation
  5. Doc Generator + MCP → Integrate with external systems

Final Capstone: Personal AI Development Orchestra

What you’ll build: A complete Claude Code customization suite that orchestrates your entire development workflow:

  • Custom output style for your preferred interaction mode (explanatory, terse, teaching)
  • 4 specialized subagents: architect (designs), implementer (codes), reviewer (critiques), documenter (explains)
  • 3 reusable skills: commit-generator, test-writer, refactoring-advisor
  • CI/CD pipeline that chains everything: PR opens → architect reviews design → reviewer checks code → documenter updates docs → comments posted

Why this teaches everything: You’ll combine all four capability areas into a cohesive system where output styles guide interaction, subagents handle specialized tasks, skills provide domain capabilities, and headless mode automates the entire flow.

Core challenges you’ll face:

  • Coordinating subagents that invoke skills (architect uses design-pattern skill)
  • Output styles that activate different subagents based on task type
  • CI/CD that orchestrates multi-agent workflows with proper handoffs
  • Managing context across the entire system without token explosion

Difficulty: Advanced
Time estimate: 1 month+
Prerequisites: Completed at least 3 of the above projects

Real world outcome: Type “build user authentication feature”:

  1. architect subagent activates, loads design-pattern skill, outputs architecture plan
  2. You approve, implementer codes the feature across files
  3. reviewer automatically critiques with security focus
  4. documenter generates API docs and updates README
  5. On PR, CI pipeline runs full review chain, posts structured comments
  6. Merge triggers release notes generation

You’ll know it works when a single feature request flows through your entire custom system and emerges as a documented, reviewed, tested PR—with minimal manual intervention.

Learning milestones:

  1. Each component works in isolation → verify subagents, skills, styles independently
  2. Components integrate → architect invokes skills, output style triggers appropriate agents
  3. CI/CD orchestrates full workflow → watch automated pipeline execute your custom agents
  4. Package for team sharing → export as .claude/ directory that teammates can adopt
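The shareable package from milestone 4 might be laid out like this; the agents/, skills/, and output-styles/ directories follow documented Claude Code conventions, while the specific file names come from the capstone component list above:

```
.claude/
├── output-styles/
│   └── tutor.md
├── agents/
│   ├── architect.md
│   ├── implementer.md
│   ├── reviewer.md
│   └── documenter.md
└── skills/
    ├── commit-generator/SKILL.md
    ├── test-writer/SKILL.md
    └── refactoring-advisor/SKILL.md
```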
