Project 10: Multi-File Skill - Documentation Generator
Build a documentation generator skill with multiple supporting files that load on-demand using progressive disclosure.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Intermediate |
| Time Estimate | 1 week |
| Language | Markdown + Python scripts |
| Prerequisites | Project 9 completed, understanding of documentation standards |
| Key Topics | Progressive disclosure, AST parsing, documentation patterns, skill architecture |
| Knowledge Area | Skills / Documentation / Progressive Disclosure |
| Main Book | "Docs for Developers" by Jared Bhatti et al. |
1. Learning Objectives
By completing this project, you will:
- Master progressive disclosure: Understand how Claude loads supporting files on-demand to save tokens
- Design multi-file skill architecture: Organize complex skills with templates, scripts, and references
- Implement AST parsing: Use Python's ast module to analyze code structure programmatically
- Create reusable templates: Build documentation templates with placeholders
- Understand documentation patterns: Learn what makes good auto-generated documentation
- Balance completeness with context limits: Design for efficiency without sacrificing capability
2. Theoretical Foundation
2.1 Progressive Disclosure in Claude Code Skills
Progressive disclosure is a design pattern where information is loaded only when needed. In Claude Code skills, this means supporting files are loaded on-demand rather than upfront.
PROGRESSIVE DISCLOSURE

UPFRONT LOADING (Bad)                 ON-DEMAND LOADING (Good)
---------------------                 -------------------------

Skill invoked                         Skill invoked
      |                                     |
      v                                     v
Load SKILL.md                         Load SKILL.md (small)
Load templates/python.md                    |
Load templates/jsdoc.md                     v
Load templates/readme.md              Claude: "Generating Python docs"
Load scripts/analyze.py                     |
Load REFERENCES.md                          v
      |                               Load templates/python.md (only this)
      v                                     |
Context: 10,000 tokens                      v
(mostly unused)                       Context: 2,000 tokens
                                      (exactly what's needed)
Why it matters:
- Token efficiency: Only load what you need
- Faster responses: Less context to process
- Scalability: Skills can have many resources without overhead
2.2 Skill Directory Structure
A complex skill can contain multiple types of files:
doc-generator/
├── SKILL.md                    # Main instructions (always loaded)
├── REFERENCES.md               # Style guide references (on-demand)
├── templates/
│   ├── python_docstring.md     # Google-style docstring template
│   ├── jsdoc_comment.md        # JSDoc template for JavaScript
│   ├── readme_section.md       # README section template
│   └── api_endpoint.md         # API documentation template
└── scripts/
    └── analyze.py              # AST analysis script
2.3 AST Parsing Fundamentals
Abstract Syntax Trees (AST) let you analyze code structure programmatically:
AST PARSING OVERVIEW

Source Code                        AST Representation
-----------                        ------------------

def greet(name):                   Module
    """Say hello."""               └── FunctionDef: greet
    return f"Hello, {name}"            ├── arguments
                                       │   └── arg: name
                                       ├── docstring: "Say hello."
                                       └── body
                                           └── Return
                                               └── JoinedStr
What you can extract:
- Function names and signatures
- Parameter types (from annotations)
- Return types
- Existing docstrings
- Class definitions and methods
- Module-level variables
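As a quick illustration, here is a minimal sketch of pulling that information out with the standard-library `ast` module (requires Python 3.9+ for `ast.unparse`; the sample source is invented for the example):

```python
import ast

source = '''
def greet(name: str) -> str:
    """Say hello."""
    return f"Hello, {name}"
'''

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        print("name:", node.name)                        # greet
        print("args:", [a.arg for a in node.args.args])  # ['name']
        print("annotations:", {a.arg: ast.unparse(a.annotation)
                               for a in node.args.args if a.annotation})     # {'name': 'str'}
        print("returns:", ast.unparse(node.returns) if node.returns else None)  # str
        print("docstring:", ast.get_docstring(node))     # Say hello.
```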
2.4 Documentation Standards
Good documentation follows consistent patterns:
Google-style Python Docstrings:
def fetch_data(url: str, timeout: int = 30) -> dict:
    """Fetch data from a remote API endpoint.

    Args:
        url: The URL to fetch data from.
        timeout: Request timeout in seconds. Defaults to 30.

    Returns:
        A dictionary containing the response data.

    Raises:
        ConnectionError: If the server cannot be reached.
        TimeoutError: If the request exceeds the timeout.

    Example:
        >>> data = fetch_data("https://api.example.com/users")
        >>> print(data["count"])
        42
    """
JSDoc for JavaScript:
/**
* Fetch data from a remote API endpoint.
*
* @param {string} url - The URL to fetch data from.
* @param {number} [timeout=30] - Request timeout in seconds.
* @returns {Promise<Object>} A promise resolving to the response data.
* @throws {Error} If the server cannot be reached.
*
* @example
* const data = await fetchData("https://api.example.com/users");
* console.log(data.count);
*/
3. Project Specification
3.1 What You Will Build
A documentation generator skill that:
- Analyzes code files using AST parsing
- Identifies functions, classes, and methods missing documentation
- Uses templates to generate consistent documentation
- Loads resources on-demand via progressive disclosure
3.2 Functional Requirements
- Code Analysis: When invoked, the skill should:
  - Parse the target file(s) using the analysis script
  - Report functions/classes found
  - Identify which items are missing docstrings
- Template-Based Generation: Documentation should be:
  - Generated from templates for consistency
  - Language-appropriate (Python vs JavaScript vs TypeScript)
  - Following established standards (Google, NumPy, JSDoc)
- Progressive Loading: Resources should load on-demand:
  - Templates loaded only when generating for that language
  - References loaded only when user asks about style
  - Analysis script executed only when needed
- Interactive Workflow:
  - Show analysis results first
  - Ask which items to document
  - Generate documentation iteratively
3.3 Non-Functional Requirements
- Efficiency: Keep context small through progressive loading
- Accuracy: AST parsing should handle edge cases
- Consistency: Generated docs should match templates exactly
- Extensibility: Easy to add new templates and languages
4. Real World Outcome
When you complete this project, here's exactly what you'll experience:
You: Generate documentation for the auth module
Claude: [Invokes doc-generator skill]
I'll analyze your auth module and generate comprehensive documentation.
Running analysis on src/auth/...
Analysis Results (via analyze.py):
- 5 public functions found
- 2 classes with 8 methods
- 3 functions missing docstrings
- 1 class missing class docstring
Missing Documentation:
1. login(username, password, remember) - no docstring
2. logout(token) - no docstring
3. refresh_token(token) - no docstring
4. AuthManager class - no class docstring
Would you like me to generate documentation for all missing items,
or would you prefer to select specific ones?
You: Start with the login function
Claude: [Reads templates/python_docstring.md]
Here's the generated documentation for `login()`:
def login(username: str, password: str, remember: bool = False) -> AuthToken:
    """
    Authenticate a user and return an access token.

    Args:
        username: The user's unique identifier (email or username).
        password: The user's password (will be hashed for comparison).
        remember: If True, token expires in 30 days; otherwise 24 hours.

    Returns:
        AuthToken: A JWT token containing user identity and permissions.

    Raises:
        InvalidCredentialsError: If username/password combination is invalid.
        UserLockedError: If account is locked due to failed login attempts.
        UserNotVerifiedError: If email verification is pending.

    Example:
        >>> token = login("user@example.com", "password123")
        >>> token.is_valid()
        True
    """
Shall I apply this docstring to the file, or would you like to modify it first?
5. The Core Question You're Answering
"How do I create complex skills with multiple components (scripts, templates, references) that load on-demand to save tokens?"
This project teaches you:
- How to structure multi-file skills
- When and how Claude loads supporting files
- How to trigger on-demand loading from your instructions
- How to augment Claude with external scripts
6. Concepts You Must Understand First
6.1 Skill Directory Structure
| Concept | Questions to Answer | Reference |
|---|---|---|
| File types in skills | What files can a skill contain? | Claude Code Docs - "Skills" |
| Supporting files | How does Claude know to load them? | Claude Code Docs - "Progressive Disclosure" |
| Naming conventions | What should files be named? | Follow patterns in this project |
6.2 Progressive Disclosure
| Concept | Questions to Answer | Reference |
|---|---|---|
| On-demand loading | When does Claude load supporting files? | Claude Code Docs |
| Triggering loads | How do you reference files in SKILL.md? | "Read the template from templates/…" |
| Benefits | Why not load everything upfront? | Token efficiency, speed |
6.3 AST Parsing
| Concept | Questions to Answer | Reference |
|---|---|---|
| Python ast module | How do you parse Python code? | Python ast documentation |
| Walking the tree | How do you find all functions? | ast.walk() or ast.NodeVisitor |
| Extracting info | What can you learn from AST nodes? | Function signatures, docstrings, types |
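If you prefer the `ast.NodeVisitor` approach mentioned in the table over `ast.walk()`, a minimal sketch looks like this (the target filename is just a placeholder):

```python
import ast

class FunctionCollector(ast.NodeVisitor):
    """Record every function definition and whether it already has a docstring."""

    def __init__(self):
        self.functions = []

    def visit_FunctionDef(self, node):
        self.functions.append((node.name, ast.get_docstring(node) is not None))
        self.generic_visit(node)  # keep descending into nested definitions

with open("example.py") as f:     # placeholder path
    tree = ast.parse(f.read())

collector = FunctionCollector()
collector.visit(tree)
print(collector.functions)        # e.g. [('login', False), ('__init__', True)]
```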
7. Questions to Guide Your Design
7.1 What Files Does the Skill Need?
Think through each file type:
| File | Purpose | When Loaded |
|---|---|---|
| SKILL.md | Main instructions | Always (on skill invocation) |
| templates/python_docstring.md | Python doc format | When documenting Python |
| templates/jsdoc_comment.md | JavaScript doc format | When documenting JavaScript |
| templates/readme_section.md | README format | When generating README content |
| scripts/analyze.py | Code analysis | When analyzing files |
| REFERENCES.md | Style standards | When user asks about style |
7.2 When Should Each File Load?
LOADING DECISION TREE

"Generate docs for auth.py"
            |
            v
     Load SKILL.md            <- always loaded first
            |
            v
     What language?
            |
      +-----+------+
      |            |
      v            v
   Python       JavaScript
      |            |
      v            v
   Load            Load
   templates/      templates/
   python_         jsdoc_
   docstring.md    comment.md
7.3 What Should the Analysis Script Output?
Design the script output format:
{
  "functions": [
    {
      "name": "login",
      "args": ["username", "password", "remember"],
      "has_docstring": false,
      "line": 42,
      "return_annotation": "AuthToken"
    }
  ],
  "classes": [
    {
      "name": "AuthManager",
      "methods": ["__init__", "authenticate", "logout"],
      "has_docstring": true,
      "line": 10
    }
  ]
}
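One advantage of emitting JSON (a question the exercise below asks you to weigh): whatever consumes the report, whether Claude or a test script, can filter it mechanically. A small sketch that loads the example output above and lists the undocumented items:

```python
import json

# The example analyze.py output shown above, verbatim.
analysis_json = """
{
  "functions": [
    {"name": "login", "args": ["username", "password", "remember"],
     "has_docstring": false, "line": 42, "return_annotation": "AuthToken"}
  ],
  "classes": [
    {"name": "AuthManager", "methods": ["__init__", "authenticate", "logout"],
     "has_docstring": true, "line": 10}
  ]
}
"""

report = json.loads(analysis_json)
missing = [f["name"] for f in report["functions"] if not f["has_docstring"]]
missing += [c["name"] for c in report["classes"] if not c["has_docstring"]]
print("Missing documentation:", ", ".join(missing))  # Missing documentation: login
```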
8. Thinking Exercise
8.1 Design the Directory Structure
Sketch out your skill directory:
doc-generator/
├── SKILL.md             # What instructions?
├── REFERENCES.md        # What standards to include?
├── templates/
│   ├── ???.md           # What templates do you need?
│   └── ???.md
└── scripts/
    └── analyze.py       # What should this output?
Questions:
- How does Claude know to run analyze.py?
- How do you reference templates/python_docstring.md in SKILL.md?
- Should analyze.py output JSON or plain text?
8.2 Trace Through a Documentation Session
Walk through this scenario mentally:
User: "Document the payment module"
โ
โผ
What does Claude do first?
โ
โผ
What files get loaded?
โ
โผ
What tools are used?
โ
โผ
What's the output?
9. The Interview Questions They'll Ask
- "How would you structure a complex capability with multiple components?"
  - Expected: Separate concerns into files: main instructions, templates, scripts
  - Bonus: Explain progressive disclosure benefits
- "What's progressive disclosure and why does it matter for AI systems?"
  - Expected: Loading resources on-demand to save context/tokens
  - Bonus: Discuss trade-offs (latency of loading vs upfront cost)
- "How would you analyze code structure programmatically?"
  - Expected: AST parsing with language-specific libraries
  - Bonus: Discuss alternatives (regex, tree-sitter, LSP)
- "What makes good auto-generated documentation?"
  - Expected: Consistent format, accurate types, useful examples
  - Bonus: Discuss when NOT to auto-generate (complex logic needs human explanation)
- "How do you balance completeness with context limits?"
  - Expected: Load only what's needed, summarize large outputs
  - Bonus: Discuss chunking strategies for large codebases
10. Solution Architecture
10.1 Skill Component Diagram
DOC-GENERATOR SKILL ARCHITECTURE

                        SKILL.md
    Main orchestrator: decides when to load other files,
    coordinates the workflow, defines the analysis process
                           |
        +------------------+------------------+
        |                  |                  |
        v                  v                  v
  templates/          scripts/           REFERENCES.md
  python_doc.md       analyze.py         Google style
  jsdoc.md            (AST parser)       NumPy style
  readme.md                              JSDoc spec
  api_doc.md
        |                  |                  |
        v                  v                  v
                  PROGRESSIVE LOADING
    Files are loaded ONLY when referenced in instructions
10.2 Workflow Diagram
DOCUMENTATION WORKFLOW

User: "Generate docs for src/auth/"
          |
          v
  1. Invoke Skill        ->  SKILL.md loaded
          |
          v
  2. Run Analysis Script ->  Execute: python scripts/analyze.py src/auth/
                             Output: JSON with functions, classes, docstrings
          |
          v
  3. Report Findings     ->  Show: 5 functions, 2 classes
                             Highlight: 3 missing docstrings
          |
          v
  4. User Choice         ->  "Document login function"
          |
          v
  5. Load Template       ->  Read: templates/python_docstring.md
     (On-Demand)             Only this template, not all templates
          |
          v
  6. Generate Doc        ->  Fill template with function details
          |
          v
  7. Present & Confirm   ->  Show generated docstring
                             Ask: Apply? Modify? Next function?
11. Implementation Guide
11.1 Phase 1: Create Directory Structure
# Create skill directory
mkdir -p ~/.claude/skills/doc-generator/{templates,scripts}
# Create all files
touch ~/.claude/skills/doc-generator/SKILL.md
touch ~/.claude/skills/doc-generator/REFERENCES.md
touch ~/.claude/skills/doc-generator/templates/python_docstring.md
touch ~/.claude/skills/doc-generator/templates/jsdoc_comment.md
touch ~/.claude/skills/doc-generator/templates/readme_section.md
touch ~/.claude/skills/doc-generator/scripts/analyze.py
11.2 Phase 2: Write the Analysis Script
#!/usr/bin/env python3
"""
Analyze Python files to find functions and classes that need documentation.

Usage:
    python analyze.py <filepath>
    python analyze.py src/auth/

Output: JSON with functions, classes, and docstring status
"""
import ast
import sys
import json
import os


def analyze_file(filepath):
    """Analyze a single Python file."""
    with open(filepath) as f:
        try:
            tree = ast.parse(f.read())
        except SyntaxError as e:
            return {"error": f"Syntax error: {e}"}

    results = {
        "file": filepath,
        "functions": [],
        "classes": []
    }

    # Note: ast.walk visits nested nodes, so class methods also show up here
    # in addition to the per-class "methods" list below.
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Skip single-underscore private helpers, but keep dunder
            # methods like __init__, which still deserve documentation.
            if not node.name.startswith('_') or node.name.startswith('__'):
                func_info = {
                    "name": node.name,
                    "args": [arg.arg for arg in node.args.args],
                    "has_docstring": ast.get_docstring(node) is not None,
                    "line": node.lineno,
                    "return_annotation": None,
                    "arg_annotations": {}
                }
                # Extract return type annotation
                if node.returns:
                    func_info["return_annotation"] = ast.unparse(node.returns)
                # Extract argument annotations
                for arg in node.args.args:
                    if arg.annotation:
                        func_info["arg_annotations"][arg.arg] = ast.unparse(arg.annotation)
                results["functions"].append(func_info)

        elif isinstance(node, ast.ClassDef):
            class_info = {
                "name": node.name,
                "methods": [],
                "has_docstring": ast.get_docstring(node) is not None,
                "line": node.lineno
            }
            for item in node.body:
                if isinstance(item, ast.FunctionDef):
                    class_info["methods"].append({
                        "name": item.name,
                        "has_docstring": ast.get_docstring(item) is not None
                    })
            results["classes"].append(class_info)

    return results


def analyze_directory(dirpath):
    """Analyze all Python files in a directory."""
    all_results = []
    for root, dirs, files in os.walk(dirpath):
        # Skip common non-source directories
        dirs[:] = [d for d in dirs if d not in ['__pycache__', '.git', 'venv', 'node_modules']]
        for file in files:
            if file.endswith('.py'):
                filepath = os.path.join(root, file)
                result = analyze_file(filepath)
                all_results.append(result)
    return all_results


def main():
    if len(sys.argv) < 2:
        print(json.dumps({"error": "Usage: analyze.py <file_or_directory>"}))
        sys.exit(1)

    target = sys.argv[1]
    if os.path.isfile(target):
        results = analyze_file(target)
    elif os.path.isdir(target):
        results = analyze_directory(target)
    else:
        results = {"error": f"Path not found: {target}"}

    print(json.dumps(results, indent=2))


if __name__ == "__main__":
    main()
11.3 Phase 3: Create Documentation Templates
templates/python_docstring.md:
# Python Docstring Template (Google Style)
Use this template when generating Python docstrings:
```python
def {function_name}({parameters}) -> {return_type}:
    """
    {brief_description}

    {extended_description_if_needed}

    Args:
        {param_name}: {param_description}
        {param_name}: {param_description}. Defaults to {default_value}.

    Returns:
        {return_type}: {return_description}

    Raises:
        {exception_type}: {when_raised}

    Example:
        >>> {example_usage}
        {expected_output}
    """
```

## Guidelines
- Brief description: One line, starts with a verb (Get, Set, Create, etc.)
- Extended description: Only if the brief isn't enough
- Args: One line per argument, type is optional if annotated
- Returns: Describe what's returned, not just the type
- Raises: List exceptions the caller should handle
- Example: Show typical usage with expected output
templates/jsdoc_comment.md:
# JSDoc Comment Template
Use this template when generating JavaScript/TypeScript documentation:
```javascript
/**
* {brief_description}
*
* {extended_description_if_needed}
*
* @param {{type}} {param_name} - {param_description}
* @param {{type}} [{optional_param}={default}] - {param_description}
* @returns {{type}} {return_description}
* @throws {{error_type}} {when_thrown}
*
* @example
* {example_usage}
*/
```

## Guidelines
- First line: Brief description, no period needed
- @param: Use {type} for JavaScript, omit for TypeScript with annotations
- Optional params: Wrap name in brackets, show default
- @returns: Describe the value, not just the type
- @throws: Document exceptions callers should handle
- @example: Show real usage, can have multiple
11.4 Phase 4: Write the SKILL.md
---
name: doc-generator
description: Generate documentation for code including Python docstrings, JSDoc comments, and README sections. Use when the user wants to document code, add docstrings, or create API documentation.
---
# Documentation Generator
When the user wants to generate documentation, follow this workflow:
## 1. Analyze the Code
First, identify what files or modules need documentation:
- If user specifies a file/directory, analyze that
- Run the analysis script: `python scripts/analyze.py <target>`
- Parse the JSON output to understand what functions/classes exist
## 2. Report Findings
Present a summary to the user:
- Total functions and classes found
- How many are missing docstrings
- List the specific items without documentation
## 3. Ask for Direction
Before generating, ask the user:
- Document all missing items?
- Select specific items?
- Choose documentation style? (Google, NumPy, JSDoc)
## 4. Load the Appropriate Template
Based on the file type and user preference:
- For Python files: read `templates/python_docstring.md`
- For JavaScript/TypeScript: read `templates/jsdoc_comment.md`
- For README content: read `templates/readme_section.md`
**Important**: Load only the template you need, not all templates.
## 5. Generate Documentation
For each item to document:
1. Read the function/class code to understand what it does
2. Identify parameters, return types, and potential exceptions
3. Fill in the template with accurate information
4. Present the generated documentation to the user
## 6. Apply or Iterate
After presenting generated documentation:
- Ask if user wants to apply it to the file
- Offer to modify if needed
- Move to next item or finish
## Style References
If the user asks about documentation standards or style guidelines,
read `REFERENCES.md` for detailed style information.
11.5 Phase 5: Create REFERENCES.md
# Documentation Style References
## Google Python Style Guide (Docstrings)
From the Google Python Style Guide:
- Use triple double-quotes for all docstrings
- First line should be a summary in imperative mood
- Args section lists each parameter
- Returns section describes the return value
- Raises section lists exceptions raised
## NumPy Documentation Style
An alternative to Google style, more verbose:
- Uses section headers with underlines
- Parameters, Returns, Raises as separate sections
- More common in scientific Python libraries
## JSDoc Reference
For JavaScript/TypeScript:
- Use `@param`, `@returns`, `@throws`
- Type annotations in curly braces: `{string}`
- Optional parameters in brackets: `[name]`
- Default values with equals: `[name="default"]`
## Best Practices
1. **Be concise but complete**: Cover all parameters and return values
2. **Document behavior, not implementation**: What it does, not how
3. **Include examples**: Show typical usage
4. **Document exceptions**: Help callers handle errors
5. **Keep updated**: Outdated docs are worse than none
12. Hints in Layers
Hint 1: SKILL.md References
In your instructions, tell Claude when to load files:
"When generating Python docs, read the template from templates/python_docstring.md"
This triggers on-demand loading: Claude reads the file only when it reaches that instruction.
Hint 2: Analysis Script
Create a Python script that uses ast.parse() and ast.walk() to find functions and classes:
import ast
tree = ast.parse(source_code)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        print(f"Found function: {node.name}")
Output JSON for easy parsing.
Hint 3: Template Format
Use placeholders in templates that Claude can fill in:
{function_name}
{parameters}
{return_type}
{description}
Claude will replace these with actual values from the code analysis.
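Claude does the filling during a session, but if you want to sanity-check a template's placeholders yourself, plain `str.format` works for simple cases (the values below are invented for illustration):

```python
template = 'def {function_name}({parameters}) -> {return_type}:\n    """{brief_description}"""\n'

print(template.format(
    function_name="login",
    parameters="username: str, password: str",
    return_type="AuthToken",
    brief_description="Authenticate a user and return an access token.",
))
```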
Hint 4: Progressive Loading
Donโt reference all files upfront. Instead of:
"Templates are in templates/python_docstring.md, templates/jsdoc_comment.md, …"
Say:
"Load the appropriate template based on file type when needed."
13. Common Pitfalls & Debugging
13.1 Frequent Mistakes
| Pitfall | Symptom | Solution |
|---|---|---|
| Loading all files upfront | High token usage | Reference files conditionally |
| Script not executable | Permission denied | `chmod +x scripts/analyze.py` |
| JSON parse errors | Script output isn't valid JSON | Use `json.dumps()` properly |
| Missing file handling | Crashes on non-Python files | Add file type checking |
| Not handling directories | Only works on single files | Add os.walk() support |
13.2 Debugging Steps
- Test the script standalone: `python scripts/analyze.py test.py`
- Verify JSON output: Pipe to `jq` or an online JSON validator (a smoke-test sketch follows this list)
- Check file references: Ensure paths in SKILL.md are correct
- Test progressive loading: Ask Claude what it loaded
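To automate the first two checks, a small smoke test along these lines runs the script and confirms the output is well-formed JSON (the sample target path is a placeholder):

```python
import json
import subprocess

# Run the analysis script on a small known file and verify the output parses.
out = subprocess.run(
    ["python", "scripts/analyze.py", "tests/sample.py"],  # placeholder target
    capture_output=True, text=True, check=True,
).stdout

data = json.loads(out)                      # raises ValueError if not valid JSON
assert "functions" in data and "classes" in data
print("analyze.py output looks well-formed")
```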
14. Extensions & Challenges
14.1 Beginner Extensions
- Add template for TypeScript-specific annotations
- Support for class docstrings
- Generate module-level docstrings
14.2 Intermediate Extensions
- Type inference from usage patterns
- Extract examples from test files
- Support for NumPy documentation style
- Batch documentation for entire modules
14.3 Advanced Extensions
- Integration with type checkers (mypy, pyright)
- Documentation coverage reports
- Auto-update stale documentation
- Support for multiple programming languages
15. Books That Will Help
| Topic | Book/Resource | Chapter/Section |
|---|---|---|
| Documentation patterns | "Docs for Developers" by Bhatti et al. | Chapters 3-5 |
| Python AST | "Python Cookbook" by David Beazley and Brian K. Jones | Chapter 9: Metaprogramming |
| Code analysis | "Software Engineering at Google" | Chapter 10: Documentation |
| Python internals | Python Official Docs | ast module documentation |
16. Self-Assessment Checklist
Understanding
- I can explain progressive disclosure and why it matters
- I understand how Claude loads supporting files on-demand
- I can parse Python code using the ast module
- I know what makes good auto-generated documentation
Implementation
- My analysis script correctly identifies functions and classes
- Templates are loaded only when needed
- Generated documentation follows the template format
- The skill works for both files and directories
Growth
- I can add new templates for different documentation styles
- I understand when to use scripts vs inline logic
- I can debug progressive disclosure issues
17. Learning Milestones
| Milestone | Indicator |
|---|---|
| Supporting files load on-demand | You understand progressive disclosure |
| Analysis script provides insights | You can augment Claude with tools |
| Templates produce consistent docs | You've created reusable patterns |
| Multiple languages supported | You've built an extensible system |
This guide was expanded from CLAUDE_CODE_MASTERY_40_PROJECTS.md. For the complete learning path, see the project index.