CLI Tool Design Mastery: From Scripts to Ergonomic Command-Line Products

Goal: Build a deep mental model of how command-line tools really work, from shell parsing and POSIX conventions to signals, terminal control, and cross-platform distribution. You will learn to design CLIs that are scriptable, discoverable, and resilient under pipes, automation, and failure. By the end, you will have a portfolio of production-grade tools that handle configuration, interactive UX, TTY detection, and safe updates. Most importantly, you will be able to reason about why good CLIs feel effortless and how to engineer that experience deliberately.


Introduction

CLI tool design is the discipline of building command-line programs that behave predictably in both human and automated contexts. It is the craft of turning user intent into well-structured commands, robust I/O, and trustworthy outcomes that compose cleanly with other tools.

What you will build (by the end of this guide):

  • A grep-style search tool that respects pipes, exit codes, and TTY color handling.
  • A structured task manager with subcommands, config layering, and persistence.
  • An interactive project scaffold wizard with safe prompts and non-interactive modes.
  • A streaming visualizer that responds to signals and renders live metrics.
  • A secure environment and secrets runner.
  • A full-screen system monitor TUI with fast rendering.
  • A git insights CLI that safely composes with external tools.
  • A CLI generator from OpenAPI schemas.
  • A plugin-enabled CLI that discovers and isolates third-party extensions.
  • A self-updating, cross-platform distribution pipeline with shell completions.

Scope (what is included):

  • CLI command grammar, POSIX conventions, and help design
  • stdin/stdout/stderr behavior, exit codes, and machine-readable output
  • Configuration precedence, XDG directories, and secrets handling
  • Interactive prompts, safety patterns, and completion
  • Terminal control, TUI architecture, and signal handling
  • Integration, plugins, packaging, and self-update strategies

Out of scope (for this guide):

  • Building a full shell or terminal emulator
  • Long-running network services and complex backends
  • GUI apps (we focus on terminal UX only)

The Big Picture (Mental Model)

User intent
   |
   v
Shell parsing -> argv/env -> CLI parser -> command graph -> core logic
   |                |             |             |           |
   |                |             |             |           +-> stdout/stderr + exit code
   |                |             |             +-> config resolution + side effects
   |                |             +-> help, completion, subcommands
   |                +-> quoting, globbing, redirection
   +-> human goal + automation use cases

Key Terms You Will See Everywhere

  • argv: The argument vector passed to your program after shell parsing.
  • stdout/stderr: Standard output vs diagnostics channels.
  • TTY: A terminal device; determines when interactivity is safe.
  • Exit code: Integer status used by scripts to detect success/failure.
  • Subcommands: Nested commands that form a command hierarchy.

How to Use This Guide

  1. Read the Theory Primer first. It is the mini-book that explains the systems, conventions, and mental models.
  2. Use the Project-to-Concept Map to connect each project to the exact concepts it tests.
  3. For each project, read the Core Question, then sketch your design before coding.
  4. Use the Hints in Layers only when stuck; they are designed to nudge, not solve.
  5. Use the Definition of Done to verify you built something production-grade.

Prerequisites & Background Knowledge

Essential Prerequisites (Must Have)

Programming Skills:

  • Comfort writing small programs in Go, Rust, or Python.
  • Basic understanding of functions, modules, errors, and file I/O.
  • Ability to read docs and CLI help output.

Operating System Fundamentals:

  • What a process is and how it exits.
  • Basic filesystem permissions and paths.
  • Familiarity with environment variables.

Shell Basics:

  • Using pipes and redirection (|, >, 2>).
  • Running commands with flags and arguments.
  • Basic shell navigation and editing.

Helpful But Not Required

  • Concurrency (goroutines, threads, async/await)
  • Basic knowledge of terminal control (ANSI escape codes)
  • Understanding of packaging and release tooling
  • Familiarity with JSON, YAML, and config file formats

Self-Assessment Questions

  1. Can you explain the difference between stdout and stderr in one sentence?
  2. Do you know how to pass data to a program via stdin?
  3. Can you describe what --help should show for a subcommand?
  4. Have you ever used a CLI that outputs JSON for scripts?
  5. Do you know how to check an exit code in your shell?

If you answered “no” to 1-3, read early chapters of The Linux Command Line first. If you answered “yes” to all, you are ready to start.

Development Environment Setup

Required Tools:

  • A Unix-like environment (Linux, macOS, or WSL2)
  • Go 1.21+ or Rust 1.70+ (Python 3.10+ optional)
  • A terminal emulator (Terminal, iTerm2, Alacritty, or Windows Terminal)
  • A text editor or IDE

Recommended Tools:

  • git for version control
  • make or just for build scripts
  • strace/dtruss for I/O debugging
  • tmux or zellij for session management

Testing Your Setup:

$ which git go rustc
/usr/bin/git
/usr/local/bin/go
/usr/local/bin/rustc

$ go version
go version go1.21.5 darwin/arm64

Time Investment

  • Small projects: 4-8 hours each
  • Medium projects: 1-2 weeks each
  • Advanced projects: 3-4 weeks each
  • Full sprint: 2-4 months at 5-10 hours/week

Important Reality Check

CLI design is deceptively deep. Most mistakes show up only after real users script your tool. Expect to iterate: first make it work, then make it predictable, then make it pleasant. The goal is not perfection in the first pass; it is building a correct mental model and refining it through projects.


Big Picture / Mental Model

            +-------------------+         +---------------------+
User intent |  Human or Script  |  --->   |  Shell + Environment|
            +-------------------+         +---------------------+
                         |                          |
                         v                          v
                   argv, env                  cwd, signals
                         |                          |
                         +-----------+--------------+
                                     |
                                     v
                         +---------------------+
                         |  CLI Parser Layer   |
                         +---------------------+
                          |        |        |
                          v        v        v
                    help text  subcommands  flags
                          |
                          v
                  +------------------+
                  |  Core Execution  |
                  +------------------+
                   |       |       |
                   v       v       v
                stdout   stderr  exit code
                   |       |       |
                   +-------+-------+
                           |
                           v
                      pipes/scripts

Key insight: Your CLI is not just a program. It is a component in a pipeline. That pipeline includes the shell, environment variables, terminal, and other programs. Good CLI design is about playing nicely in that pipeline.


Theory Primer

This is the mini-book. Read it before coding. Every concept below is used in the projects.

Chapter 1: Execution Model and Command Grammar

Fundamentals

A CLI is invoked by a shell that has already parsed and rewritten the command line. What you receive as argv is not the raw user input, but a tokenized, expanded, and ordered list of arguments that has already had quoting, glob expansion, and environment variable substitution applied. Understanding this pipeline is critical to designing a predictable command grammar: you must decide which tokens are subcommands, which flags are required, how positional arguments are interpreted, and how to expose help so the user can discover intent quickly. POSIX utility syntax and the Command Line Interface Guidelines formalize conventions users already expect. If you violate them, users will blame your tool, not the shell.

Deep Dive into the Concept

The shell is the first parser in your system, and it is stateful in a way your CLI is not. When the user types mytool --path "My Files" --verbose, the shell breaks it into words, handles quotes, performs variable expansion, and resolves glob patterns like *.log into a list of files. By the time your program runs, you have a deterministic list of strings. This means your CLI grammar must be designed around the post-shell view of the world. A common failure is to design a grammar that assumes you can see original quoting or spacing. You cannot. Instead, you should design based on unambiguous tokens: required subcommands, positional arguments, and flags that follow consistent conventions.

POSIX utility syntax guidelines define expectations for option ordering, option arguments, and the use of -- to end option parsing. Users will try cmd -- --literal because they expect -- to force the rest to be treated as operands. If your parser does not implement this, users will lose trust. Similarly, short options are expected to be single-letter flags prefixed by -, and long options are expected to be words prefixed by --. Even if you are not strictly POSIX-compliant, honoring these conventions reduces cognitive load and increases scriptability.
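The `--` convention can be sketched in a few lines. Mature parsers such as argparse, Clap, and Cobra already implement it; this hand-rolled version just makes the rule visible:

```python
def split_options(argv):
    """Split argv into (flags, operands), honoring the POSIX `--` separator.

    Everything after a literal `--` is an operand, even if it starts
    with a dash. A bare `-` conventionally means stdin, so it is also
    treated as an operand.
    """
    flags, operands = [], []
    seen_separator = False
    for arg in argv:
        if not seen_separator and arg == "--":
            seen_separator = True
            continue
        if not seen_separator and arg.startswith("-") and arg != "-":
            flags.append(arg)
        else:
            operands.append(arg)
    return flags, operands

# `--literal` lands in operands, not flags, because it follows `--`:
print(split_options(["-v", "--", "--literal", "file.txt"]))
# → (['-v'], ['--literal', 'file.txt'])
```

This is exactly why `cmd -- --literal` works in well-behaved tools: after the separator, the parser stops interpreting dashes entirely.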

Subcommands allow you to build a consistent command tree. Tools like git and kubectl are memorable because their subcommand grammar maps to a conceptual model: git remote add, kubectl get pods. The hierarchy encodes nouns and verbs. When you design the command tree, you are designing an information architecture: what is the primary noun, which verbs belong under it, and how users discover the edges of the tree via --help. The CLI Guidelines emphasize discoverability: each command should have a help page; errors should mention the relevant usage; and a user should be able to explore the tool without reading external docs.

A subtle but critical part of grammar design is handling optional vs required arguments. If you allow optional positionals, your parsing becomes ambiguous. If you allow flags that optionally accept arguments, you create multiple parse paths, which makes error messages harder. A design heuristic: prefer explicitness over magic. If an argument is optional, make it a flag. If a flag takes a value, require the value. If you need to accept arbitrary user input (like search patterns), allow -- to separate options from operands. This is the line between a CLI that can be safely scripted and one that breaks in edge cases.

Finally, help text is part of the grammar. Help output is the specification users actually read, so if it does not match reality, users misapply the tool. Your help should expose: command synopsis, subcommands, required positional arguments, flags with defaults, and examples. The summary line matters more than you think: a clear one-line summary is the anchor for the user’s mental model. When you design the grammar, you are also designing the help tree.

Another practical constraint is backward compatibility. Once users learn a command shape, changing it breaks scripts. That means you should plan a stable grammar early, deprecate slowly, and support aliases for old flags. Treat the command grammar as an API: version it, document it, and add new commands in a way that does not change existing semantics.
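As an illustrative sketch, here is a two-level command tree in Python's argparse (the `repo add` hierarchy and its flags are hypothetical; Go's Cobra and Rust's Clap express the same structure with different APIs). Note how the design heuristics above appear in code: the positional is required, the flag requires a value, and the boolean flag takes none:

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="mytool", description="Example command tree")
    sub = parser.add_subparsers(dest="command", required=True)

    repo = sub.add_parser("repo", help="Manage repositories")
    repo_sub = repo.add_subparsers(dest="action", required=True)

    add = repo_sub.add_parser("add", help="Register a repository")
    add.add_argument("path")                            # required positional (operand)
    add.add_argument("--name", required=True)           # flag with a required value
    add.add_argument("--public", action="store_true")   # boolean flag, no value
    return parser

args = build_parser().parse_args(
    ["repo", "add", "./path/to/repo", "--name", "demo", "--public"])
print(args.command, args.action, args.path, args.name, args.public)
# → repo add ./path/to/repo demo True
```

Because every subparser carries its own help text, `mytool repo add --help` is generated for free, which is the discoverability property described above.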

How This Fits into the Projects

This chapter is foundational for every project. It is especially critical for the task manager, API generator, and distribution tool where subcommands and options are the primary interface.

Definitions & Key Terms

  • Command grammar: The structured layout of subcommands, flags, and positionals.
  • Operand: A positional argument that is not an option.
  • Option-argument: The value supplied to a flag.
  • -- separator: A convention that ends option parsing.

Mental Model Diagram

Input text -> shell parsing -> argv[] -> grammar parser -> command node
    |               |              |           |
    |               |              |           +-> subcommand/flags/operands
    |               +-> quotes/globs/expansion
    +-> human intent

How It Works (Step-by-Step)

  1. Shell tokenizes the command line and expands globs and variables.
  2. Your CLI receives argv and splits it into subcommands, flags, and operands.
  3. The parser validates required arguments and provides defaults.
  4. If parsing fails, the CLI prints usage and exits with a non-zero code.

Minimal Concrete Example

# User input
$ mytool repo add ./path/to/repo --name demo --public

# argv that your program sees
["mytool", "repo", "add", "./path/to/repo", "--name", "demo", "--public"]

Common Misconceptions

  • “I can detect how the user quoted arguments.” -> You cannot; quoting is removed by the shell.
  • “Optional positionals are fine.” -> They create ambiguous parsing and brittle scripts.

Check-Your-Understanding Questions

  1. Why should a CLI implement -- to end option parsing?
  2. What part of the command line does the shell interpret before your program runs?
  3. Why are optional positional arguments risky?

Check-Your-Understanding Answers

  1. It allows users to pass operands that start with - without confusion.
  2. Quoting, glob expansion, and environment variable substitution.
  3. They create ambiguity and make error messages harder to reason about.

Real-World Applications

  • Designing subcommands for git, kubectl, aws, and docker.
  • Building CLI wrappers around APIs with predictable syntax.

Where You Will Apply It

  • Project 1: minigrep-plus
  • Project 2: task-nexus
  • Project 7: git-insight
  • Project 8: api-forge
  • Project 10: distro-flow

References

  • POSIX Utility Conventions: https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap12.html
  • Command Line Interface Guidelines: https://clig.dev/

Key insight: A CLI grammar is an information architecture for intent, not just a parser configuration.

Summary

A CLI receives already-parsed arguments and must provide a grammar that is unambiguous, discoverable, and aligned with user expectations. The grammar design is as important as the underlying functionality because it defines how users think about your tool.

Homework/Exercises to Practice the Concept

  1. Take a tool you use daily and write its command tree as a diagram.
  2. Redesign a command with too many flags by introducing subcommands.
  3. Create a help output for a fictional tool with three subcommands.

Solutions to the Homework/Exercises

  1. You should end with a tree like tool -> {sub1, sub2} and list flags per node.
  2. Replace flags that change behavior with subcommands (e.g., --delete -> delete).
  3. Include a synopsis, subcommand list, and 2-3 examples in the help output.

Chapter 2: Streams, TTYs, Output Modes, and Errors

Fundamentals

Every CLI is an I/O system. It consumes input through stdin, produces data through stdout, and emits diagnostics through stderr. This separation is not cosmetic; it is the foundation of Unix composability. Your tool must also recognize whether it is attached to a terminal (TTY) or a pipe. That detection determines if it is safe to use color, prompts, or pagers. Exit codes are the final contract: they allow scripts to know whether an operation succeeded. If your CLI mixes these responsibilities, it breaks pipelines and makes automation unreliable.

Think of streams as contracts: the moment you deviate, every downstream tool inherits your mistake. A CLI with clean stdout/stderr separation is easier to test, easier to compose, and easier to trust in CI. That is why output discipline is not optional, even for small utilities.

Deep Dive into the Concept

Standard streams define a contract between programs. stdout is for the primary data output, and stderr is for messages meant for humans or logs. This allows users to redirect data (cmd > out.txt) while still seeing errors. If you print diagnostics to stdout, you corrupt the data stream. Conversely, if you print data to stderr, you break the expectations of pipes. A useful heuristic: anything that a machine might parse belongs on stdout; anything that a human should see belongs on stderr. Tools like git and curl follow this religiously because scripts depend on it.
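A minimal sketch of this discipline in Python (stream parameters are injected so the function is testable; the exit-code choice mirrors grep's found/not-found semantics):

```python
import sys

def report(matches, out=sys.stdout, err=sys.stderr):
    """Write data to the output stream, diagnostics to the error stream."""
    for line in matches:
        print(line, file=out)                     # data: safe to pipe or redirect
    print(f"{len(matches)} match(es)", file=err)  # diagnostics: stays visible
    return 0 if matches else 1                    # grep-style exit status
```

A caller would finish with `sys.exit(report(...))`, so `cmd > out.txt` captures only data while the match count still reaches the terminal via stderr.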

TTY detection is how your tool decides whether the output is interactive. The isatty() function checks if a file descriptor refers to a terminal device. If stdout is not a TTY, your tool should default to plain, uncolored, non-interactive output. When writing for human terminals, you can add color, tables, or progress bars. But if the output is piped to another program, those embellishments become noise. The NO_COLOR convention adds a user-controlled override: if NO_COLOR is set, even TTY output should avoid ANSI color. You can still allow explicit --color=always to override, but you must respect the environment by default.
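The gating logic can be sketched as a single pure function. The precedence shown here (explicit flag, then NO_COLOR, then TTY detection) is one reasonable ordering consistent with the conventions above, not the only possible one:

```python
import os
import sys

def use_color(stream=sys.stdout, env=os.environ, flag=None):
    """Decide whether ANSI color is appropriate.

    Precedence: explicit --color=always/never flag, then the NO_COLOR
    environment variable, then TTY detection on the output stream.
    """
    if flag == "always":
        return True
    if flag == "never":
        return False
    if env.get("NO_COLOR"):   # non-empty value disables color (no-color.org)
        return False
    return hasattr(stream, "isatty") and stream.isatty()

print(use_color(env={"NO_COLOR": "1"}))   # → False: user opted out
print(use_color(env={}, flag="always"))   # → True: explicit override wins
```

Injecting `stream` and `env` keeps the decision testable without a real terminal.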

Exit codes are the last part of the pipeline. A CLI should return 0 on success, and non-zero on failure. Tools often use 1 for general failure and higher values for more specific errors. For example, grep returns 1 when no matches are found, even though the program ran correctly. This nuance matters because scripts test exit codes, not just output. Your CLI must document these semantics or provide a --json mode that includes a structured status field.

Output modes are another layer of the contract. Human-friendly output can include aligned columns, colors, and explanations. Machine-friendly output should be stable and parseable: JSON, TSV, or a line-oriented format. The CLI Guidelines recommend supporting a machine-readable mode for tools that could be scripted. This does not mean every tool needs JSON, but if a tool has structured output and is likely to be used in scripts, offering a structured output mode is a professional choice.

Buffering and performance are also part of stream behavior. If you are reading large inputs, process line-by-line or chunk-by-chunk to avoid memory blowups. If you are writing large outputs, flush strategically. In streaming tools, you often want to disable output buffering or use line-buffered output to keep the user informed in real time. External workarounds such as stdbuf(1) or runtime-specific environment variables exist, but you should design sensible defaults so users do not need to adjust buffering manually.
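In Python, for example, per-line flushing is one way to get line-buffered behavior regardless of how stdout is attached (a sketch; Go and Rust achieve the same effect with explicit writer flushes):

```python
import sys

def emit_metrics(samples, out=sys.stdout):
    """Stream one record per line, flushing each write so a piped reader
    (e.g. `tool | grep warn`) sees updates immediately instead of waiting
    for a full block buffer to fill."""
    for sample in samples:
        print(sample, file=out, flush=True)
```

This is the "sensible default" in action: the user never has to reach for stdbuf because the tool already keeps its pipe up to date.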

Finally, error handling is part of output design. Error messages should be actionable: include the failing input, the attempted operation, and the suggested fix. Do not dump stack traces by default. The CLI Guidelines emphasize that stderr should be clean and readable, with verbose logs available behind a --verbose or --debug flag. This division improves both UX and automation.

Many Unix systems also provide sysexits.h as a vocabulary of exit codes (usage error, data error, temporary failure). You do not have to adopt them, but having a documented error taxonomy makes automation more reliable. The goal is consistency: the same class of failures should produce the same exit code and error shape across subcommands.

How This Fits into the Projects

This chapter drives Project 1 (grep), Project 4 (stream visualizer), Project 6 (TUI), and Project 7 (integration). If you get streams wrong, every one of those tools becomes fragile in automation.

Definitions & Key Terms

  • stdin: Standard input stream (fd 0).
  • stdout: Standard output stream (fd 1).
  • stderr: Standard error stream (fd 2).
  • TTY: Terminal device that can handle interactive output.
  • Exit code: Integer process status used by scripts.

Mental Model Diagram

           +-------------+
stdin ---> |  CLI Tool   | ---> stdout --> pipes/files
           |             | ---> stderr --> terminal logs
           +-------------+
                  |
                  v
              exit code

How It Works (Step-by-Step)

  1. Read from stdin or files based on argv.
  2. Process data and write results to stdout.
  3. Write diagnostics to stderr.
  4. Determine exit code based on success semantics.

Minimal Concrete Example

# Human output
$ minigrep "TODO" src/main.rs
12: // TODO: refactor

# Script output
$ minigrep "TODO" src/main.rs --json | jq '.count'
1

# Exit code (0 = found, 1 = not found)
$ minigrep "nope" src/main.rs; echo $?
1

Common Misconceptions

  • “Color is always good.” -> Color in pipes corrupts output.
  • “stderr is for exceptions only.” -> stderr is for any diagnostic.

Check-Your-Understanding Questions

  1. Why should data output go to stdout, not stderr?
  2. When should a CLI disable color output by default?
  3. Why might a CLI return exit code 1 even if it did not crash?

Check-Your-Understanding Answers

  1. Because stdout is the stream that pipelines and files consume.
  2. When stdout is not a TTY or when NO_COLOR is set.
  3. Some tools use exit code 1 to indicate “no matches” or “not found”.

Real-World Applications

  • grep, find, and rg in scripts
  • curl piping data to jq
  • git in CI pipelines

Where You Will Apply It

  • Project 1: minigrep-plus
  • Project 4: stream-viz
  • Project 6: system-monitor-tui
  • Project 7: git-insight

References

  • isatty(3) man page: https://man7.org/linux/man-pages/man3/isatty.3.html
  • NO_COLOR convention: https://no-color.org/
  • CLI Guidelines (output and errors): https://clig.dev/

Key insight: Streams and exit codes are the contract between your CLI and the rest of the Unix pipeline.

Summary

Good CLI output respects the separation of data vs diagnostics, adapts to TTY vs pipes, and communicates success through exit codes. This is what enables composability and automation.

Homework/Exercises to Practice the Concept

  1. Write a small script that pipes data through two tools and note where stderr goes.
  2. Build a tiny program that prints colored output only when stdout is a TTY.
  3. Recreate grep exit code behavior in a toy program.

Solutions to the Homework/Exercises

  1. Stderr will still go to the terminal unless redirected separately.
  2. Use isatty(1) to decide whether to wrap output in ANSI codes.
  3. Return 0 on match, 1 on no match, 2 on error.

Chapter 3: Configuration, State, and Secrets

Fundamentals

A CLI rarely lives in isolation. It needs configuration files, cached data, and sometimes secrets. Without a clear configuration precedence order, users get confused and scripts break. A professional CLI follows platform conventions: on Unix, the XDG Base Directory spec defines where config, data, and cache should live. On Windows, you must use platform-specific directories. You also need a predictable layering order: defaults < config file < environment variables < flags. This chapter gives you the design pattern for safe, debuggable configuration.

Config formats also matter. JSON is simple and ubiquitous, TOML is friendly for humans, and YAML is flexible but easy to misindent. Pick one and stay consistent. Your tool should also explain where it loaded config from, so users can debug without guessing.

Deep Dive into the Concept

Configuration is not just a file. It is a decision tree. When your CLI starts, it needs to resolve values from multiple sources: built-in defaults, user config files, environment variables, and command-line flags. If you do not define a strict precedence order, users cannot reason about why a value is what it is. The most common and least surprising order is: defaults first, then config file, then environment variables, then CLI flags. This ensures that explicit user intent (a flag) overrides everything else. It also allows automation to rely on environment variables without editing config files.
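The merge itself is a small fold over ordered layers. This sketch also records each value's source, which pays off later when implementing a `config show`-style command (the key names are hypothetical):

```python
def effective_config(defaults, file_cfg, env_cfg, flag_cfg):
    """Merge config sources in precedence order; later sources win.

    Tracks which layer supplied each value so the tool can explain
    its effective configuration to the user.
    """
    merged, sources = {}, {}
    for name, layer in [("default", defaults), ("file", file_cfg),
                        ("env", env_cfg), ("flag", flag_cfg)]:
        for key, value in layer.items():
            merged[key] = value
            sources[key] = name
    return merged, sources

cfg, src = effective_config(
    defaults={"color": "auto", "storage": "~/.local/share/tool/db.json"},
    file_cfg={"color": "never"},
    env_cfg={"storage": "./tmp/db.json"},
    flag_cfg={"color": "always"},
)
print(cfg["color"], src["color"])       # → always flag
print(cfg["storage"], src["storage"])   # → ./tmp/db.json env
```

Because the fold is ordered, the final value of every key is explainable by a single rule: the last layer that set it wins.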

The XDG Base Directory Specification provides default locations for config ($XDG_CONFIG_HOME or ~/.config), data ($XDG_DATA_HOME or ~/.local/share), cache ($XDG_CACHE_HOME or ~/.cache), and runtime files ($XDG_RUNTIME_DIR). By storing your files in these directories, you integrate cleanly with the broader system ecosystem. It also helps with backups and cleanup. If you store config in arbitrary locations, you create hidden state and reduce trust.

Secrets are a special case. They should not be stored in plain text or passed as command-line flags (which leak into shell history and process lists). Instead, you should support secure sources: environment variables, OS keychains, encrypted config files, or external secret managers. A small CLI can implement simple encryption for local secrets, but you must still use a secure key source, not a hard-coded password. At minimum, you should support --secret-file or read from stdin so scripts can provide secrets without exposing them.
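A sketch of that resolution order follows; the variable name MYTOOL_TOKEN and the --secret-file flag are hypothetical examples of the pattern, not a standard:

```python
import os
import sys

def load_secret(env=os.environ, secret_file=None, stdin=sys.stdin):
    """Resolve a secret without ever accepting it as a flag value.

    Order tried here: explicit file, then environment variable, then
    stdin when it is a pipe (so `printf '%s' "$TOKEN" | tool` works).
    """
    if secret_file is not None:
        with open(secret_file) as f:
            return f.read().strip()
    if "MYTOOL_TOKEN" in env:          # hypothetical variable name
        return env["MYTOOL_TOKEN"]
    if not stdin.isatty():             # piped input, never an interactive echo
        return stdin.readline().strip()
    raise SystemExit(
        "no secret provided: set MYTOOL_TOKEN or use --secret-file")
```

Nothing in this path puts the secret into shell history or `ps` output, which is exactly what passing it as a flag would do.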

State is the other side of configuration. Many CLIs keep local databases, caches, or metadata. For example, a task manager might keep a JSON file; a plugin system might cache discovered plugins. State should be stored in data directories, not config. This separation matters because users may want to back up config but not caches. If you blur the line, users end up with messy directories and unexpected side effects.

Configuration also influences UX. A CLI should provide a config subcommand that shows effective config and explains where values are coming from. This is vital for debugging. If a user can run tool config show --effective, they can see what is actually happening. This is one of the biggest differences between hobby CLIs and professional tools.

Finally, configuration must be validated. If a value is invalid, the CLI should fail fast with a clear message. Validation should happen after merging all sources so you can point to the final value. A typical error message should include the bad value, the expected format, and how to fix it. If you do not validate, you push errors downstream into unexpected runtime behavior.
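A sketch of post-merge validation with an actionable message (the `color` key and its allowed values are a hypothetical example):

```python
def validate_config(cfg):
    """Fail fast after all sources are merged, naming the bad value,
    the expected format, and how to fix it."""
    errors = []
    if cfg.get("color") not in ("auto", "always", "never"):
        errors.append(
            f"invalid value for 'color': {cfg.get('color')!r} "
            "(expected auto, always, or never; "
            "set it via --color or the config file)")
    if errors:
        raise SystemExit("config error:\n  " + "\n  ".join(errors))
    return cfg
```

Validating the merged result, rather than each source in isolation, means the error always points at the value the tool would actually have used.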

As tools evolve, configuration schemas change. Plan for migrations: store a schema version, transform old configs on load, and provide warnings. If you add new required settings, choose safe defaults and surface them clearly. This prevents breaking upgrades and makes long-lived installs manageable.

Testing config merges matters. Unit tests should load synthetic configs and assert effective values, and integration tests should run with env vars and flags. This catches precedence regressions and ensures backward compatibility when new options are added.

How This Fits into the Projects

This chapter drives Project 2 (task manager), Project 3 (init wizard), Project 5 (secrets vault), and Project 8 (API generator). It also matters for distribution in Project 10 because update settings are configuration.

Definitions & Key Terms

  • Config precedence: The order in which configuration sources override each other.
  • XDG directories: Standard locations for config, data, cache, runtime.
  • Secrets: Sensitive values such as tokens or passwords.
  • State: Persistent data created by the CLI.

Mental Model Diagram

Defaults
   |
   v
Config file
   |
   v
Environment vars
   |
   v
CLI flags
   |
   v
Effective config   (each later layer overrides the one above)

How It Works (Step-by-Step)

  1. Load defaults in code.
  2. Read config file from XDG locations.
  3. Override with environment variables.
  4. Override with CLI flags.
  5. Validate and emit effective config.

Minimal Concrete Example

# Config file: ~/.config/task-nexus/config.toml
storage_path = "~/.local/share/task-nexus/tasks.json"

# Environment override
$ TASK_NEXUS_STORAGE=./tmp/tasks.json task-nexus list

# Flag override (highest precedence)
$ task-nexus --storage ./tmp/tasks.json list

Common Misconceptions

  • “It’s fine to store secrets in config.” -> Config files are often plaintext.
  • “Environment variables are always safe.” -> They can leak in process listings.

Check-Your-Understanding Questions

  1. What is the recommended precedence order for config sources?
  2. Where should user-specific config live on Unix systems?
  3. Why should secrets not be passed as CLI flags?

Check-Your-Understanding Answers

  1. Defaults -> config file -> environment -> flags.
  2. $XDG_CONFIG_HOME or ~/.config by default.
  3. Flags leak into shell history and process listings.

Real-World Applications

  • git config and layered settings
  • aws CLI profiles and env overrides
  • kubectl kubeconfig

Where You Will Apply It

  • Project 2: task-nexus
  • Project 3: init-wizard
  • Project 5: env-vault
  • Project 8: api-forge
  • Project 10: distro-flow

References

  • XDG Base Directory Spec: https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html

Key insight: Configuration is a merge problem with strict precedence, not a single file.

Summary

A professional CLI defines clear configuration precedence, follows system directory conventions, and treats secrets as a special class of data. This makes tools predictable, secure, and easy to debug.

Homework/Exercises to Practice the Concept

  1. Draw a precedence diagram for a tool you use daily.
  2. Implement a tiny config loader that merges defaults, file, env, flags.
  3. Add a config show command to print effective settings.

Solutions to the Homework/Exercises

  1. The final setting should always be explainable by the override order.
  2. Use a map merge strategy where later sources overwrite earlier ones.
  3. Print each value and its source (default, file, env, flag).

Chapter 4: Interactive UX and Safety

Fundamentals

Interactivity is optional in CLI design, but when you use it, you must do it safely. Prompts, wizards, confirmations, and multi-selects are useful for humans, but they break scripts if triggered unexpectedly. The key rule is simple: never prompt in non-interactive contexts unless the user explicitly asked for it. You detect this by checking whether stdin and stdout are TTYs. You also need a non-interactive mode that uses flags or stdin, so automation can still run. This chapter teaches you how to design interactive UX without sabotaging automation.

Interactive UX is also about tempo. Prompts should be fast, reversible, and idempotent. If a prompt leads to file writes, show a preview and allow cancel. This keeps the CLI safe to explore.

Deep Dive into the Concept

Interactive UX is about reducing cognitive load. A wizard can collect multiple parameters, validate them in real time, and ensure a correct configuration. But the moment your CLI prompts unexpectedly in a pipeline, you block the script and cause confusion. This is why most professional CLIs follow a rule: if stdin or stdout is not a TTY, do not prompt unless a flag like --interactive is explicitly passed. A related pattern is --yes or --force to bypass confirmation prompts. The CLI Guidelines recommend that prompts should be opt-in in automated contexts, and that they should be short, clear, and safe.

Designing prompts is about using constraints rather than free-form input. For example, when scaffolding a project, provide a list of template options rather than asking the user to type arbitrary strings. This reduces errors and improves reproducibility. For advanced workflows, you can support a config file input or a --config flag that supplies the same values without interaction. The key is parity: everything you can do interactively should be achievable non-interactively with flags or config. This is the difference between a CLI that is friendly and one that is only for manual use.

Safety patterns matter in interactive contexts. If a command can delete or overwrite data, you must either ask for confirmation or require a --force flag. But the prompt should be explicit: include the target path, the size, or the number of items affected. Avoid vague prompts like “Are you sure?”. Instead, say “Delete 23 files from ./build? (y/N)”. It is also common to default to “no” on destructive actions, which prevents accidental data loss.

Shell completion is a form of interactivity that users often forget. Generating completion scripts for bash, zsh, and fish makes your CLI discoverable without reading docs. Tools like Cobra and Clap can generate completions automatically. Even if you do not have a generator, you can implement a completion subcommand that outputs scripts. This dramatically improves UX because users can explore subcommands and flags with the TAB key.

Interactive UX also includes progress indicators. If a task is slow, a progress bar or spinner can reassure users. But these should be suppressed in non-TTY output and replaced with line-based logs. This is another example of adapting behavior to interactive vs non-interactive contexts. Good CLI design uses TTY detection as a gate: if interactive, show friendly UI; if not, emit clean, machine-friendly output.
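One way to adapt progress output to the context is to render differently for the two modes, as in this sketch (the carriage-return technique rewrites the same terminal line; the non-TTY branch emits one parseable log line per event):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// renderProgress formats one progress update. On a TTY it rewrites the
// same line with a carriage return; otherwise it emits a plain log
// line, one event per line.
func renderProgress(isTTY bool, step, total int) string {
	if isTTY {
		return fmt.Sprintf("\rprocessing %d/%d", step, total)
	}
	return fmt.Sprintf("processing %d/%d\n", step, total)
}

func main() {
	tty := false
	if info, err := os.Stdout.Stat(); err == nil {
		tty = info.Mode()&os.ModeCharDevice != 0
	}
	for i := 1; i <= 3; i++ {
		// Progress is diagnostics, so it goes to stderr, not stdout.
		fmt.Fprint(os.Stderr, renderProgress(tty, i, 3))
		time.Sleep(10 * time.Millisecond)
	}
	if tty {
		fmt.Fprintln(os.Stderr) // finish the rewritten line
	}
}
```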

Consider batch mode as first-class. Provide a --config or --from-file flag to supply answers in bulk, and ensure prompts are only used when the necessary inputs are missing. This makes it possible to embed the tool in CI while still offering a smooth interactive path for first-time users.

Design for accessibility. Avoid color-only cues, keep prompts concise, and support --quiet or --json outputs for tooling. Also consider timeouts for prompts in automation, so the CLI fails fast rather than hanging indefinitely.

If you log progress in non-interactive mode, keep it line-based so it can be parsed and monitored.

How This Fits into the Projects

This chapter powers Project 3 (init wizard), Project 6 (TUI), and Project 10 (distribution with prompts and completions). It also influences Project 2 where interactive creation can be an optional mode.

Definitions & Key Terms

  • Interactive mode: CLI behavior that expects human input in real time.
  • Non-interactive mode: CLI behavior designed for scripts and automation.
  • Completion: Shell integration that suggests commands and flags.
  • Confirmation prompt: User confirmation before destructive actions.

Mental Model Diagram

           TTY? ---- yes ---> interactive prompts, spinners, colors
             |
             +---- no ---> plain output, no prompts, machine-friendly

How It Works (Step-by-Step)

  1. Detect if stdin/stdout are TTYs.
  2. If not TTY, disable prompts and interactive UI.
  3. Provide flags or config to supply required inputs.
  4. Offer --yes or --force to skip confirmations.
  5. Generate shell completions for discoverability.

Minimal Concrete Example

# Interactive wizard
$ init-wizard new
? Project name: demo
? Language: (Use arrow keys)
> Go
  Rust
  Python

# Non-interactive mode
$ init-wizard new --name demo --lang go --no-prompt

Common Misconceptions

  • “Prompts are always user-friendly.” -> Not in scripts.
  • “Completion is optional.” -> It is a major UX upgrade.

Check-Your-Understanding Questions

  1. When should a CLI avoid prompting the user?
  2. What flag patterns are commonly used to bypass prompts?
  3. Why is completion important even for experienced users?

Check-Your-Understanding Answers

  1. When stdin or stdout is not a TTY.
  2. --yes, --force, or --no-prompt.
  3. It speeds discovery and reduces memory load.

Real-World Applications

  • git clean refuses to delete untracked files unless you pass -f; package managers like apt-get use -y to skip confirmation.
  • npm init uses interactive prompts but supports --yes.
  • kubectl generates shell completion scripts.

Where You Will Apply It

  • Project 3: init-wizard
  • Project 6: system-monitor-tui
  • Project 10: distro-flow

References

  • CLI Guidelines (interactivity and prompts): https://clig.dev/

Key insight: Interactivity should enhance humans without blocking automation.

Summary

Interactive UX is powerful but dangerous if applied blindly. Good CLI design detects TTYs, offers non-interactive equivalents, and treats confirmation as a safety feature, not a nuisance.

Homework/Exercises to Practice the Concept

  1. Add a --yes flag to a small destructive command.
  2. Implement a --no-prompt flag that bypasses all prompts.
  3. Generate shell completion scripts for a tiny CLI.

Solutions to the Homework/Exercises

  1. Use a boolean flag and skip the prompt when it is true.
  2. If --no-prompt is set and required data is missing, fail fast.
  3. Output completion scripts and document how to enable them.

Chapter 5: Terminal Control, TUI Architecture, and Signals

Fundamentals

Terminal UIs (TUIs) are not just text output; they are real-time interactive programs that control the terminal state. To build them safely, you must understand ANSI escape codes, terminal modes (canonical vs raw input), alternate screen buffers, and signals like SIGWINCH for window resizing. Signals also matter for all CLI tools because they indicate interrupts and shutdowns (SIGINT, SIGTERM). Without proper handling, your tool leaves the terminal corrupted or loses data on exit.

Signals are the operating system’s way of telling you to stop, pause, or resize. Even non-TUI tools must respond predictably. If you ignore signals, you create data loss and terminal corruption.

Deep Dive into the Concept

The terminal is a stateful device. When you write to stdout, the terminal interprets special sequences like \x1b[2J to clear the screen or \x1b[H to move the cursor. These ANSI escape codes allow you to build full-screen interfaces, but they also create risk: if your program crashes without resetting the terminal, the user can be left with a broken shell (no echo, wrong colors, or cursor hidden). TUI frameworks hide much of this, but you still need to understand the underlying mechanics to debug and design effectively.
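The escape sequences mentioned above (plus a few others) can be seen in a short sketch. The constant names are mine; the byte sequences themselves are standard ANSI/VT100 control codes. The essential discipline is that every "enter" sequence is paired with a "leave" sequence before exit.

```go
package main

import "fmt"

// A few raw ANSI sequences (each starts with ESC = \x1b). Writing
// these to stdout changes terminal state directly, so every state
// change must be undone on exit.
const (
	clearScreen  = "\x1b[2J"     // erase the whole screen
	cursorHome   = "\x1b[H"      // move cursor to row 1, column 1
	hideCursor   = "\x1b[?25l"   // hide the cursor
	showCursor   = "\x1b[?25h"   // show the cursor again
	altScreenOn  = "\x1b[?1049h" // switch to the alternate screen buffer
	altScreenOff = "\x1b[?1049l" // restore the main buffer and its contents
)

func main() {
	fmt.Print(altScreenOn, hideCursor, clearScreen, cursorHome)
	fmt.Print("full-screen view")
	// Always undo global terminal state. In real code this belongs in
	// a defer so error paths and panics still restore the terminal.
	fmt.Print(showCursor, altScreenOff)
}
```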

Input handling is another challenge. In canonical mode, the terminal buffers input until the user presses Enter. In raw mode, you get each key press immediately. TUIs typically switch to raw mode to capture arrow keys, function keys, and shortcuts. This is a global change to the terminal state, so it must be reverted on exit. The safest pattern is to set up a defer or finally block that always restores the terminal settings, even on error. This is why proper signal handling is critical: you need to catch SIGINT and SIGTERM and clean up before exiting.
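The "always restore" discipline can be captured as a small helper, sketched here without any terminal dependency. In a real TUI the setup step would call term.MakeRaw from golang.org/x/term and the restore closure would call term.Restore; the stubs below stand in for those calls.

```go
package main

import "fmt"

// withRestore runs body after setup succeeds and guarantees that the
// returned restore function runs afterwards, even if body panics.
// With golang.org/x/term, setup would wrap term.MakeRaw(fd) and
// restore would wrap term.Restore(fd, oldState).
func withRestore(setup func() (restore func(), err error), body func()) error {
	restore, err := setup()
	if err != nil {
		return err
	}
	defer restore() // defer fires on normal return and on panic
	body()
	return nil
}

func main() {
	err := withRestore(
		func() (func(), error) {
			fmt.Println("enter raw mode (stub)")
			return func() { fmt.Println("restore terminal (stub)") }, nil
		},
		func() { fmt.Println("read keys...") },
	)
	if err != nil {
		fmt.Println("setup failed:", err)
	}
}
```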

Signals are asynchronous notifications from the OS. SIGINT is sent when the user presses Ctrl+C, SIGTERM when the system requests termination, and SIGWINCH when the terminal window changes size. A robust CLI registers handlers for these signals, sets flags or triggers cleanup, and exits with appropriate codes. For example, a TUI should catch SIGWINCH and recompute the layout; a streaming tool should flush buffers on SIGTERM. If you ignore signals, you risk corrupted state or partial output.

TUIs also require rendering discipline. The naive approach is to redraw the entire screen every frame. This causes flicker and performance issues. Instead, use a diffing approach: compute changes and update only the affected regions. Libraries like Bubble Tea or Ratatui implement these patterns, but you still need to design your rendering loop so that it is event-driven, not busy-waiting. Combine an event loop with timers, input events, and state updates. This is where the Model-View-Update (MVU) pattern helps: state changes trigger redraws, and the view is a pure function of state.

Finally, you must respect the terminal as a shared resource. Your program should avoid hard-coding terminal sizes and instead query the current dimensions. It should handle small terminals gracefully by truncating or simplifying output. When output is redirected to a file, you should avoid ANSI codes entirely. This is the same TTY detection principle from Chapter 2, but applied at a more complex level.

Signal handlers are constrained: you cannot safely do arbitrary work inside them. The safe pattern is to set a flag or write to a pipe and let your main loop handle cleanup. This prevents deadlocks and undefined behavior, especially in languages that have runtimes or garbage collectors.

Also consider SIGPIPE when your CLI writes to a pipe whose reader exits early (like head). You should let the process terminate quietly or handle the error to avoid noisy stack traces. This is a common source of confusion in pipelines.

How This Fits into the Projects

This chapter powers Project 4 (stream visualizer), Project 6 (system monitor TUI), and influences distribution in Project 10 (clean shutdown and signal safety).

Definitions & Key Terms

  • ANSI escape codes: Special sequences that control terminal behavior.
  • Raw mode: Input mode that delivers key presses immediately.
  • Alternate screen: A separate buffer used for full-screen apps.
  • SIGINT/SIGTERM/SIGWINCH: Common signals for interrupt, termination, and resize.

Mental Model Diagram

Terminal state
   |  (raw mode, alt screen, cursor)
   v
TUI event loop -> update state -> render diff -> write ANSI
   |
   v
signal handler -> cleanup -> restore terminal -> exit

How It Works (Step-by-Step)

  1. Enter alternate screen and raw mode.
  2. Start event loop (input + timers + system data).
  3. Render UI and update on events.
  4. Handle SIGWINCH by recalculating layout.
  5. On SIGINT/SIGTERM, restore terminal and exit.

Minimal Concrete Example

# Start a TUI
$ system-monitor-tui

# Resize the terminal and see the layout adjust

# Quit cleanly
Press q or Ctrl+C

Common Misconceptions

  • “ANSI is just color.” -> It controls cursor, screen, and input modes.
  • “Signals are only for crashes.” -> They are normal lifecycle events.

Check-Your-Understanding Questions

  1. Why is it risky to exit without restoring terminal state?
  2. What signal is sent when the terminal is resized?
  3. Why is raw mode necessary for TUIs?

Check-Your-Understanding Answers

  1. The user may be left with broken echo/cursor settings.
  2. SIGWINCH.
  3. It delivers keypresses without waiting for Enter.

Real-World Applications

  • top, htop, btop and other system monitors
  • Full-screen git interfaces like lazygit

Where You Will Apply It

  • Project 4: stream-viz
  • Project 6: system-monitor-tui

References

  • CLI Guidelines (TTY and paging): https://clig.dev/

Key insight: Terminal control is stateful; if you do not clean up, you harm the user’s shell.

Summary

TUIs require precise control of the terminal and careful signal handling. You must manage terminal modes, render efficiently, and always restore state on exit.

Homework/Exercises to Practice the Concept

  1. Write a program that enters raw mode, reads a key, and restores the terminal.
  2. Handle SIGWINCH and print the new terminal size.
  3. Build a minimal TUI that redraws only when state changes.

Solutions to the Homework/Exercises

  1. Use a terminal library or direct termios calls with cleanup in a defer block.
  2. On SIGWINCH, call ioctl or a library function to get new dimensions.
  3. Use a state variable and only render when it changes.

Chapter 6: Integration, Extensibility, and Distribution

Fundamentals

Many CLI tools are orchestration layers: they wrap other commands, call APIs, or load plugins. Integration requires safe execution (no shell injection), stable parsing of external output, and the ability to version and evolve interfaces. Extensibility introduces new challenges: discovery, isolation, and compatibility. Distribution is the final mile: users must be able to install, update, and trust your binary. Without distribution, your tool does not exist in practice.

Distribution also includes trust. Users need to know where a binary came from and how to verify it. This is why checksums, signatures, and reproducible builds matter, even for internal tools.

This chapter treats these concerns as part of the same lifecycle: getting data in, extending behavior safely, and delivering updates reliably.

Deep Dive into the Concept

Integration begins with command execution. The safest pattern is to execute commands directly with argument arrays, not by passing strings through a shell. This avoids injection attacks and quoting bugs. When you capture output, you must decide whether to parse human-readable text or machine-readable formats. Tools like git often provide porcelain or format options (--pretty, --porcelain) specifically for this reason. If you parse human output, you must expect it to change across versions or locales. This is why robust CLIs prefer to parse structured formats when available.
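A sketch of both halves of that advice: execute with an argument array (no shell), and parse the porcelain format into structured data. The porcelain v1 format is a two-character status code, a space, then the path; the parser here skips rename and quoting edge cases for brevity.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// statusEntry is one parsed line of `git status --porcelain` output,
// e.g. "?? notes.txt" -> {Code: "??", Path: "notes.txt"}.
type statusEntry struct {
	Code string
	Path string
}

// parsePorcelain splits porcelain v1 lines into structured entries.
// (Renames and quoted paths need more care; this is a sketch.)
func parsePorcelain(out string) []statusEntry {
	var entries []statusEntry
	for _, line := range strings.Split(out, "\n") {
		if len(line) < 4 {
			continue
		}
		entries = append(entries, statusEntry{Code: line[:2], Path: line[3:]})
	}
	return entries
}

func main() {
	// Argument array, not a shell string: filenames with spaces or
	// metacharacters cannot inject extra commands.
	cmd := exec.Command("git", "status", "--porcelain")
	out, err := cmd.Output()
	if err != nil {
		fmt.Println("git failed:", err)
		return
	}
	for _, e := range parsePorcelain(string(out)) {
		fmt.Printf("%s -> %s\n", e.Code, e.Path)
	}
}
```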

Extensibility introduces a plugin lifecycle. A common design is “executable plugins”: any executable named tool-foo found in a plugin directory becomes a subcommand tool foo. This allows a decoupled ecosystem and keeps plugins isolated in separate processes. Another pattern is RPC-based plugins using a defined protocol over stdin/stdout. This allows richer interaction but requires versioning and error handling. WebAssembly-based plugins are emerging as a middle ground: they run in a sandbox but still allow user-defined extensions.

Compatibility is the hardest part. Once you publish a plugin API, you must version it. That means including a protocol version in the handshake, rejecting incompatible versions, and providing compatibility layers when possible. If you do not do this, updates will break plugins and users will stop trusting upgrades. This is the same principle as semantic versioning for libraries, but applied to CLI plugin interfaces.
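A handshake check might apply a semver-style rule like the one sketched below: same major version required, and the plugin's minor version must not be newer than the host's. The exact policy is a design choice; this is one reasonable instance.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// compatible checks a plugin's reported protocol version against the
// host's: majors must match exactly, and the plugin must not require a
// newer minor version than the host implements.
func compatible(host, plugin string) bool {
	hMaj, hMin, ok1 := parseVersion(host)
	pMaj, pMin, ok2 := parseVersion(plugin)
	if !ok1 || !ok2 {
		return false
	}
	return hMaj == pMaj && pMin <= hMin
}

// parseVersion extracts major and minor from "MAJOR.MINOR[.PATCH]".
func parseVersion(v string) (major, minor int, ok bool) {
	parts := strings.SplitN(v, ".", 3)
	if len(parts) < 2 {
		return 0, 0, false
	}
	maj, err1 := strconv.Atoi(parts[0])
	min, err2 := strconv.Atoi(parts[1])
	return maj, min, err1 == nil && err2 == nil
}

func main() {
	// Host speaks protocol 1.4; a 1.2 plugin is fine, a 2.0 plugin is not.
	fmt.Println(compatible("1.4", "1.2")) // true
	fmt.Println(compatible("1.4", "2.0")) // false
}
```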

Distribution is where design meets reality. A great CLI still fails if users cannot install or update it. You should provide a clear installation method (package managers, curl install script, or downloads). You should also provide checksums or signatures to ensure integrity. Self-update is powerful but risky: you must handle atomic replacement, failure rollback, and platform-specific behavior. On Windows, replacing a running binary requires a rename dance or a separate updater process. On Unix, you cannot write into a running executable (the write fails with ETXTBSY); instead, write the new version to a temporary file on the same filesystem and atomically rename it over the old path, preserving permissions and ownership. These details are not optional if you want a reliable upgrade experience.

Shell completion is part of distribution. Shipping completion scripts improves adoption by making your tool feel native. Tools like kubectl, gh, and aws all ship completions. A good CLI includes a completion subcommand to generate scripts for bash, zsh, and fish. This also allows users to update completions automatically when they update the tool.

Finally, packaging is also about documentation. A user should be able to run tool --version, see where the binary came from, and know how to update it. The CLI should report its build metadata and update channel so that support is possible. Without this, debugging user issues becomes impossible.

Think about channels: stable vs nightly, or pinned vs latest. If your CLI auto-updates, allow users to opt out or pin a version. Provide --version with build metadata, and include a diagnose or doctor command that reports update source, config paths, and environment, so support is possible.

Security for plugins also matters. If you allow third-party code, document trust boundaries, run plugins with least privilege, and consider opt-in loading. A simple allowlist or signature check can prevent accidental execution of malicious binaries.

How This Fits into the Projects

This chapter drives Project 7 (git-insight integration), Project 8 (API CLI generation), Project 9 (plugin architecture), and Project 10 (distribution and updates).

Definitions & Key Terms

  • Command injection: Executing unintended commands by unsafely invoking a shell.
  • Porcelain output: Machine-readable output designed for parsing.
  • Plugin discovery: Finding and loading extensions automatically.
  • Self-update: Replacing the binary from within itself.

Mental Model Diagram

CLI core
  |
  +--> external command -> parse output -> structured data
  |
  +--> plugin discovery -> handshake -> execute -> result
  |
  +--> distribution -> release -> update -> verification

How It Works (Step-by-Step)

  1. Execute external tools with argument arrays (no shell).
  2. Parse structured output formats when available.
  3. Discover plugins in a well-defined directory.
  4. Negotiate protocol versions before execution.
  5. Distribute binaries with checksums and update metadata.

Minimal Concrete Example

# Plugin discovery pattern
$ plug-master --help
Available plugins:
  plug-master-hello
  plug-master-lint

# Self-update
$ distro-flow update
Checking for updates... v1.4.2 -> v1.5.0
Downloaded, verified checksum, replaced binary.

Common Misconceptions

  • “Parsing human output is fine.” -> It is brittle across versions/locales.
  • “Plugins can run in-process safely.” -> They can crash or corrupt state.

Check-Your-Understanding Questions

  1. Why should you avoid passing command strings through a shell?
  2. What is the benefit of plugin version negotiation?
  3. Why is self-update risky without verification?

Check-Your-Understanding Answers

  1. It prevents command injection and quoting bugs.
  2. It prevents incompatible plugins from crashing the host.
  3. It can install corrupted or malicious binaries.

Real-World Applications

  • git porcelain formats
  • kubectl plugin system
  • gh extensions

Where You Will Apply It

  • Project 7: git-insight
  • Project 8: api-forge
  • Project 9: plug-master
  • Project 10: distro-flow

References

  • CLI Guidelines (composability and output): https://clig.dev/

Key insight: A CLI that integrates and distributes well becomes a platform, not just a tool.

Summary

Integration, extensibility, and distribution are where CLIs scale beyond personal scripts. Safe execution, structured output, plugin isolation, and reliable updates are the pillars of this chapter.

Homework/Exercises to Practice the Concept

  1. Wrap git status --porcelain and parse it into a JSON summary.
  2. Implement a plugin discovery mechanism using executable naming.
  3. Create a mock update flow that verifies a checksum.

Solutions to the Homework/Exercises

  1. Use --porcelain and split lines into a structured object.
  2. Scan a directory for tool-* executables and map to subcommands.
  3. Download a file, compute checksum, compare before replace.

Glossary

  • CLI: Command-line interface; a tool controlled by text commands.
  • POSIX: Standard that defines Unix-like behavior and conventions.
  • TTY: Terminal device supporting interactive input/output.
  • Exit code: Integer status returned by a process (0 = success).
  • XDG: Standard directory layout for config, data, cache.
  • ANSI escape codes: Control sequences for terminal formatting.
  • SIGINT/SIGTERM: Signals for interrupt and termination.
  • Subcommand: Nested command under a main command.

Why CLI Tool Design Matters

The Modern Problem It Solves

The command line is still the backbone of automation, DevOps, and developer tooling. A well-designed CLI becomes a reliable building block for scripts, CI pipelines, and daily workflows. A poorly designed CLI breaks pipelines, hides errors, and creates brittle automation.

Real-world impact (recent stats):

  • Docker usage (2024): Docker is used by 59% of professional developers, making it the top “other tool” in the 2024 Stack Overflow Developer Survey. Source: https://survey.stackoverflow.co/2024
  • Docker usage in cloud tooling (2025): Docker reached 71% usage among cloud development and infrastructure technologies in 2025. Source: https://stackoverflow.co/company/press/archive/stack-overflow-2025-developer-survey/
  • Collaboration tooling (2025): GitHub is the most popular code documentation and collaboration tool at 81% usage. Source: https://stackoverflow.co/company/press/archive/stack-overflow-2025-developer-survey/

These tools are CLI-heavy, which means the quality of CLI design directly affects productivity for a huge portion of developers.

Bad CLI                         Good CLI
+------------------+            +------------------+
| unclear output   |            | clear stdout     |
| breaks pipes     |            | clean stderr     |
| no help          |            | helpful --help   |
| random flags     |            | consistent flags |
+------------------+            +------------------+

Context & Evolution (Short History)

Unix tooling evolved around composability: small tools that do one thing well, linked together by pipes. Modern CLIs add richer UX, JSON output, and interactive experiences, but they still live in the same pipeline model. The core principles remain: predictable grammar, clear output, and script-friendly behavior.


Concept Summary Table

  • Execution Model and Command Grammar: Shell parsing, POSIX conventions, and designing a predictable command tree.
  • Streams, TTYs, Output Modes, Errors: Correct stdout/stderr separation, exit codes, and TTY-aware output.
  • Configuration, State, Secrets: XDG directories, config precedence, and secure handling of sensitive data.
  • Interactive UX and Safety: Prompts, non-interactive modes, confirmations, and completion.
  • Terminal Control and Signals: ANSI escape codes, raw mode, SIGINT/SIGWINCH handling.
  • Integration, Extensibility, Distribution: External command parsing, plugins, and safe updates.

Project-to-Concept Map

  • Project 1 (minigrep-plus): Grep-like search tool. Uses primer Chapters 1, 2.
  • Project 2 (task-nexus): Task manager CLI. Uses Chapters 1, 3.
  • Project 3 (init-wizard): Interactive scaffolding wizard. Uses Chapters 4, 3.
  • Project 4 (stream-viz): Streaming visualizer. Uses Chapters 2, 5.
  • Project 5 (env-vault): Secrets + env runner. Uses Chapters 3, 2.
  • Project 6 (system-monitor-tui): Full-screen system monitor. Uses Chapters 5, 4, 2.
  • Project 7 (git-insight): Git summary integration. Uses Chapters 6, 2.
  • Project 8 (api-forge): OpenAPI to CLI generator. Uses Chapters 1, 3, 6.
  • Project 9 (plug-master): Plugin architecture. Uses Chapter 6.
  • Project 10 (distro-flow): Distribution + updates. Uses Chapters 6, 4.

Deep Dive Reading by Concept

Fundamentals and Grammar

  • Shell parsing and argv: The Linux Command Line by Shotts, Ch. 6 (I/O Redirection) and Ch. 7 (Shell Expansion). Shell parsing determines the argv your CLI receives.
  • POSIX conventions: Advanced Programming in the UNIX Environment by Stevens/Rago, Ch. 2 (Standardization). Explains standards and conventions that shape CLI behavior.

Streams, Errors, and Process Control

  • File I/O and streams: The Linux Programming Interface by Kerrisk, Ch. 4 (File I/O). Core knowledge for stdin/stdout/stderr behavior.
  • Process exit and signals: Advanced Programming in the UNIX Environment by Stevens/Rago, Ch. 10 (Signals). Helps with SIGINT/SIGTERM handling.

Configuration and Environment

  • Environment variables: The Linux Command Line by Shotts, Ch. 11 (The Environment). Explains env precedence and scripting behavior.
  • Secure coding: Effective C by Seacord, Ch. 5 (Error Handling). Helps build safe config parsing and validation.

Terminal UX

  • Terminal I/O: Advanced Programming in the UNIX Environment by Stevens/Rago, Ch. 18 (Terminal I/O). Explains raw mode, line discipline, and terminal state.
  • UX design: The Pragmatic Programmer by Hunt/Thomas, Ch. 3 (The Cat Ate My Source Code). Encourages defensive design and clear feedback loops.

Extensibility and Distribution

  • Software architecture: Fundamentals of Software Architecture by Richards/Ford, Ch. 6 (Modularity). Helps design plugin boundaries.
  • Release engineering: The Pragmatic Programmer, Ch. 8 (Delight Your Users). Focuses on distribution and user experience.

Quick Start: Your First 48 Hours

Day 1 (4 hours):

  1. Read Chapter 1 and Chapter 2 in the Theory Primer.
  2. Skim the CLI Guidelines site and POSIX Utility Conventions.
  3. Build Project 1 (minigrep-plus) with basic flags and exit codes.

Day 2 (4 hours):

  1. Read Chapter 3 and Chapter 4.
  2. Build Project 2 (task-nexus) with add and list.
  3. Add a --json output mode to one of the commands.

End of 48 hours: You can design a minimal but professional CLI and explain why stdout/stderr separation matters. This is 80% of the mental model.


Path 1: The Pragmatist (DevOps/SRE)

  • Project 1 -> Project 2 -> Project 5 -> Project 7 -> Project 10

Path 2: The Systems Engineer

  • Project 1 -> Project 4 -> Project 6 -> Project 9

Path 3: The Product Builder

  • Project 3 -> Project 6 -> Project 8 -> Project 10

Path 4: The Completionist

  • Project 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8 -> 9 -> 10

Success Metrics

  • Every command has accurate --help output.
  • stdout is machine-readable when asked; stderr contains only diagnostics.
  • CLI behaves correctly in pipes and non-interactive contexts.
  • Config precedence is documented and deterministic.
  • TUI exits cleanly without corrupting terminal state.
  • All projects meet their Definition of Done checklists.

CLI Tooling Appendix: Checklists and Debugging

Configuration Precedence Checklist

  • Defaults -> config file -> environment variables -> CLI flags
  • Provide config show --effective
  • Validate and fail fast on invalid values
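The precedence chain above can be sketched as a layered merge, where each later layer overrides earlier ones key by key (a minimal sketch; real tools merge typed config structs rather than string maps):

```go
package main

import "fmt"

// effectiveConfig applies the standard precedence chain:
// defaults < config file < environment < flags.
func effectiveConfig(layers ...map[string]string) map[string]string {
	merged := map[string]string{}
	for _, layer := range layers {
		for k, v := range layer {
			merged[k] = v // later layers win
		}
	}
	return merged
}

func main() {
	defaults := map[string]string{"color": "auto", "format": "table"}
	file := map[string]string{"format": "json"}
	env := map[string]string{"color": "never"}
	flags := map[string]string{}

	cfg := effectiveConfig(defaults, file, env, flags)
	// This merged view is what `config show --effective` would print.
	fmt.Println(cfg["color"], cfg["format"])
}
```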

Output Behavior Checklist

  • stdout is data
  • stderr is diagnostics
  • --json or --machine mode when output is structured
  • --color=auto|always|never and respect NO_COLOR

Signal Handling Checklist

  • Handle SIGINT and SIGTERM
  • Restore terminal state on exit
  • Handle SIGWINCH for TUIs

Debugging Tools

  • strace or dtruss for syscalls
  • script to capture terminal sessions
  • stdbuf to debug buffering
  • tput to inspect terminal capabilities

Project Overview Table

  • 1. minigrep-plus: Flags, streams, exit codes (Beginner, weekend; Chapters 1, 2)
  • 2. task-nexus: Subcommands, config, storage (Intermediate, 1 week; Chapters 1, 3)
  • 3. init-wizard: Interactive prompts, UX (Intermediate, 1 week; Chapters 4, 3)
  • 4. stream-viz: Streaming output, signals (Advanced, 1 week; Chapters 2, 5)
  • 5. env-vault: Secrets and env runner (Intermediate, 1 week; Chapters 3, 2)
  • 6. system-monitor-tui: Full TUI + rendering (Advanced, 2 weeks; Chapters 5, 4, 2)
  • 7. git-insight: External command parsing (Intermediate, weekend; Chapters 6, 2)
  • 8. api-forge: Code generation + API UX (Advanced, 2 weeks; Chapters 1, 3, 6)
  • 9. plug-master: Plugin architecture (Expert, 3-4 weeks; Chapter 6)
  • 10. distro-flow: Packaging and updates (Advanced, 1 week; Chapters 6, 4)

Project List

Project 1: minigrep-plus (The Foundation)

  • Main Programming Language: Rust
  • Alternative Programming Languages: Go, Python
  • Coolness Level: Level 2: Practical but forgettable
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 1: Beginner
  • Knowledge Area: Streams and argument parsing
  • Software or Tool: Clap / grep
  • Main Book: The Rust Programming Language (Ch. 12)

What you will build: A grep-like tool that searches files or stdin for patterns, supports case-insensitive mode, and prints colorized matches only when output is a TTY.

Why it teaches CLI design: It forces correct handling of stdin/stdout/stderr, exit codes, and option parsing. It is the smallest project that reveals the difference between human and machine contexts.

Core challenges you will face:

  • Argument parsing: Cleanly separate flags from positionals.
  • Streaming: Avoid loading entire files into memory.
  • TTY awareness: Color only when appropriate.

Real World Outcome

You will have a tool that behaves like a real Unix utility and integrates cleanly in pipes.

Command Line Outcome Example:

# Human-friendly output
$ minigrep "fn main" src/main.rs --ignore-case
1: fn main() {
42:     // calling fn main again

# Pipe output (no colors)
$ minigrep "fn main" src/main.rs --ignore-case | wc -l
2

# Exit code behavior
$ minigrep "notfound" src/main.rs; echo $?
1

The Core Question You’re Answering

“How do I design output that is useful both to humans and to scripts?”

The answer is correct use of streams, TTY detection, and exit codes.

Concepts You Must Understand First

  1. Standard Streams
    • What is the difference between stdout and stderr?
    • Why does stderr not get piped by default?
    • Book Reference: The Linux Programming Interface - Ch. 4
  2. TTY Detection
    • What does isatty() return and why?
    • When should you disable colors?
    • Book Reference: Advanced Programming in the UNIX Environment - Ch. 18
  3. Exit Codes
    • What does exit code 1 mean in grep?
    • How do scripts check exit status?
    • Book Reference: The Linux Programming Interface - process exit chapters

Questions to Guide Your Design

  1. How will you treat stdin when no file is provided?
  2. Should a “no matches” case be considered success or failure?
  3. How will you avoid coloring output when piping to another tool?

Thinking Exercise

The Pipeline Thought Experiment

Imagine your tool is used like this:

$ minigrep "ERROR" app.log | head -n 5 | awk '{print $1}'
  • Where does color output break this pipeline?
  • Which stream should errors go to?

The Interview Questions They’ll Ask

  1. “Why should stdout and stderr be separate?”
  2. “How does isatty() help CLI design?”
  3. “Why does grep return exit code 1 when no match is found?”
  4. “What happens if you print color codes into a pipe?”

Hints in Layers

Hint 1: Start simple

  • Parse pattern and file as positional arguments.

Hint 2: Add flags

use clap::Parser;

#[derive(Parser)]
struct Args {
    /// Pattern to search for (positional)
    pattern: String,
    /// File to search; read stdin when omitted (positional)
    file: Option<String>,
    /// Case-insensitive matching
    #[arg(long)]
    ignore_case: bool,
}

Hint 3: TTY detection

use std::io::IsTerminal; // stable since Rust 1.70

let is_tty = std::io::stdout().is_terminal();
// older toolchains used the atty crate: atty::is(atty::Stream::Stdout)

Hint 4: Verify with pipes

$ minigrep "fn" src/main.rs | cat

Books That Will Help

  • Streams: The Linux Programming Interface, Ch. 4
  • Terminal I/O: Advanced Programming in the UNIX Environment, Ch. 18
  • CLI basics: The Linux Command Line, Ch. 6

Common Pitfalls & Debugging

Problem 1: “Color codes show up in piped output”

  • Why: You always enable color output.
  • Fix: Use TTY detection and NO_COLOR.
  • Quick test: minigrep ... | cat should be plain.

Problem 2: “Exit code always 0”

  • Why: You never propagate the match status.
  • Fix: Return 1 when no matches are found.
  • Quick test: minigrep "nope" file; echo $? should return 1.

Definition of Done

  • Supports search in file or stdin
  • Handles --ignore-case
  • Prints to stdout and errors to stderr
  • Colors only when stdout is a TTY
  • Correct exit codes (0 = found, 1 = not found, 2 = error)

Project 2: task-nexus (Subcommands and State)

  • Main Programming Language: Go
  • Alternative Programming Languages: Rust, Python
  • Coolness Level: Level 3: Useful daily tool
  • Business Potential: 2. Micro-SaaS potential
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Subcommands + configuration
  • Software or Tool: Cobra / Viper or Clap / config
  • Main Book: The Linux Command Line (Ch. 11)

What you will build: A task manager CLI with add, list, done, and delete subcommands, backed by a local JSON or SQLite store.

Why it teaches CLI design: It forces a real command hierarchy, config precedence, and state persistence.

Core challenges you will face:

  • Subcommand structure: Designing a clean verb-noun model.
  • Config layering: Defaults vs env vs flags.
  • Storage format: JSON vs SQLite trade-offs.

Real World Outcome

$ task-nexus add "Write CLI primer" --project cli --due 2026-01-15
Added task #14

$ task-nexus list --project cli --format table
ID  Done  Due        Project  Title
14  [ ]   2026-01-15 cli      Write CLI primer

$ task-nexus done 14
Marked task #14 as done

$ task-nexus list --format json | jq '.tasks[0].done'
true

The Core Question You’re Answering

“How do I design a command tree and config system that scales as features grow?”

Concepts You Must Understand First

  1. Command Hierarchy
    • Should commands be verbs or nouns?
    • How do you group related actions?
    • Book Reference: Advanced Programming in the UNIX Environment - Ch. 2
  2. Configuration Precedence
    • What overrides what?
    • How do you surface effective config?
    • Book Reference: The Linux Command Line - Ch. 11
  3. State Storage
    • When is JSON enough? When use SQLite?
    • Book Reference: The Linux Programming Interface - Ch. 4

Questions to Guide Your Design

  1. What is the minimal subcommand set that still feels complete?
  2. How do you ensure output is stable for scripts?
  3. How do you prevent accidental deletion of tasks?

Thinking Exercise

Design the command tree for a task manager that can handle projects, priorities, and tags. Draw it as:

task-nexus
  add
  list
  done
  delete
  project
    create
    list

The Interview Questions They’ll Ask

  1. “Why use subcommands instead of flags?”
  2. “How do you handle config precedence?”
  3. “When do you use JSON vs SQLite for state?”

Hints in Layers

Hint 1: Start with JSON storage

  • Use a single file in ~/.local/share/task-nexus/tasks.json.

Hint 2: Add config loading

cfg, _ := os.UserConfigDir() // ~/.config on Linux, honoring XDG_CONFIG_HOME

Hint 3: Add --format flag

  • Support table and json output.

Hint 4: Add task-nexus config show

  • Print effective config values.
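
Hints 2 and 4 hinge on a single precedence rule: flag beats env, env beats config file, config file beats built-in default. A minimal Go sketch of that rule (the env var name is an illustrative choice):

```go
package main

import (
	"fmt"
	"os"
)

// resolve applies the common precedence order: flag > environment > config
// file > default. An empty string means "not set" at that layer.
func resolve(flagVal, envKey, fileVal, def string) string {
	if flagVal != "" {
		return flagVal
	}
	if v, ok := os.LookupEnv(envKey); ok && v != "" {
		return v
	}
	if fileVal != "" {
		return fileVal
	}
	return def
}

func main() {
	os.Setenv("TASK_NEXUS_FORMAT", "json")
	// No --format flag was given, so the env var wins over file and default.
	fmt.Println(resolve("", "TASK_NEXUS_FORMAT", "table", "table")) // json
}
```

A `config show` subcommand then only needs to call `resolve` for every key and print the winning value next to the layer it came from.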

Books That Will Help

Topic            | Book                            | Chapter
Environment vars | The Linux Command Line          | Ch. 11
File I/O         | The Linux Programming Interface | Ch. 4
Architecture     | Clean Architecture              | Ch. 1

Common Pitfalls & Debugging

Problem 1: “Config overrides do not work”

  • Why: Layers are merged in the wrong order, so a later layer (for example env) silently overwrites the flag value.
  • Fix: Apply precedence in the correct order: defaults, then config file, then env, then flags.
  • Quick test: Set env var and ensure it overrides config.

Problem 2: “Tasks disappear after restart”

  • Why: Data not flushed to disk.
  • Fix: Write file atomically or use SQLite.
  • Quick test: Add task, restart, list again.

Definition of Done

  • Subcommands: add/list/done/delete
  • Config precedence implemented
  • Data stored in XDG data dir
  • JSON and table output supported
  • config show displays effective config

Project 3: init-wizard (Interactive Scaffolding)

  • Main Programming Language: Go
  • Alternative Programming Languages: Rust, Python
  • Coolness Level: Level 3: Fun and useful
  • Business Potential: 3. Indie tool potential
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Interactive UX
  • Software or Tool: Bubble Tea / Inquirer
  • Main Book: The Pragmatic Programmer

What you will build: An interactive project scaffolding CLI that asks questions, generates files, and supports non-interactive flags.

Why it teaches CLI design: It forces you to handle TTY detection, prompts, and safe defaults without breaking scripts.

Core challenges you will face:

  • Interactive vs non-interactive modes
  • Validation of user input
  • Template rendering and file generation

Real World Outcome

$ init-wizard new
? Project name: demo
? Language: Go
? License: MIT
Scaffolded ./demo

$ init-wizard new --name demo --lang go --license mit --no-prompt
Scaffolded ./demo

The Core Question You’re Answering

“How do I make a CLI friendly to humans without blocking automation?”

Concepts You Must Understand First

  1. TTY detection
    • When should prompts be disabled?
    • Book Reference: Advanced Programming in the UNIX Environment - Ch. 18
  2. Validation
    • How do you surface validation errors quickly?
    • Book Reference: Effective C - Ch. 5
  3. Config parity
    • Can every prompt be represented as a flag?
    • Book Reference: The Pragmatic Programmer - Ch. 3

Questions to Guide Your Design

  1. What happens if stdin is a pipe?
  2. How will users provide defaults without prompts?
  3. Should the wizard overwrite existing directories?

Thinking Exercise

Write a prompt flow that asks for language and license, but allows --lang and --license flags to skip prompts.

The Interview Questions They’ll Ask

  1. “How do you decide when to prompt?”
  2. “Why is non-interactive mode important?”
  3. “How do you validate user input in a wizard?”

Hints in Layers

Hint 1: Check for TTY

if !isatty.IsTerminal(os.Stdin.Fd()) { noPrompt = true }

Hint 2: Provide defaults

  • Use --name and --lang flags.

Hint 3: Validate early

  • Reject invalid project names before file creation.

Hint 4: Safe overwrite

  • Require --force to overwrite existing directories.
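
The hints compose into one prompt-or-flag helper: a flag value always wins, and prompting only happens when it is allowed. A Go sketch; the name-validation regex is an illustrative starting point, not a complete rule set.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// validName rejects leading dots and path separators, blocking inputs like
// "../../" before any files are created.
var validName = regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9._-]*$`)

// askOrFlag returns the flag value when set; otherwise it prompts, but only
// when prompting is allowed (stdin is a TTY and --no-prompt was not given).
func askOrFlag(flagVal, question string, canPrompt bool) (string, error) {
	if flagVal != "" {
		return flagVal, nil
	}
	if !canPrompt {
		return "", fmt.Errorf("%s not provided and prompts are disabled", question)
	}
	fmt.Printf("? %s: ", question)
	sc := bufio.NewScanner(os.Stdin)
	sc.Scan()
	return sc.Text(), sc.Err()
}

func main() {
	// Non-interactive path: the flag is supplied, so no prompt fires.
	name, err := askOrFlag("demo", "Project name", false)
	if err != nil || !validName.MatchString(name) {
		fmt.Fprintln(os.Stderr, "invalid project name")
		os.Exit(2)
	}
	fmt.Println("name:", name)
}
```

The key property is that every question has a flag twin, so a CI pipeline can drive the wizard entirely without a terminal.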

Books That Will Help

Topic                 | Book                                         | Chapter
Terminal input        | Advanced Programming in the UNIX Environment | Ch. 18
Defensive programming | The Pragmatic Programmer                     | Ch. 3

Common Pitfalls & Debugging

Problem 1: “CLI hangs in CI”

  • Why: Prompting with no TTY.
  • Fix: Disable prompts when stdin is not a TTY.
  • Quick test: echo "" | init-wizard new should not hang.

Problem 2: “Invalid names create broken projects”

  • Why: No validation on input.
  • Fix: Validate against regex and reserved words.
  • Quick test: Try --name "../../".

Definition of Done

  • Prompts work in TTY
  • --no-prompt mode works
  • Validation errors are clear
  • Templates generated correctly
  • Safe overwrite with --force

Project 4: stream-viz (Streaming Visualizer)

  • Main Programming Language: Rust
  • Alternative Programming Languages: Go, Python
  • Coolness Level: Level 3: Genuinely clever
  • Business Potential: 2. Internal tooling
  • Difficulty: Level 3: Advanced
  • Knowledge Area: Streaming and signals
  • Software or Tool: Tokio / async I/O
  • Main Book: The Linux Programming Interface (signals)

What you will build: A CLI that reads stdin, computes throughput and line metrics, and renders a live updating view.

Why it teaches CLI design: It forces you to respect streaming data, TTY detection, and signal handling.

Core challenges you will face:

  • Non-blocking reads
  • Periodic updates without blocking input
  • Graceful shutdown on SIGINT

Real World Outcome

$ yes "ping" | stream-viz --rate --lines
Rate: 120000 lines/s | Total: 7,234,511

# Ctrl+C
^C
Final summary: 7,234,511 lines, avg 118,900 lines/s

The Core Question You’re Answering

“How do I build a streaming CLI that is both real-time and script-friendly?”

Concepts You Must Understand First

  1. Streaming I/O
    • Why should you read in chunks?
    • Book Reference: The Linux Programming Interface - Ch. 4
  2. TTY vs pipe behavior
    • When should you use live updates?
    • Book Reference: The Linux Command Line - Ch. 6
  3. Signals
    • How do you catch SIGINT?
    • Book Reference: Advanced Programming in the UNIX Environment - Ch. 10

Questions to Guide Your Design

  1. Should progress output go to stderr or stdout?
  2. What happens if stdin is slow?
  3. How do you avoid flickering output?

Thinking Exercise

Sketch a loop that reads from stdin while updating a counter every second.

The Interview Questions They’ll Ask

  1. “How do you handle Ctrl+C in a streaming CLI?”
  2. “Why not use blocking reads?”
  3. “How do you avoid corrupting output when piped?”

Hints in Layers

Hint 1: Use buffered reads

  • Read stdin in chunks (8-64 KB).

Hint 2: Separate output streams

  • Write progress to stderr, data to stdout.

Hint 3: Handle SIGINT

  • Set a flag and exit cleanly after flushing.

Hint 4: TTY detection

  • Disable live updates when stdout is not a TTY.

Books That Will Help

Topic         | Book                                         | Chapter
Streaming I/O | The Linux Programming Interface              | Ch. 4
Signals       | Advanced Programming in the UNIX Environment | Ch. 10

Common Pitfalls & Debugging

Problem 1: “Progress output breaks pipes”

  • Why: Writing progress to stdout.
  • Fix: Write progress to stderr.
  • Quick test: seq 100 | stream-viz | wc -l should print 100.

Problem 2: “Ctrl+C leaves terminal corrupted”

  • Why: No cleanup on SIGINT.
  • Fix: Restore terminal state in handler.
  • Quick test: After Ctrl+C, the prompt should be normal.

Definition of Done

  • Handles streaming input without memory blowup
  • Live updates only when TTY
  • Clean SIGINT shutdown
  • Final summary printed on exit

Project 5: env-vault (Secrets and Env Runner)

  • Main Programming Language: Go
  • Alternative Programming Languages: Rust, Python
  • Coolness Level: Level 3: Practical security
  • Business Potential: 3. SaaS tooling
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Config + secrets
  • Software or Tool: AES/GPG or OS keychain
  • Main Book: Effective C

What you will build: A CLI that stores secrets securely and runs commands with injected environment variables.

Why it teaches CLI design: It forces correct handling of sensitive data and safe configuration behavior.

Core challenges you will face:

  • Secret storage
  • Safe injection into child processes
  • Avoiding leaks in logs

Real World Outcome

$ env-vault set API_KEY
Enter secret: ********
Saved secret for API_KEY

$ env-vault run -- ./deploy.sh
Running ./deploy.sh with 3 secrets injected

$ env-vault list
API_KEY
DB_PASSWORD
TOKEN

The Core Question You’re Answering

“How can a CLI handle secrets without leaking them into history or logs?”

Concepts You Must Understand First

  1. Config and state locations
    • Where should secrets live?
    • Book Reference: The Linux Programming Interface - Ch. 4
  2. Environment variables
    • How does a child process inherit env?
    • Book Reference: The Linux Command Line - Ch. 11
  3. Secure handling
    • Why not log secrets?
    • Book Reference: Effective C - Ch. 5

Questions to Guide Your Design

  1. Should secrets be stored encrypted or delegated to OS keychain?
  2. How will you protect against accidental prints?
  3. How do you allow non-interactive usage?

Thinking Exercise

Design a flow where a user runs env-vault run and secrets are only visible to the child process.

The Interview Questions They’ll Ask

  1. “Why are CLI flags unsafe for secrets?”
  2. “How do you avoid leaking secrets in logs?”
  3. “How do you pass env to child processes safely?”

Hints in Layers

Hint 1: Start with file encryption

  • Store secrets in encrypted JSON under XDG data dir.

Hint 2: Read secrets from stdin

  • Avoid passing them in flags.

Hint 3: Use exec or Command

  • Inject env in child process only.

Hint 4: Redact logs

  • Replace secret values with **** when printing.
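
Hints 3 and 4 can be sketched together: the secret is appended to the child's environment only, and anything the tool itself prints goes through a redaction helper. The secret value and the child command are illustrative, and the demo assumes a POSIX `sh` is available.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// redact masks a secret for any log or display output.
func redact(secret string) string { return "****" }

// runWithSecrets executes the child with secrets appended to its environment.
// The values never touch this process's own environment or its output.
func runWithSecrets(name string, args []string, secrets map[string]string) error {
	cmd := exec.Command(name, args...)
	env := os.Environ()
	for k, v := range secrets {
		env = append(env, k+"="+v)
	}
	cmd.Env = env
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	secrets := map[string]string{"API_KEY": "s3cret"}
	fmt.Println("injecting API_KEY =", redact(secrets["API_KEY"]))
	// The child checks that API_KEY is set without ever printing its value.
	err := runWithSecrets("sh", []string{"-c", `test -n "$API_KEY"`}, secrets)
	fmt.Println("child saw API_KEY:", err == nil)
}
```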

Books That Will Help

Topic         | Book                   | Chapter
Environment   | The Linux Command Line | Ch. 11
Secure coding | Effective C            | Ch. 5

Common Pitfalls & Debugging

Problem 1: “Secrets appear in shell history”

  • Why: Passing secrets as flags.
  • Fix: Use stdin or env vars.
  • Quick test: history | grep SECRET should return nothing.

Problem 2: “Child process missing env”

  • Why: Forgot to pass env map.
  • Fix: Use cmd.Env = append(os.Environ(), ...).
  • Quick test: Run env-vault run -- env | grep API_KEY.

Definition of Done

  • Secrets are stored securely
  • Secrets are never printed to stdout/stderr
  • run injects env correctly
  • Non-interactive usage supported

Project 6: system-monitor-tui (Full-Screen Dashboard)

  • Main Programming Language: Go
  • Alternative Programming Languages: Rust
  • Coolness Level: Level 4: Hardcore tech flex
  • Business Potential: 4. SaaS monitoring
  • Difficulty: Level 4: Expert
  • Knowledge Area: TUI architecture
  • Software or Tool: Bubble Tea / Ratatui
  • Main Book: Advanced Programming in the UNIX Environment

What you will build: A full-screen system monitor showing CPU, memory, and process info with keyboard navigation.

Why it teaches CLI design: It pushes you into terminal control, raw input, and efficient rendering.

Core challenges you will face:

  • Event loop design
  • Layout and resizing
  • Performance under frequent updates

Real World Outcome

+----------------- System Monitor -----------------+
| CPU [################----] 68%                   |
| RAM [##################--] 86%                   |
+----------------- Top Processes ------------------+
| PID   CMD            CPU%   MEM%                 |
| 1223  node           15.2   4.1                  |
| 887   postgres       10.4   3.8                  |
+--------------------------------------------------+
[q] Quit  [r] Refresh  [k] Kill

The Core Question You’re Answering

“How do I maintain a live terminal UI without breaking the terminal state?”

Concepts You Must Understand First

  1. Terminal raw mode
    • Why switch to raw mode?
    • Book Reference: Advanced Programming in the UNIX Environment - Ch. 18
  2. Signals and resize
    • What happens on SIGWINCH?
    • Book Reference: Advanced Programming in the UNIX Environment - Ch. 10
  3. Rendering performance
    • Why diff-based rendering?
    • Book Reference: Clean Code - Ch. 3

Questions to Guide Your Design

  1. How will you handle window size changes?
  2. How often should you redraw the screen?
  3. What happens if the terminal is too small?

Thinking Exercise

Sketch a render loop that updates CPU usage every second without flickering.

The Interview Questions They’ll Ask

  1. “What is raw mode and why do TUIs need it?”
  2. “How do you handle SIGWINCH?”
  3. “Why does flicker happen and how do you prevent it?”

Hints in Layers

Hint 1: Use a TUI framework

  • Bubble Tea or Ratatui manage raw mode for you.

Hint 2: Event loop structure

  • Separate update and render steps.

Hint 3: Resize handling

  • Subscribe to SIGWINCH and recalc layout.

Hint 4: Limit redraw rate

  • Cap at 30-60 FPS.

Books That Will Help

Topic        | Book                                         | Chapter
Terminal I/O | Advanced Programming in the UNIX Environment | Ch. 18
Signals      | Advanced Programming in the UNIX Environment | Ch. 10

Common Pitfalls & Debugging

Problem 1: “Terminal is broken after exit”

  • Why: Raw mode not restored.
  • Fix: Ensure cleanup on exit and signals.
  • Quick test: After exit, typing should echo normally.

Problem 2: “UI flickers heavily”

  • Why: Full redraw each frame.
  • Fix: Use diff-based rendering.
  • Quick test: CPU usage display should look stable.

Definition of Done

  • Full-screen TUI renders correctly
  • Handles resize events
  • Clean exit restores terminal
  • Update loop is stable and efficient

Project 7: git-insight (Composability and Parsing)

  • Main Programming Language: Go
  • Alternative Programming Languages: Rust
  • Coolness Level: Level 3: Genuinely clever
  • Business Potential: 2. Internal tooling
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Integration and parsing
  • Software or Tool: git
  • Main Book: The Linux Command Line

What you will build: A CLI that summarizes repository activity (top contributors, churn, files changed) using git porcelain output.

Why it teaches CLI design: It forces safe command execution and robust parsing.

Core challenges you will face:

  • Command execution safety
  • Parsing output formats
  • Performance with large repos

Real World Outcome

$ git-insight summary
Repo: my-app
Commits (30d): 142
Top authors:
  alice  48
  bob    31

$ git-insight churn --json | jq '.files[0]'
{"file":"src/main.go","lines_added":210,"lines_removed":95}

The Core Question You’re Answering

“How do I integrate with external tools safely and predictably?”

Concepts You Must Understand First

  1. Safe command execution
    • Why not use shell strings?
    • Book Reference: The Linux Programming Interface - process execution chapters
  2. Structured output
    • Why parse porcelain output?
    • Book Reference: The Linux Command Line - Ch. 6

Questions to Guide Your Design

  1. How do you avoid shell injection?
  2. What git flags produce stable output?
  3. How do you handle errors from git?

Thinking Exercise

Design a parsing strategy for git log --pretty=format:"%H|%an|%ad" that avoids delimiter collisions.

The Interview Questions They’ll Ask

  1. “What is the difference between porcelain and plumbing?”
  2. “How do you avoid shell injection?”
  3. “What happens if git output changes?”

Hints in Layers

Hint 1: Use exec.Command

  • Pass args as a slice, never a string.

Hint 2: Use NUL separators

  • git log --pretty=format:"%H%x00%an%x00%ad"

Hint 3: Cache expensive calls

  • Store results to avoid repeated git calls.
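
Hints 1 and 2 can be combined into a short Go sketch: arguments are passed as a slice so there is no shell to inject into, and fields are split on NUL, which cannot appear in git metadata.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

type commit struct{ Hash, Author, Date string }

// parseLog splits NUL-delimited fields; a "|" in an author name cannot
// break this, which is exactly the collision the thinking exercise targets.
func parseLog(out []byte) []commit {
	var commits []commit
	for _, line := range bytes.Split(bytes.TrimRight(out, "\n"), []byte("\n")) {
		f := bytes.Split(line, []byte{0})
		if len(f) == 3 {
			commits = append(commits, commit{string(f[0]), string(f[1]), string(f[2])})
		}
	}
	return commits
}

func main() {
	// Args are a slice, never a shell string: nothing is interpreted by sh.
	out, err := exec.Command("git", "log", "-n", "20",
		"--pretty=format:%H%x00%an%x00%ad").Output()
	if err != nil {
		fmt.Println("git failed:", err)
		return
	}
	for _, c := range parseLog(out) {
		fmt.Println(c.Author, c.Hash)
	}
}
```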

Books That Will Help

Topic           | Book                            | Chapter
Shell execution | The Linux Programming Interface | Process execution chapters
Text processing | The Linux Command Line          | Ch. 6

Common Pitfalls & Debugging

Problem 1: “Parsing breaks on special characters”

  • Why: Using | as delimiter.
  • Fix: Use NUL separators.
  • Quick test: Create a commit whose author name contains | and confirm parsing still works.

Problem 2: “Command injection risk”

  • Why: Using sh -c.
  • Fix: Use direct exec.
  • Quick test: Try passing a malicious arg.

Definition of Done

  • Uses safe command execution
  • Parses structured output reliably
  • Supports JSON output
  • Handles errors cleanly

Project 8: api-forge (API Schema to CLI)

  • Main Programming Language: Rust
  • Alternative Programming Languages: Go, TypeScript
  • Coolness Level: Level 4: Impressive
  • Business Potential: 3. Micro-SaaS
  • Difficulty: Level 3: Advanced
  • Knowledge Area: Code generation
  • Software or Tool: OpenAPI
  • Main Book: Clean Architecture

What you will build: A CLI that reads an OpenAPI schema and generates subcommands for every endpoint.

Why it teaches CLI design: It combines command grammar, config, and integration into a dynamic system.

Core challenges you will face:

  • Mapping HTTP concepts to CLI commands
  • Auth configuration and secrets
  • Dynamic command generation

Real World Outcome

$ api-forge init ./openapi.json
Generated 42 commands

$ api-forge users list --limit 5 --json
[{"id":1,"name":"Alice"},...]

The Core Question You’re Answering

“How do I design a command grammar that is generated from a schema without becoming unusable?”

Concepts You Must Understand First

  1. Command grammar
    • How do path params map to CLI args?
    • Book Reference: Advanced Programming in the UNIX Environment - Ch. 2
  2. Config and secrets
    • Where do API keys live?
    • Book Reference: Effective C - Ch. 5
  3. Integration
    • How do you handle errors from HTTP requests?
    • Book Reference: Clean Architecture - Ch. 2

Questions to Guide Your Design

  1. Should endpoints be grouped by tag or resource?
  2. How do you expose auth tokens safely?
  3. How do you handle pagination?

Thinking Exercise

Map the endpoint GET /users/{id} into a CLI command and list the flags you would provide.

The Interview Questions They’ll Ask

  1. “How do you map REST endpoints to CLI commands?”
  2. “How do you avoid leaking API keys?”
  3. “What output mode is best for API responses?”

Hints in Layers

Hint 1: Use tags to group commands

  • users list, users get, users create.

Hint 2: Auth via env or config

  • Support API_FORGE_TOKEN env var.

Hint 3: JSON output by default

  • APIs are already structured data.
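
The tag-based grouping in Hint 1 can be sketched as a small mapping function; the verb table and naming scheme below are illustrative assumptions, not an OpenAPI convention.

```go
package main

import (
	"fmt"
	"strings"
)

// commandFor maps an HTTP method and path template to a CLI command line,
// turning path parameters into positional args. Simplified: GET on a
// collection and GET on an item both map to "get" here.
func commandFor(method, path string) string {
	verb := map[string]string{
		"GET": "get", "POST": "create", "PUT": "update",
		"PATCH": "update", "DELETE": "delete",
	}[method]
	var resource, args []string
	for _, seg := range strings.Split(strings.Trim(path, "/"), "/") {
		if strings.HasPrefix(seg, "{") {
			args = append(args, "<"+strings.Trim(seg, "{}")+">")
		} else {
			resource = append(resource, seg)
		}
	}
	parts := append(resource, verb)
	parts = append(parts, args...)
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(commandFor("GET", "/users/{id}")) // users get <id>
	fmt.Println(commandFor("POST", "/users"))     // users create
}
```

Query parameters would then become flags on the generated command (for example `--limit`), keeping the positional slots for path parameters only.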

Books That Will Help

Topic          | Book               | Chapter
Architecture   | Clean Architecture | Ch. 2
Error handling | Effective C        | Ch. 5

Common Pitfalls & Debugging

Problem 1: “Generated commands are confusing”

  • Why: Direct 1:1 mapping.
  • Fix: Group commands by tags.
  • Quick test: Ask a teammate to use it without docs.

Problem 2: “API keys show up in history”

  • Why: Passing as flags.
  • Fix: Use env vars or config.
  • Quick test: history | grep api-forge should reveal no tokens.

Definition of Done

  • Generates command tree from schema
  • Supports auth without leaking secrets
  • JSON output available
  • Errors are human-readable

Project 9: plug-master (Plugin Architecture)

  • Main Programming Language: Go
  • Alternative Programming Languages: Rust
  • Coolness Level: Level 4: Hardcore
  • Business Potential: 4. Platform-level tool
  • Difficulty: Level 4: Expert
  • Knowledge Area: Extensibility
  • Software or Tool: hashicorp/go-plugin or custom RPC
  • Main Book: Fundamentals of Software Architecture

What you will build: A CLI that discovers external plugins and exposes them as subcommands.

Why it teaches CLI design: It forces you to define a stable extension protocol and manage isolation.

Core challenges you will face:

  • Plugin discovery
  • Protocol versioning
  • Isolation and safety

Real World Outcome

$ plug-master --help
Available plugins:
  plug-master-hello
  plug-master-lint

$ plug-master hello --name world
Hello, world!

The Core Question You’re Answering

“How do I let third parties extend my CLI without breaking it?”

Concepts You Must Understand First

  1. Plugin discovery
    • How do tools like kubectl discover plugins?
    • Book Reference: Fundamentals of Software Architecture - Ch. 6
  2. Isolation
    • Why run plugins in separate processes?
    • Book Reference: Advanced Programming in the UNIX Environment - process chapters
  3. Versioning
    • How do you avoid breaking old plugins?
    • Book Reference: Clean Architecture - Ch. 7

Questions to Guide Your Design

  1. Where should plugins be stored?
  2. How do you handle incompatible versions?
  3. Should plugins be allowed to change global config?

Thinking Exercise

Design a handshake protocol that includes plugin name, version, and supported API version.

The Interview Questions They’ll Ask

  1. “How do you discover plugins?”
  2. “Why not load plugins in-process?”
  3. “How do you version plugin APIs?”

Hints in Layers

Hint 1: Executable naming convention

  • plug-master-<name> on PATH or in plugin dir.

Hint 2: Use JSON over stdin/stdout

  • Define a small protocol and version.

Hint 3: Sandbox

  • Consider running plugins as separate processes.
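
Hint 1's naming convention can be sketched as a PATH scan in Go, in the style of kubectl's prefix match. Dedicated plugin directories are left out of this sketch.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

const prefix = "plug-master-"

// pluginName extracts the subcommand name from an executable's base name,
// following the kubectl-style naming convention.
func pluginName(path string) (string, bool) {
	base := filepath.Base(path)
	if !strings.HasPrefix(base, prefix) {
		return "", false
	}
	return strings.TrimPrefix(base, prefix), true
}

// discover scans every PATH directory for executables matching the prefix,
// keeping the first hit per name (earlier PATH entries win).
func discover() []string {
	var names []string
	seen := map[string]bool{}
	for _, dir := range filepath.SplitList(os.Getenv("PATH")) {
		matches, _ := filepath.Glob(filepath.Join(dir, prefix+"*"))
		for _, m := range matches {
			if name, ok := pluginName(m); ok && !seen[name] {
				seen[name] = true
				names = append(names, name)
			}
		}
	}
	return names
}

func main() {
	for _, name := range discover() {
		fmt.Println("plugin:", name)
	}
}
```

Dispatching `plug-master hello` then means spawning `plug-master-hello` as a separate process, which is also what keeps a plugin crash from taking down the host.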

Books That Will Help

Topic             | Book                                         | Chapter
Modularity        | Fundamentals of Software Architecture        | Ch. 6
Process isolation | Advanced Programming in the UNIX Environment | Process chapters

Common Pitfalls & Debugging

Problem 1: “Plugin crashes host”

  • Why: Running in-process.
  • Fix: Use separate process.
  • Quick test: Kill plugin and ensure host survives.

Problem 2: “Old plugin breaks”

  • Why: No version check.
  • Fix: Include version handshake.
  • Quick test: Run old plugin with new host.

Definition of Done

  • Plugins discovered automatically
  • Stable protocol defined
  • Version compatibility checks
  • Host survives plugin crashes

Project 10: distro-flow (Distribution and Updates)

  • Main Programming Language: Go
  • Alternative Programming Languages: Rust
  • Coolness Level: Level 3: Genuinely clever
  • Business Potential: 5. Industry-grade tool
  • Difficulty: Level 3: Advanced
  • Knowledge Area: Distribution and UX
  • Software or Tool: GoReleaser / GitHub Releases
  • Main Book: The Pragmatic Programmer

What you will build: A CLI that checks for updates, downloads releases, verifies checksums, and installs shell completions.

Why it teaches CLI design: Distribution is the last mile. Without it, your tool cannot be used reliably.

Core challenges you will face:

  • Safe binary replacement
  • Version comparison
  • Completion script distribution

Real World Outcome

$ distro-flow update
Current version: 1.2.0
Latest version: 1.3.1
Downloading...
Verified checksum
Update applied. Restart required.

$ distro-flow completion bash > /etc/bash_completion.d/distro-flow

The Core Question You’re Answering

“How do I ship a CLI that can update itself without breaking users?”

Concepts You Must Understand First

  1. Versioning
    • How does semver affect update logic?
    • Book Reference: The Pragmatic Programmer - Ch. 8
  2. Binary replacement
    • Why is Windows different?
    • Book Reference: Advanced Programming in the UNIX Environment - process chapters
  3. Completion scripts
    • Why is completion part of UX?
    • Book Reference: The Linux Command Line - Ch. 5

Questions to Guide Your Design

  1. How will you verify downloaded binaries?
  2. What happens if update fails halfway?
  3. How do users enable completions safely?

Thinking Exercise

Design an update flow that downloads, verifies, and atomically replaces the current binary.

The Interview Questions They’ll Ask

  1. “Why is self-update risky?”
  2. “How do you handle Windows binary replacement?”
  3. “Why include shell completion?”

Hints in Layers

Hint 1: Use checksums

  • Download checksum file and compare.

Hint 2: Use a temp file

  • Write new binary to temp, then rename.

Hint 3: Provide rollback

  • Keep old binary as .old.

Hint 4: Completion subcommand

  • distro-flow completion bash|zsh|fish.

Books That Will Help

Topic             | Book                     | Chapter
Release practices | The Pragmatic Programmer | Ch. 8
Shell usage       | The Linux Command Line   | Ch. 5

Common Pitfalls & Debugging

Problem 1: “Update fails on Windows”

  • Why: Running binary cannot be overwritten.
  • Fix: Rename and swap, or spawn updater.
  • Quick test: Update on Windows VM.

Problem 2: “Completion not working”

  • Why: Script not sourced.
  • Fix: Document shell setup.
  • Quick test: Tab completion should suggest subcommands.

Definition of Done

  • Update flow works and verifies checksum
  • Handles Windows replacement strategy
  • Completion scripts generated
  • --version reports build metadata