Learn Dotfiles: From Zero to Productivity Master

Goal: Build a deep, mental-model-level understanding of dotfiles so you can design, version, and ship a portable developer environment that fits your workflow. You will understand how shells start, how environment state flows, and how tools (git, ssh, editor, terminal, tmux) consume configuration. You will learn to organize dotfiles as a coherent system, automate setup, and keep secrets safe. By the end, you can rebuild your entire environment on a new machine in minutes, with the same behavior, performance, and ergonomics every time.


Introduction

Dotfiles are hidden configuration files and directories (like ~/.zshrc, ~/.gitconfig, or ~/.config/nvim) that control your shell, editor, terminal, git, and many other tools. They are the boundary layer between you and your computer: your aliases, keybindings, prompt, editor defaults, SSH shortcuts, and automation all live there. This guide turns dotfiles into an intentional system rather than a pile of tweaks.

What you will build across the projects:

  • A cohesive shell environment (aliases, functions, prompt, completion)
  • A fully configured Git and SSH workflow
  • A modern editor and terminal setup
  • Reproducible machine bootstrapping
  • A clean, portable layout (XDG compliant) that scales across machines

Scope boundaries:

  • Included: shell config, terminal UX, git/ssh config, tmux, editor config, dotfile management tools, automation, portability, and dev containers
  • Excluded: full OS tuning (kernel, drivers), deep OS security hardening, and heavy GUI theming (keep dotfiles focused on developer productivity)

Big-picture shape

Your Intent
    |
    v
Dotfiles Repo  ->  Shell Startup  ->  Environment State  ->  Tool Configs
    |                     |                  |                |
    v                     v                  v                v
Symlinks/Manager       rc files           PATH, vars      git, ssh, nvim
    |                     |                  |                |
    v                     v                  v                v
Bootstrap Script  ->  Terminal UX  ->  Daily Workflows  ->  Portability

Key Terms You’ll See Everywhere

  • rc file: Shell startup configuration (e.g., .zshrc, .bashrc).
  • XDG: Standard directories for config/data/cache/state.
  • Symlink: A filesystem pointer used by dotfile managers.
  • Bootstrap: A script that installs tools and applies configs.
  • Override: A machine-specific config loaded after defaults.

How to Use This Guide

  1. Read the Theory Primer first. It is the mental model you will reuse in every project.
  2. Pick a learning path (see Recommended Learning Paths) based on your goals.
  3. Complete projects in order for a smooth difficulty curve, but feel free to skip ahead.
  4. Use the checklists in each project to define “done” and avoid configuration rot.
  5. Write documentation as you go. The best dotfiles are self-documenting.

Prerequisites & Background Knowledge

Before starting these projects, you should have foundational CLI and Git literacy. This guide is about designing and engineering your environment, not learning basic commands from scratch.

Essential Prerequisites (Must Have)

Command-line fundamentals:

  • Comfortable with core navigation and file operations (ls, cd, cp, mv, rm, mkdir, find).
  • Can read and edit plain text files in a terminal editor or GUI editor.
  • Understand basic process control (ps, kill, jobs, foreground/background).

Git fundamentals:

  • Can clone, stage, commit, and view history (git clone, git status, git add, git commit, git log).
  • Understand the difference between global config, repo config, and repository state.

OS & filesystem basics:

  • Know where user config typically lives ($HOME, ~/.config, ~/.local/share).
  • Comfortable with file permissions and ownership (chmod, chown).

Recommended reading: “The Linux Command Line” (Shotts) — Ch. 1–11.

Helpful But Not Required

Shell scripting basics:

  • Conditionals, loops, exit codes, and quoting rules.
  • Helpful for Projects 7, 9, 15, and 18.

Editor customization:

  • Familiarity with Vim/Neovim or another editor helps for Project 5.

SSH fundamentals:

  • Key-based auth, known_hosts, and port forwarding help for Project 4.

Self-Assessment Questions

  1. ✅ Can you explain the difference between a login shell and an interactive shell?
  2. ✅ Do you know which file your shell loads first and why?
  3. ✅ Can you diagnose a command not found error by inspecting PATH?
  4. ✅ Do you know where your git global config and SSH config live?
  5. ✅ Can you explain what a symlink is and how it behaves?

If you answered “no” to questions 1–3: Spend 1–2 weeks building CLI confidence before starting the projects.

Development Environment Setup

Required Tools:

  • macOS or Linux (Windows is supported via WSL).
  • A modern shell (zsh or bash).
  • A text editor (Neovim/Vim/VS Code).
  • Git and OpenSSH.

Recommended Tools:

  • A dotfile manager: GNU Stow, chezmoi, or yadm.
  • A terminal multiplexer: tmux.
  • A fuzzy finder: fzf.
  • A prompt framework: Starship.

Testing Your Setup:

# Verify core tools
$ which git ssh zsh bash
/usr/bin/git
/usr/bin/ssh
/bin/zsh
/bin/bash

# Check shell and git versions
$ zsh --version
zsh 5.x
$ git --version
git version 2.x

Time Investment

  • Simple projects (1, 2, 3, 4): 4–8 hours each
  • Moderate projects (5, 6, 7, 8, 10, 11, 12): 1 week each
  • Complex projects (9, 13, 14, 15, 16, 17, 18, Final): 2+ weeks each
  • Total sprint: 2–3 months of consistent practice

Important Reality Check

Dotfiles are a living system. The goal is not perfection; it is stability, clarity, and portability. Expect multiple iterations. Your first version will work. Your third version will feel “right.” The projects are designed to help you iterate safely.


Big Picture / Mental Model

                  +------------------------------+
                  |          SYSTEM LAYER        |
                  | /etc profile, ssh_config     |
                  +--------------+---------------+
                                 |
                                 v
+------------------------------+  +------------------------------+
|         USER LAYER           |  |         TOOL LAYER           |
| ~/.zshrc ~/.gitconfig        |  | git, ssh, nvim, tmux         |
| ~/.config/* ~/.local/*       |  | terminal, prompt, plugins    |
+---------------+--------------+  +---------------+--------------+
                |                                 |
                v                                 v
         +------+-------------------------------+--+
         |          WORKFLOW LAYER                 |
         | aliases, functions, scripts, dev tools  |
         +-------------------+---------------------+
                             |
                             v
                    +--------+--------+
                    |  PORTABILITY   |
                    | bootstrap, XDG |
                    | containers     |
                    +----------------+

Theory Primer (Mini-Book)

This is the “textbook” for the projects. Each concept below is a mental model you will re-use across the sprint. Read this once end-to-end, then revisit chapters as you build each project.

Concept C1: Dotfiles & Configuration-as-Code

Fundamentals

Dotfiles are the hidden configuration files that control how your shell, editor, terminal, Git, SSH, and other tools behave. Treating dotfiles as configuration-as-code means you apply the same discipline you use for software: version control, documentation, modular structure, and repeatable deployment. This shifts dotfiles from “personal tweaks” to an explicit system design. It also forces you to separate configuration (your intent) from state (generated data like caches or histories). Once you adopt this mental model, every change is intentional, reviewable, and reversible: you can reason about it, roll it back, and apply the same baseline everywhere. That is the foundation of reproducibility and portability, and it gives you a clear change history to review.

Deep Dive into the Concept

Dotfiles are often introduced as “preferences,” but that framing hides their true power. In practice, dotfiles define the default behavior of your entire development environment: your command vocabulary, your prompt feedback, your Git safety rails, your SSH routing, and your editor ergonomics. When you version-control them, you gain history and accountability. When you structure them, you gain maintainability. When you test them, you gain confidence. The shift to configuration-as-code is a shift from local hacks to a system you can ship.

Start by recognizing that configuration exists at multiple layers. There are system-level defaults (e.g., /etc/profile, /etc/ssh/ssh_config), user-level defaults (e.g., ~/.zshrc, ~/.gitconfig), and project-specific overrides (e.g., .git/config). A coherent dotfile system makes those layers explicit. You decide what belongs globally, what is scoped to your user, and what is scoped to a particular repo. That design prevents accidental coupling between projects and stops “config drift” from propagating everywhere.

Next, understand the difference between configuration and state. Configuration is the intent you want applied every time. State is the byproduct of running tools: shell histories, caches, plugin downloads, swap files, or compiled theme artifacts. If you mix state into your dotfiles repo, you pollute it with noise and risk committing secrets. This is why the XDG Base Directory spec matters: it gives standard locations to keep configuration separate from state, which makes your dotfiles clean, portable, and easier to back up.

Another core principle is modularity. A monolithic .zshrc may be fine when you have 20 lines, but it becomes brittle at 200. Modularization means grouping by domain: shell/aliases, shell/functions, git/config, ssh/config, nvim/init.lua, etc. Modularity makes changes localized and easier to review. It also makes onboarding easier: someone can read the git/ folder to understand your Git defaults without digging through unrelated settings.

Configuration-as-code also implies testing. Dotfiles are code, which means changes can break workflows. You do not need a full unit test suite; you do need a “doctor” script or smoke tests. For example: run git config user.email and confirm the correct identity, open a tmux session and ensure keybindings work, start a shell and confirm startup time is within budget. Small tests turn configuration changes into safe iterations rather than risky experiments.
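
A minimal “doctor” sketch in that spirit (the file name, tool list, and idea of a startup budget are illustrative choices, not part of any standard):

#!/usr/bin/env bash
# doctor.sh — quick smoke tests after applying dotfiles
set -u

# 1. Git identity is set
git config user.email >/dev/null || echo "missing: git user.email"

# 2. Key tools exist on PATH
for tool in git ssh tmux nvim; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done

# 3. Interactive shell startup stays within your budget (inspect the timing)
time zsh -i -c exit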

Portability is the next level. If you only ever use one machine, your dotfiles can be messy and still work. The moment you use a second machine, the flaws appear: hardcoded paths, OS-specific commands, missing dependencies. Config-as-code forces you to design for portability: use environment variables instead of hardcoded paths, isolate machine-specific overrides, and avoid embedding secrets. The fastest sign of a strong system is the ability to set up a new machine with one command and get the same behavior within minutes.

Finally, dotfiles are also documentation. The best dotfiles repos read like a product: a README that explains design choices, a CHANGELOG or commit history that records why decisions were made, and inline comments that clarify non-obvious settings. This is not vanity; it is operational memory. Your future self is the real user of your dotfiles. Documentation prevents regressions and makes experimentation safe.

How this fits on projects

Every project in this guide treats configuration as code. You will version your configs, modularize them, and build workflows that can be reproduced on any machine.

Definitions & key terms

  • Dotfiles: Hidden configuration files (often in $HOME or ~/.config).
  • Configuration-as-code: Treating configuration with the same rigor as software.
  • State: Generated artifacts (caches, histories, compiled plugin output).
  • Layering: The precedence model of system, user, and project config.
  • Portability: The ability to apply the same config across machines/OSes.

Mental model diagram

Intent -> Config Repo -> Apply (symlinks/manager) -> Tool Behavior -> Workflow
  ^            |                    |                   |               |
  |            v                    v                   v               v
Docs       Version history       Validation         Prompt/UX        Habit change

How it works (step-by-step)

  1. Identify a workflow pain point (slow navigation, unsafe Git defaults).
  2. Encode the intent as configuration in a versioned repo.
  3. Apply the config using a manager or symlink system.
  4. Validate behavior with a quick smoke test.
  5. Document the change and iterate.

Minimal concrete example

# Repo layout
~/dotfiles/
  shell/aliases.zsh
  shell/functions.zsh
  git/gitconfig

# Apply with a simple symlink
ln -s ~/dotfiles/git/gitconfig ~/.gitconfig

Common misconceptions

  • “Dotfiles are just visual tweaks.” → They control core behavior and safety.
  • “A private repo means secrets are safe.” → Secrets still leak via logs and history.
  • “One big rc file is simpler.” → It becomes unmaintainable and fragile at scale.

Check-your-understanding questions

  1. Why is configuration-as-code different from ad-hoc edits?
  2. What is the difference between configuration and state?
  3. Why does layering matter in dotfiles?

Check-your-understanding answers

  1. Because config-as-code adds structure, review, and reproducibility.
  2. Config is your intent; state is generated output.
  3. It prevents accidental overrides and keeps project-specific logic local.

Real-world applications

  • Onboarding teammates with a consistent environment
  • Rebuilding your setup after hardware failure
  • Shipping a standard dev environment across a team

Where you’ll apply it

Projects 1–18 and the Final Project.

References

  • The Linux Command Line (Shotts) — Ch. 11
  • Effective Shell (Kerr) — Ch. 1–3
  • XDG Base Directory Specification — config vs state separation

Key insight

Dotfiles are executable design decisions, not decoration.

Summary

Treat dotfiles like software: version them, structure them, validate them, and document them. That mindset unlocks portability and reproducibility.

Homework/Exercises

  1. Inventory every dotfile you currently use and classify it as config or state.
  2. Create a dotfiles repo with a README describing goals and scope.
  3. Split one monolithic config into modular files.

Solutions

  1. Use ls -a ~ and tag files in a checklist.
  2. git init ~/dotfiles and add a short README.
  3. Move aliases into ~/.config/shell/aliases and source them.

Concept C2: Shell Startup & Environment Model

Fundamentals

Shell startup is the boot sequence of your command line. Login shells, interactive shells, and non-interactive shells load different files in different orders. Where you place exports and functions determines whether they apply everywhere or only in a specific context. This is the root cause of many dotfile problems: a PATH that works in the terminal but not in scripts, or an alias that exists in one shell but not another. Understanding startup order and environment inheritance is essential for stable, portable configuration. It explains why scripts, cron jobs, and GUI apps often see different environments, and why CI behaves differently than a terminal.

Deep Dive into the Concept

A shell is just a program, and like any program it has a startup path. Bash and Zsh both define specific initialization files, but they differ in how they are invoked and in the order they read files. The details matter because a misplaced export can silently break your environment.

Bash distinguishes between login shells, interactive shells, and non-interactive shells. A login shell (e.g., a terminal started with --login or a TTY login) reads /etc/profile and then the first of ~/.bash_profile, ~/.bash_login, or ~/.profile. An interactive non-login shell reads ~/.bashrc. Non-interactive shells (scripts) read $BASH_ENV if set. This means that if you only put PATH changes in .bashrc, they won’t appear in login shells or scripts. The common pattern is to keep environment setup in ~/.bash_profile and source ~/.bashrc from there.

Zsh uses a different set of files: .zshenv is read every time, .zprofile is for login shells, .zshrc is for interactive shells, .zlogin runs after login setup, and .zlogout runs on exit. This gives you more precision but more risk. .zshenv should stay minimal because it runs even for non-interactive scripts; .zshrc is the right place for prompt configuration, aliases, and completion. Keeping this separation prevents slow startup and avoids polluting script environments with interactive-only settings.

The environment model is about inheritance. Once you export a variable, every child process inherits it. That means if you run export PATH=... in your shell, your editor, your build tools, and your scripts all inherit it. This is powerful but dangerous: a bad export can break tooling everywhere. The safest pattern is to isolate universal exports (e.g., PATH, LANG, XDG_*) in a file sourced by every shell, and keep interactive-only tweaks separate.
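
One common way to apply that separation, assuming a shared env file at ~/.config/shell/env (the path is a convention chosen for this sketch, not a standard):

# ~/.config/shell/env — universal exports safe for every shell
export PATH="$HOME/.local/bin:$PATH"
export LANG="en_US.UTF-8"
export XDG_CONFIG_HOME="$HOME/.config"

# Sourced from both ~/.zshenv and ~/.bash_profile so login, interactive,
# and script shells all see the same environment
[ -f "$HOME/.config/shell/env" ] && . "$HOME/.config/shell/env"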

A frequent source of confusion is that GUI applications often do not launch as login shells. On macOS and Linux desktops, GUI apps may run without sourcing your shell startup files, which explains “terminal works, GUI doesn’t” inconsistencies. The fix is to place critical exports in files that login shells read or to use OS-specific mechanisms to set environment variables globally. Your dotfiles should document this boundary so you do not chase phantom bugs.

Another layer is shell options. In zsh, setopt changes behavior for history, globbing, and word splitting. In bash, shopt controls globbing and interactive features. These options are part of your startup sequence and should be documented alongside your other dotfiles. If you enable options like HIST_IGNORE_DUPS, you should know why they exist and how they change behavior.

Performance is the last key piece. Every command you run during startup slows the shell. Many common dotfile patterns (prompt frameworks, plugin managers, language version managers) can add 200–500ms of startup time. This is why advanced setups often include lazy-loading and caching. The fastest shell is usually the one that avoids running external commands on startup unless necessary. You can measure this using time zsh -i -c exit or similar. Performance is a configuration requirement, not an afterthought.
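
A sketch of both ideas; the lazy-load wrapper assumes nvm is installed under ~/.nvm and shows one common pattern, not the only one:

# Measure interactive startup (run several times and compare)
time zsh -i -c exit

# Lazy-load a slow tool: a stub that loads the real thing on first use
nvm() {
  unset -f nvm                  # remove this stub
  . "$HOME/.nvm/nvm.sh"         # load the real nvm (the slow part)
  nvm "$@"                      # replay the original invocation
}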

How this fits on projects

Startup order underpins Projects 1, 3, 7, 8, 11, 17, and 18. If you misplace exports or aliases, these projects will behave inconsistently.

Definitions & key terms

  • Login shell: A shell started with login semantics.
  • Interactive shell: A shell attached to a terminal session.
  • Non-interactive shell: A shell running a script.
  • Export: Making a variable part of the environment passed to child processes.

Mental model diagram

Shell start
   |
   +--> Is login? ---- yes --> read /etc/profile -> ~/.bash_profile
   |                              |
   |                              +--> source ~/.bashrc (recommended)
   |
   +--> Is interactive? -- yes --> read ~/.bashrc or ~/.zshrc
   |
   +--> Is script? ------- yes --> read $BASH_ENV (bash)

How it works (step-by-step)

  1. Shell starts and checks invocation flags.
  2. System-wide configs load first.
  3. User configs load based on login/interactive mode.
  4. Environment variables are exported and inherited.
  5. Interactive UI (prompt/completion) is built.

Minimal concrete example

# ~/.bash_profile
if [ -f ~/.bashrc ]; then
  . ~/.bashrc
fi

# ~/.bashrc
export PATH="$HOME/.local/bin:$PATH"

Common misconceptions

  • “.bashrc always runs.” → Not in login shells.
  • “Exports only affect this terminal.” → They propagate to child processes.
  • “Startup files can print output safely.” → Not in non-interactive contexts.

Check-your-understanding questions

  1. Which file does zsh read on every invocation?
  2. Why might a script see a different PATH than your terminal?
  3. Where should you place exports required by scripts?

Check-your-understanding answers

  1. .zshenv.
  2. Non-interactive shells don’t source interactive configs.
  3. In a file sourced by all shells (e.g., .zshenv or a shared env file).

Real-world applications

  • Fixing “command not found” errors in CI
  • Ensuring GUI apps see language toolchains
  • Reducing startup time by moving expensive commands out of rc files

Where you’ll apply it

Projects 1, 3, 7, 8, 11, 17, 18.

References

  • Bash Reference Manual — Startup Files
  • Zsh Manual — Startup/Shutdown Files
  • The Linux Command Line — Ch. 11

Key insight

Most dotfile bugs are startup-order bugs in disguise.

Summary

Know the startup sequence and place configuration in the correct file. That single discipline eliminates a massive class of shell issues.

Homework/Exercises

  1. Print $0 and $- in your shell and interpret the mode.
  2. Add temporary echo statements to startup files to trace loading order.
  3. Create a ~/.config/shell/env file and source it in the right places.

Solutions

  1. echo $0; echo $- (look for i in $- for interactive).
  2. Add echo "sourced .zshrc" and open a new shell.
  3. Source the env file from .zshenv or .bash_profile.

Concept C3: CLI Ergonomics & Interaction (Aliases, Functions, Completion, History, FZF)

Fundamentals

CLI ergonomics is about reducing friction between intention and execution. Aliases compress frequently used commands, functions encode multi-step workflows, completion reduces memory load, and history search turns past commands into a usable database. Fuzzy finders (like fzf) add interactive discovery, allowing you to explore and select from long lists with a few keystrokes. Good ergonomics makes the command line feel like a tailored instrument rather than a generic interface, and it reduces mistakes by making safe defaults easy and dangerous actions explicit. It also improves discoverability, so your workflow remains usable months later, and it speeds up onboarding and collaboration.

Deep Dive into the Concept

The command line is a language. Every alias or function you add becomes a new verb in that language. The important shift is from memorizing syntax to expressing intent. Instead of typing git log --oneline --graph --decorate --all, you run glg and spend your brainpower on the output, not the flags.

Aliases are best when you need a lightweight substitution with no arguments. Functions are for logic, arguments, and guardrails. A mature dotfile setup uses aliases for small syntax improvements and functions for workflows. A classic example is a mkcd function that creates a directory and enters it, or a gcof function that fuzzy-finds a git branch and checks it out. Functions give you validation and error handling; aliases do not.

Completion is the other half of ergonomics. When completion works well, you stop memorizing flags and start exploring. Zsh’s completion system is powerful but can be slow if misconfigured. That is why caching and lazy-loading are essential. Completion also becomes an interactive documentation system: it shows you available subcommands, flags, and even descriptions. This changes your relationship with CLI tools from “memorize flags” to “discover capabilities.”

History is a behavioral dataset. Standard shell history is a flat list, but tools like Atuin add context (timestamp, working directory, git repo) and enable full-text search across machines. This turns your history into a searchable knowledge base. You can also build a curated “long tail” of commands using cheatsheet tools like navi or by documenting commands directly in your dotfiles. The key is to treat history as information architecture: increase retention for useful commands and filter out sensitive ones.

Fuzzy finding (fzf) is the accelerant. It introduces a universal pattern: “list → fuzzy filter → select.” Once you internalize it, you can apply it to files, git branches, SSH hosts, Docker containers, or tmux sessions. The default key bindings (Ctrl-T for files, Ctrl-R for history search, Alt-C for directory switching) create fast entry points into this pattern. Many power users wrap fzf in helper functions to create workflows like “pick a branch, show diff, open file in editor.” This is where dotfiles become a workflow engine.
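
For example, the same “list → filter → select” pattern applied to SSH hosts (a sketch; it assumes your hosts are declared as Host entries in ~/.ssh/config and takes only the first pattern of each entry):

# Fuzzy-pick an SSH host and connect
sshf() {
  local host
  host=$(grep -E '^Host ' ~/.ssh/config | awk '{print $2}' | grep -v '[*?]' | fzf) || return
  ssh "$host"
}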

Ergonomics is also about discoverability. If you create 100 aliases and forget them, you have built cognitive debt. A good system includes a help or aliases command that prints categories. It includes comments and documentation. It includes consistent naming rules (e.g., g* for git, d* for docker). This is what turns personal hacks into a durable system you can live with for years.

Finally, ergonomics is about safety. You can wrap destructive commands in “safe” versions (rm with prompts, git push with confirmation, alias gclean=...). This is not about paranoia; it is about reducing risk under stress. A safe CLI is a reliable CLI.
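
Two small examples of that idea (the alias and function names are illustrative; adapt them to your own conventions):

# Prompt before deleting; bypass explicitly with `command rm` when needed
alias rm='rm -i'

# Ask before force-pushing the current branch
gpf() {
  printf 'Force-push %s? [y/N] ' "$(git branch --show-current)"
  read -r answer
  [ "$answer" = "y" ] && git push --force-with-lease
}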

Ergonomics also extends to line editing and keybindings. Readline (bash) and ZLE (zsh) let you remap keys, change word movement rules, and integrate incremental search. Small changes here compound over thousands of commands. History behavior is similarly tunable: HISTSIZE, HISTFILESIZE, and options like HIST_IGNORE_SPACE or HIST_IGNORE_DUPS shape what gets saved and how noisy the history becomes. A clean history improves search quality and reduces accidental command reuse. These are subtle settings that feel minor until you rely on history as a knowledge system.
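
A zsh-flavored sketch of those knobs (the sizes are arbitrary; the option and widget names are standard zsh):

# ~/.zshrc — history hygiene and search
HISTSIZE=50000
SAVEHIST=50000
setopt HIST_IGNORE_DUPS       # skip immediate duplicates
setopt HIST_IGNORE_SPACE      # commands starting with a space are not saved
bindkey '^R' history-incremental-search-backward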

How this fits on projects

This concept powers Projects 1 (aliases), 7 (functions), 8 (fzf), 14 (dashboard), and 17 (history).

Definitions & key terms

  • Alias: Simple text substitution for a command.
  • Function: A shell command with arguments and logic.
  • Completion: Auto-suggestions for commands and flags.
  • Fuzzy finder: Interactive approximate matching against lists.
  • History enrichment: Adding metadata and search to command history.

Mental model diagram

Intent -> Alias/Function -> Completion/History -> fzf -> Action

How it works (step-by-step)

  1. Identify repeated commands from history.
  2. Decide whether an alias or function is more appropriate.
  3. Add completion or fuzzy selection for lists.
  4. Document the command and add it to a help output.
  5. Measure impact (fewer keystrokes, fewer errors).

Minimal concrete example

# alias vs function
alias gs='git status'

mkcd() {
  mkdir -p "$1" && cd "$1"
}

# fzf-powered git branch checkout
gcof() {
  git checkout "$(git branch --all | fzf | sed 's#^..##')"
}

Common misconceptions

  • “Aliases can do everything.” → Use functions for arguments and logic.
  • “History is just for scrolling.” → It is a searchable knowledge base.
  • “fzf is only for files.” → It can wrap any list.

Check-your-understanding questions

  1. When should you use a function instead of an alias?
  2. Why does completion reduce cognitive load?
  3. What are the default fzf keybindings for file, history, and directory search?

Check-your-understanding answers

  1. When you need arguments, logic, or error handling.
  2. It replaces memorization with discovery.
  3. Ctrl-T (files), Ctrl-R (history), Alt-C (directories).

Real-world applications

  • Rapid navigation in large repos
  • Safe shortcuts for destructive commands
  • Searchable command history across machines

Where you’ll apply it

Projects 1, 7, 8, 14, 17.

References

  • Effective Shell (Kerr) — Ch. 19, 22–23
  • The Linux Command Line (Shotts) — Ch. 6
  • fzf README — default key bindings
  • Atuin docs — history syncing and storage

Key insight

Ergonomics is about reducing cognitive load, not just saving keystrokes.

Summary

Design a CLI vocabulary, make it discoverable, and use completion/history/fzf to reduce memory burden.

Homework/Exercises

  1. Analyze your history and create 10 aliases or functions.
  2. Add fzf bindings and build one custom fzf-powered function.
  3. Create a help command that prints alias categories.

Solutions

  1. Use history | awk '{print $2}' | sort | uniq -c | sort -rn.
  2. Bind fzf keys and create gcof or vif.
  3. Write a function that prints a formatted alias list.

Concept C4: Terminal UX & Prompt/Rendering

Fundamentals

Your terminal is a user interface. The prompt, colors, fonts, and layout influence how quickly you read information and how safely you operate under pressure. Terminal UX is the design of that interface: what information you surface, how you highlight risk, and how you keep the UI fast. The prompt is the most visible part of your dotfiles, and it can either clarify or distract. Good prompt design is about signal, not decoration: a disciplined prompt communicates risk (root user, dirty repo), keeps rendering latency low, and stays readable at a glance in both local and remote sessions.

Deep Dive into the Concept

A terminal session is a pipeline of components: the terminal emulator renders text, the shell produces output, and the prompt is just a formatted string with embedded escape codes. That separation matters. A slow prompt is usually not the terminal; it is the shell running external commands every time it draws. A broken prompt often means malformed ANSI escape sequences that confuse the line editor.

Prompt design starts with information hierarchy. The prompt should answer: “Where am I?”, “What repo or environment am I in?”, and “Is there danger?” This suggests a baseline: current directory, git branch/status, and exit code of the last command. For some workflows, you might add runtime versions (Python/Node), Kubernetes context, or SSH host. But every extra segment costs performance and attention. A good rule: if you do not act on the information, remove it.

Colors and symbols should encode meaning. For example, red for errors, yellow for warnings, green for clean state. This is UI design applied to a CLI. If you are using a framework like Starship, you get a declarative config file (typically ~/.config/starship.toml) where each module can be enabled, disabled, or styled. The key is to treat it as a design system: consistent colors, consistent alignment, and a predictable layout.

Terminal rendering also depends on fonts, glyphs, and width calculations. Powerline or Nerd Fonts add icons that improve scanning but can break alignment if unsupported. Multi-line prompts improve readability but require correct escape sequences so the shell knows the visible length. Otherwise cursor movement breaks and line wrapping becomes chaotic. This is why prompt frameworks include functions to wrap non-printing sequences correctly.

Performance is a UX feature. Many prompts shell out to git on every render, which can be slow in large repos. Solutions include caching, asynchronous prompts, or reducing the frequency of expensive checks. Tools like Starship are designed to be fast, but you still control which modules run. Measure your prompt with time zsh -i -c exit and a prompt benchmark tool, then budget your startup time.

Another layer is terminal capability negotiation. The $TERM value and the terminal’s terminfo entry determine what escape sequences are safe to use. If $TERM is wrong, colors break, cursor movement fails, and line wrapping becomes unpredictable. True color support often depends on COLORTERM and terminal settings, which is why colors sometimes look different across machines. A robust dotfile system documents these assumptions so the prompt renders consistently everywhere.
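
A quick way to inspect what your terminal is advertising (tput ships with ncurses):

# Terminal capability hints
echo "TERM=$TERM COLORTERM=${COLORTERM:-unset}"
tput colors        # how many colors terminfo claims for $TERM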

Prompt hooks also matter. Bash uses PROMPT_COMMAND; zsh uses precmd and preexec. These hooks let you compute dynamic prompt segments, but each hook invocation costs time. Knowing where prompt logic runs lets you move expensive work out of the hot path or cache it. Multi-line prompts, right-side prompts, and transient prompts can improve UX, but each requires careful handling of non-printing sequences and line editor behavior.
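
A minimal bash illustration of the hook idea; zsh users would do the equivalent work in a precmd function:

# Rebuild PS1 before every prompt; show the last exit code only on failure
__build_prompt() {
  local status=$?
  PS1='\w \$ '
  [ "$status" -ne 0 ] && PS1="[$status] $PS1"
}
PROMPT_COMMAND=__build_prompt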

Terminal UX also includes interaction model. Keybindings, copy/paste behavior, scrollback, and terminal profiles all affect your daily flow. For example, setting a larger scrollback buffer helps you audit logs; customizing copy behavior reduces friction. These settings live in terminal emulator config files (e.g., Alacritty, Kitty, iTerm2). Treat them as part of your dotfiles system because they shape your workflow as much as your shell does.

How this fits on projects

Terminal UX is central to Projects 3 (prompt), 10 (terminal emulator config), and 14 (dashboard).

Definitions & key terms

  • Prompt: The command line UI string rendered by the shell.
  • ANSI escape codes: Control sequences for colors, cursor movement, and style.
  • Glyphs: Icon characters used in prompts and UI.
  • Render performance: Time to produce a prompt and draw output.

Mental model diagram

Terminal Emulator -> Shell -> Prompt string -> ANSI render -> Your eyes

How it works (step-by-step)

  1. The shell builds the prompt string.
  2. ANSI codes colorize and style the output.
  3. Terminal emulator renders glyphs and layout.
  4. Line editor tracks cursor position and wrapping.
  5. You interpret signals and act.

Minimal concrete example

# ~/.config/starship.toml
add_newline = true

[git_branch]
format = "[$symbol$branch]($style) "
style = "bold yellow"

[character]
success_symbol = "[❯](green)"
error_symbol = "[❯](red)"

Common misconceptions

  • “A fancy prompt is always better.” → Too much signal becomes noise.
  • “Prompt slowness is inevitable.” → Usually caused by slow commands.
  • “Colors are just decoration.” → They encode risk and state.

Check-your-understanding questions

  1. What are the three most important pieces of information in a prompt?
  2. Why can ANSI escape codes break line editing?
  3. Why is prompt performance a UX concern?

Check-your-understanding answers

  1. Location, repo/state, and error/success status.
  2. Incorrect escape lengths confuse the line editor’s cursor math.
  3. A slow prompt slows every command you run.

Real-world applications

  • Faster situational awareness in production terminals
  • Reduced mistakes by showing repo status and exit codes
  • Clean dashboards for focused work sessions

Where you’ll apply it

Projects 3, 10, 14, 16.

References

  • Starship documentation — configuration and modules
  • Effective Shell (Kerr) — prompt and usability chapters

Key insight

A prompt is a UI: design it for clarity, not decoration.

Summary

Design your prompt and terminal UX like a product interface: minimal, fast, and informative.

Homework/Exercises

  1. Sketch a prompt layout with 3–4 data points.
  2. Implement the prompt and measure startup time.
  3. Remove one prompt element and measure if you miss it.

Solutions

  1. Include directory, git branch, exit status, and user/host for SSH.
  2. Use Starship or a minimal PS1 and benchmark startup.
  3. If you did not miss it, keep it removed.

Concept C5: Tool-Specific Power Config (Git, SSH, tmux, Neovim, Zsh plugins)

Fundamentals

Each tool has its own configuration model, file location, and precedence rules. Git uses layered config files (system, global, local). SSH uses a host-patterned config file. tmux uses a command-like config syntax. Neovim uses Lua or Vimscript. Understanding these models lets you reason about behavior instead of guessing. Tool configs are leverage points: small changes can prevent mistakes, speed up workflows, and make behavior consistent across machines. Knowing how to inspect effective config turns debugging into a deterministic process: you can trace exactly which file set a value, which speeds up troubleshooting under pressure, especially during incidents.

Deep Dive into the Concept

The key to tool configuration is understanding scope and precedence. Git is a prime example: it reads system-level config, then global user config, then repository-local config. You can inspect where a value came from using git config --list --show-origin. Git’s includeIf directive lets you apply different config files based on the repository path, which is ideal for switching identities between work and personal repositories. This is config as policy: you encode safe defaults (fast-forward merges, rebase on pull, signing) and let overrides happen intentionally.

SSH is similar but structured differently. The SSH client reads options from a config file (typically ~/.ssh/config), and each Host block can match patterns. The config is processed top-to-bottom, and for most options the first value wins, so you place specific hosts early and broad defaults later. This lets you define precise overrides without accidentally shadowing them. Options like ProxyJump or ControlMaster can significantly improve workflow by enabling jump hosts or connection multiplexing. Managing SSH config as code also reduces risk: you can audit which keys are used where and avoid accidentally presenting the wrong identity to a server.
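
A sketch of that ordering rule (host names are placeholders; the ControlPath location is a common convention and the ~/.ssh/sockets directory must exist):

# ~/.ssh/config — specific hosts first, broad defaults last
Host prod-db
  HostName 10.0.2.15
  User deploy
  ProxyJump work-bastion          # hop through the bastion automatically

Host *
  ControlMaster auto              # reuse one connection per host
  ControlPath ~/.ssh/sockets/%r@%h-%p
  ControlPersist 10m
  ServerAliveInterval 60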

tmux is a terminal multiplexer, and its config is a sequence of commands: keybindings, options, and plugins. A disciplined tmux config sets a consistent prefix, defines pane/window navigation, and enables sensible defaults like mouse support or true color. Because tmux config is executed top-down, ordering matters. You can modularize it by sourcing additional files (e.g., ~/.config/tmux/tmux.conf sourcing keybindings.conf).
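
A minimal sketch of that structure (the prefix choice and the file split are conventions, not requirements):

# ~/.config/tmux/tmux.conf
unbind C-b
set -g prefix C-a               # a more reachable prefix
set -g mouse on
set -g default-terminal "tmux-256color"

# keep keybindings in their own file
source-file ~/.config/tmux/keybindings.conf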

Neovim is essentially a programmable editor. Its configuration lives in a standard location (usually ~/.config/nvim/init.lua or init.vim). The modern best practice is a Lua-based configuration with separate modules for options, keymaps, and plugins. This turns editor config into a small codebase, which benefits from the same modularity and testing mindset as your dotfiles. The same reasoning applies to other tools: understanding the configuration schema is what allows you to make precise changes without copy-pasting blindly.
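
A minimal sketch of that module layout (the config.* module names are a common convention, not something Neovim requires):

-- ~/.config/nvim/init.lua
require("config.options")
require("config.keymaps")

-- ~/.config/nvim/lua/config/options.lua
vim.opt.number = true
vim.opt.expandtab = true
vim.opt.shiftwidth = 2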

Zsh plugin systems add another layer. Whether you use a plugin manager or manual sourcing, you should think in terms of value per cost. Plugins add behavior, but they also add startup time and complexity. The right approach is to start minimal, measure startup time, and add plugins only when they provide clear, daily value. Config-as-code here means documenting why each plugin exists, not just installing it.

The power of tool-specific config is guardrails. Git can prevent dangerous pushes; SSH can restrict identities and enable safer connections; tmux can prevent accidental session exits by remapping keys; Neovim can enforce consistent formatting. These are not aesthetic changes. They are operational safety decisions. Treat them with the same seriousness you treat production code.

Debugging effective config is part of mastery. Git can show the origin of each setting with git config --show-origin. SSH can render its final config with ssh -G host, which is invaluable for verifying patterns and overrides. tmux can reload config via tmux source-file and report options with tmux show-options. Neovim provides :checkhealth and :scriptnames for diagnosing plugin load order. These tools let you validate that your dotfiles are doing what you think they are doing.
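
The corresponding commands, collected in one place:

# Where did each git setting come from?
git config --list --show-origin

# Which options will ssh actually use for a host?
ssh -G work-bastion

# Reload tmux config and inspect resulting options
tmux source-file ~/.config/tmux/tmux.conf
tmux show-options -g

# Inside Neovim: diagnose plugins and load order
:checkhealth
:scriptnames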

How this fits on projects

Tool configuration is the backbone of Projects 2 (Git), 4 (SSH), 5 (Neovim), 6 (tmux), and 11 (Zsh plugins).

Definitions & key terms

  • Scope: Where a config value applies (system, global, local).
  • IncludeIf: Conditional config inclusion based on repo path.
  • Host stanza: A block in SSH config that matches patterns.
  • Multiplexing: Reusing a single SSH connection.
  • Runtime path: The editor’s search path for config/modules.

Mental model diagram

Tool -> Config file(s) -> Precedence rules -> Effective behavior -> Workflow

How it works (step-by-step)

  1. Identify the tool’s config files and precedence order.
  2. Define safe defaults in global config.
  3. Add conditional overrides for special cases.
  4. Modularize config as it grows.
  5. Test changes with real workflows.

Minimal concrete example

# ~/.gitconfig
[alias]
  lg = log --oneline --graph --decorate
[includeIf "gitdir:~/work/"]
  path = ~/.gitconfig-work

# ~/.ssh/config
Host work-bastion
  HostName bastion.company.com
  User dev
  IdentityFile ~/.ssh/id_ed25519_work

Common misconceptions

  • “Defaults are good enough.” → Defaults are generic, not workflow-optimized.
  • “More plugins = more productivity.” → Every plugin has a cost.
  • “SSH flags are fine.” → A config file is safer, consistent, and auditable.

Check-your-understanding questions

  1. Why are conditional includes useful in Git?
  2. What does Host * do in SSH config?
  3. Why is modularizing Neovim config valuable?

Check-your-understanding answers

  1. They let you switch identities or policies by path automatically.
  2. It provides fallback defaults for every host; because the first obtained value wins, more specific Host blocks placed earlier take precedence over it.
  3. It keeps configuration maintainable as it grows.

Real-world applications

  • Automatically using work Git identity in corporate repos
  • Jumping through bastion hosts with ProxyJump
  • Keeping editor config fast and organized

Where you’ll apply it

Projects 2, 4, 5, 6, 11.

References

  • git-config documentation — scope and includeIf
  • OpenSSH ssh_config — host patterns and options
  • tmux man page — config commands and options
  • Neovim user manual — config locations and runtime path

Key insight

Tool configs are leverage points: a small change can reshape your workflow.

Summary

Learn each tool’s config model and use it to build safe, repeatable defaults.

Homework/Exercises

  1. Add one safety-focused git config (e.g., pull.rebase or pull.ff).
  2. Create an SSH host alias and use it to connect.
  3. Split your editor config into modules.

Solutions

  1. Add config in ~/.gitconfig and verify with git config --get.
  2. Add a Host stanza and connect with ssh work-bastion.
  3. Create lua/config/options.lua, lua/config/keymaps.lua, etc.

Concept C6: Dotfile Management, Portability & XDG Hygiene

Fundamentals

Once dotfiles grow beyond a handful of files, you need a system to manage them: where they live in a repo, how they are applied to $HOME, how machine-specific overrides are handled, and how you keep config separate from state. The XDG Base Directory spec defines standard locations for configuration, data, cache, and state. Following it keeps your home directory clean and makes your dotfiles portable. Dotfile managers like GNU Stow, chezmoi, and yadm help you apply that structure safely. Portability means you can migrate or rebuild without manual guesswork, and backups stay clean, predictable, and easy to audit.

Deep Dive into the Concept

At scale, dotfile management is about indirection. You keep your canonical config in a repo, then apply it to your home directory using symlinks, copies, or a manager. This gives you a clean source of truth and a predictable deployment method. GNU Stow is the classic approach: it builds a symlink farm from your repo into $HOME. This keeps the mapping between repo and filesystem explicit and easy to reason about. Stow is minimal and transparent, which makes it a strong first manager.

When you introduce multiple machines, you need templating and conditional logic. Chezmoi excels here: it supports templates, encrypted secrets, and machine-specific data. It can generate config files based on OS, hostname, or user-defined variables. Yadm (Yet Another Dotfiles Manager) builds on Git and adds features like alternate files and templating. This makes it easier to maintain a single repo that adapts to personal and work machines. The tradeoff is complexity: these tools add a command layer that you must learn and maintain.
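
A rough illustration of chezmoi-style templating (chezmoi stores ~/.gitconfig as a source file named dot_gitconfig.tmpl; the hostname and email values here are placeholders):

# dot_gitconfig.tmpl — rendered per machine by chezmoi
[user]
  name = Your Name
{{- if eq .chezmoi.hostname "work-laptop" }}
  email = you@company.example
{{- else }}
  email = you@personal.example
{{- end }}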

Portability is not only about the manager; it is about standards. The XDG Base Directory Specification defines environment variables like XDG_CONFIG_HOME, XDG_DATA_HOME, XDG_CACHE_HOME, and XDG_STATE_HOME. If you follow the spec, you separate configuration (stable intent) from data (state) and cache (disposable). That separation makes backups cleaner, repo boundaries clearer, and migrations easier. Some tools respect XDG; others do not. The goal is not perfection but a documented, intentional structure.

Secrets are the sharpest edge. Private keys, tokens, and API credentials do not belong in plain dotfiles. A portable system must have a secrets strategy: encrypted files (chezmoi), git-crypt, or integration with external secret managers. The best strategy is one you will actually use. It should be documented and enforced by .gitignore rules and pre-commit checks.

Migration deserves its own plan. Moving everything to ~/.config in one step often breaks tools, so migrate incrementally: move one tool, set the appropriate XDG variable, and verify behavior before moving the next. For tools that ignore XDG, you can use symlinks or wrapper scripts, but you should document those exceptions. This keeps the system predictable and makes it easy to identify which tools are “legacy” vs compliant.

Tool choice matters. If you only manage one or two machines, a simple symlink workflow may be enough. If you manage many machines or need templating and secret handling, a dedicated manager can pay off. The goal is to choose the simplest tool that still meets your requirements.

Machine-specific overrides are the last pillar. Your work machine might need a different Git email, SSH key, proxy, or package list. You should isolate these differences in small override files (~/.config/shell/local.zsh, ~/.gitconfig-work) and keep them out of Git. This design keeps your base configuration stable while allowing safe customization per machine. It also makes onboarding simpler: add one machine-specific file and you are done.

Finally, portability must consider the OS. macOS, Linux, and WSL all have different paths, default shells, and system-level configs. A good dotfile system detects OS and loads the correct overrides. This is why clean separation and documentation matter: you should never guess why a config behaves differently across systems.
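
A sketch of how a shell entry point might branch (the file paths are conventions used in this guide, not standards):

# ~/.zshrc — OS-specific config first, machine-local override last
case "$(uname -s)" in
  Darwin) [ -f "$HOME/.config/shell/macos.zsh" ] && source "$HOME/.config/shell/macos.zsh" ;;
  Linux)  [ -f "$HOME/.config/shell/linux.zsh" ] && source "$HOME/.config/shell/linux.zsh" ;;
esac

# machine-specific file, kept out of Git
[ -f "$HOME/.config/shell/local.zsh" ] && source "$HOME/.config/shell/local.zsh"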

How this fits on projects

This concept powers Projects 9 (bootstrap), 12 (XDG), 13 (machine-specific config), 18 (dev container), and the Final Project.

Definitions & key terms

  • XDG spec: Standard directories for config, data, state, and cache.
  • Symlink farm: A structured set of symlinks managed by a tool like Stow.
  • Template: Config file with conditional sections or variables.
  • Override: A machine-specific config file loaded after defaults.

Mental model diagram

Dotfiles Repo -> Manager (stow/chezmoi/yadm) -> $HOME layout -> Tools read config

How it works (step-by-step)

  1. Organize the repo by tool or domain (shell, git, nvim, tmux).
  2. Apply configs using symlinks or a manager.
  3. Add machine-specific overrides outside Git.
  4. Separate config from state with XDG directories.
  5. Validate portability on a second machine.

Minimal concrete example

# GNU Stow example
cd ~/dotfiles
stow shell git nvim

Common misconceptions

  • “Symlinks are risky.” → They are predictable and reversible when managed.
  • “XDG is optional.” → It becomes essential as configs grow.
  • “Private repos protect secrets.” → Accidental leaks still happen.

Check-your-understanding questions

  1. What problem does a dotfile manager solve?
  2. Why separate config and cache?
  3. How do overrides reduce risk?

Check-your-understanding answers

  1. It safely applies structured repo files into $HOME.
  2. Cache can be deleted; config should be stable.
  3. Overrides isolate machine-specific changes from the base config.

Real-world applications

  • Rebuilding your setup on a new laptop in minutes
  • Sharing a baseline config across a team
  • Keeping work and personal identities separate

Where you’ll apply it

Projects 9, 12, 13, 18, and the Final Project.

References

  • XDG Base Directory Specification — config/data/cache/state directories
  • GNU Stow manual — symlink farm management
  • chezmoi docs — templating and encryption
  • yadm docs — alternates and templates

Key insight

Portability is designed, not accidental.

Summary

Use standards, managers, and overrides to keep dotfiles portable, clean, and secure.

Homework/Exercises

  1. Migrate one tool to ~/.config and set XDG variables.
  2. Try Stow or a simple symlink script for one tool.
  3. Create a machine-specific override file and keep it out of Git.

Solutions

  1. Move your config and export XDG_CONFIG_HOME.
  2. stow tmux or create a link.sh script.
  3. Add ~/.config/shell/local.zsh and source it last.

Concept C7: Automation & Reproducibility (Bootstrap, Dev Scripts, Dev Containers)

Fundamentals

Automation turns dotfiles from static configuration into a repeatable system. Bootstrap scripts install dependencies and apply configs; dev scripts standardize workflows across projects; dev containers package the environment itself. The key ideas are idempotence (safe re-runs), clarity (readable logs), and reproducibility (consistent results across machines). These principles are what make dotfiles reliable beyond a single laptop: automation reduces onboarding time, makes recovery after failure predictable, and gives you repeatable outcomes across time as well as across machines. It becomes a single source of truth for setup, recovery, and daily workflows.

Deep Dive into the Concept

Manual setup does not scale. Every new machine or teammate costs time in “setup tax.” Automation reduces that to a script. A bootstrap script should follow phases: install package manager, install core tools, apply dotfiles, and run verification checks. Each phase should be idempotent: check before you install, check before you overwrite, and fail fast if a dependency is missing. The point is not just automation; it is trust that the script can be run safely at any time.
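
A skeleton of those phases (the tool list and log format are arbitrary; this is a sketch, not a complete installer):

#!/usr/bin/env bash
set -euo pipefail
log() { printf '[bootstrap] %s\n' "$*"; }

# Phase 1: core tools — install only what is missing
for tool in git tmux fzf; do
  if command -v "$tool" >/dev/null 2>&1; then
    log "$tool already installed"
  else
    log "installing $tool"
    # brew install "$tool"    # or apt-get/dnf, depending on the OS
  fi
done

# Phase 2: apply dotfiles (ln -sf keeps re-runs safe by replacing stale links)
ln -sf "$HOME/dotfiles/git/gitconfig" "$HOME/.gitconfig"

# Phase 3: verify
git config user.email >/dev/null || log "WARNING: git identity not set"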

Dev scripts are the next layer. They provide a consistent interface across projects (dev start, dev stop, dev logs, dev test). This reduces cognitive load and makes onboarding easier. The design challenge is detection: the script must detect the project type (node, python, rust) or read a local config file that defines behavior. A good dev script includes structured output, clear error messages, and a way to simulate or dry-run commands.
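
A sketch of that interface (the subcommand names and the docker compose backend are illustrative; project detection is omitted):

#!/usr/bin/env bash
# dev — one entry point per project
set -euo pipefail

case "${1:-}" in
  start) docker compose up -d ;;
  stop)  docker compose down ;;
  logs)  docker compose logs -f ;;
  *)     echo "usage: dev {start|stop|logs}" >&2; exit 1 ;;
esac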

Dev containers push reproducibility even further. The Development Containers Specification defines devcontainer.json, which describes how to build or run a development container, what extensions to install, and which commands to run after creation. This is particularly valuable for teams because it standardizes the environment across OSes and makes onboarding near-instant. The tradeoff is complexity: you must manage volumes, handle credentials safely, and optimize performance. That said, if you can build your environment once and run it anywhere, the payoff is massive.

Automation also benefits from version pinning. If your bootstrap script always installs “latest,” you will eventually hit breaking changes. A reproducible system pins versions (e.g., in a Brewfile, apt list, or toolchain file) and records expected versions in documentation. For dev containers, you pin the base image tag. These choices turn your dotfiles into a stable platform rather than a moving target.
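
A Brewfile, applied with brew bundle, pins the tool set (though not exact versions, which Homebrew manages itself); language versions are typically pinned separately in per-project toolchain files:

# Brewfile — declarative tool list for `brew bundle`
brew "git"
brew "tmux"
brew "fzf"
cask "alacritty"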

Finally, automation should be validated. A simple “doctor” script can check versions, ensure configs are linked, confirm that key tools exist, and benchmark shell startup. If you treat dotfiles as code, you should test them. Even a few checks go a long way toward preventing regressions when you update your environment. Consistency beats cleverness.

Operationally, good automation includes logs and dry runs. A bootstrap script should emit clear, prefixed logs and support a “plan” mode that prints what it would do without making changes. This makes it safer to run on a machine that already has state. You can also add a CI job that runs bootstrap inside a clean container to validate that your setup still works after changes. That feedback loop turns your dotfiles into a continuously tested artifact.

Dev containers add another set of concerns: mounts, users, and performance. You need to decide what stays inside the container (tooling, language runtimes) versus what is mounted from the host (source code, SSH keys). The spec supports post-create hooks and features, but you should keep those minimal to avoid long container start times. The key is to treat the container as another target environment for your dotfiles, not a separate system.

How this fits on projects

This concept powers Projects 9 (bootstrap), 15 (dev scripts), 18 (dev containers), and the Final Project.

Definitions & key terms

  • Idempotent: Safe to run multiple times without harmful side effects.
  • Bootstrap script: A script that installs tools and applies dotfiles.
  • Dev script: A standardized workflow command for a project.
  • Dev container: A containerized environment defined in devcontainer.json.

Mental model diagram

Bootstrap -> Dotfiles -> Dev scripts -> Dev container -> Reproducible workflow

How it works (step-by-step)

  1. Install package manager and core tools.
  2. Apply dotfiles and configs.
  3. Verify environment state.
  4. Add dev scripts for project workflows.
  5. Encapsulate in a dev container when needed.

Minimal concrete example

# bootstrap snippet
command -v brew >/dev/null 2>&1 || \
  /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

// .devcontainer/devcontainer.json
{
  "name": "dotfiles-dev",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "postCreateCommand": "./bootstrap.sh"
}

Common misconceptions

  • “Automation is overkill.” → It pays off the second time you reinstall.
  • “Containers are only for production.” → They are excellent for dev reproducibility.
  • “Idempotence is optional.” → Non-idempotent scripts break trust.

Check-your-understanding questions

  1. Why is idempotence essential in bootstrap scripts?
  2. What does devcontainer.json define?
  3. Why do dev scripts improve onboarding?

Check-your-understanding answers

  1. Because you need safe re-runs and reliable recovery.
  2. The container image and dev environment configuration.
  3. They provide a consistent interface across projects.

Real-world applications

  • Onboarding a teammate without a setup wiki
  • Running consistent dev environments across OSes
  • Restoring your environment after a hardware failure

Where you’ll apply it

Projects 9, 15, 18, Final Project.

References

  • Dev Containers Specification — devcontainer.json and lifecycle
  • Wicked Cool Shell Scripts — automation patterns

Key insight

Automation turns personal dotfiles into a reproducible platform.

Summary

Design your bootstrap scripts and dev environments for idempotence, clarity, and repeatability.

Homework/Exercises

  1. Write a bootstrap script that installs five core tools.
  2. Create a dev CLI with start, stop, and logs for one project.
  3. Build a minimal devcontainer that runs your bootstrap.

Solutions

  1. Use command -v checks and clear logging.
  2. Implement subcommands with a case statement and a config file.
  3. Use a base image and postCreateCommand in devcontainer.json.

Glossary (High-Signal)

  • Alias: A short name that expands into a longer command.
  • Function: A shell command with arguments and logic.
  • Prompt: The shell text shown before your input.
  • rc file: Shell startup configuration file (e.g., .zshrc).
  • XDG: Standard locations for config/data/cache directories.
  • Idempotent: Safe to run repeatedly without harmful changes.
  • Symlink: A filesystem pointer to another file.
  • Multiplexing: Reusing a single SSH connection for multiple sessions.
  • Dev container: A containerized development environment.
  • BASH_ENV: File sourced by bash for non-interactive shells.
  • PROMPT_COMMAND / precmd: Hooks used to build dynamic prompts.
  • TERM: Terminal type used to determine capabilities and color support.
  • IncludeIf: Git config directive for conditional includes.
  • Host stanza: A block in SSH config that matches host patterns.

Why Dotfiles Matter (Modern Context & Evolution)

The Modern Problem It Solves

Modern development happens across multiple machines, tools, and contexts. Your editor, Git, shell, SSH, and terminal all carry state and defaults that shape your productivity. Dotfiles are the layer where you make those defaults explicit and portable. Without a disciplined dotfile system, every machine becomes a snowflake and onboarding turns into tribal knowledge.

Real-World Impact (Stats + Signals)

  • Developer scale: GitHub’s 2025 Octoverse reports 180 million+ developers and 36 million new developers added in the last year, highlighting how quickly environments must be reproducible and shareable.
  • Editor dominance: The 2023 Stack Overflow Developer Survey reports VS Code as the most used IDE (73.71% of respondents), which underscores the importance of consistent editor configuration and tooling preferences.

The Core Shift

Dotfiles transform manual setup into a repeatable system. Instead of “remembering how to set things up,” you define the system once and apply it everywhere.

Old Way (Manual Setup)           New Way (Dotfiles System)
---------------------           -------------------------
Install tools by memory          Run bootstrap script
Edit configs ad-hoc              Version configs
Debug mysterious behavior        Reproduce + document
Lose setup on new machine        Apply dotfiles + verify

Context & Evolution (Brief)

Dotfiles started as a few shell rc files in Unix systems. Today, they are full environment blueprints that span shells, editors, terminals, SSH, Git, and containerized dev environments. The rise of multi-machine development and container-based workflows made dotfiles not a preference, but a reliability requirement.


Concept Summary Table

| Concept ID | Concept | What You Need to Internalize |
|------------|---------|------------------------------|
| C1 | Dotfiles & Config-as-Code | Dotfiles are a system; version, structure, document, and test them. |
| C2 | Shell Startup & Environment | Startup order determines which settings apply. Environment variables propagate. |
| C3 | CLI Ergonomics | Aliases, functions, completion, history, and fzf reduce cognitive load. |
| C4 | Terminal UX & Prompt | Prompt is UI; ANSI, fonts, and performance matter. |
| C5 | Tool Power Config | Git, SSH, tmux, Neovim configs are leverage points. |
| C6 | Portability & XDG | Manage configs with tools, follow XDG, isolate secrets, add overrides. |
| C7 | Automation & Reproducibility | Bootstrap scripts, dev scripts, and dev containers make setups repeatable. |

Project-to-Concept Map

| Project | What It Builds | Primer Chapters It Uses |
|---------|----------------|-------------------------|
| 1. Shell Alias System | Alias taxonomy + discovery | C2, C3, C6 |
| 2. Git Configuration Powerhouse | Safe global/local Git defaults | C5, C6 |
| 3. Custom Shell Prompt | Fast, informative prompt | C3, C4 |
| 4. SSH Config Mastery | Host shortcuts + safe SSH defaults | C5, C6 |
| 5. Neovim Configuration | Modular editor config | C5 |
| 6. tmux Setup | Multiplexer workflow + keybindings | C3, C4, C5 |
| 7. Shell Functions Library | Reusable CLI workflows | C2, C3 |
| 8. FZF Power User | Fuzzy navigation everywhere | C3, C4 |
| 9. Dotfiles Bootstrap Script | One-command install | C6, C7 |
| 10. Terminal Emulator Config | Terminal UX tuning | C4 |
| 11. Zsh Plugin System | Curated plugins + performance budget | C2, C5 |
| 12. XDG Compliance | Clean $HOME + XDG layout | C6 |
| 13. Machine-Specific Config | Work/personal overrides | C6 |
| 14. CLI Dashboard | Startup context & status | C3, C4, C7 |
| 15. Dev Environment Scripts | Standardized dev commands | C5, C7 |
| 16. Keybinding System | OS-wide shortcuts & automations | C4, C7 |
| 17. Shell History System | Searchable command memory | C3, C7 |
| 18. Dev Container | Portable dev environment | C6, C7 |
| Final Project | End-to-end dotfiles platform | C1–C7 |

Deep Dive Reading by Concept

| Concept | Book & Chapter | Why This Matters |
|---------|----------------|------------------|
| C1 | The Linux Command Line Ch. 11 (The Environment) | Understand config, environment, and dotfiles. |
| C2 | The Linux Command Line Ch. 11; Shell Programming in Unix, Linux and OS X Ch. 1-3 | Shell startup and environment inheritance. |
| C3 | Effective Shell Ch. 19 (Aliases & Functions); The Linux Command Line Ch. 6 (Pipelines) | Ergonomics and composability. |
| C4 | Effective Shell Ch. 20-21 (Prompt & UX); terminal docs | Prompt design and terminal rendering. |
| C5 | Pro Git Ch. 1.6, 2.7, 8.1 (external); tmux 3 Ch. 1-3; Practical Vim Ch. 1 | Tool configuration mental models. |
| C6 | How Linux Works Ch. 2-3; XDG spec | Filesystem hygiene and portability. |
| C7 | Wicked Cool Shell Scripts Ch. 1-4; dev container spec | Automation and reproducibility. |

Quick Start

Day 1 (2-3 hours)

  1. Initialize a dotfiles repo and add a README (scope + goals).
  2. Create ~/.config/shell/aliases and add 5 aliases from history.
  3. Set up a minimal .gitconfig with your name/email and a safe default.
  4. Add a help or aliases command so your shortcuts are discoverable.

Day 2 (2-3 hours)

  1. Add a prompt (Starship or minimal custom) and measure startup time.
  2. Create a simple bootstrap script that installs 3 tools.
  3. Add a small “doctor” script to verify the basics.
  4. Commit everything and tag the repo as v0.1.

End of Weekend: You should be able to explain where your shell loads config from and run a one-command setup on a fresh machine.


Path A: Minimal Productivity Upgrade (2-3 weekends)

  1. Project 10 (Terminal Emulator)
  2. Project 1 (Aliases)
  3. Project 2 (Git Config)
  4. Project 3 (Prompt)

Path B: Remote/Server Workflow (3-5 weekends)

  1. Project 1
  2. Project 4 (SSH)
  3. Project 6 (tmux)
  4. Project 9 (Bootstrap)

Path C: Full System (8-12 weeks)
Complete projects in numeric order, then the Final Project.

Path D: Completionist (10-14 weeks)
Phase 1: Projects 1–4
Phase 2: Projects 5–8
Phase 3: Projects 9–13
Phase 4: Projects 14–18 + Final Project


Success Metrics

  • You can rebuild a machine from scratch in < 30 minutes
  • Your shell starts in < 200ms
  • You can explain every alias and function you use daily
  • You have zero secrets committed in git
  • You can switch machines without re-learning your workflow
  • Your bootstrap script completes without errors in a clean container
  • Your doctor script reports all required tools present

Optional Appendices

Appendix A: Bash Startup Order Cheat Sheet

Login shell:
  /etc/profile -> first of ~/.bash_profile, ~/.bash_login, ~/.profile
Interactive non-login:
  ~/.bashrc
Non-interactive:
  $BASH_ENV

Appendix B: Zsh Startup Order Cheat Sheet

Always:      ~/.zshenv
Login:       ~/.zprofile -> ~/.zlogin
Interactive: ~/.zshrc
Logout:      ~/.zlogout

Appendix C: XDG Directory Map

~/.config     -> configuration
~/.local/share -> data
~/.local/state -> state
~/.cache       -> cache

Appendix D: Safe Dotfiles Checklist

  • Secrets stored separately or encrypted
  • One command to bootstrap new machine
  • Configs modularized by tool
  • XDG variables set and documented

Project Overview Table

| # | Project | Primary Tooling | Difficulty | Outcome |
|---|---------|-----------------|------------|---------|
| 1 | Shell Alias System | Bash/Zsh | Beginner | A categorized, documented alias system |
| 2 | Git Configuration Powerhouse | Git config | Beginner | Safe, consistent Git defaults |
| 3 | Custom Shell Prompt | Starship/PS1 | Beginner | A fast, informative prompt |
| 4 | SSH Config Mastery | OpenSSH config | Intermediate | Fast, safe SSH workflows |
| 5 | Vim/Neovim Configuration | Vimscript/Lua | Intermediate | A modular editor config |
| 6 | Terminal Multiplexer Setup | tmux | Intermediate | Session/pane workflow mastery |
| 7 | Shell Functions Library | Bash/Zsh | Intermediate | Reusable CLI workflows |
| 8 | FZF Power User Setup | fzf | Intermediate | Fuzzy navigation everywhere |
| 9 | Dotfiles Bootstrap Script | Shell | Advanced | One-command setup |
| 10 | Terminal Emulator Configuration | Alacritty/Kitty/iTerm2 | Intermediate | Terminal UX tuned to workflow |
| 11 | Zsh Plugin System | zinit/antibody/zplug | Intermediate | Fast, curated plugins |
| 12 | XDG Base Directory Compliance | Shell | Intermediate | Clean home directory layout |
| 13 | Machine-Specific Configuration | Shell | Intermediate | Work/personal separation |
| 14 | Custom CLI Dashboard | Shell + fastfetch | Intermediate | High-signal startup context |
| 15 | Local Dev Environment Scripts | Shell/Python/Make | Advanced | Standardized dev workflows |
| 16 | Keybinding System | Hammerspoon/Karabiner | Advanced | OS-wide automation layer |
| 17 | Shell History & Knowledge System | Atuin + cheatsheets | Intermediate | Searchable command memory |
| 18 | Complete Development Container | Dev Containers | Advanced | Portable dev environment |
| Final | Complete Portable Dev Environment | All | Advanced | End-to-end dotfiles system |

Project List

Project 1: Shell Alias System (Your First Productivity Win)

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Bash/Zsh
  • Alternative Programming Languages: Fish, POSIX sh
  • Coolness Level: Level 2: Practical but Forgettable
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 1: Beginner
  • Knowledge Area: Shell Configuration
  • Software or Tool: Zsh/Bash
  • Main Book: “The Linux Command Line, 2nd Edition” by William E. Shotts

What you’ll build: An organized system of shell aliases that shrinks your most frequently typed commands down to 2-3 keystrokes, with categories, documentation, and easy discoverability.

Why it teaches dotfiles: Aliases are the “Hello World” of dotfiles—simple enough to understand immediately, but they introduce the core concepts: configuration files, shell initialization, and the idea that you can reshape your CLI experience.

Core challenges you’ll face:

  • Understanding where to put aliases → maps to shell initialization order
  • Avoiding alias conflicts with existing commands → maps to command precedence
  • Making aliases discoverable → maps to self-documenting configuration
  • Handling aliases that need arguments → maps to when to use functions instead

Key Concepts:

  • Shell Initialization: The Linux Command Line Ch. 11
  • Alias Syntax: Effective Shell Ch. 19
  • Zsh/Bash startup: Official docs

Difficulty: Beginner
Time estimate: Weekend
Prerequisites: Basic command line usage, ability to edit text files

Real World Outcome

$ aliases  # Your custom command to list all aliases

Navigation
  ..     -> cd ..
  ...    -> cd ../..
  dev    -> cd ~/Developer

Git
  g      -> git
  gs     -> git status
  gco    -> git checkout
  gcm    -> git commit -m

Search
  f      -> find . -name
  rg     -> rg --smart-case

Found 47 aliases across 8 categories

Implementation Hints:

Start by auditing your command history to find what you type most:

history | awk '{print $2}' | sort | uniq -c | sort -rn | head -20

Organize your aliases in a dedicated file (e.g., ~/.aliases or ~/.config/shell/aliases.zsh) and source it from your main shell config.
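
As a concrete sketch, the dedicated file and a rough help command might look like this (the ~/.config/shell/aliases.zsh path and the specific aliases are just examples, not a prescribed layout):

# ~/.config/shell/aliases.zsh (path and entries are examples)

# --- Navigation ---
alias ..='cd ..'
alias ...='cd ../..'
alias dev='cd ~/Developer'

# --- Git ---
alias g='git'
alias gs='git status'

# Rough "help" command: print the category comments and aliases from this file
aliases() {
  grep -E '^(# ---|alias )' ~/.config/shell/aliases.zsh
}

# In ~/.zshrc: load the file for every interactive shell
[ -f ~/.config/shell/aliases.zsh ] && source ~/.config/shell/aliases.zsh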

The Core Question You’re Answering

How do I convert repeated command patterns into a safe, discoverable vocabulary that fits how I work?

Concepts You Must Understand First

  • C2 (Shell Startup): The Linux Command Line Ch. 11
  • C3 (CLI Ergonomics): Effective Shell Ch. 19
  • C6 (Portability): XDG basics

Book refs: The Linux Command Line Ch. 11, Effective Shell Ch. 19, How Linux Works Ch. 2-3.

Questions to Guide Your Design

  • Which commands appear in your top 20 history entries?
  • Which commands are dangerous and should be wrapped with safety flags?
  • How will you discover your aliases later (help command or docs)?
  • What naming conventions keep aliases memorable?
  • How will you separate work vs personal aliases?

Thinking Exercise

Take 10 commands you run daily and classify them by intent (navigate, git, build, search). Design a 2-letter vocabulary for each category.

The Interview Questions They’ll Ask

  1. When should you use an alias vs a function?
  2. How does shell initialization order affect aliases?
  3. How do you avoid alias collisions?
  4. How do you document aliases for future you?

Hints in Layers

  • Hint 1: Start with navigation and git aliases only.
  • Hint 2: Use comments to group aliases by category.
  • Hint 3: Build an aliases function that prints sections.

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| The Linux Command Line | Ch. 11 | Environment and shell config basics |
| Effective Shell | Ch. 19 | Alias and function patterns |

Common Pitfalls & Debugging

Problem: Alias works in one terminal but not another

  • Why: Wrong startup file sourced
  • Fix: Ensure .zshrc or .bashrc sources alias file
  • Quick test: type alias_name

Definition of Done

  • At least 15 aliases exist and are categorized
  • Alias file is sourced from correct startup file
  • You have a help command listing aliases
  • Unsafe commands are wrapped with safe defaults

Project 2: Git Configuration Powerhouse

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Git config / Bash
  • Alternative Programming Languages: N/A (Git-specific)
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 1: Beginner
  • Knowledge Area: Version Control Configuration
  • Software or Tool: Git
  • Main Book: “Pro Git” by Scott Chacon (external)

What you’ll build: A comprehensive .gitconfig with custom aliases, useful defaults, delta/diff tooling, global gitignore, commit templates, and conditional includes for work vs personal projects.

Why it teaches dotfiles: Git configuration demonstrates the power of tool-specific dotfiles. You’ll learn about git’s hierarchical config system (system → global → local), conditional includes, and how proper defaults prevent mistakes.

Real World Outcome

$ git lg
* a1b2c3d (HEAD -> feature/auth) Add OAuth support
* e4f5g6h Refactor user model
* i7j8k9l (origin/main, main) Initial commit

$ git standup
a1b2c3d - Add OAuth support (23 hours ago)

$ cat ~/.gitconfig
[user]
  name = Your Name
  email = personal@email.com

[includeIf "gitdir:~/work/"]
  path = ~/.gitconfig-work

The Core Question You’re Answering

How can I encode safe, fast git workflows so I never need to remember long commands or risk mistakes?

Concepts You Must Understand First

  • C5 (Tool Power Config): git-config, includes
  • C6 (Portability): conditional identity

Book refs: Pro Git Ch. 1.6, 2.7, 8.1; How Linux Works Ch. 2-3.

Questions to Guide Your Design

  • Which git commands do you run daily?
  • What defaults reduce risk (rebase, branch naming)?
  • How do you separate work and personal identity?
  • How will you review diffs more effectively?

Thinking Exercise

Design a git alias set for status, log, amend, and undo. Explain why each alias exists.

The Interview Questions They’ll Ask

  1. How does git config precedence work?
  2. What is includeIf used for?
  3. What is the difference between global and local git config?
  4. How do you avoid committing with the wrong identity?

Hints in Layers

  • Hint 1: Start with user.name and user.email.
  • Hint 2: Add a global .gitignore for OS/IDE files.
  • Hint 3: Add includeIf for work repos.
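
Building on the hints above, one possible shape for the alias and safety sections (alias names and defaults are illustrative, not the only sensible choices):

# Partial ~/.gitconfig sketch (alias names and defaults are examples)
[alias]
  lg = log --oneline --graph --decorate -20
  st = status -sb
  undo = reset --soft HEAD~1
[init]
  defaultBranch = main
[pull]
  rebase = true
[fetch]
  prune = true
[core]
  excludesFile = ~/.gitignore_global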

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| Pro Git | Ch. 1.6, 2.7, 8.1 | Git setup, aliases, config |

Common Pitfalls & Debugging

Problem: Wrong email in commits

  • Why: Missing conditional include
  • Fix: Add includeIf in .gitconfig
  • Quick test: git config user.email

Definition of Done

  • Global config contains safe defaults
  • At least 5 aliases in [alias]
  • Conditional include works for work repos
  • Global gitignore configured

Project 3: Custom Shell Prompt (Starship or Pure Prompt)

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: TOML (Starship) / Zsh
  • Alternative Programming Languages: Bash, Fish
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Shell Customization / Terminal UI
  • Software or Tool: Starship / Oh-My-Zsh / Pure
  • Main Book: “Effective Shell” by Dave Kerr

What you’ll build: A custom shell prompt that shows contextual information: git branch/status, current directory, language versions, exit codes, execution time, and optional k8s context.

Real World Outcome

~/Developer/myproject on main [!?] via v18.17.0 took 2s
❯

# After a failed command
~/Developer/myproject on main [!?] ✖
❯

The Core Question You’re Answering

How can a prompt surface the exact state I need without slowing my shell down?

Concepts You Must Understand First

  • C3 (CLI Ergonomics): context-aware workflows
  • C4 (Prompt UX): rendering and performance

Book refs: Effective Shell Ch. 19-21.

Questions to Guide Your Design

  • What context do you actually use daily?
  • Which segments should appear only in relevant directories?
  • How will you keep startup time fast?

Thinking Exercise

Sketch a two-line prompt layout and decide what belongs on each line.

The Interview Questions They’ll Ask

  1. Why can prompts become slow?
  2. How do you show git status efficiently?
  3. What is the tradeoff between information density and clarity?

Hints in Layers

  • Hint 1: Start with current directory and git branch only.
  • Hint 2: Add exit status and command duration.
  • Hint 3: Add language versions conditionally.
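
If you skip Starship and hand-roll the prompt, a minimal zsh sketch using vcs_info and a precmd hook could look like this (colors and symbols are arbitrary choices):

# Minimal zsh prompt: path, git branch, and last exit status
autoload -Uz vcs_info
precmd() { vcs_info }
zstyle ':vcs_info:git:*' formats ' on %b'
setopt PROMPT_SUBST
PROMPT='%F{cyan}%~%f%F{magenta}${vcs_info_msg_0_}%f %(?..%F{red}✖%f )
❯ '

Measure the effect with the same check used in the pitfalls below: time zsh -i -c exit.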

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| Effective Shell | Ch. 20-21 | Prompt structure and UX |

Common Pitfalls & Debugging

Problem: Prompt lags in large git repos

  • Why: Git status check is too slow
  • Fix: Use prompt frameworks with caching or async segments
  • Quick test: time zsh -i -c exit

Definition of Done

  • Prompt shows path and git status
  • Prompt is fast (<200ms shell startup)
  • Error status is visible
  • Config is versioned in dotfiles

Project 4: SSH Config Mastery

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: SSH config syntax
  • Alternative Programming Languages: N/A
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 2. The “Micro-SaaS / Pro Tool”
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Networking / Remote Access
  • Software or Tool: OpenSSH
  • Main Book: “SSH Mastery” by Michael W. Lucas (external)

What you’ll build: A comprehensive SSH configuration with host aliases, jump hosts, connection sharing, identity file management, and proxy configurations.

Real World Outcome

$ ssh prod-api   # uses ProxyJump and IdentityFile automatically
$ ssh dev-db     # picks a different user and key

The Core Question You’re Answering

How can I turn SSH from a long command into a safe, consistent interface?

Concepts You Must Understand First

  • C5 (Tool Power Config): ssh_config directives
  • C6 (Portability): key management and overrides

Book refs: SSH Mastery Ch. 8-10; How Linux Works Ch. 2-3.

Questions to Guide Your Design

  • What hosts do you connect to often?
  • Which hosts require a bastion (ProxyJump)?
  • Which identity keys map to which hosts?

Thinking Exercise

Write a pseudo-SSH config mapping 5 hosts to 2 keys and 1 bastion.

The Interview Questions They’ll Ask

  1. What does ProxyJump do?
  2. What is ControlMaster used for?
  3. How do IdentityFile and IdentitiesOnly interact?

Hints in Layers

  • Hint 1: Start with Host aliases only.
  • Hint 2: Add IdentityFile for each host.
  • Hint 3: Add ControlMaster and ControlPersist for speed.
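
Putting the hints above together, a sketch of ~/.ssh/config might look like this (host names, addresses, and key paths are placeholders to replace with your own):

# ~/.ssh/config sketch (hosts, addresses, and key paths are placeholders)
Host prod-api
    HostName 10.0.12.34
    User deploy
    ProxyJump bastion
    IdentityFile ~/.ssh/id_ed25519_work
    IdentitiesOnly yes

Host bastion
    HostName bastion.example.com
    User deploy

# Connection sharing: create the socket directory once (mkdir -p ~/.ssh/sockets)
Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 10m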

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| SSH Mastery | Ch. 8-10 | SSH config and multiplexing |

Common Pitfalls & Debugging

Problem: SSH tries the wrong key

  • Why: Missing IdentitiesOnly yes
  • Fix: Add IdentitiesOnly yes and explicit IdentityFile
  • Quick test: ssh -v host

Definition of Done

  • Aliases exist for all key hosts
  • ProxyJump configured for bastion
  • Connection multiplexing works
  • Correct keys used per host

Project 5: Vim/Neovim Configuration from Scratch

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Vimscript / Lua (Neovim)
  • Alternative Programming Languages: N/A
  • Coolness Level: Level 4: Hardcore Tech Flex
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 3: Advanced
  • Knowledge Area: Text Editor Configuration
  • Software or Tool: Neovim
  • Main Book: “Practical Vim, 2nd Edition” by Drew Neil

What you’ll build: A complete Neovim configuration with sensible defaults, plugin management, LSP integration, treesitter, telescope, and custom keymaps.

Real World Outcome

┌─────────────────────────────────────────────────────────┐
│ NORMAL  src/auth/login.ts          ts  ln 45  col 18     │
├─────────────────────────────────────────────────────────┤
│  1 import { User } from '../models/user'                │
│  2                                                       │
│  3 export async function login(email: string, password) │
│  4   const user = await User.findByEmail(email)         │
│  5   if (!user) {                                       │
│  6     throw new AuthError('User not found')            │
│  7   }                                                   │
│  8                                                       │
│  9   const valid = await user.checkPassword(password)   │
│ 10   return { token: generateToken(user) }              │
├─────────────────────────────────────────────────────────┤
│  LSP: 1 error, 0 warnings  |  <Space>ff Files  <Space>fg │
└─────────────────────────────────────────────────────────┘

Press gd   -> Go to definition
Press K    -> Hover docs
Press <Space>ca -> Code actions

The Core Question You’re Answering

How do I turn Neovim into a fast, coherent IDE without losing its simplicity?

Concepts You Must Understand First

  • C5 (Tool Power Config): editor configuration

Book refs: Practical Vim Ch. 1-3; Modern Vim Ch. 8-10.

Questions to Guide Your Design

  • Which editor defaults cause friction?
  • Which plugins provide real value vs noise?
  • How will you manage startup performance?

Thinking Exercise

List 5 editor actions you do daily and design keymaps for them.

The Interview Questions They’ll Ask

  1. How do you manage plugins efficiently?
  2. How does LSP improve editing?
  3. How do you keep startup time fast?

Hints in Layers

  • Hint 1: Start with options and keymaps only.
  • Hint 2: Add plugin manager and one plugin at a time.
  • Hint 3: Add LSP last and test each language.
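
A minimal init.lua sketch for Hint 1, options and keymaps only (the specific options and mappings are examples, not requirements):

-- ~/.config/nvim/init.lua sketch: options and keymaps only (values are examples)
vim.g.mapleader = " "
vim.opt.number = true
vim.opt.relativenumber = true
vim.opt.expandtab = true
vim.opt.shiftwidth = 2
vim.opt.ignorecase = true
vim.opt.smartcase = true

-- Two quality-of-life keymaps
vim.keymap.set("n", "<leader>w", ":write<CR>", { desc = "Save file" })
vim.keymap.set("n", "<Esc>", ":nohlsearch<CR>", { desc = "Clear search highlight" })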

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| Practical Vim | Ch. 1-3 | Core editor mastery |
| Modern Vim | Ch. 8-10 | Advanced workflows |

Common Pitfalls & Debugging

Problem: Startup is slow

  • Why: Too many plugins loading eagerly
  • Fix: Use lazy loading and profile startup
  • Quick test: nvim --startuptime /tmp/nvim.log

Definition of Done

  • Config modularized into Lua files
  • LSP works for at least one language
  • Keymaps documented
  • Startup time < 200ms on empty repo

Project 6: Terminal Multiplexer Setup (tmux)

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: tmux config
  • Alternative Programming Languages: N/A
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Terminal Management / Session Persistence
  • Software or Tool: tmux
  • Main Book: “tmux 3: Productive Mouse-Free Development” by Brian P. Hogan

What you’ll build: A complete tmux configuration with intuitive keybindings, session management, a status bar, and workflow scripts.

Real World Outcome

┌─────────────────────────────────────────────────────────┐
│ 0:nvim   1:server   2:logs   3:shell            10:30AM │
├──────────────────────────────┬──────────────────────────┤
│ [Neovim editing]             │ $ npm run dev            │
│                              │ Server listening on 3000 │
│                              │                           │
├──────────────────────────────┴──────────────────────────┤
│ $ docker logs -f api                                     │
│ [10:30:22] GET /health 200 12ms                          │
└─────────────────────────────────────────────────────────┘
Ctrl-a c  -> new window   Ctrl-a |  -> split vertical
Ctrl-a d  -> detach       tmux attach -> reattach

The Core Question You’re Answering

How do I create persistent terminal sessions that survive disconnects and mirror my workflow?

Concepts You Must Understand First

  • C5 (Tool Power Config)
  • C3 (CLI Ergonomics)

Book refs: tmux 3 Ch. 1-3; Effective Shell Ch. 19.

Questions to Guide Your Design

  • What prefix key is ergonomic for you?
  • How do you want splits and navigation to behave?
  • What should appear in the status bar?

Thinking Exercise

Sketch a tmux layout for a project with editor, server, and logs.

The Interview Questions They’ll Ask

  1. What problem does tmux solve?
  2. How do you manage sessions across projects?
  3. How do you customize status line content?

Hints in Layers

  • Hint 1: Remap prefix and add split shortcuts.
  • Hint 2: Configure status line and colors.
  • Hint 3: Add session scripts.
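
A sketch of the corresponding ~/.tmux.conf lines (the prefix and split keys here are preferences, not requirements):

# ~/.tmux.conf sketch (prefix and keys are preferences)
unbind C-b
set -g prefix C-a
bind C-a send-prefix

# Splits matching the cheat sheet above
bind | split-window -h
bind - split-window -v

set -g mouse on
set -g base-index 1
set -g status-right "#S  %a %H:%M"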

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| tmux 3 | Ch. 1-3 | Core tmux workflows |

Common Pitfalls & Debugging

Problem: Keybindings conflict with shell

  • Why: Prefix overlaps with readline bindings
  • Fix: Choose a different prefix (Ctrl-a or Ctrl-Space)
  • Quick test: Run tmux list-keys

Definition of Done

  • Prefix remapped
  • Split and navigation keys configured
  • Status bar shows useful info
  • Session restore workflow exists

Project 7: Shell Functions Library

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Bash/Zsh
  • Alternative Programming Languages: Fish
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Shell Scripting
  • Software or Tool: Zsh/Bash
  • Main Book: “Shell Script Professional” (external)

What you’ll build: A library of shell functions that automate your workflows (directory jumping, git worktrees, docker helpers, scaffolding).

Real World Outcome

$ mkproject my-api --template node
Creating project: my-api
✓ Initialized git repo
✓ Installed dependencies
✓ Created src/ tests/ README.md

$ gwt feature/auth
Created worktree at ~/Developer/project-feature-auth
Switched to ~/Developer/project-feature-auth

$ dlogs api
2025-01-01T10:14:03Z  [api] Server listening on :3000

The Core Question You’re Answering

How can I turn repeated multi-step workflows into reliable one-command functions?

Concepts You Must Understand First

  • C2 (Shell Startup)
  • C3 (CLI Ergonomics)
  • C7 (Automation)

Book refs: The Linux Command Line Ch. 11; Effective Shell Ch. 19; Wicked Cool Shell Scripts Ch. 1-4.

Questions to Guide Your Design

  • Which workflows cost you the most time each week?
  • What inputs should the function accept?
  • What should happen on errors?

Thinking Exercise

Pick one workflow (e.g., create project) and design a CLI interface for it.

The Interview Questions They’ll Ask

  1. How do you parse arguments in shell functions?
  2. How do you return error codes in functions?
  3. How do you keep functions discoverable?

Hints in Layers

  • Hint 1: Start with 2 functions only.
  • Hint 2: Add --help output.
  • Hint 3: Use consistent naming conventions.
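
As one possible shape, a worktree helper with --help output and basic error handling (the gwt name and directory convention are illustrative):

# A worktree helper sketch (name and directory layout are illustrative)
gwt() {
  if [ "$1" = "--help" ] || [ -z "$1" ]; then
    echo "usage: gwt <branch>   # create and enter a git worktree for <branch>"
    return 1
  fi
  local branch="$1"
  local dir="../$(basename "$PWD")-${branch//\//-}"
  git worktree add "$dir" -b "$branch" || return 1
  cd "$dir" || return 1
}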

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| Wicked Cool Shell Scripts | Ch. 1-4 | Shell scripting patterns |
| The Linux Command Line | Ch. 32 | Positional params |

Common Pitfalls & Debugging

Problem: Functions shadow system commands

  • Why: Naming conflicts
  • Fix: Prefix function names or namespace them
  • Quick test: type function_name

Definition of Done

  • 5+ functions used weekly
  • Functions handle errors and arguments
  • Functions are documented

Project 8: FZF Power User Setup

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Bash/Zsh + FZF config
  • Alternative Programming Languages: Fish
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Fuzzy Finding / CLI UX
  • Software or Tool: fzf, ripgrep, bat, fd
  • Main Book: “Effective Shell” by Dave Kerr

What you’ll build: A comprehensive FZF setup with custom keybindings, preview windows, and workflow-specific fuzzy finders.

Real World Outcome

# Ctrl-T: fuzzy file picker with preview
┌──────────────────────────────────────────────┐
│ > src/auth/login.ts                          │
│   src/auth/logout.ts                         │
│   src/index.ts                               │
├──────────────────────────────────────────────┤
│ 1 import { login } from './auth/login'       │
│ 2 export async function login(...) { ... }   │
│ 3 // preview via bat                          │
└──────────────────────────────────────────────┘

# Ctrl-R: fuzzy history
> docker compose up -d
> git rebase -i HEAD~5
> rg \"TODO\" src/

The Core Question You’re Answering

How can fuzzy search become the default way I navigate files, history, and systems?

Concepts You Must Understand First

  • C3 (CLI Ergonomics)
  • C4 (Terminal UX)

Book refs: Effective Shell Ch. 6-7, 20.

Questions to Guide Your Design

  • Which lists do you navigate most often?
  • What preview information is useful?
  • How do you integrate fzf with git and docker?

Thinking Exercise

Design a fuzzy finder for git branches, including preview of last commit.

The Interview Questions They’ll Ask

  1. What does CTRL-T do in fzf?
  2. How do you override the file list source?
  3. How do you integrate fzf with shell history?

Hints in Layers

  • Hint 1: Enable default keybindings.
  • Hint 2: Add previews with bat.
  • Hint 3: Build one custom function.
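
A sketch of one custom finder: a branch picker that previews the latest commit (the function name is arbitrary):

# Fuzzy-checkout a local branch, previewing its latest commit (a sketch)
fbr() {
  local branch
  branch=$(git branch --format='%(refname:short)' |
    fzf --height 40% --preview 'git log -1 --oneline --color=always {}') || return
  git checkout "$branch"
}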

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| Effective Shell | Ch. 6-7 | Interactive shell workflows |

Common Pitfalls & Debugging

Problem: fzf ignores hidden files

  • Why: Default command excludes them
  • Fix: Set FZF_DEFAULT_COMMAND='fd --hidden --exclude .git'
  • Quick test: echo $FZF_DEFAULT_COMMAND

Definition of Done

  • Default keybindings active (CTRL-T, CTRL-R, ALT-C)
  • Previews configured with bat
  • At least 2 custom fzf functions

Project 9: Dotfiles Bootstrap Script

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Bash
  • Alternative Programming Languages: Python, Ansible
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 2. The “Micro-SaaS / Pro Tool”
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Automation / System Administration
  • Software or Tool: Bash, homebrew
  • Main Book: “Wicked Cool Shell Scripts, 2nd Edition” by Dave Taylor and Brandon Perry

What you’ll build: A single-command bootstrap script that installs your complete dev environment and applies dotfiles.

Real World Outcome

$ ./bootstrap.sh
[1/6] Installing package manager... ok
[2/6] Installing CLI tools... git nvim tmux fzf ripgrep
[3/6] Linking dotfiles... .zshrc .gitconfig .config/nvim
[4/6] Installing plugins... ok
[5/6] Verifying setup... ok
[6/6] Done in 5m 12s

The Core Question You’re Answering

How can I turn a new machine into my machine in a single command?

Concepts You Must Understand First

  • C7 (Automation & Reproducibility)
  • C6 (Portability)

Book refs: Wicked Cool Shell Scripts Ch. 1-4; How Linux Works Ch. 2-3.

Questions to Guide Your Design

  • What are the minimum tools I need on day one?
  • How will I detect OS and package manager?
  • How will I handle failures and retries?

Thinking Exercise

Write a checklist of steps you take when setting up a new machine, then convert them into script phases.

The Interview Questions They’ll Ask

  1. What does idempotent mean?
  2. How do you avoid reinstalling already-installed tools?
  3. How do you log bootstrap progress?

Hints in Layers

  • Hint 1: Start with a minimal tool list.
  • Hint 2: Add OS detection and conditional installs.
  • Hint 3: Add verification checks at the end.
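
A skeleton that combines the hints above, assuming Homebrew on macOS and apt on Linux (the tool list and package managers are assumptions to adapt):

#!/usr/bin/env bash
# bootstrap.sh skeleton (tool list and package managers are assumptions)
set -euo pipefail

log() { printf '==> %s\n' "$*"; }

install_pkg() {
  local pkg="$1"
  if command -v "$pkg" >/dev/null 2>&1; then
    log "$pkg already installed"
    return
  fi
  case "$(uname -s)" in
    Darwin) brew install "$pkg" ;;
    Linux)  sudo apt-get install -y "$pkg" ;;   # assumes a Debian-family distro
  esac
}

for tool in git tmux fzf jq; do
  install_pkg "$tool"
done
log "core tools ready"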

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| Wicked Cool Shell Scripts | Ch. 1-4 | Idempotent scripting patterns |

Common Pitfalls & Debugging

Problem: Script fails halfway and leaves system inconsistent

  • Why: No error handling and no logging
  • Fix: Use set -euo pipefail and log each step
  • Quick test: Run script twice to confirm idempotence

Definition of Done

  • Script installs core tools
  • Script applies dotfiles
  • Script is idempotent
  • Script logs progress clearly

Project 10: Terminal Emulator Configuration (Alacritty/Kitty)

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: YAML (Alacritty) / Conf (Kitty)
  • Alternative Programming Languages: TOML (WezTerm)
  • Coolness Level: Level 2: Practical but Forgettable
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 1: Beginner
  • Knowledge Area: Terminal Configuration
  • Software or Tool: Alacritty/Kitty/WezTerm

What you’ll build: A configured terminal emulator with custom fonts, colors, and key bindings.

Real World Outcome

~/project on main
❯ ls
.git  README.md  src  tests

Terminal feels smooth:
- Nerd Font icons render correctly
- Colors match editor theme
- Copy/paste shortcuts work

The Core Question You’re Answering

How do I make my terminal readable, fast, and consistent with the rest of my tools?

Concepts You Must Understand First

  • C4 (Terminal UX)

Book refs: Effective Shell Ch. 20-21.

Questions to Guide Your Design

  • Which font is most readable for you?
  • Do you need a light or dark theme?
  • Which keybindings conflict with your shell?

Thinking Exercise

Design a color palette and test it in both your terminal and editor.

The Interview Questions They’ll Ask

  1. What is the role of a terminal emulator?
  2. Why do fonts impact prompt alignment?
  3. How do you keep colors consistent across tools?

Hints in Layers

  • Hint 1: Pick a Nerd Font and set size 12-14.
  • Hint 2: Choose a known color scheme.
  • Hint 3: Configure padding and copy/paste shortcuts.
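
If you use Kitty, a minimal kitty.conf sketch might look like this (the font and theme file are assumptions; Alacritty and WezTerm expose equivalent settings):

# ~/.config/kitty/kitty.conf sketch (font and theme file are assumptions)
font_family      JetBrainsMono Nerd Font
font_size        13.0
window_padding_width 8

# Keep colors in a separate file so the palette can match your editor theme
include ./theme.conf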

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| Effective Shell | Ch. 20 | Prompt and terminal UX |

Common Pitfalls & Debugging

Problem: Icons render as squares

  • Why: Nerd Font not installed
  • Fix: Install Nerd Font and set it in config
  • Quick test: Print a known glyph

Definition of Done

  • Font renders correctly
  • Theme matches editor
  • Terminal config versioned

Project 11: ZSH Plugin System (Without Oh-My-Zsh Bloat)

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Zsh
  • Alternative Programming Languages: N/A
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 3: Advanced
  • Knowledge Area: Shell Internals / Plugin Architecture
  • Software or Tool: Zsh, zinit/antibody/zplug

What you’ll build: A fast Zsh setup with a minimal plugin set, lazy loading, and a measured startup budget.

Real World Outcome

$ time zsh -i -c exit
zsh -i -c exit  0.06s user 0.04s system 0.10s total

$ zsh-bench
Total startup: 94ms
Plugins: zsh-autosuggestions, zsh-syntax-highlighting, zsh-completions

The Core Question You’re Answering

How can I get powerful shell features without slow startup times?

Concepts You Must Understand First

  • C2 (Shell Startup)
  • C5 (Tool Power Config)

Book refs: The Linux Command Line Ch. 11; Effective Shell Ch. 24-26.

Questions to Guide Your Design

  • Which plugins provide real daily value?
  • What can be lazy-loaded?
  • How will you measure startup time?

Thinking Exercise

List 5 plugins you use and explain what each actually does.

The Interview Questions They’ll Ask

  1. How do zsh startup files affect plugin loading?
  2. What is lazy loading and why does it matter?
  3. How do you profile shell startup?

Hints in Layers

  • Hint 1: Start with only 2 plugins.
  • Hint 2: Add completion caching.
  • Hint 3: Use zinit turbo mode for lazy loading.
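
Whichever plugin manager you pick, zsh's built-in profiler shows where startup time actually goes; a minimal sketch:

# Top of ~/.zshrc: start the profiler before anything heavy loads
zmodload zsh/zprof

# ... plugin manager, plugins, completions ...

# Bottom of ~/.zshrc: print per-function timings when a new shell starts
zprof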

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| Effective Shell | Ch. 24-26 | Shell performance and config |

Common Pitfalls & Debugging

Problem: Shell startup > 500ms

  • Why: Too many plugins loaded eagerly
  • Fix: Lazy load and remove unused plugins
  • Quick test: time zsh -i -c exit

Definition of Done

  • Startup < 150ms
  • Plugins documented
  • Completions work correctly

Project 12: XDG Base Directory Compliance

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Shell + configs
  • Alternative Programming Languages: N/A
  • Coolness Level: Level 2: Practical but Forgettable
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: System Organization / Standards

What you’ll build: An XDG-compliant home directory layout with migrated configs and documented exceptions.

Real World Outcome

$ ls -a ~ | wc -l
9

$ ls ~/.config
alacritty  git  nvim  shell  tmux  zsh

$ echo $XDG_CONFIG_HOME
/Users/you/.config

The Core Question You’re Answering

How do I keep my home directory clean and portable while still supporting old tools?

Concepts You Must Understand First

  • C6 (Portability & XDG)

Book refs: How Linux Works Ch. 2-3.

Questions to Guide Your Design

  • Which tools support XDG natively?
  • Which tools need wrappers or symlinks?
  • How will you verify nothing broke?

Thinking Exercise

Create a table of your top 10 tools and where they store config/data.

The Interview Questions They’ll Ask

  1. What is XDG_CONFIG_HOME?
  2. Why separate config and cache?
  3. How do you handle tools that ignore XDG?

Hints in Layers

  • Hint 1: Start with git and zsh.
  • Hint 2: Move config files one tool at a time.
  • Hint 3: Track migration in a checklist.
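
A sketch of the environment variables, set in a file every shell reads; ZDOTDIR is one example of pointing a tool at XDG paths, and support varies per tool:

# ~/.zshenv (or another always-sourced file): declare the XDG base directories
export XDG_CONFIG_HOME="$HOME/.config"
export XDG_DATA_HOME="$HOME/.local/share"
export XDG_STATE_HOME="$HOME/.local/state"
export XDG_CACHE_HOME="$HOME/.cache"

# Example of nudging one tool toward XDG paths (zsh reads its rc files from ZDOTDIR)
export ZDOTDIR="$XDG_CONFIG_HOME/zsh"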

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| How Linux Works | Ch. 2-3 | Filesystem structure |

Common Pitfalls & Debugging

Problem: Tool stops working after moving config

  • Why: Missing env var or wrong path
  • Fix: Set correct XDG env variable
  • Quick test: Run tool with --config flag or debug output

Definition of Done

  • XDG vars set
  • 5+ tools migrated
  • Home directory has < 15 dotfiles

Project 13: Machine-Specific Configuration

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Shell
  • Alternative Programming Languages: Python
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 2. The “Micro-SaaS / Pro Tool”
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Configuration Management

What you’ll build: A dotfiles system that loads different settings based on machine type or hostname.

Real World Outcome

$ chezmoi apply
Applying .gitconfig: email=work@company.com
Applying .config/shell/local.zsh: work aliases loaded

The Core Question You’re Answering

How do I keep one dotfiles repo while supporting multiple machines with different needs?

Concepts You Must Understand First

  • C6 (Portability & Overrides)

Book refs: Effective Shell Ch. 26.

Questions to Guide Your Design

  • How will you detect machine type (OS, hostname)?
  • Which settings differ across machines?
  • How will you keep secrets separate?

Thinking Exercise

Define three machine profiles (work, personal, server) and list differences.

The Interview Questions They’ll Ask

  1. What is templating in dotfile managers?
  2. How do you detect machine context in shell?
  3. How do you avoid leaking work config into personal?

Hints in Layers

  • Hint 1: Use hostname-based conditionals.
  • Hint 2: Add a local file for overrides.
  • Hint 3: Use chezmoi templates or yadm alternate files.
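
A shell-level sketch of Hints 1 and 2 (the hostname patterns and file paths are placeholders):

# Loaded from .zshrc; hostname patterns and paths are placeholders
case "$(hostname -s)" in
  work-*) source "$HOME/.config/shell/work.zsh" ;;
  *)      source "$HOME/.config/shell/personal.zsh" ;;
esac

# local.zsh is gitignored, so each machine can add untracked overrides
[ -f "$HOME/.config/shell/local.zsh" ] && source "$HOME/.config/shell/local.zsh"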

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| Effective Shell | Ch. 26 | Shell scripting practices |

Common Pitfalls & Debugging

Problem: Wrong config loaded on a machine

  • Why: Conditionals too broad
  • Fix: Add explicit match and log machine type
  • Quick test: echo $MACHINE_TYPE

Definition of Done

  • Repo works on 2+ machines
  • Work and personal identities separated
  • Overrides documented

Project 14: Custom CLI Dashboard (Fastfetch + Scripts)

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Shell
  • Alternative Programming Languages: Python
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: CLI UX / System Information

What you’ll build: A login dashboard that shows system info, repo status, tasks, and a daily focus prompt.

Real World Outcome

┌─────────────────────────────────────────────────────┐
│ douglas@macbook      Wed, Dec 31 10:30 AM            │
│ macOS 14.x  •  M2 Pro  •  72F                        │
├─────────────────────────────────────────────────────┤
│ RECENT REPOS                 TODAY                  │
│ api-service   main ✓          [ ] Review PR #245     │
│ web-frontend  feat/auth ●     [x] Deploy staging    │
├─────────────────────────────────────────────────────┤
│ Quote: "First, solve the problem. Then, write code." │
└─────────────────────────────────────────────────────┘

The Core Question You’re Answering

How can my terminal give me high-signal context every time I open it?

Concepts You Must Understand First

  • C3 (CLI Ergonomics)
  • C4 (Terminal UX)
  • C7 (Automation)

Book refs: Effective Shell Ch. 19-21; Wicked Cool Shell Scripts Ch. 5-7.

Questions to Guide Your Design

  • What info actually helps you start work?
  • Which data is slow to fetch (needs caching)?
  • How will you make the dashboard skippable?

Thinking Exercise

Design a dashboard layout on paper with 3 data blocks and 1 quote.

The Interview Questions They’ll Ask

  1. How do you keep startup fast?
  2. How do you cache API data?
  3. How do you handle API failures gracefully?

Hints in Layers

  • Hint 1: Start with static system info only.
  • Hint 2: Add one API source and cache it.
  • Hint 3: Add key repo git status.
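
One way to cache slow data sources, sketched as a small helper (the cache path and TTL convention are arbitrary choices):

# Cache a slow command's output for N minutes (a sketch; path is illustrative)
cached() {
  local ttl_min="$1"; shift
  local cache="${XDG_CACHE_HOME:-$HOME/.cache}/dashboard/$(echo "$*" | tr ' /' '__')"
  mkdir -p "$(dirname "$cache")"
  # Reuse the cached copy if it is newer than $ttl_min minutes and non-empty
  if [ -s "$cache" ] && [ -n "$(find "$cache" -mmin -"$ttl_min")" ]; then
    cat "$cache"
  else
    "$@" | tee "$cache"
  fi
}

# Usage: cached 60 curl -s https://api.example.com/quote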

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| Wicked Cool Shell Scripts | Ch. 5-7 | Formatting output and APIs |

Common Pitfalls & Debugging

Problem: Dashboard slows down shell startup

  • Why: Network calls on every shell
  • Fix: Cache results and run async
  • Quick test: time zsh -i -c exit

Definition of Done

  • Dashboard runs < 200ms
  • Data is cached
  • Dashboard can be disabled per session

Project 15: Local Dev Environment Scripts (Projects, Services, Workflows)

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Shell
  • Alternative Programming Languages: Python, Makefile
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 2. The “Micro-SaaS / Pro Tool”
  • Difficulty: Level 3: Advanced
  • Knowledge Area: Developer Experience / Automation

What you’ll build: A dev CLI that standardizes start/stop/logs/test across multiple project types.

Real World Outcome

$ dev start
Starting project: api-service (node)
✓ docker-compose up -d
✓ waiting for db... ready
✓ running migrations... ok
✓ starting server in tmux

$ dev logs
[api] Server listening on :3000
[db]  Ready to accept connections

The Core Question You’re Answering

How do I create one consistent command interface across all my projects?

Concepts You Must Understand First

  • C7 (Automation)
  • C5 (Tool Config)

Book refs: Wicked Cool Shell Scripts Ch. 8-10; Pro Git Ch. 2.7 (aliases for workflow consistency).

Questions to Guide Your Design

  • How will you detect project type?
  • What subcommands will you standardize?
  • How will you handle failures?

Thinking Exercise

Draft a dev CLI spec with 5 subcommands and their outputs.

The Interview Questions They’ll Ask

  1. What makes a script interface stable?
  2. How do you handle background processes safely?
  3. How do you log and aggregate output?

Hints in Layers

  • Hint 1: Implement dev start only.
  • Hint 2: Add dev logs and dev stop.
  • Hint 3: Add project detection and config file.
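
A minimal dev skeleton built around a case statement, assuming docker compose projects (the subcommands and messages are illustrative):

#!/usr/bin/env bash
# dev: one interface per project (subcommands and compose usage are a sketch)
set -euo pipefail

cmd="${1:-help}"

case "$cmd" in
  start)
    docker compose up -d
    echo "services started"
    ;;
  stop)
    docker compose down
    ;;
  logs)
    docker compose logs -f
    ;;
  *)
    echo "usage: dev {start|stop|logs}"
    exit 1
    ;;
esac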

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| Wicked Cool Shell Scripts | Ch. 8-10 | Process management |

Common Pitfalls & Debugging

Problem: Scripts fail silently

  • Why: No error handling or logs
  • Fix: Add set -euo pipefail and logging
  • Quick test: Add a failing command and verify output

Definition of Done

  • dev start/stop/logs works in 2 project types
  • Errors are logged clearly
  • Interface documented

Project 16: Keybinding System (Hammerspoon/Karabiner)

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Lua (Hammerspoon) / JSON (Karabiner)
  • Alternative Programming Languages: AppleScript
  • Coolness Level: Level 4: Hardcore Tech Flex
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 3: Advanced
  • Knowledge Area: macOS Automation / System Integration

What you’ll build: A hyper-key driven automation layer for window management and app launching.

Real World Outcome

$ kb
Hyper+H  -> Window left half
Hyper+L  -> Window right half
Hyper+T  -> Launch Terminal
Hyper+B  -> Launch Browser
Hyper+M  -> Maximize window

The Core Question You’re Answering

How do I turn my keyboard into a programmable control surface for my OS?

Concepts You Must Understand First

  • C4 (Terminal UX)
  • C7 (Automation)

Book refs: The Pragmatic Programmer Ch. 2.

Questions to Guide Your Design

  • What apps need fast switching?
  • What window layouts are most useful?
  • Which key should become Hyper?

Thinking Exercise

Design a 10-key hyper layout for your most-used actions.

The Interview Questions They’ll Ask

  1. What is a hyper key and why is it useful?
  2. How does Hammerspoon bind hotkeys?
  3. How do you avoid conflicts with app shortcuts?

Hints in Layers

  • Hint 1: Remap Caps Lock to Hyper.
  • Hint 2: Add window management bindings.
  • Hint 3: Add app launchers.
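
A Hammerspoon sketch of the window and launcher bindings (the app names and the Hyper modifier set are assumptions):

-- ~/.hammerspoon/init.lua sketch (Hyper = Cmd+Alt+Ctrl+Shift; app names are assumptions)
local hyper = { "cmd", "alt", "ctrl", "shift" }

-- Window halves
hs.hotkey.bind(hyper, "h", function()
  hs.window.focusedWindow():moveToUnit({ x = 0, y = 0, w = 0.5, h = 1 })
end)
hs.hotkey.bind(hyper, "l", function()
  hs.window.focusedWindow():moveToUnit({ x = 0.5, y = 0, w = 0.5, h = 1 })
end)

-- App launchers
hs.hotkey.bind(hyper, "t", function() hs.application.launchOrFocus("iTerm") end)
hs.hotkey.bind(hyper, "b", function() hs.application.launchOrFocus("Safari") end)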

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| The Pragmatic Programmer | Ch. 2 | Automating your workflow |

Common Pitfalls & Debugging

Problem: Keybinding doesn’t trigger

  • Why: Modifier conflict or wrong key code
  • Fix: Use Karabiner EventViewer to confirm key codes
  • Quick test: Bind the key to a simple alert

Definition of Done

  • Hyper key works
  • 10+ shortcuts configured
  • Config versioned

Project 17: Shell History and Knowledge System

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Shell + Rust tools
  • Alternative Programming Languages: Python
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 2. The “Micro-SaaS / Pro Tool”
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: CLI Productivity / Personal Knowledge

What you’ll build: A synced, searchable command history with context, stats, and cheatsheets.

Real World Outcome

$ atuin search docker
2h ago  ~/api-project   docker compose up -d
1d ago  ~/api-project   docker logs -f api
1w ago  ~/infra         docker system prune -af

The Core Question You’re Answering

How do I turn my command history into a searchable, context-rich knowledge base?

Concepts You Must Understand First

  • C3 (CLI Ergonomics)
  • C7 (Automation)

Book refs: Effective Shell Ch. 22-23; Wicked Cool Shell Scripts Ch. 5.

Questions to Guide Your Design

  • What metadata do you want in history search?
  • How will you sync across machines?
  • How will you document important commands?

Thinking Exercise

List 5 commands you often forget and design a cheatsheet entry for each.

The Interview Questions They’ll Ask

  1. Why is plain shell history insufficient?
  2. What does atuin store and where?
  3. How do you prevent sensitive commands from being saved?

Hints in Layers

  • Hint 1: Enable extended history with timestamps.
  • Hint 2: Install atuin and integrate with shell.
  • Hint 3: Add navi cheatsheets.
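
A zsh sketch of Hint 1 plus a few safety options (the sizes and history path are illustrative):

# In ~/.zshrc (sizes and path are examples; create the directory once with mkdir -p)
export HISTFILE="$HOME/.local/state/zsh/history"
export HISTSIZE=100000
export SAVEHIST=100000
setopt EXTENDED_HISTORY      # record timestamps and durations
setopt SHARE_HISTORY         # share history across running sessions
setopt HIST_IGNORE_SPACE     # a leading space keeps a command out of history
setopt HIST_IGNORE_ALL_DUPS  # drop older duplicate entries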

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| Effective Shell | Ch. 22-23 | History and search workflows |

Common Pitfalls & Debugging

Problem: History search is incomplete

  • Why: History size too small or not shared
  • Fix: Increase HISTSIZE and enable SHARE_HISTORY
  • Quick test: Run multiple shells and check sharing

Definition of Done

  • History sync works across machines
  • 20+ cheatsheet entries exist
  • Sensitive commands are filtered

Project 18: Complete Development Container

  • File: LEARN_DOTFILES_PRODUCTIVITY.md
  • Main Programming Language: Dockerfile + Shell
  • Alternative Programming Languages: Nix
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 3. The “Service & Support”
  • Difficulty: Level 3: Advanced
  • Knowledge Area: Containerization / Reproducible Environments

What you’ll build: A devcontainer setup that installs your dotfiles inside a containerized environment.

Real World Outcome

$ code .
# VS Code prompts: "Reopen in Container"
# ... container builds ...

dev@container:/workspaces/project$ nvim .
dev@container:/workspaces/project$ git status
On branch main

The Core Question You’re Answering

How can I package my entire dev environment into a portable container?

Concepts You Must Understand First

  • C7 (Automation)
  • C6 (Portability)

Book refs: Fundamentals of Software Architecture Ch. 10; How Linux Works Ch. 2-3.

Questions to Guide Your Design

  • Which tools must be inside the container?
  • What should be mounted from host vs baked in?
  • How will you handle credentials safely?

Thinking Exercise

Write a minimal devcontainer.json that installs one tool and mounts your repo.

The Interview Questions They’ll Ask

  1. What is the benefit of a dev container?
  2. How do you keep containers fast?
  3. What should never be baked into a container image?

Hints in Layers

  • Hint 1: Start with a base devcontainers image.
  • Hint 2: Add your bootstrap script in postCreate.
  • Hint 3: Mount your dotfiles repo read-only.
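
Extending the earlier minimal example with Hints 2 and 3, a sketch (the image, paths, and read-only mount flag are assumptions to adapt):

// .devcontainer/devcontainer.json sketch (image, paths, and readonly flag are assumptions)
{
  "name": "project-dev",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "postCreateCommand": "/home/vscode/.dotfiles/bootstrap.sh",
  "mounts": [
    "source=${localEnv:HOME}/.dotfiles,target=/home/vscode/.dotfiles,type=bind,readonly"
  ]
}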

Books That Will Help

| Book | Chapters | Why This Matters |
|------|----------|------------------|
| Fundamentals of Software Architecture | Ch. 10 | Environment consistency |

Common Pitfalls & Debugging

Problem: Container lacks host SSH keys

  • Why: Keys not mounted
  • Fix: Bind mount ~/.ssh read-only
  • Quick test: ssh -T git@github.com inside container

Definition of Done

  • Container builds successfully
  • Dotfiles applied inside container
  • Tools available match host

Final Project: The Complete Portable Development Environment

What you’ll build: A polished dotfiles system that installs in minutes, works across macOS/Linux/containers, has documented keybindings, and can be shared as a template.

Real World Outcome

$ curl -fsSL https://yourdomain/install.sh | bash
Installing packages... ok
Linking dotfiles... ok
Setting shell defaults... ok
Running verification... ok
Setup complete in 7m 40s

The Core Question You’re Answering

How do I build a complete, shareable developer environment that outlives any single machine?

Concepts You Must Understand First

Stop and validate these concepts before you integrate everything:

  1. C1: Dotfiles & Configuration-as-Code
    • What is config vs state in your repo?
    • Which files are safe to version and which must be ignored?
    • Book Reference: The Linux Command Line Ch. 11
  2. C2: Shell Startup & Environment
    • Which file sets universal exports vs interactive-only settings?
    • How will GUI apps inherit your PATH and variables?
    • Book Reference: The Linux Command Line Ch. 11; Shell Programming in Unix, Linux and OS X Ch. 1–3
  3. C3: CLI Ergonomics
    • Which commands are worth aliasing vs functions?
    • How will you make your shortcuts discoverable?
    • Book Reference: Effective Shell Ch. 19, 22–23
  4. C4: Terminal UX & Prompt
    • What information belongs in your prompt (and what doesn’t)?
    • How will you keep the prompt fast across large repos?
    • Book Reference: Effective Shell Ch. 20–21
  5. C5: Tool Power Config
    • How will you scope Git identity per repo or machine?
    • Which SSH options reduce risk or improve speed?
    • Book Reference: Practical Vim Ch. 1; tmux 3 Ch. 1–3
  6. C6: Portability & XDG
    • Which tools are XDG-compliant and which need overrides?
    • How will you store secrets safely?
    • Book Reference: How Linux Works Ch. 2–3
  7. C7: Automation & Reproducibility
    • What makes your bootstrap script idempotent?
    • How will you validate a new machine in < 10 minutes?
    • Book Reference: Wicked Cool Shell Scripts Ch. 1–4

Questions to Guide Your Design

  • What is the canonical repo layout (by tool or by function), and why?
  • How will you separate public config from secrets and machine-specific overrides?
  • What is your “one command” install story, and what are the phases?
  • How will you validate correctness (doctor script, smoke tests, benchmarks)?
  • What are your portability targets (macOS, Linux, container) and where do they differ?

Thinking Exercise

The “New Laptop in 60 Minutes” Drill
Write a step-by-step plan that gets a brand-new machine from zero to fully usable. Include time estimates and which steps are automated vs manual. Then identify which steps should be automated but currently aren’t.

The Interview Questions They’ll Ask

  1. How do you keep secrets out of dotfiles while still automating setup?
  2. How do you handle per-machine overrides without forking your repo?
  3. What do you do when a dotfile change breaks a teammate’s setup?
  4. How do you measure and enforce shell startup performance?
  5. How do dev containers fit into a dotfiles strategy?

Hints in Layers

Hint 1: Start with the repo skeleton
Create directories for shell, git, ssh, nvim, tmux, terminal, scripts, and docs.

Hint 2: Add a bootstrap entrypoint
Write a bootstrap.sh that installs dependencies and applies symlinks.

Hint 3: Add a doctor script
Validate tool presence, versions, and config file locations.

#!/usr/bin/env bash
set -euo pipefail

# Report missing tools and configs; the || pattern keeps the script going past failures
command -v git >/dev/null || echo "git missing"
command -v zsh >/dev/null || echo "zsh missing"
test -f ~/.gitconfig || echo "~/.gitconfig missing"

Hint 4: Lock down secrets and overrides
Use .gitignore for local.* files and document how to create them.
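
For example, a few ignore rules along these lines (the patterns are illustrative):

# .gitignore sketch for a dotfiles repo (patterns are examples)
local.*
*.local
.env
secrets/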

Books That Will Help

| Topic | Book | Chapter |
|-------|------|---------|
| Shell environment | The Linux Command Line | Ch. 11 |
| Shell ergonomics | Effective Shell | Ch. 19, 22–23 |
| Editor workflows | Practical Vim | Ch. 1–3 |
| Automation | Wicked Cool Shell Scripts | Ch. 1–4 |
| System structure | How Linux Works | Ch. 2–3 |

Common Pitfalls & Debugging

Problem: “My install script broke on a fresh machine.”

  • Why: Missing OS detection or package manager checks.
  • Fix: Add OS detection and conditional install steps.
  • Quick test: Run bootstrap in a clean container.

Problem: “Secrets leaked into the repo.”

  • Why: No ignore rules or secret separation plan.
  • Fix: Add .gitignore rules and a secrets workflow (encrypted or external).
  • Quick test: git status should never show secret files.

Problem: “Configuration works in terminal but not in GUI apps.”

  • Why: GUI apps don’t source shell startup files.
  • Fix: Move critical exports to login-sourced files or OS-specific env hooks.
  • Quick test: Launch the app from GUI and check echo $PATH inside it.

Definition of Done

  • One command setup from scratch
  • Works on at least 2 OSes or container + host
  • Documentation covers every tool and shortcut
  • Secrets handled securely

Resources Summary

Essential Books

  • “The Linux Command Line, 2nd Edition” by William E. Shotts
  • “Effective Shell” by Dave Kerr
  • “Practical Vim, 2nd Edition” by Drew Neil
  • “tmux 3: Productive Mouse-Free Development” by Brian P. Hogan
  • “Wicked Cool Shell Scripts” by Dave Taylor

Online Resources

  • XDG Base Directory Spec
  • Bash Reference Manual (startup files)
  • Zsh Startup/Shutdown Files
  • GNU Stow manual
  • git-config documentation
  • OpenSSH ssh_config man page
  • Neovim user manual (config locations)
  • Starship configuration
  • fzf key bindings
  • Atuin docs (history sync)
  • Dev Containers specification
  • Hammerspoon docs
  • Karabiner-Elements docs

Happy customizing. Start simple, keep it documented, and evolve your dotfiles as your work evolves.