LEARN GO DEEP DIVE

Learn Go: From Zero to Go Master

Goal: Deeply understand the Go programming language—its philosophy of simplicity, its revolutionary concurrency model with goroutines and channels, its approach to composition over inheritance, and its entire ecosystem. You’ll learn not just how to write Go code, but why Go was designed the way it was, enabling you to build reliable, efficient, and maintainable systems at any scale.


Why Go Matters

In 2007, three legendary engineers at Google—Robert Griesemer, Rob Pike, and Ken Thompson (co-creator of Unix)—grew frustrated. They were waiting 45 minutes for massive C++ codebases to compile. They saw developers drowning in complexity: byzantine inheritance hierarchies, cryptic template metaprogramming, dependency hell. Meanwhile, the world was shifting to multi-core processors, but most languages made concurrent programming a nightmare.

Their solution was radical: Go, a language that deliberately removed features. No classes. No inheritance. No exceptions. No generics (until 2022). Instead, they focused on:

  • Simplicity: One way to do things, not ten
  • Fast compilation: Large programs compile in seconds
  • Built-in concurrency: Goroutines and channels as first-class citizens
  • Explicit over implicit: You see what’s happening

Real-World Impact

Go powers critical infrastructure you use daily:

┌─────────────────────────────────────────────────────────────────┐
│                    GO IN PRODUCTION                             │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Docker        - Container runtime (the one that started it)   │
│  Kubernetes    - Container orchestration (runs the cloud)      │
│  Terraform     - Infrastructure as Code                        │
│  Prometheus    - Monitoring and alerting                       │
│  etcd          - Distributed key-value store                   │
│  CockroachDB   - Distributed SQL database                      │
│  Hugo          - Static site generator                         │
│  Caddy         - Modern web server with auto-HTTPS             │
│  Vault         - Secrets management                            │
│  Consul        - Service mesh                                  │
│                                                                 │
│  Companies: Google, Uber, Dropbox, Twitch, Cloudflare,        │
│             Netflix, Meta, American Express, PayPal            │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Why Go Wins for Systems Programming

Traditional Approach:                   Go Approach:

┌──────────────────────┐               ┌──────────────────────┐
│  C/C++               │               │  Go                  │
│  - Manual memory     │               │  - GC (but fast)     │
│  - Segfaults         │               │  - Memory safe       │
│  - Complex builds    │               │  - Single binary     │
│  - Platform specific │               │  - Cross-compile     │
└──────────────────────┘               └──────────────────────┘
         vs.                                    vs.
┌──────────────────────┐               ┌──────────────────────┐
│  Python/Ruby/Node    │               │  Go                  │
│  - Slow              │               │  - Fast              │
│  - Runtime required  │               │  - No interpreter    │
│  - Dynamic typing    │               │  - Static typing     │
│  - Concurrency pain  │               │  - Goroutines easy   │
└──────────────────────┘               └──────────────────────┘

Go sits in a unique sweet spot: it’s nearly as fast as C, but nearly as easy to write as Python.


Core Concept Analysis

The Go Philosophy: Less is Exponentially More

Go’s design philosophy is captured in Rob Pike’s famous essay “Less Is Exponentially More.” Every feature has a cost—not just in implementation, but in cognitive load for every developer who reads the code. Go chose radical simplicity:

┌─────────────────────────────────────────────────────────────────┐
│              FEATURES GO DELIBERATELY OMITS                     │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ✗ Classes           → Use structs + methods                   │
│  ✗ Inheritance       → Use composition + interfaces            │
│  ✗ Exceptions        → Use explicit error returns              │
│  ✗ Generics*         → Use interfaces (* added in Go 1.18)     │
│  ✗ Operator overload → Use explicit method calls               │
│  ✗ Function overload → Use different names or variadic         │
│  ✗ Default arguments → Use variadic or option structs          │
│  ✗ Pointer arithmetic→ Use slices                              │
│  ✗ Implicit type     → Use explicit conversions                │
│    conversions                                                  │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

This isn’t laziness—it’s discipline. The result: any Go developer can read any Go codebase quickly.
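The first two substitutions in the box above (structs + methods, composition + interfaces) look like this in practice. A minimal sketch; Logger and Server are illustrative types, not a standard API:

```go
package main

import "fmt"

// Logger provides reusable logging behavior.
type Logger struct {
	Prefix string
}

// Log formats a message with the logger's prefix.
func (l Logger) Log(msg string) string {
	return l.Prefix + ": " + msg
}

// Server gains Log as a promoted method by embedding Logger:
// composition, not inheritance.
type Server struct {
	Logger
	Addr string
}

func main() {
	s := Server{Logger: Logger{Prefix: "srv"}, Addr: ":8080"}
	fmt.Println(s.Log("started")) // srv: started
}
```

Embedding reuses behavior without creating a type hierarchy: Server is not a "kind of" Logger, it merely has one.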


Goroutines: Lightweight Concurrency

The most revolutionary aspect of Go is its concurrency model. A goroutine is not a thread—it’s much lighter:

┌─────────────────────────────────────────────────────────────────┐
│                  THREAD vs GOROUTINE                            │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  OS Thread:                    Goroutine:                       │
│  ┌──────────────┐              ┌──────────────┐                │
│  │              │              │              │                │
│  │  ~1-8 MB     │              │   ~2-8 KB    │                │
│  │   stack      │              │    stack     │                │
│  │              │              │  (growable)  │                │
│  └──────────────┘              └──────────────┘                │
│                                                                 │
│  Creation: ~1ms                Creation: ~1µs                   │
│  Context switch: expensive     Context switch: cheap            │
│  Managed by: OS                Managed by: Go runtime           │
│  Practical limit: ~10,000      Practical limit: ~1,000,000+     │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
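To make those numbers concrete, the sketch below launches 100,000 goroutines and waits for all of them with sync.WaitGroup; the same count of OS threads would exhaust memory on most machines:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// spawn launches n goroutines, each doing a trivial amount of work,
// and blocks until every one has finished.
func spawn(n int) int64 {
	var wg sync.WaitGroup
	var count int64
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&count, 1) // atomic: many goroutines update it
		}()
	}
	wg.Wait()
	return count
}

func main() {
	fmt.Println(spawn(100000)) // 100000
}
```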

The Go Scheduler (GMP Model)

Go’s runtime manages goroutines using the GMP model:

                    ┌─────────────────────────────────────────┐
                    │           GO SCHEDULER (GMP)            │
                    └─────────────────────────────────────────┘

    G = Goroutine (your concurrent function)
    M = Machine (OS thread)
    P = Processor (logical processor, typically = GOMAXPROCS)

    ┌─────────────────────────────────────────────────────────────┐
    │                                                             │
    │   ┌───┐ ┌───┐ ┌───┐ ┌───┐ ┌───┐ ┌───┐  ... millions of G  │
    │   │ G │ │ G │ │ G │ │ G │ │ G │ │ G │                      │
    │   └─┬─┘ └─┬─┘ └─┬─┘ └─┬─┘ └─┬─┘ └─┬─┘                      │
    │     │     │     │     │     │     │                         │
    │     └──┬──┴──┬──┘     └──┬──┴──┬──┘                         │
    │        │     │           │     │                            │
    │     ┌──▼──┐ ┌▼───┐   ┌───▼─┐ ┌─▼──┐   Local Run Queues     │
    │     │Queue│ │Queue│   │Queue│ │Queue│                       │
    │     └──┬──┘ └──┬─┘   └──┬──┘ └──┬─┘                         │
    │        │       │        │       │                           │
    │     ┌──▼──┐ ┌──▼──┐ ┌───▼──┐ ┌──▼──┐   P (Processors)      │
    │     │ P0  │ │ P1  │ │  P2  │ │ P3  │   GOMAXPROCS = 4      │
    │     └──┬──┘ └──┬──┘ └───┬──┘ └──┬──┘                        │
    │        │       │        │       │                           │
    │     ┌──▼──┐ ┌──▼──┐ ┌───▼──┐ ┌──▼──┐   M (OS Threads)      │
    │     │ M0  │ │ M1  │ │  M2  │ │ M3  │                        │
    │     └──┬──┘ └──┬──┘ └───┬──┘ └──┬──┘                        │
    │        │       │        │       │                           │
    │     ┌──▼───────▼────────▼───────▼──┐                        │
    │     │         Operating System      │                       │
    │     │         (CPU Cores)           │                       │
    │     └──────────────────────────────┘                        │
    │                                                             │
    └─────────────────────────────────────────────────────────────┘

Key insight: The Go scheduler does work stealing—if one P’s queue is empty, it steals goroutines from another P’s queue. This keeps all cores busy.


Channels: Communication Between Goroutines

Go’s mantra: “Don’t communicate by sharing memory; share memory by communicating.”

┌─────────────────────────────────────────────────────────────────┐
│                    CHANNEL OPERATIONS                           │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Unbuffered Channel (synchronous):                             │
│                                                                 │
│  Goroutine A          Channel           Goroutine B            │
│      │                  │                    │                  │
│      │    ch <- val     │                    │                  │
│      │ ────────────────>│                    │                  │
│      │   (A blocks)     │    val := <-ch     │                  │
│      │                  │<────────────────── │                  │
│      │   (A unblocks)   │   (B receives)     │                  │
│      │                  │                    │                  │
│                                                                 │
│  Buffered Channel (capacity = 3):                              │
│                                                                 │
│      ┌─────────────────────────┐                               │
│      │ [ val1 | val2 |      ] │  ← can hold 3 values          │
│      └─────────────────────────┘                               │
│        ↑                    ↑                                   │
│      write                read                                  │
│   (blocks when full)  (blocks when empty)                      │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
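Both flavors in the diagram can be demonstrated in a few lines; firstFromUnbuffered and drainBuffered are illustrative helper names:

```go
package main

import "fmt"

// firstFromUnbuffered shows the synchronous handoff: the send inside
// the goroutine blocks until the receive below is ready.
func firstFromUnbuffered() string {
	ch := make(chan string)
	go func() { ch <- "hello" }()
	return <-ch
}

// drainBuffered shows that sends into spare buffer capacity don't block.
func drainBuffered() []int {
	ch := make(chan int, 3)
	ch <- 1
	ch <- 2
	ch <- 3
	close(ch) // closing lets range drain remaining values, then stop
	var out []int
	for v := range ch {
		out = append(out, v)
	}
	return out
}

func main() {
	fmt.Println(firstFromUnbuffered()) // hello
	fmt.Println(drainBuffered())       // [1 2 3]
}
```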

Channel Patterns

┌─────────────────────────────────────────────────────────────────┐
│                 COMMON CHANNEL PATTERNS                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  1. Fan-Out (one producer, many consumers):                    │
│                                                                 │
│              ┌──────> Worker 1 ──────┐                         │
│     Producer ├──────> Worker 2 ──────┼──> Results              │
│              └──────> Worker 3 ──────┘                         │
│                                                                 │
│  2. Fan-In (many producers, one consumer):                     │
│                                                                 │
│     Producer 1 ──────┐                                          │
│     Producer 2 ──────┼──> Consumer                             │
│     Producer 3 ──────┘                                          │
│                                                                 │
│  3. Pipeline (stages of processing):                           │
│                                                                 │
│     Input ──> Stage1 ──> Stage2 ──> Stage3 ──> Output          │
│               (chan)     (chan)     (chan)                     │
│                                                                 │
│  4. Worker Pool:                                                │
│                                                                 │
│              ┌─────────────────────┐                           │
│     Jobs ───>│  ┌───┐ ┌───┐ ┌───┐ │───> Results               │
│      ch      │  │W1 │ │W2 │ │W3 │ │     ch                    │
│              │  └───┘ └───┘ └───┘ │                            │
│              └─────────────────────┘                           │
│                   (all read from same job channel)             │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
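Pattern 4 can be sketched as follows; workerPool and its signature are illustrative, not a standard library API. All workers range over the same jobs channel, and results fan back in on one channel:

```go
package main

import (
	"fmt"
	"sync"
)

// workerPool fans jobs out to nWorkers goroutines and fans the
// results back in. Result order is nondeterministic.
func workerPool(nWorkers int, jobs []int, work func(int) int) []int {
	jobsCh := make(chan int)
	results := make(chan int)
	var wg sync.WaitGroup

	for w := 0; w < nWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobsCh { // each worker pulls from the shared channel
				results <- work(j)
			}
		}()
	}

	go func() {
		for _, j := range jobs {
			jobsCh <- j
		}
		close(jobsCh) // no more jobs: workers' range loops end
	}()

	go func() {
		wg.Wait()
		close(results) // all workers done: close the fan-in channel
	}()

	var out []int
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	squares := workerPool(3, []int{1, 2, 3, 4, 5}, func(n int) int { return n * n })
	fmt.Println(len(squares)) // 5 (order of values is nondeterministic)
}
```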

Interfaces: Implicit Satisfaction

In Go, interfaces are satisfied implicitly. If your type has the right methods, it implements the interface—no implements keyword needed:

┌─────────────────────────────────────────────────────────────────┐
│              INTERFACE SATISFACTION IN GO                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Java/C# way:                         Go way:                   │
│  ─────────────                        ────────                  │
│  class Dog implements Animal {        type Dog struct{}         │
│      void Speak() { ... }             func (d Dog) Speak() {}   │
│  }                                                              │
│                                       // Dog implicitly         │
│  // Must declare "implements"         // implements Animal!     │
│                                                                 │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Interface Definition:                                          │
│  ┌────────────────────────┐                                    │
│  │  type Animal interface │                                    │
│  │  {                     │                                    │
│  │    Speak() string      │  ◄── Any type with Speak() string │
│  │  }                     │      method IS an Animal           │
│  └────────────────────────┘                                    │
│                                                                 │
│  Satisfaction (no declaration needed):                         │
│                                                                 │
│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐          │
│  │    Dog      │   │    Cat      │   │   Robot     │          │
│  ├─────────────┤   ├─────────────┤   ├─────────────┤          │
│  │ Speak()     │   │ Speak()     │   │ Speak()     │          │
│  │ string      │   │ string      │   │ string      │          │
│  └──────┬──────┘   └──────┬──────┘   └──────┬──────┘          │
│         │                 │                 │                   │
│         └────────────┬────┴─────────────────┘                   │
│                      │                                          │
│                      ▼                                          │
│              All satisfy Animal!                                │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

This enables retroactive interface implementation: you can define an interface that existing types (even from other packages) already satisfy.
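A minimal sketch of the Dog/Robot example from the diagram; note there is no declaration anywhere that Dog or Robot implements Animal:

```go
package main

import "fmt"

// Animal is satisfied by any type with a Speak() string method.
type Animal interface {
	Speak() string
}

type Dog struct{}

func (Dog) Speak() string { return "woof" }

type Robot struct{}

func (Robot) Speak() string { return "beep" }

// chorus works on anything that satisfies Animal.
func chorus(animals []Animal) []string {
	var out []string
	for _, a := range animals {
		out = append(out, a.Speak())
	}
	return out
}

func main() {
	fmt.Println(chorus([]Animal{Dog{}, Robot{}})) // [woof beep]
}
```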


Error Handling: Explicit is Better

Go eschews exceptions for explicit error returns:

┌─────────────────────────────────────────────────────────────────┐
│              GO ERROR HANDLING PHILOSOPHY                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Exception-based (hidden control flow):                        │
│  ───────────────────────────────────────                       │
│       try {                                                     │
│         result = doSomething();     // might throw             │
│         processResult(result);      // might throw             │
│         saveResult(result);         // might throw             │
│       } catch (Exception e) {                                   │
│         // Which one failed? Who knows!                        │
│       }                                                         │
│                                                                 │
│  Go style (explicit control flow):                             │
│  ─────────────────────────────────                             │
│       result, err := doSomething()                             │
│       if err != nil {                                          │
│           return fmt.Errorf("doing something: %w", err)        │
│       }                                                         │
│       if err := processResult(result); err != nil {            │
│           return fmt.Errorf("processing: %w", err)             │
│       }                                                         │
│       if err := saveResult(result); err != nil {               │
│           return fmt.Errorf("saving: %w", err)                 │
│       }                                                         │
│       // You KNOW what succeeded and what failed               │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Error Wrapping Chain

┌─────────────────────────────────────────────────────────────────┐
│                    ERROR WRAPPING                               │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Call Stack:              Wrapped Error:                        │
│                                                                 │
│  main()                   "loading config: opening file:       │
│    │                       config.json: no such file"          │
│    ▼                                │                           │
│  LoadConfig()                       │                           │
│    │                       ┌────────┴────────┐                  │
│    │  fmt.Errorf(          │ "loading config: %w"               │
│    │    "loading config:   │        │                           │
│    │     %w", err)         │        ▼                           │
│    ▼                       │ "opening file: %w"                 │
│  OpenFile()                │        │                           │
│    │                       │        ▼                           │
│    │  fmt.Errorf(          │ "config.json: no such file"       │
│    │    "opening file:     │   (original error)                │
│    │     %w", err)         └────────────────────┘               │
│    ▼                                                            │
│  os.Open()                 Use errors.Unwrap() to traverse      │
│    │                       Use errors.Is() to check type        │
│    │  returns              Use errors.As() to extract           │
│    ▼                                                            │
│  *PathError                                                     │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Memory Layout: Slices, Maps, and Structs

Understanding how Go organizes memory is crucial for writing efficient code:

┌─────────────────────────────────────────────────────────────────┐
│                      SLICE INTERNALS                            │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  slice := append(make([]int, 0, 8), 1, 2, 3, 4, 5)              │
│                                                                 │
│  Slice Header (24 bytes on 64-bit):                            │
│  ┌─────────────────────────────────┐                           │
│  │  ptr  │  len  │  cap            │                           │
│  │  8B   │  8B   │  8B             │                           │
│  └───┬───┴───────┴─────────────────┘                           │
│      │                                                          │
│      ▼  Backing Array (heap):                                  │
│  ┌───┬───┬───┬───┬───┬───┬───┬───┐                            │
│  │ 1 │ 2 │ 3 │ 4 │ 5 │   │   │   │  capacity = 8             │
│  └───┴───┴───┴───┴───┴───┴───┴───┘                            │
│    0   1   2   3   4   5   6   7                               │
│                    ▲                                            │
│                    └── len = 5                                  │
│                                                                 │
│  subslice := slice[1:3]                                        │
│  ┌─────────────────────────────────┐                           │
│  │  ptr  │  len=2│  cap=7          │  ← SHARES backing array!  │
│  └───┬───┴───────┴─────────────────┘                           │
│      │                                                          │
│      ▼  (points to index 1)                                    │
│  ┌───┬───┬───┬───┬───┬───┬───┬───┐                            │
│  │ 1 │ 2 │ 3 │ 4 │ 5 │   │   │   │                            │
│  └───┴───┴───┴───┴───┴───┴───┴───┘                            │
│        ▲                                                        │
│        └── subslice starts here                                │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
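The sharing can be observed directly. In this sketch the slice is built with spare capacity (cap 8, as in the diagram); shareDemo is an illustrative helper:

```go
package main

import "fmt"

// shareDemo shows that a subslice shares its parent's backing array:
// it returns the subslice's len and cap, plus the value visible
// through the original slice after writing through the subslice.
func shareDemo() (length, capacity, seen int) {
	slice := append(make([]int, 0, 8), 1, 2, 3, 4, 5) // len 5, cap 8
	subslice := slice[1:3]                            // shares the array
	subslice[0] = 99                                  // writes through it
	return len(subslice), cap(subslice), slice[1]
}

func main() {
	l, c, v := shareDemo()
	fmt.Println(l, c, v) // 2 7 99
}
```

The cap of 7 is everything from index 1 to the end of the backing array, which is why appending to a subslice can silently overwrite the parent's elements.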

Map Internals

┌─────────────────────────────────────────────────────────────────┐
│                       MAP INTERNALS                             │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  m := map[string]int{"a": 1, "b": 2}                           │
│                                                                 │
│  Map Header (hmap):                                            │
│  ┌───────────────────────────────────────┐                     │
│  │ count │ flags │ B │ noverflow │ hash0 │                     │
│  │ buckets* │ oldbuckets* │ ...          │                     │
│  └─────────────────┬─────────────────────┘                     │
│                    │                                            │
│                    ▼  Bucket Array                             │
│  ┌─────────┬─────────┬─────────┬─────────┐                     │
│  │ bucket0 │ bucket1 │ bucket2 │ bucket3 │  (2^B buckets)     │
│  └────┬────┴─────────┴─────────┴─────────┘                     │
│       │                                                         │
│       ▼  Each Bucket (bmap):                                   │
│  ┌───────────────────────────────────────────┐                 │
│  │ tophash[8] │ keys[8] │ values[8] │ overflow*│              │
│  └───────────────────────────────────────────┘                 │
│                                                                 │
│  Hash Distribution:                                            │
│  hash("a") = 0x3f2a1b... → bucket = hash & (2^B - 1)          │
│                           tophash = top 8 bits of hash         │
│                                                                 │
│  ⚠️  Maps are NOT safe for concurrent access!                  │
│      Use sync.Map or sync.RWMutex for concurrent maps          │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

The Go Module System

Go modules (introduced in Go 1.11, default since 1.16) solve dependency management:

┌─────────────────────────────────────────────────────────────────┐
│                    GO MODULE STRUCTURE                          │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  myproject/                                                     │
│  ├── go.mod              ← Module definition                   │
│  ├── go.sum              ← Cryptographic checksums             │
│  ├── main.go                                                   │
│  ├── internal/           ← Private packages (can't import)    │
│  │   └── database/                                             │
│  │       └── db.go                                             │
│  ├── pkg/                ← Public packages                     │
│  │   └── api/                                                  │
│  │       └── api.go                                            │
│  └── cmd/                ← Executable commands                 │
│      ├── server/                                               │
│      │   └── main.go                                           │
│      └── cli/                                                  │
│          └── main.go                                           │
│                                                                 │
│  go.mod:                                                       │
│  ┌────────────────────────────────────────┐                    │
│  │ module github.com/user/myproject       │                    │
│  │                                        │                    │
│  │ go 1.21                                │                    │
│  │                                        │                    │
│  │ require (                              │                    │
│  │     github.com/gin-gonic/gin v1.9.1   │                    │
│  │     github.com/lib/pq v1.10.9         │                    │
│  │ )                                      │                    │
│  └────────────────────────────────────────┘                    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

The Go Toolchain

Go ships with a powerful set of built-in tools:

┌─────────────────────────────────────────────────────────────────┐
│                     GO TOOLCHAIN                                │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Building & Running:                                           │
│  ┌──────────────────────────────────────────────────────────┐  │
│  │ go build    - Compile packages and dependencies          │  │
│  │ go run      - Compile and run Go program                 │  │
│  │ go install  - Compile and install packages               │  │
│  └──────────────────────────────────────────────────────────┘  │
│                                                                 │
│  Code Quality:                                                 │
│  ┌──────────────────────────────────────────────────────────┐  │
│  │ go fmt      - Format source code (gofmt)                 │  │
│  │ go vet      - Report likely mistakes in packages         │  │
│  │ go test     - Run tests and benchmarks                   │  │
│  │ go test -race  - Run with race detector                  │  │
│  └──────────────────────────────────────────────────────────┘  │
│                                                                 │
│  Dependencies:                                                 │
│  ┌──────────────────────────────────────────────────────────┐  │
│  │ go mod init   - Initialize new module                    │  │
│  │ go mod tidy   - Add missing / remove unused modules      │  │
│  │ go get        - Add dependencies                         │  │
│  │ go mod vendor - Create vendor directory                  │  │
│  └──────────────────────────────────────────────────────────┘  │
│                                                                 │
│  Analysis & Profiling:                                         │
│  ┌──────────────────────────────────────────────────────────┐  │
│  │ go tool pprof  - Profile CPU, memory, goroutines         │  │
│  │ go tool trace  - Trace program execution                 │  │
│  │ go tool cover  - Code coverage analysis                  │  │
│  └──────────────────────────────────────────────────────────┘  │
│                                                                 │
│  Documentation:                                                │
│  ┌──────────────────────────────────────────────────────────┐  │
│  │ go doc        - Show documentation for package/symbol    │  │
│  │ godoc         - Start documentation server               │  │
│  └──────────────────────────────────────────────────────────┘  │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Cross-Compilation: One Binary Everywhere

One of Go’s killer features is trivial cross-compilation:

┌─────────────────────────────────────────────────────────────────┐
│                   CROSS-COMPILATION                             │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  # Compile for different OS/Arch from ANY machine:             │
│                                                                 │
│  GOOS=linux   GOARCH=amd64 go build -o app-linux-amd64        │
│  GOOS=linux   GOARCH=arm64 go build -o app-linux-arm64        │
│  GOOS=darwin  GOARCH=amd64 go build -o app-macos-amd64        │
│  GOOS=darwin  GOARCH=arm64 go build -o app-macos-arm64        │
│  GOOS=windows GOARCH=amd64 go build -o app-windows.exe        │
│                                                                 │
│  Supported Targets:                                            │
│  ┌─────────────┬────────────────────────────────────────────┐  │
│  │ GOOS        │ linux, darwin, windows, freebsd, netbsd,  │  │
│  │             │ openbsd, dragonfly, solaris, android, ios │  │
│  ├─────────────┼────────────────────────────────────────────┤  │
│  │ GOARCH      │ amd64, arm64, 386, arm, mips, mips64,     │  │
│  │             │ ppc64, ppc64le, riscv64, s390x, wasm      │  │
│  └─────────────┴────────────────────────────────────────────┘  │
│                                                                 │
│  Result: Single static binary, no runtime dependencies!        │
│  → Deploy by copying one file                                  │
│  → Docker images can be FROM scratch (literally)              │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Concept Summary Table

Concept Cluster    What You Need to Internalize
───────────────    ────────────────────────────
Philosophy         Go removes features on purpose. Simplicity is a feature. One way to do things, readable by all.
Goroutines         Lightweight (~2 KB stacks), multiplexed onto OS threads by the runtime. Create millions; creation costs ~1µs.
Channels           Typed conduits for communication. Unbuffered = synchronous, buffered = asynchronous. Ownership matters.
Interfaces         Implicit satisfaction. Keep interfaces small. Composition over inheritance.
Error Handling     Explicit returns, not exceptions. Wrap errors with context. Always check err != nil.
Memory             Slices are headers over a backing array. Maps are hash tables. Values copy; pointers share.
Modules            go.mod defines the module. Semantic versioning. Checksum verification in go.sum.
Toolchain          go build, go test, go fmt, go vet, pprof. Built-in race detector.
Packages           Lowercase identifiers are unexported; Uppercase are exported. internal/ packages are importable only within the module.
Context            Carries cancellation, deadlines, and values across API boundaries. Pass it as the first parameter.

Deep Dive Reading by Concept

This section maps each concept to specific book chapters for deeper understanding. Read these before or alongside the projects to build strong mental models.

Go Fundamentals

  • Go philosophy & design: “Learning Go” by Jon Bodner — Ch. 1: “Setting Up Your Go Environment”
  • Types, variables, constants: “Learning Go” by Jon Bodner — Ch. 2: “Primitive Types and Declarations”
  • Composite types (arrays, slices, maps): “Learning Go” by Jon Bodner — Ch. 3: “Composite Types”
  • Control flow: “The Go Programming Language” by Donovan & Kernighan — Ch. 1-2
  • Functions: “Learning Go” by Jon Bodner — Ch. 5: “Functions”

Concurrency

  • Goroutines basics: “Learning Go” by Jon Bodner — Ch. 12: “Concurrency in Go”
  • Channels: “Concurrency in Go” by Katherine Cox-Buday — Ch. 3: “Go’s Concurrency Building Blocks”
  • Patterns (fan-in/out, pipelines): “Concurrency in Go” by Katherine Cox-Buday — Ch. 4: “Concurrency Patterns in Go”
  • Context: “Learning Go” by Jon Bodner — Ch. 14: “The Context”
  • sync package: “Concurrency in Go” by Katherine Cox-Buday — Ch. 3, section “The sync Package”

Interfaces & Type System

  • Interface mechanics: “Learning Go” by Jon Bodner — Ch. 7: “Types, Methods, and Interfaces”
  • Interface design: “100 Go Mistakes” by Teiva Harsanyi — Ch. 2
  • Generics: “Learning Go” by Jon Bodner — Ch. 8: “Generics”

Error Handling

  • Error basics: “Learning Go” by Jon Bodner — Ch. 9: “Errors”
  • Error wrapping: “100 Go Mistakes” by Teiva Harsanyi — Ch. 7: “Error Management”

Testing

  • Testing fundamentals: “Learning Go” by Jon Bodner — Ch. 15: “Writing Tests”
  • Table-driven tests: “The Go Programming Language” by Donovan & Kernighan — Ch. 11
  • Benchmarking: “100 Go Mistakes” by Teiva Harsanyi — Ch. 11: “Testing”

Essential Reading Order

For maximum comprehension, read in this order:

  1. Foundation (Week 1-2):
    • “Learning Go” Ch. 1-6 (basics through functions)
    • “The Go Programming Language” Ch. 1-3 (alternate perspective)
  2. Intermediate (Week 3-4):
    • “Learning Go” Ch. 7-11 (types, generics, modules)
    • “100 Go Mistakes” Ch. 1-4 (common pitfalls)
  3. Concurrency (Week 5-6):
    • “Learning Go” Ch. 12-14 (concurrency & context)
    • “Concurrency in Go” Full book
  4. Mastery (Week 7+):
    • “100 Go Mistakes” Remaining chapters
    • “The Go Programming Language” Ch. 9-13 (advanced)

Project List

The following 15 projects will take you from Go beginner to Go master. They are ordered by difficulty and concept progression—each builds on knowledge from previous projects.


Project 1: CLI Task Manager with Persistence

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: Rust, Python, TypeScript (Deno)
  • Coolness Level: Level 2: Practical but Forgettable
  • Business Potential: 2. The “Micro-SaaS / Pro Tool”
  • Difficulty: Level 1: Beginner
  • Knowledge Area: CLI Development, File I/O, JSON Serialization
  • Software or Tool: Cobra, Viper (optional)
  • Main Book: “Learning Go” by Jon Bodner

What you’ll build: A command-line task manager that stores tasks in JSON, supports adding, listing, completing, and deleting tasks with priorities and due dates.

Why it teaches Go: This is the perfect first project because it covers all Go fundamentals: structs, slices, maps, file I/O, JSON encoding/decoding, error handling, and package organization—without any concurrency complexity.

Core challenges you’ll face:

  • Structuring CLI arguments → maps to understanding os.Args and flag package
  • Serializing/deserializing JSON → maps to struct tags and encoding/json
  • File operations with error handling → maps to os package and explicit error checking
  • Organizing code into packages → maps to Go project layout conventions

Key Concepts:

  • Structs and methods: “Learning Go” Ch. 7 - Jon Bodner
  • JSON encoding: “The Go Programming Language” Ch. 4.5 - Donovan & Kernighan
  • File I/O: “Learning Go” Ch. 13 - Jon Bodner
  • CLI design: “The Power of Go: Tools” by John Arundel

Difficulty: Beginner. Time estimate: Weekend. Prerequisites: Basic programming concepts (variables, functions, loops). No prior Go experience needed. Complete the Go Tour (tour.golang.org) first.


Real World Outcome

You’ll have a fully functional CLI task manager. When you run it:

$ task add "Learn Go concurrency" --priority high --due 2025-01-15
✓ Added task #1: "Learn Go concurrency" (high priority, due Jan 15)

$ task add "Read Learning Go book" --priority medium
✓ Added task #2: "Read Learning Go book" (medium priority)

$ task list
┌────┬─────────────────────────┬──────────┬─────────────┬──────────┐
│ ID │ Task                    │ Priority │ Due         │ Status   │
├────┼─────────────────────────┼──────────┼─────────────┼──────────┤
│ 1  │ Learn Go concurrency    │ HIGH     │ Jan 15 2025 │ pending  │
│ 2  │ Read Learning Go book   │ MEDIUM   │ -           │ pending  │
└────┴─────────────────────────┴──────────┴─────────────┴──────────┘

$ task complete 1
✓ Marked task #1 as complete

$ task list --status completed
┌────┬─────────────────────────┬──────────┬─────────────┬──────────┐
│ ID │ Task                    │ Priority │ Due         │ Status   │
├────┼─────────────────────────┼──────────┼─────────────┼──────────┤
│ 1  │ Learn Go concurrency    │ HIGH     │ Jan 15 2025 │ complete │
└────┴─────────────────────────┴──────────┴─────────────┴──────────┘

$ cat ~/.tasks.json
{
  "tasks": [
    {
      "id": 1,
      "title": "Learn Go concurrency",
      "priority": "high",
      "due": "2025-01-15T00:00:00Z",
      "status": "complete",
      "created": "2025-01-10T14:30:00Z"
    },
    ...
  ]
}

Tasks persist across sessions—you’ve built real, useful software!


The Core Question You’re Answering

“How do I organize a Go program, handle user input, and persist data?”

Before you write any code, understand this: Go programs are structured around packages, not classes. There’s no main class—there’s a main package with a main function. Your data lives in structs, your operations are functions or methods on those structs.


Concepts You Must Understand First

Stop and research these before coding:

  1. Go Program Structure
    • What is a package? What makes main special?
    • How does Go find and import packages?
    • What does go mod init do?
    • Book Reference: “Learning Go” Ch. 1 - Jon Bodner
  2. Structs and JSON Tags
    • How do you define a struct in Go?
    • What are struct tags and why does json:"fieldname" work?
    • How does json.Marshal know which fields to include?
    • Book Reference: “Learning Go” Ch. 7 - Jon Bodner
  3. Error Handling Basics
    • Why does Go return (result, error) from functions?
    • What’s the if err != nil pattern?
    • How do you create your own errors?
    • Book Reference: “Learning Go” Ch. 9 - Jon Bodner

Questions to Guide Your Design

Before implementing, think through these:

  1. Data Model
    • What fields does a Task need? (id, title, status, priority, due date, created date?)
    • How will you generate unique IDs?
    • How will you represent priority—string, int, or custom type?
  2. Storage
    • Where should the JSON file live? (~/.tasks.json? Current directory?)
    • What happens if the file doesn’t exist yet?
    • Should you load all tasks into memory or read on demand?
  3. CLI Interface
    • What commands do you need? (add, list, complete, delete?)
    • What flags/options should each command support?
    • How will you parse command-line arguments?

Thinking Exercise

Trace the Data Flow

Before coding, draw out what happens when the user runs task add "Buy groceries":

User runs: task add "Buy groceries"
    │
    ▼
os.Args = ["task", "add", "Buy groceries"]
    │
    ▼
Parse command: "add"
Parse title: "Buy groceries"
    │
    ▼
Load existing tasks from ~/.tasks.json
    │
    ├── File exists? → json.Unmarshal into []Task
    │
    └── File doesn't exist? → Start with empty []Task
    │
    ▼
Create new Task{ID: nextID, Title: "...", ...}
Append to []Task
    │
    ▼
json.Marshal []Task → []byte
os.WriteFile(path, data, 0644)
    │
    ▼
Print success message

Questions while tracing:

  • What happens if json.Unmarshal fails because the file is corrupted?
  • What happens if you can’t write to the file (permissions)?
  • How do you generate nextID reliably?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “How do Go’s struct tags work, and what are they used for?”
  2. “What’s the difference between json.Marshal and json.Encoder?”
  3. “How would you handle the case where the JSON file is corrupted?”
  4. “What’s the difference between a slice and an array in Go?”
  5. “How do you organize a Go project into multiple packages?”
  6. “What happens if you try to access a nil slice vs an empty slice?”
  7. “How would you add unit tests to this project?”

Hints in Layers

Hint 1: Start Simple. Ignore CLI parsing libraries at first. Just use os.Args directly. Your main function should check os.Args[1] for the command.

Hint 2: Define Your Data. Create a Task struct with the fields you need. Use struct tags like json:"title" to control JSON field names. Create a TaskStore struct that holds a slice of tasks and knows the file path.

Hint 3: File Operations Pattern. The pattern is: ReadFile → Unmarshal → Modify → Marshal → WriteFile. Handle the “file doesn’t exist” case with os.IsNotExist(err).

Hint 4: Test Your JSON. Use go run main.go frequently. Print intermediate values with fmt.Printf("%+v\n", tasks). Check your JSON file manually with cat or jq.


Books That Will Help

  • Go basics: “Learning Go” by Jon Bodner, Ch. 1-6
  • Structs and JSON: “The Go Programming Language” by Donovan & Kernighan, Ch. 4
  • File operations: “Learning Go” by Jon Bodner, Ch. 13
  • CLI patterns: “The Power of Go: Tools” by John Arundel, Part 1

Learning milestones:

  1. Tasks save and load correctly → You understand JSON serialization and file I/O
  2. All commands work → You understand program structure and control flow
  3. Errors are handled gracefully → You’ve internalized Go’s error handling pattern

Project 2: Custom JSON Parser

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: C, Rust, Zig
  • Coolness Level: Level 4: Hardcore Tech Flex
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Parsing, Lexical Analysis, Recursive Descent
  • Software or Tool: None (from scratch)
  • Main Book: “Writing An Interpreter In Go” by Thorsten Ball

What you’ll build: A JSON parser from scratch that tokenizes and parses JSON into Go data structures, without using encoding/json.

Why it teaches Go: Parsing is a fundamental CS skill, and JSON is simple enough to complete in a weekend. You’ll master strings, runes, state machines, recursion, interfaces, and type switches—all core Go concepts.

Core challenges you’ll face:

  • Tokenizing (lexing) JSON → maps to string processing, runes, and state machines
  • Recursive descent parsing → maps to function recursion and Go’s call stack
  • Representing dynamic types → maps to interfaces and type assertions
  • Handling edge cases → maps to error handling and Unicode support

Key Concepts:

  • Strings vs runes: “The Go Programming Language” Ch. 3.5 - Donovan & Kernighan
  • Interfaces and type switches: “Learning Go” Ch. 7 - Jon Bodner
  • Recursive descent: “Writing An Interpreter In Go” Ch. 2 - Thorsten Ball
  • State machines: “Crafting Interpreters” Ch. 4 - Bob Nystrom (free online)

Difficulty: Intermediate. Time estimate: 1-2 weeks. Prerequisites: Completed Project 1. Understand structs, slices, maps, and error handling. Familiarity with recursion.


Real World Outcome

You’ll have a JSON parser that works like this:

$ echo '{"name": "Go", "year": 2009, "features": ["fast", "simple"]}' | ./jsonparser

Parsed successfully!
Type: Object
{
  "name" (String): "Go"
  "year" (Number): 2009
  "features" (Array): [
    (String): "fast"
    (String): "simple"
  ]
}

$ echo '{"broken": }' | ./jsonparser
Parse error at position 11: unexpected token '}', expected value

$ ./jsonparser --tokens '{"x": 1}'
Tokens:
  LEFT_BRACE     {
  STRING         "x"
  COLON          :
  NUMBER         1
  RIGHT_BRACE    }
  EOF

You can also use it as a library:

result, err := jsonparser.Parse(`{"key": [1, 2, 3]}`)
if err != nil {
    log.Fatal(err)
}
obj := result.(map[string]interface{})
arr := obj["key"].([]interface{})
fmt.Println(arr[0]) // 1

The Core Question You’re Answering

“How do computers read and understand structured text?”

Before you write any code, understand that parsing has two phases: lexing (turning characters into tokens) and parsing (turning tokens into structure). JSON is beautiful for learning because its grammar is simple enough to fit on a napkin.


Concepts You Must Understand First

Stop and research these before coding:

  1. Lexical Analysis (Tokenization)
    • What is a token? What tokens does JSON have?
    • How do you handle multi-character tokens like strings and numbers?
    • What’s the difference between a character and a rune in Go?
    • Book Reference: “Writing An Interpreter In Go” Ch. 1 - Thorsten Ball
  2. Recursive Descent Parsing
    • What does “recursive descent” mean?
    • How does the parser’s structure mirror the grammar?
    • How do you handle errors mid-parse?
    • Book Reference: “Crafting Interpreters” Ch. 6 - Bob Nystrom
  3. Go Interfaces and Type Assertions
    • What is interface{}/any?
    • How do type assertions work? What’s a type switch?
    • When would you use value.(type) vs switch v := value.(type)?
    • Book Reference: “Learning Go” Ch. 7 - Jon Bodner
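
Concept 3 in a few lines: a type switch inspects a value held in an interface{} (aliased as any), which is exactly what a parser returning dynamic JSON values relies on. This is a sketch, not a required structure:

```go
package main

import "fmt"

// describe inspects a dynamically typed value the way a JSON parser's
// caller would: a type switch over the handful of types JSON can produce.
func describe(v any) string {
	switch val := v.(type) {
	case nil:
		return "Null"
	case bool:
		return fmt.Sprintf("Bool(%v)", val)
	case float64:
		return fmt.Sprintf("Number(%g)", val)
	case string:
		return fmt.Sprintf("String(%q)", val)
	case []any:
		return fmt.Sprintf("Array(len %d)", len(val))
	case map[string]any:
		return fmt.Sprintf("Object(%d keys)", len(val))
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(describe("Go"))            // String("Go")
	fmt.Println(describe(2009.0))          // Number(2009)
	fmt.Println(describe([]any{1.0, 2.0})) // Array(len 2)
}
```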

Questions to Guide Your Design

Before implementing, think through these:

  1. Lexer Design
    • What tokens exist in JSON? (LEFT_BRACE, STRING, NUMBER, COLON, COMMA, etc.)
    • How will you represent tokens? Struct? What fields?
    • How will you handle string escape sequences (\n, \t, \u0041)?
  2. Parser Design
    • How will you represent parsed values? (An interface{} that can hold string, float64, bool, nil, []interface{}, or map[string]interface{})
    • How does parseValue() decide which type to parse?
    • How do you parse arrays and objects recursively?
  3. Error Handling
    • How will you report the position of errors?
    • What makes an error message helpful vs cryptic?

Thinking Exercise

Draw the Token Stream

For this JSON:

{"users": [{"name": "Alice"}, {"name": "Bob"}]}

Write out the token sequence:

1.  LEFT_BRACE       {
2.  STRING           "users"
3.  COLON            :
4.  LEFT_BRACKET     [
5.  LEFT_BRACE       {
6.  STRING           "name"
7.  COLON            :
8.  STRING           "Alice"
9.  RIGHT_BRACE      }
10. COMMA            ,
11. LEFT_BRACE       {
12. STRING           "name"
13. COLON            :
14. STRING           "Bob"
15. RIGHT_BRACE      }
16. RIGHT_BRACKET    ]
17. RIGHT_BRACE      }
18. EOF

Questions while drawing:

  • How does the lexer know "users" is complete? (closing quote)
  • How does the lexer handle true vs "true"?
  • What if there’s a \n inside a string?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “What’s the difference between a lexer and a parser?”
  2. “How would you handle deeply nested JSON without stack overflow?”
  3. “What’s the time complexity of your parser?”
  4. “How do you handle Unicode in Go strings vs other languages?”
  5. “Why does interface{} exist and when should you avoid it?”
  6. “How would you make your parser streaming for huge files?”
  7. “What’s the difference between LL and LR parsing?”

Hints in Layers

Hint 1: Start with the Lexer. Build a complete lexer before touching the parser. Your lexer should pass tests for all JSON token types independently.

Hint 2: Token Structure. A token needs: Type (enum/const), Value (the actual text), Position (for error messages). Use iota for token type constants.
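
One possible token layout under that hint (the names are illustrative):

```go
package main

import "fmt"

// TokenType enumerates JSON's token kinds; iota assigns 0, 1, 2, ...
type TokenType int

const (
	EOF TokenType = iota
	LEFT_BRACE
	RIGHT_BRACE
	LEFT_BRACKET
	RIGHT_BRACKET
	COLON
	COMMA
	STRING
	NUMBER
	TRUE
	FALSE
	NULL
)

// Token carries everything the parser and error messages need.
type Token struct {
	Type  TokenType
	Value string
	Pos   int // byte offset in the input, for "error at position N"
}

func main() {
	tok := Token{Type: STRING, Value: `"name"`, Pos: 1}
	fmt.Printf("type=%d value=%s at %d\n", tok.Type, tok.Value, tok.Pos)
}
```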

Hint 3: Parser Entry Point. Parse(input string) calls the lexer to get all tokens, then calls parseValue(). parseValue() looks at the current token type and delegates: LEFT_BRACE → parseObject(), LEFT_BRACKET → parseArray(), STRING → return string value, etc.

Hint 4: Test with Real JSON. Download sample JSON files from the internet. Test edge cases: empty object {}, empty array [], nested structures, unicode strings, numbers with exponents.


Books That Will Help

  • Lexing fundamentals: “Crafting Interpreters” by Bob Nystrom, Ch. 4 (free online)
  • Parsing in Go: “Writing An Interpreter In Go” by Thorsten Ball, Ch. 1-2
  • String handling: “The Go Programming Language” by Donovan & Kernighan, Ch. 3.5
  • Error handling: “100 Go Mistakes” by Teiva Harsanyi, Ch. 7

Learning milestones:

  1. Lexer tokenizes all JSON correctly → You understand string processing and state machines
  2. Parser handles nested structures → You understand recursion and the call stack
  3. Errors report line/column → You understand how real tools provide helpful feedback

Project 3: HTTP Server from Scratch

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: C, Rust, Zig
  • Coolness Level: Level 4: Hardcore Tech Flex
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 3: Advanced
  • Knowledge Area: Networking, HTTP Protocol, TCP Sockets
  • Software or Tool: None (net package only)
  • Main Book: “Network Programming with Go” by Jan Newmarch

What you’ll build: An HTTP/1.1 server using only Go’s net package (not net/http), supporting GET/POST, headers, static files, and keep-alive connections.

Why it teaches Go: This project forces you to understand what net/http does for you by doing it yourself. You’ll learn TCP sockets, concurrent connection handling with goroutines, parsing protocols, and buffer management.

Core challenges you’ll face:

  • Accepting TCP connections → maps to net.Listen and net.Conn
  • Parsing HTTP requests → maps to bufio.Reader and string parsing
  • Handling concurrent connections → maps to goroutines and connection pools
  • Implementing keep-alive → maps to connection lifecycle management

Key Concepts:

  • TCP in Go: “Network Programming with Go” Ch. 3 - Jan Newmarch
  • HTTP/1.1 spec: RFC 7230-7235 (sections on message format)
  • Goroutines for connections: “Concurrency in Go” Ch. 3 - Katherine Cox-Buday
  • Buffer management: “The Go Programming Language” Ch. 7.6 - Donovan & Kernighan

Difficulty: Advanced. Time estimate: 2-3 weeks. Prerequisites: Completed Projects 1-2. Understand goroutine basics. Read about the HTTP/1.1 request/response format.


Real World Outcome

You’ll have a working HTTP server:

$ ./httpserver --port 8080 --root ./public
Server listening on :8080

# In another terminal:
$ curl -v http://localhost:8080/index.html
> GET /index.html HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/8.0.1
>
< HTTP/1.1 200 OK
< Content-Type: text/html
< Content-Length: 156
< Connection: keep-alive
<
<!DOCTYPE html>
<html>...

$ curl -X POST http://localhost:8080/echo -d '{"message": "hello"}'
{"received": {"message": "hello"}}

$ curl http://localhost:8080/not-found
HTTP/1.1 404 Not Found

# Your server logs:
[2025-01-10 14:32:01] 192.168.1.5:52341 - GET /index.html - 200 - 2ms
[2025-01-10 14:32:05] 192.168.1.5:52341 - POST /echo - 200 - 1ms
[2025-01-10 14:32:08] 192.168.1.5:52342 - GET /not-found - 404 - 0ms

You’ve built what net/http does under the hood!


The Core Question You’re Answering

“What actually happens when a browser talks to a web server?”

Before you write any code, understand: HTTP is just text over TCP. A “request” is literally text like GET /index.html HTTP/1.1\r\nHost: localhost\r\n\r\n. A “response” is text like HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello. Your job is to read bytes, parse that text, and write text back.


Concepts You Must Understand First

Stop and research these before coding:

  1. TCP Sockets in Go
    • What does net.Listen("tcp", ":8080") return?
    • What is net.Conn? What methods does it have?
    • How do you read and write to a connection?
    • Book Reference: “Network Programming with Go” Ch. 3 - Jan Newmarch
  2. HTTP/1.1 Message Format
    • What’s the structure of an HTTP request? (request line, headers, blank line, body)
    • What’s the structure of an HTTP response?
    • What is Content-Length and why is it important?
    • Reference: RFC 7230 Section 3 (HTTP Message Format)
  3. Goroutines for Concurrent Connections
    • Why spawn a goroutine per connection?
    • How do you avoid resource leaks with many connections?
    • What is connection keep-alive?
    • Book Reference: “Concurrency in Go” Ch. 3 - Katherine Cox-Buday

Questions to Guide Your Design

Before implementing, think through these:

  1. Connection Handling
    • How do you accept connections in a loop?
    • What happens when you spawn go handleConnection(conn)?
    • When should you close the connection?
  2. Request Parsing
    • How do you read lines until the blank line (end of headers)?
    • How do you handle requests with bodies (POST)?
    • What if the client sends malformed data?
  3. Response Building
    • How do you determine Content-Type from file extension?
    • How do you handle files that don’t exist?
    • How do you format the status line and headers?

Thinking Exercise

Trace a Full HTTP Request/Response

What happens when curl http://localhost:8080/hello runs?

1. curl opens TCP connection to localhost:8080
   │
   └── Your server: listener.Accept() returns new conn
       │
       └── go handleConnection(conn)

2. curl sends request bytes:
   "GET /hello HTTP/1.1\r\n"
   "Host: localhost:8080\r\n"
   "User-Agent: curl/8.0.1\r\n"
   "\r\n"
   │
   └── Your server reads from conn with bufio.Reader

3. You parse:
   - Method: GET
   - Path: /hello
   - Version: HTTP/1.1
   - Headers: map[Host:localhost:8080, User-Agent:curl/8.0.1]

4. You generate response:
   - Find handler for /hello (or static file, or 404)
   - Build response body

5. You write response bytes:
   "HTTP/1.1 200 OK\r\n"
   "Content-Type: text/plain\r\n"
   "Content-Length: 12\r\n"
   "\r\n"
   "Hello World!"
   │
   └── conn.Write(responseBytes)

6. curl reads response, displays it

Questions while tracing:

  • What if the client never sends \r\n\r\n?
  • How do you know when to stop reading the body?
  • What happens if the client disconnects mid-request?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “What’s the difference between TCP and HTTP?”
  2. “How does HTTP keep-alive work and why does it matter?”
  3. “How would you handle 10,000 concurrent connections?”
  4. “What’s the difference between blocking and non-blocking I/O?”
  5. “How would you add HTTPS support?”
  6. “What is chunked transfer encoding?”
  7. “How do real web servers like nginx handle concurrency?”

Hints in Layers

Hint 1: Start with an Echo Server. Before HTTP, build a simple echo server: accept a connection, read a line, write it back. Verify it works with telnet localhost 8080.

Hint 2: Parse the Request Line First. The first line is METHOD PATH VERSION. Split by space. Then read headers line by line until you get an empty line.
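
The request-line split can be sketched as follows (the function name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// parseRequestLine splits "GET /index.html HTTP/1.1" into its three
// parts. strings.Fields also discards the trailing \r\n for us.
func parseRequestLine(line string) (method, path, version string, err error) {
	parts := strings.Fields(line)
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("malformed request line: %q", line)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	m, p, v, err := parseRequestLine("GET /hello HTTP/1.1\r\n")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(m, p, v) // GET /hello HTTP/1.1
}
```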

Hint 3: Use bufio.Reader. Wrap the connection: reader := bufio.NewReader(conn). Use ReadString('\n') for lines or Read(buf) for the body.

Hint 4: Test with curl. Use curl -v (verbose) to see exactly what’s being sent and received. Use nc localhost 8080 to send raw HTTP manually.


Books That Will Help

  • TCP networking: “Network Programming with Go” by Jan Newmarch, Ch. 3-4
  • HTTP protocol: “HTTP: The Definitive Guide” by Gourley & Totty, Ch. 1-4
  • Concurrent connections: “Concurrency in Go” by Katherine Cox-Buday, Ch. 3-4
  • Buffered I/O: “The Go Programming Language” by Donovan & Kernighan, Ch. 7.6

Learning milestones:

  1. Echo server works → You understand TCP sockets and connection handling
  2. GET requests work → You understand HTTP parsing and response building
  3. Keep-alive works → You understand connection lifecycle management

Project 4: Concurrent Web Scraper

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: Python, Rust, Node.js
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 2. The “Micro-SaaS / Pro Tool”
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Concurrency, HTTP Client, HTML Parsing
  • Software or Tool: colly (optional), goquery
  • Main Book: “Concurrency in Go” by Katherine Cox-Buday

What you’ll build: A concurrent web crawler that discovers and fetches pages from a website, respects robots.txt, limits concurrent requests, and extracts structured data—all using goroutines and channels.

Why it teaches Go: This is THE project for learning Go’s concurrency model. You’ll use goroutines for parallel fetching, channels for coordination, sync primitives for shared state, and context for cancellation. It’s practical and makes concurrency tangible.

Core challenges you’ll face:

  • Coordinating concurrent fetchers → maps to goroutines, channels, and WaitGroups
  • Rate limiting requests → maps to time.Ticker and semaphores
  • Tracking visited URLs → maps to sync.Map or mutex-protected maps
  • Graceful shutdown → maps to context.Context and cancellation

Key Concepts:

  • Goroutines and channels: “Concurrency in Go” Ch. 3 - Katherine Cox-Buday
  • Worker pools: “Concurrency in Go” Ch. 4 - Katherine Cox-Buday
  • Context for cancellation: “Learning Go” Ch. 14 - Jon Bodner
  • HTTP client: “The Go Programming Language” Ch. 5 - Donovan & Kernighan

Difficulty: Intermediate. Time estimate: 1-2 weeks. Prerequisites: Completed Projects 1-3. Understand goroutines, channel basics, and HTTP requests.


Real World Outcome

You’ll have a powerful web scraper:

$ ./scraper --url https://example.com --depth 3 --workers 10 --delay 100ms
Starting crawl of https://example.com
  Max depth: 3
  Workers: 10
  Delay between requests: 100ms

[Worker 1] Fetching: https://example.com/
[Worker 2] Fetching: https://example.com/about
[Worker 3] Fetching: https://example.com/products
[Worker 1] Found 15 links on /
[Worker 4] Fetching: https://example.com/products/item1
...

Crawl complete!
  Pages fetched: 127
  Time elapsed: 12.3s
  Errors: 3 (404 not found)

Results saved to: results.json

$ cat results.json
{
  "pages": [
    {
      "url": "https://example.com/",
      "title": "Example - Home",
      "links": ["https://example.com/about", ...],
      "fetched_at": "2025-01-10T14:30:00Z"
    },
    ...
  ]
}

# Live progress visualization:
$ ./scraper --url https://news.ycombinator.com --live
┌─────────────────────────────────────────────────────────┐
│ Active: 10/10 workers | Queue: 234 | Done: 89 | Err: 2 │
│ ████████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 27%    │
└─────────────────────────────────────────────────────────┘

The Core Question You’re Answering

“How do I coordinate many concurrent tasks that share state and need to communicate?”

Before you write any code, understand the fundamental challenge: multiple goroutines will discover URLs, but you need to ensure each URL is fetched only once, limit how many requests happen simultaneously, and stop everything cleanly when done. This is coordination.


Concepts You Must Understand First

Stop and research these before coding:

  1. Goroutines and the go Keyword
    • What happens when you write go fetchURL(url)?
    • How many goroutines can you create? What limits them?
    • How do goroutines communicate?
    • Book Reference: “Concurrency in Go” Ch. 3 - Katherine Cox-Buday
  2. Channels for Communication
    • What’s the difference between unbuffered and buffered channels?
    • What happens when you send to a full channel? Receive from empty?
    • How do you close a channel and why?
    • Book Reference: “Concurrency in Go” Ch. 3 - Katherine Cox-Buday
  3. sync Package Primitives
    • When do you use WaitGroup vs channels?
    • What is sync.Mutex and when do you need it?
    • What is sync.Map and when is it better than mutex + map?
    • Book Reference: “Learning Go” Ch. 12 - Jon Bodner

Questions to Guide Your Design

Before implementing, think through these:

  1. Architecture
    • Who produces URLs to fetch? Who consumes them?
    • How do workers get work? Channel of URLs?
    • How do you know when all work is done?
  2. Shared State
    • How do you track which URLs have been visited?
    • How do you prevent duplicate fetches?
    • How do you collect results from all workers?
  3. Rate Limiting & Politeness
    • How do you limit to N concurrent requests?
    • How do you add delay between requests?
    • How do you respect robots.txt?

Thinking Exercise

Draw the Data Flow

                         ┌─────────────────┐
                         │  URL Frontier   │
                         │   (channel)     │
                         └────────┬────────┘
                                  │
          ┌───────────────────────┼───────────────────────┐
          │                       │                       │
          ▼                       ▼                       ▼
    ┌───────────┐          ┌───────────┐          ┌───────────┐
    │ Worker 1  │          │ Worker 2  │          │ Worker 3  │
    │           │          │           │          │           │
    │ 1. Fetch  │          │ 1. Fetch  │          │ 1. Fetch  │
    │ 2. Parse  │          │ 2. Parse  │          │ 2. Parse  │
    │ 3. Extract│          │ 3. Extract│          │ 3. Extract│
    └────┬──────┘          └────┬──────┘          └────┬──────┘
         │                      │                      │
         └──────────────────────┼──────────────────────┘
                                │
                                ▼
                     ┌────────────────────┐
                     │   Results Channel  │
                     └─────────┬──────────┘
                               │
                               ▼
                     ┌────────────────────┐
                     │   Result Collector │
                     │   (writes JSON)    │
                     └────────────────────┘

Questions while drawing:

  • How does a worker add newly discovered URLs back to the frontier?
  • How do you prevent infinite loops (A links to B, B links to A)?
  • What if the URL frontier is empty but workers are still fetching?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “Explain the difference between concurrency and parallelism.”
  2. “When would you use a mutex vs a channel?”
  3. “How do you prevent goroutine leaks?”
  4. “What is a race condition and how do you detect them in Go?”
  5. “How would you implement rate limiting?”
  6. “What is context.Context and how do you use it for cancellation?”
  7. “How would you make this crawler distributed across multiple machines?”

Hints in Layers

Hint 1: Start with the Worker Pool Pattern. Create N worker goroutines that all read from a single jobs channel. This naturally limits concurrency.

Hint 2: Use a Visited Map. Use sync.Map or a mutex-protected map[string]bool to track visited URLs. Check before adding to the jobs channel.

Hint 3: WaitGroup for Completion. Increment the WaitGroup when adding a URL to process, decrement when done. Wait on the WaitGroup in main.

Hint 4: Run the Race Detector. Always run with go run -race main.go during development. It will catch race conditions you didn’t know you had.
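
The four hints above combine into a compact sketch. The link graph is faked so the example is self-contained; a real crawler would do HTTP fetching and HTML parsing inside fetch:

```go
package main

import (
	"fmt"
	"sync"
)

type crawler struct {
	mu      sync.Mutex
	visited map[string]bool // Hint 2: mutex-protected visited set
	jobs    chan string
	wg      sync.WaitGroup
}

// enqueue accepts each URL exactly once and accounts for it in the
// WaitGroup (Hint 3: one Add per URL, one Done when a worker finishes it).
func (c *crawler) enqueue(url string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.visited[url] {
		return
	}
	c.visited[url] = true
	c.wg.Add(1)
	go func() { c.jobs <- url }() // hand off without blocking the caller
}

// worker drains the jobs channel; the pool size bounds concurrency (Hint 1).
func (c *crawler) worker(fetch func(string) []string) {
	for url := range c.jobs {
		for _, link := range fetch(url) {
			c.enqueue(link)
		}
		c.wg.Done()
	}
}

func main() {
	c := &crawler{visited: map[string]bool{}, jobs: make(chan string)}
	// Fake link graph with a cycle, to show deduplication at work.
	links := map[string][]string{
		"/":  {"/a", "/b"},
		"/a": {"/b", "/"},
	}
	fetch := func(url string) []string { return links[url] }
	for i := 0; i < 3; i++ {
		go c.worker(fetch)
	}
	c.enqueue("/")
	c.wg.Wait() // all accepted URLs fetched
	close(c.jobs)
	fmt.Println(len(c.visited)) // 3 pages: /, /a, /b
}
```

Run it with go run -race to confirm the mutex discipline holds (Hint 4).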


Books That Will Help

  • Concurrency fundamentals: “Concurrency in Go” by Katherine Cox-Buday, Ch. 3-4
  • Worker pools: “Concurrency in Go” by Katherine Cox-Buday, Ch. 4
  • Context: “Learning Go” by Jon Bodner, Ch. 14
  • Common mistakes: “100 Go Mistakes” by Teiva Harsanyi, Ch. 8-9

Learning milestones:

  1. Single-threaded crawler works → You understand HTTP fetching and parsing
  2. Concurrent workers work → You understand goroutines and channels
  3. No race conditions → You understand synchronization primitives

Project 5: Rate Limiter Library

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: Rust, Java, C++
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 4. The “Open Core” Infrastructure
  • Difficulty: Level 3: Advanced
  • Knowledge Area: Algorithms, Time Management, API Design
  • Software or Tool: None (from scratch)
  • Main Book: “100 Go Mistakes and How to Avoid Them” by Teiva Harsanyi

What you’ll build: A production-grade rate limiting library implementing multiple algorithms (token bucket, sliding window, leaky bucket) with a clean API, suitable for use in HTTP middleware.

Why it teaches Go: Rate limiters are deceptively complex. You’ll master time handling, concurrent-safe data structures, interface design, and building reusable libraries. This is the kind of code that runs in production at scale.

Core challenges you’ll face:

  • Implementing rate limiting algorithms → maps to time package and algorithm design
  • Making it thread-safe → maps to sync.Mutex and atomic operations
  • Designing a clean API → maps to interface design and Go idioms
  • Testing time-dependent code → maps to test design and mocking

Key Concepts:

  • Token bucket algorithm: System Design resources
  • Time handling: “The Go Programming Language” Ch. 6 - Donovan & Kernighan
  • sync/atomic: “Concurrency in Go” Ch. 3 - Katherine Cox-Buday
  • Interface design: “100 Go Mistakes” Ch. 2 - Teiva Harsanyi

Difficulty: Advanced. Time estimate: 1-2 weeks. Prerequisites: Completed Projects 1-4. Strong understanding of concurrency. Familiarity with rate limiting concepts.


Real World Outcome

You’ll have a library you can use in any Go project:

// Create a rate limiter: 100 requests per second, burst of 10
limiter := ratelimit.NewTokenBucket(100, 10)

// Use in HTTP middleware
func RateLimitMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        clientIP := r.RemoteAddr

        if !limiter.Allow(clientIP) {
            http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
            return
        }

        next.ServeHTTP(w, r)
    })
}

// Different algorithms available
tokenBucket := ratelimit.NewTokenBucket(rate, burst)
slidingWindow := ratelimit.NewSlidingWindow(rate, windowSize)
leakyBucket := ratelimit.NewLeakyBucket(rate)

// Check current state
info := limiter.Status("client-123")
fmt.Printf("Remaining: %d, Reset in: %v\n", info.Remaining, info.ResetIn)

Command-line tool for testing:

$ ./ratelimit-demo --algorithm token-bucket --rate 10 --burst 5

Sending requests...
Request 1: ✓ Allowed (tokens: 4)
Request 2: ✓ Allowed (tokens: 3)
Request 3: ✓ Allowed (tokens: 2)
Request 4: ✓ Allowed (tokens: 1)
Request 5: ✓ Allowed (tokens: 0)
Request 6: ✗ Denied (waiting 100ms)
[100ms passes]
Request 7: ✓ Allowed (tokens: 0)

The Core Question You’re Answering

“How do I protect my system from being overwhelmed while being fair to users?”

Before you write any code, understand: rate limiting is about time. How many requests per unit time? What happens when the limit is exceeded? How do you track time fairly across concurrent requests?


Concepts You Must Understand First

Stop and research these before coding:

  1. Rate Limiting Algorithms
    • How does token bucket work? What are “tokens” and “bucket”?
    • How does sliding window differ from fixed window?
    • What is leaky bucket and when is it preferred?
    • Resource: “System Design Interview” by Alex Xu - Ch. on Rate Limiting
  2. Time in Go
    • What is time.Now() vs time.Since()?
    • How do you handle time durations and calculations?
    • What is monotonic time and why does it matter?
    • Book Reference: “The Go Programming Language” Ch. 6 - Donovan & Kernighan
  3. Concurrent-Safe Operations
    • When is sync/atomic faster than mutex?
    • What is a race condition in time-based code?
    • How do you test concurrent code?
    • Book Reference: “Concurrency in Go” Ch. 3 - Katherine Cox-Buday

Questions to Guide Your Design

Before implementing, think through these:

  1. Algorithm Choice
    • What’s the core invariant of token bucket? (tokens replenish over time)
    • How does sliding window handle edge cases at window boundaries?
    • Which algorithm is best for your use case?
  2. Per-Client Tracking
    • How do you track limits per client/IP/API key?
    • What data structure holds per-client state?
    • How do you clean up old client data?
  3. API Design
    • What should the interface look like? Allow(key string) bool?
    • Should it block or return immediately?
    • How do you communicate “wait time” to callers?

Thinking Exercise

Token Bucket State Machine

Initial state: bucket = 5 tokens (burst), rate = 1 token/second

Time 0.0s: Request arrives
           bucket = 5 → Allow, bucket = 4

Time 0.1s: Request arrives
           bucket = 4 → Allow, bucket = 3

Time 0.2s: Request arrives
           bucket = 3 → Allow, bucket = 2

Time 0.3s: Request arrives
           bucket = 2 → Allow, bucket = 1

Time 0.4s: Request arrives
           bucket = 1 → Allow, bucket = 0

Time 0.5s: Request arrives
           bucket = 0 → DENY (or wait)

           [500ms pass, 0.5 tokens added]

Time 1.0s: Request arrives
           bucket = 0.5 → Allow, bucket = 0
           (Note: some implementations round down)

Time 2.0s: Request arrives
           bucket = 1.0 → Allow, bucket = 0

Questions while tracing:

  • How do you handle fractional tokens?
  • When do you actually add tokens: on each request, or in a background goroutine?
  • What if the clock goes backward (e.g., an NTP adjustment)?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “Explain the token bucket algorithm and its parameters.”
  2. “What’s the difference between rate and burst?”
  3. “How would you implement distributed rate limiting?”
  4. “What happens if two requests arrive at exactly the same time?”
  5. “How do you test time-dependent code reliably?”
  6. “What are the tradeoffs between different rate limiting algorithms?”
  7. “How would you handle rate limiting in a microservices architecture?”

Hints in Layers

Hint 1: Start Simple. Implement fixed window first (count requests in the current second). Then evolve to token bucket.

Hint 2: Lazy Token Refill. Don’t use a background goroutine. Calculate tokens based on the time elapsed since the last request: tokens = min(burst, tokens + (now - lastTime) * rate).

Hint 3: Interface First. Define your interface before the implementation. Something like:

type Limiter interface {
    Allow(key string) bool
    Wait(key string) error
}

Hint 4: Test with Fake Time. Create a Clock interface that returns the current time. In tests, use a fake clock you can control.


Books That Will Help

Topic | Book | Chapter
Rate limiting algorithms | “System Design Interview” by Alex Xu | Ch. 4
Time handling | “The Go Programming Language” by Donovan & Kernighan | Ch. 6
API design | “100 Go Mistakes” by Teiva Harsanyi | Ch. 2
Testing | “Learning Go” by Jon Bodner | Ch. 15

Learning milestones:

  1. Token bucket works for single client → You understand the algorithm
  2. Works for multiple concurrent clients → You understand concurrent maps
  3. Tests pass with fake time → You understand testable design

Project 6: Custom HTTP Router with Middleware

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: Rust, TypeScript, Python
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 4. The “Open Core” Infrastructure
  • Difficulty: Level 3: Advanced
  • Knowledge Area: HTTP, Data Structures (Trie), API Design
  • Software or Tool: None (from scratch)
  • Main Book: “The Go Programming Language” by Donovan & Kernighan

What you’ll build: A fast HTTP router using a radix trie for path matching, supporting path parameters (/users/:id), wildcards, middleware chains, and route groups—like a mini Gin or Chi.

Why it teaches Go: Building a router teaches you interface design, function composition (middleware), efficient data structures (tries), and HTTP internals. You’ll understand why frameworks work the way they do.

Core challenges you’ll face:

  • Radix trie for path matching → maps to tree data structures and algorithms
  • Extracting path parameters → maps to string parsing and request context
  • Middleware chain execution → maps to function composition and closures
  • Route grouping → maps to API design and builder pattern

Key Concepts:

  • Radix/Patricia tries: Algorithm textbooks or online resources
  • http.Handler interface: “The Go Programming Language” Ch. 7 - Donovan & Kernighan
  • Middleware pattern: “Learning Go” Ch. 10 - Jon Bodner
  • Context values: “Learning Go” Ch. 14 - Jon Bodner

Difficulty: Advanced. Time estimate: 2-3 weeks. Prerequisites: Completed Projects 1-5. Understand HTTP deeply. Familiarity with tree data structures.


Real World Outcome

You’ll have a router as clean as any popular framework:

r := router.New()

// Middleware
r.Use(router.Logger())
r.Use(router.Recovery())

// Routes with parameters
r.GET("/users/:id", getUser)
r.POST("/users", createUser)
r.PUT("/users/:id", updateUser)
r.DELETE("/users/:id", deleteUser)

// Wildcards
r.GET("/static/*filepath", serveStatic)

// Route groups with group-specific middleware
api := r.Group("/api/v1")
api.Use(authMiddleware)
{
    api.GET("/profile", getProfile)
    api.GET("/settings", getSettings)
}

// Start server
http.ListenAndServe(":8080", r)

Handlers get parameters easily:

func getUser(c *router.Context) {
    id := c.Param("id")  // from /users/:id

    // Query parameters
    page := c.Query("page", "1")  // with default

    // Response helpers
    c.JSON(200, map[string]string{
        "id": id,
        "name": "John",
    })
}

The Core Question You’re Answering

“How do web frameworks match URLs to handlers so efficiently?”

Before you write any code, understand: the naive approach (linear search through routes) is O(n) per request. Real routers use tree structures to match in O(k) where k is the path length. The trie makes /users/123 and /users/456 share the /users/ prefix.


Concepts You Must Understand First

Stop and research these before coding:

  1. Trie Data Structure
    • What is a trie and how does it work?
    • What is a radix trie (compressed trie)?
    • How do you handle path parameters in a trie?
    • Resource: Algorithm textbooks or “Introduction to Algorithms” (CLRS)
  2. http.Handler Interface
    • What is the http.Handler interface?
    • What is http.HandlerFunc?
    • How does ServeHTTP work?
    • Book Reference: “The Go Programming Language” Ch. 7 - Donovan & Kernighan
  3. Middleware Pattern
    • What is middleware in the context of HTTP?
    • How do you chain middleware functions?
    • What is the “onion model” of middleware?
    • Book Reference: “Learning Go” Ch. 10 - Jon Bodner

Questions to Guide Your Design

Before implementing, think through these:

  1. Data Structure
    • How do you represent the route tree?
    • How do you handle static segments vs parameters vs wildcards?
    • What metadata do you store at each node?
  2. Parameter Extraction
    • How do you know :id means “capture this segment”?
    • Where do you store captured parameters?
    • How does the handler access them?
  3. Middleware Execution
    • How do you compose multiple middleware functions?
    • How does a middleware call the next handler?
    • How do you handle early returns (auth failure)?

Thinking Exercise

Visualize the Route Trie

For these routes:

GET  /users
GET  /users/:id
POST /users
GET  /users/:id/posts
GET  /posts/:id

The trie looks like:

root
├── users
│   ├── [GET: usersHandler]
│   ├── [POST: createUserHandler]
│   └── :id (param node)
│       ├── [GET: getUserHandler]
│       └── posts
│           └── [GET: getUserPostsHandler]
└── posts
    └── :id (param node)
        └── [GET: getPostHandler]

Questions while drawing:

  • How do you match /users/123 to /users/:id?
  • What if you have both /users/new and /users/:id? Which matches first?
  • How do you handle trailing slashes?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “Why use a trie instead of a hash map for routing?”
  2. “What’s the time complexity of route matching?”
  3. “How do you handle route conflicts?”
  4. “Explain how middleware works in your router.”
  5. “How would you add support for regex in routes?”
  6. “What’s the difference between path parameters and query parameters?”
  7. “How do popular routers like Gin or Chi implement routing?”

Hints in Layers

Hint 1: Start with Static Routes. First, implement exact path matching with a simple map. Then add the trie for efficiency.

Hint 2: Node Types. Create node types: static (literal segment), param (:id), and catchAll (*filepath). Match static first, then param, then catchAll.

Hint 3: Middleware as Functions. A middleware is a func(next http.Handler) http.Handler. Chain them by wrapping: middleware3(middleware2(middleware1(handler))).

Hint 4: Context for Values. Use context.WithValue to store path parameters, and create a helper function to retrieve them in handlers.


Books That Will Help

Topic | Book | Chapter
http.Handler | “The Go Programming Language” by Donovan & Kernighan | Ch. 7
Tree structures | “Algorithms” by Sedgewick & Wayne | Ch. 5
Middleware | “Learning Go” by Jon Bodner | Ch. 10
API design | “100 Go Mistakes” by Teiva Harsanyi | Ch. 2-3

Learning milestones:

  1. Static routes work → You understand http.Handler
  2. Parameter routes work → You understand trie matching
  3. Middleware chains work → You understand function composition

Project 7: Mini Redis Clone

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: C, Rust, C++
  • Coolness Level: Level 5: Pure Magic (Super Cool)
  • Business Potential: 4. The “Open Core” Infrastructure
  • Difficulty: Level 4: Expert
  • Knowledge Area: Data Structures, Networking, Persistence
  • Software or Tool: None (from scratch)
  • Main Book: “Designing Data-Intensive Applications” by Martin Kleppmann

What you’ll build: A Redis-compatible in-memory key-value store supporting strings, lists, hashes, sets, TTL expiration, persistence (RDB/AOF), and the Redis protocol (RESP).

Why it teaches Go: This project combines everything: networking (TCP server), concurrency (handling clients), data structures (efficient storage), persistence (file I/O), and protocol parsing (RESP). It’s a masterpiece of systems programming.

Core challenges you’ll face:

  • Implementing RESP protocol → maps to binary protocol parsing
  • Thread-safe data store → maps to concurrent data structures
  • TTL expiration → maps to time-based goroutines and cleanup
  • Persistence (AOF/RDB) → maps to file I/O and serialization

Key Concepts:

  • RESP protocol: Redis documentation (redis.io/topics/protocol)
  • Concurrent data structures: “Concurrency in Go” Ch. 3-4 - Katherine Cox-Buday
  • Persistence strategies: “Designing Data-Intensive Applications” Ch. 3 - Kleppmann
  • TCP server patterns: “Network Programming with Go” - Jan Newmarch

Difficulty: Expert. Time estimate: 1 month+. Prerequisites: Completed Projects 1-6. Strong TCP networking skills. Understanding of data structure implementation.


Real World Outcome

You’ll have a Redis-compatible server:

$ ./miniredis --port 6379 --aof /var/data/redis.aof
MiniRedis server starting...
  Port: 6379
  AOF: /var/data/redis.aof (loading 1,234 commands)
Ready to accept connections

# In another terminal, use redis-cli:
$ redis-cli -p 6379

127.0.0.1:6379> SET user:1:name "Alice"
OK

127.0.0.1:6379> GET user:1:name
"Alice"

127.0.0.1:6379> SETEX session:abc 3600 "active"
OK

127.0.0.1:6379> TTL session:abc
(integer) 3599

127.0.0.1:6379> LPUSH queue:jobs "job1" "job2" "job3"
(integer) 3

127.0.0.1:6379> RPOP queue:jobs
"job1"

127.0.0.1:6379> HSET user:1 name "Alice" age "30" city "NYC"
(integer) 3

127.0.0.1:6379> HGETALL user:1
1) "name"
2) "Alice"
3) "age"
4) "30"
5) "city"
6) "NYC"

127.0.0.1:6379> INFO
# Server
miniredis_version:1.0.0
uptime_in_seconds:123
connected_clients:2
used_memory:4096

The Core Question You’re Answering

“How do you build a fast, durable, concurrent key-value store?”

Before you write any code, understand the Redis mental model: it’s a giant dictionary in memory, accessed by multiple clients over TCP, with optional persistence. The magic is in making it fast, correct, and durable simultaneously.


Concepts You Must Understand First

Stop and research these before coding:

  1. RESP Protocol
    • What are the RESP data types (simple strings, errors, integers, bulk strings, arrays)?
    • How do you parse *3\r\n$3\r\nSET\r\n$4\r\nname\r\n$5\r\nAlice\r\n?
    • How do you serialize responses?
    • Resource: Redis documentation (redis.io/topics/protocol)
  2. Concurrent Data Access
    • How do you handle multiple clients reading/writing the same key?
    • When do you need locks? Can you avoid them?
    • What is a sync.RWMutex and when to use it?
    • Book Reference: “Concurrency in Go” Ch. 3 - Katherine Cox-Buday
  3. Persistence Strategies
    • What is AOF (Append-Only File)?
    • What is RDB (Redis Database dump)?
    • What are the tradeoffs between them?
    • Book Reference: “Designing Data-Intensive Applications” Ch. 3 - Kleppmann

Questions to Guide Your Design

Before implementing, think through these:

  1. Data Model
    • How do you store different types (strings, lists, hashes, sets)?
    • What’s the key → value mapping structure?
    • How do you handle type mismatches (SET on a list key)?
  2. Concurrency Model
    • One goroutine per client or worker pool?
    • How do you serialize access to the data store?
    • Can you use sharding to reduce lock contention?
  3. Expiration
    • How do you track keys with TTL?
    • Active expiration (background goroutine) vs passive (check on access)?
    • What data structure efficiently tracks next-to-expire?

Thinking Exercise

RESP Parsing State Machine

Parse this command: *2\r\n$4\r\nPING\r\n$5\r\nHello\r\n

State: START
Read: '*' → Array type, read count
Read: '2' → Array of 2 elements
Read: '\r\n' → End of line

State: READING_ARRAY (2 elements remaining)
Read: '$' → Bulk string, read length
Read: '4' → 4 bytes
Read: '\r\n'
Read: 'PING' → First element: "PING"
Read: '\r\n'

State: READING_ARRAY (1 element remaining)
Read: '$' → Bulk string
Read: '5' → 5 bytes
Read: '\r\n'
Read: 'Hello' → Second element: "Hello"
Read: '\r\n'

Result: ["PING", "Hello"]
Command: PING with argument "Hello"

Questions while parsing:

  • What if the client sends incomplete data?
  • How do you handle pipelining (multiple commands in one read)?
  • What if count says 3 but only 2 elements arrive?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “How does Redis achieve high performance with a single-threaded design?”
  2. “Explain the difference between AOF and RDB persistence.”
  3. “How would you implement TTL expiration efficiently?”
  4. “How do you handle network partial reads?”
  5. “What’s the tradeoff between strong consistency and availability?”
  6. “How would you implement Redis Cluster (sharding)?”
  7. “How do you prevent memory exhaustion?”

Hints in Layers

Hint 1: Start with PING/PONG. Implement just enough to reply to PING with +PONG\r\n. This proves your protocol parsing works.

Hint 2: Value Types. Use an empty interface or a type-tagged struct for values:

type Value interface{}
// or
type Entry struct {
    Type    ValueType  // STRING, LIST, HASH, SET
    Data    interface{}
    Expires *time.Time
}

Hint 3: RWMutex for the Store. Use sync.RWMutex: multiple readers OR one writer. Take the write lock for SET and the read lock for GET.

Hint 4: AOF is Simple. AOF is just appending each command to a file. On startup, replay the file.


Books That Will Help

Topic | Book | Chapter
Persistence | “Designing Data-Intensive Applications” by Kleppmann | Ch. 3
Protocol design | Redis documentation | redis.io
Concurrent data | “Concurrency in Go” by Katherine Cox-Buday | Ch. 3-4
Networking | “Network Programming with Go” by Jan Newmarch | Ch. 3-5

Learning milestones:

  1. String commands work → You understand the protocol
  2. Multiple clients work → You understand concurrent access
  3. Data survives restart → You understand persistence

Project 8: Log Aggregator with Tail -f

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: Rust, C++, Java
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 3. The “Service & Support” Model
  • Difficulty: Level 3: Advanced
  • Knowledge Area: File I/O, Streaming, Real-time Processing
  • Software or Tool: None (from scratch)
  • Main Book: “The Linux Programming Interface” by Michael Kerrisk

What you’ll build: A log aggregation system that watches multiple log files in real-time (like tail -f), streams them to a central server, supports filtering/searching, and stores in compressed archives.

Why it teaches Go: This combines file system watching (fsnotify), streaming I/O, concurrent file handling, network streaming, and compression. It’s the kind of tool that runs silently in production keeping systems observable.

Core challenges you’ll face:

  • Efficient file tailing → maps to file I/O and inotify/fsnotify
  • Handling file rotation → maps to detecting file changes and reopening
  • Streaming over network → maps to TCP/WebSocket streaming
  • Real-time filtering → maps to regex and string processing

Key Concepts:

  • fsnotify for file watching: fsnotify package documentation
  • io.Reader/Writer: “The Go Programming Language” Ch. 7 - Donovan & Kernighan
  • Compression (gzip): compress/gzip package documentation
  • Streaming protocols: WebSocket or custom TCP

Difficulty: Advanced. Time estimate: 2 weeks. Prerequisites: Completed Projects 1-6. Understand file I/O deeply. Familiar with goroutines for background tasks.


Real World Outcome

You’ll have a log aggregation system:

# Agent running on each server:
$ ./logagent --config /etc/logagent.yaml
Watching:
  - /var/log/nginx/access.log (streaming)
  - /var/log/nginx/error.log (streaming)
  - /var/log/app/*.log (watching for new files)

Connected to aggregator at logs.example.com:9000
Streaming...

# Central aggregator:
$ ./logaggregator --port 9000 --storage /var/logs
Listening on :9000
  Connected agents: 15
  Logs/second: 2,341
  Storage used: 12.4 GB

# Query logs:
$ ./logcli search --from "1h ago" --pattern "ERROR" --source "web-*"
[web-01] 2025-01-10 14:32:01 ERROR: Connection refused
[web-03] 2025-01-10 14:32:15 ERROR: Timeout after 30s
[web-01] 2025-01-10 14:33:02 ERROR: Out of memory

# Live tail across all servers:
$ ./logcli tail --source "web-*" --pattern "ERROR"
[web-01] 2025-01-10 14:35:01 ERROR: Connection refused
^C

# Export compressed logs:
$ ./logcli export --date 2025-01-09 --output logs-2025-01-09.gz
Exported 1,234,567 lines (compressed: 45 MB)

The Core Question You’re Answering

“How do you efficiently watch files that are constantly being written to?”

Before you write any code, understand the challenge: you can’t just read a file once—you need to detect new content as it’s appended. tail -f does this using inotify (Linux) or similar mechanisms. You also need to handle log rotation (file gets renamed, new file created).


Concepts You Must Understand First

Stop and research these before coding:

  1. File System Notifications
    • What is inotify? What events can you watch?
    • How does fsnotify wrap platform-specific APIs?
    • What happens during log rotation?
    • Resource: fsnotify documentation
  2. Efficient File Reading
    • How do you read only new bytes appended to a file?
    • What is file offset and seeking?
    • How do you detect file truncation?
    • Book Reference: “The Linux Programming Interface” Ch. 4 - Kerrisk
  3. Streaming Patterns
    • How do you stream data over TCP?
    • What framing protocol do you use?
    • How do you handle backpressure?
    • Book Reference: “Network Programming with Go” - Jan Newmarch

Questions to Guide Your Design

Before implementing, think through these:

  1. Tailing Strategy
    • How do you detect new content? (polling vs inotify)
    • What if multiple lines are written between reads?
    • How do you handle partial lines at read boundary?
  2. Log Rotation Handling
    • How do you detect rotation? (inode change, rename event)
    • What if logs are renamed vs truncated?
    • How do you avoid missing lines during rotation?
  3. Network Streaming
    • What protocol between agent and aggregator?
    • How do you handle network interruptions?
    • How do you buffer when network is slow?

Thinking Exercise

File Rotation Sequence

Initial state:
  /var/log/app.log (inode 12345, size 50000)
  Agent is reading at offset 50000

Log rotation begins:
  1. logrotate moves /var/log/app.log → /var/log/app.log.1
     - File still has inode 12345
     - Your file handle still valid!

  2. App creates new /var/log/app.log (inode 67890)
     - New file, empty

  3. App writes "New log line\n"
     - This goes to NEW file (inode 67890)

What should your agent do?
  - Detect that file was renamed (fsnotify: Rename event)
  - Reopen /var/log/app.log
  - Start reading from offset 0
  - Maybe finish reading old file first?

Questions while tracing:

  • How do you finish reading old file before switching?
  • What if the app writes between rename and your reopen?
  • What’s the risk of duplicate lines?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “How does tail -f work under the hood?”
  2. “How do you handle log rotation?”
  3. “What’s the difference between polling and event-driven file watching?”
  4. “How would you handle logs with multi-line entries?”
  5. “How do you prevent memory exhaustion when network is slow?”
  6. “How would you implement distributed full-text search on these logs?”
  7. “What’s the difference between push vs pull log collection?”

Hints in Layers

Hint 1: Start with a Simple Tail. Read the file to EOF, then poll for new content with time.Sleep(100 * time.Millisecond). Correct, just not efficient.

Hint 2: Add fsnotify. Replace polling with fsnotify. Watch for Write events, but still poll occasionally to cover edge cases.

Hint 3: Track Position. Store the file position (offset) and inode. On each read, seek to the position, read to EOF, and update the position.

Hint 4: Line Buffer. Accumulate bytes until you see \n. Only emit complete lines; keep the remainder for the next read.


Books That Will Help

Topic | Book | Chapter
File I/O | “The Linux Programming Interface” by Kerrisk | Ch. 4-5
Streaming | “Network Programming with Go” by Jan Newmarch | Ch. 5
Go patterns | “Learning Go” by Jon Bodner | Ch. 13
Compression | Go standard library documentation | compress/gzip

Learning milestones:

  1. Simple tail works → You understand file I/O
  2. Rotation detection works → You understand fsnotify
  3. Network streaming works → You understand streaming protocols

Project 9: gRPC Service with Streaming

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: Java, C++, Python
  • Coolness Level: Level 3: Genuinely Clever
  • Business Potential: 4. The “Open Core” Infrastructure
  • Difficulty: Level 3: Advanced
  • Knowledge Area: RPC, Protocol Buffers, Microservices
  • Software or Tool: grpc-go, protoc
  • Main Book: “gRPC: Up and Running” by Kasun Indrasiri

What you’ll build: A microservices system using gRPC with all four communication patterns: unary, server streaming, client streaming, and bidirectional streaming—implementing a real-time chat service.

Why it teaches Go: gRPC is the standard for modern microservices. You’ll learn Protocol Buffers, code generation, streaming patterns, interceptors (middleware), and how production Go services communicate. This is how Google builds services.

Core challenges you’ll face:

  • Defining Protocol Buffers → maps to schema design and code generation
  • Implementing streaming RPCs → maps to concurrent streams and flow control
  • gRPC interceptors → maps to middleware and cross-cutting concerns
  • Error handling in gRPC → maps to status codes and error details

Key Concepts:

  • Protocol Buffers: protobuf.dev documentation
  • gRPC patterns: “gRPC: Up and Running” - Kasun Indrasiri
  • Streaming: “gRPC: Up and Running” Ch. 4 - Kasun Indrasiri
  • Interceptors: gRPC documentation on middleware

Difficulty: Advanced. Time estimate: 2 weeks. Prerequisites: Completed Projects 1-6. Basic understanding of RPC. Familiarity with protocol buffers is helpful.


Real World Outcome

You’ll have a complete gRPC service:

// chat.proto
syntax = "proto3";

service ChatService {
  // Unary: send single message
  rpc SendMessage(Message) returns (SendResponse);

  // Server streaming: get message history
  rpc GetHistory(HistoryRequest) returns (stream Message);

  // Client streaming: upload file in chunks
  rpc UploadFile(stream FileChunk) returns (UploadResponse);

  // Bidirectional: real-time chat
  rpc Chat(stream Message) returns (stream Message);
}

Working like this:

# Start server
$ ./chat-server --port 50051
gRPC server listening on :50051

# Client usage (via CLI client you build)
$ ./chat-cli connect --server localhost:50051

> /join general
Joined room: general

> Hello everyone!
[alice] Hello everyone!
[bob] Hey Alice!
[charlie] Welcome!

> /history 10
[bob] (10 min ago) Anyone here?
[charlie] (5 min ago) I'm here
...

> /upload image.png
Uploading... 45%
Uploading... 90%
Uploaded: image.png (2.3 MB)

> /quit
Disconnected.

The Core Question You’re Answering

“How do modern microservices communicate efficiently and type-safely?”

Before you write any code, understand: gRPC is HTTP/2 + Protocol Buffers. It’s faster than REST/JSON (binary encoding), type-safe (schema), supports streaming, and has first-class support for Go.


Concepts You Must Understand First

Stop and research these before coding:

  1. Protocol Buffers
    • What is a .proto file?
    • How do you define messages and services?
    • How does protoc generate Go code?
    • Resource: protobuf.dev
  2. gRPC Communication Patterns
    • Unary: single request, single response
    • Server streaming: single request, stream of responses
    • Client streaming: stream of requests, single response
    • Bidirectional: both sides stream
    • Book Reference: “gRPC: Up and Running” Ch. 3-4 - Kasun Indrasiri
  3. HTTP/2 Basics
    • What is multiplexing?
    • How does flow control work?
    • What are frames and streams?
    • Resource: HTTP/2 RFC or high-level articles

Questions to Guide Your Design

Before implementing, think through these:

  1. Proto Design
    • What messages do you need?
    • What fields should be optional vs required?
    • How do you version your API?
  2. Streaming Logic
    • How do you read from and write to streams concurrently?
    • When do you close a stream?
    • How do you handle stream errors?
  3. Real-time Chat
    • How do you broadcast messages to all connected clients?
    • How do you track who’s connected?
    • How do you handle disconnections?

Thinking Exercise

Bidirectional Stream Flow

Client A                 Server                  Client B
   │                       │                        │
   │──Join("general")────>│                        │
   │<─────────"Joined"─────│                        │
   │                       │<─Join("general")───────│
   │                       │──────"Joined"────────>│
   │                       │                        │
   │──Msg("Hello!")──────>│                        │
   │                       │──Msg(A:"Hello!")────>│
   │<─Msg(A:"Hello!")─────│                        │
   │                       │                        │
   │                       │<─Msg("Hi A!")──────────│
   │<─Msg(B:"Hi A!")──────│                        │
   │                       │──Msg(B:"Hi A!")──────>│
   │                       │                        │

Questions while drawing:

  • How does the server know which clients are in “general”?
  • How do you send to all clients in a room simultaneously?
  • What if Client A disconnects mid-message?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “What is gRPC and how does it differ from REST?”
  2. “Explain the four gRPC communication patterns.”
  3. “What are Protocol Buffers and why use them?”
  4. “How do you handle errors in gRPC?”
  5. “What are interceptors and how do you use them?”
  6. “How do you implement authentication in gRPC?”
  7. “What is the difference between HTTP/1.1 and HTTP/2?”

Hints in Layers

Hint 1: Start with protoc Get the protobuf compiler working first. Generate Go code. Just build hello world unary RPC.

Hint 2: Goroutine per Stream For bidirectional streaming, spawn one goroutine to read from the stream and another to write to it. Use channels to communicate between them.

Hint 3: Client Registry Use a sync.Map or mutex-protected map to track connected clients and their streams. Key by session ID.

Hint 4: Interceptors for Logging Add a server interceptor that logs every RPC call. Great for debugging and understanding the flow.


Books That Will Help

Topic Book Chapter
gRPC fundamentals “gRPC: Up and Running” by Kasun Indrasiri Ch. 1-4
Streaming “gRPC: Up and Running” by Kasun Indrasiri Ch. 4
Protocol Buffers protobuf.dev documentation Language Guide
Microservices “Building Microservices” by Sam Newman Ch. 4

Learning milestones:

  1. Unary RPC works → You understand proto and code generation
  2. Server streaming works → You understand stream writing
  3. Bidirectional chat works → You understand full-duplex streams

Project 10: SQL Query Engine

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: Rust, C++, Java
  • Coolness Level: Level 5: Pure Magic (Super Cool)
  • Business Potential: 5. The “Industry Disruptor”
  • Difficulty: Level 5: Master
  • Knowledge Area: Databases, Parsing, Query Optimization
  • Software or Tool: None (from scratch)
  • Main Book: “Database Internals” by Alex Petrov

What you’ll build: A SQL query engine that parses SQL, builds query plans, optimizes them, and executes against in-memory tables—supporting SELECT, WHERE, JOIN, GROUP BY, and ORDER BY.

Why it teaches Go: This is computer science at its finest. You’ll combine parsing (SQL grammar), data structures (B-trees, hash tables), algorithms (query optimization), and systems programming. You’ll understand how databases work, not just how to use them.

Core challenges you’ll face:

  • SQL parsing → maps to recursive descent parsing, AST building
  • Query planning → maps to tree transformations, relational algebra
  • Join algorithms → maps to hash join, nested loop join
  • Optimization → maps to cost estimation, plan selection

Key Concepts:

  • SQL parsing: “Crafting Interpreters” by Bob Nystrom (parsing techniques)
  • Query execution: “Database Internals” Ch. 7-9 - Alex Petrov
  • Join algorithms: “Designing Data-Intensive Applications” Ch. 3 - Kleppmann
  • B-trees: “Database Internals” Ch. 2-3 - Alex Petrov

Difficulty: Master Time estimate: 2-3 months Prerequisites: All previous projects. Strong algorithms background. Understanding of relational algebra helpful.


Real World Outcome

You’ll have a working SQL engine:

$ ./minisql
MiniSQL v1.0
Type "help" for commands, "exit" to quit.

> CREATE TABLE users (id INT, name TEXT, age INT);
Table 'users' created.

> INSERT INTO users VALUES (1, 'Alice', 30);
> INSERT INTO users VALUES (2, 'Bob', 25);
> INSERT INTO users VALUES (3, 'Charlie', 35);
Inserted 3 rows.

> SELECT name, age FROM users WHERE age > 27;
┌─────────┬─────┐
│ name    │ age │
├─────────┼─────┤
│ Alice   │ 30  │
│ Charlie │ 35  │
└─────────┴─────┘
2 rows returned

> CREATE TABLE orders (id INT, user_id INT, amount FLOAT);
> INSERT INTO orders VALUES (1, 1, 99.99);
> INSERT INTO orders VALUES (2, 1, 49.99);
> INSERT INTO orders VALUES (3, 2, 199.99);

> SELECT u.name, SUM(o.amount) as total
  FROM users u
  JOIN orders o ON u.id = o.user_id
  GROUP BY u.name
  ORDER BY total DESC;
┌───────┬────────┐
│ name  │ total  │
├───────┼────────┤
│ Bob   │ 199.99 │
│ Alice │ 149.98 │
└───────┴────────┘

> EXPLAIN SELECT * FROM users WHERE age > 25;
Query Plan:
└─ Filter: age > 25
   └─ TableScan: users
   Estimated cost: 3 rows scanned

The Core Question You’re Answering

“How does a database turn SQL text into actual results?”

Before you write any code, understand the pipeline: SQL text → Tokens → AST → Logical Plan → Physical Plan → Execution → Results. Each step transforms the query into something closer to execution.


Concepts You Must Understand First

Stop and research these before coding:

  1. SQL Grammar and Parsing
    • What’s the grammar for SELECT statements?
    • How do you handle operator precedence (AND vs OR)?
    • How do you parse JOINs?
    • Book Reference: “Crafting Interpreters” Ch. 6 - Bob Nystrom
  2. Relational Algebra
    • What are the fundamental operations (select, project, join)?
    • How do you represent a query plan as a tree?
    • What is a logical vs physical plan?
    • Resource: Database systems textbooks (Ramakrishnan, Silberschatz)
  3. Join Algorithms
    • What is nested loop join?
    • What is hash join and when is it faster?
    • What is sort-merge join?
    • Book Reference: “Database Internals” Ch. 7 - Alex Petrov

Questions to Guide Your Design

Before implementing, think through these:

  1. Data Representation
    • How do you store tables in memory?
    • How do you represent rows and columns?
    • What types do you support (INT, TEXT, FLOAT)?
  2. Query Plan Representation
    • What nodes are in your plan tree?
    • How do you represent a Filter? A Join? A Project?
    • How do nodes connect?
  3. Execution Model
    • Iterator/Volcano model (each node is an iterator)?
    • Or materialized (each node produces full result)?
    • How do you pass rows between operators?

Thinking Exercise

Trace a Query Through the Pipeline

For: SELECT name FROM users WHERE age > 25

1. Parsing:
   Tokens: [SELECT, name, FROM, users, WHERE, age, >, 25]

   AST:
   SelectStmt
   ├── Columns: [name]
   ├── From: users
   └── Where: BinaryExpr
           ├── Left: age
           ├── Op: >
           └── Right: 25

2. Logical Plan:
   Project(columns=[name])
   └── Filter(age > 25)
       └── Scan(users)

3. Physical Plan:
   Project(columns=[name])
   └── Filter(predicate=age > 25)
       └── SeqScan(table=users)

4. Execution (Iterator model):
   - SeqScan yields rows one at a time
   - Filter checks predicate, passes matching rows
   - Project extracts requested columns
   - Results collected and formatted

Questions while tracing:

  • What if the WHERE clause is complex: age > 25 AND (name = 'Alice' OR city = 'NYC')?
  • How would you add an index for faster WHERE lookups?
  • How does GROUP BY change the pipeline?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “Explain the query execution pipeline.”
  2. “What is the volcano/iterator model?”
  3. “How do you optimize JOIN order?”
  4. “What is a query plan and how do you represent it?”
  5. “When would you use hash join vs nested loop join?”
  6. “How does indexing speed up queries?”
  7. “What is cost-based optimization?”

Hints in Layers

Hint 1: Start with Simple SELECT Just parse and execute SELECT * FROM table. No WHERE, no JOIN. Get the full pipeline working first.

Hint 2: AST Nodes Each SQL construct becomes an AST node. Use Go interfaces to represent different statement types.

Hint 3: Iterator Pattern Each plan node implements: Next() (Row, bool, error). Parent calls child’s Next() repeatedly.

Hint 4: Test with Known Datasets Use simple test tables. Verify results by hand. Build confidence incrementally.


Books That Will Help

Topic Book Chapter
Database architecture “Database Internals” by Alex Petrov Ch. 1-3
Query execution “Database Internals” by Alex Petrov Ch. 7-9
Parsing “Crafting Interpreters” by Bob Nystrom Ch. 4-6
Theory “Designing Data-Intensive Applications” by Kleppmann Ch. 3

Learning milestones:

  1. Simple SELECT works → You understand the full pipeline
  2. WHERE filtering works → You understand predicate evaluation
  3. JOIN works → You understand join algorithms

Project 11: Mini Container Runtime (Docker-lite)

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: Rust, C
  • Coolness Level: Level 5: Pure Magic (Super Cool)
  • Business Potential: 5. The “Industry Disruptor”
  • Difficulty: Level 5: Master
  • Knowledge Area: Linux Internals, Containers, Namespaces
  • Software or Tool: Linux kernel features (namespaces, cgroups)
  • Main Book: “Linux Kernel Development” by Robert Love

What you’ll build: A minimal container runtime that uses Linux namespaces and cgroups to isolate processes—like a simplified Docker without the image handling.

Why it teaches Go: Docker is written in Go for a reason. You’ll learn Linux system calls, process isolation, resource limits, and networking—the foundation of cloud infrastructure. This is deep systems programming.

Core challenges you’ll face:

  • Linux namespaces → maps to process isolation, system calls
  • cgroups for resource limits → maps to file system manipulation, kernel interfaces
  • Pivot root and mount namespaces → maps to filesystem isolation
  • Network namespaces → maps to virtual networking

Key Concepts:

  • Namespaces: “Linux Kernel Development” or container documentation
  • cgroups: Linux documentation and LWN articles
  • syscalls in Go: “The Linux Programming Interface” by Kerrisk
  • Container theory: “Container Security” by Liz Rice

Difficulty: Master Time estimate: 1 month+ Prerequisites: All previous projects. Linux environment required. Understanding of system calls.


Real World Outcome

You’ll have a container runtime:

$ sudo ./minicontainer run --rootfs ./alpine-rootfs --cmd /bin/sh
[minicontainer] Creating new container...
[minicontainer] Setting up namespaces: user, pid, net, uts, mnt
[minicontainer] Setting up cgroups: memory=512M, cpu=50%
[minicontainer] Pivoting root to ./alpine-rootfs
[minicontainer] Container started with PID 1 (inside namespace)

/ # hostname
container-abc123

/ # cat /etc/hostname
container-abc123

/ # ps aux
PID   USER     COMMAND
    1 root     /bin/sh
   10 root     ps aux

/ # echo $$
1    # We ARE PID 1 inside the container!

# From host (another terminal):
$ ps aux | grep alpine
root  54321  0.0  0.0  minicontainer run --rootfs ./alpine-rootfs

$ cat /sys/fs/cgroup/minicontainer-abc123/memory.max
536870912   # 512MB limit

# Network isolation:
/ # ip addr
1: lo: <LOOPBACK,UP>
    inet 127.0.0.1/8
2: eth0: <UP>
    inet 10.0.0.2/24

/ # ping -c 1 10.0.0.1  # Host
PING 10.0.0.1: 64 bytes from 10.0.0.1

The Core Question You’re Answering

“What actually IS a container? How does it isolate processes?”

Before you write any code, understand: containers are NOT virtual machines. They’re just Linux processes with restricted views of the system—namespaces hide things (like other processes), cgroups limit resources (like memory).


Concepts You Must Understand First

Stop and research these before coding:

  1. Linux Namespaces
    • What namespaces exist? (pid, net, mnt, uts, ipc, user, cgroup)
    • What does each namespace isolate?
    • How do you create namespaces? (clone(), unshare())
    • Resource: man namespaces(7), LWN articles
  2. Control Groups (cgroups)
    • What are cgroups v1 vs v2?
    • How do you set memory limits?
    • How do you set CPU limits?
    • Resource: Kernel documentation
  3. Root Filesystem
    • What is pivot_root?
    • Why not just chroot?
    • What mounts does a container need? (/proc, /sys, /dev)
    • Resource: man pivot_root(2)

Questions to Guide Your Design

Before implementing, think through these:

  1. Namespace Creation
    • Which namespaces do you need for basic isolation?
    • How do you use syscall.CLONE_NEWPID, etc.?
    • What order do you set up namespaces?
  2. Filesystem Setup
    • Where does the rootfs come from? (Alpine mini rootfs)
    • What bind mounts are needed?
    • How do you unmount the host filesystem?
  3. Networking
    • How do you create a virtual ethernet pair?
    • How do you route traffic to/from container?
    • What about DNS?

Thinking Exercise

Container Creation Sequence

Host Process                    Container Process
     │
     │ clone() with CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWUTS | ...
     │────────────────────────────────────>│
     │                                      │
     │                                      │ (now PID 1 in new namespace)
     │                                      │
     │                                      │ Set hostname
     │                                      │ Mount /proc (type=proc)
     │                                      │ Mount /sys (bind)
     │                                      │
     │                                      │ pivot_root(new_root, old_root)
     │                                      │ unmount(old_root)
     │                                      │
     │                                      │ Setup cgroups (write to /sys/fs/cgroup)
     │                                      │
     │                                      │ exec("/bin/sh")
     │                                      │
     │ Wait for child                       │ Running isolated!

Questions while tracing:

  • What if the user inside container tries to remount /proc?
  • How does the host see the container’s processes?
  • What happens when PID 1 in the container exits?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “What is the difference between a container and a VM?”
  2. “Explain Linux namespaces.”
  3. “What is cgroups and what can you limit with it?”
  4. “How does Docker networking work?”
  5. “What is an OCI runtime?”
  6. “How does container escaping work and how do you prevent it?”
  7. “What is the difference between privileged and unprivileged containers?”

Hints in Layers

Hint 1: Start with PID Namespace Only Just create a new PID namespace and run /bin/sh. Verify with ps that you’re PID 1.

Hint 2: Add Mount Namespace Add CLONE_NEWNS and mount a tmpfs. Your mounts no longer leak to the host, though you still see the host's filesystem until you pivot_root.

Hint 3: Get Alpine Root FS Download Alpine Linux root filesystem tarball. Extract it. This becomes your container’s root.

Hint 4: Read LWN Articles LWN has excellent articles on each namespace. Read them carefully.


Books That Will Help

Topic Book Chapter
Linux internals “The Linux Programming Interface” by Kerrisk Ch. 24-30
Namespaces LWN.net articles Search “namespaces”
Containers “Container Security” by Liz Rice Full book
Systems programming “Linux Kernel Development” by Robert Love Ch. 1-5

Learning milestones:

  1. PID namespace works → You understand namespace creation
  2. Mount isolation works → You understand pivot_root
  3. cgroups limit memory → You understand resource control

Project 12: Distributed Key-Value Store with Raft

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: Rust, Java
  • Coolness Level: Level 5: Pure Magic (Super Cool)
  • Business Potential: 5. The “Industry Disruptor”
  • Difficulty: Level 5: Master
  • Knowledge Area: Distributed Systems, Consensus, Replication
  • Software or Tool: None (from scratch) or etcd/raft library
  • Main Book: “Designing Data-Intensive Applications” by Martin Kleppmann

What you’ll build: A distributed key-value store that uses the Raft consensus algorithm to replicate data across multiple nodes, tolerating failures and ensuring consistency.

Why it teaches Go: This is the holy grail of distributed systems. You’ll implement leader election, log replication, and fault tolerance—the algorithms that power etcd, CockroachDB, and Consul. This is what separates senior engineers from staff engineers.

Core challenges you’ll face:

  • Leader election → maps to state machines, timeouts, voting
  • Log replication → maps to RPC, consistency guarantees
  • Handling failures → maps to network partitions, node crashes
  • Client interaction → maps to linearizability, redirects

Key Concepts:

  • Raft consensus: The Raft paper (raft.github.io)
  • Distributed systems: “Designing Data-Intensive Applications” Ch. 8-9 - Kleppmann
  • Replication: “Designing Data-Intensive Applications” Ch. 5 - Kleppmann
  • Consistency models: “Designing Data-Intensive Applications” Ch. 9 - Kleppmann

Difficulty: Master Time estimate: 2-3 months Prerequisites: All previous projects. Deep networking knowledge. Read the Raft paper multiple times.


Real World Outcome

You’ll have a distributed database:

# Start a 3-node cluster:
$ ./raft-kv --id 1 --cluster localhost:8001,localhost:8002,localhost:8003
Node 1 starting...
Cluster: [localhost:8001, localhost:8002, localhost:8003]
[FOLLOWER] Waiting for leader...
[CANDIDATE] Starting election for term 1
[LEADER] Elected! Term: 1

$ ./raft-kv --id 2 --cluster localhost:8001,localhost:8002,localhost:8003
Node 2 starting...
[FOLLOWER] Leader is node 1

$ ./raft-kv --id 3 --cluster localhost:8001,localhost:8002,localhost:8003
Node 3 starting...
[FOLLOWER] Leader is node 1

# Client operations:
$ ./raft-cli set name "Alice"
OK (committed to 3/3 nodes)

$ ./raft-cli get name
Alice

# Kill the leader (node 1):
$ kill -9 <pid-node-1>

# Other nodes elect new leader:
Node 2: [CANDIDATE] Starting election for term 2
Node 2: [LEADER] Elected! Term: 2

# Client still works:
$ ./raft-cli get name
Alice (from node 2)

# Restart node 1:
Node 1: [FOLLOWER] Catching up... (100 log entries)
Node 1: [FOLLOWER] Caught up! Leader is node 2

# View cluster status:
$ ./raft-cli status
Cluster Status:
  Node 1: FOLLOWER  (log: 156 entries)
  Node 2: LEADER    (log: 156 entries) *
  Node 3: FOLLOWER  (log: 156 entries)
  Commit index: 156
  Last applied: 156

The Core Question You’re Answering

“How do distributed systems agree on a single value even when nodes fail?”

Before you write any code, read the Raft paper (raft.github.io) at least twice. Understand: there’s ONE leader, leaders have terms, logs must match, majority must agree. This is consensus.


Concepts You Must Understand First

Stop and research these before coding:

  1. Raft Basics
    • What are the three roles: leader, follower, candidate?
    • What is a term?
    • What triggers an election?
    • Resource: The Raft Paper (raft.github.io)
  2. Log Replication
    • What is the replicated log?
    • When is an entry “committed”?
    • How do logs stay consistent?
    • Resource: Raft Paper Section 5
  3. Failure Modes
    • What happens when leader crashes?
    • What happens during network partition?
    • How does a stale leader get detected?
    • Resource: Raft Paper Section 5-8

Questions to Guide Your Design

Before implementing, think through these:

  1. State Machine
    • What state does each node maintain?
    • How do you persist state (term, votedFor, log)?
    • What triggers state transitions?
  2. RPC Design
    • What RPCs do you need? (RequestVote, AppendEntries)
    • How do you handle RPC failures?
    • What timeouts do you use?
  3. Log Structure
    • How do you store the log?
    • What’s in each log entry?
    • How do you find where logs diverge?

Thinking Exercise

Leader Election Scenario

Initial state: 3 nodes, all followers, no leader

Time 0ms:    Node 1, 2, 3 all waiting
             Election timeouts: 150ms, 200ms, 180ms

Time 150ms:  Node 1 timeout fires!
             Node 1: term=1, state=CANDIDATE
             Node 1 sends RequestVote to nodes 2, 3

Time 155ms:  Node 2 receives RequestVote(term=1)
             Node 2: "Term 1 > my term 0, granting vote"
             Node 2 sends VoteGranted to Node 1

Time 160ms:  Node 3 receives RequestVote(term=1)
             Node 3: "Term 1 > my term 0, granting vote"
             Node 3 sends VoteGranted to Node 1

Time 165ms:  Node 1 receives 2 votes (+ self = 3/3 = majority)
             Node 1: state=LEADER
             Node 1 sends empty AppendEntries (heartbeat)

Time 170ms:  Nodes 2, 3 receive heartbeat
             Reset election timers
             Leader established!

Questions while tracing:

  • What if Node 1 and Node 2 both timeout at 150ms?
  • What if Node 1’s RequestVote to Node 3 is lost?
  • What if there’s a network partition during election?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “Explain the Raft consensus algorithm.”
  2. “What is split brain and how does Raft prevent it?”
  3. “How does leader election work?”
  4. “What happens when the leader fails?”
  5. “What is linearizability and does Raft provide it?”
  6. “How does Raft compare to Paxos?”
  7. “What are the tradeoffs of consensus?”

Hints in Layers

Hint 1: Implement State Machine First Just implement the state transitions: follower → candidate → leader. Test with print statements.

Hint 2: Leader Election Only Get election working before log replication. Nodes should elect a stable leader.

Hint 3: Use the TLA+ Spec The Raft paper has a TLA+ specification. Use it to verify your understanding.

Hint 4: Test Network Partitions Use tc (traffic control) or iptables to simulate network partitions. Your system should survive.


Books That Will Help

Topic Book Chapter
Consensus “Designing Data-Intensive Applications” by Kleppmann Ch. 8-9
Raft The Raft Paper raft.github.io
Replication “Designing Data-Intensive Applications” by Kleppmann Ch. 5
Testing “Designing Data-Intensive Applications” by Kleppmann Ch. 8

Learning milestones:

  1. Leader election works → You understand the state machine
  2. Log replication works → You understand append entries
  3. Survives leader crash → You understand fault tolerance

Project 13: Git from Scratch

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: Rust, C, Python
  • Coolness Level: Level 5: Pure Magic (Super Cool)
  • Business Potential: 1. The “Resume Gold”
  • Difficulty: Level 4: Expert
  • Knowledge Area: Version Control, Content-Addressable Storage, Graphs
  • Software or Tool: None (from scratch)
  • Main Book: “Pro Git” by Scott Chacon (free online)

What you’ll build: A Git implementation that handles the core commands: init, add, commit, log, branch, checkout, merge—using the same object format as real Git, so you can interoperate with the real git CLI.

Why it teaches Go: Git is elegant computer science. You’ll learn content-addressable storage (SHA-1 hashing), tree data structures, directed acyclic graphs, and file system operations. Understanding Git deeply makes you a better developer.

Core challenges you’ll face:

  • Object storage (blobs, trees, commits) → maps to hashing, file I/O, compression
  • Index (staging area) → maps to binary format parsing, file locking
  • Branch management → maps to refs, symbolic links, DAG traversal
  • Merge algorithms → maps to three-way merge, conflict detection

Key Concepts:

  • Git internals: “Pro Git” Ch. 10 - Scott Chacon (free online)
  • SHA-1 hashing: crypto/sha1 package
  • Zlib compression: compress/zlib package
  • DAG traversal: Algorithm fundamentals

Difficulty: Expert Time estimate: 1 month Prerequisites: Completed Projects 1-6. Strong understanding of file systems. Familiarity with how Git works conceptually.


Real World Outcome

You’ll have a Git-compatible tool:

$ ./minigit init
Initialized empty Git repository in .git/

$ ./minigit status
On branch master
No commits yet
nothing to commit

$ echo "Hello, World!" > hello.txt
$ ./minigit add hello.txt
$ ./minigit status
On branch master
No commits yet
Changes to be committed:
  new file: hello.txt

$ ./minigit commit -m "Initial commit"
[master (root-commit) a3f4b21] Initial commit
 1 file changed, 1 insertion(+)
 create mode 100644 hello.txt

# Real git can read our repository!
$ git log --oneline
a3f4b21 Initial commit

$ git cat-file -p a3f4b21
tree 8ab686ea...
author You <you@example.com> 1704891234 -0500
committer You <you@example.com> 1704891234 -0500

Initial commit

$ ./minigit branch feature
$ ./minigit checkout feature
Switched to branch 'feature'

$ echo "New feature" >> hello.txt
$ ./minigit add hello.txt
$ ./minigit commit -m "Add feature"
[feature 7c8d9e0] Add feature

$ ./minigit checkout master
$ ./minigit merge feature
Updating a3f4b21..7c8d9e0
Fast-forward
 hello.txt | 1 +
 1 file changed, 1 insertion(+)

The Core Question You’re Answering

“How does Git store history efficiently and enable branching?”

Before you write any code, understand: Git is a content-addressable filesystem. Every object (blob, tree, commit) is stored by its SHA-1 hash. Commits form a DAG (directed acyclic graph). Branches are just pointers to commits.


Concepts You Must Understand First

Stop and research these before coding:

  1. Git Object Model
    • What are blobs, trees, and commits?
    • How is each object stored? (header + compressed content)
    • How do SHA-1 hashes provide integrity?
    • Resource: “Pro Git” Ch. 10 - Scott Chacon
  2. The Index (Staging Area)
    • What is the index format?
    • How does git add modify the index?
    • Why is the index binary, not text?
    • Resource: Git documentation on index format
  3. Refs and HEAD
    • What is a ref? How are branches stored?
    • What is HEAD and what does it point to?
    • How do you implement checkout?
    • Resource: “Pro Git” Ch. 10 - Scott Chacon

Questions to Guide Your Design

Before implementing, think through these:

  1. Object Storage
    • Where do objects live? (.git/objects/)
    • How do you compute the SHA-1 of an object?
    • How do you compress with zlib?
  2. Tree Construction
    • How do you represent directory structure as a tree?
    • How do you handle nested directories?
    • What permissions do you store?
  3. Commit Graph
    • How do you traverse commit history?
    • How do you find the merge base of two branches?
    • How do you detect fast-forward vs true merge?

Thinking Exercise

Object Creation Flow

For git add file.txt && git commit -m "msg":

1. git add file.txt
   - Read file content: "hello\n"
   - Create blob object:
     header = "blob 6\0"
     content = "hello\n"
     full = header + content
     hash = SHA1(full) = "ce0136..."
   - Compress and store: .git/objects/ce/0136...
   - Update index: add entry for file.txt → ce0136...

2. git commit -m "msg"
   - Read index
   - Create tree object from index:
     "100644 file.txt\0<hash-bytes>"
     hash = SHA1(tree) = "4b825...ef"
   - Store tree: .git/objects/4b/825...ef

   - Create commit object:
     "tree 4b825...ef\n"
     "parent <previous-commit>\n" (if exists)
     "author ...\n"
     "committer ...\n"
     "\n"
     "msg\n"
     hash = SHA1(commit) = "a3f4b...21"
   - Store commit: .git/objects/a3/f4b...21

   - Update ref: .git/refs/heads/master = "a3f4b...21"

Questions while tracing:

  • What if two files have identical content?
  • How does Git know what changed between commits?
  • Why is the tree stored separately from the commit?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “How does Git store files internally?”
  2. “What is content-addressable storage?”
  3. “How do branches work in Git?”
  4. “What is the three-way merge algorithm?”
  5. “How does Git handle conflicts?”
  6. “What is a rebase and how does it differ from merge?”
  7. “How does Git achieve fast performance?”

Hints in Layers

Hint 1: Start with blob Objects Just implement reading and writing blob objects. Verify with git cat-file -p <hash>.

Hint 2: Object Format Format: <type> <size>\0<content>. Hash the whole thing. Compress with zlib. Store in .git/objects/XX/YYYY…

Hint 3: Use Real Git to Verify After each step, verify with real git. git fsck checks repository integrity.

Hint 4: Index is Tricky The index is binary. Start by just recreating it from scratch on each add. Optimize later.


Books That Will Help

Topic Book Chapter
Git internals “Pro Git” by Scott Chacon Ch. 10 (free online)
Algorithms “Algorithms” by Sedgewick & Wayne Ch. 4 (graphs)
File I/O “The Go Programming Language” by Donovan & Kernighan Ch. 7
Compression compress/zlib documentation Go stdlib

Learning milestones:

  1. Blob storage works → You understand content-addressable storage
  2. Commits work → You understand the object graph
  3. Branches work → You understand refs and DAGs

Project 14: Performance Profiler

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: Rust, C++
  • Coolness Level: Level 4: Hardcore Tech Flex
  • Business Potential: 3. The “Service & Support” Model
  • Difficulty: Level 4: Expert
  • Knowledge Area: Profiling, Runtime Internals, Visualization
  • Software or Tool: go tool pprof (as reference)
  • Main Book: “High Performance Go” (online resources)

What you’ll build: A profiling tool that instruments Go programs, collects CPU and memory samples, and generates flame graphs—helping you understand where programs spend time and allocate memory.

Why it teaches Go: To profile Go, you must understand Go’s runtime: goroutines, scheduler, garbage collector, and memory allocator. You’ll learn how to use the runtime package, interpret profiling data, and generate useful visualizations.

Core challenges you’ll face:

  • Collecting profiling data → maps to runtime package, signal handling
  • Parsing pprof format → maps to protocol buffers, binary data
  • Stack trace analysis → maps to call graphs, aggregation
  • Flame graph generation → maps to SVG generation, visualization

Key Concepts:

  • Go runtime: runtime package documentation
  • pprof format: Protocol buffer definition
  • Flame graphs: Brendan Gregg’s work
  • Performance analysis: Dave Cheney’s blog posts

Difficulty: Expert Time estimate: 2-3 weeks Prerequisites: Completed Projects 1-9. Deep understanding of Go runtime. Familiarity with profiling concepts.


Real World Outcome

You’ll have a profiling toolkit:

$ ./profiler record --cpu --duration 30s ./myapp
Recording CPU profile for 30s...
Profile saved to profile.pb.gz

$ ./profiler analyze profile.pb.gz
Top 10 by CPU time:
  45.2%  encoding/json.Marshal
  22.1%  net/http.(*conn).serve
  12.3%  runtime.mallocgc
   8.7%  myapp.processRequest
   4.2%  database/sql.(*DB).Query
   ...

Hot paths:
  main.handleRequest
  └── myapp.processRequest (8.7%)
      └── encoding/json.Marshal (45.2%)
          └── encoding/json.(*encodeState).marshal
              └── encoding/json.(*encodeState).reflectValue

$ ./profiler flamegraph profile.pb.gz > flame.svg
Flame graph written to flame.svg

# Open flame.svg in browser - interactive!

$ ./profiler record --mem --allocs ./myapp
Recording memory profile...

$ ./profiler analyze --top 5 mem.pb.gz
Top 5 allocations:
  1.2 GB  []byte allocations in encoding/json
  800 MB  string allocations in net/http
  256 MB  map[string]interface{} in myapp
  128 MB  *sql.Rows in database/sql
   64 MB  goroutine stacks

$ ./profiler diff profile1.pb.gz profile2.pb.gz
Diff (profile2 vs profile1):
  +15.2%  myapp.newFeature
   -8.3%  encoding/json.Marshal (optimization worked!)
   +2.1%  runtime.mallocgc

The Core Question You’re Answering

“Where is my program spending its time and memory?”

Before you write any code, understand: profiling works by sampling. The CPU profiler periodically interrupts the program and records what’s executing. Memory profiler records allocations. Stack traces are aggregated to find hot spots.


Concepts You Must Understand First

Stop and research these before coding:

  1. Go Runtime Profiling
    • What is runtime/pprof?
    • How does CPU profiling work? (SIGPROF)
    • How does memory profiling work? (runtime.MemProfile)
    • Resource: Go runtime/pprof documentation
  2. pprof Format
    • What’s in a pprof file?
    • How are samples and stacks represented?
    • What is a Profile protocol buffer?
    • Resource: google/pprof on GitHub
  3. Flame Graphs
    • What does a flame graph show?
    • How do you generate SVG from stacks?
    • What is stack collapsing?
    • Resource: Brendan Gregg’s Flame Graph page

Questions to Guide Your Design

Before implementing, think through these:

  1. Data Collection
    • How do you start/stop CPU profiling?
    • What format does runtime/pprof output?
    • How do you correlate samples with source code?
  2. Analysis
    • How do you aggregate stack traces?
    • How do you calculate percentages?
    • How do you identify hot functions?
  3. Visualization
    • How do you generate SVG?
    • How do you make it interactive (clickable)?
    • What colors convey meaning?

Thinking Exercise

CPU Profile Sample Flow

1. Start profiling:
   pprof.StartCPUProfile(file)

2. Runtime sets up signal handler for SIGPROF
   (fires every ~10ms by default)

3. Program runs normally
   main() → handleRequest() → processData() → json.Marshal()

4. SIGPROF fires (10ms elapsed)
   Runtime captures current goroutine's stack:
   [
     runtime.sigprof,
     encoding/json.(*encodeState).reflectValue,
     encoding/json.Marshal,
     myapp.processData,
     myapp.handleRequest,
     main.main
   ]
   Stack written to profile

5. After 30 seconds, 3000 samples collected
   Profile shows:
   - json.Marshal appears in 1350 samples = 45%
   - processData appears in 261 samples = 8.7%

Questions while tracing:

  • What if a function is fast but called millions of times?
  • How do you profile goroutines that are blocked?
  • What about CPU time in syscalls?

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “How does CPU profiling work?”
  2. “What is a flame graph and how do you read it?”
  3. “How would you profile memory allocations?”
  4. “What’s the overhead of profiling?”
  5. “How would you profile a production system?”
  6. “What is escape analysis and how does it affect allocations?”
  7. “How do you find goroutine leaks?”

Hints in Layers

Hint 1: Start with pprof Parsing. Don't collect data yet; just parse an existing pprof file.

Hint 2: Use go tool pprof as Reference. Generate profiles with go test -cpuprofile=cpu.prof, then compare your tool's output with go tool pprof's.

Hint 3: SVG Generation. Flame graphs are just stacked rectangles: each box is a function, its width is the fraction of samples it appears in, and color typically encodes a category (for example, application code vs. runtime).

Hint 4: Diff is Powerful. Comparing two profiles, before and after a change, is where the real insights come from. Implement diff early.


Books That Will Help

| Topic | Book | Chapter |
|---|---|---|
| Go runtime | Go documentation | runtime package |
| Profiling | Dave Cheney's blog | "Profiling Go Programs" |
| Flame graphs | Brendan Gregg | brendangregg.com/flamegraphs.html |
| Performance | "100 Go Mistakes" by Teiva Harsanyi | Ch. 10-12 |

Learning milestones:

  1. Parse pprof files → You understand the format
  2. Generate text report → You understand aggregation
  3. Generate flame graph → You understand visualization

Project 15: Final Capstone - Cloud-Native Application Platform

  • File: LEARN_GO_DEEP_DIVE.md
  • Main Programming Language: Go
  • Alternative Programming Languages: None (Go is perfect for this)
  • Coolness Level: Level 5: Pure Magic (Super Cool)
  • Business Potential: 5. The “Industry Disruptor”
  • Difficulty: Level 5: Master
  • Knowledge Area: Everything - This is the Final Boss
  • Software or Tool: Everything you’ve built
  • Main Book: All of them

What you’ll build: A mini Platform-as-a-Service that deploys containerized applications, load balances traffic, handles service discovery, provides monitoring, and scales based on load—combining everything you’ve learned.

Why it teaches Go: This is the synthesis project. You’ll integrate: networking (HTTP router, gRPC), concurrency (goroutines everywhere), containers (from your runtime), distributed systems (Raft for state), and monitoring (your profiler). This is what Go was made for.

Core challenges you’ll face:

  • Container orchestration → maps to your container runtime + scheduling
  • Service mesh → maps to your HTTP router + load balancing
  • Distributed state → maps to your Raft KV store
  • Observability → maps to your log aggregator + profiler

Key Concepts:

  • Everything from previous projects
  • Kubernetes architecture (for inspiration)
  • Service mesh patterns (Envoy, Istio)
  • GitOps and deployment patterns

Difficulty: Master Time estimate: 2-3 months Prerequisites: ALL previous projects completed. This is the final exam.


Real World Outcome

You’ll have a mini Kubernetes:

$ ./minicloud init
MiniCloud initialized.
  API server: localhost:8443
  Scheduler: running
  Controller: running
  etcd (raft): localhost:2379

$ ./minicloud deploy myapp --image ./myapp-rootfs --replicas 3
Deploying myapp...
  Creating container: myapp-1
  Creating container: myapp-2
  Creating container: myapp-3
  Configuring load balancer
  Registering in service discovery

Deployment complete:
  myapp.local → [10.0.0.2:8080, 10.0.0.3:8080, 10.0.0.4:8080]

$ curl http://myapp.local/health
{"status": "healthy", "instance": "myapp-2"}

$ ./minicloud scale myapp --replicas 5
Scaling myapp from 3 to 5 replicas...
  Creating container: myapp-4
  Creating container: myapp-5
  Updating load balancer
Scaled to 5 replicas.

$ ./minicloud logs myapp --follow
[myapp-1] 2025-01-10 14:30:00 GET /health 200 1ms
[myapp-2] 2025-01-10 14:30:01 GET /api/users 200 45ms
[myapp-3] 2025-01-10 14:30:02 POST /api/orders 201 120ms
...

$ ./minicloud metrics
Service: myapp
  Replicas: 5/5 healthy
  Requests/sec: 1,234
  Latency p50: 12ms
  Latency p99: 89ms
  CPU: 23% avg
  Memory: 256MB avg

$ ./minicloud status
Cluster Status:
  Nodes: 3 (all healthy)
  Services: 5
  Containers: 23
  Load balancer: active
  Storage: 12.4 GB used

$ ./minicloud node failure node-2
Simulating node-2 failure...
  Containers on node-2: myapp-2, myapp-4
  Rescheduling to healthy nodes...
  myapp-2 → node-1
  myapp-4 → node-3
  Load balancer updated
  Service restored in 2.3s

The Core Question You’re Answering

“How do cloud platforms like Kubernetes actually work?”

This is the culmination of everything. You’ll understand:

  • How containers are scheduled across nodes
  • How services discover each other
  • How traffic is load balanced
  • How the system heals from failures
  • How everything is observed and monitored

Architecture Overview

┌─────────────────────────────────────────────────────────────────┐
│                        MINICLOUD                                 │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌────────────────────────────────────────────────────────────┐ │
│  │                    Control Plane                           │ │
│  │                                                            │ │
│  │  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌───────────┐ │ │
│  │  │   API    │  │Scheduler │  │Controller│  │  Raft KV  │ │ │
│  │  │  Server  │  │          │  │          │  │  (state)  │ │ │
│  │  │ (gRPC)   │  │          │  │          │  │           │ │ │
│  │  └────┬─────┘  └────┬─────┘  └────┬─────┘  └─────┬─────┘ │ │
│  │       │             │             │              │        │ │
│  │       └─────────────┴─────────────┴──────────────┘        │ │
│  │                          │                                │ │
│  └──────────────────────────┼────────────────────────────────┘ │
│                             │                                   │
│  ┌──────────────────────────┼────────────────────────────────┐ │
│  │                     Data Plane                             │ │
│  │                          │                                 │ │
│  │  ┌──────────┐  ┌─────────▼─────────┐  ┌──────────────────┐│ │
│  │  │   Load   │  │     Service       │  │   Log           ││ │
│  │  │ Balancer │◄─┤    Discovery      │  │   Aggregator    ││ │
│  │  │(router)  │  │    (DNS/Registry) │  │                 ││ │
│  │  └────┬─────┘  └───────────────────┘  └─────────────────┘│ │
│  │       │                                                   │ │
│  │       ▼                                                   │ │
│  │  ┌─────────────────────────────────────────────────────┐ │ │
│  │  │                    Nodes                             │ │ │
│  │  │                                                      │ │ │
│  │  │  ┌─────────┐    ┌─────────┐    ┌─────────┐         │ │ │
│  │  │  │ Node 1  │    │ Node 2  │    │ Node 3  │         │ │ │
│  │  │  │┌───────┐│    │┌───────┐│    │┌───────┐│         │ │ │
│  │  │  ││ app-1 ││    ││ app-2 ││    ││ app-3 ││         │ │ │
│  │  │  │└───────┘│    │└───────┘│    │└───────┘│         │ │ │
│  │  │  │┌───────┐│    │┌───────┐│    │         │         │ │ │
│  │  │  ││ db-1  ││    ││ cache ││    │         │         │ │ │
│  │  │  │└───────┘│    │└───────┘│    │         │         │ │ │
│  │  │  └─────────┘    └─────────┘    └─────────┘         │ │ │
│  │  │                                                      │ │ │
│  │  └─────────────────────────────────────────────────────┘ │ │
│  │                                                           │ │
│  └───────────────────────────────────────────────────────────┘ │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘

Components You’ll Integrate

  1. API Server → Your gRPC service (Project 9)
  2. State Store → Your Raft KV (Project 12)
  3. Container Runtime → Your mini-Docker (Project 11)
  4. Load Balancer → Your HTTP router (Project 6)
  5. Log Collection → Your log aggregator (Project 8)
  6. Monitoring → Your profiler (Project 14)
  7. Service Discovery → New component (DNS-like)
  8. Scheduler → New component (bin-packing algorithm)

The Interview Questions They’ll Ask

Prepare to answer these:

  1. “Explain how container orchestration works.”
  2. “How does Kubernetes scheduling work?”
  3. “What is service discovery and how is it implemented?”
  4. “How do you handle node failures?”
  5. “What is the difference between stateless and stateful workloads?”
  6. “How does rolling deployment work?”
  7. “What is the CAP theorem and how does it apply here?”

Books That Will Help

| Topic | Book | Chapter |
|---|---|---|
| Kubernetes | "Kubernetes in Action" by Marko Lukša | All |
| Distributed systems | "Designing Data-Intensive Applications" by Martin Kleppmann | All |
| Container orchestration | "Kubernetes: Up and Running" | All |
| Service mesh | Envoy documentation | All |

Learning milestones:

  1. Containers deploy and run → Control plane works
  2. Services discover each other → Data plane works
  3. Survives node failure → You’ve built a cloud

Project Comparison Table

| # | Project | Difficulty | Time | Depth | Fun | Key Skills |
|---|---|---|---|---|---|---|
| 1 | CLI Task Manager | Beginner | Weekend | ⭐⭐ | ⭐⭐⭐ | File I/O, JSON, CLI |
| 2 | JSON Parser | Intermediate | 1-2 weeks | ⭐⭐⭐⭐ | ⭐⭐⭐ | Parsing, Recursion |
| 3 | HTTP Server | Advanced | 2-3 weeks | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | TCP, Protocols |
| 4 | Web Scraper | Intermediate | 1-2 weeks | ⭐⭐⭐ | ⭐⭐⭐⭐ | Concurrency |
| 5 | Rate Limiter | Advanced | 1-2 weeks | ⭐⭐⭐ | ⭐⭐⭐ | Time, Algorithms |
| 6 | HTTP Router | Advanced | 2-3 weeks | ⭐⭐⭐⭐ | ⭐⭐⭐ | Tries, Middleware |
| 7 | Mini Redis | Expert | 1 month | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Networking, Persistence |
| 8 | Log Aggregator | Advanced | 2 weeks | ⭐⭐⭐ | ⭐⭐⭐ | File Watching, Streaming |
| 9 | gRPC Service | Advanced | 2 weeks | ⭐⭐⭐ | ⭐⭐⭐⭐ | Protobuf, Streaming |
| 10 | SQL Engine | Master | 2-3 months | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Databases, Parsing |
| 11 | Container Runtime | Master | 1 month | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Linux, Namespaces |
| 12 | Raft KV Store | Master | 2-3 months | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Consensus, Distributed |
| 13 | Git Clone | Expert | 1 month | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Hashing, DAGs |
| 14 | Profiler | Expert | 2-3 weeks | ⭐⭐⭐⭐ | ⭐⭐⭐ | Runtime, Visualization |
| 15 | Cloud Platform | Master | 2-3 months | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Everything |

Recommendation

Where to Start

For absolute beginners (new to Go): Start with Project 1 (CLI Task Manager). It covers all fundamentals without overwhelming you.

For developers with some Go experience: Start with Project 3 (HTTP Server) or Project 4 (Web Scraper). These are the signature Go projects.

For experienced developers wanting depth: Jump to Project 7 (Mini Redis) or Project 12 (Raft KV Store). These are interview gold.

For systems programmers: Go straight to Project 11 (Container Runtime). It’s the most rewarding systems project.

Learning Path

Week 1-2:    Project 1 (CLI basics)
Week 3-4:    Project 2 (Parsing) + Project 3 (Networking)
Week 5-6:    Project 4 (Concurrency mastery)
Week 7-8:    Project 5 + 6 (Libraries and APIs)
Week 9-12:   Project 7 (Redis) or Project 9 (gRPC)
Month 3-4:   Choose ONE deep project: 10, 11, or 12
Month 5-6:   Project 15 (Capstone)

Summary

This learning path covers Go from basics to mastery through 15 hands-on projects. Here’s the complete list:

| # | Project Name | Main Language | Difficulty | Time Estimate |
|---|---|---|---|---|
| 1 | CLI Task Manager | Go | Beginner | Weekend |
| 2 | Custom JSON Parser | Go | Intermediate | 1-2 weeks |
| 3 | HTTP Server from Scratch | Go | Advanced | 2-3 weeks |
| 4 | Concurrent Web Scraper | Go | Intermediate | 1-2 weeks |
| 5 | Rate Limiter Library | Go | Advanced | 1-2 weeks |
| 6 | Custom HTTP Router | Go | Advanced | 2-3 weeks |
| 7 | Mini Redis Clone | Go | Expert | 1 month |
| 8 | Log Aggregator | Go | Advanced | 2 weeks |
| 9 | gRPC Service with Streaming | Go | Advanced | 2 weeks |
| 10 | SQL Query Engine | Go | Master | 2-3 months |
| 11 | Mini Container Runtime | Go | Master | 1 month |
| 12 | Distributed KV Store (Raft) | Go | Master | 2-3 months |
| 13 | Git from Scratch | Go | Expert | 1 month |
| 14 | Performance Profiler | Go | Expert | 2-3 weeks |
| 15 | Cloud Platform (Capstone) | Go | Master | 2-3 months |

For beginners: Start with projects #1, #2, #3
For intermediate: Focus on projects #4, #5, #6, #7
For advanced: Tackle projects #10, #11, #12

Expected Outcomes

After completing these projects, you will:

  • Write idiomatic Go code that follows best practices
  • Build concurrent systems using goroutines and channels
  • Implement complex protocols and parsers from scratch
  • Design clean APIs and reusable libraries
  • Understand how databases, containers, and distributed systems work
  • Be prepared for senior/staff-level Go interviews
  • Have a portfolio of impressive, practical projects

You’ll have built 15 working projects that demonstrate deep understanding of Go and systems programming from first principles.


Sources and Resources

Books (Primary)

  • “Learning Go” by Jon Bodner — Modern, comprehensive Go guide
  • “The Go Programming Language” by Donovan & Kernighan — The classic
  • “Concurrency in Go” by Katherine Cox-Buday — Deep dive into goroutines
  • “100 Go Mistakes” by Teiva Harsanyi — Common pitfalls and solutions
  • “Designing Data-Intensive Applications” by Martin Kleppmann — Distributed systems

Online Resources

Frameworks and Libraries

  • Gin — High-performance HTTP framework
  • Echo — Minimalist web framework
  • GORM — ORM library
  • Cobra — CLI framework