LEARN WEB PERFORMANCE OPTIMIZATION
Learn Web Performance: From Zero to Performance Master
Goal: Deeply understand web performance optimization—from how browsers render pixels to network protocols, JavaScript execution, and the metrics that matter. You’ll learn to measure, diagnose, and fix performance bottlenecks at every layer of the stack, understanding not just WHAT makes websites fast, but WHY certain techniques work and how to apply them systematically.
Why Web Performance Matters
In 2006, Amazon found that every 100ms of latency cost them 1% in sales. Google discovered that a 500ms delay in search results caused a 20% drop in traffic. These weren’t just numbers—they revealed a fundamental truth: speed is a feature.
Today, with Core Web Vitals directly affecting search rankings and user experience determining business outcomes, performance isn’t optional—it’s existential. Yet most developers treat performance as an afterthought, sprinkling optimizations like magic dust rather than understanding the underlying systems.
The Real Cost of Slow Websites
┌─────────────────────────────────────────────────────────────────┐
│ THE PERFORMANCE WATERFALL │
├─────────────────────────────────────────────────────────────────┤
│ │
│ User clicks link │
│ │ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ DNS Lookup │ 50-200ms (first visit) │
│ └────────┬────────┘ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ TCP Handshake │ 1 RTT (~50-150ms) │
│ └────────┬────────┘ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ TLS Handshake │ 1-2 RTTs (~100-300ms) │
│ └────────┬────────┘ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ HTTP Request │ 1 RTT + server time │
│ └────────┬────────┘ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ HTML Download │ Size / bandwidth │
│ └────────┬────────┘ │
│ ▼ │
│ ┌─────────────────────────────────────────────┐ │
│ │ CRITICAL RENDERING PATH │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────────┐ │ │
│ │ │ Parse │→ │ Build │→ │ Layout │ │ │
│ │ │ HTML │ │ DOM │ │ (Reflow) │ │ │
│ │ └─────────┘ └─────────┘ └──────┬──────┘ │ │
│ │ ↓ ↓ │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────────┐ │ │
│ │ │ Parse │→ │ Build │→ │ Paint │ │ │
│ │ │ CSS │ │ CSSOM │ │ (Pixels) │ │ │
│ │ └─────────┘ └─────────┘ └──────┬──────┘ │ │
│ │ ↓ │ │
│ │ ┌─────────────┐ │ │
│ │ │ Composite │ │ │
│ │ │ (Layers) │ │ │
│ │ └─────────────┘ │ │
│ └─────────────────────────────────────────────┘ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ First Contentful│ User sees SOMETHING │
│ │ Paint │ │
│ └────────┬────────┘ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ JavaScript │ Blocks main thread! │
│ │ Execution │ │
│ └────────┬────────┘ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ Time to Inter- │ User can INTERACT │
│ │ active │ │
│ └─────────────────┘ │
│ │
│ TOTAL: 2-10+ seconds on typical pages │
│ │
└─────────────────────────────────────────────────────────────────┘
The Numbers That Matter
| Metric | Good | Needs Improvement | Poor |
|---|---|---|---|
| LCP (Largest Contentful Paint) | ≤ 2.5s | 2.5s - 4s | > 4s |
| INP (Interaction to Next Paint) | ≤ 200ms | 200ms - 500ms | > 500ms |
| CLS (Cumulative Layout Shift) | ≤ 0.1 | 0.1 - 0.25 | > 0.25 |
| TTFB (Time to First Byte) | ≤ 800ms | 800ms - 1.8s | > 1.8s |
| FCP (First Contentful Paint) | ≤ 1.8s | 1.8s - 3s | > 3s |
These aren’t arbitrary numbers—they’re based on research into human perception:
- 100ms: Feels instant
- 1 second: User notices delay but flow is unbroken
- 10 seconds: User loses attention entirely
The Browser: A Rendering Engine You Must Understand
Before you can optimize anything, you need to understand what the browser actually does when it receives your HTML, CSS, and JavaScript. The browser is not a black box—it’s a sophisticated rendering pipeline with specific bottlenecks you can target.
┌─────────────────────────────────────────────────────────────────────────────┐
│ THE BROWSER RENDERING PIPELINE │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ NETWORK MAIN THREAD COMPOSITOR │
│ THREAD THREAD │
│ ┌──────┐ │
│ │Fetch │ │
│ │HTML │───────▶ ┌─────────────────────────────────────┐ │
│ └──────┘ │ PARSE HTML │ │
│ │ ┌─────┐ ┌─────┐ ┌─────┐ │ │
│ │ │Token│→ │Node │→ │ DOM │ │ │
│ │ └─────┘ └─────┘ └──┬──┘ │ │
│ ┌──────┐ │ │ │ │
│ │Fetch │ │ BLOCKING! ↓ │ │
│ │CSS │───────▶ │ ┌─────────────────────────┐ │ │
│ └──────┘ │ │ PARSE CSS │ │ │
│ │ │ ┌─────┐ ┌─────────┐ │ │ │
│ │ │ │Rules│→ │ CSSOM │ │ │ │
│ │ │ └─────┘ └────┬────┘ │ │ │
│ ┌──────┐ │ └────────────────┼────────┘ │ │
│ │Fetch │ │ │ │ │
│ │ JS │───────▶ │ BLOCKING! │ │ │
│ └──────┘ │ ┌────────────────┼────────┐ │ │
│ │ │ EXECUTE JS │ │ │ │
│ │ │ (Can modify ↓ │ │ │
│ │ │ DOM/CSSOM) │ │ │
│ │ └─────────────────────────┘ │ │
│ │ │ │
│ │ DOM + CSSOM = RENDER TREE │ │
│ │ ↓ │ │
│ │ ┌─────────────┐ │ │
│ │ │ LAYOUT │ Calculate geometry │ │
│ │ │ (Reflow) │ for every element │ │
│ │ └──────┬──────┘ │ │
│ │ ↓ │ │
│ │ ┌─────────────┐ │ │
│ │ │ PAINT │ Fill in pixels │ │
│ │ │ │ per layer │ │
│ │ └──────┬──────┘ │ │
│ └─────────┼───────────────────────────┘ │
│ ↓ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ COMPOSITE │ │
│ │ Combine layers in correct order │ │
│ │ (GPU-accelerated, separate thread!) │ │
│ └─────────────────────────────────────────────────────┘ │
│ ↓ │
│ PIXELS ON SCREEN │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Why This Matters for Performance
- CSS blocks rendering: The browser won’t paint anything until CSSOM is complete
- JavaScript blocks HTML parsing: Scripts halt the parser (unless async/defer)
- Layout is expensive: Changing geometry triggers layout for affected elements
- Paint is expensive: Changing colors, shadows triggers repaint
- Composite is cheap: Transforms and opacity only trigger compositing
This is why the “critical rendering path” is so important—it’s the minimum set of resources needed to render the first meaningful content.
The Network: Where Most Time Is Wasted
Performance problems aren’t just about code—they’re about physics. Data travels at roughly two-thirds the speed of light in fiber, and even the speed of light is slow when you’re going around the world.
┌─────────────────────────────────────────────────────────────────────────────┐
│ NETWORK LATENCY BREAKDOWN │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ San Francisco to Sydney: ~15,000 km │
│ Speed of light: 299,792 km/s │
│ Theoretical minimum RTT: 100ms │
│ Real-world RTT: 200-300ms (routing, processing, congestion) │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ A TYPICAL PAGE LOAD │ │
│ ├─────────────────────────────────────────────────────────────────┤ │
│ │ │ │
│ │ 0ms 100ms 200ms 300ms 400ms 500ms 600ms 700ms │ │
│ │ │ │ │ │ │ │ │ │ │ │
│ │ ├───────┤ │ │
│ │ │ DNS │ DNS lookup (cached: 0ms, uncached: 50-200ms) │ │
│ │ │ │ │ │
│ │ ├───────┼───────┤ │ │
│ │ │ TCP │ TCP handshake (1 RTT) │ │
│ │ │ Handshake │ │ │
│ │ │ │ │ │
│ │ ├───────────────┼───────────────┤ │ │
│ │ │ TLS Handshake │ TLS 1.2 (2 RTT) │ │
│ │ │ │ TLS 1.3 (1 RTT) │ │
│ │ │ │ │ │
│ │ ├───────────────────────────────┼───────┤ │ │
│ │ │ HTTP Request/Response │ │ │
│ │ │ │ │ │
│ │ └───────────────────────────────────────┘ │ │
│ │ │ │
│ │ WITHOUT optimization: 4+ round trips before first byte! │ │
│ │ WITH optimization: 1 round trip (0-RTT TLS 1.3 resumption) │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
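The handshake costs above compound with distance. As a back-of-the-envelope sketch (assuming uncached DNS costs one full round trip, and ignoring congestion, retransmits, and TCP Fast Open), the connection setup cost can be modeled in a few lines:

```javascript
// Estimate time-to-first-byte components from round trips alone.
// Assumptions (illustrative, not measured): uncached DNS = 1 RTT;
// TCP = 1 RTT; TLS 1.2 = 2 RTTs; TLS 1.3 = 1 RTT; a resumed
// TLS 1.3 session with 0-RTT early data = 0 extra RTTs.
function connectionSetupMs(rttMs, { dnsCached = false, tlsVersion = '1.3', resumed = false } = {}) {
  const dns = dnsCached ? 0 : rttMs;
  const tcp = rttMs;                                   // SYN / SYN-ACK
  const tls = resumed && tlsVersion === '1.3' ? 0      // 0-RTT resumption
            : tlsVersion === '1.3' ? rttMs             // 1-RTT handshake
            : 2 * rttMs;                               // TLS 1.2: 2 RTTs
  const request = rttMs;                               // request out, first byte back
  return dns + tcp + tls + request;
}

// 150ms RTT (cross-continent), cold connection over TLS 1.2:
console.log(connectionSetupMs(150, { tlsVersion: '1.2' })); // 750 (5 RTTs)
// Warm DNS + resumed TLS 1.3:
console.log(connectionSetupMs(150, { dnsCached: true, resumed: true })); // 300
```

Five round trips versus two is the entire argument for TLS 1.3 and connection reuse, independent of bandwidth.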
HTTP/1.1 vs HTTP/2 vs HTTP/3
┌────────────────────────────────────────────────────────────────────────────┐
│ HTTP PROTOCOL EVOLUTION │
├────────────────────────────────────────────────────────────────────────────┤
│ │
│ HTTP/1.1 (1997) HTTP/2 (2015) HTTP/3 (2022) │
│ ───────────────── ───────────── ──────────── │
│ │
│ ┌─────┐ ┌─────┐ ┌───────────────┐ ┌───────────────┐ │
│ │Req 1│ │Req 2│ │ Single │ │ QUIC │ │
│ │ │ │ │ │ Connection │ │ (UDP-based) │ │
│ └──┬──┘ └──┬──┘ │ │ │ │ │
│ │ │ │ ┌─────────┐ │ │ ┌─────────┐ │ │
│ │ │ │ │Stream 1 │ │ │ │Stream 1 │ │ │
│ ┌──▼──┐ │ │ ├─────────┤ │ │ ├─────────┤ │ │
│ │Res 1│ │ │ │Stream 2 │ │ │ │Stream 2 │ │ │
│ └──┬──┘ │ │ ├─────────┤ │ │ ├─────────┤ │ │
│ │ ┌──▼──┐ │ │Stream 3 │ │ │ │Stream 3 │ │ │
│ │ │Res 2│ │ └─────────┘ │ │ └─────────┘ │ │
│ │ └─────┘ └───────────────┘ └───────────────┘ │
│ │
│ • 6 connections max • Multiplexing! • No head-of-line │
│ • Head-of-line blocking • Header compression blocking │
│ • No header compression • Server push • 0-RTT connection │
│ • No prioritization • Prioritization • Better on mobile │
│ │
│ Time to load 100 requests: Time to load 100 requests: Time to load 100: │
│ ~10 seconds (6 parallel) ~2 seconds (parallel) ~1.5 seconds │
│ │
└────────────────────────────────────────────────────────────────────────────┘
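The shape of the “100 requests” comparison can be reproduced with a toy model. This is deliberately crude (every request costs exactly one RTT; bandwidth, connection setup, and prioritization are ignored), so treat the output as intuition rather than a benchmark:

```javascript
// Toy model: HTTP/1.1 requests queue behind the six-connection-per-origin
// limit and complete in serial "waves"; HTTP/2 streams all share one
// connection and fly concurrently.
function http1TimeMs(requests, rttMs, maxConnections = 6) {
  return Math.ceil(requests / maxConnections) * rttMs;
}
function http2TimeMs(requests, rttMs) {
  // All streams are multiplexed, so (in this idealized model) one RTT total.
  return rttMs;
}

console.log(http1TimeMs(100, 100)); // 17 waves -> 1700
console.log(http2TimeMs(100, 100)); // 100
```

The real-world gap is smaller than this idealized 17x (responses share bandwidth, and HTTP/2 still suffers TCP head-of-line blocking on lossy links, which is what QUIC addresses), but the wave structure of HTTP/1.1 is real.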
JavaScript: The Performance Killer
JavaScript is unique among web resources because it doesn’t just download—it must be parsed, compiled, and executed. Each of these steps takes time on the main thread, blocking everything else.
┌─────────────────────────────────────────────────────────────────────────────┐
│ JAVASCRIPT PROCESSING PIPELINE │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ 1MB of JavaScript ≠ 1MB of images │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ IMAGE (1MB) │ │
│ ├─────────────────────────────────────────────────────────────────────┤ │
│ │ Download ████████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 1.5s │ │
│ │ Decode ░░░░░░░░░░░░░░░░████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0.1s │ │
│ │ (Off main thread!) │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ JAVASCRIPT (1MB) │ │
│ ├─────────────────────────────────────────────────────────────────────┤ │
│ │ Download ████████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 1.5s │ │
│ │ Parse ░░░░░░░░░░░░░░░░████████████░░░░░░░░░░░░░░░░░░░░░░ 0.5s │ │
│ │ Compile ░░░░░░░░░░░░░░░░░░░░░░░░░░░░████████░░░░░░░░░░░░░░ 0.3s │ │
│ │ Execute ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░████████████░░ 0.5s │ │
│ │ (ALL ON MAIN THREAD - BLOCKS EVERYTHING!) │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ This is why bundle size matters so much! │
│ │
│ ┌────────────────────────────────────────────────────────────────────┐ │
│ │ PARSE TIME BY DEVICE (200KB JAVASCRIPT) │ │
│ ├────────────────────────────────────────────────────────────────────┤ │
│ │ │ │
│ │ MacBook Pro (M2) ██░░░░░░░░░░░░░░░░░░░░ ~100ms │ │
│ │ iPhone 14 ████░░░░░░░░░░░░░░░░░░ ~200ms │ │
│ │ Mid-range Android ████████░░░░░░░░░░░░░░ ~400ms │ │
│ │ Low-end Android ████████████████░░░░░░ ~800ms │ │
│ │ │ │
│ │ Your fast laptop is lying to you about performance! │ │
│ │ │ │
│ └────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
The Main Thread Problem
┌─────────────────────────────────────────────────────────────────────────────┐
│ THE SINGLE-THREADED NATURE OF JAVASCRIPT │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ THE MAIN THREAD handles: │
│ • JavaScript execution │
│ • DOM manipulation │
│ • Style calculations │
│ • Layout │
│ • Paint │
│ • User input events │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ BLOCKING THE MAIN THREAD │ │
│ │ │ │
│ │ Time: 0ms 50ms 100ms 150ms 200ms 250ms 300ms │ │
│ │ │ │ │ │ │ │ │ │ │
│ │ ├───────────────────────────────────────┤ │ │
│ │ │ Long JavaScript Task (250ms) │ │ │
│ │ │ │ │ │
│ │ │ 🖱️ User clicks button at 100ms │ │ │
│ │ │ │ │ │ │
│ │ │ │ WAITING... │ │ │
│ │ │ │ │ │ │
│ │ │ └────────────────────────┼──▶ Handler runs │ │
│ │ │ │ at 250ms! │ │
│ │ │ 150ms delay! │ │ │
│ │ │ User feels "jank" │ │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ GOAL: Keep tasks under 50ms for responsive UI │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
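The standard mitigation is to split long tasks and yield between chunks so input handlers can run. A minimal sketch follows; in a browser you might prefer `scheduler.yield()` where available, with `setTimeout(0)` as the portable fallback used here (the chunk size of 500 is an arbitrary illustration):

```javascript
// Process a long list in chunks, yielding to the event loop between
// chunks so a click at 100ms is handled in milliseconds instead of
// waiting for one monolithic 250ms task to finish.
async function runChunked(items, work, itemsPerChunk = 500) {
  for (let i = 0; i < items.length; i += itemsPerChunk) {
    for (const item of items.slice(i, i + itemsPerChunk)) work(item);
    // Yield: pending input events run between chunks.
    await new Promise(resolve => setTimeout(resolve, 0));
  }
}

// Usage sketch: render 10,000 rows without blocking the main thread.
// runChunked(rows, row => renderRow(row), 250);
```

Each chunk should be sized so its work stays comfortably under the 50ms budget on your slowest target device.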
Caching: The Ultimate Performance Win
The fastest request is one that never happens. Caching is your most powerful tool, but it’s also the most misunderstood.
┌─────────────────────────────────────────────────────────────────────────────┐
│ THE CACHING HIERARCHY │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ SPEED │
│ ▲ │
│ │ ┌─────────────────────────────────────────────────────────────┐ │
│ │ │ MEMORY CACHE │ │
│ │ │ • Same-page resources │ │
│ │ │ • ~0ms access │ │
│ │ │ • Cleared on navigation │ │
│ │ └─────────────────────────────────────────────────────────────┘ │
│ │ ▼ │
│ │ ┌─────────────────────────────────────────────────────────────┐ │
│ │ │ SERVICE WORKER │ │
│ │ │ • Programmable cache │ │
│ │ │ • ~1-5ms access │ │
│ │ │ • Survives browser close │ │
│ │ │ • You control caching logic! │ │
│ │ └─────────────────────────────────────────────────────────────┘ │
│ │ ▼ │
│ │ ┌─────────────────────────────────────────────────────────────┐ │
│ │ │ DISK CACHE │ │
│ │ │ • HTTP cache (Cache-Control headers) │ │
│ │ │ • ~10-50ms access (SSD) │ │
│ │ │ • Survives browser close │ │
│ │ │ • Size limited by browser │ │
│ │ └─────────────────────────────────────────────────────────────┘ │
│ │ ▼ │
│ │ ┌─────────────────────────────────────────────────────────────┐ │
│ │ │ CDN CACHE │ │
│ │ │ • Edge servers worldwide │ │
│ │ │ • ~10-100ms (nearest PoP) │ │
│ │ │ • Shared across users │ │
│ │ └─────────────────────────────────────────────────────────────┘ │
│ │ ▼ │
│ │ ┌─────────────────────────────────────────────────────────────┐ │
│ │ │ ORIGIN SERVER │ │
│ SLOW │ • Your actual server │ │
│ │ • 100-3000ms (location dependent) │ │
│ │ • Always current │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Cache-Control Directives
┌─────────────────────────────────────────────────────────────────────────────┐
│ CACHE-CONTROL EXPLAINED │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Cache-Control: max-age=31536000, immutable │
│ └──────────────────────────────────────────┘ │
│ │ │ │ │
│ │ │ └─ Never revalidate (even on refresh) │
│ │ │ │
│ │ └─ Cache for 1 year (in seconds) │
│ │ │
│ └─ The header that controls caching │
│ │
│ ┌───────────────────────────────────────────────────────────────────────┐ │
│ │ STRATEGY: THE CACHE-FOREVER + HASH PATTERN │ │
│ ├───────────────────────────────────────────────────────────────────────┤ │
│ │ │ │
│ │ /index.html │ │
│ │ Cache-Control: no-cache │ │
│ │ (Always revalidate) │ │
│ │ │ │ │
│ │ └─▶ <script src="/app.a1b2c3.js"> │ │
│ │ │ │ │
│ │ └─▶ Cache-Control: max-age=31536000, immutable │ │
│ │ (Cache forever - content hash in filename!) │ │
│ │ │ │
│ │ When code changes: │ │
│ │ • New hash: app.d4e5f6.js │ │
│ │ • index.html updated (revalidated) │ │
│ │ • Browser fetches new JS │ │
│ │ • Old JS naturally evicted │ │
│ │ │ │
│ └───────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
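The cache-forever + hash pattern can be encoded as a small server-side helper. The hex-hash filename convention below is an assumption about your bundler’s output (e.g. `app.a1b2c3.js`); adjust the regex to match your build:

```javascript
// Choose a Cache-Control header based on whether the file is
// content-addressed (hash in the filename) or an entry point.
function cacheControlFor(pathname) {
  if (pathname.endsWith('.html') || pathname === '/') {
    return 'no-cache'; // entry points: always revalidate
  }
  if (/\.[0-9a-f]{6,}\./i.test(pathname)) {
    return 'max-age=31536000, immutable'; // hashed: safe to cache forever
  }
  return 'max-age=3600'; // unhashed assets: short TTL as a safe default
}

console.log(cacheControlFor('/index.html'));    // no-cache
console.log(cacheControlFor('/app.a1b2c3.js')); // max-age=31536000, immutable
```

Wire this into whatever serves your static files; the key property is that nothing cached forever can ever change under the same URL.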
Images: The Biggest Opportunity
Images typically account for 50%+ of page weight. Optimizing images is often the highest-impact change you can make.
┌─────────────────────────────────────────────────────────────────────────────┐
│ IMAGE FORMAT COMPARISON │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Same image, different formats (1000x1000px photo): │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ │ │
│ │ PNG ████████████████████████████████████████████ 1,200 KB │ │
│ │ │ │
│ │ JPEG ████████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 200 KB │ │
│ │ (q=80) │ │
│ │ │ │
│ │ WebP ████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 150 KB │ │
│ │ (q=80) │ │
│ │ │ │
│ │ AVIF ████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 100 KB │ │
│ │ (q=60) │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌───────────────────────────────────────────────────────────────────────┐ │
│ │ WHEN TO USE EACH FORMAT │ │
│ ├───────────────────────────────────────────────────────────────────────┤ │
│ │ │ │
│ │ AVIF Best compression, good for photos │ │
│ │ ⚠️ Slower encoding, limited support (use with fallback) │ │
│ │ │ │
│ │ WebP Great compression, wide support (97%+) │ │
│ │ Good for photos and graphics │ │
│ │ │ │
│ │ JPEG Universal support, good for photos │ │
│ │ Use as fallback │ │
│ │ │ │
│ │ PNG Lossless, supports transparency │ │
│ │ Use for screenshots, diagrams with text │ │
│ │ │ │
│ │ SVG Vector, infinitely scalable │ │
│ │ Icons, logos, simple graphics │ │
│ │ │ │
│ └───────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Responsive Images
┌─────────────────────────────────────────────────────────────────────────────┐
│ RESPONSIVE IMAGE STRATEGY │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ THE PROBLEM: │
│ │
│ Mobile (375px) Desktop (1920px) │
│ ┌───────────────┐ ┌─────────────────────────────────────┐ │
│ │ │ │ │ │
│ │ Downloads │ │ Needs 1920px image │ │
│ │ 1920px │ │ │ │
│ │ image! │ │ │ │
│ │ │ │ │ │
│ │ Wastes │ └─────────────────────────────────────┘ │
│ │ bandwidth! │ │
│ │ │ │
│ └───────────────┘ │
│ │
│ THE SOLUTION: │
│ │
│ <img srcset="image-375.jpg 375w, │
│ image-750.jpg 750w, │
│ image-1200.jpg 1200w, │
│ image-1920.jpg 1920w" │
│ sizes="(max-width: 600px) 100vw, │
│ (max-width: 1200px) 50vw, │
│ 800px" │
│ src="image-750.jpg" │
│ alt="..."> │
│ │
│ Browser calculates: │
│ 1. What's the viewport width? │
│ 2. What's the image display size? (from sizes) │
│ 3. What's the device pixel ratio? │
│ 4. Which srcset image is closest? │
│ │
│ Result: Mobile downloads 375px image, desktop downloads 1920px │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
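The browser’s selection step can be approximated in a few lines. This sketch picks the smallest candidate with enough physical pixels for the slot; real browsers are allowed to weigh cache contents and network conditions too, so the actual pick may differ:

```javascript
// Given the slot's rendered CSS width and the device pixel ratio,
// choose the smallest srcset candidate that covers the slot at full
// density, falling back to the largest when none is big enough.
function pickSrcsetCandidate(candidates, slotCssPx, dpr) {
  const needed = slotCssPx * dpr; // physical pixels the slot will paint
  const sorted = [...candidates].sort((a, b) => a.w - b.w);
  return (sorted.find(c => c.w >= needed) ?? sorted[sorted.length - 1]).url;
}

const candidates = [
  { url: 'image-375.jpg', w: 375 },
  { url: 'image-750.jpg', w: 750 },
  { url: 'image-1200.jpg', w: 1200 },
  { url: 'image-1920.jpg', w: 1920 },
];
// A 375px-wide phone at 2x DPR needs 750 physical pixels:
console.log(pickSrcsetCandidate(candidates, 375, 2)); // image-750.jpg
// An 800px slot on a 1x desktop:
console.log(pickSrcsetCandidate(candidates, 800, 1)); // image-1200.jpg
```

Note how DPR matters: a “mobile” screen at 3x can legitimately need a larger file than a 1x desktop.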
CSS: The Silent Performance Killer
CSS seems harmless—it’s just styling, right? But CSS can cause massive performance problems if you don’t understand how the browser processes it.
┌─────────────────────────────────────────────────────────────────────────────┐
│ CSS SELECTOR MATCHING │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Browsers match selectors RIGHT TO LEFT! │
│ │
│ Selector: div.container ul li a.link │
│ │
│ Browser does: │
│ 1. Find ALL elements with class "link" │
│ 2. Filter to only <a> elements │
│ 3. Check if parent is <li> │
│ 4. Check if ancestor is <ul> │
│ 5. Check if ancestor is div.container │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ SELECTOR PERFORMANCE (relative cost) │ │
│ ├─────────────────────────────────────────────────────────────────────┤ │
│ │ │ │
│ │ FAST #id █░░░░░░░░░░░░░░░░░░ │ │
│ │ .class ██░░░░░░░░░░░░░░░░░ │ │
│ │ tag ███░░░░░░░░░░░░░░░░ │ │
│ │ tag.class ████░░░░░░░░░░░░░░░ │ │
│ │ │ │
│ │ MEDIUM [attribute] ██████░░░░░░░░░░░░░ │ │
│ │ :pseudo-class ███████░░░░░░░░░░░░ │ │
│ │ │ │
│ │ SLOW div > ul > li > a ██████████░░░░░░░░░ │ │
│ │ .parent .child .deep ████████████░░░░░░░ │ │
│ │ * ████████████████░░░ (avoid!) │ │
│ │ [attr*="value"] █████████████████░░ │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
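To internalize right-to-left matching, here is a toy matcher for descendant selectors over plain objects. It captures the key idea (start at the rightmost “key” selector, then walk ancestors); real engines add class hashing and ancestor bloom filters to skip most of the walking:

```javascript
// Nodes are plain objects: { tag, classes, parent }.
// Supports compound selectors like "div.container" joined by
// descendant combinators (whitespace) only.
function matchesSimple(node, simple) {
  const [tag, ...classes] = simple.split('.');
  if (tag && tag !== '*' && node.tag !== tag) return false;
  return classes.every(c => node.classes.includes(c));
}

function matches(node, selector) {
  const parts = selector.trim().split(/\s+/);
  // 1. The element itself must match the rightmost (key) selector.
  if (!matchesSimple(node, parts[parts.length - 1])) return false;
  // 2. Walk up the tree, consuming the remaining parts right-to-left.
  let i = parts.length - 2;
  let ancestor = node.parent;
  while (i >= 0) {
    if (!ancestor) return false; // ran out of ancestors
    if (matchesSimple(ancestor, parts[i])) i--;
    ancestor = ancestor.parent;
  }
  return true;
}
```

This also shows why the key selector dominates cost: `.link` narrows candidates to a handful of elements immediately, while `div *` makes nearly every element a candidate.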
Layout Thrashing: The Hidden Performance Bug
┌─────────────────────────────────────────────────────────────────────────────┐
│ LAYOUT THRASHING │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ WHAT IS IT? │
│ Reading layout properties forces the browser to calculate layout. │
│ Writing style changes invalidates layout. Interleaving = thrashing! │
│ │
│ BAD CODE (forces layout on every iteration): │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ │ │
│ │ for (let i = 0; i < boxes.length; i++) { │ │
│ │ boxes[i].style.width = boxes[i].offsetWidth + 10 + 'px'; │ │
│ │ } ▲ ▲ │ │
│ │ │ │ │ │
│ │ │ └─ READS layout (forces calc!) │ │
│ │ │ │ │
│ │ └─ WRITES style (invalidates layout!) │ │
│ │ │ │
│ │ Each iteration: write → read → layout calculation! │ │
│ │ 100 elements = 100 layout calculations! │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ GOOD CODE (batch reads, then batch writes): │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ │ │
│ │ // First: read all values │ │
│ │ const widths = boxes.map(box => box.offsetWidth); │ │
│ │ │ │
│ │ // Then: write all values │ │
│ │ boxes.forEach((box, i) => { │ │
│ │ box.style.width = widths[i] + 10 + 'px'; │ │
│ │ }); │ │
│ │ │ │
│ │ Total: 1 layout calculation! │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ PROPERTIES THAT TRIGGER LAYOUT: │
│ offsetTop/Left/Width/Height, scrollTop/Left/Width/Height, │
│ clientTop/Left/Width/Height, getComputedStyle(), getBoundingClientRect() │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
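Libraries like FastDOM generalize the batching fix into a read/write scheduler. A minimal sketch follows; a browser version would flush inside `requestAnimationFrame`, but `queueMicrotask` is used here so it runs anywhere:

```javascript
// Queue layout reads and style writes separately, then flush all reads
// before all writes—so the browser computes layout at most once per flush
// instead of once per interleaved read.
const readQueue = [];
const writeQueue = [];
let flushScheduled = false;

function scheduleFlush() {
  if (flushScheduled) return;
  flushScheduled = true;
  queueMicrotask(() => {
    flushScheduled = false;
    readQueue.splice(0).forEach(fn => fn());  // batch all layout reads
    writeQueue.splice(0).forEach(fn => fn()); // then all style writes
  });
}

const measure = fn => { readQueue.push(fn); scheduleFlush(); };
const mutate = fn => { writeQueue.push(fn); scheduleFlush(); };

// Usage sketch (browser): each width is read in the read phase, and the
// corresponding write is deferred to the write phase.
// boxes.forEach(box => measure(() => {
//   const w = box.offsetWidth;
//   mutate(() => { box.style.width = w + 10 + 'px'; });
// }));
```

The discipline this enforces is the whole point: code anywhere in the app can schedule DOM work without accidentally interleaving reads and writes.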
Core Web Vitals: The Metrics That Matter
Google’s Core Web Vitals are the official metrics for web performance. Understanding them deeply is essential.
┌─────────────────────────────────────────────────────────────────────────────┐
│ CORE WEB VITALS EXPLAINED │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ LCP (Largest Contentful Paint) │ │
│ ├─────────────────────────────────────────────────────────────────────┤ │
│ │ │ │
│ │ WHAT: Time until the largest content element is visible │ │
│ │ WHY: Measures perceived load speed │ │
│ │ │ │
│ │ Loading timeline: │ │
│ │ ┌─────┬─────────┬────────────────┬─────────────────────────┐ │ │
│ │ │ │ │ │ │ │ │
│ │ │ Nav │ TTFB │ FCP │ LCP │ │ │
│ │ │ │ │ (logo) │ (hero image) │ │ │
│ │ └─────┴─────────┴────────────────┴─────────────────────────┘ │ │
│ │ 0 200ms 800ms 1200ms 2500ms │ │
│ │ │ │
│ │ LCP candidates: <img>, <video>, background-image, text blocks │ │
│ │ │ │
│ │ GOOD: ≤ 2.5s NEEDS IMPROVEMENT: 2.5-4s POOR: > 4s │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ INP (Interaction to Next Paint) │ │
│ ├─────────────────────────────────────────────────────────────────────┤ │
│ │ │ │
│ │ WHAT: Time from user interaction to visual feedback │ │
│ │ WHY: Measures responsiveness throughout the page lifecycle │ │
│ │ │ │
│ │ Interaction breakdown: │ │
│ │ ┌──────────────────────────────────────────────────────────┐ │ │
│ │ │ Click │ Input Delay │ Processing │ Presentation Delay │ │ │
│ │ │ │ (blocked │ (event │ (render, paint, │ │ │
│ │ │ │ by JS) │ handler) │ composite) │ │ │
│ │ └───────┴─────────────┴────────────┴──────────────────────┘ │ │
│ │ │ │
│ │ └────────────── INP ──────────────────────┘ │ │
│ │ │ │
│ │ GOOD: ≤ 200ms NEEDS IMPROVEMENT: 200-500ms POOR: > 500ms │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ CLS (Cumulative Layout Shift) │ │
│ ├─────────────────────────────────────────────────────────────────────┤ │
│ │ │ │
│ │ WHAT: Measures unexpected layout shifts during page load │ │
│ │ WHY: Prevents frustrating visual instability │ │
│ │ │ │
│ │ Before (shift) After (shift) │ │
│ │ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ Header │ │ Header │ │ │
│ │ ├─────────────┤ ├─────────────┤ │ │
│ │ │ [Button] │◀────┐ │ [Ad loads!] │ │ │
│ │ │ │ │ ├─────────────┤ │ │
│ │ │ Content │ └───│ [Button] │ ← User clicks wrong spot! │ │
│ │ │ │ │ │ │ │
│ │ └─────────────┘ └─────────────┘ │ │
│ │ │ │
│ │ CLS = impact fraction × distance fraction │ │
│ │ │ │
│ │ GOOD: ≤ 0.1 NEEDS IMPROVEMENT: 0.1-0.25 POOR: > 0.25 │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
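The CLS formula can be made concrete for a single shifted element. This sketch computes impact fraction times distance fraction for one before/after rect pair; the real metric also clips regions to the viewport and sums shifts within a session window, so treat it as an illustration of the math rather than the full definition:

```javascript
// Rects are { x, y, w, h } in CSS pixels.
function layoutShiftScore(viewport, before, after) {
  // Impact region: union of the element's before and after rects.
  const overlapX = Math.max(0, Math.min(before.x + before.w, after.x + after.w) - Math.max(before.x, after.x));
  const overlapY = Math.max(0, Math.min(before.y + before.h, after.y + after.h) - Math.max(before.y, after.y));
  const unionArea = before.w * before.h + after.w * after.h - overlapX * overlapY;
  const impactFraction = unionArea / (viewport.w * viewport.h);
  // Distance fraction: move distance relative to the larger viewport dimension.
  const distance = Math.max(Math.abs(after.x - before.x), Math.abs(after.y - before.y));
  const distanceFraction = distance / Math.max(viewport.w, viewport.h);
  return impactFraction * distanceFraction;
}

// A 400x100 element in a 400x600 viewport, pushed down 100px by an ad:
const score = layoutShiftScore(
  { w: 400, h: 600 },
  { x: 0, y: 100, w: 400, h: 100 },
  { x: 0, y: 200, w: 400, h: 100 }
);
console.log(score.toFixed(3)); // 0.056
```

One modest shift like this already burns half the 0.1 budget, which is why reserving space for ads, embeds, and images (width/height attributes, aspect-ratio) matters so much.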
Resource Loading: Priority and Timing
Understanding how browsers prioritize resource loading is crucial for optimization.
┌─────────────────────────────────────────────────────────────────────────────┐
│ RESOURCE LOADING PRIORITIES │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Browser assigns priorities based on resource type and discovery time: │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ Priority │ Resource Type │ │
│ ├──────────┼──────────────────────────────────────────────────────────┤ │
│ │ HIGHEST │ Main document (HTML) │ │
│ │ │ Critical CSS (render-blocking) │ │
│ │ │ Fonts in <head> with preload │ │
│ ├──────────┼──────────────────────────────────────────────────────────┤ │
│ │ HIGH │ Scripts in <head> │ │
│ │ │ CSS in <head> │ │
│ │ │ Preloaded resources │ │
│ │ │ Images in viewport (LCP candidates) │ │
│ ├──────────┼──────────────────────────────────────────────────────────┤ │
│ │ MEDIUM │ Scripts at end of <body> │ │
│ │ │ Images above the fold │ │
│ ├──────────┼──────────────────────────────────────────────────────────┤ │
│ │ LOW │ Images below the fold │ │
│ │ │ Async scripts │ │
│ │ │ Prefetched resources │ │
│ ├──────────┼──────────────────────────────────────────────────────────┤ │
│ │ LOWEST │ Deferred scripts │ │
│ │ │ Lazy-loaded images │ │
│ │ │ Resources with fetchpriority="low" │ │
│ └──────────┴──────────────────────────────────────────────────────────┘ │
│ │
│ RESOURCE HINTS: │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ │ │
│ │ <link rel="preconnect" href="https://api.example.com"> │ │
│ │ └─ Start connection early (DNS + TCP + TLS) │ │
│ │ │ │
│ │ <link rel="preload" href="hero.jpg" as="image"> │ │
│ │ └─ Fetch this NOW, it's critical │ │
│ │ │ │
│ │ <link rel="prefetch" href="next-page.html"> │ │
│ │ └─ Fetch this later, user might need it soon │ │
│ │ │ │
│ │ <link rel="dns-prefetch" href="https://cdn.example.com"> │ │
│ │ └─ Resolve DNS only (lighter than preconnect) │ │
│ │ │ │
│ │ <img fetchpriority="high" src="hero.jpg"> │ │
│ │ └─ Boost priority of this specific image │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Concept Summary Table
| Concept Cluster | What You Need to Internalize |
|---|---|
| Critical Rendering Path | The sequence of steps from receiving HTML to painting pixels. CSS blocks rendering. JS blocks HTML parsing. Minimize critical resources. |
| Network Latency | Every round trip costs time. RTT matters more than bandwidth for most sites. HTTP/2 multiplexing and connection reuse are essential. |
| JavaScript Cost | JS isn’t just download time—parse, compile, execute all block the main thread. Bundle size directly impacts interactivity. |
| Browser Caching | The fastest request never happens. Understand Cache-Control directives, content-addressed URLs, and service workers. |
| Image Optimization | Images are 50%+ of page weight. Modern formats (AVIF/WebP), responsive images, and lazy loading are non-negotiable. |
| CSS Performance | Selectors match right-to-left. Layout thrashing kills performance. Composited animations (transform, opacity) are cheap. |
| Core Web Vitals | LCP (visual completeness), INP (responsiveness), CLS (stability). These affect SEO and user experience. |
| Resource Loading | Browser prioritizes resources. Use preconnect, preload, defer, async strategically. |
Deep Dive Reading by Concept
This section maps each concept from above to specific book chapters for deeper understanding. Read these before or alongside the projects to build strong mental models.
Browser Rendering
| Concept | Book & Chapter |
|---|---|
| Critical Rendering Path | High Performance Browser Networking by Ilya Grigorik — Ch. 10: “Primer on Web Performance” |
| DOM/CSSOM Construction | Web Performance in Action by Jeremy Wagner — Ch. 2: “Understanding the Asset Pipeline” |
| Compositor & Layers | High Performance Web Sites by Steve Souders — Ch. 8: “Reduce the Number of DOM Elements” |
Network Optimization
| Concept | Book & Chapter |
|---|---|
| HTTP/2 & HTTP/3 | High Performance Browser Networking by Ilya Grigorik — Ch. 12-15: HTTP/2 and HTTP/3 sections |
| TCP/TLS Handshakes | High Performance Browser Networking by Ilya Grigorik — Ch. 2-4: TCP, TLS sections |
| CDNs & Edge Computing | Designing Data-Intensive Applications by Martin Kleppmann — Ch. 1: “Reliable, Scalable, Maintainable Applications” |
JavaScript Performance
| Concept | Book & Chapter |
|---|---|
| V8 Engine Internals | JavaScript: The Good Parts by Douglas Crockford — Ch. 4: “Functions” |
| Memory Management | High Performance JavaScript by Nicholas C. Zakas — Ch. 4: “Algorithms and Flow Control” |
| Event Loop & Main Thread | You Don’t Know JS: Async & Performance by Kyle Simpson — Ch. 1-2 |
Caching Strategies
| Concept | Book & Chapter |
|---|---|
| HTTP Caching | HTTP: The Definitive Guide by David Gourley & Brian Totty — Ch. 7: “Caching” |
| Service Workers | Building Progressive Web Apps by Tal Ater — Ch. 4-6 |
| CDN Caching | High Performance Browser Networking by Ilya Grigorik — Ch. 16: “Optimizing for Mobile Networks” |
Image Optimization
| Concept | Book & Chapter |
|---|---|
| Image Formats | Modern CSS by Stephanie Eckles — chapter on responsive images |
| Compression Algorithms | High Performance Images by Colin Bendell et al. — Full book |
Essential Reading Order
For maximum comprehension, read in this order:
- Foundation (Week 1):
- High Performance Browser Networking Ch. 1-4 (networking fundamentals)
- High Performance Web Sites Ch. 1-3 (frontend optimization basics)
- Browser Internals (Week 2):
- High Performance Browser Networking Ch. 10-11 (rendering pipeline)
- High Performance JavaScript Ch. 1-4 (JS execution)
- Advanced Optimization (Week 3-4):
- HTTP: The Definitive Guide Ch. 7 (caching deep dive)
- High Performance Browser Networking Ch. 12-15 (HTTP/2, HTTP/3)
Project List
Projects are ordered from fundamental understanding to advanced implementations. Each project forces you to grapple with specific performance concepts.
Project 1: Build a Web Performance Waterfall Visualizer
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: JavaScript (Node.js + Browser)
- Alternative Programming Languages: Python, Go, Rust
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 2: Intermediate
- Knowledge Area: Network / Browser DevTools / Performance APIs
- Software or Tool: Chrome DevTools Protocol
- Main Book: “High Performance Browser Networking” by Ilya Grigorik
What you’ll build: A tool that captures and visualizes the network waterfall for any URL—like a mini Chrome DevTools Network panel. It shows DNS, TCP, TLS, request, and response times for every resource, rendered as an interactive timeline.
Why it teaches performance: Before you can optimize, you must measure. This project forces you to understand every phase of a network request, the Performance Timing API, and how resources relate to each other in the loading process.
Core challenges you’ll face:
- Capturing timing data via Chrome DevTools Protocol → maps to understanding browser internals
- Parsing and organizing resource timing → maps to the Resource Timing API
- Visualizing overlapping requests → maps to understanding HTTP/2 multiplexing
- Identifying the critical path → maps to what blocks rendering
Key Concepts:
- Navigation Timing API: “High Performance Browser Networking” Ch. 10 - Ilya Grigorik
- Resource Timing API: MDN Web Docs - Resource Timing
- Chrome DevTools Protocol: Chrome DevTools Protocol documentation
- Waterfall Analysis: “Web Performance in Action” Ch. 3 - Jeremy Wagner
Difficulty: Intermediate
Time estimate: 1-2 weeks
Prerequisites: Understanding of HTTP, basic Node.js, familiarity with Chrome DevTools
Real World Outcome
You’ll have a command-line tool that takes a URL and generates an interactive HTML waterfall chart. When you run it, you’ll see:
- Every resource loaded by the page with exact timing
- Color-coded bars showing DNS/TCP/TLS/Request/Response phases
- The critical path highlighted
- Total page load time and resource count
Example Output:
$ ./perf-waterfall https://example.com
Capturing performance data for https://example.com...
Total Load Time: 2,847ms
Resources: 47
Critical Path Length: 1,234ms
Generating waterfall...
Waterfall saved to: waterfall-example-com-2024-01-15.html
┌─────────────────────────────────────────────────────────────────┐
│ Resource │ 0ms 500ms 1000ms 1500ms 2000ms │
├─────────────────────────────┼─────────────────────────────────────┤
│ example.com (document) │ ████████░░░░░░░░░░░░░░░░░░░░░░░░░ │
│ ├─ main.css │ ░░░░████████████░░░░░░░░░░░░░░░░░ │
│ ├─ app.js │ ░░░░████████████████░░░░░░░░░░░░░ │
│ ├─ hero.jpg │ ░░░░░░░░░░░░████████████████░░░░░ │
│ └─ font.woff2 │ ░░░░░░░░████████░░░░░░░░░░░░░░░░░ │
└─────────────────────────────┴─────────────────────────────────────┘
Legend: ████ = Waiting ████ = Download ░░░░ = Idle
Opening waterfall-example-com.html shows an interactive visualization where you can hover over each bar to see exact timings and click resources to see headers and details.
The Core Question You’re Answering
“What happens between typing a URL and seeing pixels—and WHERE does the time go?”
Before you write any code, sit with this question. Most developers think of page load as a single number (3 seconds!), but it’s actually dozens of overlapping network requests, each with its own lifecycle. Understanding the waterfall is understanding web performance.
Concepts You Must Understand First
Stop and research these before coding:
- The Navigation Timing API
- What is `performance.timing` and what events does it capture?
- What’s the difference between `fetchStart`, `responseStart`, and `domContentLoadedEventEnd`?
- Why is `navigationStart` important as a reference point?
- Book Reference: “High Performance Browser Networking” Ch. 10 - Ilya Grigorik
- The Resource Timing API
- How does `performance.getEntriesByType('resource')` work?
- What do `domainLookupStart`, `connectStart`, `secureConnectionStart`, `requestStart`, and `responseEnd` represent?
- Why might some timing values be zero?
- Book Reference: MDN Documentation - Resource Timing API
- Chrome DevTools Protocol
- How do you connect to Chrome’s debugging interface?
- What events does the Network domain provide?
- How do you capture HAR (HTTP Archive) data?
- Book Reference: Chrome DevTools Protocol Documentation
Questions to Guide Your Design
Before implementing, think through these:
- Data Collection
- How will you launch a browser instance and connect to it?
- Should you use Puppeteer, Playwright, or raw CDP?
- How will you wait for “page complete” (DOMContentLoaded? load? networkidle?)
- Data Processing
- How will you correlate requests with their timing?
- How do you handle redirects (same resource, multiple requests)?
- How will you identify the critical path vs. non-critical resources?
- Visualization
- How will you represent overlapping concurrent requests?
- What color scheme makes timing phases distinguishable?
- How will you handle very long and very short requests on the same scale?
Thinking Exercise
Trace a Resource Load
Before coding, trace what happens when the browser loads a single CSS file:
Request: GET /styles/main.css
Timeline:
1. DNS Lookup: example.com → 93.184.216.34
2. TCP Handshake: SYN → SYN-ACK → ACK
3. TLS Handshake: ClientHello → ServerHello → ...
4. HTTP Request: GET /styles/main.css
5. Server Processing: ...
6. HTTP Response: 200 OK (streaming bytes)
7. Download Complete
8. CSS Parsing begins...
Questions while tracing:
- At which point does `requestStart` fire?
- What if this is a second request to the same domain?
- Why is the TLS handshake sometimes 0ms?
- What does “Time to First Byte” actually measure?
The Interview Questions They’ll Ask
Prepare to answer these:
- “Explain the difference between DOMContentLoaded and window.load events.”
- “What is the critical rendering path and why does it matter?”
- “How would you identify the slowest resource on a page?”
- “Why might a waterfall show requests starting after the previous one completes even with HTTP/2?”
- “What causes ‘staircase’ patterns in network waterfalls?”
- “How does browser caching affect resource timing values?”
Hints in Layers
Hint 1: Getting Started
Start by using Puppeteer or Playwright to load a page and access performance.getEntriesByType('resource'). Log the results.
Hint 2: Understanding the Data
Each resource entry has timing properties relative to fetchStart. The difference between responseEnd and startTime is total load time.
Hint 3: Building the Waterfall
Create an SVG or Canvas element. Scale the x-axis to max(responseEnd). Each resource is a row. Draw rectangles for each timing phase.
Hint 4: Debugging
Use Chrome DevTools → Performance tab to validate your waterfall matches the browser’s built-in view. Check for missing resources or incorrect timing.
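The arithmetic behind Hint 2 can be sketched as a pure function over a Resource Timing entry. The property names are the real `PerformanceResourceTiming` fields; the sample entry itself is made up:

```javascript
// Break a PerformanceResourceTiming-like entry into the phases a
// waterfall bar is drawn from. All values are ms relative to navigationStart.
function timingPhases(entry) {
  return {
    dns: entry.domainLookupEnd - entry.domainLookupStart,
    // connectEnd includes TLS; secureConnectionStart is 0 for plain HTTP
    // or reused connections, so fall back to connectEnd for TCP time.
    tcp: (entry.secureConnectionStart || entry.connectEnd) - entry.connectStart,
    tls: entry.secureConnectionStart
      ? entry.connectEnd - entry.secureConnectionStart
      : 0,
    request: entry.responseStart - entry.requestStart, // includes server time
    download: entry.responseEnd - entry.responseStart,
    total: entry.responseEnd - entry.startTime,
  };
}

// Hypothetical entry for a CSS file fetched over a fresh HTTPS connection.
const entry = {
  startTime: 100,
  domainLookupStart: 100, domainLookupEnd: 140,
  connectStart: 140, secureConnectionStart: 190, connectEnd: 290,
  requestStart: 290, responseStart: 420, responseEnd: 470,
};
console.log(timingPhases(entry));
// dns: 40, tcp: 50, tls: 100, request: 130, download: 50, total: 370
```

The `secureConnectionStart` guard matters: it reports 0 for plain HTTP and for reused connections, which is also one reason Hint 4's comparison against DevTools is worth doing.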
Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Network timing phases | “High Performance Browser Networking” by Ilya Grigorik | Ch. 10: “Primer on Web Performance” |
| Resource loading | “High Performance Web Sites” by Steve Souders | Ch. 6: “Put Scripts at the Bottom” |
| Chrome DevTools | “Web Performance in Action” by Jeremy Wagner | Ch. 3: “Using Browser DevTools” |
Implementation Hints
The key insight is that browsers expose timing data through standard APIs, but getting accurate data requires controlling the browser itself.
Conceptual approach:
1. Launch headless Chrome with debugging enabled
2. Connect via Chrome DevTools Protocol
3. Enable Network domain events
4. Navigate to URL
5. Collect all network events (requestWillBeSent, responseReceived, loadingFinished)
6. Also grab performance.timing and performance.getEntriesByType('resource')
7. Merge data sources (CDP has more detail, Performance API is simpler)
8. Identify critical path (resources that block rendering)
9. Generate SVG waterfall with interactive tooltips
10. Add summary stats (TTFB, total load time, resource count by type)
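Step 8 can start as a crude classifier. A sketch, assuming you already know each resource's initiator type and script attributes (real detection also needs `media` attributes, preload hints, and document position):

```javascript
// Heuristic: stylesheets and synchronous scripts discovered in the
// document block first paint; images, fonts, and async scripts do not.
function isRenderBlocking(resource) {
  if (resource.initiatorType === 'link' && resource.type === 'text/css') {
    return true;
  }
  if (resource.initiatorType === 'script' && !resource.async && !resource.defer) {
    return true;
  }
  return false;
}

// Hypothetical resource records, in the shape described above.
const resources = [
  { name: '/main.css', initiatorType: 'link', type: 'text/css' },
  { name: '/app.js', initiatorType: 'script', async: false, defer: false },
  { name: '/analytics.js', initiatorType: 'script', async: true },
  { name: '/hero.jpg', initiatorType: 'img' },
];
console.log(resources.filter(isRenderBlocking).map((r) => r.name));
// [ '/main.css', '/app.js' ]
```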
The tricky parts:
- Waiting for “true” page complete (some SPAs keep loading forever)
- Handling Service Worker responses (different timing semantics)
- Correlating CDP events with Performance API entries
Learning milestones:
- You capture and log timing data → You understand the Performance API
- You visualize a basic waterfall → You understand timing relationships
- You identify render-blocking resources → You understand the critical path
- Your waterfall matches DevTools exactly → You’ve mastered resource timing
Project 2: Build a Critical CSS Extractor
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: JavaScript (Node.js)
- Alternative Programming Languages: Python, Go
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 2: Intermediate
- Knowledge Area: CSS / DOM / Rendering
- Software or Tool: Puppeteer/Playwright, CSS Parser
- Main Book: “CSS: The Definitive Guide” by Eric Meyer
What you’ll build: A tool that analyzes a webpage and extracts only the CSS rules needed to render the above-the-fold content—the “critical CSS” that should be inlined in <head> to eliminate render-blocking.
Why it teaches performance: CSS blocks rendering. The browser won’t paint anything until CSSOM is complete. By understanding exactly which CSS rules are needed for initial paint, you learn what “critical” really means.
Core challenges you’ll face:
- Determining “above the fold” → maps to viewport calculations and layout
- Parsing and understanding CSS selectors → maps to how browsers match styles
- Tracing which rules apply to which elements → maps to CSS specificity and cascade
- Handling responsive breakpoints → maps to media queries and device testing
Key Concepts:
- CSS Parsing: “CSS: The Definitive Guide” Ch. 2 - Eric Meyer
- The CSSOM: “High Performance Browser Networking” Ch. 10 - Ilya Grigorik
- Selector Matching: “CSS Secrets” by Lea Verou
- Critical Rendering Path: Google Web Fundamentals documentation
Difficulty: Intermediate
Time estimate: 1-2 weeks
Prerequisites: Strong CSS knowledge, understanding of the DOM, Puppeteer/Playwright basics
Real World Outcome
You’ll have a CLI tool that takes a URL and outputs the minimal CSS needed for initial render:
Example Output:
$ ./critical-css https://example.com --viewport 1920x1080
Analyzing https://example.com...
Viewport: 1920x1080
Above-fold elements: 47
Total CSS rules: 2,847
Critical CSS rules: 156
Full CSS size: 245KB
Critical CSS size: 12KB (95% reduction!)
Output saved to: critical.css
--- Sample of extracted critical CSS ---
html, body {
margin: 0;
font-family: -apple-system, BlinkMacSystemFont, sans-serif;
}
.header {
position: fixed;
top: 0;
width: 100%;
background: #fff;
box-shadow: 0 2px 4px rgba(0,0,0,0.1);
}
.hero {
min-height: 60vh;
display: flex;
align-items: center;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
}
/* ... 153 more rules ... */
Usage:
<style>${critical}</style>
<link rel="preload" href="styles.css" as="style" onload="this.rel='stylesheet'">
The Core Question You’re Answering
“Which CSS does the browser ACTUALLY need to paint the first screen?”
Before you write any code, sit with this question. A typical site loads 200KB+ of CSS, but only a fraction is needed for what users first see. The rest can load later without blocking rendering.
Concepts You Must Understand First
Stop and research these before coding:
- CSS Object Model (CSSOM)
- How does the browser build the CSSOM?
- Why does CSS block rendering?
- What’s the relationship between DOM and CSSOM for the render tree?
- Book Reference: “High Performance Browser Networking” Ch. 10 - Ilya Grigorik
- CSS Selector Matching
- How do browsers match selectors to elements?
- Why do browsers match right-to-left?
- What is selector specificity and how is it calculated?
- Book Reference: “CSS: The Definitive Guide” Ch. 3 - Eric Meyer
- The Viewport and “Above the Fold”
- What determines the viewport size?
- How do you calculate if an element is visible without scrolling?
- What about elements that start above the fold but extend below?
- Book Reference: MDN - Intersection Observer API
Questions to Guide Your Design
Before implementing, think through these:
- Element Detection
- How will you find all elements visible in the viewport?
- How do you handle elements with `position: fixed`?
- What about pseudo-elements (::before, ::after)?
- CSS Rule Extraction
- How will you parse the page’s CSS and identify which rules apply?
- How do you handle rules with multiple selectors?
- What about `@media` queries—do you need the rule?
- Edge Cases
- What about CSS that affects layout of above-fold elements (body margins)?
- How do you handle web fonts referenced in CSS?
- What about CSS variables (custom properties)?
Thinking Exercise
Trace CSS Rule Application
Given this HTML and CSS, manually identify the critical CSS for a 800px tall viewport:
<body>
<header class="nav">Logo</header> <!-- 60px tall -->
<main class="hero">Welcome</main> <!-- 500px tall -->
<section class="features">Features</section> <!-- Below fold -->
<footer>Footer</footer> <!-- Way below -->
</body>
body { margin: 0; font-family: sans-serif; }
.nav { height: 60px; background: #333; color: white; }
.hero { height: 500px; background: blue; }
.features { padding: 100px; background: #f5f5f5; }
.features h2 { font-size: 2em; }
footer { height: 200px; background: #333; }
footer a { color: white; }
Questions while tracing:
- Which rules are critical? (body, .nav, .hero)
- Why is `.features` borderline? (might affect layout)
- Why are `footer a` rules definitely not critical?
- What if `.hero` had `margin-bottom: 200px`?
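The fold test in this exercise is just rectangle arithmetic. A minimal sketch, assuming element geometry as `getBoundingClientRect()` would report it before any scrolling (the pixel values mirror the hypothetical layout above):

```javascript
// An element is "above the fold" if any part of it intersects the
// initial viewport: it starts above the fold line and ends below y = 0.
function isAboveFold(rect, viewportHeight) {
  return rect.top < viewportHeight && rect.top + rect.height > 0;
}

// 800px viewport: nav 60px, hero 500px, features starts at 560px
// (partially visible, hence "borderline"), footer far below.
const viewport = 800;
console.log(isAboveFold({ top: 0, height: 60 }, viewport));     // nav → true
console.log(isAboveFold({ top: 60, height: 500 }, viewport));   // hero → true
console.log(isAboveFold({ top: 560, height: 640 }, viewport));  // features → true
console.log(isAboveFold({ top: 1200, height: 200 }, viewport)); // footer → false
```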
The Interview Questions They’ll Ask
Prepare to answer these:
- “What is critical CSS and why does it improve performance?”
- “How would you automate critical CSS extraction in a CI/CD pipeline?”
- “What’s the tradeoff between inline critical CSS and external stylesheets?”
- “How do you handle critical CSS for different breakpoints?”
- “Why might extracted critical CSS cause a ‘flash of unstyled content’ (FOUC)?”
Hints in Layers
Hint 1: Getting Started
Use Puppeteer to load the page. Use page.coverage.startCSSCoverage() to see which CSS rules are actually used.
Hint 2: Finding Above-Fold Elements
Get all elements, filter by getBoundingClientRect().top < viewportHeight. Include ancestors too.
Hint 3: Matching CSS to Elements
Iterate document.styleSheets and test each rule’s selector with element.matches(selector); the older window.getMatchedCSSRules() is deprecated and has been removed from modern browsers.
Hint 4: Validation
Render the page with only your extracted CSS. Take screenshots and compare—layout should match.
Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| CSS fundamentals | “CSS: The Definitive Guide” by Eric Meyer | Ch. 2-3 |
| Critical CSS concept | “High Performance Web Sites” by Steve Souders | Ch. 5 |
| Browser rendering | “Web Performance in Action” by Jeremy Wagner | Ch. 2 |
Implementation Hints
Conceptual approach:
1. Load page in headless browser at specified viewport
2. Wait for styles to be fully applied
3. Get all elements in viewport using getBoundingClientRect
4. For each element, get all CSS rules that apply (getComputedStyle won't help - need actual rules)
5. Parse all stylesheets (both external and inline)
6. For each rule, check if ANY above-fold element matches the selector
7. Also include rules that affect layout ancestors (body, html)
8. Deduplicate rules (same selector might appear multiple times)
9. Output minimal CSS, preserving source order (important for cascade!)
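Step 9’s order-preserving deduplication can be sketched as a pure function; the `selector`/`body` record shape is an assumption about how your parser might represent rules:

```javascript
// Deduplicate extracted rules while preserving first-seen source order,
// which keeps the cascade intact. Keying on the full rule text keeps
// distinct bodies for the same selector (e.g. overrides in later sheets).
function dedupeRules(rules) {
  const seen = new Set();
  return rules.filter((rule) => {
    const key = `${rule.selector}{${rule.body}}`;
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}

const extracted = [
  { selector: 'body', body: 'margin:0' },
  { selector: '.nav', body: 'height:60px' },
  { selector: 'body', body: 'margin:0' },        // exact duplicate, dropped
  { selector: '.nav', body: 'background:#333' }, // same selector, new body, kept
];
console.log(dedupeRules(extracted).length); // 3
```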
Learning milestones:
- You extract visible elements correctly → You understand viewport and layout
- You match selectors to elements → You understand CSS matching
- Your critical CSS renders the page correctly → You understand the cascade
- You handle edge cases (fonts, variables) → You’ve mastered critical CSS
Project 3: Build a JavaScript Bundle Analyzer
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: JavaScript (Node.js)
- Alternative Programming Languages: Go, Rust
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 2: Intermediate
- Knowledge Area: JavaScript / Bundlers / AST
- Software or Tool: webpack-bundle-analyzer inspiration, AST parsers
- Main Book: “High Performance JavaScript” by Nicholas C. Zakas
What you’ll build: A tool that analyzes JavaScript bundles and shows exactly what’s inside them—a treemap visualization of every module, its size, and what’s importing it. Like webpack-bundle-analyzer, but understanding how it works.
Why it teaches performance: JavaScript is the most expensive resource on the web. Understanding exactly what’s in your bundle and why is essential for reducing bundle size and improving Time to Interactive.
Core challenges you’ll face:
- Parsing source maps to reverse-engineer bundles → maps to understanding source maps
- Calculating true module sizes → maps to gzip vs raw size
- Building a dependency graph → maps to import/export analysis
- Creating a treemap visualization → maps to hierarchical data visualization
Key Concepts:
- Source Maps: “High Performance JavaScript” Ch. 1 - Nicholas C. Zakas
- JavaScript Bundling: Webpack documentation - concepts
- Tree Shaking: Rollup documentation - ES modules
- Treemap Visualization: D3.js documentation
Difficulty: Intermediate
Time estimate: 1-2 weeks
Prerequisites: Understanding of bundlers (webpack/rollup), JavaScript modules, basic data visualization
Real World Outcome
You’ll have a tool that takes a JavaScript bundle (with source map) and generates an interactive HTML treemap:
Example Output:
$ ./bundle-analyzer dist/main.js dist/main.js.map
Analyzing bundle: dist/main.js (1.2MB raw, 312KB gzipped)
Parsing source map...
Found 234 modules
Top 10 largest modules:
┌──────────────────────────────────────────────────────────────┐
│ Module │ Raw Size │ Gzip Size │
├──────────────────────────────────────────────────────────────┤
│ node_modules/moment/moment.js │ 287KB │ 72KB │
│ node_modules/lodash/lodash.js │ 531KB │ 71KB │
│ node_modules/react-dom/cjs/... │ 128KB │ 41KB │
│ src/components/DataGrid.js │ 45KB │ 12KB │
│ node_modules/axios/lib/axios.js │ 32KB │ 9KB │
└──────────────────────────────────────────────────────────────┘
RECOMMENDATIONS:
⚠️ moment.js: Consider day.js (2KB) or date-fns (tree-shakeable)
⚠️ lodash.js: Import specific functions: import debounce from 'lodash/debounce'
ℹ️ Large components: Consider code-splitting DataGrid
Generating treemap visualization...
Saved to: bundle-analysis.html
Opening bundle-analysis.html shows an interactive treemap where you can:
- Hover to see module details
- Click to zoom into directories
- Toggle between raw and gzipped sizes
- Search for specific modules
The Core Question You’re Answering
“What’s actually IN my JavaScript bundle—and why is it so big?”
Before you write any code, sit with this question. Most developers are shocked to discover their bundle includes the entirety of moment.js (72KB gzipped) when they only use one function, or that their “tiny” utility library pulled in 200KB of dependencies.
Concepts You Must Understand First
Stop and research these before coding:
- Source Maps
- What is a source map and what does it contain?
- How do source maps map generated code to original source?
- What are the `sources`, `sourcesContent`, and `mappings` fields?
- Book Reference: “Introduction to Source Maps” - HTML5 Rocks article
- JavaScript Module Systems
- What’s the difference between CommonJS and ES Modules?
- How do bundlers resolve `import` and `require` statements?
- What is tree shaking and why does it only work with ES Modules?
- Book Reference: Webpack documentation - Tree Shaking
- Compression
- Why is gzipped size different from raw size?
- What affects gzip compression ratio?
- Why does repetitive code compress well?
- Book Reference: “High Performance Web Sites” Ch. 4 - Steve Souders
Questions to Guide Your Design
Before implementing, think through these:
- Source Map Parsing
- How will you parse the source map format?
- How do you map byte offsets to original modules?
- What about bundles without source maps?
- Size Calculation
- How do you calculate the “cost” of each module?
- Should you count only the module’s code or include its dependencies?
- How do you estimate gzipped size per module?
- Visualization
- How will you structure the treemap hierarchy?
- How do you handle very small modules (1KB)?
- How do you make the visualization interactive?
Thinking Exercise
Trace a Bundle
Given this simple project:
// src/index.js
import { format } from 'date-fns';
import { debounce } from 'lodash';
console.log(format(new Date(), 'yyyy-MM-dd'));
export const debouncedFn = debounce(() => {}, 300);
Questions while tracing:
- If you use webpack without tree shaking, what gets bundled?
- If you use date-fns (tree-shakeable), what gets included vs excluded?
- If you use lodash (not tree-shakeable by default), what gets included?
- How would the bundle differ with `import debounce from 'lodash/debounce'`?
The Interview Questions They’ll Ask
Prepare to answer these:
- “How would you reduce the size of a JavaScript bundle?”
- “What is tree shaking and what conditions must be met for it to work?”
- “How do source maps work and what’s the security concern with shipping them?”
- “What’s the difference between bundle size and parse/compile time?”
- “How would you implement code splitting for a React application?”
Hints in Layers
Hint 1: Getting Started
Use the source-map npm package to parse source maps. It handles the complex mapping format for you.
Hint 2: Calculating Module Sizes
The source map tells you which original file each byte of output came from. Sum the output bytes per source file.
Hint 3: Gzip Estimation
You can’t perfectly estimate per-module gzip size (gzip works on the whole file). Approximate using zlib.gzipSync(moduleCode).length.
Hint 4: Treemap Visualization
Use D3’s treemap layout or a library like treemap-squarify. Structure data as nested objects with name and value properties.
Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| JavaScript performance | “High Performance JavaScript” by Nicholas C. Zakas | Ch. 1, 8 |
| Bundler internals | Webpack documentation | “Under the Hood” section |
| Data visualization | “The Visual Display of Quantitative Information” by Tufte | Ch. 1-2 |
Implementation Hints
Conceptual approach:
1. Read the bundle file and source map
2. Parse source map to get mappings and source content
3. For each byte position in bundle, find original source file
4. Aggregate bytes per source file
5. Calculate gzip size (approximation: gzip each module separately)
6. Build hierarchical structure (node_modules/react/index.js → node_modules → react → index.js)
7. Generate treemap data
8. Render as SVG with interactive zoom and tooltips
9. Add recommendations based on known heavy libraries
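Steps 6-7, folding flat module paths into a nested hierarchy, can be sketched as follows; the `name`/`children`/`value` shape is the conventional input for D3-style treemap layouts:

```javascript
// Fold flat { path, bytes } records into a nested tree:
// "node_modules/react/index.js" → node_modules → react → index.js
function buildTree(modules) {
  const root = { name: '(root)', children: [] };
  for (const { path, bytes } of modules) {
    let node = root;
    for (const part of path.split('/')) {
      let child = node.children.find((c) => c.name === part);
      if (!child) {
        child = { name: part, children: [] };
        node.children.push(child);
      }
      node = child;
    }
    node.value = bytes; // leaves carry size; the layout sums them upward
  }
  return root;
}

// Hypothetical module list aggregated from the source map.
const tree = buildTree([
  { path: 'node_modules/react/index.js', bytes: 7000 },
  { path: 'node_modules/react/jsx.js', bytes: 3000 },
  { path: 'src/app.js', bytes: 5000 },
]);
console.log(tree.children.map((c) => c.name)); // [ 'node_modules', 'src' ]
```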
Learning milestones:
- You parse source maps correctly → You understand the format
- You calculate accurate module sizes → You understand bundle composition
- You generate an interactive treemap → You understand visualization
- You provide actionable recommendations → You understand optimization strategies
Project 4: Build an Image Optimization Pipeline
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: Node.js (JavaScript)
- Alternative Programming Languages: Go, Rust, Python
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 2: Intermediate
- Knowledge Area: Image Processing / Compression / Responsive Images
- Software or Tool: sharp, libvips, imagemin
- Main Book: “High Performance Images” by Colin Bendell et al.
What you’ll build: An automated image optimization pipeline that takes any image and produces optimized versions in multiple formats (AVIF, WebP, JPEG) and sizes (responsive breakpoints), with automatic quality selection based on visual similarity.
Why it teaches performance: Images account for 50%+ of most pages’ weight. Understanding image formats, compression algorithms, and responsive images deeply is one of the highest-impact performance skills.
Core challenges you’ll face:
- Understanding image formats and when to use each → maps to codec characteristics
- Balancing quality vs file size → maps to perceptual quality metrics
- Generating optimal responsive breakpoints → maps to responsive image strategy
- Automating for any input → maps to building robust pipelines
Key Concepts:
- Image Formats: “High Performance Images” Ch. 3-5 - Colin Bendell
- Compression: Understanding lossy vs lossless, DCT, wavelets
- Responsive Images: “Responsive Web Design” Ch. 3 - Ethan Marcotte
- SSIM/DSSIM: Structural similarity for quality measurement
Difficulty: Intermediate
Time estimate: 1-2 weeks
Prerequisites: Basic understanding of image formats, command line tools
Real World Outcome
You’ll have a CLI tool that optimizes images for the web:
Example Output:
$ ./img-optimize hero-photo.png --quality auto --sizes "320,640,1024,1920"
Analyzing: hero-photo.png (5.2MB, 4000x3000 PNG)
Detecting optimal quality...
Target SSIM: 0.95 (high quality)
Optimal JPEG quality: 82
Optimal WebP quality: 78
Optimal AVIF quality: 62
Generating optimized images...
┌───────────────────────────────────────────────────────────────────────────┐
│ Output │ Size │ Savings │ SSIM │ │
├───────────────────────────────────────────────────────────────────────────┤
│ hero-photo-1920.avif │ 142KB │ 97% │ 0.951 │ ★ Best │
│ hero-photo-1920.webp │ 198KB │ 96% │ 0.958 │ │
│ hero-photo-1920.jpg │ 287KB │ 94% │ 0.962 │ │
├───────────────────────────────────────────────────────────────────────────┤
│ hero-photo-1024.avif │ 68KB │ 99% │ 0.948 │ │
│ hero-photo-1024.webp │ 94KB │ 98% │ 0.955 │ │
│ hero-photo-1024.jpg │ 142KB │ 97% │ 0.959 │ │
├───────────────────────────────────────────────────────────────────────────┤
│ hero-photo-640.avif │ 32KB │ 99% │ 0.944 │ │
│ hero-photo-640.webp │ 45KB │ 99% │ 0.952 │ │
│ hero-photo-640.jpg │ 68KB │ 99% │ 0.956 │ │
├───────────────────────────────────────────────────────────────────────────┤
│ hero-photo-320.avif │ 12KB │ 99% │ 0.941 │ │
│ hero-photo-320.webp │ 18KB │ 99% │ 0.949 │ │
│ hero-photo-320.jpg │ 28KB │ 99% │ 0.953 │ │
└───────────────────────────────────────────────────────────────────────────┘
Total original: 5.2MB
Total optimized: 1.1MB (12 variants)
Average savings: 97%
Suggested HTML:
<picture>
<source type="image/avif"
srcset="hero-photo-320.avif 320w,
hero-photo-640.avif 640w,
hero-photo-1024.avif 1024w,
hero-photo-1920.avif 1920w">
<source type="image/webp"
srcset="hero-photo-320.webp 320w, ...">
<img src="hero-photo-640.jpg"
srcset="hero-photo-320.jpg 320w, ..."
sizes="(max-width: 640px) 100vw, 640px"
alt="Hero photo">
</picture>
The Core Question You’re Answering
“How do you deliver the right image in the right format at the right size for every device—automatically?”
Before you write any code, sit with this question. A 4K phone has different needs than a 1080p laptop. Safari users need different formats than Chrome users. Solving this at scale requires understanding the entire image optimization landscape.
Concepts You Must Understand First
Stop and research these before coding:
- Image Compression Fundamentals
- What’s the difference between lossy and lossless compression?
- How does JPEG’s DCT-based compression work conceptually?
- Why can AVIF achieve better compression than WebP?
- Book Reference: “High Performance Images” Ch. 3 - Colin Bendell
- Responsive Images
- What do `srcset` and `sizes` attributes do?
- How does the browser choose which image to download?
- What’s the role of device pixel ratio?
- Book Reference: MDN - Responsive Images
- Quality Metrics
- What is SSIM (Structural Similarity Index)?
- How do you objectively measure “good enough” quality?
- Why is perceptual quality different from mathematical metrics?
- Book Reference: Research on SSIM and image quality
Questions to Guide Your Design
Before implementing, think through these:
- Format Selection
- When should you prefer AVIF over WebP over JPEG?
- How do you handle transparency (PNG → WebP/AVIF)?
- Should you ever upscale images?
- Quality Determination
- How do you find the optimal quality for a target SSIM?
- Should you use the same quality for all sizes of the same image?
- How do you handle images that are already heavily compressed?
- Pipeline Design
- How do you parallelize encoding (AVIF is slow)?
- How do you handle memory for very large images?
- What metadata should you preserve or strip?
Thinking Exercise
Trace Image Optimization
Given a 2MB JPEG photo (2000x1500px):
Trace through:
- What’s the file size at quality 100, 85, 70, 50?
- At what quality does visible degradation start?
- Converting to WebP at “equivalent” quality—how much smaller?
- Converting to AVIF at “equivalent” quality—how much smaller?
Questions:
- Why does JPEG quality 50 look worse than WebP quality 50?
- What’s special about photos vs. screenshots vs. graphics?
- Why might a PNG sometimes be smaller than a JPEG?
The Interview Questions They’ll Ask
Prepare to answer these:
- “Explain the difference between JPEG, PNG, WebP, and AVIF—when would you use each?”
- “How do responsive images (`srcset`/`sizes`) work, and how does the browser choose?”
- “What is the Art Direction problem and how does `<picture>` solve it?”
- “How would you implement lazy loading for images?”
- “What’s the tradeoff between image quality and file size, and how do you find the right balance?”
Hints in Layers
Hint 1: Getting Started
Use the sharp library—it’s fast (uses libvips) and supports all modern formats. Start with simple resize + format conversion.
Hint 2: Auto Quality
Binary search for quality: encode at 90, 80, 70… Check SSIM at each level. Find lowest quality that meets target SSIM.
Hint 3: SSIM Calculation
Use sharp’s stats() or a dedicated SSIM library. Compare original (at target size) to compressed version.
Hint 4: Parallelization
AVIF encoding is CPU-intensive. Use worker threads or a job queue. Encode different sizes/formats in parallel.
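Hint 2’s search only needs the quality-to-SSIM mapping to be monotonic, so it can be exercised with a stub in place of the real encoder and SSIM measurement; the linear `fakeSsim` curve below is invented for illustration:

```javascript
// Find the lowest quality whose SSIM still meets the target.
// `measure(quality)` must be monotonic: higher quality → higher SSIM.
function findOptimalQuality(measure, targetSsim, lo = 1, hi = 100) {
  let best = hi;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (measure(mid) >= targetSsim) {
      best = mid;   // good enough — try lower quality
      hi = mid - 1;
    } else {
      lo = mid + 1; // too lossy — go higher
    }
  }
  return best;
}

// Stub: pretend SSIM rises linearly with quality. Real curves are not
// linear, but they are monotonic, which is all the search relies on.
const fakeSsim = (q) => 0.80 + (q / 100) * 0.19;
console.log(findOptimalQuality(fakeSsim, 0.95)); // 79
```

In the real tool, `measure` would encode with sharp at the given quality and compare against the original with an SSIM library, so keeping each probe cheap (e.g. on a downscaled copy) pays off.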
Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Image formats deep dive | “High Performance Images” by Colin Bendell | Ch. 3-6 |
| Responsive images | “Responsible Responsive Design” by Scott Jehl | Ch. 4 |
| Web performance | “High Performance Web Sites” by Steve Souders | Ch. 10 |
Implementation Hints
Conceptual approach:
1. Load image and detect format, dimensions, color profile
2. Generate responsive breakpoints (or use provided sizes)
3. For each target size:
a. Resize image (preserve aspect ratio)
b. For each format (AVIF, WebP, JPEG):
- If auto quality: binary search for target SSIM
- Encode at optimal quality
- Save with meaningful filename
4. Generate HTML snippet for srcset/picture
5. Calculate and display savings
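Step 4 is plain string assembly; a sketch using hypothetical filenames in the same style as the example output above:

```javascript
// Build a srcset attribute value from generated variants.
// Width descriptors ("640w") let the browser pick using `sizes` and DPR.
function buildSrcset(basename, ext, widths) {
  return widths.map((w) => `${basename}-${w}.${ext} ${w}w`).join(', ');
}

console.log(buildSrcset('hero-photo', 'avif', [320, 640, 1024, 1920]));
// hero-photo-320.avif 320w, hero-photo-640.avif 640w, hero-photo-1024.avif 1024w, hero-photo-1920.avif 1920w
```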
Learning milestones:
- You resize and convert formats → You understand basic image processing
- You auto-detect optimal quality → You understand perceptual quality
- You generate responsive variants → You understand responsive images
- Your pipeline is fast and robust → You’ve built production-quality tooling
Project 5: Build a Service Worker Caching Layer
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: JavaScript
- Alternative Programming Languages: TypeScript
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 2: Intermediate
- Knowledge Area: Service Workers / Caching / Offline
- Software or Tool: Service Worker API, Cache API
- Main Book: “Building Progressive Web Apps” by Tal Ater
What you’ll build: A programmable caching layer using Service Workers that implements multiple caching strategies (cache-first, network-first, stale-while-revalidate) with intelligent routing based on resource type, and provides offline functionality.
Why it teaches performance: Service Workers are the ultimate caching mechanism—you control what gets cached, how it’s served, and can even work offline. Understanding service workers means understanding the pinnacle of web caching.
Core challenges you’ll face:
- Understanding the Service Worker lifecycle → maps to install, activate, fetch events
- Implementing caching strategies → maps to when to use which strategy
- Handling cache invalidation → maps to the hardest problem in CS
- Building offline-first functionality → maps to resilient web applications
Key Concepts:
- Service Worker Lifecycle: “Building Progressive Web Apps” Ch. 4 - Tal Ater
- Cache API: MDN - Cache API documentation
- Caching Strategies: Google Workbox documentation
- Offline UX: “Progressive Web Apps” by Jason Grigsby
Difficulty: Intermediate
Time estimate: 1-2 weeks
Prerequisites: Understanding of the fetch API, Promise-based JavaScript, HTTP caching basics
Real World Outcome
You’ll have a service worker library that you can drop into any project:
Example Output:
# Illustrative: a service worker only intercepts requests made by the page
# (curl bypasses it). Imagine your fetch handler attaching these debug headers:
$ curl -I https://your-site.com/main.js
x-cache-strategy: cache-first
x-cache-hit: true
x-cache-age: 3600
$ curl -I https://your-site.com/api/data
x-cache-strategy: network-first
x-cache-hit: false
# Offline test (network disabled)
$ curl https://your-site.com/
HTTP/1.1 200 OK
x-served-by: service-worker
x-offline: true
In the browser console:
[SW] Installing service worker v1.2.3
[SW] Precaching 23 assets (1.2MB total)
[SW] Activated! Ready to serve.
[SW] Cache hit: /main.css (cache-first)
[SW] Network fetch: /api/users (network-first)
[SW] Stale-while-revalidate: /avatar.jpg (updating in background)
[SW] Offline! Serving from cache: /index.html
Dashboard showing:
- Cache size by strategy
- Hit/miss ratios
- Network savings (requests avoided)
- Cache staleness metrics
The Core Question You’re Answering
“How do I make my web app feel instant—and work offline?”
Before you write any code, sit with this question. HTTP caching is limited to what the browser decides. Service Workers let YOU decide. This is the difference between “fast when cached” and “always fast.”
Concepts You Must Understand First
Stop and research these before coding:
- The Service Worker Lifecycle
- What triggers install, activate, and fetch events?
- Why do you need to call `skipWaiting()` and `clients.claim()`?
- How do you handle updates to the service worker itself?
- Book Reference: “Building Progressive Web Apps” Ch. 4 - Tal Ater
- Caching Strategies
- What is cache-first vs network-first vs stale-while-revalidate?
- When would you use each strategy?
- How do you handle cache versioning?
- Book Reference: Google Workbox documentation - Strategies
- The Cache API
- How do `caches.open()`, `cache.put()`, and `cache.match()` work?
- What are the storage limits?
- How do you handle cache eviction?
- Book Reference: MDN - Cache API
Questions to Guide Your Design
Before implementing, think through these:
- Strategy Selection
- How do you determine which strategy to use for a given request?
- Should strategies be configurable per route?
- How do you handle partial matches (regex patterns)?
- Cache Management
- How do you version caches for deployments?
- How do you clean up old caches?
- What’s your cache size limit strategy?
- Offline Experience
- What do you show when a resource isn’t cached and user is offline?
- How do you queue offline mutations for later sync?
- How do you detect online/offline status?
Thinking Exercise
Trace a Request Through Service Worker
For this request sequence, trace what the service worker does:
1. User visits /index.html (first time, online)
2. User visits /index.html (second time, online)
3. User visits /index.html (third time, offline)
4. User visits /api/data (first time, online)
5. User visits /api/data (second time, offline)
Given strategies:
- HTML: network-first with cache fallback
- API: stale-while-revalidate
- Static assets: cache-first
Questions while tracing:
- What gets cached when?
- What gets served from cache vs network?
- What happens when the network fails?
- When does the cache get updated?
The Interview Questions They’ll Ask
Prepare to answer these:
- “Explain the service worker lifecycle and how updates work.”
- “What’s the difference between cache-first and stale-while-revalidate?”
- “How would you implement offline support for a web application?”
- “What are the security implications of service workers?”
- “How do you debug service worker caching issues?”
- “What is Background Sync and when would you use it?”
Hints in Layers
Hint 1: Getting Started
Start with a minimal service worker that logs all fetch events. See what requests flow through it.
Hint 2: First Strategy
Implement cache-first for static assets. Open a cache, check for a match, return the cached response or fetch and cache.
Hint 3: Multiple Strategies
Create a routing system: `router.get('/api/*', networkFirst())`, `router.get('/*.css', cacheFirst())`.
Hint 4: Debugging
Use Chrome DevTools → Application → Service Workers. Check “Update on reload” during development.
Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Service worker fundamentals | “Building Progressive Web Apps” by Tal Ater | Ch. 4-6 |
| Caching strategies | Workbox documentation | “Strategies” section |
| Offline UX | “Progressive Web Apps” by Jason Grigsby | Ch. 7 |
Implementation Hints
Conceptual approach:
1. Create router class that maps URL patterns to strategies
2. Implement strategy classes:
- CacheFirst: try cache, fallback to network, cache response
- NetworkFirst: try network, fallback to cache, cache successful responses
- StaleWhileRevalidate: serve from cache, update in background
- NetworkOnly: always fetch, for POST requests
3. Handle install event: precache critical assets
4. Handle activate event: clean up old caches
5. Handle fetch event: route to appropriate strategy
6. Add versioning: cache names include version, clean old on activate
7. Add analytics: track hit/miss ratios
8. Add offline page: custom offline fallback when nothing is cached
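The three strategies in step 2 boil down to a few lines each. Here is a minimal sketch with the cache and network passed in as plain async functions (`cacheMatch`, `cachePut`, and `fetchFn` are names invented for this sketch) so the logic can be exercised outside a browser; in a real service worker they would wrap `caches.open()`/`cache.match()` and `fetch()`.

```javascript
// Cache-first: serve from cache when possible; otherwise fetch and cache.
async function cacheFirst(request, { cacheMatch, cachePut, fetchFn }) {
  const cached = await cacheMatch(request);
  if (cached) return cached;
  const response = await fetchFn(request);
  await cachePut(request, response);
  return response;
}

// Network-first: prefer a fresh response; fall back to cache when offline.
async function networkFirst(request, { cacheMatch, cachePut, fetchFn }) {
  try {
    const response = await fetchFn(request);
    await cachePut(request, response);
    return response;
  } catch (err) {
    const cached = await cacheMatch(request);
    if (cached) return cached;
    throw err; // offline and never cached: let the caller show an offline page
  }
}

// Stale-while-revalidate: answer from cache immediately, refresh in background.
async function staleWhileRevalidate(request, { cacheMatch, cachePut, fetchFn }) {
  const cached = await cacheMatch(request);
  const refresh = fetchFn(request)
    .then(async (response) => { await cachePut(request, response); return response; })
    .catch(() => undefined); // a failed background refresh is fine if we had a copy
  return cached ?? refresh;
}
```

In a real fetch handler, remember to `clone()` responses before caching, since a Response body can only be read once.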
Learning milestones:
- You intercept and log fetch events → You understand the SW lifecycle
- Cache-first works for static assets → You understand basic caching
- Multiple strategies route correctly → You understand routing patterns
- Your app works offline → You’ve built a complete caching layer
Project 6: Build a Core Web Vitals Monitor
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: JavaScript (TypeScript recommended)
- Alternative Programming Languages: Go (for backend)
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 3. The “Service & Support” Model (B2B Utility)
- Difficulty: Level 2: Intermediate
- Knowledge Area: Performance Metrics / Real User Monitoring
- Software or Tool: web-vitals library, Performance Observer API
- Main Book: “Web Performance in Action” by Jeremy Wagner
What you’ll build: A Real User Monitoring (RUM) system that captures Core Web Vitals (LCP, INP, CLS) from actual users, sends them to a backend, and displays trends over time. Like a mini Datadog RUM or SpeedCurve.
Why it teaches performance: Lab data (Lighthouse) tells you how fast your site COULD be. Real user data tells you how fast it ACTUALLY is. This project teaches you to measure what matters for real users.
Core challenges you’ll face:
- Understanding Performance Observer API → maps to how browsers expose metrics
- Capturing Core Web Vitals accurately → maps to metric definitions and edge cases
- Aggregating and analyzing metrics → maps to p75 vs averages, segmentation
- Building meaningful dashboards → maps to actionable performance data
Key Concepts:
- Core Web Vitals: web.dev documentation on LCP, INP, CLS
- Performance Observer API: MDN documentation
- RUM vs Synthetic Monitoring: “Web Performance in Action” Ch. 4 - Jeremy Wagner
- Statistical Analysis: Understanding percentiles and distributions
Difficulty: Intermediate Time estimate: 2 weeks Prerequisites: Understanding of Core Web Vitals, basic backend/database knowledge
Real World Outcome
You’ll have a RUM system with client SDK and dashboard:
Client SDK usage:
import { initVitals } from './vitals-monitor';
initVitals({
endpoint: 'https://your-backend/vitals',
sampleRate: 0.1, // 10% of users
tags: { version: '1.2.3', page: 'checkout' }
});
Console output:
[Vitals] LCP: 2,340ms (element: img.hero)
[Vitals] INP: 89ms (event: click on button.submit)
[Vitals] CLS: 0.04 (shift: img without dimensions)
[Vitals] Sending to backend...
Dashboard showing:
┌─────────────────────────────────────────────────────────────────┐
│ CORE WEB VITALS - Last 7 Days │
├─────────────────────────────────────────────────────────────────┤
│ │
│ LCP (p75) │
│ ██████████████████████░░░░░░░░ 2.3s ✅ Good │
│ ▼ 200ms from last week │
│ │
│ INP (p75) │
│ ████████░░░░░░░░░░░░░░░░░░░░░░ 156ms ✅ Good │
│ ▲ 12ms from last week │
│ │
│ CLS (p75) │
│ ████░░░░░░░░░░░░░░░░░░░░░░░░░░ 0.08 ✅ Good │
│ ▼ 0.02 from last week │
│ │
├─────────────────────────────────────────────────────────────────┤
│ Breakdown by page: │
│ /checkout - LCP: 3.1s ⚠️ INP: 234ms ⚠️ CLS: 0.02 ✅ │
│ /product - LCP: 1.8s ✅ INP: 89ms ✅ CLS: 0.15 ⚠️ │
│ /home - LCP: 2.1s ✅ INP: 112ms ✅ CLS: 0.04 ✅ │
└─────────────────────────────────────────────────────────────────┘
The Core Question You’re Answering
“How fast is my site for REAL users—not just my fast laptop on good WiFi?”
Before you write any code, sit with this question. Lighthouse might say your site scores 95, but real users on 3G in India might experience 8 second loads. RUM captures reality.
Concepts You Must Understand First
Stop and research these before coding:
- Core Web Vitals Definitions
- What exactly triggers an LCP measurement?
- How is INP calculated (not just first input)?
- What counts as a layout shift for CLS?
- Book Reference: web.dev - Core Web Vitals documentation
- Performance Observer API
- How do you observe `largest-contentful-paint`, `layout-shift`, and `event` entries?
- What’s the difference between `buffered: true` and without it?
- How do you handle entries that arrive after page load?
- Book Reference: MDN - Performance Observer API
- Percentiles and Aggregation
- Why is p75 used instead of average or median?
- How do you calculate percentiles from raw data?
- What’s a meaningful sample size for RUM?
- Book Reference: “High Performance Browser Networking” appendix on statistics
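Since p75 shows up in both the concepts and the interview questions, here is a minimal nearest-rank percentile sketch (the function name and the sample values are invented for illustration):

```javascript
// Nearest-rank percentile over raw samples: sort ascending, take the value at
// rank ceil(p * n). p75 answers "75% of page views were at least this fast."
function percentile(values, p) {
  if (values.length === 0) return undefined;
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil(p * sorted.length) - 1; // convert to 0-based index
  return sorted[Math.max(0, rank)];
}

// Ten LCP samples in ms, including two slow outliers:
const lcps = [1200, 1400, 1500, 1700, 1900, 2100, 2300, 2600, 4800, 9500];
percentile(lcps, 0.75); // 2600 (the outliers barely move p75; the average is 2900ms)
```

This is why p75 is preferred over the average: a handful of very slow loads drags the mean far more than the percentile.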
Questions to Guide Your Design
Before implementing, think through these:
- Client-Side Collection
- When do you send the metrics (on visibilitychange? unload?)?
- How do you handle SPA navigation (multiple LCPs)?
- How do you sample users fairly?
- Data Storage
- What’s your schema for storing vitals?
- How do you query for p75 across time ranges?
- How do you segment by page, device, connection?
- Dashboard Design
- What’s the most actionable way to display this data?
- How do you show trends and regressions?
- How do you drill down into problem areas?
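One reasonable answer to the sampling question above: decide once per session rather than per metric, so a sampled user contributes all of their vitals and the aggregates aren't biased. A sketch using a stable FNV-1a hash of a session id (all names here are this sketch's own):

```javascript
// Hash a session id to a number in [0, 1) so the sampling decision is stable:
// either all of a session's metrics are sent, or none are.
function hashToUnit(sessionId) {
  let h = 2166136261; // FNV-1a 32-bit offset basis
  for (let i = 0; i < sessionId.length; i++) {
    h ^= sessionId.charCodeAt(i);
    h = Math.imul(h, 16777619); // FNV prime, with 32-bit overflow semantics
  }
  return (h >>> 0) / 4294967296; // map unsigned 32-bit hash into [0, 1)
}

function isSampled(sessionId, sampleRate) {
  return hashToUnit(sessionId) < sampleRate;
}
```

Deciding per event with `Math.random()` would let one user contribute their LCP but not their CLS, which skews the percentiles.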
Thinking Exercise
Trace Metric Collection
For a single page load:
0ms - Navigation starts
200ms - FCP (First Contentful Paint)
500ms - Image starts loading
1200ms - Hero image loads (potential LCP)
1500ms - Above-fold text renders (potential LCP)
2000ms - User scrolls, ad loads (layout shift!)
3000ms - User clicks button (input)
3089ms - Visual response to click
4000ms - Video loads (potential LCP? No, too late)
Questions while tracing:
- What is the final LCP value and element?
- What is the CLS value?
- What would INP be? (Hint: consider more interactions)
- When should the SDK send the data?
The Interview Questions They’ll Ask
Prepare to answer these:
- “Explain the difference between LCP, FCP, and TTI.”
- “Why does Google use p75 for Core Web Vitals thresholds?”
- “How would you debug a high CLS score?”
- “What’s the difference between RUM and synthetic monitoring?”
- “How do you handle soft navigations (SPA) for Core Web Vitals?”
Hints in Layers
Hint 1: Getting Started
Use the `web-vitals` library to capture metrics. It handles all the edge cases.
Hint 2: Sending Data
Use `navigator.sendBeacon()` on the `visibilitychange` event. It’s reliable even when the page unloads.
Hint 3: Backend Storage
Start with SQLite or PostgreSQL. Schema: timestamp, page, metric, value, tags. Calculate percentiles with window functions.
Hint 4: Dashboard
Use Chart.js or D3 for visualization. Show time series of p75 values with threshold lines.
Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Performance metrics | “Web Performance in Action” by Jeremy Wagner | Ch. 4 |
| RUM implementation | web.dev documentation | “Measuring Performance” section |
| Data visualization | “Information Dashboard Design” by Stephen Few | Ch. 3-4 |
Implementation Hints
Conceptual approach:
CLIENT SDK:
1. Create PerformanceObserver for 'largest-contentful-paint'
2. Create PerformanceObserver for 'layout-shift' (hadRecentInput filter!)
3. Create PerformanceObserver for 'event' (INP approximation)
4. On visibilitychange='hidden': send metrics via sendBeacon
5. Include context: URL, user agent, connection type, viewport
6. Support custom tags for segmentation
BACKEND:
1. POST endpoint to receive metrics
2. Store in time-series format
3. Aggregate queries: p75 by page, by day, by device
4. API endpoints for dashboard data
DASHBOARD:
1. Time series charts for each metric
2. Threshold lines (green/yellow/red zones)
3. Breakdown tables by page/device
4. Alerts when metrics regress
Learning milestones:
- You capture all three Core Web Vitals → You understand the metrics
- You reliably send data on unload → You understand browser events
- You calculate meaningful p75 values → You understand RUM statistics
- Your dashboard shows actionable insights → You’ve built real monitoring
Project 7: Build a Lazy Loading System
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: JavaScript (TypeScript)
- Alternative Programming Languages: Vanilla JS
- Coolness Level: Level 2: Practical but Forgettable
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 1: Beginner
- Knowledge Area: DOM / Intersection Observer / Loading Strategies
- Software or Tool: Intersection Observer API
- Main Book: “High Performance Web Sites” by Steve Souders
What you’ll build: A lazy loading library that defers loading of images, iframes, and heavy components until they’re about to enter the viewport. Includes placeholder animations, error handling, and priority hints.
Why it teaches performance: Lazy loading is fundamental to performance—don’t load what users won’t see. Understanding Intersection Observer deeply opens the door to many advanced patterns (infinite scroll, animations, etc.).
Core challenges you’ll face:
- Understanding Intersection Observer API → maps to efficient visibility detection
- Handling edge cases → maps to print, zoom, resize scenarios
- Managing load priorities → maps to what to load first when scrolling fast
- Creating smooth loading experiences → maps to placeholders and transitions
Key Concepts:
- Intersection Observer: MDN documentation
- Native Lazy Loading: `loading="lazy"` attribute behavior
- Placeholder Strategies: LQIP, blur-up, skeleton screens
- Priority Hints: `fetchpriority` attribute
Difficulty: Beginner Time estimate: Weekend Prerequisites: Basic DOM manipulation, understanding of viewport concepts
Real World Outcome
You’ll have a lazy loading library:
Usage:
<img data-lazy-src="/huge-image.jpg"
data-lazy-placeholder="/tiny-placeholder.jpg"
alt="Product photo">
<iframe data-lazy-src="https://youtube.com/embed/xyz"></iframe>
<div data-lazy-component="ProductReviews"
data-props='{"productId": 123}'></div>
import { LazyLoader } from './lazy-loader';
const loader = new LazyLoader({
rootMargin: '200px', // Load 200px before entering viewport
threshold: 0,
placeholder: 'blur', // 'blur' | 'skeleton' | 'none'
onLoad: (el) => console.log('Loaded:', el),
onError: (el, err) => console.error('Failed:', el, err)
});
loader.observe(document.querySelectorAll('[data-lazy-src]'));
Console output:
[LazyLoader] Observing 47 elements
[LazyLoader] Loading: img[src=product-1.jpg] (200px from viewport)
[LazyLoader] Loaded: img[src=product-1.jpg] (234ms)
[LazyLoader] Loading: iframe[youtube.com] (0px from viewport)
[LazyLoader] Loading component: ProductReviews
[LazyLoader] Stats: 47 observed, 12 loaded, 35 pending
Visual effect:
- Placeholder shows immediately
- Real image fades in smoothly when loaded
- Error state shows if load fails
The Core Question You’re Answering
“Why load something the user might never scroll to see?”
Before you write any code, sit with this question. A typical page might have 50 images, but only 5 are above the fold. Loading all 50 wastes bandwidth and delays interactivity.
Concepts You Must Understand First
Stop and research these before coding:
- Intersection Observer API
- What are `rootMargin` and `threshold` and how do they work?
- How is it different from scroll event listeners?
- When does the callback fire (synchronously or asynchronously)?
- Book Reference: MDN - Intersection Observer API
- Native Lazy Loading
- What does `loading="lazy"` do?
- What are its limitations?
- When would you need a custom solution?
- Book Reference: web.dev - Browser-level lazy loading
- Loading States
- What makes a good placeholder? (LQIP, blur, skeleton)
- How do you prevent layout shift when lazy loading?
- How do you handle load errors gracefully?
- Book Reference: “Every Layout” by Heydon Pickering
Questions to Guide Your Design
Before implementing, think through these:
- Observer Configuration
- Which `rootMargin` gives the best experience?
- Should you use one observer or one per element?
- How do you handle dynamically added elements?
- Loading Behavior
- What happens if user scrolls quickly past elements?
- Should you abort loads for elements that leave viewport?
- How do you prioritize elements when many need loading?
- Placeholder Experience
- How do you prevent CLS when image loads?
- What transition creates the smoothest experience?
- How do you handle images with unknown dimensions?
Thinking Exercise
Trace Intersection Observer
Given a page with 20 images:
- 3 above the fold
- 17 below the fold
- User scrolls slowly to bottom
Trace through:
- What happens on page load?
- What triggers as user scrolls?
- What if user scrolls very quickly?
- What if user prints the page?
Questions:
- When does the callback fire for each image?
- What’s the `intersectionRatio` at different points?
- How would `rootMargin: '200px'` change behavior?
The Interview Questions They’ll Ask
Prepare to answer these:
- “What is Intersection Observer and how does it differ from scroll events?”
- “What are the performance benefits of lazy loading?”
- “How do you prevent CLS when lazy loading images?”
- “What’s the difference between native lazy loading and JavaScript solutions?”
- “How would you lazy load React components?”
Hints in Layers
Hint 1: Getting Started
Create an IntersectionObserver with a callback that swaps `data-lazy-src` to `src` when the element is visible.
Hint 2: Better UX
Add `rootMargin: '100px'` to start loading before elements enter the viewport, so users rarely see a placeholder.
Hint 3: Preventing CLS
Require `width` and `height` attributes or use `aspect-ratio` CSS. Reserve space before the image loads.
Hint 4: Error Handling
Listen for the `error` event on elements. Show a fallback or retry with exponential backoff.
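Hint 4's retry can be a tiny helper. A sketch (the name `loadWithRetry` is invented here, not from a library):

```javascript
// Retry a failing loader with exponential backoff: 200ms, 400ms, 800ms, ...
async function loadWithRetry(load, { retries = 3, baseDelayMs = 200 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await load();
    } catch (err) {
      if (attempt >= retries) throw err; // give up; caller shows the error state
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

In the lazy loader, `load` would be a function that resolves when the image's `load` event fires and rejects on its `error` event.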
Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Loading strategies | “High Performance Web Sites” by Steve Souders | Ch. 6 |
| Intersection Observer | MDN documentation | Full guide |
| Image optimization | “High Performance Images” by Colin Bendell | Ch. 9 |
Implementation Hints
Conceptual approach:
1. Create class LazyLoader with configurable options
2. Store IntersectionObserver instance
3. For each observed element:
- If image: set placeholder, observe
- If iframe: show thumbnail, observe
- If component: render skeleton, observe
4. On intersection:
- Start loading (swap src, import component, etc.)
- Show loading state
- On success: fade in, unobserve
- On error: show error state, optionally retry
5. Handle edge cases:
- Print: load everything
- prefers-reduced-motion: skip animations
- No JS: use noscript fallback
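The approach above can be sketched as a small class. The observer factory is made injectable (an assumption of this sketch, not part of any browser API) so the swap logic can be tested without a real `IntersectionObserver`:

```javascript
class LazyLoader {
  constructor({
    // In the browser, the default wires up a real IntersectionObserver.
    createObserver = (cb) => new IntersectionObserver(cb, { rootMargin: '200px' }),
    onLoad = () => {},
  } = {}) {
    this.onLoad = onLoad;
    // One shared observer for all elements is cheaper than one per element.
    this.observer = createObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) this.load(entry.target);
      }
    });
  }

  observe(elements) {
    for (const el of elements) this.observer.observe(el);
  }

  load(el) {
    const src = el.dataset.lazySrc;
    if (!src) return;
    el.src = src;                // swap the placeholder for the real source
    delete el.dataset.lazySrc;   // mark as loaded
    this.observer.unobserve(el); // stop watching once loading has started
    this.onLoad(el);
  }
}
```

Injecting the observer also makes it easy to handle the "print: load everything" edge case: on `beforeprint`, call `load()` directly for every still-observed element.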
Learning milestones:
- Basic lazy loading works → You understand Intersection Observer
- Placeholders prevent CLS → You understand layout stability
- Error handling is graceful → You’ve built robust code
- Performance is measurable → You can prove the improvement
Project 8: Build an HTTP/2 Server Push Simulator
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: Go
- Alternative Programming Languages: Node.js, Rust
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: HTTP/2 / Protocol / Server
- Software or Tool: Go http2 package, nghttp2
- Main Book: “High Performance Browser Networking” by Ilya Grigorik
What you’ll build: An HTTP/2 server that implements server push—proactively sending resources to the client before they’re requested. Includes intelligent push decisions based on page analysis.
Why it teaches performance: HTTP/2 server push is one of the most misunderstood features. Building it yourself teaches you HTTP/2 multiplexing, push promises, and why push often hurts rather than helps (and when it does help).
Core challenges you’ll face:
- Understanding HTTP/2 framing → maps to how multiplexing works
- Implementing push promises → maps to server push mechanism
- Avoiding over-pushing → maps to why push often backfires
- Cache-aware pushing → maps to push only what client needs
Key Concepts:
- HTTP/2 Multiplexing: “High Performance Browser Networking” Ch. 12 - Ilya Grigorik
- Push Promises: RFC 7540 Section 8.2
- Cache Digests: HTTP Cache Digests draft spec
- Early Hints (103): Alternative to push
Difficulty: Advanced Time estimate: 2-4 weeks Prerequisites: Strong understanding of HTTP, experience with server-side programming
Real World Outcome
You’ll have an HTTP/2 server with intelligent push:
Server output:
$ ./push-server --port 443 --cert cert.pem --key key.pem
[HTTP/2] Server listening on :443
[HTTP/2] Connection from 192.168.1.100
[PUSH] Analyzing /index.html for push candidates
[PUSH] Will push: /style.css (critical CSS)
[PUSH] Will push: /app.js (deferred, but commonly requested)
[PUSH] Skipping: /hero.jpg (client has in cache - cache digest)
[HTTP/2] Stream 1: GET /index.html
[HTTP/2] Stream 2: PUSH_PROMISE /style.css
[HTTP/2] Stream 4: PUSH_PROMISE /app.js
[HTTP/2] Stream 1: Response 200 (12KB, 34ms)
[HTTP/2] Stream 2: Response 200 (8KB, pushed ahead of request)
[HTTP/2] Stream 4: Response 200 (45KB, pushed ahead of request)
[STATS] Saved 2 round trips (~180ms estimated)
Client request showing push working:
$ nghttp -v https://localhost/
[ 0.032] recv PUSH_PROMISE frame <length=34, flags=0x04, stream_id=1>
; Promised Stream ID: 2
; path=/style.css
[ 0.032] recv PUSH_PROMISE frame <length=32, flags=0x04, stream_id=1>
; Promised Stream ID: 4
; path=/app.js
[ 0.045] recv DATA frame <length=8234, flags=0x01, stream_id=2>
; CSS received before main HTML finished!
[ 0.089] recv DATA frame <length=45123, flags=0x01, stream_id=4>
; JS received before main HTML finished!
The Core Question You’re Answering
“Can the server know what resources the client will need—and send them proactively?”
Before you write any code, sit with this question. Push sounds great in theory: don’t wait for the client to discover dependencies. But in practice, push often fails because the client might already have the resources cached.
Concepts You Must Understand First
Stop and research these before coding:
- HTTP/2 Framing
- What are streams, frames, and how does multiplexing work?
- What’s the difference between HEADERS, DATA, and PUSH_PROMISE frames?
- How does flow control work in HTTP/2?
- Book Reference: “High Performance Browser Networking” Ch. 12 - Ilya Grigorik
- Server Push Mechanism
- What is a PUSH_PROMISE and when is it sent?
- How does the client accept or reject a push?
- What are the rules about push (same origin, etc.)?
- Book Reference: RFC 7540 Section 8.2
- Why Push Often Fails
- What happens if client already has the resource cached?
- What is push pollution?
- What are alternatives (preload, Early Hints 103)?
- Book Reference: “HTTP/2 Server Push Considered Harmful” - Jake Archibald
Questions to Guide Your Design
Before implementing, think through these:
- Push Decisions
- How do you determine what to push for a given page?
- How do you avoid pushing cached resources?
- What’s the priority of pushed resources vs requested?
- Implementation Details
- How do you integrate push with your HTTP/2 server?
- When do you send PUSH_PROMISE (before or with response)?
- How do you handle push cancellation?
- Measurement
- How do you measure if push actually helped?
- What metrics indicate push success vs failure?
- How do you A/B test push strategies?
Thinking Exercise
Trace HTTP/2 Push
For this sequence:
1. Client connects to server (new connection)
2. Client requests /index.html
3. Server analyzes HTML, decides to push /style.css and /app.js
4. Client already has /style.css cached
5. Client doesn't have /app.js cached
Trace through:
- What frames does the server send?
- What does the client do with the PUSH_PROMISEs?
- How does the client signal it doesn’t want /style.css?
- What are the network bytes transferred vs saved?
Questions:
- How could the server know about the client’s cache?
- What if /style.css has changed since client cached it?
- What if client requested /app.js before push arrived?
The Interview Questions They’ll Ask
Prepare to answer these:
- “Explain HTTP/2 multiplexing and how it differs from HTTP/1.1.”
- “What is server push and what problem does it solve?”
- “Why has server push been largely abandoned? What replaced it?”
- “What are Early Hints (103) and how do they compare to push?”
- “How does HTTP/3 differ from HTTP/2?”
Hints in Layers
Hint 1: Getting Started
Use Go’s `net/http` package with HTTP/2 support. The `http.Pusher` interface lets you push resources.
Hint 2: Push Discovery
Parse HTML responses for `<link>`, `<script>`, and `<img>` tags. Maintain a map of page → push candidates.
Hint 3: Cache Awareness
Implement a simple cache digest: the client sends a hash of cached URLs in a header. The server only pushes uncached resources.
Hint 4: Measurement
Log when the push completes vs. when the client would have requested the resource. Calculate the time saved.
Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| HTTP/2 protocol | “High Performance Browser Networking” by Ilya Grigorik | Ch. 12-13 |
| Server implementation | “HTTP: The Definitive Guide” (updated for HTTP/2) | Ch. on protocols |
| Push strategy | web.dev articles on server push | Various |
Implementation Hints
Conceptual approach:
1. Create HTTP/2 server with TLS (required for HTTP/2)
2. Handler for main page requests:
a. Check if http.Pusher is available
b. Analyze page for push candidates (parse HTML, check config)
c. Check client's cache digest header
d. For each uncached candidate: pusher.Push(url, opts)
e. Send main response
3. Push candidate discovery:
- Maintain registry of page dependencies
- Auto-discover from HTML parsing
- Honor Link preload headers
4. Cache digest handling:
- Define header format for cache digest
- Parse and check before pushing
5. Metrics collection:
- Track push acceptance rate
- Track cache hit rate
- Track estimated time saved
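Steps 3 and 4 (candidate discovery and the cache-digest check) are mostly string and set logic. A sketch in JavaScript for brevity, even though the project proposes Go; the regexes and the header format are this sketch's assumptions, not any standard:

```javascript
// Discover push candidates by scanning HTML for same-origin subresources.
function pushCandidates(html) {
  const urls = [];
  const patterns = [
    /<link[^>]+href="([^"]+\.css)"/g, // stylesheets
    /<script[^>]+src="([^"]+)"/g,     // scripts
  ];
  for (const re of patterns) {
    for (const m of html.matchAll(re)) {
      if (m[1].startsWith('/')) urls.push(m[1]); // push is same-origin only
    }
  }
  return urls;
}

// Drop candidates the client says it already has. The digest here is just a
// comma-separated list of cached paths in a custom request header; the real
// (expired) IETF cache-digest draft used a compact Golomb-coded set instead.
function filterByCacheDigest(candidates, digestHeader) {
  const cached = new Set((digestHeader || '').split(',').filter(Boolean));
  return candidates.filter((url) => !cached.has(url));
}
```

With these two pieces, the server handler becomes: discover candidates for the page, filter by the client's digest, then push whatever remains before writing the main response.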
Learning milestones:
- Basic HTTP/2 server works → You understand the protocol
- Push promises are sent and received → You understand push mechanism
- Cache-aware pushing reduces waste → You understand why push often fails
- Metrics show actual improvement → You can evaluate push effectiveness
Project 9: Build a Layout Shift Debugger
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: JavaScript
- Alternative Programming Languages: TypeScript
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 2: Intermediate
- Knowledge Area: DOM / Layout / CLS
- Software or Tool: Performance Observer, MutationObserver
- Main Book: “Web Performance in Action” by Jeremy Wagner
What you’ll build: A visual debugging tool that highlights layout shifts as they happen, shows exactly which elements moved and why, and provides actionable recommendations. Like a CLS-specific DevTools extension.
Why it teaches performance: CLS is the most confusing Core Web Vital. Understanding what causes layout shifts requires deep knowledge of CSS layout, font loading, async content, and the DOM. This tool forces you to understand all of it.
Core challenges you’ll face:
- Capturing layout shift events → maps to Performance Observer API
- Visualizing element movement → maps to understanding getBoundingClientRect
- Identifying shift causes → maps to correlation with DOM changes
- Providing actionable fixes → maps to CSS layout best practices
Key Concepts:
- Layout Shift Entry: web.dev CLS documentation
- Layout Algorithms: “CSS: The Definitive Guide” by Eric Meyer
- Font Loading: “Web Font Loading Patterns” by Zach Leatherman
- Intrinsic Sizing: `aspect-ratio`, explicit dimensions
Difficulty: Intermediate Time estimate: 1-2 weeks Prerequisites: Understanding of CSS layout, DOM APIs, Core Web Vitals basics
Real World Outcome
You’ll have a debugging tool that runs in the browser:
Browser overlay showing:
┌─────────────────────────────────────────────────────────────────┐
│ 🔴 LAYOUT SHIFT DETECTED! │
│ │
│ CLS Score: 0.15 (+0.08) │
│ Shifted Element: <div class="ad-container"> │
│ Movement: 0px → 250px (Y-axis) │
│ Impact Fraction: 0.4 │
│ Distance Fraction: 0.2 │
│ │
│ LIKELY CAUSE: Image without dimensions │
│ <img src="ad-banner.jpg"> (no width/height) │
│ │
│ FIX: Add explicit dimensions: │
│ <img src="ad-banner.jpg" width="728" height="90"> │
│ OR use aspect-ratio: 728/90; │
│ │
│ [Highlight Element] [Pause Recording] [Export Report] │
└─────────────────────────────────────────────────────────────────┘
Console logging:
[CLS Debug] Shift #1 at 1,234ms
Element: .hero-image (240px shift)
Cause: Image loaded (no dimensions reserved)
Suggestion: Add width="1200" height="600"
[CLS Debug] Shift #2 at 2,100ms
Element: .article-content (180px shift)
Cause: Web font swap
Suggestion: Use font-display: optional or match fallback metrics
[CLS Debug] Total CLS: 0.23 (POOR - needs work!)
The Core Question You’re Answering
“WHY did my page layout jump—and how do I stop it?”
Before you write any code, sit with this question. CLS is frustrating because the shift has already happened by the time you notice it. You need to catch shifts in real-time and correlate them with their causes.
Concepts You Must Understand First
Stop and research these before coding:
- Layout Shift Entry API
- What properties does a layout-shift entry have?
- What is `hadRecentInput` and why does it matter?
- How is the shift value calculated (impact × distance)?
- Book Reference: web.dev - CLS documentation
- CSS Layout Mechanics
- What triggers layout recalculation?
- How does content insertion affect surrounding elements?
- What is the containing block and how does it affect shifts?
- Book Reference: “CSS: The Definitive Guide” Ch. 7 - Eric Meyer
- Common CLS Causes
- Images/iframes without dimensions
- Dynamically injected content
- Web fonts causing FOIT/FOUT
- Animations that affect layout
- Book Reference: web.dev - Optimize CLS
Questions to Guide Your Design
Before implementing, think through these:
- Detection
- How do you capture layout shifts with Performance Observer?
- How do you identify WHICH element shifted?
- How do you correlate shifts with DOM/style changes?
- Visualization
- How do you highlight the shifted element?
- How do you show the before/after positions?
- How do you handle rapid successive shifts?
- Diagnosis
- How do you determine the CAUSE of a shift?
- Can you detect images without dimensions?
- Can you detect font loading?
Thinking Exercise
Trace a Layout Shift
For this page load sequence:
<body>
<header>Logo</header>
<main>
<img src="hero.jpg"> <!-- No dimensions! -->
<h1>Title</h1>
</main>
</body>
Timeline:
0ms - HTML parsed, layout calculated
100ms - hero.jpg request starts
500ms - hero.jpg (400px tall) finishes loading
501ms - Browser recalculates layout
502ms - h1 shifts down 400px
Questions while tracing:
- What is the CLS score for this shift?
- Which element is reported in the layout-shift entry?
- How would adding `height="400"` to the img prevent this?
- What if the img had `height: auto; width: 100%` CSS?
The Interview Questions They’ll Ask
Prepare to answer these:
- “What causes Cumulative Layout Shift and how do you fix it?”
- “Why do images without dimensions cause layout shifts?”
- “How do web fonts cause CLS and what are the solutions?”
- “What’s the difference between a user-initiated shift and an unexpected shift?”
- “How would you debug a high CLS score on a production site?”
Hints in Layers
Hint 1: Getting Started
Create a PerformanceObserver for `layout-shift` entries. Log each shift’s value and `sources`.
Hint 2: Finding Shifted Elements
Each entry has a `sources` array with `node` and `previousRect`/`currentRect` properties.
Hint 3: Visual Highlighting
Create an overlay div positioned at `previousRect`. Animate to `currentRect`. Use a red border.
Hint 4: Cause Detection
Use MutationObserver to track DOM changes. Correlate timing with shifts.
Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| CLS mechanics | web.dev documentation | “Optimize CLS” |
| CSS layout | “CSS: The Definitive Guide” by Eric Meyer | Ch. 7 |
| Font loading | “Responsible Responsive Design” by Scott Jehl | Ch. 4 |
Implementation Hints
Conceptual approach:
1. Create PerformanceObserver for layout-shift
2. For each shift entry:
- Extract shifted elements from sources
- Calculate impact and distance
- Store in shift history
3. Create visual overlay:
- Draw rectangle at previousRect (red)
- Animate to currentRect (shows movement)
4. Cause detection:
- Track image loads (no dimensions check)
- Track font loading (document.fonts.ready)
- Track DOM insertions (MutationObserver)
- Correlate timing with shifts
5. Generate recommendations:
- Images: suggest dimensions
- Fonts: suggest font-display
- Dynamic content: suggest min-height
6. Export report with all shifts and fixes
Learning milestones:
- You capture layout shifts → You understand the API
- You visualize shifted elements → You understand DOM geometry
- You identify shift causes → You understand layout mechanics
- You provide useful fixes → You can solve CLS problems
Project 10: Build a JavaScript Profiler Visualization
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: JavaScript
- Alternative Programming Languages: TypeScript
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: JavaScript / V8 / Profiling
- Software or Tool: Performance API, Long Tasks API
- Main Book: “High Performance JavaScript” by Nicholas C. Zakas
What you’ll build: A flame graph visualization tool that captures JavaScript execution and displays it as an interactive flame graph. Shows which functions take the most time and their call relationships.
Why it teaches performance: JavaScript execution is the biggest threat to responsiveness. Understanding how to profile, identify bottlenecks, and visualize execution is essential for building fast interactive applications.
Core challenges you’ll face:
- Capturing execution traces → maps to Performance API and User Timing
- Building call stack hierarchy → maps to understanding execution context
- Creating flame graph visualization → maps to hierarchical data viz
- Identifying optimization opportunities → maps to performance patterns
Key Concepts:
- User Timing API: `performance.mark()`, `performance.measure()`
- Long Tasks API: Tasks that block the main thread for > 50ms
- Flame Graphs: Visualization technique invented by Brendan Gregg
- V8 Performance: “High Performance JavaScript” by Nicholas C. Zakas
Difficulty: Advanced Time estimate: 2-3 weeks Prerequisites: Strong JavaScript knowledge, understanding of call stacks and async execution
Real World Outcome
You’ll have a profiling tool with visualization:
Usage:
import { Profiler } from './js-profiler';
const profiler = new Profiler();
profiler.start();
// Your application runs...
const trace = profiler.stop();
profiler.visualize(trace, '#flame-graph-container');
Flame graph showing:
┌───────────────────────────────────────────────────────────────────────────┐
│ Total Time: 847ms │
├───────────────────────────────────────────────────────────────────────────┤
│ │
│ ████████████████████████████████████████████████████████████████ main() │
│ ├──────────────────────────┤├─────────────────────────────────┤ │
│ │ fetchData() ││ renderUI() │ │
│ │ 312ms ││ 534ms │ │
│ ├──────────┤├─────────────┤│├────────────┤├─────────────────┤│ │
│ │parseJSON ││ wait │││sortItems ││ renderList ││ │
│ │ 45ms ││ 267ms │││ 89ms ││ 445ms ││ │
│ │ ││ │││ │├─────┤├─────────┤││ │
│ │ ││ │││ ││item1││ item2 │││ │
│ │ ││ │││ ││ 12ms││ 433ms │←── HOTSPOT! │
│ │
│ Hover for details. Click to zoom. │
└───────────────────────────────────────────────────────────────────────────┘
ANALYSIS:
⚠️ renderList.item2 takes 433ms (51% of total)
Location: src/components/List.js:142
Called 1x, avg 433ms
Suggestion: Check for expensive DOM operations or computations
The Core Question You’re Answering
“WHERE is my JavaScript spending all its time?”
Before you write any code, sit with this question. JavaScript runs on a single thread. If something takes 500ms, nothing else can happen. Flame graphs show exactly where time goes, making optimization obvious.
Concepts You Must Understand First
Stop and research these before coding:
- JavaScript Execution Model
- How does the call stack work?
- What is the event loop and task queue?
- Why do long tasks block interactivity?
- Book Reference: “High Performance JavaScript” Ch. 1 - Nicholas C. Zakas
- Profiling APIs
- How do `performance.mark()` and `performance.measure()` work?
- What is the Long Tasks API?
- What’s the difference between self-time and total-time?
- Book Reference: MDN - User Timing API
- Flame Graphs
- What does width represent in a flame graph?
- Why is call order (x-axis) alphabetical not temporal?
- How do you read a flame graph to find bottlenecks?
- Book Reference: Brendan Gregg’s Flame Graphs documentation
Questions to Guide Your Design
Before implementing, think through these:
- Data Collection
- How do you capture function timing without manual instrumentation?
- How do you handle async functions (Promises, callbacks)?
- What’s the overhead of profiling?
- Data Processing
- How do you build the call tree from timing entries?
- How do you calculate self-time vs total-time?
- How do you aggregate multiple calls to the same function?
- Visualization
- How do you render a flame graph (SVG? Canvas?)?
- How do you handle very deep call stacks?
- How do you make it interactive (zoom, hover)?
Thinking Exercise
Trace Function Execution
For this code:
function main() {
const data = fetchData(); // 100ms
const sorted = sortData(data); // 50ms
render(sorted); // 200ms
}
function render(data) {
setupDOM(); // 20ms
for (const item of data) {
renderItem(item); // 30ms each, 5 items
}
finalize(); // 30ms
}
Questions while tracing:
- What’s the total time for `main()`?
- What’s the self-time for `render()` (excluding children)?
- Which function is the hotspot?
- How would you draw this as a flame graph?
The Interview Questions They’ll Ask
Prepare to answer these:
- “How would you profile JavaScript code in production?”
- “What’s the difference between self-time and total-time?”
- “How do you identify the biggest performance bottleneck in JavaScript?”
- “What causes long tasks and how do you break them up?”
- “How does the browser’s main thread work and why does blocking it matter?”
Hints in Layers
Hint 1: Getting Started
Wrap functions with timing: performance.mark('fn-start'), performance.mark('fn-end'), performance.measure('fn', 'fn-start', 'fn-end').
Hint 2: Building the Tree
Performance measures have startTime and duration. Overlapping measures indicate nesting (parent-child).
Hint 3: Rendering Flame Graph
Use SVG rectangles. Width = duration (scaled). Y position = stack depth. Color by self-time percentage.
Hint 4: Making it Useful
Add: source locations (Error().stack), call counts, hover details, zoom on click.
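Hint 2 plus the self-time calculation can be sketched as a stack-based tree builder over `{name, startTime, duration}` records (the shape of PerformanceMeasure entries). This is one plausible nesting scheme, not the only one:

```javascript
// Nest measures by start/end overlap: a measure that starts before the
// previous one ends is treated as its child.
function buildCallTree(measures) {
  const sorted = [...measures].sort(
    // Tie-break on duration so the longer (outer) measure becomes the parent.
    (a, b) => a.startTime - b.startTime || b.duration - a.duration
  );
  const root = { name: '(root)', startTime: 0, duration: Infinity, children: [] };
  const stack = [root];
  for (const m of sorted) {
    const node = { ...m, children: [] };
    // Pop frames that ended before this measure started.
    while (stack.length > 1) {
      const top = stack[stack.length - 1];
      if (m.startTime < top.startTime + top.duration) break;
      stack.pop();
    }
    stack[stack.length - 1].children.push(node);
    stack.push(node);
  }
  return root;
}

// Self-time = own duration minus time spent in direct children.
function selfTime(node) {
  return node.duration - node.children.reduce((sum, c) => sum + c.duration, 0);
}
```

Running the thinking exercise's numbers through this (main 350ms containing fetchData 100ms, sortData 50ms, render 200ms) gives main a self-time of 0ms — all of its time is in children.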
Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| JavaScript execution | “High Performance JavaScript” by Nicholas C. Zakas | Ch. 1-2 |
| Profiling techniques | Chrome DevTools documentation | Performance panel |
| Flame graphs | Brendan Gregg’s blog | “Flame Graphs” article |
Implementation Hints
Conceptual approach:
1. Create Profiler class that wraps the app
2. Instrumentation options:
- Manual: performance.mark/measure API
- Auto: Proxy wrapper for objects
- Build-time: Babel plugin to inject timing
3. Build call tree from measures:
- Sort by startTime
- Stack: if measure starts before previous ends, it's a child
- Calculate self-time = duration - sum(children durations)
4. Generate flame graph data:
- Root node = entire profile
- Children sorted alphabetically (not by time!)
- Width proportional to total time
5. Render as SVG:
- Rectangles for each call
- Color gradient: green (fast) → red (slow)
- Hover: show details
- Click: zoom to that subtree
6. Analysis output:
- Top 10 by self-time
- Top 10 by call count
- Suggestions based on patterns
Learning milestones:
- You capture timing data → You understand Performance API
- You build the call tree → You understand execution flow
- You render a flame graph → You understand profiling visualization
- You identify hotspots → You can optimize real code
Project 11: Build a Font Loading Optimizer
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: JavaScript (Node.js + Browser)
- Alternative Programming Languages: Go
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 2: Intermediate
- Knowledge Area: Typography / CSS / Loading
- Software or Tool: fonttools, Font Face Observer
- Main Book: “Responsible Responsive Design” by Scott Jehl
What you’ll build: A complete font loading optimization toolkit that analyzes fonts, generates optimized subsets, creates fallback font metrics, and implements the best loading strategy (preload, font-display, etc.).
Why it teaches performance: Fonts are sneaky performance killers. They block text rendering, cause layout shifts, and add significant weight. Mastering font loading means understanding CSS, browser rendering, and file formats.
Core challenges you’ll face:
- Understanding font file formats → maps to WOFF2, variable fonts
- Subsetting for minimal size → maps to unicode-range, character sets
- Matching fallback metrics → maps to preventing CLS
- Implementing loading strategies → maps to font-display, preload
Key Concepts:
- Font Loading API: `document.fonts`, `FontFace`
- font-display: `swap`, `optional`, `fallback`
- Fallback Matching: Adjusting system fonts to match web fonts
- Variable Fonts: Single file for all weights/styles
Difficulty: Intermediate Time estimate: 1-2 weeks Prerequisites: CSS font knowledge, understanding of file formats
Real World Outcome
You’ll have a font optimization CLI:
Example Output:
$ ./font-optimizer analyze ./fonts/
Analyzing font files...
┌──────────────────────────────────────────────────────────────────────────┐
│ FONT ANALYSIS │
├──────────────────────────────────────────────────────────────────────────┤
│ File: Inter-Regular.woff2 │
│ Size: 98KB │
│ Glyphs: 2,548 │
│ Unicode Ranges: Latin, Latin Extended, Cyrillic │
│ Features: kern, liga, calt │
├──────────────────────────────────────────────────────────────────────────┤
│ RECOMMENDATIONS: │
│ ⚠️ Contains Cyrillic (42KB) - remove if not needed │
│ ⚠️ Contains 847 unused glyphs │
│ ✅ Already WOFF2 format (best compression) │
└──────────────────────────────────────────────────────────────────────────┘
$ ./font-optimizer subset ./fonts/Inter-Regular.woff2 --chars "latin" --output ./optimized/
Subsetting Inter-Regular.woff2...
Original: 98KB
Subset (Latin only): 24KB
Savings: 76% (74KB)
Generating fallback metrics for system fonts...
/* Use this CSS for zero-CLS font loading */
@font-face {
font-family: 'Inter';
src: url('/fonts/Inter-Regular-subset.woff2') format('woff2');
font-display: swap;
unicode-range: U+0000-00FF, U+0131, U+0152-0153;
}
/* Fallback font matched to Inter metrics */
.fallback-font {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif;
font-size-adjust: 0.51; /* Matches Inter x-height */
letter-spacing: -0.011em; /* Matches Inter spacing */
}
<link rel="preload" href="/fonts/Inter-Regular-subset.woff2" as="font" type="font/woff2" crossorigin>
The Core Question You’re Answering
“How do I use custom fonts without destroying performance?”
Before you write any code, sit with this question. Fonts are often 100KB+ each. Without optimization, you’re shipping a novel’s worth of unused characters. Without proper loading, users see invisible text (FOIT) or jarring text swaps (FOUT).
Concepts You Must Understand First
Stop and research these before coding:
- Font File Formats
- What’s the difference between WOFF, WOFF2, TTF, OTF?
- Why is WOFF2 the best for web?
- What are variable fonts and why do they matter?
- Book Reference: MDN - Web fonts
- Font Loading Behavior
- What is FOIT (Flash of Invisible Text)?
- What is FOUT (Flash of Unstyled Text)?
- What does each `font-display` value do?
- Book Reference: web.dev - Optimize WebFont loading
- Font Metrics
- What are ascender, descender, x-height, cap-height?
- Why do different fonts cause layout shifts?
- How do `size-adjust` and `ascent-override` work?
- Book Reference: “The Elements of Typographic Style” by Bringhurst
Questions to Guide Your Design
Before implementing, think through these:
- Analysis
- How do you read font metadata (glyphs, features)?
- How do you detect unnecessary character sets?
- How do you calculate potential savings?
- Subsetting
- Which characters do you keep for “Latin” subset?
- How do you preserve OpenType features?
- What tools can subset fonts?
- Fallback Matching
- How do you measure font metrics?
- How do you adjust system fonts to match?
- How do you test for layout shift?
Thinking Exercise
Trace Font Loading
For a page with this CSS:
@font-face {
font-family: 'CustomFont';
src: url('/fonts/custom.woff2') format('woff2');
font-display: swap;
}
body {
font-family: 'CustomFont', Arial, sans-serif;
}
Timeline:
0ms - HTML parsed, CSS parsed
10ms - Render tree built with Arial (fallback)
11ms - First paint with Arial
100ms - Font request starts
800ms - Font download complete
801ms - Rerender with CustomFont
802ms - Layout shift! (if metrics differ)
Questions while tracing:
- What does the user see at 11ms?
- What causes the layout shift at 802ms?
- How would `font-display: optional` change this?
- How would preloading change the timeline?
The Interview Questions They’ll Ask
Prepare to answer these:
- “What is font-display and what are the different values?”
- “How do web fonts cause CLS and how do you prevent it?”
- “What is font subsetting and why is it important?”
- “When would you use font-display: optional vs swap?”
- “How do you preload fonts and what gotchas exist?”
Hints in Layers
Hint 1: Getting Started
Use fontkit npm package to read font files. It exposes glyphs, features, and metrics.
Hint 2: Subsetting
Use glyphhanger or pyftsubset (fonttools) to create subsets. Define character ranges per language.
Hint 3: Fallback Metrics
Compare metrics: originalFont.ascent / originalFont.unitsPerEm gives normalized ascender. Adjust fallback with CSS.
Hint 4: Testing
Create before/after screenshots. Compare layout. Use Chromium’s layout shift debugging.
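Hint 3 might be sketched like this: turn a font's raw metrics into the `@font-face` override descriptors (`ascent-override` etc.), normalized by `unitsPerEm`. The metric values in the comment are illustrative, not actual Inter metrics, and descent sign conventions vary between font formats (hence the `Math.abs`):

```javascript
// Convert raw font units into percentage override descriptors for a
// metric-matched fallback @font-face rule.
function metricOverrides({ ascent, descent, lineGap, unitsPerEm }) {
  const pct = (v) => `${((v / unitsPerEm) * 100).toFixed(2)}%`;
  return {
    'ascent-override': pct(ascent),
    // Some formats store descent as a negative value.
    'descent-override': pct(Math.abs(descent)),
    'line-gap-override': pct(lineGap),
  };
}

// Illustrative metrics (not real Inter values):
// metricOverrides({ ascent: 1900, descent: -500, lineGap: 0, unitsPerEm: 2048 })
// → { 'ascent-override': '92.77%', 'descent-override': '24.41%',
//     'line-gap-override': '0.00%' }
```

Emit these descriptors inside a fallback `@font-face` rule (e.g. one wrapping Arial via `src: local('Arial')`) so the fallback occupies the same vertical space as the web font.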
Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Font loading | “Responsible Responsive Design” by Scott Jehl | Ch. 4 |
| Typography basics | “The Elements of Typographic Style” by Bringhurst | Ch. 2 |
| CSS fonts | MDN documentation | Web fonts guide |
Implementation Hints
Conceptual approach:
1. Font analysis:
- Read font with fontkit
- Extract: format, size, glyph count, features
- Detect character sets (Latin, Cyrillic, etc.)
- Calculate per-range sizes
2. Subsetting:
- Define character ranges per language/use
- Use pyftsubset or similar
- Preserve necessary OpenType features
- Output optimized WOFF2
3. Fallback metrics generation:
- Get metrics: unitsPerEm, ascender, descender, lineGap
- Calculate CSS overrides: ascent-override, descent-override, line-gap-override
- Or use size-adjust for older browsers
4. Loading strategy output:
- Generate @font-face with unicode-range
- Generate preload link
- Generate JS fallback for font-display: optional
5. Test harness:
- Render with/without font
- Screenshot comparison
- CLS measurement
Learning milestones:
- You analyze font files → You understand font formats
- You subset fonts significantly → You understand character sets
- Fallback fonts match metrics → You understand font metrics
- Zero CLS font loading → You’ve mastered font performance
Project 12: Build a Resource Hint Generator
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: JavaScript (Node.js)
- Alternative Programming Languages: Go, Python
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 2: Intermediate
- Knowledge Area: HTML / HTTP / Loading
- Software or Tool: Puppeteer, HTML parser
- Main Book: “High Performance Browser Networking” by Ilya Grigorik
What you’ll build: A tool that analyzes a page’s resource loading pattern and automatically generates optimal resource hints (preconnect, preload, prefetch, dns-prefetch) to insert in the <head>.
Why it teaches performance: Resource hints are powerful but often misused. Understanding when to use each hint—and when not to—requires understanding browser resource prioritization, connection costs, and the critical rendering path.
Core challenges you’ll face:
- Analyzing resource loading patterns → maps to understanding dependencies
- Determining optimal hints → maps to knowing when each hint helps
- Avoiding hint abuse → maps to understanding the costs
- Handling different page types → maps to static vs dynamic analysis
Key Concepts:
- preconnect: “High Performance Browser Networking” Ch. 10 - Ilya Grigorik
- preload: Resource prioritization and the `as` attribute
- prefetch: Next-page resources
- dns-prefetch: Lightweight DNS resolution
Difficulty: Intermediate Time estimate: 1-2 weeks Prerequisites: Understanding of DNS/TCP/TLS, HTTP, browser loading
Real World Outcome
You’ll have a CLI that generates optimal resource hints:
Example Output:
$ ./hint-generator https://example.com
Analyzing https://example.com...
Loading page and capturing resource timing...
Found 47 resources from 8 origins
┌──────────────────────────────────────────────────────────────────────────┐
│ ANALYSIS RESULTS │
├──────────────────────────────────────────────────────────────────────────┤
│ │
│ CRITICAL RESOURCES (block rendering): │
│ ├── /styles/main.css (CSS, 45KB, 234ms) │
│ ├── /scripts/app.js (JS, 123KB, 456ms) │
│ └── /fonts/Inter.woff2 (Font, 24KB, 189ms) │
│ │
│ THIRD-PARTY ORIGINS: │
│ ├── fonts.googleapis.com (2 resources, first at 100ms) │
│ ├── cdn.example.com (8 resources, first at 200ms) │
│ └── analytics.google.com (1 resource, not critical) │
│ │
│ LCP ELEMENT: <img class="hero" src="/images/hero.jpg"> │
│ Loaded at: 1,234ms (resource discovered at 456ms) │
│ │
└──────────────────────────────────────────────────────────────────────────┘
RECOMMENDED RESOURCE HINTS:
<!-- Preconnect to critical third-party origins (saves ~100-300ms each) -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://cdn.example.com" crossorigin>
<!-- DNS-prefetch for non-critical origins (saves ~20-100ms) -->
<link rel="dns-prefetch" href="https://analytics.google.com">
<!-- Preload critical resources discovered late -->
<link rel="preload" href="/fonts/Inter.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/images/hero.jpg" as="image" fetchpriority="high">
<!-- Prefetch likely next pages (only if confident) -->
<!-- <link rel="prefetch" href="/product/popular-item.html"> -->
IMPACT ESTIMATE:
├── Preconnect savings: ~400ms (2 origins × ~200ms each)
├── Font preload savings: ~150ms (discovered 150ms earlier)
├── LCP image preload savings: ~300ms (discovered 300ms earlier)
└── Total estimated improvement: ~850ms
WARNINGS:
⚠️ Don't preload more than 3-5 resources (diminishing returns)
⚠️ Font preload requires crossorigin attribute
⚠️ Prefetch only if user likely to navigate there
The Core Question You’re Answering
“How can I tell the browser what resources to prioritize—before it discovers them?”
Before you write any code, sit with this question. Browsers are smart but can’t predict the future. Resource hints let you give the browser a head start on connections and downloads.
Concepts You Must Understand First
Stop and research these before coding:
- Connection Costs
- How long does DNS + TCP + TLS take?
- What is connection reuse and when does it happen?
- Why does the first request to a domain take longer?
- Book Reference: “High Performance Browser Networking” Ch. 2-4 - Ilya Grigorik
- Resource Hints
- What does each hint type do (preconnect, preload, prefetch, dns-prefetch)?
- What are the costs of each hint?
- When does each hint help vs hurt?
- Book Reference: web.dev - Resource hints
- Browser Resource Prioritization
- How does the browser decide what to load first?
- What is the preload scanner?
- How does `fetchpriority` work?
- Book Reference: Chrome priority documentation
Questions to Guide Your Design
Before implementing, think through these:
- Analysis
- How do you determine which resources are critical?
- How do you detect third-party origins?
- How do you find the LCP element and its resources?
- Recommendations
- When is preconnect better than dns-prefetch?
- Which resources deserve preload?
- How do you avoid over-hinting?
- Validation
- How do you verify hints actually help?
- How do you detect conflicts with existing hints?
- How do you handle different network conditions?
Thinking Exercise
Trace Resource Discovery
For this page:
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="/styles/main.css">
</head>
<body>
<img src="/hero.jpg">
<script src="/app.js"></script>
</body>
</html>
Where main.css contains: @import url('/styles/fonts.css');
Trace resource discovery:
- HTML parsed → main.css discovered
- main.css downloaded → fonts.css discovered (chain!)
- HTML body parsed → hero.jpg, app.js discovered
Questions:
- Which resource is discovered latest?
- How would preload help fonts.css?
- What’s the difference between preloading fonts.css vs the font file?
The Interview Questions They’ll Ask
Prepare to answer these:
- “What’s the difference between preload and prefetch?”
- “When would you use preconnect vs dns-prefetch?”
- “What are the downsides of too many resource hints?”
- “How do you preload a font correctly?”
- “What is fetchpriority and when would you use it?”
Hints in Layers
Hint 1: Getting Started
Use Puppeteer to load the page. Capture all resource timing with performance.getEntriesByType('resource').
Hint 2: Identifying Critical Resources
Critical = blocks rendering (CSS) or blocks interactivity (sync JS) or is the LCP element.
Hint 3: Origin Analysis
Group resources by origin. For each third-party origin with critical resources, recommend preconnect.
Hint 4: Preload Candidates
Good preload candidates: fonts (always late-discovered), LCP images, critical CSS @imports.
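Hints 1 and 3 together might look like this sketch: group resource-timing-shaped entries (`{name, startTime}`) by origin and rank third-party origins by how early they are first needed — the earliest ones benefit most from a preconnect:

```javascript
// Group resource timing entries by origin, skipping first-party
// resources (their connection is already warm). Entry shape assumed:
// { name: fullURL, startTime: ms }, like PerformanceResourceTiming.
function groupByOrigin(entries, pageOrigin) {
  const origins = new Map();
  for (const e of entries) {
    const origin = new URL(e.name).origin;
    if (origin === pageOrigin) continue;
    const rec = origins.get(origin) ?? { count: 0, firstStart: Infinity };
    rec.count += 1;
    rec.firstStart = Math.min(rec.firstStart, e.startTime);
    origins.set(origin, rec);
  }
  // Sort so the origins needed earliest come first.
  return [...origins.entries()]
    .sort((a, b) => a[1].firstStart - b[1].firstStart)
    .map(([origin, rec]) => ({ origin, ...rec }));
}
```

In the real tool the entries would come from `performance.getEntriesByType('resource')` captured via Puppeteer, as in Hint 1.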
Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Resource hints | “High Performance Browser Networking” by Ilya Grigorik | Ch. 10 |
| Browser loading | web.dev documentation | “Optimize resource loading” |
| HTTP connections | “HTTP: The Definitive Guide” | Ch. 4 |
Implementation Hints
Conceptual approach:
1. Load page with Puppeteer
2. Capture all resource timing:
- URL, initiator, timing phases
- Request priority
- Initiator chain (who requested this?)
3. Identify critical resources:
- CSS in <head> (render-blocking)
- Sync JS in <head> (parser-blocking)
- Fonts referenced by CSS
- LCP element's source
4. Group by origin:
- First-party vs third-party
- First resource time per origin
- Critical resources per origin
5. Generate recommendations:
- Preconnect: third-party origins with multiple critical resources
- DNS-prefetch: third-party origins with non-critical resources
- Preload: late-discovered critical resources
- Prefetch: probable next-page resources
6. Validate:
- Check for existing hints
- Warn about over-hinting
- Estimate impact
Learning milestones:
- You analyze resource timing → You understand loading
- You identify critical resources → You understand the critical path
- You generate correct hints → You understand each hint type
- Your hints measurably improve performance → You can apply this in production
Project 13: Build a Lighthouse CI Alternative
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: JavaScript (Node.js)
- Alternative Programming Languages: Go
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 3. The “Service & Support” Model (B2B Utility)
- Difficulty: Level 3: Advanced
- Knowledge Area: CI/CD / Performance Testing / Automation
- Software or Tool: Puppeteer, Lighthouse node module
- Main Book: “Web Performance in Action” by Jeremy Wagner
What you’ll build: A performance testing system that runs in CI/CD, captures performance metrics across multiple runs, compares against baselines, and fails builds that regress beyond thresholds. Like Lighthouse CI but understanding how it works.
Why it teaches performance: Performance isn’t a one-time fix—it’s continuous. Understanding how to automate performance testing, set budgets, and prevent regressions teaches you to maintain performance over time.
Core challenges you’ll face:
- Running headless browsers in CI → maps to containerization and environments
- Dealing with variance → maps to statistical significance
- Setting meaningful thresholds → maps to performance budgets
- Reporting actionable results → maps to developer experience
Key Concepts:
- Performance Budgets: File size, timing, and Core Web Vitals limits
- Statistical Variance: Running multiple times, removing outliers
- CI Integration: GitHub Actions, GitLab CI, etc.
- Lighthouse Programmatic API: Running Lighthouse from Node.js
Difficulty: Advanced Time estimate: 2-3 weeks Prerequisites: CI/CD experience, Docker basics, statistical concepts
Real World Outcome
You’ll have a CI/CD performance testing system:
GitHub Action workflow:
- name: Performance Test
run: npm run perf-test
env:
PERF_BUDGET_LCP: 2500
PERF_BUDGET_CLS: 0.1
PERF_BUDGET_BUNDLE: 200000
Console output:
$ npm run perf-test
🚀 Performance Test v1.0.0
Running 5 iterations on https://staging.example.com
Iteration 1/5: LCP=2,340ms, INP=89ms, CLS=0.04
Iteration 2/5: LCP=2,412ms, INP=92ms, CLS=0.04
Iteration 3/5: LCP=2,289ms, INP=87ms, CLS=0.05
Iteration 4/5: LCP=2,356ms, INP=91ms, CLS=0.04
Iteration 5/5: LCP=2,401ms, INP=88ms, CLS=0.04
┌──────────────────────────────────────────────────────────────────────────┐
│ PERFORMANCE RESULTS (median of 5 runs) │
├──────────────────────────────────────────────────────────────────────────┤
│ │
│ Core Web Vitals: │
│ ├── LCP: 2,356ms (budget: 2,500ms) ✅ PASS │
│ ├── INP: 89ms (budget: 200ms) ✅ PASS │
│ └── CLS: 0.04 (budget: 0.1) ✅ PASS │
│ │
│ Bundle Sizes: │
│ ├── main.js: 187KB (budget: 200KB) ✅ PASS │
│ ├── main.css: 45KB (budget: 50KB) ✅ PASS │
│ └── Total: 412KB (budget: 500KB) ✅ PASS │
│ │
│ Compared to baseline (main branch): │
│ ├── LCP: -150ms (6% faster) 📈 │
│ ├── INP: +5ms (6% slower) 📉 │
│ └── Bundle: +12KB (3% larger) 📉 │
│ │
└──────────────────────────────────────────────────────────────────────────┘
All budgets passed! ✅
Uploading results to dashboard...
View report: https://perf.example.com/reports/abc123
Failure example:
┌──────────────────────────────────────────────────────────────────────────┐
│ ❌ PERFORMANCE REGRESSION DETECTED │
├──────────────────────────────────────────────────────────────────────────┤
│ │
│ LCP: 3,456ms (budget: 2,500ms) ❌ OVER BUDGET BY 956ms! │
│ │
│ This PR increased LCP by 1,100ms (47% slower) │
│ │
│ Changes in this PR that may have caused this: │
│ ├── Added hero-video.mp4 (4.2MB) - consider lazy loading │
│ ├── New sync script in <head> - consider defer/async │
│ └── main.js increased 89KB - check for unintended imports │
│ │
└──────────────────────────────────────────────────────────────────────────┘
Build failed due to performance regression.
The Core Question You’re Answering
“How do I prevent performance regressions from reaching production?”
Before you write any code, sit with this question. Every feature, every dependency, every line of code can affect performance. Without automated testing, regressions slip through and compound over time.
Concepts You Must Understand First
Stop and research these before coding:
- Performance Budgets
- What metrics should have budgets?
- How do you choose budget thresholds?
- What’s the difference between hard and soft budgets?
- Book Reference: web.dev - Performance budgets
- Statistical Variance in Performance
- Why do performance metrics vary between runs?
- How many runs do you need for reliable results?
- How do you handle outliers?
- Book Reference: “Measuring and Monitoring Web Performance” - HPBN
- CI/CD Environments
- How do you run headless Chrome in Docker?
- What environment differences affect performance?
- How do you minimize external variance?
- Book Reference: Lighthouse CI documentation
Questions to Guide Your Design
Before implementing, think through these:
- Test Environment
- Should you test against staging or production?
- How do you handle authentication?
- How do you simulate realistic network conditions?
- Metrics and Budgets
- Which metrics deserve budgets?
- How do you handle metrics that vary a lot?
- Should budgets be absolute or relative to baseline?
- Reporting and Integration
- How do you report results in PR comments?
- How do you store historical data?
- How do you help developers fix issues?
Implementation Hints
Conceptual approach:
1. CLI tool that:
- Takes URL(s) to test
- Reads budget config (JSON or env vars)
- Runs tests in headless Chrome
2. Test execution:
- Run N iterations (5+ for statistical significance)
- Capture: Core Web Vitals, bundle sizes, custom metrics
- Calculate: median, p75, standard deviation
- Remove outliers (> 2 standard deviations)
3. Baseline comparison:
- Fetch baseline from storage (S3, database)
- Calculate diff for each metric
- Flag significant regressions
4. Budget checking:
- Compare results against budgets
- Allow percentage tolerance
- Support warning vs failure thresholds
5. Reporting:
- Generate PR comment (Markdown)
- Upload to dashboard
- Exit with failure if budgets exceeded
6. Storage:
- Save results with git SHA
- Track trends over time
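The statistics in step 2 (median, outlier removal) can be sketched as:

```javascript
// Median of N runs: robust against a single slow run.
function median(values) {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Drop values more than maxStdDevs standard deviations from the mean.
// Note: with only 5 runs a single outlier can never exceed 2 standard
// deviations (the z-score is capped at sqrt(n-1)), so either run more
// iterations or lower the threshold.
function removeOutliers(values, maxStdDevs = 2) {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance =
    values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length;
  const std = Math.sqrt(variance);
  return values.filter((v) => Math.abs(v - mean) <= maxStdDevs * std);
}

// median([2340, 2412, 2289, 2356, 2401]) === 2356
// (matches the LCP numbers in the example output above)
```

Report the median (or p75) of the surviving values, never a single run.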
Learning milestones:
- You run Lighthouse programmatically → You understand the API
- Your results are statistically reliable → You understand variance
- You catch regressions in CI → You understand continuous testing
- Developers can act on your reports → You’ve built useful tooling
Project 14: Build an Edge Caching Simulator
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: Go
- Alternative Programming Languages: Node.js, Rust
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: CDN / Caching / Edge Computing
- Software or Tool: HTTP proxy, cache algorithms
- Main Book: “High Performance Browser Networking” by Ilya Grigorik
What you’ll build: A CDN-like caching proxy that sits between clients and your origin server, implementing cache control logic, vary headers, cache invalidation, and geographic simulation. Understand how CDNs work by building one.
Why it teaches performance: CDNs are critical for performance but often treated as magic. Building a caching proxy teaches you exactly how Cache-Control headers work, how cache keys are computed, and why cache invalidation is so hard.
Core challenges you’ll face:
- Implementing HTTP caching semantics → maps to RFC 7234
- Building a cache key system → maps to Vary headers, query params
- Handling cache invalidation → maps to purging, TTLs, stale-while-revalidate
- Simulating edge locations → maps to geographic latency
Key Concepts:
- Cache-Control Directives: max-age, s-maxage, stale-while-revalidate
- Cache Keys: How URLs + headers determine cache entries
- Vary Header: Response varies based on request headers
- Cache Invalidation: Purge, ban, TTL strategies
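A cache key honoring the `Vary` header might be computed like this sketch. The `|`-joined format is in the spirit of the `X-Cache-Key` header in the example output below; it is an illustrative scheme, not a CDN standard:

```javascript
// Build a cache key from method, path+query, and every request header
// named by the response's Vary header. Request shape assumed:
// { method, url, headers: { lowercaseName: value } }.
function cacheKey(req, varyHeaderValue = '') {
  const url = new URL(req.url);
  // Each Vary-listed header becomes part of the key, so responses that
  // differ by (say) Accept get separate cache entries. Sorting makes
  // the key order-independent.
  const varied = varyHeaderValue
    .split(',')
    .map((h) => h.trim().toLowerCase())
    .filter(Boolean)
    .sort()
    .map((h) => `${h}:${req.headers[h] ?? ''}`);
  // Query string stays in the key: /a?x=1 and /a?x=2 are different.
  return [req.method, url.pathname + url.search, ...varied].join('|');
}
```

This is also why `Vary: User-Agent` is dangerous on a real CDN: it fragments the cache into one entry per browser string.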
Difficulty: Advanced Time estimate: 3-4 weeks Prerequisites: Strong HTTP knowledge, proxy/networking experience
Real World Outcome
You’ll have a caching proxy:
Starting the proxy:
$ ./edge-cache --origin https://api.example.com --port 8080 --cache-dir /tmp/cache
Edge Cache Proxy v1.0.0
Origin: https://api.example.com
Cache directory: /tmp/cache
Listening on :8080
[CACHE] Warming up...
[CACHE] Ready. 0 entries, 0 bytes.
Request flow:
# First request - cache MISS
$ curl -I http://localhost:8080/api/products
HTTP/1.1 200 OK
X-Cache: MISS
X-Cache-Key: GET|/api/products|Accept-Encoding:gzip
X-Origin-Time: 245ms
Cache-Control: public, max-age=3600, s-maxage=86400
# Second request - cache HIT
$ curl -I http://localhost:8080/api/products
HTTP/1.1 200 OK
X-Cache: HIT
X-Cache-Age: 12
X-Cache-TTL: 85988
# Request with different Accept header - separate cache entry
$ curl -I -H "Accept: application/xml" http://localhost:8080/api/products
HTTP/1.1 200 OK
X-Cache: MISS
X-Cache-Key: GET|/api/products|Accept:application/xml|Accept-Encoding:gzip
# Stale-while-revalidate
$ curl -I http://localhost:8080/api/products
HTTP/1.1 200 OK
X-Cache: STALE (revalidating in background)
X-Cache-Age: 3650
Admin commands:
# Purge specific URL
$ curl -X PURGE http://localhost:8080/api/products
{"purged": 1, "key": "GET|/api/products|*"}
# Cache stats
$ curl http://localhost:8080/__cache/stats
{
"entries": 1247,
"size_bytes": 52428800,
"hit_rate": 0.87,
"miss_rate": 0.13,
"stale_rate": 0.02,
"avg_origin_time_ms": 234
}
The Core Question You’re Answering
“What does a CDN actually DO—and how does caching really work?”
Before you write any code, sit with this question. CDNs aren’t magic. They’re HTTP proxies with smart caching. Understanding the caching logic helps you use CDNs effectively and debug caching issues.
Concepts You Must Understand First
Stop and research these before coding:
- HTTP Caching Semantics (RFC 7234)
- How does `max-age` differ from `s-maxage`?
- What does `private` vs `public` mean?
- How does conditional revalidation work (ETag, Last-Modified)?
- Book Reference: “HTTP: The Definitive Guide” Ch. 7
- Cache Key Computation
- What is a cache key and how is it computed?
- How does the `Vary` header affect cache keys?
- What about query parameters?
- Book Reference: RFC 7234 Section 4
- Cache Invalidation Strategies
- What is TTL-based expiration?
- What is purge vs ban?
- What is stale-while-revalidate?
- Book Reference: Fastly documentation on caching
Implementation Hints
Conceptual approach:
1. HTTP proxy foundation:
- Listen on port, accept connections
- Parse incoming requests
- Forward to origin or serve from cache
- Return response to client
2. Cache storage:
- In-memory LRU for hot entries
- Disk storage for larger cache
- Cache key: method + path + vary headers
3. Cache-Control parsing:
- Parse request and response Cache-Control
- Calculate TTL from directives
- Handle no-cache, no-store, private
4. Conditional requests:
- Store ETag and Last-Modified
- Send If-None-Match on revalidation
- Handle 304 Not Modified
5. Vary handling:
- Parse Vary header
- Include varied headers in cache key
- Normalize header values
6. Stale-while-revalidate:
- Serve stale if within tolerance
- Trigger background revalidation
- Update cache when revalidation completes
7. Admin interface:
- PURGE method for invalidation
- Stats endpoint
- Cache listing
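Steps 2 and 5 above (cache key computation and Vary handling) can be sketched as follows. The `method|path|Header:value` key format mirrors the `X-Cache-Key` header shown earlier; a production proxy would also normalize header values (for example, Accept-Encoding token order), which this sketch skips.

```python
# Sketch: deriving a cache key from method + path + whichever request
# headers the response's Vary header names. Header lookup is
# case-insensitive; varied header names are sorted so equivalent
# Vary lists produce identical keys.

def cache_key(method: str, path: str, request_headers: dict,
              vary: str = "") -> str:
    lookup = {k.lower(): v for k, v in request_headers.items()}
    parts = [method.upper(), path]
    for name in sorted(v.strip() for v in vary.split(",") if v.strip()):
        parts.append(f"{name}:{lookup.get(name.lower(), '')}")
    return "|".join(parts)

key_a = cache_key("GET", "/api/products",
                  {"Accept-Encoding": "gzip"}, vary="Accept-Encoding")
key_b = cache_key("GET", "/api/products",
                  {"Accept-Encoding": "br"}, vary="Accept-Encoding")
print(key_a)           # GET|/api/products|Accept-Encoding:gzip
print(key_a != key_b)  # True: different encodings get separate cache entries
```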
Learning milestones:
- Basic proxy works → You understand HTTP proxying
- Caching follows RFC 7234 → You understand cache semantics
- Vary headers work correctly → You understand cache keys
- Stale-while-revalidate works → You understand advanced caching
Project 15: Build a Performance Regression Detector
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: Python
- Alternative Programming Languages: JavaScript, Go
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 3. The “Service & Support” Model (B2B Utility)
- Difficulty: Level 3: Advanced
- Knowledge Area: Statistics / Time Series / Anomaly Detection
- Software or Tool: NumPy/SciPy, statistical libraries
- Main Book: “Designing Data-Intensive Applications” by Martin Kleppmann
What you’ll build: An intelligent system that analyzes historical performance data, detects anomalies and regressions automatically, and alerts when performance degrades—even for gradual regressions.
Why it teaches performance: Performance doesn’t always regress suddenly. Sometimes it degrades gradually (2% per release adds up). Detecting these regressions requires understanding time series analysis and statistical methods.
Core challenges you’ll face:
- Defining “normal” performance → maps to baseline modeling
- Detecting significant changes → maps to statistical hypothesis testing
- Handling seasonality → maps to time series decomposition
- Minimizing false positives → maps to alert fatigue
Key Concepts:
- Change Point Detection: Finding where metrics changed
- Time Series Analysis: Trends, seasonality, noise
- Statistical Significance: Is this change real or noise?
- Anomaly Detection: Identifying unusual data points
Difficulty: Advanced
Time estimate: 3-4 weeks
Prerequisites: Statistics fundamentals, time series basics
Real World Outcome
You’ll have a regression detection system:
Example output:
$ ./perf-detector analyze --metric lcp --period 30d
Analyzing LCP for last 30 days...
Data points: 4,320 (144/day)
Mean: 2,340ms
Std Dev: 180ms
┌──────────────────────────────────────────────────────────────────────────┐
│ LCP TREND ANALYSIS │
├──────────────────────────────────────────────────────────────────────────┤
│ │
│ 2500 ┤ ∙∙∙∙∙∙∙∙∙∙∙ │
│ 2400 ┤ ∙∙∙∙∙∙∙∙∙∙∙∙∙∙∙∙ │
│ 2300 ┤ ∙∙∙∙∙∙∙∙∙∙∙∙∙∙∙ │
│ 2200 ┤∙∙∙∙∙∙∙∙∙∙∙∙∙∙∙∙∙∙∙∙∙∙∙ │
│ 2100 ┤ │
│ └──────────────────────────────────────────────────────────────── │
│ Day 1 Day 15 Day 30 │
│ │
│ DETECTED: Gradual regression starting Day 12 │
│ LCP increased 15% (2,200ms → 2,530ms) │
│ Statistically significant (p < 0.01) │
│ │
│ LIKELY CAUSE: Correlates with release v2.3.0 (Day 12) │
│ Changes in that release: │
│ - Added analytics script (+45KB) │
│ - New hero image carousel │
│ │
└──────────────────────────────────────────────────────────────────────────┘
⚠️ ALERT: LCP regression detected
Current: 2,530ms (budget: 2,500ms)
Action: Investigate v2.3.0 changes
Webhook alert:
{
"type": "regression",
"metric": "lcp",
"severity": "warning",
"current_value": 2530,
"baseline_value": 2200,
"change_percent": 15,
"change_point": "2024-01-12T00:00:00Z",
"confidence": 0.99,
"likely_cause": {
"release": "v2.3.0",
"changes": [
"analytics.js added",
"hero carousel component"
]
}
}
The Core Question You’re Answering
“Did performance get worse—or is this just normal variance?”
Before you write any code, sit with this question. Performance metrics are noisy. A single slow reading doesn’t mean regression. You need statistical methods to distinguish real regressions from random noise.
Concepts You Must Understand First
Stop and research these before coding:
- Statistical Hypothesis Testing
- What is a p-value and statistical significance?
- What is a t-test and when do you use it?
- What is effect size and why does it matter?
- Book Reference: Any statistics textbook
- Change Point Detection
- What algorithms detect change points (CUSUM, PELT)?
- How do you handle multiple change points?
- What’s the tradeoff between sensitivity and false positives?
- Book Reference: Research papers on change point detection
- Time Series Decomposition
- What are trend, seasonality, and residual?
- How does STL decomposition work?
- How do you account for weekly/daily patterns?
- Book Reference: “Forecasting: Principles and Practice” by Hyndman
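The hypothesis-testing concept above can be made concrete with a small sketch. For large samples, a two-sample z-test (using only the standard library's `NormalDist`) approximates a t-test well enough to illustrate the idea; a real pipeline would reach for `scipy.stats.ttest_ind`. The sample sizes and LCP values below are synthetic.

```python
# Sketch: is the post-change mean significantly higher than baseline?
# One-sided z-test on two samples; valid as a t-test approximation
# when both samples are large.
from statistics import NormalDist, mean, stdev
import random

def regression_p_value(baseline: list[float], current: list[float]) -> float:
    """One-sided p-value for 'mean(current) > mean(baseline)'."""
    n1, n2 = len(baseline), len(current)
    se = (stdev(baseline) ** 2 / n1 + stdev(current) ** 2 / n2) ** 0.5
    z = (mean(current) - mean(baseline)) / se
    return 1.0 - NormalDist().cdf(z)

random.seed(42)
baseline = [random.gauss(2200, 180) for _ in range(200)]   # pre-release LCP
regressed = [random.gauss(2530, 180) for _ in range(200)]  # post-release LCP
p = regression_p_value(baseline, regressed)
print(p < 0.01)  # True: the 15% shift dwarfs the noise
```

The key intuition: a 330ms shift against a 180ms standard deviation is invisible in any single reading but unmistakable across hundreds of samples, because the standard error shrinks with sample size.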
Implementation Hints
Conceptual approach:
1. Data ingestion:
- Collect metrics from RUM or synthetic tests
- Store in time series database
- Handle missing data, outliers
2. Baseline modeling:
- Calculate rolling statistics (mean, std)
- Decompose into trend + seasonality + residual
- Build "normal" distribution for each time period
3. Change point detection:
- Implement CUSUM or PELT algorithm
- Find points where distribution changes
- Calculate confidence of each change point
4. Regression classification:
- Is the change statistically significant?
- Is the effect size meaningful (>5%)?
- Is it sustained (not just a spike)?
5. Correlation with releases:
- Match change points to deploy times
- Identify changes in those releases
- Suggest likely causes
6. Alerting:
- Configurable thresholds
- Webhook/Slack/PagerDuty integration
- Alert deduplication
Learning milestones:
- You calculate baselines correctly → You understand statistics
- You detect sudden regressions → You understand change detection
- You detect gradual regressions → You understand trend analysis
- You minimize false positives → You can build production alerting
Project Comparison Table
| # | Project | Difficulty | Time | Depth of Understanding | Fun Factor |
|---|---|---|---|---|---|
| 1 | Web Performance Waterfall Visualizer | Intermediate | 1-2 weeks | ★★★★☆ | ★★★★☆ |
| 2 | Critical CSS Extractor | Intermediate | 1-2 weeks | ★★★★☆ | ★★★☆☆ |
| 3 | JavaScript Bundle Analyzer | Intermediate | 1-2 weeks | ★★★★☆ | ★★★★☆ |
| 4 | Image Optimization Pipeline | Intermediate | 1-2 weeks | ★★★☆☆ | ★★★★☆ |
| 5 | Service Worker Caching Layer | Intermediate | 1-2 weeks | ★★★★★ | ★★★★☆ |
| 6 | Core Web Vitals Monitor | Intermediate | 2 weeks | ★★★★★ | ★★★☆☆ |
| 7 | Lazy Loading System | Beginner | Weekend | ★★☆☆☆ | ★★★☆☆ |
| 8 | HTTP/2 Server Push Simulator | Advanced | 2-4 weeks | ★★★★★ | ★★★★★ |
| 9 | Layout Shift Debugger | Intermediate | 1-2 weeks | ★★★★☆ | ★★★★☆ |
| 10 | JavaScript Profiler Visualization | Advanced | 2-3 weeks | ★★★★★ | ★★★★★ |
| 11 | Font Loading Optimizer | Intermediate | 1-2 weeks | ★★★★☆ | ★★★☆☆ |
| 12 | Resource Hint Generator | Intermediate | 1-2 weeks | ★★★★☆ | ★★★☆☆ |
| 13 | Lighthouse CI Alternative | Advanced | 2-3 weeks | ★★★★★ | ★★★★☆ |
| 14 | Edge Caching Simulator | Advanced | 3-4 weeks | ★★★★★ | ★★★★★ |
| 15 | Performance Regression Detector | Advanced | 3-4 weeks | ★★★★☆ | ★★★☆☆ |
Recommendation
For Beginners (Start Here)
If you’re new to web performance:
- Start with Project 7 (Lazy Loading System) - Quick weekend project that teaches Intersection Observer, a foundational API
- Then Project 1 (Waterfall Visualizer) - Understand the network timeline that underlies everything
- Then Project 4 (Image Optimization) - Highest-impact practical skill for most websites
For Intermediate Developers
If you understand basics but want to go deeper:
- Start with Project 5 (Service Worker Caching) - Most impactful for real-world applications
- Then Project 6 (Core Web Vitals Monitor) - Understand the metrics that matter
- Then Project 3 (Bundle Analyzer) - JavaScript is usually the bottleneck
For Advanced Developers
If you want to truly master performance:
- Start with Project 8 (HTTP/2 Server Push) - Deep protocol understanding
- Then Project 10 (JS Profiler) - Flame graphs are essential for optimization
- Then Project 14 (Edge Caching Simulator) - CDN knowledge is rare and valuable
By Learning Goal
| If you want to understand… | Start with… |
|---|---|
| Browser rendering | Project 2 (Critical CSS), Project 9 (CLS Debugger) |
| Network optimization | Project 1 (Waterfall), Project 8 (HTTP/2 Push) |
| JavaScript performance | Project 3 (Bundle Analyzer), Project 10 (Profiler) |
| Caching | Project 5 (Service Worker), Project 14 (Edge Cache) |
| Measurement & Monitoring | Project 6 (Web Vitals), Project 13 (Lighthouse CI) |
| Real-world impact | Project 4 (Images), Project 11 (Fonts) |
Final Overall Project: Build a Complete Performance Platform
- File: LEARN_WEB_PERFORMANCE_OPTIMIZATION.md
- Main Programming Language: TypeScript (Full Stack)
- Alternative Programming Languages: Go (backend), Rust (tools)
- Coolness Level: Level 5: Pure Magic (Super Cool)
- Business Potential: 4. The “Open Core” Infrastructure (Enterprise Scale)
- Difficulty: Level 5: Master
- Knowledge Area: Full Stack Performance Engineering
- Software or Tool: All of the above combined
- Main Book: “High Performance Browser Networking” + “Designing Data-Intensive Applications”
What you’ll build: A complete web performance platform that combines everything you’ve learned—synthetic testing, RUM collection, optimization tools, CI/CD integration, dashboards, and alerting. Like building a mini SpeedCurve + Lighthouse + Cloudflare combined.
Why it teaches performance: This is the capstone project. Building a complete platform forces you to understand how all the pieces fit together—from browser internals to server infrastructure to data analysis.
Components to include:
- Synthetic Testing Engine (from Projects 1, 13)
- Scheduled Lighthouse runs
- Multiple locations simulation
- Performance budget enforcement
- Real User Monitoring (from Projects 6, 15)
- Client SDK for Core Web Vitals
- Backend for data ingestion
- Statistical analysis and alerting
- Optimization Tools (from Projects 2, 3, 4, 11, 12)
- Critical CSS extraction
- Bundle analysis
- Image optimization
- Font optimization
- Resource hint generation
- Caching Layer (from Projects 5, 14)
- Service worker generator
- CDN configuration generator
- Cache hit rate analytics
- Dashboard & Reporting
- Real-time performance metrics
- Trend visualization
- Regression detection
- Actionable recommendations
- CI/CD Integration
- GitHub/GitLab integration
- PR comments with perf impact
- Automatic optimization suggestions
Architecture:
┌─────────────────────────────────────────────────────────────────────────────┐
│ PERFORMANCE PLATFORM ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ CLIENTS INGESTION STORAGE │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Web SDK │───────────────────│ API │────────────│TimeSeries│ │
│ │ (RUM) │ │ Gateway │ │ Database │ │
│ └──────────┘ └────┬─────┘ └──────────┘ │
│ │ │ │
│ ┌──────────┐ │ │ │
│ │ Synthetic│ │ │ │
│ │ Runners │───────────────────────┤ │ │
│ └──────────┘ │ │ │
│ │ │ │
│ ┌──────────┐ │ │ │
│ │ CI/CD │───────────────────────┘ │ │
│ │ Webhooks │ │ │
│ └──────────┘ │ │
│ │ │
│ PROCESSING PRESENTATION │ │
│ ┌──────────────────────────┐ ┌──────────────────────┤ │
│ │ Analysis Engine │◀──────────│ Dashboard │ │
│ │ - Regression detection │ │ - Real-time metrics │ │
│ │ - Trend analysis │ │ - Trend charts │ │
│ │ - Alerting │ │ - Recommendations │ │
│ └──────────────────────────┘ └──────────────────────┘ │
│ │
│ OPTIMIZATION TOOLS │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Critical │ │ Bundle │ │ Image │ │ Font │ │ Resource │ │
│ │ CSS │ │ Analyzer │ │Optimizer │ │Optimizer │ │ Hints │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
This project is complete when:
- You can add a website and see its performance metrics within minutes
- RUM data flows in from real users
- Scheduled synthetic tests run automatically
- Regressions trigger alerts
- Optimization recommendations are actionable
- CI/CD integration blocks bad deployments
Time estimate: 2-3 months
Prerequisites: Completion of at least 8-10 of the above projects
Summary
This learning path covers web performance optimization through 15 hands-on projects plus one capstone. Here’s the complete list:
| # | Project Name | Main Language | Difficulty | Time Estimate |
|---|---|---|---|---|
| 1 | Web Performance Waterfall Visualizer | JavaScript | Intermediate | 1-2 weeks |
| 2 | Critical CSS Extractor | JavaScript | Intermediate | 1-2 weeks |
| 3 | JavaScript Bundle Analyzer | JavaScript | Intermediate | 1-2 weeks |
| 4 | Image Optimization Pipeline | Node.js | Intermediate | 1-2 weeks |
| 5 | Service Worker Caching Layer | JavaScript | Intermediate | 1-2 weeks |
| 6 | Core Web Vitals Monitor | TypeScript | Intermediate | 2 weeks |
| 7 | Lazy Loading System | JavaScript | Beginner | Weekend |
| 8 | HTTP/2 Server Push Simulator | Go | Advanced | 2-4 weeks |
| 9 | Layout Shift Debugger | JavaScript | Intermediate | 1-2 weeks |
| 10 | JavaScript Profiler Visualization | JavaScript | Advanced | 2-3 weeks |
| 11 | Font Loading Optimizer | JavaScript | Intermediate | 1-2 weeks |
| 12 | Resource Hint Generator | JavaScript | Intermediate | 1-2 weeks |
| 13 | Lighthouse CI Alternative | JavaScript | Advanced | 2-3 weeks |
| 14 | Edge Caching Simulator | Go | Advanced | 3-4 weeks |
| 15 | Performance Regression Detector | Python | Advanced | 3-4 weeks |
| Final | Complete Performance Platform | TypeScript | Master | 2-3 months |
Recommended Learning Path
For beginners: Start with projects #7, #1, #4
For intermediate: Jump to projects #5, #6, #3
For advanced: Focus on projects #8, #10, #14
Expected Outcomes
After completing these projects, you will:
- Understand browser internals: Know exactly how browsers render pages and what causes slowness
- Master network optimization: Understand HTTP/2, caching, CDNs, and resource loading at a deep level
- Control JavaScript performance: Profile, analyze bundles, and optimize execution
- Implement caching strategies: Build service workers, understand CDN caching, implement cache invalidation
- Measure what matters: Build RUM systems, understand Core Web Vitals, detect regressions
- Optimize assets: Images, fonts, CSS—know exactly how to optimize each
- Automate performance: Build CI/CD integration, set budgets, prevent regressions
- Ace performance interviews: Answer any web performance question with confidence
You’ll have built 15+ working tools that demonstrate deep understanding of web performance from first principles.
“Speed is a feature. These projects will teach you to build it.”