LEARN RUST DEEP DIVE
Learn Rust: From Zero to Systems Programming Master
Goal: Deeply understand Rust’s unique strengths—ownership, borrowing, lifetimes, fearless concurrency, and zero-cost abstractions—through building real-world projects that force you to confront why these features exist and how they prevent entire categories of bugs.
Why Rust is Different
Every language makes tradeoffs. Here’s where Rust sits:
| Language | Memory Safety | Performance | No GC | Concurrency Safety |
|---|---|---|---|---|
| C | ❌ Manual | ✅ Maximum | ✅ Yes | ❌ Data races easy |
| C++ | ❌ Manual | ✅ Maximum | ✅ Yes | ❌ Data races easy |
| Go | ✅ GC | 🟡 Good | ❌ No | 🟡 Channels help |
| Java | ✅ GC | 🟡 Good | ❌ No | ❌ Data races easy |
| Python | ✅ GC | ❌ Slow | ❌ No | ❌ GIL limits |
| Rust | ✅ Ownership | ✅ Maximum | ✅ Yes | ✅ Compile-time |
Rust’s innovation: The compiler enforces memory safety at compile time through:
- Ownership: Every value has exactly one owner
- Borrowing: References must follow strict rules
- Lifetimes: The compiler tracks how long references are valid
- Send/Sync traits: The type system prevents data races
After these projects, you will viscerally understand why C programmers get segfaults and use-after-free bugs, and why Rust programmers don’t.
Core Concept Analysis
The Ownership Model
```
┌─────────────────────────────────────────────────────────────┐
│                       OWNERSHIP RULES                       │
├─────────────────────────────────────────────────────────────┤
│ 1. Each value has exactly ONE owner                         │
│ 2. When the owner goes out of scope, the value is dropped   │
│ 3. Ownership can be MOVED (transferred) or BORROWED         │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│                       BORROWING RULES                       │
├─────────────────────────────────────────────────────────────┤
│ At any given time, you can have EITHER:                     │
│   • ONE mutable reference (&mut T)                          │
│   • OR any number of immutable references (&T)              │
│ References must always be valid (no dangling pointers)      │
└─────────────────────────────────────────────────────────────┘
```
Why This Matters (The Problems Rust Solves)
In C/C++, these bugs are common:
- Use-after-free: Accessing memory that’s been deallocated
- Double-free: Freeing the same memory twice
- Data races: Two threads accessing data, at least one writing
- Buffer overflows: Writing past array bounds
- Null pointer dereference: Accessing memory at address 0
- Dangling pointers: References to freed memory
Safe Rust rules out all of these: most are caught at compile time, and out-of-bounds accesses are stopped by runtime bounds checks that panic instead of corrupting memory.
The Mental Model Shift
C Programmer Thinks: "I allocate, I free, I manage lifetimes in my head"
Result: Bugs when mental model is wrong
Rust Programmer Thinks: "The compiler tracks ownership, I satisfy its rules"
Result: If it compiles, these bugs don't exist
Project List
Projects are ordered to build understanding progressively. Each project teaches specific Rust concepts that make it unique.
Project 1: Ownership Visualizer (See the Borrow Checker’s Mind)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: None (this is Rust-specific)
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 1: Beginner
- Knowledge Area: Ownership / Borrowing / Compiler Internals
- Software or Tool: Rust Compiler, cargo
- Main Book: “The Rust Programming Language” by Steve Klabnik and Carol Nichols
What you’ll build: A CLI tool that takes Rust code snippets and outputs a visual ASCII diagram showing ownership transfers, borrows, and lifetimes—like seeing through the compiler’s eyes.
Why it teaches Rust’s strengths: This forces you to deeply understand ownership rules by implementing a system that explains them. You can’t visualize what you don’t understand. By the end, ownership will be intuitive.
Core challenges you’ll face:
- Parsing Rust code to identify variable bindings → maps to understanding let, mut, and move semantics
- Tracking when ownership moves vs. borrows → maps to the core ownership model
- Identifying lifetime scopes → maps to understanding when values are dropped
- Detecting borrow checker violations → maps to the rules the compiler enforces
Key Concepts:
- Ownership fundamentals: “The Rust Programming Language” Chapter 4 - Steve Klabnik
- Move semantics: “Programming Rust, 2nd Edition” Chapter 4 - Jim Blandy
- The Drop trait: “Rust for Rustaceans” Chapter 1 - Jon Gjengset
- Parsing with nom or pest: “Command-Line Rust” Chapter 8 - Ken Youens-Clark
Difficulty: Beginner Time estimate: 1-2 weeks Prerequisites: Basic programming knowledge, understand what memory allocation means conceptually
Real world outcome:
```
$ cargo run -- analyze snippet.rs

Analyzing: snippet.rs

fn main() {
    let s1 = String::from("hello");  // s1 OWNS "hello"
    │
    let s2 = s1;                     // ownership MOVES to s2
    │                                // s1 is now INVALID
    │    ╭──────────╮
    │    │ s1: ──X  │ (moved out)
    │    │ s2: ───● │ (now owns)
    │    ╰──────────╯
    │
    println!("{}", s2);              // OK: s2 is valid
    // println!("{}", s1);           // ERROR: s1 was moved!
}

Ownership Timeline:
[0] s1 created      (owns String)
[1] s1 moved to s2  (s1 invalidated)
[2] s2 used in println!
[3] s2 dropped      (String freed)
```
Implementation Hints:
Start simple—don’t try to parse all of Rust. Focus on:
- `let` bindings (with and without `mut`)
- Assignment (which causes moves for non-`Copy` types)
- Function calls (which can take ownership or borrow)
- Scope boundaries (curly braces)
Questions to guide your implementation:
- What’s the difference between `let s2 = s1` (move) and `let s2 = &s1` (borrow)?
- When does a variable get dropped? (Answer: end of its scope)
- Which types are `Copy` (integers, bools) vs. which are moved (`String`, `Vec`)?
Use the syn crate to parse Rust code into an AST—don’t write your own parser from scratch.
Learning milestones:
- You can identify moves vs. copies → You understand Copy trait vs. move semantics
- You can track borrow scopes → You understand reference lifetimes
- You can detect violations before the compiler → You’ve internalized the borrow checker rules
Project 2: Memory Arena Allocator (Own Your Own Memory)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: C, C++
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: Memory Management / Allocators / Unsafe Rust
- Software or Tool: Rust std library, custom allocator API
- Main Book: “Rust for Rustaceans” by Jon Gjengset
What you’ll build: A custom memory arena allocator that pre-allocates a large block of memory and hands out chunks from it—demonstrating manual memory management while staying safe with Rust’s type system.
Why it teaches Rust’s strengths: This shows you what Rust is doing under the hood. You’ll use unsafe for the first time and understand exactly why it’s cordoned off. You’ll feel the power of having control AND safety.
Core challenges you’ll face:
- Implementing allocation without the standard allocator → maps to understanding what malloc/free actually do
- Using unsafe correctly with raw pointers → maps to understanding Rust’s safety boundary
- Ensuring memory alignment → maps to CPU memory access requirements
- Preventing use-after-free in your API design → maps to leveraging the type system for safety
Key Concepts:
- Unsafe Rust: “The Rust Programming Language” Chapter 19 - Steve Klabnik
- The Global Allocator trait: “Rust for Rustaceans” Chapter 10 - Jon Gjengset
- Memory alignment: “Computer Systems: A Programmer’s Perspective” Chapter 3 - Bryant & O’Hallaron
- Arena allocation pattern: “Programming Rust, 2nd Edition” Chapter 21 - Jim Blandy
Difficulty: Advanced Time estimate: 1-2 weeks Prerequisites: Project 1 completed, basic understanding of pointers and memory layout
Real world outcome:
```rust
// Your arena in action
fn main() {
    // Create a 1MB arena
    let arena = Arena::new(1024 * 1024);

    // Allocate from the arena (fast: just bump a pointer!)
    let name: &mut String = arena.alloc(String::from("Alice"));
    let numbers: &mut [i32] = arena.alloc_slice(&[1, 2, 3, 4, 5]);

    println!("Name: {}", name);
    println!("Sum: {}", numbers.iter().sum::<i32>());

    // Everything freed at once when arena is dropped
    // No individual frees, no fragmentation!
}
// Output:
// Name: Alice
// Sum: 15
// Arena stats: allocated 89 bytes from 1048576 byte pool
```
Implementation Hints:
An arena allocator is beautifully simple:
- Allocate one big chunk of memory upfront
- Keep a “cursor” pointing to the next free byte
- To allocate: bump the cursor, return the old position
- To free: do nothing (or reset the whole arena)
Key insight: By tying all allocations to the arena’s lifetime, Rust’s borrow checker ensures you can’t use memory after the arena is freed.
Arena Memory Layout:
```
┌────────────────────────────────────────────────────┐
│ Used │ Used │ Used │ Free space...                 │
└────────────────────────────────────────────────────┘
                     ↑
                   cursor

Allocation:   just move cursor right
Deallocation: drop entire arena
```
Questions to guide you:
- How do you handle alignment requirements for different types?
- What happens if you try to allocate more than the arena size?
- How can you use lifetimes to tie allocated references to the arena?
Learning milestones:
- You allocate raw memory with unsafe → You understand Rust’s safety boundary
- Your API prevents use-after-free via lifetimes → You’ve designed with ownership in mind
- You handle alignment correctly → You understand low-level memory layout
- Your arena is faster than the system allocator for many small allocations → You understand allocator tradeoffs
Project 3: Fearless Concurrent Web Scraper (Data Races Impossible)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: Go (for comparison), Python (to feel the pain)
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 2: Intermediate
- Knowledge Area: Concurrency / Async / Networking
- Software or Tool: tokio, reqwest, scraper
- Main Book: “Rust Atomics and Locks” by Mara Bos
What you’ll build: A highly concurrent web scraper that fetches hundreds of pages simultaneously, extracts data, and aggregates results—with zero data races guaranteed by the compiler.
Why it teaches Rust’s strengths: Concurrency bugs are notoriously hard to find and reproduce. Rust’s Send and Sync traits make data races a compile-time error. You’ll share data between threads and feel the compiler protecting you.
Core challenges you’ll face:
- Sharing state between async tasks → maps to Arc, Mutex, and the Send/Sync traits
- Handling rate limiting without blocking → maps to async/await and tokio runtime
- Aggregating results from many concurrent operations → maps to channels and message passing
- Graceful error handling across tasks → maps to Result propagation in async contexts
Key Concepts:
- Send and Sync traits: “Rust Atomics and Locks” Chapter 1 - Mara Bos
- Async/Await: “Asynchronous Programming in Rust” - Rust Async Book (online)
- Arc and Mutex: “The Rust Programming Language” Chapter 16 - Steve Klabnik
- Channels (mpsc): “Programming Rust, 2nd Edition” Chapter 19 - Jim Blandy
Difficulty: Intermediate Time estimate: 1-2 weeks Prerequisites: Project 1 completed, basic HTTP understanding
Real world outcome:
```
$ cargo run -- --urls urls.txt --workers 50 --output results.json

🕷️  Fearless Web Scraper v1.0
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[░░░░░░░░░░░░░░░░░░░░] 0/500 pages
... scraping ...
[████████████████████] 500/500 pages
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Completed in 4.2 seconds
   • 500 pages scraped
   • 50 concurrent workers
   • 0 data races (guaranteed by Rust!)
   • 12 failed requests (logged to errors.log)

Results written to results.json
```
Implementation Hints:
The key insight is Rust’s trait bounds:
- `Send`: Safe to transfer between threads
- `Sync`: Safe to share references between threads
```rust
// This WON'T compile—Rc is not Send!
let counter = Rc::new(RefCell::new(0));
tokio::spawn(async move {
    let _ = counter; // Error: `Rc<RefCell<i32>>` cannot be sent between threads safely
});

// This WILL compile—Arc<Mutex<_>> is Send + Sync!
let counter = Arc::new(Mutex::new(0));
tokio::spawn(async move {
    let mut count = counter.lock().unwrap();
    *count += 1;
});
```
Architecture suggestion:
- Main thread reads URLs from file
- Spawns N worker tasks
- Workers pull URLs from a channel, fetch, parse, send results back
- Aggregator task collects results, writes to output
Questions to guide you:
- Why can’t multiple threads mutate a `Vec` directly? (Mutation needs `&mut`, and only one mutable reference may exist, hence `Arc<Mutex<Vec<T>>>`)
- What’s the difference between `Mutex` and `RwLock`?
- When would you use channels vs. shared state?
Learning milestones:
- The compiler stops you from sharing non-thread-safe types → You understand Send/Sync
- You successfully share state with `Arc<Mutex<_>>` → You understand interior mutability
- You use channels for message passing → You understand Rust’s concurrency primitives
- Your scraper handles 100+ concurrent connections → You’ve built a real async system
Project 4: Zero-Copy Parser (Performance Without Sacrifice)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: C (for comparison)
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 3. The “Service & Support” Model
- Difficulty: Level 3: Advanced
- Knowledge Area: Parsing / Lifetimes / Zero-Copy Design
- Software or Tool: nom or pest parser combinators
- Main Book: “Programming Rust, 2nd Edition” by Jim Blandy
What you’ll build: A high-performance log file parser that processes gigabytes of logs without copying data—using Rust’s lifetime system to safely reference the original buffer.
Why it teaches Rust’s strengths: This is where lifetimes shine. You’ll return references to the input data, and the compiler will ensure those references don’t outlive the source. This is impossible to do safely in C without extreme discipline.
Core challenges you’ll face:
- Designing APIs that return references with lifetimes → maps to explicit lifetime annotations
- Avoiding unnecessary allocations → maps to understanding &str vs String
- Parsing without copying the input buffer → maps to the borrowing model
- Making the parser generic over input types → maps to lifetime bounds on generics
Key Concepts:
- Lifetimes in depth: “The Rust Programming Language” Chapter 10 - Steve Klabnik
- Zero-copy parsing: “Rust for Rustaceans” Chapter 3 - Jon Gjengset
- The Cow type: “Programming Rust, 2nd Edition” Chapter 13 - Jim Blandy
- Parser combinators with nom: nom documentation + tutorials
Difficulty: Advanced Time estimate: 2-3 weeks Prerequisites: Projects 1-2 completed, comfort with references
Real world outcome:
```
$ cargo run -- parse access.log --format nginx

📊 Zero-Copy Log Parser
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Parsing 2.3 GB log file...

Memory usage: 48 MB (vs 2.3 GB if we copied everything!)
Parse time:   1.8 seconds
Throughput:   1.28 GB/s

Top 10 Endpoints:
  /api/users     │████████████████████│ 45,231 hits
  /api/products  │███████████████     │ 34,102 hits
  /health        │████████            │ 18,445 hits
  ...

Unique IPs: 12,847
Total Requests: 8,234,521
Errors (5xx): 127 (0.0015%)
```
Implementation Hints:
The magic of zero-copy:
```rust
// COPIES the data (allocates a new String)
fn get_name_copying(input: &str) -> String {
    input[5..10].to_string()
}

// ZERO-COPY (returns a reference into the original)
fn get_name_zero_copy<'a>(input: &'a str) -> &'a str {
    &input[5..10]
}
```
The lifetime `'a` says: “The returned reference lives as long as the input.”
Your log entry might look like:
```rust
struct LogEntry<'a> {
    ip: &'a str,        // Points into original buffer
    timestamp: &'a str, // No allocation
    method: &'a str,    // No copying
    path: &'a str,      // Maximum speed
    status: u16,        // Small value, ok to copy
}
```
Questions to guide you:
- What happens if you try to use a `LogEntry` after the input buffer is freed?
- How does the compiler prevent this at compile time?
- When SHOULD you copy (turn `&str` into `String`)?
Learning milestones:
- You write functions with explicit lifetime annotations → You understand lifetime syntax
- Your parser returns references to input data → You understand zero-copy design
- The compiler prevents dangling references → You’ve seen lifetimes protect you
- Your parser is 10x faster than one that copies → You understand the performance benefit
Project 5: Type-State Builder Pattern (Make Invalid States Unrepresentable)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: Haskell (similar type system power)
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 2: Intermediate
- Knowledge Area: Type System / Generics / API Design
- Software or Tool: Rust type system, PhantomData
- Main Book: “Rust for Rustaceans” by Jon Gjengset
What you’ll build: An HTTP request builder where the type system ensures you can’t send a request without setting required fields—illegal states are compile-time errors, not runtime exceptions.
Why it teaches Rust’s strengths: This showcases Rust’s type system beyond memory safety. You’ll encode state machine transitions in the type system itself. Invalid API usage becomes a compile error.
Core challenges you’ll face:
- Encoding states as zero-sized types → maps to PhantomData and marker traits
- Transitioning between states via method chaining → maps to consuming self and returning new types
- Making the API impossible to misuse → maps to compile-time invariant enforcement
- Keeping the API ergonomic despite type complexity → maps to good generic design
Key Concepts:
- Type-state pattern: “Rust for Rustaceans” Chapter 3 - Jon Gjengset
- PhantomData: “The Rust Programming Language” Chapter 19 - Steve Klabnik
- Zero-sized types: “Programming Rust, 2nd Edition” Chapter 11 - Jim Blandy
- Builder pattern in Rust: “Rust API Guidelines” - rust-lang.github.io
Difficulty: Intermediate Time estimate: 1 week Prerequisites: Project 1 completed, basic generics understanding
Real world outcome:
```rust
// This COMPILES ✅
let response = HttpRequest::new()
    .method(Method::POST)              // State: HasMethod
    .url("https://api.example.com")    // State: HasUrl
    .header("Content-Type", "application/json")
    .body(json!({"name": "Alice"}))    // State: HasBody
    .send()                            // Only available when all required fields set!
    .await?;

// This DOES NOT COMPILE ❌
let response = HttpRequest::new()
    .url("https://api.example.com")
    .send()                            // Error: `send` not found—method not set!
    .await?;

// Compiler error:
// error[E0599]: no method named `send` found for struct
//   `HttpRequest<NoMethod, HasUrl>` in the current scope
//    |
// 45 |     .send()
//    |      ^^^^ method not found in `HttpRequest<NoMethod, HasUrl>`
//    |
//    = note: `send` requires `HttpRequest<HasMethod, HasUrl>`
```
Implementation Hints:
The type-state pattern uses generics to track state:
```rust
// State markers (zero-sized, exist only at compile time)
struct NoMethod;
struct HasMethod;
struct NoUrl;
struct HasUrl;

// Request builder with state encoded in types
struct HttpRequest<MethodState, UrlState> {
    method: Option<Method>,
    url: Option<String>,
    _method_state: PhantomData<MethodState>,
    _url_state: PhantomData<UrlState>,
}

impl HttpRequest<NoMethod, NoUrl> {
    fn new() -> Self { /* ... */ }
}

// method() is only available while the method is not yet set
impl<U> HttpRequest<NoMethod, U> {
    fn method(self, m: Method) -> HttpRequest<HasMethod, U> {
        // Consumes self, returns a NEW type with HasMethod
    }
}

// send() only exists when BOTH are set
impl HttpRequest<HasMethod, HasUrl> {
    async fn send(self) -> Response {
        // Can only call this in a valid state!
    }
}
```
Questions to guide you:
- Why do we use `PhantomData<T>` instead of storing `T` directly?
- How does “consuming self” prevent using the builder in the wrong order?
- Could you do this in Java or Python? (Hint: not at compile time)
Learning milestones:
- You encode states as types → You understand zero-sized types
- Invalid transitions don’t compile → You’ve made illegal states unrepresentable
- Your API guides users to correct usage → You understand the power of Rust’s type system
- You apply this to another domain (database connections, file handles) → You’ve internalized the pattern
Project 6: Lock-Free Concurrent Queue (Atomics Without Fear)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: C++, C
- Coolness Level: Level 5: Pure Magic
- Business Potential: 4. The “Open Core” Infrastructure
- Difficulty: Level 4: Expert
- Knowledge Area: Lock-Free Programming / Atomics / Memory Ordering
- Software or Tool: std::sync::atomic, crossbeam
- Main Book: “Rust Atomics and Locks” by Mara Bos
What you’ll build: A high-performance lock-free MPMC (multi-producer multi-consumer) queue that multiple threads can push/pop from simultaneously without any mutexes—using only atomic operations.
Why it teaches Rust’s strengths: Lock-free programming is notoriously error-prone in C/C++. Rust’s type system helps (Send/Sync), and the explicit memory ordering parameters make you think about CPU memory models. This is systems programming at its finest.
Core challenges you’ll face:
- Understanding memory ordering (SeqCst, Release, Acquire) → maps to CPU memory models and visibility
- Implementing compare-and-swap loops → maps to atomic operations
- Preventing ABA problems → maps to hazard pointers or epoch-based reclamation
- Ensuring your queue is actually correct → maps to formal reasoning about concurrency
Key Concepts:
- Memory ordering: “Rust Atomics and Locks” Chapter 3 - Mara Bos
- Compare-and-swap: “Rust Atomics and Locks” Chapter 4 - Mara Bos
- Lock-free data structures: “The Art of Multiprocessor Programming” - Herlihy & Shavit
- Epoch-based reclamation: crossbeam-epoch documentation
Difficulty: Expert Time estimate: 3-4 weeks Prerequisites: Projects 1-3 completed, strong understanding of threads
Real world outcome:
```
$ cargo run --release -- benchmark

🔥 Lock-Free Queue Benchmark
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Configuration: 8 producers, 8 consumers, 10M operations each

Mutex-based Queue:
  Throughput:  2.3 million ops/sec
  Latency p99: 4.2 μs

Your Lock-Free Queue:
  Throughput:  18.7 million ops/sec (8.1x faster!)
  Latency p99: 0.3 μs (14x lower!)

Correctness verification:
  ✅ All 80M items accounted for
  ✅ No duplicates
  ✅ No data corruption
  ✅ Stress test passed (1 hour, no hangs)
```
Implementation Hints:
Start with a bounded single-producer single-consumer queue (simpler):
SPSC Ring Buffer:
```
┌───┬───┬───┬───┬───┬───┬───┬───┐
│ A │ B │ C │   │   │   │   │   │
└───┴───┴───┴───┴───┴───┴───┴───┘
  ↑           ↑
 tail        head
(consumer)  (producer)
```
Producer: writes at head, increments head atomically
Consumer: reads at tail, increments tail atomically
No locks needed! Just atomics.
Memory ordering matters:
```rust
// When publishing data (producer)
self.head.store(new_head, Ordering::Release);
// "Release" ensures: all prior writes are visible before this store

// When reading data (consumer)
let head = self.head.load(Ordering::Acquire);
// "Acquire" ensures: all writes before the Release are visible
```
Questions to guide you:
- What’s the difference between `Ordering::Relaxed` and `Ordering::SeqCst`?
- What is the ABA problem and why is it dangerous?
- How do you safely free memory in a lock-free structure?
Learning milestones:
- You implement a correct SPSC queue → You understand basic atomics
- You handle the producer-consumer synchronization → You understand Release/Acquire
- You extend to MPMC → You understand CAS loops
- Your queue beats `std::sync::mpsc` in benchmarks → You’ve mastered lock-free programming
Project 7: Embedded LED Controller (No OS, No Problem)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: C (traditional choice for embedded)
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 3. The “Service & Support” Model
- Difficulty: Level 3: Advanced
- Knowledge Area: Embedded Systems / No-std Rust / Hardware
- Software or Tool: Raspberry Pi Pico, Embassy or RTIC framework
- Main Book: “Making Embedded Systems, 2nd Edition” by Elecia White + “The Embedded Rust Book”
What you’ll build: An LED light controller running on a microcontroller (like Raspberry Pi Pico) with no operating system—pure bare-metal Rust that directly manipulates hardware registers.
Why it teaches Rust’s strengths: Embedded is where C has reigned for 50 years. Rust brings memory safety to microcontrollers. You’ll write #![no_std] code and see that Rust’s safety guarantees work even without an OS or heap.
Core challenges you’ll face:
- Writing no_std code without the standard library → maps to core vs std library
- Directly accessing hardware registers safely → maps to volatile reads/writes
- Managing resources without a heap → maps to static allocation
- Interrupt handling in Rust → maps to RTIC or Embassy frameworks
Key Concepts:
- No-std Rust: “The Embedded Rust Book” - rust-embedded.github.io
- Memory-mapped I/O: “Making Embedded Systems, 2nd Edition” Chapter 4 - Elecia White
- Volatile access: “Programming Rust, 2nd Edition” Chapter 22 - Jim Blandy
- Embedded HALs: “Rust for Rustaceans” Chapter 12 - Jon Gjengset
Difficulty: Advanced Time estimate: 2-3 weeks Prerequisites: Projects 1-2 completed, some electronics knowledge helpful
Real world outcome:
Physical setup: Raspberry Pi Pico + 8 LEDs + button
Your Rust code controls:
- LED patterns (chase, fade, rainbow)
- Button input handling via interrupt
- PWM for brightness control
- USB serial for live control from computer
```
┌─────────────────────────────────────┐
│  🔴 🟢 🔵 🟡 🔴 🟢 🔵 🟡           │
│  ← LEDs                             │
│                                     │
│  [Button]                           │
│                                     │
│  USB → Computer for control         │
└─────────────────────────────────────┘
```
```
$ screen /dev/ttyACM0
> pattern chase
LED pattern: chase
> speed 50
Chase speed: 50ms
> brightness 75
Brightness: 75%
```
Implementation Hints:
Embedded Rust looks different:
```rust
#![no_std]  // Don't link the standard library
#![no_main] // We define our own entry point

use panic_halt as _; // Define panic behavior

#[cortex_m_rt::entry]
fn main() -> ! {
    // Get peripherals
    let peripherals = rp2040_pac::Peripherals::take().unwrap();

    // Configure GPIO pin as output
    // (Exact API depends on the HAL you use)

    loop {
        led.set_high();
        delay.delay_ms(500);
        led.set_low();
        delay.delay_ms(500);
    }
}
```
The `-> !` means “never returns”—embedded code runs forever.
Questions to guide you:
- What happens if you panic in no_std? (You need to define it!)
- How is memory managed without a heap? (Static allocation or stack only)
- What makes Rust safer than C for embedded? (Type-safe register access, no buffer overflows)
Learning milestones:
- You blink an LED with Rust → You can write no_std code
- You handle button interrupts → You understand embedded async patterns
- You implement PWM for LED brightness → You can work with hardware timers
- You communicate over USB serial → You’ve built a complete embedded system
Project 8: Plugin System with Dynamic Loading (Traits as Interfaces)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: C (traditional FFI approach)
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 4. The “Open Core” Infrastructure
- Difficulty: Level 3: Advanced
- Knowledge Area: Traits / Dynamic Dispatch / FFI / ABI
- Software or Tool: libloading, abi_stable crate
- Main Book: “Rust for Rustaceans” by Jon Gjengset
What you’ll build: A host application that loads plugins at runtime from shared libraries (.so/.dll), where plugins implement a trait interface—demonstrating Rust’s approach to polymorphism and FFI.
Why it teaches Rust’s strengths: Traits are Rust’s answer to interfaces/abstract classes, but they’re more powerful. You’ll learn the difference between static and dynamic dispatch, and see how Rust handles ABI stability (or lack thereof).
Core challenges you’ll face:
- Defining a stable plugin ABI → maps to understanding why Rust has no stable ABI
- Loading shared libraries at runtime → maps to FFI and unsafe boundaries
- Using trait objects for dynamic dispatch → maps to dyn Trait and vtables
- Ensuring plugin safety → maps to sandboxing considerations
Key Concepts:
- Traits and trait objects: “The Rust Programming Language” Chapter 17 - Steve Klabnik
- Dynamic dispatch (dyn): “Rust for Rustaceans” Chapter 2 - Jon Gjengset
- FFI: “The Rust Programming Language” Chapter 19 - Steve Klabnik
- ABI stability: abi_stable crate documentation
Difficulty: Advanced Time estimate: 2-3 weeks Prerequisites: Projects 1 and 5 completed, understanding of trait bounds
Real world outcome:
```
$ ls plugins/
greeting_plugin.so
math_plugin.so
weather_plugin.so

$ cargo run
🔌 Plugin Host v1.0
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Loading plugins from ./plugins/
  ✅ Loaded: greeting_plugin v1.0
     Commands: greet, farewell
  ✅ Loaded: math_plugin v2.1
     Commands: add, multiply, factorial
  ✅ Loaded: weather_plugin v1.2
     Commands: forecast, temperature

> greet Alice
[greeting_plugin] Hello, Alice! Welcome!

> factorial 10
[math_plugin] 10! = 3628800

> forecast London
[weather_plugin] London: Cloudy, 12°C, 80% humidity
```
Implementation Hints:
The plugin trait:
```rust
// In a shared crate used by both host and plugins
pub trait Plugin: Send + Sync {
    fn name(&self) -> &str;
    fn version(&self) -> &str;
    fn commands(&self) -> Vec<&str>;
    fn execute(&self, command: &str, args: &[&str]) -> Result<String, PluginError>;
}
```
The ABI problem: Rust doesn’t guarantee struct layouts between compilations. Use the `abi_stable` crate or a C-like FFI interface.
```rust
// Plugin entry point (`extern "C"` gives a stable symbol and calling convention).
// NOTE: `*mut dyn Plugin` is a fat pointer (data + vtable), so this signature is
// not truly C-ABI-safe and the compiler will warn; production designs pass a
// thin #[repr(C)] wrapper instead, or use abi_stable.
#[no_mangle]
pub extern "C" fn create_plugin() -> *mut dyn Plugin {
    Box::into_raw(Box::new(MyPlugin::new()))
}
```
Questions to guide you:
- What’s the difference between `impl Trait` and `dyn Trait`?
- Why can’t you just return `Box<dyn Plugin>` across FFI boundaries?
Box<dyn Plugin>across FFI boundaries? - How does the vtable (virtual dispatch table) work?
Learning milestones:
- You load a shared library and call a function → You understand FFI basics
- Your plugin implements a trait → You understand dynamic dispatch
- You handle ABI stability → You understand Rust’s compilation model
- You hot-reload plugins without restarting → You’ve built a production-quality plugin system
Project 9: Custom Smart Pointer (Understand Rc, Arc, RefCell)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: C++ (unique_ptr, shared_ptr comparison)
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: Smart Pointers / Interior Mutability / Drop
- Software or Tool: Rust standard library source code
- Main Book: “Programming Rust, 2nd Edition” by Jim Blandy
What you’ll build: Your own implementations of Rc (reference counting), Arc (atomic reference counting), and RefCell (runtime borrow checking)—understanding exactly how Rust’s smart pointers work under the hood.
Why it teaches Rust’s strengths: Smart pointers are fundamental to ownership patterns in Rust. By building them yourself, you’ll understand exactly when to use each one and what tradeoffs they make.
Core challenges you’ll face:
- Implementing reference counting correctly → maps to understanding Drop and cycles
- Making Arc thread-safe with atomics → maps to understanding why Rc isn’t Send
- Runtime borrow checking in RefCell → maps to understanding the borrowing rules
- Handling memory properly in Drop → maps to manual memory management in unsafe
Key Concepts:
- Smart pointers: “The Rust Programming Language” Chapter 15 - Steve Klabnik
- Interior mutability: “Rust for Rustaceans” Chapter 8 - Jon Gjengset
- Drop trait: “Programming Rust, 2nd Edition” Chapter 13 - Jim Blandy
- Reference cycles: “The Rust Programming Language” Chapter 15 - Steve Klabnik
Difficulty: Advanced Time estimate: 2-3 weeks Prerequisites: Projects 1-2 and 6 completed
Real world outcome:
```rust
// Your Rc in action (matches std::rc::Rc behavior)
fn main() {
    let a = MyRc::new(5);
    println!("Count after creating a: {}", MyRc::strong_count(&a)); // 1

    let b = MyRc::clone(&a);
    println!("Count after creating b: {}", MyRc::strong_count(&a)); // 2

    {
        let c = MyRc::clone(&a);
        println!("Count after creating c: {}", MyRc::strong_count(&a)); // 3
    }

    println!("Count after c goes out of scope: {}", MyRc::strong_count(&a)); // 2
}
```
```rust
// Your RefCell in action
fn main() {
    let cell = MyRefCell::new(vec![1, 2, 3]);

    // Multiple immutable borrows OK
    let borrow1 = cell.borrow();
    let borrow2 = cell.borrow();
    println!("{:?}", *borrow1);
    drop(borrow1);
    drop(borrow2);

    // Mutable borrow after immutable borrows released
    cell.borrow_mut().push(4);

    // This would panic at runtime:
    // let b = cell.borrow();
    // let m = cell.borrow_mut(); // panic: already borrowed!
}
```
Implementation Hints:
Rc structure:
struct RcInner<T> {
value: T,
strong_count: Cell<usize>,
// weak_count for Weak references (optional advanced feature)
}
pub struct MyRc<T> {
ptr: NonNull<RcInner<T>>,
}
impl<T> Clone for MyRc<T> {
fn clone(&self) -> Self {
// Increment count
let inner = unsafe { self.ptr.as_ref() };
inner.strong_count.set(inner.strong_count.get() + 1);
MyRc { ptr: self.ptr }
}
}
impl<T> Drop for MyRc<T> {
fn drop(&mut self) {
// Decrement count, free if zero
}
}
RefCell uses runtime borrow tracking:
pub struct MyRefCell<T> {
value: UnsafeCell<T>,
borrow_state: Cell<isize>, // 0 = unborrowed, >0 = immutably borrowed, -1 = mutably borrowed
}
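The hints above assemble into a complete, minimal `MyRc`. This is a sketch under simplifying assumptions—no `Weak` support, and `Box::leak`/`Box::from_raw` used to manage the allocation, which is one valid approach rather than a copy of std's internals:

```rust
use std::cell::Cell;
use std::ops::Deref;
use std::ptr::NonNull;

struct RcInner<T> {
    value: T,
    strong_count: Cell<usize>,
}

pub struct MyRc<T> {
    ptr: NonNull<RcInner<T>>,
}

impl<T> MyRc<T> {
    pub fn new(value: T) -> Self {
        // Leak a Box so the allocation outlives this stack frame;
        // Drop reclaims it when the last clone goes away.
        let inner = Box::new(RcInner { value, strong_count: Cell::new(1) });
        MyRc { ptr: NonNull::from(Box::leak(inner)) }
    }

    pub fn strong_count(this: &Self) -> usize {
        let inner = unsafe { this.ptr.as_ref() };
        inner.strong_count.get()
    }
}

impl<T> Clone for MyRc<T> {
    fn clone(&self) -> Self {
        let inner = unsafe { self.ptr.as_ref() };
        inner.strong_count.set(inner.strong_count.get() + 1);
        MyRc { ptr: self.ptr }
    }
}

impl<T> Deref for MyRc<T> {
    type Target = T;
    fn deref(&self) -> &T {
        let inner = unsafe { self.ptr.as_ref() };
        &inner.value
    }
}

impl<T> Drop for MyRc<T> {
    fn drop(&mut self) {
        let count = Self::strong_count(self);
        if count == 1 {
            // Last owner: rebuild the Box so T and the allocation are freed.
            unsafe { drop(Box::from_raw(self.ptr.as_ptr())) };
        } else {
            let inner = unsafe { self.ptr.as_ref() };
            inner.strong_count.set(count - 1);
        }
    }
}

fn main() {
    let a = MyRc::new(String::from("hello"));
    let b = MyRc::clone(&a);
    assert_eq!(MyRc::strong_count(&a), 2);
    drop(b);
    assert_eq!(MyRc::strong_count(&a), 1);
    println!("{}", *a); // prints "hello"
}
```

Note that this version is single-threaded by construction: `Cell` makes `MyRc` automatically `!Sync`, which is exactly the property the Arc exercise asks you to lift with atomics.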
Questions to guide you:
- Why is Rc not thread-safe? (Cell uses non-atomic operations)
- How does Arc make it thread-safe? (AtomicUsize instead of Cell)
- What happens if you create a cycle with Rc? (Memory leak!)
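The cycle question is easy to demonstrate with std's own `Rc`: two nodes that point at each other keep each other's strong count above zero forever. `Node` and its `next` field are illustrative names, not part of the project spec:

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    // Close the cycle: a -> b -> a
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
    // When a and b go out of scope, each count only drops to 1, never 0:
    // both allocations leak. Weak<T> is the standard way to break cycles.
}
```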
Learning milestones:
- Your Rc correctly counts references → You understand reference counting
- Drop frees memory at zero references → You understand RAII
- Your Arc is thread-safe → You understand atomic operations
- Your RefCell panics on borrow violations → You understand runtime checking
Project 10: Procedural Macro Library (Compile-Time Code Generation)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: None (Rust-specific feature)
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 4. The “Open Core” Infrastructure
- Difficulty: Level 4: Expert
- Knowledge Area: Metaprogramming / Macros / Compiler Integration
- Software or Tool: syn, quote, proc-macro2
- Main Book: “The Rust Programming Language” by Steve Klabnik (Chapter 19) + “Rust for Rustaceans”
What you’ll build: A derive macro that auto-generates code at compile time—like implementing #[derive(Builder)] that creates builder patterns, or #[derive(Serialize)] that creates serialization code.
Why it teaches Rust’s strengths: Macros are Rust’s metaprogramming system. Unlike C preprocessor macros, which do blind text substitution, Rust procedural macros operate on parsed token streams—a structured syntax tree (though not type information, since they run before type checking). They enable libraries like serde, which generate optimal serialization code at compile time.
Core challenges you’ll face:
- Parsing Rust syntax with syn → maps to understanding TokenStream and AST
- Generating valid Rust code with quote → maps to code generation
- Handling edge cases and error reporting → maps to compile-time diagnostics
- Testing macro output → maps to macro debugging techniques
Key Concepts:
- Procedural macros: “The Rust Programming Language” Chapter 19 - Steve Klabnik
- TokenStream manipulation: “Rust for Rustaceans” Chapter 9 - Jon Gjengset
- syn crate for parsing: syn documentation
- quote crate for generation: quote documentation
Difficulty: Expert Time estimate: 2-3 weeks Prerequisites: Most prior projects completed, strong Rust fundamentals
Real world outcome:
// User writes this:
#[derive(Builder)]
struct Config {
host: String,
port: u16,
#[builder(default = "false")]
debug: bool,
#[builder(optional)]
timeout: Option<u64>,
}
// Your macro generates this at compile time:
impl Config {
fn builder() -> ConfigBuilder {
ConfigBuilder::default()
}
}
struct ConfigBuilder {
host: Option<String>,
port: Option<u16>,
debug: bool,
timeout: Option<u64>,
}
impl ConfigBuilder {
fn host(mut self, value: String) -> Self {
self.host = Some(value);
self
}
fn port(mut self, value: u16) -> Self {
self.port = Some(value);
self
}
fn debug(mut self, value: bool) -> Self {
self.debug = value;
self
}
fn timeout(mut self, value: u64) -> Self {
self.timeout = Some(value);
self
}
fn build(self) -> Result<Config, &'static str> {
Ok(Config {
host: self.host.ok_or("host is required")?,
port: self.port.ok_or("port is required")?,
debug: self.debug,
timeout: self.timeout,
})
}
}
// Usage:
fn main() {
let config = Config::builder()
.host("localhost".into())
.port(8080)
.build()
.unwrap();
}
Implementation Hints:
Procedural macros live in a separate crate with proc-macro = true:
# In Cargo.toml
[lib]
proc-macro = true
// In lib.rs
use proc_macro::TokenStream;
use quote::{format_ident, quote};
use syn::{parse_macro_input, DeriveInput};
#[proc_macro_derive(Builder, attributes(builder))]
pub fn derive_builder(input: TokenStream) -> TokenStream {
let input = parse_macro_input!(input as DeriveInput);
let name = &input.ident;
let builder_name = format_ident!("{}Builder", name);
// Extract fields, generate builder methods...
let expanded = quote! {
impl #name {
fn builder() -> #builder_name {
#builder_name::default()
}
}
struct #builder_name {
// ...generated fields...
}
// ...generated impl...
};
TokenStream::from(expanded)
}
Questions to guide you:
- What’s the difference between declarative macros (macro_rules!) and procedural macros?
- How do you parse attributes like #[builder(default = "value")]?
- How do you provide good error messages when macro input is invalid?
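For the first question, a small `macro_rules!` example makes the contrast concrete: a declarative macro matches token patterns and expands a template, with no ability to inspect the program the way syn-based procedural macros can. `my_vec` is a hypothetical toy, a simplified cousin of std's `vec!`:

```rust
// A declarative macro: purely pattern-match on token shapes and expand.
// No parsing with syn, no DeriveInput, no compile-time Rust code running.
macro_rules! my_vec {
    // Zero or more comma-separated expressions, optional trailing comma.
    ($($x:expr),* $(,)?) => {{
        let mut v = Vec::new();
        $( v.push($x); )* // repeat the push once per matched expression
        v
    }};
}

fn main() {
    let v: Vec<i32> = my_vec![1, 2, 3];
    assert_eq!(v, vec![1, 2, 3]);
    let empty: Vec<i32> = my_vec![];
    assert!(empty.is_empty());
}
```

A procedural macro, by contrast, is an ordinary Rust function run by the compiler: it receives a `TokenStream` and can do arbitrary analysis before emitting code.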
Learning milestones:
- You parse a struct with syn → You understand Rust’s AST
- You generate valid code with quote → You understand code generation
- Your macro handles edge cases gracefully → You understand macro hygiene
- You create a genuinely useful derive macro → You’ve mastered metaprogramming
Project 11: Async Runtime from Scratch (Understand Futures and Executors)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: None (deep Rust internals)
- Coolness Level: Level 5: Pure Magic
- Business Potential: 4. The “Open Core” Infrastructure
- Difficulty: Level 5: Master
- Knowledge Area: Async / Futures / Executors / Polling
- Software or Tool: std::future, waker, context
- Main Book: “Asynchronous Programming in Rust” (Rust Async Book) + “Rust for Rustaceans”
What you’ll build: A minimal async runtime like a tiny tokio—implementing the executor that polls futures, the reactor that handles I/O events, and the waker system that schedules tasks.
Why it teaches Rust’s strengths: Rust’s async is unique: zero-cost abstractions with no hidden allocations. By building a runtime, you’ll understand that async/await is just syntax sugar over state machines and polling.
Core challenges you’ll face:
- Understanding the Future trait and poll method → maps to how async desugars
- Implementing a waker and context → maps to how tasks get rescheduled
- Building an event loop with epoll/kqueue → maps to OS-level async I/O
- Managing task queues and scheduling → maps to executor design
Key Concepts:
- Future trait: “Asynchronous Programming in Rust” - Rust Async Book
- Wakers and Context: “Rust for Rustaceans” Chapter 8 - Jon Gjengset
- State machine desugaring: “Programming Rust, 2nd Edition” Chapter 20 - Jim Blandy
- epoll/kqueue: “The Linux Programming Interface” Chapter 63 - Michael Kerrisk
Difficulty: Master Time estimate: 1 month+ Prerequisites: All prior projects, deep understanding of ownership and lifetimes
Real world outcome:
// Your runtime powers this async code:
fn main() {
MyRuntime::block_on(async {
println!("Starting async operations...");
// Spawn concurrent tasks
let task1 = spawn(async {
sleep(Duration::from_secs(1)).await;
"Task 1 complete"
});
let task2 = spawn(async {
sleep(Duration::from_millis(500)).await;
"Task 2 complete"
});
// Wait for both
let (r1, r2) = join!(task1, task2);
println!("{}, {}", r1, r2);
});
}
// Output:
// Starting async operations...
// Task 2 complete, Task 1 complete
// (Task 2 finishes first because it sleeps less)
// Benchmark vs tokio:
// Your runtime: 50k tasks/sec
// Tokio: 500k tasks/sec
// Not bad for learning!
Implementation Hints:
The core insight: async functions become state machines.
// This async function:
async fn example() {
println!("before");
some_future.await;
println!("after");
}
// Becomes roughly this state machine (simplified; the real desugaring
// also handles pinning of the future held across the .await):
struct Example {
    state: ExampleState,
}

enum ExampleState {
    Start,
    Waiting(SomeFuture),
    Done,
}

impl Future for Example {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context) -> Poll<()> {
        use ExampleState::*;
        let this = self.get_mut(); // fine as long as Example: Unpin
        loop {
            match this.state {
                Start => {
                    println!("before");
                    this.state = Waiting(some_future);
                    // Loop around to poll the inner future immediately
                }
                Waiting(ref mut f) => match Pin::new(f).poll(cx) {
                    Poll::Ready(()) => {
                        println!("after");
                        this.state = Done;
                        return Poll::Ready(());
                    }
                    Poll::Pending => return Poll::Pending,
                },
                Done => panic!("polled after completion"),
            }
        }
    }
}
The executor just polls futures:
impl Runtime {
fn block_on<F: Future>(&self, future: F) -> F::Output {
let waker = /* create waker that does nothing for now */;
let mut cx = Context::from_waker(&waker);
let mut future = pin!(future);
loop {
match future.as_mut().poll(&mut cx) {
Poll::Ready(output) => return output,
Poll::Pending => {
// Wait for I/O events...
}
}
}
}
}
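Filling in the “waker that does nothing” gives a fully runnable single-future block_on. This version busy-polls, which is only acceptable for a toy—a real runtime parks the thread until a waker fires. A sketch using std's RawWakerVTable:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker whose wake() does nothing: fine for a block_on that re-polls in
// a loop anyway. A real executor's waker would re-queue the task instead.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn block_on<F: Future>(future: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut future = pin!(future); // stack-pin the future (std::pin::pin!)
    loop {
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(output) => return output,
            Poll::Pending => std::thread::yield_now(), // toy: spin instead of sleeping
        }
    }
}

fn main() {
    let answer = block_on(async { 21 * 2 });
    assert_eq!(answer, 42);
}
```

Upgrading this into the project's real executor means replacing the spin loop with a parked thread (or epoll wait) and giving the waker a way to identify and re-schedule its task.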
Questions to guide you:
- What does Pin do, and why is it needed for async?
- How does a waker know which task to wake?
- What’s the difference between poll-based and callback-based async?
Learning milestones:
- You understand Future::poll → You know how async works at the lowest level
- You implement a simple executor → You can run futures to completion
- You integrate with epoll/kqueue → Your runtime handles real I/O
- You implement spawn() for concurrent tasks → You’ve built a multi-task runtime
Project 12: TCP/IP Stack in Userspace (Network Programming Mastery)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: C (traditional approach)
- Coolness Level: Level 5: Pure Magic
- Business Potential: 3. The “Service & Support” Model
- Difficulty: Level 5: Master
- Knowledge Area: Networking / Protocol Implementation / Raw Sockets
- Software or Tool: tun/tap interfaces, raw sockets
- Main Book: “TCP/IP Illustrated, Volume 1” by W. Richard Stevens
What you’ll build: A userspace TCP/IP stack that handles Ethernet frames, IP packets, and TCP connections—bypassing the kernel’s network stack entirely.
Why it teaches Rust’s strengths: Network protocols require precise bit manipulation and careful memory management. Rust’s strong types and safety guarantees make implementing protocols safer than C while maintaining the same performance.
Core challenges you’ll face:
- Parsing and constructing network packets → maps to byte-level manipulation with types
- Implementing TCP state machine → maps to enums and pattern matching
- Handling checksums and byte order → maps to safe low-level operations
- Managing connection state safely → maps to ownership for resource management
Key Concepts:
- TCP state machine: “TCP/IP Illustrated, Volume 1” Chapters 17-24 - Stevens
- Network byte order: “Computer Networks” Chapter 5 - Tanenbaum
- Raw sockets and TUN/TAP: “The Linux Programming Interface” Chapter 58 - Kerrisk
- Bitfield parsing in Rust: “Programming Rust, 2nd Edition” - Jim Blandy
Difficulty: Master Time estimate: 1-2 months Prerequisites: Strong networking fundamentals, all prior Rust projects
Real world outcome:
$ sudo cargo run -- --interface tun0
🌐 RustTCP - Userspace TCP/IP Stack
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Interface: tun0
IP: 192.168.1.100/24
Listening on port 80...
[TCP] SYN received from 192.168.1.1:54321
[TCP] -> SYN-ACK sent
[TCP] ACK received - Connection ESTABLISHED
[HTTP] Request: GET / HTTP/1.1
[HTTP] Response: 200 OK (342 bytes)
[TCP] FIN received
[TCP] -> FIN-ACK sent
[TCP] Connection CLOSED
Statistics:
Packets received: 1,247
Packets sent: 1,189
TCP connections: 12
Checksum errors: 0
Retransmissions: 3
Implementation Hints:
Define packet structures with explicit layout:
#[repr(C, packed)]
struct EthernetHeader {
dst_mac: [u8; 6],
src_mac: [u8; 6],
ethertype: u16,
}
#[repr(C, packed)]
struct Ipv4Header {
version_ihl: u8,
dscp_ecn: u8,
total_length: u16,
identification: u16,
flags_fragment: u16,
ttl: u8,
protocol: u8,
checksum: u16,
src_ip: [u8; 4],
dst_ip: [u8; 4],
}
TCP state machine maps beautifully to Rust enums:
#[derive(Clone)]
enum TcpState {
Listen,
SynReceived,
Established,
FinWait1,
FinWait2,
CloseWait,
LastAck,
TimeWait,
Closed,
}
fn handle_packet(&mut self, packet: &TcpPacket) {
    use TcpState::*;
    // SYN, ACK, FIN here stand in for tests on packet.flags
    self.state = match (&self.state, packet.flags) {
        (Listen, SYN) => {
            self.send_syn_ack();
            SynReceived
        }
        (SynReceived, ACK) => Established,
        (Established, FIN) => {
            self.send_ack();
            CloseWait
        }
        // ... other transitions
        _ => self.state.clone(),
    };
}
Questions to guide you:
- How does TCP handle packet loss? (Retransmission with exponential backoff)
- What’s the purpose of the TIME_WAIT state?
- How do you calculate IP and TCP checksums?
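For the checksum question: IPv4, TCP, and UDP all use the Internet checksum from RFC 1071—the one's-complement of the one's-complement sum of the data as 16-bit words. A sketch, tested against a well-known example header (checksum field zeroed before computing):

```rust
// Internet checksum (RFC 1071): sum 16-bit big-endian words, fold the
// carries back into the low 16 bits, then take the one's complement.
fn internet_checksum(data: &[u8]) -> u16 {
    let mut sum: u32 = 0;
    for chunk in data.chunks(2) {
        let word = if chunk.len() == 2 {
            u16::from_be_bytes([chunk[0], chunk[1]])
        } else {
            u16::from_be_bytes([chunk[0], 0]) // pad an odd trailing byte
        };
        sum += word as u32;
    }
    while sum > 0xFFFF {
        sum = (sum & 0xFFFF) + (sum >> 16); // fold carry
    }
    !(sum as u16)
}

fn main() {
    // Example IPv4 header with the checksum field zeroed out.
    let header: [u8; 20] = [
        0x45, 0x00, 0x00, 0x73, 0x00, 0x00, 0x40, 0x00, 0x40, 0x11,
        0x00, 0x00, // checksum bytes, zeroed before computing
        0xc0, 0xa8, 0x00, 0x01, 0xc0, 0xa8, 0x00, 0xc7,
    ];
    println!("{:#06x}", internet_checksum(&header)); // prints 0xb861
}
```

The TCP checksum works the same way but is computed over a pseudo-header (source/destination IPs, protocol, TCP length) prepended to the segment.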
Learning milestones:
- You parse and construct Ethernet/IP headers → You understand network byte order
- You respond to ICMP ping → You have basic connectivity
- You complete a TCP three-way handshake → You understand TCP basics
- You serve an HTTP page over your stack → You’ve built a complete network stack
Project 13: Database Storage Engine (B-Trees and Transactions)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: C, C++, Go
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 4. The “Open Core” Infrastructure
- Difficulty: Level 4: Expert
- Knowledge Area: Database Internals / B-Trees / ACID
- Software or Tool: File I/O, memory mapping
- Main Book: “Designing Data-Intensive Applications” by Martin Kleppmann
What you’ll build: A persistent key-value storage engine with B-tree indexing, write-ahead logging, and ACID transactions—the core of databases like SQLite or LevelDB.
Why it teaches Rust’s strengths: Database engines require extreme care with memory and file I/O. Rust’s ownership model naturally expresses concepts like “this transaction owns these page locks.”
Core challenges you’ll face:
- Implementing B-tree node splitting and merging → maps to complex data structure manipulation
- Write-ahead logging for crash recovery → maps to file I/O and fsync
- Page-level locking for concurrency → maps to RwLock and ownership
- Memory-mapped I/O for performance → maps to unsafe and raw pointers
Key Concepts:
- B-tree algorithms: “Algorithms, Fourth Edition” Chapter 6 - Sedgewick
- Write-ahead logging: “Designing Data-Intensive Applications” Chapter 7 - Kleppmann
- ACID properties: “Designing Data-Intensive Applications” Chapter 7 - Kleppmann
- Memory-mapped files: “The Linux Programming Interface” Chapter 49 - Kerrisk
Difficulty: Expert Time estimate: 1 month+ Prerequisites: Projects 1-4 completed, understanding of data structures
Real world outcome:
$ cargo run --release
🗄️ RustKV - Embedded Storage Engine
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
> put user:1 {"name": "Alice", "age": 30}
OK (0.1ms)
> put user:2 {"name": "Bob", "age": 25}
OK (0.08ms)
> get user:1
{"name": "Alice", "age": 30} (0.02ms)
> BEGIN
Transaction started
> put user:1 {"name": "Alice", "age": 31}
OK (staged)
> ROLLBACK
Transaction rolled back
> get user:1
{"name": "Alice", "age": 30} # Unchanged!
Benchmark:
Random writes: 150,000 ops/sec
Random reads: 450,000 ops/sec
Range scans: 2.1M keys/sec
Crash recovery test: PASSED
(Killed process during write, all committed data recovered)
Implementation Hints:
B-tree node structure:
enum BTreeNode {
Internal {
keys: Vec<Key>,
children: Vec<PageId>,
},
Leaf {
keys: Vec<Key>,
values: Vec<Value>,
next_leaf: Option<PageId>, // For range scans
},
}
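Lookups descend from the root by binary-searching each node's sorted keys. The sketch below replaces `PageId` with direct `Box` pointers so it runs entirely in memory; a real engine would fetch child pages from disk at each step:

```rust
// In-memory stand-in for the on-disk node: Box<Node> instead of PageId.
enum Node {
    Internal { keys: Vec<i32>, children: Vec<Box<Node>> },
    Leaf { keys: Vec<i32>, values: Vec<String> },
}

fn get(node: &Node, key: i32) -> Option<&String> {
    match node {
        Node::Internal { keys, children } => {
            // partition_point picks the child subtree covering `key`
            let idx = keys.partition_point(|k| *k <= key);
            get(&children[idx], key)
        }
        Node::Leaf { keys, values } => {
            keys.binary_search(&key).ok().map(|i| &values[i])
        }
    }
}

fn main() {
    let tree = Node::Internal {
        keys: vec![10],
        children: vec![
            Box::new(Node::Leaf { keys: vec![1, 5], values: vec!["a".into(), "b".into()] }),
            Box::new(Node::Leaf { keys: vec![10, 20], values: vec!["c".into(), "d".into()] }),
        ],
    };
    assert_eq!(get(&tree, 5), Some(&"b".to_string()));
    assert_eq!(get(&tree, 20), Some(&"d".to_string()));
    assert_eq!(get(&tree, 7), None);
}
```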
Page layout on disk:
┌─────────────────────────────────────────────────────────┐
│ Page Header (16 bytes) │
│ - page_id: u64 │
│ - page_type: u8 (Internal/Leaf) │
│ - num_keys: u16 │
│ - checksum: u32 │
├─────────────────────────────────────────────────────────┤
│ Key-Value Data (4080 bytes) │
│ [key1][value1][key2][value2]... │
└─────────────────────────────────────────────────────────┘
Total: 4KB (typical page size)
Write-ahead log ensures durability:
struct WAL {
file: File,
}
impl WAL {
fn log(&mut self, entry: &WalEntry) -> io::Result<()> {
self.file.write_all(&entry.serialize())?;
self.file.sync_data()?; // Crucial! Data on disk before we return
Ok(())
}
}
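A WAL also needs a byte format it can replay after a crash. Here is a hedged sketch of one possible length-prefixed encoding for `WalEntry`—the field layout is illustrative, not a standard—shown round-tripping through an in-memory `Cursor` in place of the log file:

```rust
use std::io::{self, Cursor, Read, Write};

#[derive(Debug, PartialEq)]
struct WalEntry {
    key: Vec<u8>,
    value: Vec<u8>,
}

impl WalEntry {
    // Layout: [key_len: u32 LE][value_len: u32 LE][key bytes][value bytes]
    fn serialize(&self) -> Vec<u8> {
        let mut buf = Vec::new();
        buf.write_all(&(self.key.len() as u32).to_le_bytes()).unwrap();
        buf.write_all(&(self.value.len() as u32).to_le_bytes()).unwrap();
        buf.write_all(&self.key).unwrap();
        buf.write_all(&self.value).unwrap();
        buf
    }

    // Recovery reads entries back in order until the log is exhausted.
    fn deserialize(r: &mut impl Read) -> io::Result<WalEntry> {
        let mut len = [0u8; 4];
        r.read_exact(&mut len)?;
        let klen = u32::from_le_bytes(len) as usize;
        r.read_exact(&mut len)?;
        let vlen = u32::from_le_bytes(len) as usize;
        let mut key = vec![0; klen];
        r.read_exact(&mut key)?;
        let mut value = vec![0; vlen];
        r.read_exact(&mut value)?;
        Ok(WalEntry { key, value })
    }
}

fn main() {
    let entry = WalEntry { key: b"user:1".to_vec(), value: b"Alice".to_vec() };
    let bytes = entry.serialize();
    let decoded = WalEntry::deserialize(&mut Cursor::new(bytes)).unwrap();
    assert_eq!(entry, decoded);
}
```

A production format would add a per-entry checksum so that a torn write at the tail of the log can be detected and discarded during recovery.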
Questions to guide you:
- Why B-trees instead of binary trees for disk storage?
- What’s the difference between sync_data and sync_all?
- How does MVCC (Multi-Version Concurrency Control) work?
Learning milestones:
- You implement a working B-tree in memory → You understand B-tree algorithms
- You persist to disk with correct page layout → You understand storage formats
- You survive crashes with WAL → You understand durability
- You support concurrent transactions → You’ve built a real database engine
Project 14: Game Boy Emulator (Retro Hardware Simulation)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: C++, C
- Coolness Level: Level 5: Pure Magic
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 4: Expert
- Knowledge Area: Emulation / CPU Architecture / Graphics
- Software or Tool: SDL2 or minifb for graphics
- Main Book: “Game Boy Coding Adventure” by Maximilien Dagois + Pan Docs
What you’ll build: A complete Game Boy emulator that runs real cartridge ROMs—emulating the CPU, memory, graphics (PPU), and sound.
Why it teaches Rust’s strengths: Emulators require cycle-accurate simulation with tight performance. Rust’s zero-cost abstractions let you write clean, high-level code that compiles to fast machine code. Pattern matching makes instruction decoding beautiful.
Core challenges you’ll face:
- Implementing the LR35902 CPU instruction set → maps to pattern matching and state machines
- Cycle-accurate timing → maps to precise scheduling and accounting
- Graphics scanline rendering → maps to low-level graphics programming
- Memory banking and cartridge handling → maps to memory-mapped I/O
Key Concepts:
- Z80-like instruction set: “Game Boy Coding Adventure” - Dagois
- PPU rendering: Pan Docs (gbdev.io)
- Cycle timing: “Writing a Game Boy Emulator” - blog series by various authors
- Graphics with SDL2: “Rust Game Development” or minifb documentation
Difficulty: Expert Time estimate: 1-2 months Prerequisites: Projects 1-4 completed, understanding of binary/hex
Real world outcome:
┌────────────────────────────────────────────────┐
│ │
│ ┌──────────────────────────────────┐ │
│ │ │ │
│ │ ████ TETRIS ████ │ │
│ │ │ │
│ │ ▓▓ │ │
│ │ ▓▓ │ │
│ │ ▓▓▓▓▓▓ │ │
│ │ │ │
│ │ ████████ │ │
│ │ ████████ │ │
│ │ ████████████ │ │
│ │ │ │
│ │ SCORE: 1337 LEVEL: 5 │ │
│ │ │ │
│ └──────────────────────────────────┘ │
│ │
│ [A] [B] [SELECT] [START] │
│ [↑] │
│ [←] [→] │
│ [↓] │
│ │
└────────────────────────────────────────────────┘
$ cargo run --release tetris.gb
🎮 RustBoy Emulator
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
ROM: tetris.gb
Cartridge Type: ROM ONLY
Running at 59.73 FPS (4.19 MHz CPU)
Controls: Arrow keys, Z=A, X=B, Enter=Start
Implementation Hints:
CPU instruction decoding with pattern matching:
fn execute(&mut self, opcode: u8) -> u8 {
match opcode {
0x00 => { /* NOP */ 4 }
0x01 => { /* LD BC, d16 */
let value = self.read_word();
self.regs.set_bc(value);
12
}
0x06 => { /* LD B, d8 */
self.regs.b = self.read_byte();
8
}
0x80..=0x87 => { /* ADD A, r */
let reg = opcode & 0x07;
let value = self.read_reg(reg);
self.add(value);
4
}
// ... hundreds more
_ => panic!("Unknown opcode: {:#04x}", opcode),
}
}
Memory map:
0x0000 - 0x3FFF: ROM Bank 0 (16 KB)
0x4000 - 0x7FFF: ROM Bank 1-N (switchable, 16 KB)
0x8000 - 0x9FFF: Video RAM (8 KB)
0xA000 - 0xBFFF: External RAM (8 KB)
0xC000 - 0xDFFF: Work RAM (8 KB)
0xE000 - 0xFDFF: Echo RAM (mirrors C000-DDFF)
0xFE00 - 0xFE9F: Sprite Attribute Table (OAM)
0xFF00 - 0xFF7F: I/O Registers
0xFF80 - 0xFFFE: High RAM (127 bytes)
0xFFFF: Interrupt Enable Register
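The memory map translates directly into a match on the address, which is one reason emulators read so naturally in Rust. A trimmed sketch covering a few of the regions above (the remaining regions fall through to 0xFF here; a real MMU models all of them):

```rust
// Backing storage sized per the memory map; field names are illustrative.
struct Mmu {
    rom: [u8; 0x8000],   // ROM banks 0 and 1 (banking not modeled)
    vram: [u8; 0x2000],  // Video RAM
    wram: [u8; 0x2000],  // Work RAM
    hram: [u8; 0x7F],    // High RAM
}

impl Mmu {
    fn read(&self, addr: u16) -> u8 {
        match addr {
            0x0000..=0x7FFF => self.rom[addr as usize],
            0x8000..=0x9FFF => self.vram[(addr - 0x8000) as usize],
            0xC000..=0xDFFF => self.wram[(addr - 0xC000) as usize],
            0xE000..=0xFDFF => self.wram[(addr - 0xE000) as usize], // Echo RAM mirror
            0xFF80..=0xFFFE => self.hram[(addr - 0xFF80) as usize],
            _ => 0xFF, // regions not modeled in this sketch
        }
    }
}

fn main() {
    let mut mmu = Mmu {
        rom: [0; 0x8000],
        vram: [0; 0x2000],
        wram: [0; 0x2000],
        hram: [0; 0x7F],
    };
    mmu.wram[0x10] = 0xAB;
    assert_eq!(mmu.read(0xC010), 0xAB);
    assert_eq!(mmu.read(0xE010), 0xAB); // Echo RAM mirrors Work RAM
}
```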
Questions to guide you:
- How does the PPU render graphics in scanlines?
- What interrupts does the Game Boy use (VBlank, LCD STAT, Timer)?
- How do different Memory Bank Controllers (MBC1, MBC3, MBC5) work?
Learning milestones:
- Your CPU passes test ROMs → You’ve implemented the instruction set
- Tetris title screen appears → Your PPU renders tiles correctly
- Games are playable → You’ve nailed timing and input
- You pass timing test ROMs → You’re cycle-accurate
Project 15: Rust to WebAssembly Game (Cross-Platform Compilation)
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: AssemblyScript (TypeScript-like)
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 2: Intermediate
- Knowledge Area: WebAssembly / Game Development / Cross-Compilation
- Software or Tool: wasm-pack, wasm-bindgen, macroquad or bevy
- Main Book: “Programming WebAssembly with Rust” by Kevin Hoffman
What you’ll build: A browser-playable game (like Snake, Breakout, or a simple platformer) compiled from Rust to WebAssembly—demonstrating Rust’s ability to target the web with near-native performance.
Why it teaches Rust’s strengths: WebAssembly is a perfect target for Rust: sandboxed, portable, fast. This project shows that Rust isn’t just for systems programming—it can replace JavaScript for performance-critical web code.
Core challenges you’ll face:
- Cross-compiling to wasm32 target → maps to Rust’s portable compilation
- Interfacing with JavaScript APIs → maps to wasm-bindgen FFI
- Managing memory in a sandboxed environment → maps to no-std-like constraints
- Optimizing wasm bundle size → maps to link-time optimization
Key Concepts:
- WebAssembly basics: “Programming WebAssembly with Rust” - Kevin Hoffman
- wasm-bindgen: wasm-bindgen guide (rustwasm.github.io)
- Game loops: “Game Programming Patterns” - Robert Nystrom (gameprogrammingpatterns.com)
- Canvas 2D API: MDN Web Docs
Difficulty: Intermediate Time estimate: 1-2 weeks Prerequisites: Projects 1-3 completed, basic JavaScript/HTML
Real world outcome:
<!-- Your game running in the browser -->
<!DOCTYPE html>
<html>
<head>
<title>Rust Snake Game</title>
</head>
<body>
<canvas id="game" width="400" height="400"></canvas>
<script type="module">
import init, { Game } from './pkg/snake_game.js';
async function run() {
await init();
const game = new Game();
function gameLoop() {
game.update();
game.render();
requestAnimationFrame(gameLoop);
}
gameLoop();
}
run();
</script>
</body>
</html>
<!--
Game runs at 60 FPS
WASM bundle size: 47 KB (gzipped: 19 KB)
No JavaScript game logic—100% Rust!
-->
Implementation Hints:
Project structure:
snake-game/
├── Cargo.toml
├── src/
│ └── lib.rs # Game logic
├── www/
│ ├── index.html
│ └── style.css
└── pkg/ # Generated by wasm-pack
Using wasm-bindgen:
use wasm_bindgen::prelude::*;
#[wasm_bindgen]
pub struct Game {
snake: Vec<(i32, i32)>,
food: (i32, i32),
direction: Direction,
}
#[wasm_bindgen]
impl Game {
#[wasm_bindgen(constructor)]
pub fn new() -> Game {
Game {
snake: vec![(5, 5), (4, 5), (3, 5)],
food: (10, 10),
direction: Direction::Right,
}
}
pub fn update(&mut self) {
// Move snake, check collisions
}
pub fn render(&self) {
// Draw to canvas via web_sys
}
pub fn key_down(&mut self, key: &str) {
// Handle input
}
}
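The update() stub hides a useful point: the game logic is plain Rust with no wasm dependency, so you can unit-test it natively before ever building for wasm32. One possible step function (wrap-around movement, no food handling; names are illustrative):

```rust
#[derive(Clone, Copy)]
enum Direction { Up, Down, Left, Right }

// Advance the snake one cell, wrapping at the grid edges.
fn step(snake: &mut Vec<(i32, i32)>, dir: Direction, grid: i32) {
    let (hx, hy) = snake[0];
    let head = match dir {
        Direction::Up => (hx, (hy - 1).rem_euclid(grid)),
        Direction::Down => (hx, (hy + 1).rem_euclid(grid)),
        Direction::Left => ((hx - 1).rem_euclid(grid), hy),
        Direction::Right => ((hx + 1).rem_euclid(grid), hy),
    };
    snake.insert(0, head); // new head
    snake.pop();           // drop the tail (no food eaten in this sketch)
}

fn main() {
    let mut snake = vec![(5, 5), (4, 5), (3, 5)];
    step(&mut snake, Direction::Right, 20);
    assert_eq!(snake, vec![(6, 5), (5, 5), (4, 5)]);
}
```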
Build command:
wasm-pack build --target web
Questions to guide you:
- How does Rust pass strings to JavaScript? (Copying through linear memory)
- Why is WASM faster than JavaScript? (Ahead-of-time compilation, typed)
- How do you minimize bundle size? (wasm-opt, LTO, strip)
Learning milestones:
- You compile Rust to WASM and run it → You understand the toolchain
- You call Rust functions from JavaScript → You understand wasm-bindgen
- Your game runs at 60 FPS → You’ve optimized the game loop
- You deploy to the web → You’ve built a complete web application
Project Comparison Table
| Project | Difficulty | Time | Key Rust Concept | Coolness |
|---|---|---|---|---|
| 1. Ownership Visualizer | Beginner | Weekend-1 week | Ownership/Borrowing | ⭐⭐⭐ |
| 2. Memory Arena | Advanced | 1-2 weeks | Unsafe, Allocators | ⭐⭐⭐⭐ |
| 3. Concurrent Scraper | Intermediate | 1-2 weeks | Send/Sync, Async | ⭐⭐⭐ |
| 4. Zero-Copy Parser | Advanced | 2-3 weeks | Lifetimes | ⭐⭐⭐⭐ |
| 5. Type-State Builder | Intermediate | 1 week | Type System | ⭐⭐⭐ |
| 6. Lock-Free Queue | Expert | 3-4 weeks | Atomics | ⭐⭐⭐⭐⭐ |
| 7. Embedded LED | Advanced | 2-3 weeks | no_std, Hardware | ⭐⭐⭐⭐ |
| 8. Plugin System | Advanced | 2-3 weeks | Traits, FFI | ⭐⭐⭐ |
| 9. Smart Pointers | Advanced | 2-3 weeks | Rc/Arc/RefCell | ⭐⭐⭐⭐ |
| 10. Proc Macros | Expert | 2-3 weeks | Metaprogramming | ⭐⭐⭐⭐ |
| 11. Async Runtime | Master | 1 month+ | Futures, Executors | ⭐⭐⭐⭐⭐ |
| 12. TCP/IP Stack | Master | 1-2 months | Network Protocols | ⭐⭐⭐⭐⭐ |
| 13. Database Engine | Expert | 1 month+ | B-Trees, ACID | ⭐⭐⭐⭐ |
| 14. Game Boy Emulator | Expert | 1-2 months | CPU Emulation | ⭐⭐⭐⭐⭐ |
| 15. WASM Game | Intermediate | 1-2 weeks | WebAssembly | ⭐⭐⭐ |
Recommended Learning Path
Phase 1: Foundations (4-6 weeks)
- Ownership Visualizer - See the borrow checker’s mind
- Concurrent Scraper - Experience fearless concurrency
- Type-State Builder - Appreciate the type system
After Phase 1, you’ll understand why Rust is designed the way it is.
Phase 2: Deep Dive (6-8 weeks)
- Zero-Copy Parser - Master lifetimes
- Smart Pointers - Understand Rc/Arc/RefCell internals
- Memory Arena - Touch unsafe Rust safely
After Phase 2, you can write idiomatic, performant Rust code.
Phase 3: Systems Mastery (8-12 weeks)
- Embedded LED - Write no_std code
- Lock-Free Queue - Conquer atomics
- Database Engine - Build serious infrastructure
After Phase 3, you’re a systems programmer who happens to use Rust.
Phase 4: Wizardry (8-12 weeks)
- Async Runtime - Understand futures from first principles
- TCP/IP Stack - Master network protocols
- Game Boy Emulator - Simulate hardware accurately
After Phase 4, you understand computers at a fundamental level.
Final Project: Build a Distributed Key-Value Store with Raft Consensus
- File: LEARN_RUST_DEEP_DIVE.md
- Main Programming Language: Rust
- Alternative Programming Languages: Go (comparison with etcd)
- Coolness Level: Level 5: Pure Magic
- Business Potential: 5. The “Industry Disruptor”
- Difficulty: Level 5: Master
- Knowledge Area: Distributed Systems / Consensus / Replication
- Software or Tool: tokio, tonic (gRPC), sled (embedded DB)
- Main Book: “Designing Data-Intensive Applications” by Martin Kleppmann
What you’ll build: A distributed, fault-tolerant key-value store that survives node failures—implementing the Raft consensus algorithm for leader election and log replication. Think a mini etcd or Consul.
Why this is the ultimate Rust project: This combines everything:
- Ownership: Managing connection and state lifecycles
- Concurrency: Handling simultaneous client requests
- Async: Networking with tokio
- Safety: Correct distributed algorithms are hard; Rust helps
Core challenges you’ll face:
- Implementing Raft leader election → maps to state machines and timeouts
- Log replication across nodes → maps to network I/O and consistency
- Handling network partitions → maps to distributed systems correctness
- Building the client API → maps to RPC and serialization
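The leader-election challenge starts with a role state machine driven by timeouts. A hedged sketch of the follower-to-candidate transition—field names are illustrative and no networking is modeled:

```rust
use std::time::{Duration, Instant};

#[derive(Debug, PartialEq)]
enum Role {
    Follower,
    Candidate,
    Leader,
}

struct RaftNode {
    role: Role,
    current_term: u64,
    last_heartbeat: Instant,
    election_timeout: Duration, // randomized per node in real Raft
}

impl RaftNode {
    // Called on a timer tick: a follower that hasn't heard from a leader
    // within its election timeout starts an election.
    fn tick(&mut self, now: Instant) {
        if self.role == Role::Follower
            && now.duration_since(self.last_heartbeat) > self.election_timeout
        {
            self.role = Role::Candidate;
            self.current_term += 1; // new election, new term
            // ...then vote for self and request votes from all peers
        }
    }
}

fn main() {
    let start = Instant::now();
    let mut node = RaftNode {
        role: Role::Follower,
        current_term: 0,
        last_heartbeat: start,
        election_timeout: Duration::from_millis(150),
    };
    node.tick(start + Duration::from_millis(300));
    assert_eq!(node.role, Role::Candidate);
    assert_eq!(node.current_term, 1);
}
```

Randomizing the timeout per node is what makes split votes rare—it is worth implementing early, since a fixed timeout makes elections livelock in testing.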
What you’ll understand by the end:
- How distributed databases like CockroachDB work
- Why consensus is hard (and how Raft makes it tractable)
- How to build fault-tolerant systems
- Why Rust is excellent for infrastructure software
Real world outcome:
# Start a 3-node cluster
$ rustkv --node 1 --peers 127.0.0.1:5001,127.0.0.1:5002,127.0.0.1:5003
$ rustkv --node 2 --peers 127.0.0.1:5001,127.0.0.1:5002,127.0.0.1:5003
$ rustkv --node 3 --peers 127.0.0.1:5001,127.0.0.1:5002,127.0.0.1:5003
# Client operations
$ rustkv-cli put mykey "Hello, Distributed World!"
OK (committed to 3/3 nodes)
$ rustkv-cli get mykey
"Hello, Distributed World!"
# Kill the leader!
$ kill -9 $(pgrep -f "rustkv --node 1")
# Cluster elects new leader, operations continue
$ rustkv-cli get mykey
"Hello, Distributed World!" (served by node 2)
# Bring node 1 back
$ rustkv --node 1 --peers ...
[Node 1] Catching up... replicated 47 log entries from leader
# All nodes consistent again!
Key Concepts:
- Raft consensus: “In Search of an Understandable Consensus Algorithm” - Diego Ongaro (the Raft paper)
- Distributed systems patterns: “Designing Data-Intensive Applications” Chapters 8-9 - Kleppmann
- gRPC in Rust: tonic documentation
- State machine replication: “Distributed Systems” - van Steen & Tanenbaum
Learning milestones:
- Leader election works → You understand Raft’s election protocol
- Log replication maintains consistency → You understand distributed commit
- System survives minority node failures → You’ve achieved fault tolerance
- Clients see linearizable reads → You’ve built a production-quality system
Summary
| # | Project | Main Language |
|---|---|---|
| 1 | Ownership Visualizer | Rust |
| 2 | Memory Arena Allocator | Rust |
| 3 | Fearless Concurrent Web Scraper | Rust |
| 4 | Zero-Copy Parser | Rust |
| 5 | Type-State Builder Pattern | Rust |
| 6 | Lock-Free Concurrent Queue | Rust |
| 7 | Embedded LED Controller | Rust |
| 8 | Plugin System with Dynamic Loading | Rust |
| 9 | Custom Smart Pointer | Rust |
| 10 | Procedural Macro Library | Rust |
| 11 | Async Runtime from Scratch | Rust |
| 12 | TCP/IP Stack in Userspace | Rust |
| 13 | Database Storage Engine | Rust |
| 14 | Game Boy Emulator | Rust |
| 15 | Rust to WebAssembly Game | Rust |
| Final | Distributed KV Store with Raft | Rust |
The Rust Mindset
By completing these projects, you won’t just know Rust—you’ll think in Rust:
“If it compiles, the memory is safe.”
“The type system is my friend, not my enemy.”
“Explicit is better than implicit—lifetimes tell me the truth.”
“I can write systems code and sleep at night.”
Welcome to Rust. Let’s build something great.
“Rust is not just a language. It’s a promise that the compiler has your back.”