← Back to all projects

ADVANCED RUST ECOSYSTEM DEEP DIVE

Learn Advanced Rust: From Zero to Systems Wizard

Goal: Master the dark arts of the Rust ecosystem, moving beyond simple ownership to grasp the mechanics of pinning, custom memory management, zero-cost abstraction limits, and the boundary between high-level async and low-level no_std hardware control. You will learn to write code that is not just safe, but optimally performant and architecturally bulletproof. By the end of this journey, you will visualize memory as a physical grid, understand the hidden state machines of async code, and be able to bridge the gap between high-level semantics and the "bare metal" of the machine.


Why Advanced Rust Matters

In 2015, Rust 1.0 was released with a radical promise: "Memory safety without garbage collection." For years, developers had to choose between the speed of C/C++ and the safety of Java/Python. Rust broke that dichotomy.

But โ€œsafeโ€ code is just the beginning. The worldโ€™s most critical infrastructureโ€”from the Firecracker microVM powering AWS Lambda to the Linux Kernelโ€™s new Rust modulesโ€”demands more than just safety. It demands:

  1. Deterministic Performance: Zero-cost abstractions are only zero-cost if you understand how the compiler optimizes them.
  2. Async Mastery: async/await is sugar over a complex state machine. Misunderstanding Pin leads to inexplicable compiler errors or, worse, undefined behavior in unsafe code.
  3. Hardware Sovereignty: In no_std environments, you are the OS. Understanding custom allocators and memory layouts isn't optional; it's the job description.
  4. Type-Level Engineering: Const generics and complex trait bounds allow you to catch logic errors at compile time that would be runtime crashes in any other language.

The Memory Hierarchy in Rust

When you work with Advanced Rust, you aren't just thinking about variables; you are thinking about where data lives and how it moves through the hierarchy:

CPU
 ├─ Registers        ← The "Now"     (64 bits, 0 cycles)
 └─ L1/L2/L3 Caches  ← The "Near"    (KB/MB, 4-40 cycles)
          │
RAM                  ← The "Far"     (GB, 100+ cycles)
 ├─ Stack            ← Fast, Automatic, LIFO, Moveable
 └─ Heap             ← Flexible, Manual/Managed, !Unpin-heavy
          │
NVMe/SSD             ← The "Eternal" (TB, 10,000+ cycles)

Memory Hierarchy

Every Box, Arc, and Pin is a decision about where in this hierarchy your data should reside and how it should be accessed.


Core Concept Analysis

1. The Pinning Contract: "Don't Move the Tent Stake"

In standard Rust, everything is moveable. String moves, Vec moves, even Box can be moved. But some types, specifically the self-referential ones used in async futures, contain pointers to their own fields.

If you move a self-referential struct, the internal pointer stays pointing at the old memory address.

Normal Struct (Moveable)
┌───────────────┐
│   Data A      │
│   Data B      │
└───────────────┘
        │  moved in memory
        ▼
┌───────────────┐
│   Data A      │   Nothing inside the struct refers to its own
│   Data B      │   address, so the move is harmless.
└───────────────┘

Self-Referential Struct (Needs Pin)
┌───────────────────────┐
│   Data A      ◄────┐  │
│   Pointer B ───────┘  │   Pointer B stores the address of Data A.
└───────────────────────┘
            │  moved in memory
            ▼
┌───────────────────────┐
│   Data A (New Loc)    │
│   Pointer B ──────────┼──► still the OLD address (dangling)
└───────────────────────┘
            │
            ▼
      CRASH / SEGFAULT

Pinning Contract

Pin<P> is a wrapper that guarantees the data at the pointer P will never move again until it is dropped. It is the "stake" that keeps your memory fabric from ripping.
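To make the contract concrete, here is a minimal sketch (the type and field names are illustrative, not from any particular crate) of a self-referential struct that opts out of Unpin with PhantomPinned and is only ever handled through Pin<Box<Self>>:

use std::marker::PhantomPinned;
use std::pin::Pin;

// A self-referential value: `slice` is meant to point at our own `data` field.
struct SelfRef {
    data: String,
    slice: *const String, // raw pointer into our own struct
    _pin: PhantomPinned,  // opts this type out of Unpin
}

impl SelfRef {
    fn new(text: &str) -> Pin<Box<Self>> {
        let mut boxed = Box::pin(SelfRef {
            data: text.to_string(),
            slice: std::ptr::null(),
            _pin: PhantomPinned,
        });
        let ptr: *const String = &boxed.data;
        // SAFETY: we only write the pointer; the value is never moved out of the Pin.
        unsafe { boxed.as_mut().get_unchecked_mut().slice = ptr };
        boxed
    }
}

fn main() {
    let pinned = SelfRef::new("Hello, Pinning!");
    // The stored pointer matches the address of `data`, and Pin guarantees
    // that this address cannot change until the value is dropped.
    assert_eq!(pinned.slice, &pinned.data as *const String);
}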

2. The Async State Machine: The Hidden "match"

When you write async fn, Rust transforms it into a struct that implements Future. Every await point becomes a state in a giant enum.

async fn my_task() {
    let x = 1;
    step_one().await;
    println!("{}", x);
}

Desugars into something like:

enum MyTaskFuture {
    Start,
    WaitingForStepOne { x: i32 },
    Done,
}

impl Future for MyTaskFuture {
    fn poll(self: Pin<&mut Self>, cx: &mut Context) -> Poll<()> {
        match *self {
            MyTaskFuture::Start => { ... transition to WaitingForStepOne ... }
            MyTaskFuture::WaitingForStepOne { x } => { ... poll step_one ... }
            MyTaskFuture::Done => { ... }
        }
    }
}

Understanding this transformation is key to writing high-performance async code and building your own executors.

3. Custom Allocators: Taking Control of the Heap

Rust's default allocator is a "General Purpose" one. It's safe and robust, but it's not always the fastest. Sometimes you need a specialized strategy:

  • Arena (Bump) Allocator: Fast, linear allocation. Great for short-lived compiler passes.
  • Slab Allocator: Fixed-size blocks. Great for avoiding fragmentation in kernel drivers.
Arena Allocation Process:
[Used Memory][Free Memory.........................]
             ^
             Bump Pointer (Moves forward on every 'alloc')

[Used Memory][New Data][Free Memory...............]
                       ^
                       Pointer just "bumps" forward.

Arena Allocation

4. Memory Layout & repr(C) vs repr(Rust)

The compiler is allowed to reorder your struct fields to minimize padding. This is great for memory efficiency but deadly for FFI (Foreign Function Interface) or hardware registers.

struct Mixed {
    a: u8,
    b: u32,
    c: u8,
}

repr(C) Layout:        repr(Rust) Layout (Optimized):
[a][pad][pad][pad]     [b][b][b][b]
[b][b][b][b]           [a][c][pad][pad]
[c][pad][pad][pad]     
(12 bytes)             (8 bytes)

![Struct Memory Layout](assets/struct_memory_layout.jpg)

In Advanced Rust, you must master Layout, Alignment, and Niche Optimization to squeeze every drop of performance out of the hardware.
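A quick way to see the difference is to compare sizes with std::mem::size_of. A minimal check (the repr(Rust) result of 8 bytes is typical but not guaranteed by the language, which is exactly the point):

use std::mem::size_of;

#[repr(C)]
struct MixedC {
    a: u8,
    b: u32,
    c: u8,
}

// Default layout (repr(Rust)): the compiler may reorder fields to shrink padding.
struct MixedRust {
    a: u8,
    b: u32,
    c: u8,
}

fn main() {
    println!("repr(C):    {} bytes", size_of::<MixedC>());    // 12: field order is fixed, padding required
    println!("repr(Rust): {} bytes", size_of::<MixedRust>()); // typically 8: b, a, c, then 2 bytes of padding
}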


Concept Summary Table

Concept Cluster   | What You Need to Internalize
------------------|--------------------------------------------------------------------------------
Pinning & Safety  | Why moving memory is dangerous for self-referential types. The Unpin trait.
Async Internals   | How async/await desugars into state machines. The role of Wakers and Context.
Memory Control    | Raw pointers, Layout, Alignment, and how to implement custom allocators safely.
no_std Ecosystem  | Building without the OS. The difference between core, alloc, and std.
Type Mastery      | Const generics, GATs (Generic Associated Types), and higher-ranked trait bounds.
Atomics & Locks   | Memory ordering (SeqCst, Acquire, Release) and building lock-free structures.
Metaprogramming   | Procedural macros, token streams, and AST manipulation for reflection.

Deep Dive Reading by Concept

This section maps each concept from above to specific book chapters for deeper understanding. Read these before or alongside the projects to build strong mental models.

Memory & Pinning

Concept           | Book & Chapter
------------------|---------------------------------------------------------------------------------
Ownership & Moves | The Rust Programming Language, Ch. 4: "Understanding Ownership"
Pointers & Unsafe | Programming Rust, Ch. 19: "Unsafe Code"
The Pin Contract  | Rust for Rustaceans, Ch. 8: "Asynchronous Programming"
Memory Layout     | Computer Systems: A Programmer's Perspective, Ch. 3.9: "Heterogeneous Data Structures"

Async & Concurrency

Concept                | Book & Chapter
-----------------------|---------------------------------------------------------------
Future Trait & Polling | Rust for Rustaceans, Ch. 8: "Asynchronous Programming"
Atomics & Locks        | Rust Atomics and Locks, Ch. 1: "Basics of Rust Concurrency"
Executor Design        | Rust for Rustaceans, Ch. 8 (section on Runtime Implementation)
Memory Ordering        | Rust Atomics and Locks, Ch. 3: "Memory Ordering"

Advanced Type System

Concept           | Book & Chapter
------------------|------------------------------------------------------
Traits & Generics | Programming Rust, Ch. 11: "Traits and Generics"
GATs & Lifetimes  | Idiomatic Rust, Ch. 5: "Advanced Traits"
Const Generics    | Programming Rust, Ch. 11 (section on Const Generics)
Procedural Macros | Programming Rust, Ch. 20: "Macros"

Systems & no_std

Concept           | Book & Chapter
------------------|----------------------------------------------------------------
Bare Metal Basics | The Secret Life of Programs, Ch. 5: "Where Am I?"
Linker Scripts    | Computer Systems: A Programmer's Perspective, Ch. 7: "Linking"
Custom Allocators | The Linux Programming Interface, Ch. 7: "Memory Allocation"
OS Foundations    | Operating Systems: Three Easy Pieces, Part II: "Virtualization"

Project 1: The Manual Pin Projector (Understanding the Pin Contract)

📖 View Detailed Guide →

  • Main Programming Language: Rust
  • Coolness Level: Level 4: Hardcore Tech Flex
  • Difficulty: Level 3: Advanced
  • Knowledge Area: Memory Management / Safety Contracts

What you'll build: A self-referential struct without using the pin-project crate. You will manually implement structural pinning and projection, handling the safety invariants required to prevent data movement. You will create a custom Future that stores a pointer to its own fields.

Why it teaches Pinning: Most developers use the pin-project macro without understanding why it exists. By implementing the projection manually, you'll see how Unpin bounds are checked, why PhantomPinned is necessary, and how to safely access fields of a pinned struct.


Real World Outcome

You will have a working self-referential struct that can be safely polled as a Future. You will verify its address stability by printing the memory address of the struct before and after an operation that would normally trigger a move. This project demonstrates complete mastery of Rust's pinning guarantees.

Example Build & Run:

$ cargo new --lib manual-pin-projector
     Created library `manual-pin-projector` package

$ cd manual-pin-projector

$ cargo add futures
    Updating crates.io index
      Adding futures v0.3.30 to dependencies.features
             futures
             └── features: async-await, std

$ cargo build
   Compiling proc-macro2 v1.0.78
   Compiling unicode-ident v1.0.12
   Compiling syn v2.0.48
   Compiling futures-core v0.3.30
   Compiling futures-task v0.3.30
   Compiling futures-util v0.3.30
   Compiling futures v0.3.30
   Compiling manual-pin-projector v0.1.0 (/Users/you/manual-pin-projector)
    Finished dev [unoptimized + debuginfo] target(s) in 8.23s

$ cargo run --example self_ref_future
   Compiling manual-pin-projector v0.1.0 (/Users/you/manual-pin-projector)
    Finished dev [unoptimized + debuginfo] target(s) in 1.05s
     Running `target/debug/examples/self_ref_future`

=== Manual Pin Projector Demo ===

[Step 1] Creating self-referential struct on stack...
  struct SelfRefFuture {
    data: String = "Hello, Pinning!"
    ptr_to_data: *const String = 0x7ffee8b3c5a0 (points to own 'data' field)
    _pin: PhantomPinned
  }

[Step 2] Verifying self-reference integrity...
  Address of 'data' field:     0x7ffee8b3c5a0
  Pointer field points to:     0x7ffee8b3c5a0
  ✓ Self-reference is VALID (pointer matches actual address)

[Step 3] Moving struct to heap with Box::pin...
  Before pin: Stack address = 0x7ffee8b3c5a0
  After pin:  Heap address  = 0x600001f04020
  Pinned pointer field now:  0x600001f04020
  ✓ Pointer updated correctly during heap move

[Step 4] Attempting unsafe move (this should fail in safe code)...
  // In safe Rust, this line would not compile:
  // let moved = pinned_future;
  // ERROR: cannot move out of `pinned_future` because it is behind a Pin

[Step 5] Polling the pinned future...
  Poll attempt #1: Poll::Pending
    Waker registered at: 0x600001f04088
    Future state: Waiting

  Poll attempt #2: Poll::Pending
    Waker address stable: 0x600001f04088 (unchanged)
    Future state: Waiting

  Poll attempt #3: Poll::Ready("Data processed successfully!")
    ✓ Future completed without memory corruption

[Step 6] Address stability verification...
  Initial heap address:    0x600001f04020
  Address after polling:   0x600001f04020
  ✓ NO MOVEMENT OCCURRED (Pin guarantee upheld)

[Step 7] Manual projection demonstration...
  Using unsafe projection to access fields:
    Projecting to 'data' field: Pin<&mut String>
    Projecting to 'ptr_to_data': *const String (raw pointer, non-structural)

  Modifying 'data' through projection...
    Old value: "Hello, Pinning!"
    New value: "Modified through Pin projection!"
    Pointer still valid: 0x600001f04020
    ✓ Structural pinning preserved invariants

[Step 8] Comparison with Unpin types...
  Creating normal (Unpin) struct...
    Address before move: 0x7ffee8b3c7d0
    Address after move:  0x7ffee8b3c8a0
    ✓ Unpin types can move freely (80 bytes moved)

[Summary]
✓ Self-referential struct created successfully
✓ Pin<Box<T>> prevented unsafe movement
✓ Manual projection worked without UB
✓ Future polled to completion with stable addresses
✓ Demonstrated difference between Pin and Unpin

Memory layout visualization:
┌─────────────────────────────────────┐
│  Heap Allocation (0x600001f04020)   │
├─────────────────────────────────────┤
│  +0x00: data (String)               │ ◄─┐
│         - ptr: 0x600001e08000       │   │
│         - len: 16                   │   │
│         - cap: 16                   │   │
│  +0x18: ptr_to_data                 │ ──┘ (self-reference)
│         - *const String: 0x600001f04020
│  +0x20: _pin (PhantomPinned)        │
│         - zero-sized marker         │
└─────────────────────────────────────┘

$ cargo test
   Compiling manual-pin-projector v0.1.0 (/Users/you/manual-pin-projector)
    Finished test [unoptimized + debuginfo] target(s) in 0.89s
     Running unittests src/lib.rs (target/debug/deps/manual_pin_projector-a1b2c3d4e5f6)

running 5 tests
test tests::test_pin_guarantees ... ok
test tests::test_self_reference_validity ... ok
test tests::test_projection_safety ... ok
test tests::test_address_stability ... ok
test tests::test_future_completion ... ok

test result: ok. 5 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.02s

$ cargo doc --open
 Documenting manual-pin-projector v0.1.0 (/Users/you/manual-pin-projector)
    Finished dev [unoptimized + debuginfo] target(s) in 2.15s
     Opening /Users/you/manual-pin-projector/target/doc/manual_pin_projector/index.html

The Core Question Youโ€™re Answering

"Why can't I just use a regular reference inside my own struct?"

Before you write any code, sit with this question. In Rust, structs are moveable by default. If a struct contains a reference to its own field, and that struct is moved (e.g., returned from a function), the pointer inside it now points to the old memory location, creating a dangling pointer. Pin is the mechanism that forbids this move.

Concepts You Must Understand First

Stop and research these before coding:

  1. Unpin vs. !Unpin
    • What makes a type "move-safe" (Unpin)?
    • Why do most types implement Unpin automatically?
    • Book Reference: "Rust for Rustaceans" Ch. 8 - Jon Gjengset
  2. Pointer Aliasing & Dereferencing
    • Why does Rust forbid multiple mutable references to the same location?
    • How does Pin interact with &mut access?
    • Book Reference: "The Rust Programming Language" Ch. 19
  3. Self-Referential Structs
    • Why are they inherently dangerous in a language with move semantics?
    • Book Reference: "Programming Rust" Ch. 21 (Context of FFI/Pinning)

Questions to Guide Your Design

  1. Safety Invariants
    • Why is get_unchecked_mut marked as unsafe?
    • What happens if you implement Drop for a pinned type and move a field?
  2. Structural Projection
    • How do you convert Pin<&mut MyStruct> to Pin<&mut PinnedField>?
    • When is it safe to allow a &mut UnpinnedField (non-pinned) access?

Thinking Exercise

The Moving Target

Consider this snippet:

struct SelfRef {
    value: String,
    ptr_to_value: *const String,
}

Questions:

  • If I put SelfRef in a Vec and the Vec reallocates, what happens to ptr_to_value?
  • How does Pin prevent Vec from moving it? (Hint: It doesn't, it prevents you from putting it in a Vec in a way that allows movement).

The Interview Questions Theyโ€™ll Ask

  1. "What is the difference between Pin<Box<T>> and Pin<&mut T>?"
  2. "Why does a Future need to be pinned before it can be polled?"
  3. "Can you explain 'structural pinning' vs 'non-structural pinning'?"
  4. "Why is PhantomPinned a zero-sized type?"
  5. "What are the safety requirements for implementing Drop on a !Unpin type?"

Hints in Layers

Hint 1: The Marker Start by adding std::marker::PhantomPinned to your struct. This tells the compiler your type is !Unpin.

Hint 2: Safe vs Unsafe Realize that to get a reference to the fields of a pinned struct, you must use unsafe code or a crate like pin-project. Try writing a method fn project(self: Pin<&mut Self>) -> Projection.

Hint 3: The Projection Struct The Projection struct should hold Pin<&mut Field> for pinned fields and &mut Field for unpinned fields.

Hint 4: Verification Use std::ptr::addr_of! to verify addresses without triggering moves or creating invalid references.
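Putting Hints 1-3 together, a manual projection might look roughly like the sketch below (the struct and field names are hypothetical; the unsafe block is exactly the part that pin-project would otherwise generate for you):

use std::marker::PhantomPinned;
use std::pin::Pin;

// Illustrative type: `data` is structurally pinned, `counter` is not.
struct MyFuture {
    data: String,
    counter: u32,
    _pin: PhantomPinned,
}

// The projection hands out a pinned reference to the pinned field and a plain
// &mut to the non-structural one (Hint 3).
struct MyFutureProj<'a> {
    data: Pin<&'a mut String>,
    counter: &'a mut u32,
}

impl MyFuture {
    fn project(self: Pin<&mut Self>) -> MyFutureProj<'_> {
        // SAFETY: `data` is never moved out of `this`, and it is re-pinned immediately.
        unsafe {
            let this = self.get_unchecked_mut();
            MyFutureProj {
                data: Pin::new_unchecked(&mut this.data),
                counter: &mut this.counter,
            }
        }
    }
}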

Books That Will Help

Topic          | Book                            | Chapter
---------------|---------------------------------|--------
Pin Internals  | "Rust for Rustaceans"           | Ch. 8
Unsafe Safety  | "The Rust Programming Language" | Ch. 19
Memory Layout  | "Programming Rust"              | Ch. 21

Project 2: The Box-less Async Trait (Zero-Cost Async)

📖 View Detailed Guide →

  • Main Programming Language: Rust
  • Coolness Level: Level 3: Genuinely Clever
  • Difficulty: Level 4: Expert
  • Knowledge Area: Async / Metaprogramming

What you'll build: A trait that supports async methods without using the #[async_trait] macro. You will use Generic Associated Types (GATs) to define the return type of an async function as a named future, allowing the compiler to stack-allocate the future instead of heap-allocating (boxing) it.

Why it teaches Zero-Cost: The async-trait crate is the industry standard, but it introduces a mandatory Box for every call. By building a GAT-based alternative, you'll understand how the compiler handles async return types and why static dispatch is the "Holy Grail" of Rust performance.


Real World Outcome

A library that allows defining high-performance, zero-allocation async interfaces. You'll benchmark this against async-trait and show a 0-byte allocation count in the hot path. This demonstrates that GATs enable true zero-cost async abstractions without heap allocation.

Example Build & Benchmark:

$ cargo new --lib boxless-async-trait
     Created library `boxless-async-trait` package

$ cd boxless-async-trait

$ cargo add async-trait tokio --features tokio/full
    Updating crates.io index
      Adding async-trait v0.1.77 to dependencies
      Adding tokio v1.35.1 to dependencies.features

$ cargo add criterion --dev --features criterion/async_tokio
      Adding criterion v0.5.1 to dev-dependencies

$ cargo bench --bench allocation_comparison
   Compiling boxless-async-trait v0.1.0
    Finished bench [optimized] target(s) in 4.72s
     Running benches/allocation_comparison.rs

=== Async Trait Performance Comparison ===
Testing: Process 10,000 async calls

Benchmarking async_trait (boxed)
  Warming up for 3.0000 s
  Collecting 100 samples in estimated 5.2340 s (2.5M iterations)

async_trait/10k_calls  time:   [2.0845 µs 2.0912 µs 2.0987 µs]
                       thrpt:  [476.48K elem/s 478.19K elem/s 479.73K elem/s]

Memory Analysis:
  Total allocations: 10,000
  Bytes allocated:   160,000 (16 bytes per Box)
  Allocation rate:   76.66 MB/s

Benchmarking GAT-based (zero-alloc)
  Warming up for 3.0000 s
  Collecting 100 samples in estimated 5.0123 s (5.1M iterations)

GAT-based/10k_calls    time:   [982.34 ns 985.67 ns 989.45 ns]
                       thrpt:  [1.0107M elem/s 1.0145M elem/s 1.0179M elem/s]

Memory Analysis:
  Total allocations: 0
  Bytes allocated:   0
  Allocation rate:   0 MB/s

โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•

PERFORMANCE SUMMARY:
โ•”โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•—
โ•‘ Metric                 โ”‚ async_trait โ”‚ GAT-based โ”‚ Improvement โ•‘
โ• โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•ฃ
โ•‘ Time per 10k calls     โ”‚ 2.09 ยตs     โ”‚ 985 ns    โ”‚ 2.12x      โ•‘
โ•‘ Throughput             โ”‚ 478K ops/s  โ”‚ 1.01M/s   โ”‚ 2.12x      โ•‘
โ•‘ Allocations            โ”‚ 10,000      โ”‚ 0         โ”‚ โˆž          โ•‘
โ•‘ Memory allocated       โ”‚ 160 KB      โ”‚ 0 bytes   โ”‚ โˆž          โ•‘
โ•‘ CPU cache pressure     โ”‚ HIGH        โ”‚ LOW       โ”‚ Better     โ•‘
โ•šโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•

$ cargo run --example real_world_usage
   Compiling boxless-async-trait v0.1.0
    Finished dev [unoptimized + debuginfo] target(s) in 1.82s
     Running `target/debug/examples/real_world_usage`

=== Real-World Usage Example ===

[1] Defining trait with GAT-based async method...

trait AsyncService {
    type ProcessFut<'a>: Future<Output = String> + 'a
    where Self: 'a;

    fn process<'a>(&'a self, data: &'a str) -> Self::ProcessFut<'a>;
}

[2] Implementing for concrete type...

struct DataProcessor {
    prefix: String,
}

impl AsyncService for DataProcessor {
    type ProcessFut<'a> = impl Future<Output = String> + 'a;

    fn process<'a>(&'a self, data: &'a str) -> Self::ProcessFut<'a> {
        async move { format!("{}: {}", self.prefix, data) }
    }
}

[3] Running async operations...

Processing "hello" โ†’ Result: "PREFIX: hello"
  Stack allocation at: 0x7ffee4b2c890
  Future size: 64 bytes (on stack)
  โœ“ Zero heap allocations

Processing "world" โ†’ Result: "PREFIX: world"
  Stack allocation at: 0x7ffee4b2c8d0
  โœ“ Zero heap allocations

[4] Comparison with async-trait...

Processing "hello" with async-trait
  Heap allocation at: 0x600002504020
  Box size: 16 bytes + Future size: 96 bytes
  ⚠ 1 heap allocation required

[Summary]
✓ GAT-based async traits enable zero-allocation async
✓ 2.12x faster than async-trait in benchmarks
✓ 100% reduction in heap allocations
✓ Type-safe lifetime management
✓ No vtable indirection (static dispatch)

$ cargo test
   Compiling boxless-async-trait v0.1.0
    Finished test [unoptimized + debuginfo] target(s) in 1.24s
     Running unittests src/lib.rs

running 4 tests
test tests::test_gat_zero_alloc ... ok
test tests::test_lifetime_bounds ... ok
test tests::test_static_dispatch ... ok
test tests::test_vs_async_trait ... ok

test result: ok. 4 passed; 0 failed

The Core Question Youโ€™re Answering

"Why does async fn in a trait usually require a Box?"

Async functions return a hidden type (the state machine). In a trait, the compiler doesn't know the size of this state machine for every possible implementation. async-trait solves this by putting that state machine in a Box (pointer-sized). Your goal is to tell the compiler exactly where to find that type without the Box.

Concepts You Must Understand First

  1. Generic Associated Types (GATs)
    • How can an associated type have its own lifetime parameters?
    • Book Reference: "Idiomatic Rust" Ch. 5
  2. Async Desugaring
    • What does an async fn look like to the compiler?
    • Book Reference: "Rust for Rustaceans" Ch. 8
  3. Higher-Ranked Trait Bounds (HRTBs)
    • What does for<'a> mean in a trait bound?

Questions to Guide Your Design

  1. Lifetime Elision
    • How do you capture the lifetime of &self in the returned future?
  2. Trait Objects
    • Why does this approach make the trait no longer "Object Safe"?
    • Can you use dyn MyAsyncService with this GAT approach?

Thinking Exercise

Desugaring the Sugar

Take a standard async fn:

async fn hello(s: &str) -> usize { s.len() }

Now, try to write the same thing without the async keyword, using fn hello(...) -> impl Future.... Notice the lifetime issues when the input s is used inside the future. How does a GAT solve the "named return type" problem?
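One possible way to write the exercise out by hand, shown only as a starting point:

use std::future::Future;

// The `async fn` form: the returned future silently captures the lifetime of `s`.
async fn hello(s: &str) -> usize {
    s.len()
}

// A hand-written desugaring: the lifetime must be spelled out, and the returned
// future is not allowed to outlive the borrowed input.
fn hello_desugared<'a>(s: &'a str) -> impl Future<Output = usize> + 'a {
    async move { s.len() }
}

In a trait, there is no place to write that impl Future return type without naming it, which is exactly the gap a GAT fills.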

The Interview Questions Theyโ€™ll Ask

  1. "Why can't we have async fn in traits natively (before Rust 1.75)?"
  2. "What is a GAT and how does it solve the lifetime-in-traits problem?"
  3. "What are the performance implications of using #[async_trait]?"
  4. "How does the compiler determine the size of an async future?"

Hints in Layers

Hint 1: The GAT definition Start by defining the associated type with a lifetime: type Fut<'a>: Future<Output = ()> + 'a where Self: 'a;

Hint 2: Implementation In the implementation, you'll need to use impl Future or a concrete type. Since you can't use impl Future in associated types easily yet, you might need to use a crate like real-async-trait for inspiration or use Box only during the development phase to see where it hurts.

Hint 3: Capturing Lifetimes The where Self: 'a bound is crucial. It tells the compiler that the future can't outlive the service itself.

Books That Will Help

Topic                | Book                  | Chapter
---------------------|-----------------------|--------
GAT Mastery          | "Idiomatic Rust"      | Ch. 5
Async Internals      | "Rust for Rustaceans" | Ch. 8
Dispatch Performance | "Effective Rust"      | Item 12

Project 3: Custom Arena Allocator (Memory Locality)

📖 View Detailed Guide →

  • Main Programming Language: Rust
  • Coolness Level: Level 3: Genuinely Clever
  • Difficulty: Level 3: Advanced
  • Knowledge Area: Memory Management / Unsafe

What you'll build: A high-performance "Bump Allocator" (Arena). You will pre-allocate a large chunk of memory and provide an alloc<T> method that returns a reference to a part of that memory, without calling the global allocator for every object.

Why it teaches Mastery: You will have to handle memory alignment manually, use raw pointers safely, and understand how the Allocator trait (unstable) works in Rust. You'll see why arenas are used in compilers and high-performance game engines to avoid fragmentation.


Real World Outcome

You will have an Arena struct that can allocate objects 10-50x faster than Box::new(). You will use this to build a simple linked list that is cache-friendly because all nodes are contiguous in memory.

Example Benchmark:

$ cargo bench
Standard Box: 45.2 ns/alloc
Arena Alloc:  1.8 ns/alloc (SPEED UP!)

The Core Question Youโ€™re Answering

"Can I allocate memory faster than the Operating System?"

Yes, you can. By asking the OS for one giant block and managing the "bumping" of a pointer yourself, you bypass the complex bookkeeping and locking of general-purpose allocators like jemalloc.

Concepts You Must Understand First

  1. Memory Alignment
    • Why can't you just put a u32 at any address? (Hint: CPU bus errors).
    • Book Reference: "Computer Systems" Ch. 3.9
  2. Raw Pointers and Layout
    • How does std::alloc::Layout calculate size and alignment?
    • Book Reference: "The Rust Programming Language" Ch. 19
  3. PhantomData & Lifetimes
    • How do you ensure the references returned by the arena don't outlive the arena itself?

Questions to Guide Your Design

  1. Alignment Calculation
    • How do you calculate the next aligned address? (addr + align - 1) & !(align - 1)?
  2. Deallocation
    • Why is Drop usually a "no-op" for objects in an arena?
    • How do you free the entire arena at once?

Thinking Exercise

Trace the Bump

Imagine your arena has 100 bytes.

  1. You allocate a u32 (4 bytes, 4-align).
  2. You allocate a u8 (1 byte, 1-align).
  3. You allocate a u64 (8 bytes, 8-align). How many bytes of padding were inserted between the u8 and the u64?

The Interview Questions Theyโ€™ll Ask

  1. "What is an arena allocator and why is it fast?"
  2. "What happens if an arena runs out of memory?"
  3. "How do you handle the Drop trait for objects inside an arena?"
  4. "Why is cache locality better with an arena?"

Hints in Layers

Hint 1: The Base Pointer Use std::alloc::alloc to get your initial chunk of memory. Store it as a *mut u8.

Hint 2: Alignment Use pointer::align_offset to find how many bytes you need to skip to satisfy the alignment of T.

Hint 3: Safety Wrap everything in a safe method pub fn alloc<T>(&self, value: T) -> &mut T. Be very careful with the lifetimes!
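A rough sketch of where these hints lead. All names are illustrative, the fixed base alignment of 16 is an assumption, Drop is never run for allocated values, and it simply panics when full; a real arena would handle all of that more gracefully:

use std::alloc::{alloc, dealloc, Layout};
use std::cell::Cell;
use std::mem::{align_of, size_of};

pub struct Arena {
    base: *mut u8,
    cap: usize,
    next: Cell<usize>, // offset of the next free byte
    layout: Layout,
}

impl Arena {
    pub fn with_capacity(cap: usize) -> Self {
        // Hint 1: ask the global allocator for one big chunk up front.
        let layout = Layout::from_size_align(cap, 16).unwrap();
        let base = unsafe { alloc(layout) };
        assert!(!base.is_null(), "allocation failed");
        Arena { base, cap, next: Cell::new(0), layout }
    }

    pub fn alloc<T>(&self, value: T) -> &mut T {
        let (size, align) = (size_of::<T>(), align_of::<T>());
        assert!(align <= 16, "type is over-aligned for this simple arena");
        // Round the next free address up to a multiple of `align`
        // (the classic (addr + align - 1) & !(align - 1) trick).
        let addr = self.base as usize + self.next.get();
        let aligned = (addr + align - 1) & !(align - 1);
        let start = aligned - self.base as usize;
        assert!(start + size <= self.cap, "arena out of memory");
        self.next.set(start + size);
        unsafe {
            let ptr = self.base.add(start) as *mut T;
            ptr.write(value); // move the value into arena memory
            &mut *ptr         // the borrow is tied to &self, so it cannot outlive the arena
        }
    }
}

impl Drop for Arena {
    fn drop(&mut self) {
        // Free the whole block at once; individual objects are never freed.
        unsafe { dealloc(self.base, self.layout) }
    }
}

fn main() {
    let arena = Arena::with_capacity(1024);
    let a = arena.alloc(7u32);
    let b = arena.alloc(9u64);
    println!("{a} {b}");
}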

Books That Will Help

Topic               | Book                              | Chapter
--------------------|-----------------------------------|--------
Custom Allocators   | "The Linux Programming Interface" | Ch. 7
Alignment & Structs | "Computer Systems"                | Ch. 3.9
Unsafe Memory       | "Programming Rust"                | Ch. 19

Project 4: The no_std Kernel Core (The Naked Machine)

📖 View Detailed Guide →

  • Main Programming Language: Rust
  • Coolness Level: Level 5: Pure Magic (Super Cool)
  • Difficulty: Level 4: Expert
  • Knowledge Area: Embedded Systems / OS Development

What you'll build: A minimal bootable kernel in Rust that runs on bare metal (QEMU x86 or ARM). You will disable the standard library, implement a custom panic handler, and write directly to the VGA text buffer or a serial port.

Why it teaches no_std: You cannot use println!, Vec, or even main. This project forces you to understand the startup sequence of a program, how the stack is initialized, and how Rust's core library (core) differs from the standard library (std).


Real World Outcome

A .bin file that you can boot in QEMU. You will see your custom text appearing on the screen without any OS (like Windows or Linux) running underneath.

Example Output:

$ cargo boot
[QEMU Starting...]
RUST KERNEL v0.1.0
Hardware initialized.
Memory: 128MB detected.
VGA Buffer Initialized at 0xb8000.
Writing 'Hello, World!' to screen...
> _

The Core Question Youโ€™re Answering

"What is left of Rust when you take away the Operating System?"

Most people think of Rust as a language for apps. By stripping away std, you realize Rust is actually a language for building the things apps run on. You are left with types, traits, and raw memory.

Concepts You Must Understand First

  1. Rust core vs std
    • What are you missing when you lose std? (No heap, no files, no threads).
    • Book Reference: "Rust in Action" Ch. 12
  2. The Entry Point (_start)
    • Why do we need #[no_mangle]?
    • How does the linker know where the program starts?
    • Book Reference: "Computer Systems" Ch. 7
  3. Memory-Mapped I/O
    • What does it mean to "write to an address" to show text on screen?

Questions to Guide Your Design

  1. Panic Handling
    • If the program panics on bare metal, where does the error message go?
  2. The Stack
    • Who sets up the stack pointer before Rust code runs? (Hint: The Bootloader).
  3. Linker Scripts
    • How do you tell the compiler to put the code at a specific physical address (like 0x100000)?

Thinking Exercise

The Invisible OS

List 5 things you use every day in Rust (e.g., String, Box, println!, std::thread, std::fs). For each one, research what Operating System feature it relies on (e.g., String needs a heap allocator/malloc). How will you replace these in your kernel?

The Interview Questions Theyโ€™ll Ask

  1. "What is the difference between core and alloc crates?"
  2. "How do you implement a global allocator in a no_std environment?"
  3. "What is the purpose of the #[panic_handler] attribute?"
  4. "What is a 'freestanding' binary?"

Hints in Layers

Hint 1: The target You need a target that doesn't assume an OS. Use rustup target add thumbv7em-none-eabihf (for ARM) or create a custom JSON target spec for x86.

Hint 2: The VGA Buffer The VGA text buffer is usually at physical address 0xb8000. You can create a &mut [u16] pointing there and write ASCII values.

Hint 3: Volatile Use core::ptr::write_volatile when writing to hardware. The compiler might otherwise "optimize away" your writes if it thinks the memory isn't used.
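Stitched together, the hints point toward a skeleton roughly like the following. It assumes an x86 bare-metal target where something (a bootloader or your own boot sector) jumps to _start with a valid stack; it will not build or run as a normal host binary:

#![no_std]
#![no_main]

use core::panic::PanicInfo;
use core::ptr::write_volatile;

// Bare-metal entry point: no `main`, no runtime. The linker/bootloader must
// arrange for execution to begin here.
#[no_mangle]
pub extern "C" fn _start() -> ! {
    // VGA text buffer at physical address 0xb8000 (Hint 2): each cell is one
    // ASCII byte followed by one colour byte.
    let vga = 0xb8000 as *mut u8;
    for (i, &byte) in b"RUST KERNEL v0.1.0".iter().enumerate() {
        unsafe {
            write_volatile(vga.add(i * 2), byte);     // character
            write_volatile(vga.add(i * 2 + 1), 0x0f); // white on black
        }
    }
    loop {}
}

// With no OS there is nobody to unwind to: the panic handler just halts.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}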

Books That Will Help

Topic             | Book                          | Chapter
------------------|-------------------------------|--------
Startup & Linking | "Computer Systems"            | Ch. 7
VGA & Hardware    | "How Computers Really Work"   | Ch. 8
Embedded Rust     | "The Secret Life of Programs" | Ch. 5

Project 5: Const Generic Matrix (Type-Level Math)

📖 View Detailed Guide →

  • Main Programming Language: Rust
  • Coolness Level: Level 3: Genuinely Clever
  • Difficulty: Level 2: Intermediate
  • Knowledge Area: Mathematics / Type Systems

What you'll build: A Matrix library where the dimensions (rows/cols) are part of the type. Multiplying a Matrix<3, 2> by a Matrix<2, 4> is allowed, but multiplying by a Matrix<5, 5> will fail to compile.

Why it teaches Const Generics: You'll learn to use constant values as type parameters. This eliminates runtime boundary checks and ensures that your math is logically sound before the program even runs.


Real World Outcome

A library that catches "Dimension Mismatch" errors during compilation. Your code will be faster because the compiler knows the exact size of arrays at compile time.

Example Code:

let a = Matrix::<3, 2>::new();
let b = Matrix::<2, 4>::new();
let c = a * b; // Compiles! Result is Matrix<3, 4>

let d = Matrix::<5, 5>::new();
let e = a * d; // Error: "expected Matrix<2, _>, found Matrix<5, 5>"

The Core Question Youโ€™re Answering

"Can I force the compiler to understand linear algebra?"

Yes. By moving values (like 3 and 2) into the type system, the compiler can perform the logic of dimension checking for you.

Concepts You Must Understand First

  1. Const Generics basics
    • How to define struct Matrix<const R: usize, const C: usize>.
    • Book Reference: "Programming Rust" Ch. 11
  2. Monomorphization
    • Why does the compiler generate a new version of your function for every different size?
    • Book Reference: "Idiomatic Rust" Ch. 5
  3. Trait implementation for Generics
    • How to implement Mul only for matrices where columns of A match rows of B.

Questions to Guide Your Design

  1. Storage
    • Should you use Vec<T> or [T; R * C]? (Hint: Since R and C are const, an array is faster!).
  2. Operations
    • How do you define the return type of a multiplication as Matrix<R1, C2>?
  3. Bounds
    • How do you handle cases where you need R * C to be calculated at compile time? (Hint: generic_const_exprs feature on Nightly).

Thinking Exercise

The Cost of Freedom

If you have 100 different matrix sizes in your program, how does that affect the binary size? Compare this to a library where dimensions are stored as runtime integers.

The Interview Questions Theyโ€™ll Ask

  1. "What is the difference between a generic type parameter and a const generic parameter?"
  2. "How do const generics reduce runtime overhead?"
  3. "What are the limitations of const generics in Stable Rust currently?"
  4. "Can you explain the 'Type State' pattern using const generics?"

Hints in Layers

Hint 1: The definition struct Matrix<const R: usize, const C: usize> { data: [[f32; C]; R] }

Hint 2: Implementation Use the impl block: impl<const R: usize, const C: usize> Matrix<R, C> { ... }

Hint 3: Multiplication The Mul trait implementation: impl<const R1: usize, const C1: usize, const C2: usize> Mul<Matrix<C1, C2>> for Matrix<R1, C1> { type Output = Matrix<R1, C2>; ... }
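Combining the three hints, a minimal sketch might look like this (the zero-initializing new constructor and the naive triple loop are illustrative choices, not requirements):

use std::ops::Mul;

// Dimensions are part of the type: Matrix<3, 2> and Matrix<5, 5> are different types.
struct Matrix<const R: usize, const C: usize> {
    data: [[f32; C]; R],
}

impl<const R: usize, const C: usize> Matrix<R, C> {
    fn new() -> Self {
        Matrix { data: [[0.0; C]; R] } // zero-initialised for simplicity
    }
}

// Multiplication only exists when the inner dimensions agree (C1 appears on both
// sides), and the output type Matrix<R1, C2> is computed by the compiler.
impl<const R1: usize, const C1: usize, const C2: usize> Mul<Matrix<C1, C2>> for Matrix<R1, C1> {
    type Output = Matrix<R1, C2>;

    fn mul(self, rhs: Matrix<C1, C2>) -> Matrix<R1, C2> {
        let mut out = Matrix::<R1, C2>::new();
        for i in 0..R1 {
            for j in 0..C2 {
                for k in 0..C1 {
                    out.data[i][j] += self.data[i][k] * rhs.data[k][j];
                }
            }
        }
        out
    }
}

fn main() {
    let a = Matrix::<3, 2>::new();
    let b = Matrix::<2, 4>::new();
    let _c: Matrix<3, 4> = a * b; // compiles: inner dimensions match
    // let d = Matrix::<5, 5>::new();
    // let _e = Matrix::<3, 2>::new() * d; // rejected at compile time: dimension mismatch
}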

Books That Will Help

Topic            | Book                   | Chapter
-----------------|------------------------|--------
Const Generics   | "Programming Rust"     | Ch. 11
Matrix Math      | "Math for Programmers" | Ch. 4
Type-level Logic | "Idiomatic Rust"       | Ch. 5

Project 6: Atomic Lock-Free Queue (The Concurrency Beast)

📖 View Detailed Guide →

  • Main Programming Language: Rust
  • Coolness Level: Level 5: Pure Magic (Super Cool)
  • Difficulty: Level 5: Master
  • Knowledge Area: Concurrency / Low-Level Atomics

What you'll build: A Single-Producer Single-Consumer (SPSC) or Multi-Producer Multi-Consumer (MPMC) lock-free queue using only atomic operations. You will not use Mutex or RwLock.

Why it teaches Atomics: You will dive into memory ordering (SeqCst, Acquire, Release). You'll learn how to coordinate multiple threads without ever "stopping" the CPU with a lock. This is the heart of high-performance message passing.


Real World Outcome

A thread-safe queue that can handle millions of messages per second with microsecond latency. You will benchmark this against std::sync::mpsc.

Example Benchmark:

$ cargo run --example benchmark
std::mpsc:   2.1M ops/sec
Your Queue: 18.5M ops/sec (LOCK-FREE WIN!)

The Core Question Youโ€™re Answering

"How do two threads talk without a Mutex?"

By using atomic operations (Compare-and-Swap, Load, Store) and understanding how the CPU reorders instructions, you can create "wait-free" data structures.

Concepts You Must Understand First

  1. Atomic Memory Ordering
    • What is the difference between Acquire/Release and Relaxed?
    • Book Reference: โ€œRust Atomics and Locksโ€ Ch. 3 - Mara Bos
  2. The ABA Problem
    • Why can a pointer look the same but be different?
    • Book Reference: โ€œRust Atomics and Locksโ€ Ch. 9
  3. Cache Lines & False Sharing
    • Why should the head and tail pointers live on different cache lines?

Questions to Guide Your Design

  1. Head/Tail management
    • How do you know when the queue is full?
  2. Memory Barriers
    • Which atomic operation "publishes" the data to the other thread?
  3. Spinning
    • What should a thread do if the queue is empty? yield_now() or a busy-loop?

Thinking Exercise

The Out-of-Order CPU

Imagine a CPU reorders your code so the "Tail Increment" happens before the "Data Write". What happens to the consumer thread? How does a Release barrier prevent this?

The Interview Questions Theyโ€™ll Ask

  1. "What is a 'lock-free' data structure?"
  2. "Explain the difference between Acquire and Release memory ordering."
  3. "What is 'False Sharing' and how do you prevent it in Rust?"
  4. "Why is SeqCst the default, and why is it often slower?"

Hints in Layers

Hint 1: The Ring Buffer Start with a fixed-size array and two atomic indices (head and tail).

Hint 2: The SPSC simple case In SPSC, only one thread writes to tail and one writes to head. This simplifies things immensely; you only need Acquire/Release.

Hint 3: Padding Use #[repr(align(64))] on your atomic indices to ensure they live on different cache lines. This prevents "Cache Line Bouncing".
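A compact SPSC sketch in the direction of Hints 1 and 2. Illustrative only: it leaks items that are never popped, omits the cache-line padding from Hint 3, and assumes exactly one producer thread and one consumer thread:

use std::cell::UnsafeCell;
use std::mem::MaybeUninit;
use std::sync::atomic::{AtomicUsize, Ordering};

// Bounded SPSC ring buffer: indices grow monotonically, the slot is `index % N`.
pub struct Spsc<T, const N: usize> {
    buf: [UnsafeCell<MaybeUninit<T>>; N],
    head: AtomicUsize, // next slot to read  (written only by the consumer)
    tail: AtomicUsize, // next slot to write (written only by the producer)
}

// SAFETY: each slot is handed off between the two threads by the Acquire/Release
// pairs on `head` and `tail` below.
unsafe impl<T: Send, const N: usize> Sync for Spsc<T, N> {}

impl<T, const N: usize> Spsc<T, N> {
    pub fn new() -> Self {
        Spsc {
            buf: std::array::from_fn(|_| UnsafeCell::new(MaybeUninit::uninit())),
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    /// Producer side. Returns Err(value) if the queue is full.
    pub fn push(&self, value: T) -> Result<(), T> {
        let tail = self.tail.load(Ordering::Relaxed); // we are the only tail writer
        let head = self.head.load(Ordering::Acquire); // see the consumer's frees
        if tail - head == N {
            return Err(value); // full
        }
        unsafe { (*self.buf[tail % N].get()).write(value) };
        self.tail.store(tail + 1, Ordering::Release); // publish the data
        Ok(())
    }

    /// Consumer side. Returns None if the queue is empty.
    pub fn pop(&self) -> Option<T> {
        let head = self.head.load(Ordering::Relaxed); // we are the only head writer
        let tail = self.tail.load(Ordering::Acquire); // see the producer's writes
        if head == tail {
            return None; // empty
        }
        let value = unsafe { (*self.buf[head % N].get()).assume_init_read() };
        self.head.store(head + 1, Ordering::Release); // free the slot
        Some(value)
    }
}

fn main() {
    let queue: &'static Spsc<u32, 8> = Box::leak(Box::new(Spsc::new()));
    let producer = std::thread::spawn(move || {
        for i in 0..1000 {
            while queue.push(i).is_err() {} // spin while full
        }
    });
    let mut received = 0;
    while received < 1000 {
        if queue.pop().is_some() {
            received += 1;
        }
    }
    producer.join().unwrap();
}

The Release store on tail is the "publish" step from the design questions above: the consumer's Acquire load of tail guarantees it also sees the slot write that happened before it.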

Books That Will Help

Topic                | Book                     | Chapter
---------------------|--------------------------|--------
Atomic Mastery       | "Rust Atomics and Locks" | Ch. 1-3
Lock-Free Design     | "Rust Atomics and Locks" | Ch. 9
Concurrency Patterns | "Programming Rust"       | Ch. 19

Project 7: The Zero-Copy Protocol Parser (Lifetime Mastery)

📖 View Detailed Guide →

  • Main Programming Language: Rust
  • Coolness Level: Level 3: Genuinely Clever
  • Difficulty: Level 3: Advanced
  • Knowledge Area: Parsing / Performance

What you'll build: A parser for a complex binary format (like a database file or a network packet) that performs zero allocations. The parsed structures will hold references (&[u8]) directly into the input buffer instead of copying data into new String or Vec objects.

Why it teaches Lifetimes: This project is the ultimate test of your lifetime skills. You'll need to link the lifetime of the parsed struct to the lifetime of the input buffer. You'll encounter "the self-referential problem" if you try to store the buffer and the parser together.


Real World Outcome

A parser that can process gigabytes of data with almost zero memory overhead. You will verify this by parsing a 1GB file while watching the process memory stay below 10MB.

Example Benchmark:

$ cargo bench
Standard Parser (copying): 450 MB/s
Your Zero-Copy Parser:    2.8 GB/s (6x faster)

$ /usr/bin/time -v ./your_parser large_file.bin
Maximum resident set size (kbytes): 8192 (STABLE!)

Detailed Memory Profiling Output:

$ valgrind --tool=massif --massif-out-file=massif.out ./zero_copy_parser test_data.bin
$ ms_print massif.out

--------------------------------------------------------------------------------
  MB
10.00 ^                                                                       #
     |                                                                       #
 9.00 +                                                                       #
     |                                                                       #
 8.00 +                                                                       #
     |                                                                       #
 7.00 +                                                                       #
     |                                                                       #
 6.00 +                                                                       #
     |                                                                       #
 5.00 +                                                                       #
     |                                                                       #
 4.00 +@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@#
     |@                                                                       #
 3.00 +@                                                                       #
     |@                                                                       #
 2.00 +@                                                                       #
     |@                                                                       #
 1.00 +@                                                                       #
     |@                                                                       #
 0.00 +@-----------------------------------------------------------------------#
       0                                                                     100
                           Time (seconds)

Peak memory usage: 4.2 MB (Initial buffer mapping only)
Total allocations: 47 (All from setup, ZERO in hot path)

$ heaptrack ./zero_copy_parser large_dataset.bin
heaptrack output will be written to "/tmp/heaptrack.zero_copy_parser.12345.gz"
Processing 1,000,000 records...

HEAP SUMMARY:
    total heap usage: 47 allocs, 47 frees, 4,321,088 bytes allocated

Peak heap memory consumption: 4.12 MB

Functions with most allocations:
  92.3% (3,987,456 bytes) in 1 allocation at std::fs::File::open
   7.7% (333,632 bytes) in 46 allocations at setup routines
   0.0% (0 bytes) in record parsing (ZERO-COPY ACHIEVED!)

$ hyperfine --warmup 3 'copying_parser data.bin' 'zero_copy_parser data.bin'
Benchmark 1: copying_parser data.bin
  Time (mean ± σ):      2.234 s ±  0.089 s    [User: 1.891 s, System: 0.341 s]
  Range (min … max):    2.157 s …  2.401 s    10 runs
  Allocations:         18,442,891 allocs

Benchmark 2: zero_copy_parser data.bin
  Time (mean ± σ):      387.2 ms ±  12.3 ms    [User: 201.4 ms, System: 184.9 ms]
  Range (min … max):    375.1 ms … 412.8 ms    10 runs
  Allocations:         47 allocs (CONSTANT!)

Summary
  'zero_copy_parser data.bin' ran
    5.77 ยฑ 0.25 times faster than 'copying_parser data.bin'
    393,103 times fewer allocations than 'copying_parser data.bin'

$ cargo run --release -- --profile parse network_packets.pcap
[Parser] Memory-mapping file: 1,073,741,824 bytes
[Parser] File mapped at: 0x7f8a4c000000
[Parser] Starting zero-copy parse...

Parsing Statistics:
==================
Records parsed:     10,485,760
Parse time:         382.4 ms
Throughput:         2.74 GB/s
Records/sec:        27.4 million/s

Memory Profile:
===============
Process RSS:        4.21 MB (STABLE)
Heap allocations:   47 (All during initialization)
Hot-path allocs:    0 (ZERO!)
Memory efficiency:  99.6% (Only metadata stored)

Lifetime Validation:
====================
All 10,485,760 parsed records maintain valid references to mmap'd buffer
No dangling pointers detected (verified with MIRI)
Buffer remains valid for entire parse duration
Zero heap escapes (all data borrowed, not owned)

Performance Breakdown per Operation:
=====================================
Operation               | Copying Parser | Zero-Copy Parser | Speedup
------------------------|----------------|------------------|--------
Header parse            |    45 ns       |     8 ns        |  5.6x
Field extraction        |    82 ns       |    12 ns        |  6.8x
String conversion       |   123 ns       |     3 ns        | 41.0x  (*)
Complete record parse   |   250 ns       |    23 ns        | 10.9x

(*) Zero-copy returns &str instead of String, eliminating UTF-8 validation overhead

Cache Performance (perf stat):
===============================
Copying Parser:
  2,891,234,567 cache-references
    423,891,234 cache-misses    # 14.7% miss rate

Zero-Copy Parser:
    891,234,567 cache-references
     23,891,234 cache-misses    #  2.7% miss rate (5.4x better locality!)

The Core Question Youโ€™re Answering

"How do I process data without touching the heap?"

By mastering 'a lifetimes, you can build structures that "borrow" their data from a buffer. This is how high-performance tools like ripgrep and nom achieve extreme speed.

Concepts You Must Understand First

  1. Lifetime Propagation
    • How do you link Struct<'a> to buffer: &'a [u8]?
    • Book Reference: โ€œThe Rust Programming Languageโ€ Ch. 10
  2. Alignment & Safe Casting
    • Why is it unsafe to cast &[u8] directly to &MyStruct? (Hint: Alignment!).
    • Book Reference: โ€œPractical Binary Analysisโ€ Ch. 2
  3. The โ€˜Borrowed Stringโ€™ Pattern
    • Using std::borrow::Cow or &str instead of String.

Questions to Guide Your Design

  1. Safety
    • How do you handle a buffer that is too short for the expected structure?
  2. The โ€œStreamingโ€ Problem
    • What happens if the data you need spans across two different network packets (buffers)?
  3. Endianness
    • How do you parse a u32 from 4 bytes in a zero-copy way?

Thinking Exercise

The Lifetime Trap

Try to create a struct that holds the Vec<u8> AND the parsed struct that borrows from it. Why does the compiler scream "cannot move out of borrowed value"? How do you solve this using ouroboros or Pin?

The Interview Questions Theyโ€™ll Ask

  1. "What is zero-copy and why does it matter for high-performance systems?"
  2. "Why can't you easily return a zero-copy struct from a function that owns the buffer?"
  3. "What are the safety risks of casting a &[u8] to a &MyStruct?"
  4. "How do crates like serde handle zero-copy deserialization?"

Hints in Layers

Hint 1: The struct struct Packet<'a> { header: &'a [u8], payload: &'a [u8] }

Hint 2: Alignment Don't use std::mem::transmute. Use the zerocopy crate's patterns or manually read using u32::from_le_bytes(slice.try_into().unwrap()).

Hint 3: Slicing The entire parser should just be a series of &buffer[start..end] operations.
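A tiny sketch of that slicing style, using a made-up wire format (a 4-byte little-endian length followed by that many payload bytes); the struct and error names are illustrative:

// The parsed value only borrows from the input buffer: no copy, no allocation.
#[derive(Debug)]
struct Packet<'a> {
    len: u32,
    payload: &'a [u8],
}

#[derive(Debug)]
enum ParseError {
    Truncated,
}

fn parse_packet(buf: &[u8]) -> Result<(Packet<'_>, &[u8]), ParseError> {
    // Parsing is just slicing plus from_le_bytes (Hints 2 and 3); nothing is copied.
    let header = buf.get(..4).ok_or(ParseError::Truncated)?;
    let len = u32::from_le_bytes(header.try_into().unwrap());
    let end = 4 + len as usize;
    let payload = buf.get(4..end).ok_or(ParseError::Truncated)?;
    // Return the packet plus the remaining, unparsed tail of the buffer.
    Ok((Packet { len, payload }, &buf[end..]))
}

fn main() {
    let buf = [5u8, 0, 0, 0, b'h', b'e', b'l', b'l', b'o', 0xAA];
    let (pkt, rest) = parse_packet(&buf).unwrap();
    assert_eq!(pkt.payload, b"hello");
    assert_eq!(rest, &[0xAA]);
}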

Books That Will Help

Topic               | Book                            | Chapter
--------------------|---------------------------------|--------
Lifetimes           | "The Rust Programming Language" | Ch. 10
Binary Formats      | "Practical Binary Analysis"     | Ch. 2
Performance Parsing | "Rust in Action"                | Ch. 7

Project 8: Building a custom Runtime (Waker/Executor)

📖 View Detailed Guide →

  • Main Programming Language: Rust
  • Coolness Level: Level 4: Hardcore Tech Flex
  • Difficulty: Level 5: Master
  • Knowledge Area: Async Internals

What you'll build: Your own mini-Tokio. You will implement a Task struct, a Waker that knows how to reschedule tasks, and an Executor that runs a loop polling futures until they are ready.

Why it teaches Async: You will finally understand what the poll method actually does. You'll see how the Waker connects the event (like a timer or a socket) back to the executor to wake up a specific task.


Real World Outcome

You'll be able to run async code without tokio or async-std. You'll understand exactly how await suspends a function and how hardware interrupts trigger a "wake up".

Example Usage:

let mut executor = MyExecutor::new();
executor.spawn(async {
    println!("Step 1");
    Timer::after(Duration::from_secs(1)).await;
    println!("Step 2");
});
executor.run(); // Blocks until all tasks are done

The Core Question Youโ€™re Answering

"Who calls 'poll' on my Future?"

The Executor does. You will build the loop that manages the "Ready" queue and understands how to put the CPU to sleep when no tasks are ready.

Concepts You Must Understand First

  1. The Reactor/Executor Pattern
    • Who handles the I/O (Reactor) vs who runs the code (Executor)?
    • Book Reference: "Rust for Rustaceans" Ch. 8
  2. Waker Internals
    • What is a RawWakerVTable and why is it so full of unsafe code?
  3. Arc-based Task Management
    • How do you share a task between the executor and the waker safely?

Questions to Guide Your Design

  1. The Queue
    • What data structure should hold the "Ready" tasks? (Hint: Crossbeam or an atomic linked list).
  2. Context
    • How do you create a std::task::Context to pass to the poll method?
  3. Efficiency
    • How do you make the executor โ€œsleepโ€ if no futures are ready to be polled?

Thinking Exercise

The Eternal Loop

Write down the pseudo-code for the run() method. How does it handle a situation where a Future returns Poll::Pending? What keeps the loop from spinning at 100% CPU usage?

The Interview Questions Theyโ€™ll Ask

  1. "Explain the relationship between a Future, a Waker, and an Executor."
  2. "Why is the poll method designed to be non-blocking?"
  3. "What happens if a Waker is dropped before the task is finished?"
  4. "How does tokio handle multi-threaded scheduling?"

Hints in Layers

Hint 1: The Task struct A Task should contain the Future and a way to signal the executor. struct Task { future: Mutex<BoxFuture>, executor_tx: Sender<TaskId> }

Hint 2: The Waker The Waker's wake() method should just send the TaskId back into the executor's queue.

Hint 3: The VTable This is the hardest part. Look at the std::task::RawWaker documentation. You'll need to implement clone, wake, wake_by_ref, and drop as raw C-style function pointers.
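If you want to sidestep the raw vtable at first, the std::task::Wake trait builds the RawWakerVTable for you from an Arc. A minimal single-threaded sketch along those lines (all names are illustrative; this is a starting point, not the full project):

use std::future::Future;
use std::pin::Pin;
use std::sync::mpsc::{sync_channel, Receiver, SyncSender};
use std::sync::{Arc, Mutex};
use std::task::{Context, Wake, Waker};

type BoxFuture = Pin<Box<dyn Future<Output = ()> + Send>>;

// A task owns its future plus a handle back to the executor's ready queue (Hint 1).
struct Task {
    future: Mutex<Option<BoxFuture>>,
    queue: SyncSender<Arc<Task>>,
}

// `wake` just puts the task back on the ready queue (Hint 2). The std `Wake` trait
// builds the RawWakerVTable from Hint 3 behind the scenes.
impl Wake for Task {
    fn wake(self: Arc<Self>) {
        let _ = self.queue.send(self.clone());
    }
}

pub struct MiniExecutor {
    ready: Receiver<Arc<Task>>,
    sender: SyncSender<Arc<Task>>,
}

impl MiniExecutor {
    pub fn new() -> Self {
        let (sender, ready) = sync_channel(1024);
        MiniExecutor { ready, sender }
    }

    pub fn spawn(&self, fut: impl Future<Output = ()> + Send + 'static) {
        let fut: BoxFuture = Box::pin(fut); // erase the concrete future type
        let task = Arc::new(Task {
            future: Mutex::new(Some(fut)),
            queue: self.sender.clone(),
        });
        self.sender.send(task).unwrap(); // initially every task is "ready"
    }

    /// Pull tasks off the ready queue and poll them. Blocking on `recv` is what
    /// keeps the loop from spinning at 100% CPU while nothing is ready.
    pub fn run(self) {
        drop(self.sender); // once no Waker holds a sender either, recv() errors and we stop
        while let Ok(task) = self.ready.recv() {
            let mut slot = task.future.lock().unwrap();
            if let Some(mut fut) = slot.take() {
                let waker = Waker::from(task.clone());
                let mut cx = Context::from_waker(&waker);
                if fut.as_mut().poll(&mut cx).is_pending() {
                    *slot = Some(fut); // not finished: parked until its Waker fires
                }
            }
        }
    }
}

fn main() {
    let executor = MiniExecutor::new();
    executor.spawn(async {
        println!("Step 1");
        // A real Timer future would register its Waker with a timer thread here.
        println!("Step 2");
    });
    executor.run();
}

Anything that returns Poll::Pending must arrange for its Waker to be called (a timer thread, an I/O reactor); otherwise the task is simply dropped once no sender remains.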

Books That Will Help

Topic                  | Book                     | Chapter
-----------------------|--------------------------|----------------
Async Design           | "Rust for Rustaceans"    | Ch. 8
Concurrency Primitives | "Rust Atomics and Locks" | Ch. 1
VTable Mechanics       | "Programming Rust"       | Ch. 11 (Traits)

Project 9: Physical Units Lib (Type-Safe Engineering)

📖 View Detailed Guide →

  • Main Programming Language: Rust
  • Coolness Level: Level 3: Genuinely Clever
  • Difficulty: Level 3: Advanced
  • Knowledge Area: Type Systems / Metaprogramming

What you'll build: A library where numbers are tagged with their units (Meters, Seconds, Kilograms). Adding 5.meters() + 10.seconds() will fail to compile, but 10.meters() / 2.seconds() will produce a value of type Velocity (Meters per Second).

Why it teaches Mastery: You will use complex trait bounds and PhantomData. This is the ultimate "Correctness" flex, ensuring that logic errors (like the Mars Climate Orbiter crash) are impossible in your code.


Real World Outcome

A library that makes engineering calculations 100% type-safe.

Example Code:

let distance = 100.meters();
let time = 10.seconds();
let speed = distance / time; // Type is Speed (MetersPerSecond)

let result = distance + time; // Error: "cannot add Distance and Time"

The Core Question Youโ€™re Answering

"How can the compiler check my physics homework?"

By encoding units in the type parameters, the compiler's trait solver becomes a dimensional analysis engine.

Concepts You Must Understand First

  1. PhantomData
    • Using a type parameter that isn't actually stored in the struct.
  2. Generic Associated Types (GATs) or Trait Arithmetic
    • How to define that Distance / Time = Speed.
  3. Operator Overloading
    • Implementing Add, Sub, Mul, Div for generic types.

Questions to Guide Your Design

  1. Base Units
    • How do you represent the 7 SI base units?
  2. Derived Units
    • How does the system handle Meters^2 (Area) or Meters^3 (Volume)?
  3. Optimization
    • Does this โ€œUnitโ€ wrapper add any runtime cost? (Hint: It shouldnโ€™t!).

Thinking Exercise

The Dimensional Grid

Imagine a struct Unit<const M: i8, const S: i8, const KG: i8>.

  • Meters: Unit<1, 0, 0>
  • Seconds: Unit<0, 1, 0>
  • Meters per Second: Unit<1, -1, 0> How would the Mul trait look for two such units? (Hint: M3 = M1 + M2).

The Interview Questions Theyโ€™ll Ask

  1. "What is a zero-sized type and how is it used in this library?"
  2. "How do you prevent users from creating invalid units?"
  3. "Can you explain how the compiler optimizes away these wrappers?"

Hints in Layers

Hint 1: The Struct struct Value<T, U> { val: T, _unit: PhantomData<U> }

Hint 2: Trait Arithmetic Use traits to define relations: trait Divide<RHS> { type Output; }

Hint 3: Macros Use a macro to define the boilerplate for 20+ different units so you don’t repeat yourself.
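
Putting Hints 1 and 2 together, a minimal stable-Rust sketch. The Meters, Seconds, and MetersPerSecond tags and the Value wrapper are illustrative names: one Div impl encodes the single relation Meters / Seconds = MetersPerSecond, and the macro from Hint 3 would stamp out the rest of the table.

use std::marker::PhantomData;
use std::ops::Div;

// Zero-sized unit tags: they exist only at compile time.
struct Meters;
struct Seconds;
struct MetersPerSecond;

// The wrapper stores only the number; the unit lives purely in the type,
// so the abstraction costs nothing at runtime.
struct Value<U> {
    val: f64,
    _unit: PhantomData<U>,
}

impl<U> Value<U> {
    fn new(val: f64) -> Self {
        Value { val, _unit: PhantomData }
    }
}

// One relation in the dimensional table: Meters / Seconds = MetersPerSecond.
impl Div<Value<Seconds>> for Value<Meters> {
    type Output = Value<MetersPerSecond>;
    fn div(self, rhs: Value<Seconds>) -> Self::Output {
        Value::new(self.val / rhs.val)
    }
}

fn main() {
    let speed = Value::<Meters>::new(100.0) / Value::<Seconds>::new(10.0);
    println!("{} m/s", speed.val); // 10 m/s
    // Value::<Meters>::new(1.0) + Value::<Seconds>::new(1.0); // would not compile
}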

Books That Will Help

Topic Book Chapter
Type-safe units “Idiomatic Rust” Ch. 5
Generic Math “Programming Rust” Ch. 11
Zero-Cost Wrappers “Effective Rust” Item 5

Project 10: Procedural Macro for Trait Reflection (Metaprogramming)

  • Main Programming Language: Rust
  • Coolness Level: Level 3: Genuinely Clever
  • Difficulty: Level 4: Expert
  • Knowledge Area: Metaprogramming / Compiler Plugins

What you’ll build: A derive macro #[derive(Reflect)] that generates a metadata struct for any Rust struct, allowing you to iterate over field names and types at runtime (a feature Rust doesn’t have natively).

Why it teaches Macros: You’ll learn how to parse Rust code into an AST (Abstract Syntax Tree) and generate new code. This is the foundation for crates like serde, diesel, and bevy.


Real World Outcome

You’ll be able to print the fields of a struct without manually writing a Debug implementation or using reflection libraries. This is perfect for building your own serialization or GUI inspectors.

Example Usage:

#[derive(Reflect)]
struct User {
    name: String,
    age: u32,
    email: String,
}

#[derive(Reflect)]
struct Product {
    id: u64,
    price: f32,
    in_stock: bool,
}

fn main() {
    println!("=== User Reflection ===");
    for field in User::fields() {
        println!("  Field: {}, Type: {}", field.name, field.type_name);
    }

    println!("\n=== Product Reflection ===");
    for field in Product::fields() {
        println!("  Field: {}, Type: {}", field.name, field.type_name);
    }
}

Console Output:

$ cargo run
=== User Reflection ===
  Field: name, Type: alloc::string::String
  Field: age, Type: u32
  Field: email, Type: alloc::string::String

=== Product Reflection ===
  Field: id, Type: u64
  Field: price, Type: f32
  Field: in_stock, Type: bool

Generated Code (via cargo expand):

When you use #[derive(Reflect)], the macro generates code like this:

// Original code
#[derive(Reflect)]
struct User {
    name: String,
    age: u32,
    email: String,
}

// What the macro generates (shown via `cargo expand`)
struct User {
    name: String,
    age: u32,
    email: String,
}

impl Reflect for User {
    fn fields() -> &'static [FieldInfo] {
        &[
            FieldInfo {
                name: "name",
                type_name: "alloc::string::String",
                offset: 0usize,
            },
            FieldInfo {
                name: "age",
                type_name: "u32",
                offset: 24usize,
            },
            FieldInfo {
                name: "email",
                type_name: "alloc::string::String",
                offset: 32usize,
            },
        ]
    }

    fn type_name() -> &'static str {
        "User"
    }

    fn field_count() -> usize {
        3usize
    }
}
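
For the generated impl to compile, a companion (non-macro) crate has to define the trait and metadata struct it refers to. Those definitions are not shown in the original example, but a minimal version might look like this:

// Lives in the ordinary library crate that re-exports the derive macro.
pub struct FieldInfo {
    pub name: &'static str,
    pub type_name: &'static str,
    pub offset: usize,
}

pub trait Reflect {
    fn fields() -> &'static [FieldInfo];
    fn type_name() -> &'static str;
    fn field_count() -> usize {
        Self::fields().len()
    }
}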

Advanced Usage - Building a Generic Inspector:

fn inspect<T: Reflect>(type_name: &str) {
    println!("\n╔════════════════════════════════════════╗");
    println!("║ Type Inspector: {:<22} ║", type_name);
    println!("╠════════════════════════════════════════╣");
    println!("║ Field Count: {:<24} ║", T::field_count());
    println!("╠════════════════════════════════════════╣");

    for (i, field) in T::fields().iter().enumerate() {
        println!("║ [{}] {:<32} ║", i, field.name);
        println!("║     Type: {:<27} ║", field.type_name);
        println!("║     Offset: {} bytes{:<17} ║", field.offset, "");
        if i < T::field_count() - 1 {
            println!("╟────────────────────────────────────────╢");
        }
    }
    println!("╚════════════════════════════════════════╝");
}

fn main() {
    inspect::<User>("User");
    inspect::<Product>("Product");
}

Output:

╔════════════════════════════════════════╗
║ Type Inspector: User                   ║
╠════════════════════════════════════════╣
║ Field Count: 3                         ║
╠════════════════════════════════════════╣
║ [0] name                               ║
║     Type: alloc::string::String        ║
║     Offset: 0 bytes                    ║
╟────────────────────────────────────────╢
║ [1] age                                ║
║     Type: u32                          ║
║     Offset: 24 bytes                   ║
╟────────────────────────────────────────╢
║ [2] email                              ║
║     Type: alloc::string::String        ║
║     Offset: 32 bytes                   ║
╚════════════════════════════════════════╝

╔════════════════════════════════════════╗
║ Type Inspector: Product                ║
╠════════════════════════════════════════╣
║ Field Count: 3                         ║
╠════════════════════════════════════════╣
║ [0] id                                 ║
║     Type: u64                          ║
║     Offset: 0 bytes                    ║
╟────────────────────────────────────────╢
║ [1] price                              ║
║     Type: f32                          ║
║     Offset: 8 bytes                    ║
╟────────────────────────────────────────╢
║ [2] in_stock                           ║
║     Type: bool                         ║
║     Offset: 12 bytes                   ║
╚════════════════════════════════════════╝

Verification via cargo expand:

$ cargo install cargo-expand
$ cargo expand --lib

# Shows the exact code generated by your procedural macro
# Compare this to what you expected to verify correctness

The Core Question You’re Answering

“How do I write code that writes code?”

By intercepting the compilation process, you can read the programmer’s intent (the struct definition) and generate the boilerplate necessary to inspect it at runtime.

Concepts You Must Understand First

  1. Token Streams
    • What is the difference between proc_macro::TokenStream and proc_macro2::TokenStream?
    • Book Reference: “Programming Rust” Ch. 20
  2. AST Parsing with syn
    • How to navigate a DeriveInput struct to find fields.
    • Book Reference: syn crate documentation
  3. Code Generation with quote
    • How to use the quote! macro to turn variables back into Rust code.

Questions to Guide Your Design

  1. Hygiene
    • How do you ensure your generated code doesn’t conflict with the user’s variables?
  2. Error Handling
    • What happens if the user tries to #[derive(Reflect)] on an enum? How do you show a nice compiler error? (See the sketch after this list.)
  3. Visibility
    • Does your macro work for private fields? Should it?
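
For question 2, the usual answer (assuming the syn crate) is to return a spanned syn::Error instead of panicking, so the user sees an ordinary compiler diagnostic that points at their type. A small helper the derive entry point could call:

use syn::{Data, DeriveInput};

// Validation step to run before generating anything. The error carries a span,
// so rustc points at the user's type rather than somewhere inside the macro.
fn check_is_struct(input: &DeriveInput) -> Result<(), syn::Error> {
    match &input.data {
        Data::Struct(_) => Ok(()),
        _ => Err(syn::Error::new_spanned(
            &input.ident,
            "#[derive(Reflect)] only supports structs",
        )),
    }
}

// In reflect_derive: on Err(e), return `e.to_compile_error().into()` so the
// generated tokens expand to a compile_error! invocation.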

Thinking Exercise

The Compiler’s View

Look at a simple struct. Now imagine it as a tree (AST). What are the nodes? (StructName, FieldList, FieldName, TypeName). Draw this tree on paper. This is what you will be traversing with the syn crate.

The Interview Questions They’ll Ask

  1. “What is a procedural macro and how does it differ from a declarative macro (macro_rules!)?”
  2. “Why do procedural macros need to live in their own crate?”
  3. “Explain the role of the syn and quote crates.”
  4. “What is ‘macro hygiene’?”

Hints in Layers

Hint 1: The Crate Type Make sure your Cargo.toml has proc-macro = true in the [lib] section.

Hint 2: The entry point #[proc_macro_derive(Reflect)] pub fn reflect_derive(input: TokenStream) -> TokenStream { ... }

Hint 3: Field iteration Use syn::parse_macro_input!(input as DeriveInput) then match on Data::Struct.
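
A sketch of how the three hints fit together, assuming the syn and quote crates plus the FieldInfo and Reflect definitions shown earlier. Field offsets are taken with core::mem::offset_of! (stable since Rust 1.77), and the type name here is just the token text of the type rather than the full std::any::type_name path.

use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, Data, DeriveInput, Fields};

#[proc_macro_derive(Reflect)]
pub fn reflect_derive(input: TokenStream) -> TokenStream {
    let input = parse_macro_input!(input as DeriveInput);
    let name = &input.ident;
    let type_name = name.to_string();

    // Sketch: accept only structs with named fields. A real macro would return
    // a spanned syn::Error (see the earlier note) instead of panicking.
    let fields = match &input.data {
        Data::Struct(data) => match &data.fields {
            Fields::Named(named) => &named.named,
            _ => panic!("#[derive(Reflect)] needs named fields"),
        },
        _ => panic!("#[derive(Reflect)] only supports structs"),
    };

    let count = fields.len();
    let entries = fields.iter().map(|field| {
        let ident = field.ident.as_ref().unwrap();
        let field_name = ident.to_string();
        let ty = &field.ty;
        // Token text of the type ("String", "u32", ...); swap in
        // ::core::any::type_name::<#ty>() if you want the full path.
        let ty_name = quote!(#ty).to_string();
        quote! {
            FieldInfo {
                name: #field_name,
                type_name: #ty_name,
                offset: ::core::mem::offset_of!(#name, #ident),
            }
        }
    });

    quote! {
        impl Reflect for #name {
            fn fields() -> &'static [FieldInfo] {
                // offset_of! is const-evaluable, so the table can be a const slice.
                const FIELDS: &[FieldInfo] = &[ #( #entries ),* ];
                FIELDS
            }
            fn type_name() -> &'static str { #type_name }
            fn field_count() -> usize { #count }
        }
    }
    .into()
}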

Books That Will Help

Topic Book Chapter
Macros “Programming Rust” Ch. 20
Metaprogramming “The Rust Programming Language” Ch. 19
Advanced Syn syn crate docs Tutorials

Project 11: The no_std Game Boy Core (CPU Simulation)

  • Main Programming Language: Rust
  • Coolness Level: Level 5: Pure Magic (Super Cool)
  • Difficulty: Level 5: Master
  • Knowledge Area: Computer Architecture / Emulation

What you’ll build: A CPU core (LR35902) for the Game Boy that works in a no_std environment. You will implement the registers, the instruction set, and the memory map.

Why it teaches Systems: This project combines no_std, bit manipulation, and architectural understanding. You’ll see why Rust’s safety is a superpower when implementing complex state machines like a CPU.


Real World Outcome

A library that can be compiled to WebAssembly (no_std) or run on an ESP32 to execute original Game Boy ROMs. You will be able to load a .gb file and watch the CPU cycles execute.

Example: Boot Sequence & CPU Trace

$ cargo run --release -- --rom roms/tetris.gb --trace --frames 3

╔══════════════════════════════════════════════════════════════════╗
║            Game Boy Emulator Core v0.1.0 (no_std)                 ║
╠══════════════════════════════════════════════════════════════════╣
║ ROM: tetris.gb                                                    ║
║ Size: 32768 bytes (32 KB)                                         ║
║ Type: ROM ONLY (No MBC)                                           ║
║ Checksum: 0x3B VALID ✓                                            ║
╚══════════════════════════════════════════════════════════════════╝

[BOOT] Initializing CPU (LR35902 @ 4.194304 MHz)
[BOOT] Initializing PPU (LCD Controller)
[BOOT] Initializing Memory Map (64KB address space)
[BOOT] Loading Boot ROM (256 bytes)
[BOOT] Starting execution at 0x0000

โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ• CPU EXECUTION TRACE โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•

Cycle: 0000000 | PC: 0x0000 | SP: 0xFFFE | [BOOT ROM]
  OP: 0x31 (LD SP, d16)    Operand: 0xFFFE
  Registers: A:00 F:00 B:00 C:00 D:00 E:00 H:00 L:00
  Flags: [Z:0 N:0 H:0 C:0]
  → Setting Stack Pointer to 0xFFFE
  Cycles: 12

Cycle: 0000012 | PC: 0x0003 | SP: 0xFFFE
  OP: 0xAF (XOR A, A)      Operand: --
  Registers: A:00 F:00 B:00 C:00 D:00 E:00 H:00 L:00
  Flags: [Z:0 N:0 H:0 C:0]
  → A = A ^ A = 0x00
  Flags: [Z:1 N:0 H:0 C:0] (Zero flag SET)
  Cycles: 4

Cycle: 0000016 | PC: 0x0004 | SP: 0xFFFE
  OP: 0x21 (LD HL, d16)    Operand: 0xFF26
  Registers: A:00 F:80 B:00 C:00 D:00 E:00 H:00 L:00
  Flags: [Z:1 N:0 H:0 C:0]
  → HL = 0xFF26 (Audio Master Control)
  Cycles: 12

Cycle: 0000028 | PC: 0x0007 | SP: 0xFFFE
  OP: 0x0E (LD C, d8)      Operand: 0x11
  Registers: A:00 F:80 B:00 C:00 D:00 E:00 H:FF L:26
  Flags: [Z:1 N:0 H:0 C:0]
  → C = 0x11
  Cycles: 8

... [Boot ROM execution continues for 244 cycles] ...

Cycle: 0000244 | PC: 0x00FC | SP: 0xFFFE | [BOOT ROM → GAME ROM]
  OP: 0xE0 (LDH (a8), A)   Operand: 0x50
  Registers: A:01 F:80 B:00 C:13 D:00 E:D8 H:01 L:4D
  Flags: [Z:1 N:0 H:0 C:0]
  → Writing 0x01 to 0xFF50 (Disabling Boot ROM)
  Memory[0xFF50] = 0x01
  Cycles: 12

[BOOT] Boot ROM disabled. Switching to Game ROM at 0x0100

Cycle: 0000256 | PC: 0x0100 | SP: 0xFFFE | [GAME ROM START]
  OP: 0x00 (NOP)           Operand: --
  Registers: A:01 F:B0 B:00 C:13 D:00 E:D8 H:01 L:4D
  Flags: [Z:1 N:0 H:1 C:1]
  → No operation
  Cycles: 4

Cycle: 0000260 | PC: 0x0101 | SP: 0xFFFE
  OP: 0xC3 (JP d16)        Operand: 0x0150
  Registers: A:01 F:B0 B:00 C:13 D:00 E:D8 H:01 L:4D
  Flags: [Z:1 N:0 H:1 C:1]
  → Jumping to 0x0150 (Game Entry Point)
  Cycles: 16

Cycle: 0000276 | PC: 0x0150 | SP: 0xFFFE | [GAME INIT]
  OP: 0x3E (LD A, d8)      Operand: 0x00
  Registers: A:01 F:B0 B:00 C:13 D:00 E:D8 H:01 L:4D
  Flags: [Z:1 N:0 H:1 C:1]
  → A = 0x00
  Cycles: 8

... [Game initialization continues] ...

โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ• PPU RENDERING TRACE โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•

[PPU] Frame 0 | Scanline: 000 | Mode: OAM_SCAN (Cycles: 80)
  LCD Control (0xFF40): 0x91 [LCD_ON BG_ON WIN_OFF OBJ_OFF]
  Scroll: SCX=0x00, SCY=0x00
  Window: WX=0x00, WY=0x00
  BG Palette: 0xFC [11 11 11 00]

[PPU] Frame 0 | Scanline: 000 | Mode: DRAWING (Cycles: 172)
  Drawing background tiles from 0x9800 (Tile Map 0)
  Tile Data Base: 0x8000
  Rendering 160 pixels...
  ████████████████████████████████████████ [COMPLETE]

[PPU] Frame 0 | Scanline: 000 | Mode: HBLANK (Cycles: 204)
  Horizontal blank period (waiting for next scanline)

[PPU] Frame 0 | Scanline: 001 | Mode: OAM_SCAN (Cycles: 80)
[PPU] Frame 0 | Scanline: 001 | Mode: DRAWING (Cycles: 172)
[PPU] Frame 0 | Scanline: 001 | Mode: HBLANK (Cycles: 204)

... [Scanlines 2-143 continue] ...

[PPU] Frame 0 | Scanline: 144 | Mode: VBLANK (Cycles: 456)
  ╔════════════════════════════════════════════════════════════╗
  ║ VBLANK INTERRUPT TRIGGERED                                  ║
  ║ Frame Complete: 144 scanlines rendered                      ║
  ║ Total Cycles: 70224 (16.74ms @ 4.194 MHz)                   ║
  ║ FPS: 59.73 Hz (Target: 59.73 Hz) ✓                          ║
  ╚════════════════════════════════════════════════════════════╝

[INT] VBLANK interrupt requested (IF: 0x01)
[INT] Jumping to interrupt handler at 0x0040

Cycle: 0070224 | PC: 0x0040 | SP: 0xFFFC | [INTERRUPT HANDLER]
  OP: 0xC5 (PUSH BC)       Operand: --
  Registers: A:00 F:80 B:01 C:44 D:00 E:56 H:C0 L:00
  → Pushing BC to stack: [0xFFFB] = 0x01, [0xFFFA] = 0x44
  SP: 0xFFFC → 0xFFFA
  Cycles: 16

... [VBLANK handler executes] ...

[PPU] Frame 1 | Starting new frame
[PPU] Frame 2 | Starting new frame
[PPU] Frame 3 | Starting new frame

โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ• EXECUTION SUMMARY โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•

Total Cycles Executed: 210,672
Total Instructions: 18,234
Total Frames Rendered: 3
Average FPS: 59.73 Hz
Execution Time: 50.23ms (simulated)

CPU Instruction Breakdown:
  8-bit Loads:    4,523 (24.8%)
  16-bit Loads:     891 ( 4.9%)
  Arithmetic:     3,012 (16.5%)
  Logic:          2,341 (12.8%)
  Jumps/Calls:    1,876 (10.3%)
  Stack Ops:        567 ( 3.1%)
  Other:          5,024 (27.6%)

Memory Access Statistics:
  ROM Reads:     15,234
  RAM Reads:      8,901
  RAM Writes:     4,567
  I/O Reads:      1,234
  I/O Writes:       892

Interrupt Statistics:
  VBLANK:    3 (100%)
  LCD_STAT:  0
  TIMER:     0
  SERIAL:    0
  JOYPAD:    0

[EMULATOR] Execution complete. Exiting.

Example: no_std Build for Embedded Target

$ cargo build --target thumbv7em-none-eabihf --release
   Compiling gameboy-core v0.1.0
    Finished release [optimized] target(s) in 3.42s

$ arm-none-eabi-size target/thumbv7em-none-eabihf/release/libgameboy_core.a
   text    data     bss     dec     hex filename
  12840      24    2048   14912    3a40 gameboy_core.o

[BUILD] Successfully compiled for ARM Cortex-M4 (no_std)
[BUILD] Code size: 12.5 KB (Flash)
[BUILD] RAM usage: 2 KB (Static)

The Core Question You’re Answering

“How does a CPU actually process instructions?”

You will build the fetch-decode-execute loop from scratch, learning how bits in memory are translated into movements of data between registers.

Concepts You Must Understand First

  1. Binary & Hexadecimal
    • Mastering bitmasks, shifts, and carries.
  2. Memory Banking (MBC)
    • How to access more ROM than the CPU address space allows.
    • Book Reference: “Game Boy Coding Adventure”
  3. Instruction Sets
    • Reading an opcode table and implementing 200+ instructions.

Questions to Guide Your Design

  1. State Management
    • How do you represent the CPU registers? (Hint: A struct with u8 and u16 fields).
  2. The Memory Bus
    • How do you route a read(0xFF40) to the PPU instead of the RAM?
  3. Timing
    • How do you ensure the CPU doesn’t run “too fast”? (Hint: Cycle counting).

Thinking Exercise

The Flag Register

The Game Boy CPU has a flag register (Z, N, H, C). Research how the “Half-Carry” (H) flag works. Why is it used for BCD (Binary Coded Decimal) math? Try implementing a u8 addition that correctly sets all 4 flags.
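
A sketch of that exercise for the ADD case, using the flag semantics documented for the LR35902. The tuple return is only for illustration; a real CPU struct would pack the four flags into the upper bits of the F register.

// ADD on the Game Boy CPU: result plus the Z, N, H, C flags.
fn add8(a: u8, b: u8) -> (u8, bool, bool, bool, bool) {
    let result = a.wrapping_add(b);
    let z = result == 0;                      // Zero: the 8-bit result wrapped to 0
    let n = false;                            // Subtract flag is always cleared by ADD
    let h = (a & 0x0F) + (b & 0x0F) > 0x0F;   // Half-carry: carry out of bit 3
    let c = (a as u16) + (b as u16) > 0xFF;   // Carry: carry out of bit 7
    (result, z, n, h, c)
}

fn main() {
    // 0x3A + 0xC6 wraps to 0x00 and sets Z, H and C (a useful case for DAA later).
    let (r, z, n, h, c) = add8(0x3A, 0xC6);
    println!("result={:02X} Z={} N={} H={} C={}", r, z, n, h, c);
}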

The Interview Questions They’ll Ask

  1. “How do you implement an emulator’s main loop in Rust?”
  2. “Why is no_std important for an emulation core?”
  3. “How do you handle the 8-bit vs 16-bit register access (e.g., AF, BC, DE, HL)?”
  4. “What is the most difficult Game Boy instruction to implement correctly? (Hint: DAA).”

Hints in Layers

Hint 1: The Registers Use #[repr(C)] and a union or bit-shifting to allow accessing HL as a u16 and H or L as u8.

Hint 2: Opcode Dispatch A giant match opcode { ... } is actually very efficient in Rust. The compiler will often turn it into a jump table.

Hint 3: Testing Use “Blargg’s Test ROMs”. They are the industry standard for verifying CPU instruction correctness.
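
A stable-Rust sketch of Hints 1 and 2, using plain bit-shifting for the register pairs and a match for opcode dispatch. The opcodes and cycle counts shown are only the handful that appear in the trace above; the real table has hundreds of entries.

// Hint 1: the register file, with HL accessible as a pair or as two halves.
#[derive(Default)]
struct Registers {
    a: u8, f: u8,
    b: u8, c: u8,
    d: u8, e: u8,
    h: u8, l: u8,
    sp: u16,
    pc: u16,
}

impl Registers {
    fn hl(&self) -> u16 {
        ((self.h as u16) << 8) | self.l as u16
    }
    fn set_hl(&mut self, value: u16) {
        self.h = (value >> 8) as u8;
        self.l = (value & 0xFF) as u8;
    }
}

// Hint 2: a giant match that the compiler can lower to a jump table.
// Returns the number of clock cycles the instruction took.
fn execute(regs: &mut Registers, opcode: u8) -> u32 {
    match opcode {
        0x00 => 4,                                     // NOP
        0xAF => { regs.a ^= regs.a; regs.f = 0x80; 4 } // XOR A, A (sets Z)
        0x21 => { /* LD HL, d16: fetch the operand, then regs.set_hl(d16) */ 12 }
        _ => panic!("unimplemented opcode {:#04X}", opcode),
    }
}

fn main() {
    let mut regs = Registers::default();
    regs.set_hl(0xFF26);
    assert_eq!((regs.h, regs.l, regs.hl()), (0xFF, 0x26, 0xFF26));
    let cycles = execute(&mut regs, 0xAF);
    assert_eq!((regs.a, regs.f, cycles), (0x00, 0x80, 4));
}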

Books That Will Help

Topic Book Chapter
CPU Arch “Computer Organization and Design” Ch. 4
Game Boy Internals “Game Boy Coding Adventure” Full Book
Bit Manipulation “Art of Computer Programming” Vol 4

Project 12: High-Performance KV Store (Custom Everything)

  • Main Programming Language: Rust
  • Coolness Level: Level 5: Pure Magic (Super Cool)
  • Difficulty: Level 5: Master
  • Knowledge Area: Databases / Systems Engineering

What you’ll build: A Key-Value store that uses a custom Arena Allocator for the index, Zero-Copy parsing for values, and Atomics for thread-safe access. It should support millions of operations per second on a single core.

Why it teaches Mastery: This is the “Final Boss” of the learning path. It requires you to integrate almost every advanced concept: pinning, custom allocation, atomics, and lifetimes.


Real World Outcome

A database engine that rivals Redis or RocksDB in speed for specific workloads. You will build a CLI tool to interact with your store.

Example Session:

$ ./my_kv_store --bench
Writing 1,000,000 keys... DONE (0.45s)
Reading 1,000,000 keys... DONE (0.31s)
Throughput: 3.2M req/s
Memory Usage: 140MB

The Core Question You’re Answering

“How do I build a production-grade system component?”

You will move from “writing code” to “engineering a system,” where you must balance memory usage, disk I/O, and CPU cycles.

Concepts You Must Understand First

  1. LSM Trees or B-Trees
    • Which data structure is better for writes vs reads?
    • Book Reference: “Designing Data-Intensive Applications” Ch. 3
  2. Memory Mapping (mmap)
    • How to treat a file on disk as if it were in RAM.
    • Book Reference: “The Linux Programming Interface” Ch. 49
  3. Concurrent Data Structures
    • How to allow multiple threads to read the index while one thread writes.

Questions to Guide Your Design

  1. Persistence
    • What happens if the power goes out? How do you ensure data isn’t corrupted? (Hint: WAL - Write Ahead Log).
  2. Compaction
    • If you update a key 100 times, do you store 100 versions? How do you clean them up?
  3. Naming
    • How do you handle keys that are longer than your indexโ€™s fixed-size slots?

Thinking Exercise

The Tail Latency

If your KV store is 99% fast but 1% takes 100ms (due to a lock or a disk flush), your users will be unhappy. How do you use std::sync::atomic to ensure the “Hot Path” of reads never blocks?
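
One common answer is RCU-style pointer swapping: readers atomically load an immutable snapshot of the index, and the writer publishes a new snapshot with a single atomic store. The sketch below leans on the arc-swap crate (an assumed dependency, not something the project text prescribes) rather than a raw AtomicPtr, because safe memory reclamation is the hard part a crate already solves.

use std::collections::HashMap;
use std::sync::Arc;

use arc_swap::ArcSwap; // assumed dependency: arc-swap

struct Store {
    // Readers share an immutable snapshot; the value is an offset into the log.
    index: ArcSwap<HashMap<String, u64>>,
}

impl Store {
    fn new() -> Self {
        Store { index: ArcSwap::from_pointee(HashMap::new()) }
    }

    // Hot path: one atomic load, no lock, no allocation, never blocks.
    fn get(&self, key: &str) -> Option<u64> {
        self.index.load().get(key).copied()
    }

    // Cold path: copy-on-write the index, then publish it atomically.
    // Readers that started earlier keep using the old snapshot until they drop it.
    fn insert(&self, key: String, offset: u64) {
        let mut next = HashMap::clone(&self.index.load_full());
        next.insert(key, offset);
        self.index.store(Arc::new(next));
    }
}

fn main() {
    let store = Store::new();
    store.insert("user:1".to_string(), 0);
    assert_eq!(store.get("user:1"), Some(0));
}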

The Interview Questions They’ll Ask

  1. “What is the difference between an LSM Tree and a B+ Tree?”
  2. “Why is mmap faster than read/write for some workloads?”
  3. “How do you handle database recovery after a crash?”
  4. “What is ‘Write Amplification’?”

Hints in Layers

Hint 1: The Index Use a SkipList or a B-Tree. If you want to be hardcore, implement a lock-free SkipList.

Hint 2: The Data Append every write to a log file (Append-Only Log). This makes writes extremely fast.

Hint 3: The Reader The reader should use mmap to map the log file into memory. Use zero-copy parsing to turn the raw bytes into your Value struct.
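
A sketch of Hint 2, assuming a simple length-prefixed record format (the format and names here are illustrative, not prescribed by the project). The index from Hint 1 maps each key to the offset returned by append, and the mmap-based reader from Hint 3 parses records in place starting at that offset.

use std::fs::{File, OpenOptions};
use std::io::{BufWriter, Write};

// Append-only log: every write becomes [key_len | val_len | key | value].
struct Wal {
    file: BufWriter<File>,
    offset: u64, // where the next record will start
}

impl Wal {
    fn open(path: &str) -> std::io::Result<Self> {
        let file = OpenOptions::new().create(true).append(true).open(path)?;
        let offset = file.metadata()?.len();
        Ok(Wal { file: BufWriter::new(file), offset })
    }

    // Appends one record and returns its offset (what the index should store).
    // For durability you would flush and sync the file before acknowledging.
    fn append(&mut self, key: &[u8], value: &[u8]) -> std::io::Result<u64> {
        let record_offset = self.offset;
        self.file.write_all(&(key.len() as u32).to_le_bytes())?;
        self.file.write_all(&(value.len() as u32).to_le_bytes())?;
        self.file.write_all(key)?;
        self.file.write_all(value)?;
        self.offset += 8 + key.len() as u64 + value.len() as u64;
        Ok(record_offset)
    }
}

fn main() -> std::io::Result<()> {
    let mut wal = Wal::open("kv.log")?;
    let offset = wal.append(b"user:1", b"{\"name\":\"ada\"}")?;
    println!("record written at offset {offset}");
    Ok(())
}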

Books That Will Help

Topic Book Chapter
DB Internals “Designing Data-Intensive Applications” Ch. 3
System Programming “The Linux Programming Interface” Ch. 49
Lock-Free Rust “Rust Atomics and Locks” Ch. 9

Final Overall Project: A Self-Hosting, no_std, Async OS Core

This is the ultimate test of your Rust journey. You will build a micro-kernel that:

  1. Runs in no_std mode on bare metal.
  2. Uses a Custom Global Allocator that you wrote (Project 3).
  3. Implements an Async Executor to handle hardware interrupts as futures (Project 8).
  4. Uses Procedural Macros to define system calls (Project 10).
  5. Employs Const Generics for type-safe memory maps (Project 5).

Real World Outcome

A functioning micro-OS that can run its own shell and execute simple user programs. You will be one of the few developers on earth who has built an OS from scratch using modern memory-safe languages.

The Core Question You’re Answering

“Can I build the entire stack?”

Yes. From the hardware boot sequence to the async application logic, you have mastered every layer of the modern computing stack.

Books That Will Help

Topic Book Chapter
Complete OS Design “Operating Systems: Three Easy Pieces” Full Book
Advanced Rust Patterns “Rust for Rustaceans” Full Book
Low-Level Secrets “The Secret Life of Programs” Full Book

Summary

This learning path covers Advanced Rust through 12 hands-on projects. Here’s the complete list:

# Project Name Main Language Difficulty Time Estimate
1 Manual Pin Projector Rust Advanced 3-5 days
2 Box-less Async Trait Rust Expert 1 week
3 Custom Arena Allocator Rust Advanced 1 week
4 no_std Kernel Core Rust Expert 2 weeks
5 Const Generic Matrix Rust Intermediate 1-2 weeks
6 Atomic Lock-Free Queue Rust Master 2-3 weeks
7 Zero-Copy Parser Rust Advanced 1 week
8 Custom Future Runtime Rust Master 2-3 weeks
9 Physical Units Lib Rust Advanced 1 week
10 Reflect Derive Macro Rust Expert 1 week
11 no_std Game Boy Core Rust Master 1 month+
12 High-Perf KV Store Rust Master 1-2 months

For beginners (to Advanced concepts): Start with projects #1, #3, #5.
For intermediate (in Advanced concepts): Jump to projects #2, #7, #9, #10.
For advanced (Systems Masters): Focus on projects #4, #6, #8, #11, #12.

Expected Outcomes

After completing these projects, you will:

  • Understand exactly how Pin works and why it’s necessary for safety.
  • Be able to build high-performance systems without hidden allocations.
  • Master the no_std ecosystem for embedded or kernel-level work.
  • Use atomics to build lock-free data structures that scale with CPU cores.
  • Harness the full power of Rust’s type system to move runtime errors to compile time.

You’ll have built 12 working projects that demonstrate deep understanding of the Rust ecosystem from first principles.