Project 11: Unsafe Rust Soundness Audit and Boundary Hardening

Build and audit a documented unsafe boundary where safety invariants are explicit, enforceable, and test-backed.

Quick Reference

Attribute                         | Value
Difficulty                        | Level 4: Expert
Time Estimate                     | 2-3 weeks
Main Programming Language         | Rust
Alternative Programming Languages | C, C++
Coolness Level                    | Level 5: Pure Magic (Super Cool)
Business Potential                | 3. The “Service & Support” Model
Prerequisites                     | Projects 1-10 and FFI familiarity
Key Topics                        | Safety invariants, soundness docs, unsafe isolation, audits

1. Learning Objectives

  1. Isolate unsafe operations behind safe interfaces.
  2. Write explicit safety invariants and soundness documentation.
  3. Build an unsafe inventory with ownership and review cadence.
  4. Validate boundary assumptions with targeted regression tests.

2. Theoretical Foundation

2.1 Core Concepts

  • Unsafe boundary: narrow zone where manual guarantees are required.
  • Soundness contract: the conditions under which the safe API cannot invoke undefined behavior (UB).
  • Invariant ownership: each safety claim has a clear maintainer.
  • Audit workflow: repeatable review, not one-time trust.
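The first two concepts can be sketched in a few lines: a safe method checks the precondition, and the unsafe core documents the invariant it relies on. This is a minimal sketch; the `ByteBuf` type and its methods are illustrative, not part of any specified API.

```rust
// A minimal unsafe boundary: the safe API validates, the unsafe site
// documents why the validated condition makes the operation sound.
pub struct ByteBuf {
    data: Vec<u8>,
}

impl ByteBuf {
    pub fn new(data: Vec<u8>) -> Self {
        ByteBuf { data }
    }

    /// Safe API: validates the index, so callers cannot trigger UB.
    pub fn get(&self, i: usize) -> Option<u8> {
        if i < self.data.len() {
            // SAFETY: `i < self.data.len()` was checked above, so the
            // unchecked read stays in bounds.
            Some(unsafe { *self.data.get_unchecked(i) })
        } else {
            None
        }
    }
}

fn main() {
    let buf = ByteBuf::new(vec![10, 20, 30]);
    assert_eq!(buf.get(1), Some(20));
    assert_eq!(buf.get(9), None);
}
```

Note how the soundness contract is discharged inside the module: no caller of `get` can violate it, which is what makes the public API safe.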

2.2 Why This Matters

Many critical Rust systems need unsafe code for FFI or low-level memory work. Sustainable safety depends on proof discipline, not optimism.

2.3 Common Misconceptions

  • “Unsafe code is always bad” -> undocumented unsafe is bad.
  • “If tests pass, unsafe is safe” -> tests are necessary but not sufficient.
  • “One audit is enough” -> unsafe assumptions drift over time.

3. Project Specification

3.1 What You Will Build

A module-level unsafe boundary for a pointer/FFI-heavy component with:

  • invariant documentation
  • per-site SAFETY: rationale
  • safe wrapper API
  • audit checklist and ownership table
  • regression tests for boundary violations

3.2 Functional Requirements

  1. All unsafe sites are inventoried.
  2. Every unsafe block has a concrete safety contract.
  3. Public API remains safe and validates preconditions.
  4. Boundary regressions are covered by tests.
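Requirements 2 and 3 can be sketched together: an unsafe function whose contract lives in a rustdoc `# Safety` section, and a safe public wrapper that establishes every clause of that contract before calling it. The names (`read_at`, `read_checked`) are illustrative.

```rust
/// Reads one byte at offset `i` from the region starting at `ptr`.
///
/// # Safety
/// Callers must guarantee:
/// - `ptr` is non-null and valid for reads of `len` bytes,
/// - `i < len`,
/// - no other code mutates the region during the call.
unsafe fn read_at(ptr: *const u8, len: usize, i: usize) -> u8 {
    debug_assert!(i < len);
    // SAFETY: the caller upholds the contract in the `# Safety` section.
    unsafe { *ptr.add(i) }
}

/// Safe wrapper: derives the pointer and length from a slice, so every
/// clause of the contract above is established by construction.
fn read_checked(bytes: &[u8], i: usize) -> Option<u8> {
    if i < bytes.len() {
        // SAFETY: the slice guarantees a valid region of `bytes.len()`
        // bytes; the bound was checked on the line above; the shared
        // borrow forbids concurrent mutation for its duration.
        Some(unsafe { read_at(bytes.as_ptr(), bytes.len(), i) })
    } else {
        None
    }
}

fn main() {
    let data = [1u8, 2, 3];
    assert_eq!(read_checked(&data, 2), Some(3));
    assert_eq!(read_checked(&data, 3), None);
}
```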

3.3 Non-Functional Requirements

  • Auditability: another engineer can verify assumptions.
  • Containment: unsafe code is localized.
  • Longevity: contracts survive refactors through docs/tests.

3.4 Example Usage / Output

$ cargo test --package unsafe_boundary
running 91 tests
91 passed; 0 failed

$ rg -n "SAFETY:" src/
src/buffer.rs:48: // SAFETY: pointer non-null, aligned, len validated, exclusive mut access
src/ffi.rs:71: // SAFETY: ownership transfer contract documented in module invariants

$ cargo doc --package unsafe_boundary
Generated target/doc/unsafe_boundary/index.html

3.5 Real World Outcome

Your unsafe code becomes reviewable engineering: assumptions are explicit, localized, and continuously validated.


4. Solution Architecture

4.1 High-Level Design

Public Safe API -> Precondition checks -> Unsafe core module -> External memory/FFI surface
                            │                      │
                            └──────────┬───────────┘
                                       │
                      invariants + tests + audit records

4.2 Key Components

Component        | Responsibility                      | Key Decision
Unsafe inventory | Track every unsafe site             | owner + invariant + review date
Boundary module  | Encapsulate raw pointer/FFI ops     | private internals only
Safety docs      | Explain assumptions and guarantees  | concrete pre/post conditions
Regression tests | Enforce boundary contracts          | include adversarial cases

5. Implementation Guide

5.1 The Core Question You’re Answering

“Can an engineer who did not write this code verify and trust the unsafe boundary?”

5.2 Concepts You Must Understand First

  1. Rust UB model and unsafe operations.
  2. Aliasing/lifetime validity rules.
  3. Ownership transfer semantics in FFI.
  4. Safety-comment best practices.
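Concept 3 above (ownership transfer in FFI) can be sketched in miniature: a heap allocation leaves Rust through an `extern "C"` function and must be reclaimed exactly once. The `ffi_*` names are illustrative; a real export would also carry `#[no_mangle]` and a C header declaring the contract.

```rust
/// Allocates a counter and transfers ownership to the caller.
/// The caller must return the pointer to `ffi_free_counter` exactly once.
pub extern "C" fn ffi_make_counter(start: u64) -> *mut u64 {
    // Ownership leaves Rust here: the Box is leaked into a raw pointer.
    Box::into_raw(Box::new(start))
}

/// # Safety
/// `ptr` must be null or a pointer previously returned by
/// `ffi_make_counter` that has not already been freed.
pub unsafe extern "C" fn ffi_free_counter(ptr: *mut u64) {
    if !ptr.is_null() {
        // SAFETY: per the contract above, `ptr` came from `Box::into_raw`
        // and is freed exactly once, so reconstructing the Box is sound.
        unsafe { drop(Box::from_raw(ptr)) };
    }
}

fn main() {
    let p = ffi_make_counter(7);
    // SAFETY: `p` was just produced by `ffi_make_counter` and is freed once.
    unsafe {
        assert_eq!(*p, 7);
        ffi_free_counter(p);
    }
}
```

The drop contract is the part most easily lost in refactors, which is why the inventory should record it explicitly.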

5.3 Questions to Guide Your Design

  1. Which invariants are required at each unsafe call?
  2. Which checks must happen at API boundary versus internals?
  3. How will audits be triggered and recorded?

5.4 Thinking Exercise

Create a table with columns: unsafe site, operation type, invariant, failure consequence, test coverage, owner.

5.5 The Interview Questions They’ll Ask

  1. “What makes a safe Rust API unsound?”
  2. “How do you audit an unsafe block effectively?”
  3. “What must a SAFETY: comment contain?”
  4. “How do you prevent unsafe creep in large codebases?”
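Question 1 deserves a concrete answer: a safe `fn` that lets callers break an invariant consumed by an unsafe operation is unsound, even though no call site says `unsafe`. The sketch below uses an illustrative `Packet` type; the unsound variant is shown only in a comment, the sound variant checks the precondition.

```rust
pub struct Packet {
    buf: Vec<u8>,
    // Invariant: `valid <= buf.len()`.
    valid: usize,
}

impl Packet {
    pub fn new(buf: Vec<u8>) -> Self {
        let valid = buf.len();
        Packet { buf, valid }
    }

    // UNSOUND if written as an unchecked safe fn:
    //   pub fn set_valid(&mut self, valid: usize) { self.valid = valid; }
    // because `payload` below would then read out of bounds from safe code.

    /// Sound version: the precondition is checked, so the invariant holds.
    pub fn set_valid(&mut self, valid: usize) -> bool {
        if valid <= self.buf.len() {
            self.valid = valid;
            true
        } else {
            false
        }
    }

    pub fn payload(&self) -> &[u8] {
        // SAFETY: the struct invariant guarantees `valid <= buf.len()`.
        unsafe { self.buf.get_unchecked(..self.valid) }
    }
}

fn main() {
    let mut p = Packet::new(vec![1, 2, 3, 4]);
    assert!(p.set_valid(2));
    assert_eq!(p.payload(), &[1, 2]);
    assert!(!p.set_valid(99));
}
```

The interview-ready framing: soundness is a property of the whole module, because any safe method can invalidate an invariant that an unsafe block elsewhere relies on.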

5.6 Hints in Layers

  • Hint 1: Inventory before modification.
  • Hint 2: Convert implicit assumptions into explicit invariants.
  • Hint 3: Move unsafe into private modules.
  • Hint 4: Turn every boundary bug into a permanent regression test.
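Hint 3 in code form: keep the raw operations in a private submodule so `unsafe` cannot leak into the rest of the crate, and expose only checked wrappers. The module and function names here are illustrative.

```rust
mod boundary {
    // Private: nothing outside `boundary` can reach the unsafe core.
    mod raw {
        /// # Safety
        /// `i` must be in bounds for `v`.
        pub(super) unsafe fn get_unchecked(v: &[u32], i: usize) -> u32 {
            // SAFETY: the caller upholds `i < v.len()` per the contract.
            unsafe { *v.get_unchecked(i) }
        }
    }

    /// The only public entry point: validates before crossing the boundary.
    pub fn get(v: &[u32], i: usize) -> Option<u32> {
        if i < v.len() {
            // SAFETY: bound checked above.
            Some(unsafe { raw::get_unchecked(v, i) })
        } else {
            None
        }
    }
}

fn main() {
    assert_eq!(boundary::get(&[5, 6], 1), Some(6));
    assert_eq!(boundary::get(&[5, 6], 2), None);
}
```

Privacy is doing real work here: the audit only needs to walk the call sites inside `boundary`, not the whole crate.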

5.7 Books That Will Help

Topic                  | Book                  | Chapter
Unsafe foundations     | “The Rustonomicon”    | Unsafe + FFI chapters
API invariants         | “Rust for Rustaceans” | API design discussions
Engineering discipline | “Effective Rust”      | Maintainability and safety items

6. Testing Strategy

  • Boundary tests for invalid pointer/length/ownership conditions.
  • Regression tests for previously discovered safety bugs.
  • Optional Miri/sanitizer checks for additional confidence.
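The first two bullets can be encoded directly as tests against the safe wrapper. This sketch assumes a `get`-style checked accessor like the ones in the boundary module; the same suite can be re-run under Miri (`cargo +nightly miri test`) for the optional extra confidence.

```rust
fn get(v: &[u8], i: usize) -> Option<u8> {
    if i < v.len() {
        // SAFETY: bound checked above.
        Some(unsafe { *v.get_unchecked(i) })
    } else {
        None
    }
}

#[cfg(test)]
mod boundary_tests {
    use super::get;

    #[test]
    fn rejects_out_of_bounds_index() {
        // Regression guard for the off-by-one case: `len` itself is out
        // of bounds, so the boundary must reject it, not read past the end.
        assert_eq!(get(&[1, 2], 2), None);
    }

    #[test]
    fn handles_empty_input() {
        assert_eq!(get(&[], 0), None);
    }
}

fn main() {
    assert_eq!(get(&[9], 0), Some(9));
}
```

Each discovered boundary bug should become one of these tests, named after the failure it prevents.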

7. Common Pitfalls & Debugging

Pitfall                     | Symptom                     | Solution
Vague safety comments       | unreviewable unsafe blocks  | define precise pre/post conditions
Unsafe sprawl               | growing review burden       | centralize in boundary modules
Missing ownership semantics | leaks/double-free risk      | explicit transfer and drop contracts

8. Self-Assessment Checklist

  • Unsafe inventory is complete and current.
  • Every unsafe site has a precise invariant statement.
  • Safe API rejects invalid boundary conditions.
  • Audit schedule and ownership are documented.

9. Completion Criteria

Minimum Viable Completion

  • Unsafe boundary is isolated and documented.

Full Completion

  • Regression suite covers boundary invariants and failure paths.

Excellence

  • Includes formal-ish soundness note and reviewer playbook.