Project 5: 2D/3D Transformation Visualizer

A visual tool that shows how matrices transform shapes. Draw a square, apply a rotation matrix, see it rotate. Apply a shear matrix, see it skew. Compose multiple transformations and see the result.

Quick Reference

Attribute | Value
Difficulty | Level 3: Advanced (The Engineer)
Main Programming Language | Python (with Pygame or Matplotlib)
Alternative Programming Languages | JavaScript (Canvas/WebGL), C (SDL/OpenGL), Rust
Coolness Level | Level 4: Hardcore Tech Flex
Business Potential | 1. The “Resume Gold” (Educational/Personal Brand)
Knowledge Area | Linear Transformations / Computer Graphics
Software or Tool | Graphics Engine
Main Book | “Computer Graphics from Scratch” by Gabriel Gambetta

1. Learning Objectives

By completing this project, you will:

  1. Translate math definitions into deterministic implementation steps.
  2. Build validation checks that make correctness observable.
  3. Diagnose numerical, logical, and data-shape failures early.
  4. Explain tradeoffs in interviews using evidence from your own build.

2. All Theory Needed (Per-Concept Breakdown)

This project applies the following theory clusters:

  • Symbolic-to-numeric translation (expressions, data shapes, invariants)
  • Stability constraints (precision, scaling, stopping criteria)
  • Optimization or inference logic (depending on project objective)
  • Evaluation discipline (error analysis, test coverage, reproducibility)

Concept A: Mathematical Representation Discipline

Fundamentals A math expression is not executable until you define representation, ordering, and domain constraints. The same equation can be represented as a token stream, tree, matrix pipeline, or probability graph. Choosing representation determines what bugs you can catch early.

Deep Dive into the concept Most project failures begin before algorithm selection: they start with ambiguous representation. If your parser cannot distinguish unary minus from subtraction, your calculator fails. If your matrix dimensions are implicit rather than validated, your linear algebra pipeline fails silently. If your probabilistic assumptions (independence, stationarity, or class priors) are not explicit, your inference can look accurate on one split and collapse on another. The core implementation move is to treat representation as a contract. Define each object with shape, domain, and semantic intent. Then enforce invariants at boundaries: input parser, preprocessing, training loop, evaluation stage. This makes debugging local instead of global.

How this fits this project You will encode each operation with explicit contracts and invariant checks.

Definitions & key terms

  • Invariant: Property that must hold before and after each operation.
  • Shape contract: Expected dimensional structure of vectors/matrices/tensors.
  • Domain constraint: Allowed value range (for example log input > 0).

Mental model diagram

User Input -> Representation Layer -> Validated Operation -> Observable Output
              (tokens/shapes)        (invariants pass)       (tests/plots/logs)

How it works

  1. Parse/ingest data into typed structures.
  2. Validate shape/domain invariants.
  3. Execute operation.
  4. Compare observed output with expected behavior.
  5. Record failure signature if mismatch appears.
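The steps above can be sketched as a small guard layer in Python. This is a minimal sketch, not the project's actual API; names like `check_shape` and `check_domain` are illustrative:

```python
import math

def check_shape(matrix, rows, cols):
    # Shape contract: matrix must be rows x cols.
    assert len(matrix) == rows and all(len(r) == cols for r in matrix), \
        f"shape contract violated: expected {rows}x{cols}"

def check_domain(value, predicate, label):
    # Domain constraint: reject out-of-range input before the operation runs.
    if not predicate(value):
        raise ValueError(f"domain violation: {label}")

def safe_log(x):
    # Invariant checked at the boundary, so the failure stays local and named.
    check_domain(x, lambda v: v > 0, "log input must be > 0")
    return math.log(x)

check_shape([[1, 0], [0, 1]], 2, 2)   # passes silently
print(safe_log(math.e))               # 1.0
```

Because each check raises with a named diagnostic, a mismatch in step 5 already carries its own failure signature.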

Minimal concrete example

PSEUDOCODE
read expression
tokenize with precedence rules
if token sequence invalid -> return syntax error
evaluate tree
if domain violation -> return bounded diagnostic
print value and confidence check

Common misconceptions

  • “If it runs once, representation is correct.” -> false.
  • “Type checks are enough without shape checks.” -> false.

Check-your-understanding questions

  1. Which invariant catches division-by-zero earliest?
  2. Why does shape validation belong at boundaries rather than only in core logic?
  3. Predict failure if tokenization ignores unary minus.

Check-your-understanding answers

  1. Domain check on denominator before operation execution.
  2. Boundary validation keeps errors local and diagnostic.
  3. Expressions like -2^2 get misinterpreted and produce wrong precedence behavior.
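Python's own grammar demonstrates answer 3: exponentiation binds tighter than unary minus, so a tokenizer that folds the minus into the literal gets the wrong value:

```python
# Standard precedence parses -2^2 as -(2^2) = -4, not (-2)^2 = 4.
# Python's ** operator follows the same rule.
print(-2**2)    # -4: unary minus is applied after exponentiation
print((-2)**2)  # 4: parentheses force the negation first
```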

Real-world applications Feature preprocessing, model-serving input validation, and experiment-tracking schema enforcement.

Where you’ll apply it This project and every downstream project in the sprint.

References

  • CSAPP (Bryant & O’Hallaron), floating-point chapter
  • Math for Programmers (Paul Orland), representation-oriented chapters

Key insight Correct representation reduces the complexity of every later decision.

Summary Stable ML math implementations start with explicit contracts, not implicit assumptions.

Homework/Exercises

  1. Write five invariants for your project.
  2. Build a failing test input for each invariant.

Solutions

  1. Include at least one shape, one domain, one convergence, one reproducibility, and one output-range invariant.
  2. Each failing input should trigger exactly one diagnostic to keep root-cause analysis clean.

3. Build Blueprint

  1. Scope the smallest end-to-end slice that produces visible output.
  2. Add deterministic tests and edge-case probes.
  3. Layer complexity only after baseline behavior is stable.
  4. Add metrics logging before optimization.
  5. Run failure drills: perturb inputs, scale values, and check stability.

4. Real-World Outcome (Target)

[Window showing a blue square at origin]

> rotate 45
[Square rotates 45° counterclockwise, transformation matrix shown:
 cos(45°)  -sin(45°)     0.707  -0.707
 sin(45°)   cos(45°)  =  0.707   0.707 ]

> scale 2 0.5
[Square stretches horizontally, squashes vertically]
[Matrix: [[2, 0], [0, 0.5]]]

> shear_x 0.5
[Square becomes parallelogram]

> reset
> compose rotate(30) scale(1.5, 1.5) translate(100, 50)
[Shows combined transformation: rotate, then scale, then translate]

[Final matrix displayed]

Implementation Hints: Rotation matrix for angle θ:

R = [[cos(θ), -sin(θ)],
     [sin(θ),  cos(θ)]]

To transform a point: new_point = matrix @ old_point (matrix-vector multiplication).

For composition: if you want “first A, then B”, compute B @ A (right-to-left). This is why matrix order matters!
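A sketch of why order matters, using plain lists instead of a matrix library:

```python
def matmul(A, B):
    # 2x2 matrix product: (A @ B)[i][j] = sum_k A[i][k] * B[k][j]
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R = [[0, -1], [1, 0]]   # rotate 90 degrees counterclockwise
S = [[2, 0], [0, 1]]    # scale x by 2

# "First rotate, then scale" is S @ R; the other order is a different map.
print(matmul(S, R))  # [[0, -2], [1, 0]]
print(matmul(R, S))  # [[0, -1], [2, 0]]
```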

For 3D, add a z-coordinate and use 3x3 matrices. For translations, use 3x3 (2D) or 4x4 (3D) homogeneous coordinates.
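A minimal homogeneous-coordinates sketch for 2D translation, assuming points are stored as [x, y, 1]:

```python
def translation_matrix(tx, ty):
    # 3x3 homogeneous matrix: the last column carries the offset.
    return [[1, 0, tx],
            [0, 1, ty],
            [0, 0, 1]]

def apply(matrix, point):
    # point is [x, y, 1]; the result stays in homogeneous form.
    return [sum(matrix[i][k] * point[k] for k in range(3)) for i in range(3)]

print(apply(translation_matrix(100, 50), [1, 1, 1]))  # [101, 51, 1]
```

The trailing 1 is what makes translation expressible as a matrix product: the offset column gets multiplied by it and added in.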

Learning milestones:

  1. Rotation and scaling work visually → You understand matrices as spatial transformations
  2. Composition order affects result → You understand matrix multiplication deeply
  3. You can predict transformation outcome from matrix → You’ve internalized linear transformations

5. Core Design Notes from Main Guide

Core Question

“What does it mean for a matrix to ‘transform space’?”

When you apply a 2x2 matrix to every point in a plane, something remarkable happens: the entire plane stretches, rotates, shears, or flips. Lines stay lines. Parallel lines stay parallel (or become the same line). The origin stays fixed. This is the geometric essence of linear algebra. In machine learning, every layer of a neural network applies a matrix transformation to its input, followed by a nonlinear activation. The matrix learns to stretch and rotate the data into a form where the next layer can better separate classes. By building a transformation visualizer, you develop the visual intuition for what weight matrices actually DO to data—not as abstract numbers, but as geometric operations on space.

Concepts You Must Understand First

Stop and research these before coding:

  1. The Rotation Matrix
    • Why is the 2D rotation matrix [[cos(t), -sin(t)], [sin(t), cos(t)]]?
    • Where do the cos and sin terms come from geometrically?
    • Why is the negative sign in the top-right, not somewhere else?
    • Book Reference: “Computer Graphics from Scratch” Chapter 11 - Gabriel Gambetta
  2. Scaling and Shear Matrices
    • What does [[sx, 0], [0, sy]] do to a shape?
    • What does [[1, k], [0, 1]] do (shear)?
    • How does a negative scale factor cause reflection?
    • Book Reference: “3D Math Primer for Graphics” Chapter 4 - Dunn & Parberry
  3. Matrix Composition and Order
    • Why does “first rotate, then scale” differ from “first scale, then rotate”?
    • If transformations are T1, T2, T3 applied in that order, what’s the combined matrix?
    • What does it mean for matrix multiplication to be non-commutative?
    • Book Reference: “Math for Programmers” Chapter 4 - Paul Orland
  4. Homogeneous Coordinates
    • Why can’t a 2x2 matrix represent translation?
    • How do homogeneous coordinates [x, y, 1] solve this problem?
    • What does a 3x3 matrix in homogeneous coordinates represent?
    • Book Reference: “Computer Graphics: Principles and Practice” Chapter 7 - Hughes et al.
  5. The Connection to Neural Networks
    • How is a neural network layer like a linear transformation?
    • What role do weight matrices play in “reshaping” data?
    • Why do we need nonlinear activations after linear transformations?
    • Book Reference: “Deep Learning” Chapter 6 - Goodfellow et al.

Questions to Guide Your Design

Before implementing, think through these:

  1. How will you represent shapes to be transformed? As lists of points? As polygons?
  2. How will you animate transformations smoothly (interpolation)?
  3. How will you visualize the transformation matrix alongside the geometric result?
  4. How will you compose multiple transformations and show the combined effect?
  5. How will you extend from 2D to 3D? What changes?
  6. How will you implement homogeneous coordinates for translation?

Thinking Exercise

Before coding, work through these transformations by hand:

Start with the unit square: corners at (0,0), (1,0), (1,1), (0,1).

Transformation 1: Rotation by 90 degrees Matrix R = [[0, -1], [1, 0]]

  • (0,0) -> (0,0)
  • (1,0) -> (0,1)
  • (1,1) -> (-1,1)
  • (0,1) -> (-1,0) Result: Square rotated counterclockwise, now in quadrant II.

Transformation 2: Scale by 2 in x, 0.5 in y Matrix S = [[2, 0], [0, 0.5]]

  • (0,0) -> (0,0)
  • (1,0) -> (2,0)
  • (1,1) -> (2,0.5)
  • (0,1) -> (0,0.5) Result: Wide, flat rectangle.

Transformation 3: Shear with k=0.5 Matrix H = [[1, 0.5], [0, 1]]

  • (0,0) -> (0,0)
  • (1,0) -> (1,0)
  • (1,1) -> (1.5,1)
  • (0,1) -> (0.5,1) Result: Parallelogram leaning to the right.

Now compose: First rotate 45 degrees, then scale by 2 R_45 = [[0.707, -0.707], [0.707, 0.707]] S_2 = [[2, 0], [0, 2]] Combined = S_2 @ R_45 = [[1.414, -1.414], [1.414, 1.414]]

Verify that applying the combined matrix gives the same result as applying R_45 then S_2 separately.

Interview Questions

  1. “Derive the 2D rotation matrix.”
    • Expected: A point (x, y) at angle theta and radius r rotates to angle (theta + phi). Use cos(theta+phi) = cos(theta)cos(phi) - sin(theta)sin(phi) and similarly for sin.
  2. “Why is the order of matrix transformations important?”
    • Expected: Matrix multiplication is not commutative. Rotate-then-scale gives a different result than scale-then-rotate. Demonstrate with a specific example.
  3. “How do you represent translation using matrices?”
    • Expected: Use homogeneous coordinates: [[1,0,tx], [0,1,ty], [0,0,1]] applied to [x,y,1] gives [x+tx, y+ty, 1].
  4. “What is an orthogonal matrix and why are rotation matrices orthogonal?”
    • Expected: Orthogonal means A^T A = I. Rotation preserves lengths and angles, which is exactly what orthogonality guarantees.
  5. “How would you smoothly interpolate between two rotation matrices?”
    • Expected: Don’t interpolate matrix elements directly (causes distortion). Interpolate the angle, or use quaternions for 3D.
  6. “What happens when you apply a matrix with determinant zero?”
    • Expected: The transformation collapses space to a lower dimension. For 2D, points collapse to a line or point. Information is lost.
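The claims behind questions 4 and 6 can be checked numerically. A small sketch:

```python
import math

t = math.radians(30)
R = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

# Orthogonality: R^T R should be the identity matrix.
RtR = [[sum(R[k][i] * R[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
assert all(abs(RtR[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))

# Determinant zero collapses space: this matrix sends (x, y) to (x+y, x+y),
# so every image lies on the line y = x.
M = [[1, 1], [1, 1]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(det)  # 0
for x, y in [(1, 0), (0, 1), (3, -2)]:
    px, py = M[0][0] * x + M[0][1] * y, M[1][0] * x + M[1][1] * y
    assert px == py
```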

Hints in Layers (Treat as pseudocode guidance)

Hint 1: Start with point transformation Implement a function that takes a 2D point (x, y) and a 2x2 matrix, and returns the transformed point. Matrix-vector multiplication: [a,b;c,d] @ [x;y] = [ax+by, cx+dy].
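Hint 1 as a runnable sketch:

```python
def transform_point(matrix, point):
    # [a, b; c, d] @ [x; y] = [a*x + b*y, c*x + d*y]
    (a, b), (c, d) = matrix
    x, y = point
    return (a * x + b * y, c * x + d * y)

print(transform_point([[0, -1], [1, 0]], (1, 0)))  # (0, 1): 90-degree rotation
```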

Hint 2: Transform a list of points A shape is just a list of points. Transform each point, then draw lines between consecutive transformed points.

Hint 3: Build transformation matrices from parameters Create functions: rotation_matrix(angle), scale_matrix(sx, sy), shear_matrix(kx, ky). Compose them with matrix multiplication.
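The builder functions from Hint 3 might look like this (a sketch; the exact signatures are up to you):

```python
import math

def rotation_matrix(angle_degrees):
    t = math.radians(angle_degrees)
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def scale_matrix(sx, sy):
    return [[sx, 0],
            [0, sy]]

def shear_matrix(kx, ky):
    # kx shifts x in proportion to y; ky shifts y in proportion to x.
    return [[1, kx],
            [ky, 1]]

def compose(A, B):
    # "First B, then A": B is applied first, so it sits on the right.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# First rotate 90 degrees, then scale x by 2.
M = compose(scale_matrix(2, 1), rotation_matrix(90))
```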

Hint 4: Animate by interpolating parameters To animate a rotation from 0 to 90 degrees, loop over angles 0, 1, 2, …, 90 and redraw each frame. This creates smooth animation.

Hint 5: For 3D, add a z-coordinate and use 3x3 matrices Rotation around z-axis uses the same 2D rotation matrix in the top-left 2x2 block, with a 1 in the bottom-right. Rotation around x or y axes is similar.
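Rotation about the z-axis, with the 2D matrix embedded in the top-left block, could be sketched as:

```python
import math

def rotation_z(angle_degrees):
    t = math.radians(angle_degrees)
    # The 2D rotation occupies the top-left 2x2 block; z passes through unchanged.
    return [[math.cos(t), -math.sin(t), 0],
            [math.sin(t),  math.cos(t), 0],
            [0,            0,           1]]

def apply3(M, p):
    return tuple(sum(M[i][k] * p[k] for k in range(3)) for i in range(3))

# (1, 0, 5) rotates within the xy-plane; z stays 5.
print(apply3(rotation_z(90), (1, 0, 5)))
```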

Books That Will Help

Topic | Book | Chapter
2D/3D transformations | “Computer Graphics from Scratch” - Gabriel Gambetta | Chapter 11
Rotation matrices | “Math for Programmers” - Paul Orland | Chapter 4
Transformation composition | “3D Math Primer for Graphics” - Dunn & Parberry | Chapter 8
Homogeneous coordinates | “Computer Graphics: Principles and Practice” - Hughes et al. | Chapter 7
Linear transformations | “Linear Algebra Done Right” - Sheldon Axler | Chapter 3
Geometric intuition | “Essence of Linear Algebra” (video series) - 3Blue1Brown | —
Animation and interpolation | “The Nature of Code” - Daniel Shiffman | Chapter 1


6. Validation, Pitfalls, and Completion

Common Pitfalls and Debugging

Problem 1: “Outputs drift after a few iterations”

  • Why: Hidden numerical instability (unscaled features, aggressive step size, or repeated subtraction of nearly equal values).
  • Fix: Normalize inputs, reduce step size, and track relative error rather than only absolute error.
  • Quick test: Run the same task with two scales of input (for example x and 10x) and compare normalized error curves.

Problem 2: “Results are inconsistent across runs”

  • Why: Random seeds, data split randomness, or non-deterministic ordering are uncontrolled.
  • Fix: Set seeds, log configuration, and store split indices and hyperparameters with each run.
  • Quick test: Re-run three times with the same seed and confirm metrics remain inside a tight tolerance band.

Problem 3: “The project works on the demo case but fails on edge cases”

  • Why: Tests only cover happy-path inputs.
  • Fix: Add adversarial inputs (empty values, extreme ranges, near-singular matrices, rare classes).
  • Quick test: Build an edge-case test matrix and ensure every scenario reports expected behavior.
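For this project, one concrete edge-case probe is a near-singular matrix, which visibly collapses the shape. A sketch of the check (the 1e-9 threshold is an arbitrary choice, not a prescribed value):

```python
def det2(M):
    # Determinant of a 2x2 matrix.
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def classify(M, eps=1e-9):
    # A near-zero determinant means the transform flattens 2D space.
    if abs(det2(M)) < eps:
        return "degenerate: collapses shapes to a line or point"
    return "invertible"

print(classify([[1, 2], [2, 4]]))                    # degenerate case
print(classify([[0.707, -0.707], [0.707, 0.707]]))   # rotation: invertible
```

Feeding the degenerate case through the visualizer should still report expected behavior rather than drawing garbage silently.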

Definition of Done

  • Core functionality works on reference inputs
  • Edge cases are tested and documented
  • Results are reproducible (seeded and versioned configuration)
  • Performance or convergence behavior is measured and explained
  • A short retrospective explains what failed first and how you fixed it

7. Extension Ideas

  1. Add a stress-test mode with adversarial inputs.
  2. Add a short benchmark report (runtime + memory + error trend).
  3. Add a reproducibility bundle (seed, config, and fixed test corpus).

8. Why This Project Matters

This makes abstract matrix operations tangible. When you see that a 2x2 matrix rotates points around the origin, you understand matrices as functions that transform space. This geometric intuition is critical for understanding PCA, SVD, and neural network weight matrices.

This project is valuable because it creates observable evidence of mathematical reasoning under real implementation constraints.