Learn Computer Graphics From Scratch in C
Goal: Deeply understand computer graphics—from plotting individual pixels to building complete 3D renderers, ray tracers, and game engines. Master the mathematics and algorithms that power everything from video games to CGI movies.
Why Computer Graphics in C Matters
Every image you see on a screen—video games, movies, user interfaces, data visualizations—is the result of computer graphics algorithms. Most developers treat graphics as a black box (call drawImage() and hope for the best). But understanding what happens at the pixel level gives you:
- Deep appreciation for how GPUs work before touching OpenGL/Vulkan
- Mathematical intuition for linear algebra, trigonometry, and geometry
- Systems-level thinking about memory, performance, and optimization
- Foundation for game development, simulation, and visualization
- Ability to debug graphics issues that mystify other developers
After completing these projects, you will:
- Understand every pixel drawn to your screen
- Know the difference between rasterization and ray tracing
- Be able to implement lighting, shadows, and textures from scratch
- Build complete 3D renderers without any graphics API
- Transition smoothly to OpenGL/Vulkan with deep understanding
Core Concept Analysis
The Graphics Pipeline (What We’re Learning)
3D World Coordinates
↓
Model Transform (place objects in world)
↓
View Transform (camera position)
↓
Projection Transform (3D → 2D)
↓
Clipping (remove off-screen geometry)
↓
Rasterization (triangles → pixels)
↓
Fragment Processing (color, texture, lighting)
↓
Framebuffer (final image in memory)
↓
Display
Fundamental Concepts
- Framebuffer: A chunk of memory representing pixels. Each pixel = RGB values. You write to memory, display shows it.
- Rasterization: Converting geometric shapes (lines, triangles) into pixels. The dominant technique in real-time graphics.
- Ray Tracing: Simulating light rays from camera through each pixel into the scene. More realistic but slower.
- Linear Algebra Essentials:
- Vectors: Direction and magnitude (position, velocity, normals)
- Matrices: Transformations (rotation, scaling, translation, projection)
- Dot product: Lighting calculations, angle between vectors
- Cross product: Surface normals, orientation
- The Rendering Equation: All lighting boils down to:
L_o = L_e + ∫ f_r * L_i * cos(θ) dω
- How much light leaves a point = emitted light + reflected light from all directions
- Coordinate Systems:
- Object space: Vertices relative to object center
- World space: Objects placed in the scene
- Camera/View space: Scene from camera’s perspective
- Clip space: After projection, ready for clipping
- Screen space: Final 2D pixel coordinates
- Depth Buffer (Z-buffer): Stores depth of each pixel to handle occlusion (what’s in front of what).
Project List
Projects are ordered from foundational pixel manipulation to advanced rendering techniques.
Project 1: Framebuffer and Pixel Plotting
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Zig
- Coolness Level: Level 2: Practical but Forgettable
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 1: Beginner
- Knowledge Area: Graphics Fundamentals / Memory Layout
- Software or Tool: SDL2 (for window/display only)
- Main Book: Computer Graphics from Scratch by Gabriel Gambetta
What you’ll build: A minimal graphics library that creates a window, allocates a pixel buffer in memory, and lets you set individual pixels by their (x, y) coordinates and RGB color—then displays the result.
Why it teaches graphics: Before you can draw anything, you need to understand that an image is just an array of numbers in memory. This project strips away all abstractions—you’re directly manipulating bytes that become visible light on your screen.
Core challenges you’ll face:
- Understanding pixel memory layout → maps to how framebuffers are organized (row-major, stride, pitch)
- Color representation → maps to RGB, RGBA, bit depth, color channels
- Double buffering → maps to preventing screen tearing and flicker
- Coordinate systems → maps to screen coordinates vs. mathematical coordinates
Key Concepts:
- Framebuffer Organization: Computer Graphics from Scratch Chapter 1 - Gabriel Gambetta
- SDL2 Basics: SDL2 official documentation - libsdl.org
- Memory Layout in C: C Programming: A Modern Approach Chapter 17 - K.N. King
- Color Models: Computer Graphics: Principles and Practice Chapter 8 - Foley et al.
Difficulty: Beginner Time estimate: Weekend Prerequisites: Basic C (pointers, arrays, memory allocation), understanding of RGB color
Real world outcome:
$ ./framebuffer
[Window opens showing a gradient from black to white across the screen]
[A red pixel appears at (100, 100)]
[A green rectangle fills from (200, 200) to (300, 300)]
[Press any key to cycle through color patterns]
You’ll see your pixels appear on screen in real-time as you modify memory values.
Implementation Hints:
A framebuffer is conceptually a 2D array, but in memory it’s linear:
pixel_index = y * width + x
For an 800x600 window with 32-bit color (RGBA):
buffer_size = 800 * 600 * 4 = 1,920,000 bytes
To set a pixel at (x, y) to color (r, g, b):
offset = (y * width + x) * bytes_per_pixel
buffer[offset + 0] = r;
buffer[offset + 1] = g;
buffer[offset + 2] = b;
buffer[offset + 3] = 255; // alpha
Questions to answer as you build:
- Why do some systems use BGRA instead of RGBA?
- What happens if you write outside the buffer bounds?
- Why does `y * width` give you the correct row?
- What’s the difference between `pitch` and `width`?
Use SDL2 only for creating a window and copying your buffer to the screen—do NOT use any SDL drawing functions.
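A minimal sketch of that SDL2 setup, assuming the RGBA32 pixel format and an 800x600 buffer; all drawing happens by writing bytes into pixels yourself, and SDL only copies the buffer to the window:
#include <SDL2/SDL.h>
#include <stdint.h>
#include <stdlib.h>

#define W 800
#define H 600

int main(void) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("framebuffer", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, W, H, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);
    SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGBA32,
                                         SDL_TEXTUREACCESS_STREAMING, W, H);
    uint8_t *pixels = malloc((size_t)W * H * 4);          /* your framebuffer */

    int running = 1;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = 0;

        /* Draw: fill with a horizontal black-to-white gradient by writing bytes. */
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                size_t i = ((size_t)y * W + x) * 4;
                uint8_t v = (uint8_t)(x * 255 / W);
                pixels[i + 0] = v;      /* R */
                pixels[i + 1] = v;      /* G */
                pixels[i + 2] = v;      /* B */
                pixels[i + 3] = 255;    /* A */
            }

        SDL_UpdateTexture(tex, NULL, pixels, W * 4);      /* copy buffer to the texture */
        SDL_RenderClear(ren);
        SDL_RenderCopy(ren, tex, NULL, NULL);
        SDL_RenderPresent(ren);                           /* show the frame */
    }
    free(pixels);
    SDL_Quit();
    return 0;
}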
Learning milestones:
- You can set individual pixels → You understand memory-to-display mapping
- You draw a gradient → You understand color interpolation
- No flickering during animation → You understand double buffering
- You can clear and redraw at 60fps → You understand the render loop
Project 2: 2D Line Drawing (Bresenham’s Algorithm)
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Go
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 1: Beginner
- Knowledge Area: Rasterization / Algorithm Design
- Software or Tool: Your framebuffer from Project 1
- Main Book: Computer Graphics: Principles and Practice by Foley, van Dam, Feiner, Hughes
What you’ll build: A line-drawing function that, given two endpoints (x0, y0) and (x1, y1), plots all the pixels that best approximate a straight line between them—using only integer arithmetic.
Why it teaches graphics: Bresenham’s algorithm is the foundation of rasterization. It elegantly solves the problem of representing continuous geometry (a mathematical line) with discrete pixels. Understanding this prepares you for triangle rasterization.
Core challenges you’ll face:
- Handling all line slopes → maps to octant handling, steep vs shallow lines
- Integer-only arithmetic → maps to avoiding floating-point for performance
- Endpoint accuracy → maps to subpixel precision and antialiasing concepts
- Drawing direction → maps to handling lines drawn right-to-left or bottom-to-top
Key Concepts:
- Bresenham’s Algorithm: Computer Graphics: Principles and Practice Chapter 3 - Foley et al.
- Rasterization Theory: Scratchapixel - Rasterization Stage
- Integer Arithmetic Optimization: Hacker’s Delight Chapter 2 - Henry Warren
Difficulty: Beginner Time estimate: 2-3 days Prerequisites: Project 1 (Framebuffer), basic algebra
Real world outcome:
$ ./line_drawer
[Window shows a star pattern made of lines radiating from center]
[Press SPACE to cycle: horizontal lines, vertical lines, diagonal lines, random lines]
[Each line draws correctly regardless of direction or slope]
Implementation Hints:
The key insight: at each step, you move one pixel in the “fast” direction (let’s say X). The question is whether to also step in the “slow” direction (Y).
Keep track of an “error” term. When the error exceeds a threshold, step in the slow direction and reset the error.
The beauty of Bresenham: this “error” can be tracked with integer addition/subtraction only—no division, no floating point.
For a line from (0,0) to (5,2):
Step 0: (0,0) - plot
Step 1: (1,0) - move X, error accumulates
Step 2: (2,1) - move X, error overflows, move Y too
Step 3: (3,1) - move X, error accumulates
Step 4: (4,2) - move X, error overflows, move Y too
Step 5: (5,2) - plot, done
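Here is a compact sketch of the integer-only algorithm covering all eight octants; plot_pixel(x, y, color) is assumed to be your pixel setter from Project 1:
#include <stdint.h>
#include <stdlib.h>

void plot_pixel(int x, int y, uint32_t color);    /* assumed from Project 1 */

void draw_line(int x0, int y0, int x1, int y1, uint32_t color) {
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;                            /* combined error term */
    for (;;) {
        plot_pixel(x0, y0, color);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }    /* step in x */
        if (e2 <= dx) { err += dx; y0 += sy; }    /* step in y when the error overflows */
    }
}
Only additions, subtractions, and comparisons appear in the loop, which is exactly why the algorithm was so attractive on early hardware.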
Questions to explore:
- Why is the algorithm called “midpoint” in some texts?
- How would you extend this for subpixel endpoints?
- What’s the connection to DDA (Digital Differential Analyzer)?
- How do GPUs implement this at hardware level?
Learning milestones:
- Lines work for slope 0-1 → You understand the basic algorithm
- Lines work for all 8 octants → You understand symmetry and generalization
- No gaps in steep lines → You correctly swap X and Y roles
- Lines at 10,000 per second → You understand why integer math matters
Project 3: 2D Shape Rasterization (Circles, Triangles, Polygons)
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Zig
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 2: Intermediate
- Knowledge Area: Rasterization / Computational Geometry
- Software or Tool: Your line drawer from Project 2
- Main Book: Computer Graphics from Scratch by Gabriel Gambetta
What you’ll build: Functions to rasterize filled circles (Midpoint Circle Algorithm), filled triangles (scanline or barycentric), and arbitrary convex polygons—all plotted pixel by pixel.
Why it teaches graphics: Triangles are the fundamental primitive of 3D graphics. Every 3D model is made of triangles. Understanding how to fill a triangle efficiently is essential—it’s what GPUs do billions of times per second.
Core challenges you’ll face:
- Circle symmetry exploitation → maps to 8-way symmetry optimization
- Triangle filling strategies → maps to scanline vs barycentric approaches
- Edge handling (which pixels belong to the triangle?) → maps to fill conventions, top-left rule
- Convex polygon decomposition → maps to triangulation algorithms
Key Concepts:
- Midpoint Circle Algorithm: Computer Graphics: Principles and Practice Chapter 3 - Foley et al.
- Triangle Rasterization: Computer Graphics from Scratch Chapter 7 - Gabriel Gambetta
- Barycentric Coordinates: Scratchapixel - Rasterization: Practical Implementation
- Fill Conventions: Real-Time Rendering Chapter 23 - Akenine-Möller et al.
Difficulty: Intermediate Time estimate: 1 week Prerequisites: Project 2 (Line Drawing), basic geometry
Real world outcome:
$ ./shape_rasterizer
[Window shows: filled red circle, filled green triangle, filled blue hexagon]
[Shapes can overlap, demonstrating painter's algorithm (later drawn = on top)]
[Press keys to animate: spinning triangle, bouncing circle, morphing polygon]
[No gaps at edges, no missing pixels, clean fills]
Implementation Hints:
For circles: The Midpoint Circle Algorithm is similar to Bresenham for lines. Due to 8-way symmetry, you only compute 1/8 of the circle and mirror it.
For triangles (scanline approach):
- Sort vertices by Y coordinate
- Walk down from top vertex, tracking left and right edges
- For each scanline (row of pixels), draw a horizontal line between edges
For triangles (barycentric approach):
- Find the bounding box of the triangle
- For each pixel in the bounding box, compute barycentric coordinates
- If all three coordinates are positive, the pixel is inside
Barycentric coordinates (λ1, λ2, λ3) satisfy:
P = λ1*A + λ2*B + λ3*C where λ1 + λ2 + λ3 = 1
If any λ is negative, the point is outside the triangle.
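A bounding-box sketch using integer edge functions (equivalent to checking the signs of the barycentric weights); plot_pixel and <stdint.h> are assumed as in the earlier sketches, and no top-left fill rule is applied yet, so shared edges may draw twice:
static int edge(int ax, int ay, int bx, int by, int px, int py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);   /* 2D cross product */
}

static int imin(int a, int b) { return a < b ? a : b; }
static int imax(int a, int b) { return a > b ? a : b; }

void fill_triangle(int ax, int ay, int bx, int by, int cx, int cy, uint32_t color) {
    int area = edge(ax, ay, bx, by, cx, cy);
    if (area == 0) return;                                   /* degenerate triangle */
    int minx = imin(ax, imin(bx, cx)), maxx = imax(ax, imax(bx, cx));
    int miny = imin(ay, imin(by, cy)), maxy = imax(ay, imax(by, cy));
    for (int y = miny; y <= maxy; y++)
        for (int x = minx; x <= maxx; x++) {
            int w0 = edge(bx, by, cx, cy, x, y);
            int w1 = edge(cx, cy, ax, ay, x, y);
            int w2 = edge(ax, ay, bx, by, x, y);
            /* Inside if all edge functions agree with the triangle's winding. */
            if ((area > 0 && w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                (area < 0 && w0 <= 0 && w1 <= 0 && w2 <= 0))
                plot_pixel(x, y, color);
        }
}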
Questions to explore:
- Why do GPUs prefer the bounding-box/barycentric approach?
- What happens when two triangles share an edge? (Hint: fill convention)
- How would you draw a filled rectangle efficiently?
- What’s the complexity of drawing a filled triangle vs. its area?
Learning milestones:
- Circles look smooth → You understand the midpoint algorithm
- Triangles fill completely with no gaps → You understand rasterization rules
- Shared edges don’t double-draw pixels → You understand fill conventions
- You can decompose a polygon into triangles → You understand triangulation
Project 4: 2D Transformations and Animation
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Python
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 2: Intermediate
- Knowledge Area: Linear Algebra / Transformations
- Software or Tool: Your shape rasterizer from Project 3
- Main Book: 3D Math Primer for Graphics and Game Development by Fletcher Dunn & Ian Parberry
What you’ll build: A 2D graphics system that can translate, rotate, and scale shapes using matrix transformations, with smooth animations showing objects moving, spinning, and growing.
Why it teaches graphics: Matrix transformations are the language of graphics. Every 3D engine uses 4x4 matrices for positioning objects. Understanding 2D transformations (3x3 matrices) first makes 3D intuitive.
Core challenges you’ll face:
- Implementing matrix multiplication → maps to linear algebra fundamentals
- Understanding transformation order → maps to matrix composition (TRS vs SRT)
- Rotation around arbitrary points → maps to translate-rotate-translate pattern
- Smooth animation → maps to interpolation, frame-rate independence
Key Concepts:
- Homogeneous Coordinates: 3D Math Primer Chapter 6 - Dunn & Parberry
- Transformation Matrices: Computer Graphics from Scratch Chapter 9 - Gabriel Gambetta
- Matrix Composition Order: Real-Time Rendering Chapter 4 - Akenine-Möller et al.
- Animation Interpolation: Game Programming Patterns - Robert Nystrom
Difficulty: Intermediate Time estimate: 1 week Prerequisites: Project 3 (Shape Rasterization), basic linear algebra concepts
Real world outcome:
$ ./transforms
[A square rotates smoothly around its center]
[Press T: square translates across the screen]
[Press S: square scales up and down (breathing effect)]
[Press R: square rotates around the mouse cursor position]
[Multiple shapes animate independently with combined transformations]
Implementation Hints:
Use 3x3 matrices for 2D (homogeneous coordinates):
| a b tx | | x | | a*x + b*y + tx |
| c d ty | × | y | = | c*x + d*y + ty |
| 0 0 1 | | 1 | | 1 |
Translation matrix:
| 1 0 tx |
| 0 1 ty |
| 0 0 1 |
Rotation matrix (θ radians):
| cos(θ) -sin(θ) 0 |
| sin(θ) cos(θ) 0 |
| 0 0 1 |
Scale matrix:
| sx 0 0 |
| 0 sy 0 |
| 0 0 1 |
To rotate around point (px, py):
- Translate so (px, py) is at origin
- Rotate
- Translate back
This is: T(px,py) × R(θ) × T(-px,-py)
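A sketch of those 3x3 pieces and that exact composition in C, using the column-vector convention from the multiplication diagram above:
#include <math.h>

typedef struct { float m[3][3]; } Mat3;

Mat3 mat3_mul(Mat3 a, Mat3 b) {                   /* (a*b)*v applies b first, then a */
    Mat3 r = {0};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Mat3 mat3_translate(float tx, float ty) {
    Mat3 t = {{{1, 0, tx}, {0, 1, ty}, {0, 0, 1}}};
    return t;
}

Mat3 mat3_rotate(float theta) {
    float c = cosf(theta), s = sinf(theta);
    Mat3 r = {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}};
    return r;
}

/* Rotation about an arbitrary point: T(px,py) * R(theta) * T(-px,-py) */
Mat3 rotate_about(float theta, float px, float py) {
    return mat3_mul(mat3_translate(px, py),
                    mat3_mul(mat3_rotate(theta), mat3_translate(-px, -py)));
}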
Questions to explore:
- Why does transformation order matter? What’s the visual difference?
- How do you invert a transformation matrix?
- What happens if scale factors are negative?
- How would you implement “look at” (rotate to face a point)?
Learning milestones:
- Matrix multiplication works correctly → You understand the math
- Rotation around center works → You understand the translate-rotate-translate pattern
- Combined TRS works smoothly → You understand matrix composition
- Animation is frame-rate independent → You understand delta time
Project 5: Software 3D Renderer - Wireframe
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Zig
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: 3D Graphics / Projection
- Software or Tool: Your 2D graphics system from Projects 1-4
- Main Book: Computer Graphics from Scratch by Gabriel Gambetta
What you’ll build: A 3D wireframe renderer that loads a mesh (cube, then OBJ files), applies 3D transformations (rotation, translation), projects it to 2D using perspective projection, and draws the edges.
Why it teaches graphics: This is your first step into 3D. You’ll understand how 3D coordinates become 2D screen coordinates—the fundamental operation of all 3D graphics. Wireframe lets you focus on the math without worrying about filling and lighting.
Core challenges you’ll face:
- 3D to 2D projection → maps to perspective division, the “divide by Z” magic
- Camera positioning → maps to view matrix, eye coordinates
- OBJ file parsing → maps to understanding mesh data structures
- Clipping near plane → maps to why objects “pop” when too close
Key Concepts:
- Perspective Projection: Computer Graphics from Scratch Chapter 9 - Gabriel Gambetta
- View Matrix: 3D Math Primer Chapter 7 - Dunn & Parberry
- OBJ File Format: Paul Bourke’s website (paulbourke.net/dataformats/obj/)
- Coordinate System Handedness: Real-Time Rendering Chapter 4 - Akenine-Möller et al.
Resources for key challenges:
- TinyRenderer by Dmitry V. Sokolov - Outstanding step-by-step guide
- Scratchapixel - 3D Rendering for Beginners - Deep mathematical explanations
Difficulty: Advanced Time estimate: 1-2 weeks Prerequisites: Project 4 (2D Transformations), comfortable with 3D geometry concepts
Real world outcome:
$ ./wireframe_3d
[A spinning wireframe cube appears on screen]
[Press W/S: cube moves toward/away from camera]
[Press arrow keys: cube rotates on different axes]
[Press L: load an OBJ file (teapot, bunny)]
[Complex mesh appears as wireframe, rotatable in real-time]
Implementation Hints:
The core of 3D graphics is this: perspective projection.
A point (X, Y, Z) in 3D becomes (x, y) on screen:
x = (X / Z) * focal_length + screen_width/2
y = (Y / Z) * focal_length + screen_height/2
That division by Z is why things look smaller when farther away!
Using 4x4 matrices (homogeneous 3D):
| 1 0 0 0 | | X | | X |
| 0 1 0 0 | × | Y | = | Y | then divide by W
| 0 0 1 0 | | Z | | Z |
| 0 0 1/d 0 | | 1 | | Z/d |
Final: (X*d/Z, Y*d/Z)
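The same projection as a tiny C function; a sketch that assumes points are already in camera space with the camera at the origin looking down +Z:
typedef struct { float x, y, z; } Vec3;

/* Returns 0 if the point is at or behind the camera (caller must clip those). */
int project_point(Vec3 p, float focal_length, int screen_w, int screen_h,
                  int *out_x, int *out_y) {
    if (p.z <= 0.0001f) return 0;
    *out_x = (int)((p.x / p.z) * focal_length + screen_w * 0.5f);
    *out_y = (int)((p.y / p.z) * focal_length + screen_h * 0.5f);
    return 1;
}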
A mesh is: array of vertices + array of edges (pairs of vertex indices).
OBJ format basics:
v 1.0 2.0 3.0 # vertex at (1,2,3)
v 4.0 5.0 6.0 # another vertex
f 1 2 3 # face using vertices 1, 2, 3 (1-indexed)
Questions to explore:
- Why is it called “perspective division”?
- What’s the difference between orthographic and perspective projection?
- How do you handle points behind the camera (negative Z)?
- What determines the field of view?
Learning milestones:
- Cube renders correctly → You understand basic projection
- Rotation looks correct (no distortion) → You understand 3D matrix transforms
- OBJ files load and render → You understand mesh data structures
- Objects near camera don’t glitch → You understand near-plane clipping
Project 6: Software 3D Renderer - Filled Triangles with Z-Buffer
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Zig
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: 3D Graphics / Rasterization
- Software or Tool: Your wireframe renderer from Project 5
- Main Book: Computer Graphics from Scratch by Gabriel Gambetta
What you’ll build: Extend your wireframe renderer to fill triangles with solid colors and implement a depth buffer (Z-buffer) so closer triangles correctly occlude farther ones.
Why it teaches graphics: The Z-buffer is one of the most important inventions in computer graphics. It elegantly solves the visibility problem (what’s in front?) with a simple per-pixel depth comparison. This is exactly what GPUs do.
Core challenges you’ll face:
- Z-interpolation across triangles → maps to perspective-correct interpolation
- Z-buffer precision issues → maps to depth fighting, near/far plane ratio
- Performance with many triangles → maps to why GPUs exist
- Back-face culling → maps to winding order, surface normals
Key Concepts:
- Z-Buffer Algorithm: Computer Graphics from Scratch Chapter 12 - Gabriel Gambetta
- Depth Buffer Precision: Real-Time Rendering Chapter 23 - Akenine-Möller et al.
- Back-Face Culling: Scratchapixel - Rasterization
- Perspective-Correct Interpolation: Computer Graphics: Principles and Practice Chapter 8 - Foley et al.
Difficulty: Advanced Time estimate: 1-2 weeks Prerequisites: Project 5 (Wireframe 3D), Project 3 (Triangle Rasterization)
Real world outcome:
$ ./zbuffer_3d
[A solid-colored rotating cube - faces correctly overlap]
[Load the Utah Teapot: you see a solid 3D teapot, not wireframe]
[Move camera through a scene with multiple objects]
[Objects correctly occlude each other at all angles]
[Press D to visualize the depth buffer as a grayscale image]
Implementation Hints:
The Z-buffer is a 2D array the same size as your framebuffer, storing the depth of each pixel:
float z_buffer[WIDTH * HEIGHT]; // Initialize to INFINITY (or FAR_PLANE)
When drawing a pixel:
if (z < z_buffer[y * WIDTH + x]) {
z_buffer[y * WIDTH + x] = z;
framebuffer[y * WIDTH + x] = color;
}
// Otherwise, don't draw - something closer is already there
For triangle rasterization with Z:
- Compute Z at each vertex after projection
- Interpolate Z across the triangle (using barycentric coordinates)
- At each pixel, compare interpolated Z with Z-buffer
Important: You must interpolate 1/Z, not Z directly, for perspective correctness!
Back-face culling (optimization):
- Compute the normal of each triangle
- If the normal points away from the camera, skip the triangle
- Uses the cross product of two edges
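A sketch of that culling test, assuming the Vec3 type from the projection sketch, camera-space vertices with the camera at the origin, and counter-clockwise front faces (the sign flips with your winding order and handedness):
static Vec3 v_sub(Vec3 a, Vec3 b) { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 v_cross(Vec3 a, Vec3 b) {
    return (Vec3){ a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float v_dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Camera at the origin: the view vector toward the triangle is just vertex a. */
int is_back_facing(Vec3 a, Vec3 b, Vec3 c) {
    Vec3 n = v_cross(v_sub(b, a), v_sub(c, a));   /* face normal from two edges */
    return v_dot(n, a) >= 0.0f;                   /* normal points away from the camera */
}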
Questions to explore:
- Why interpolate 1/Z instead of Z?
- What happens with very far near/far plane ratios?
- How does back-face culling reduce work by ~50%?
- What’s “Z-fighting” and when does it occur?
Learning milestones:
- Simple scenes render correctly → You understand basic Z-buffering
- Complex meshes (1000+ triangles) work → You handle interpolation correctly
- No Z-fighting artifacts → You understand depth precision
- Back-face culling works → You understand surface orientation
Project 7: Flat Shading and Diffuse Lighting
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Zig
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: 3D Graphics / Lighting
- Software or Tool: Your Z-buffer renderer from Project 6
- Main Book: Fundamentals of Computer Graphics by Steve Marschner & Peter Shirley
What you’ll build: Add lighting to your renderer with flat shading—each triangle gets a single color based on its angle to a light source, implementing the Lambertian diffuse reflection model.
Why it teaches graphics: Lighting is what makes 3D look 3D. Without it, you just have colored shapes. The dot product between surface normal and light direction is the foundation of all lighting—from games to Hollywood CGI.
Core challenges you’ll face:
- Computing surface normals → maps to cross product of triangle edges
- The dot product for lighting → maps to N·L gives cosine of angle
- Light direction vectors → maps to point lights vs directional lights
- Color clamping → maps to HDR vs LDR, tone mapping concepts
Key Concepts:
- Lambertian Reflection: Fundamentals of Computer Graphics Chapter 10 - Marschner & Shirley
- Surface Normals: 3D Math Primer Chapter 9 - Dunn & Parberry
- Dot Product Geometry: Computer Graphics from Scratch Chapter 13 - Gabriel Gambetta
- Light Types: Real-Time Rendering Chapter 5 - Akenine-Möller et al.
Difficulty: Advanced Time estimate: 1 week Prerequisites: Project 6 (Z-Buffer), understanding of vectors and dot product
Real world outcome:
$ ./flat_shading
[A 3D sphere made of triangles, clearly lit from one side]
[Dark side faces away from light, bright side faces toward it]
[Rotate the object: lighting changes correctly based on orientation]
[Move the light: shading shifts realistically]
[Multiple objects cast and receive light independently]
Implementation Hints:
Surface normal from triangle vertices A, B, C:
edge1 = B - A
edge2 = C - A
normal = normalize(cross(edge1, edge2))
Lambertian diffuse lighting:
light_dir = normalize(light_position - surface_point)
intensity = max(0, dot(normal, light_dir))
final_color = base_color * intensity
The max(0, ...) prevents negative light (surfaces facing away).
For flat shading, compute one normal per triangle (from its vertices) and use it for the entire triangle.
The dot product N·L:
- Returns 1 when surface faces directly at light
- Returns 0 when surface is perpendicular to light
- Returns negative when surface faces away
Questions to explore:
- Why do we normalize vectors before the dot product?
- What’s the visual difference between point and directional lights?
- How would you add ambient light (so shadows aren’t pure black)?
- Why does flat shading look “faceted” on curved surfaces?
Learning milestones:
- Lighting visibly affects triangles → You understand the basic model
- Spheres look 3D, not flat → Normals are computed correctly
- Moving light changes shading correctly → Light direction is normalized
- No “inside-out” lighting bugs → Normals point the right way (consistent winding)
Project 8: Gouraud and Phong Shading
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Zig
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: 3D Graphics / Advanced Lighting
- Software or Tool: Your flat-shaded renderer from Project 7
- Main Book: Fundamentals of Computer Graphics by Steve Marschner & Peter Shirley
What you’ll build: Implement Gouraud shading (interpolate colors across triangles) and Phong shading (interpolate normals, compute lighting per-pixel), adding specular highlights.
Why it teaches graphics: These are the shading techniques used in real-time graphics before modern GPUs. Understanding the trade-offs (compute at vertex vs. pixel) prepares you for GPU shader programming.
Core challenges you’ll face:
- Per-vertex vs per-pixel computation → maps to vertex shaders vs fragment shaders
- Interpolating normals correctly → maps to renormalization after interpolation
- Specular highlights (Phong reflection) → maps to view-dependent lighting
- Performance implications → maps to why vertex shading was preferred historically
Key Concepts:
- Gouraud Shading: Computer Graphics: Principles and Practice Chapter 16 - Foley et al.
- Phong Reflection Model: Fundamentals of Computer Graphics Chapter 10 - Marschner & Shirley
- Specular Highlights: Real-Time Rendering Chapter 5 - Akenine-Möller et al.
- Normal Interpolation: Scratchapixel - Shading
Difficulty: Advanced Time estimate: 1-2 weeks Prerequisites: Project 7 (Flat Shading), comfortable with interpolation
Real world outcome:
$ ./smooth_shading
[Toggle between Flat/Gouraud/Phong with F/G/P keys]
[Flat: visible triangle facets on sphere]
[Gouraud: smooth shading but specular highlights look wrong]
[Phong: smooth shading with crisp specular highlights]
[A shiny teapot with visible specular reflection that moves as you orbit]
Implementation Hints:
Gouraud Shading:
- Compute vertex normals (average of adjacent face normals)
- Compute lighting at each vertex (including specular)
- Interpolate the resulting colors across the triangle
Phong Shading:
- Compute vertex normals
- Interpolate normals across the triangle
- Renormalize the interpolated normal at each pixel
- Compute lighting per-pixel
Phong specular component:
view_dir = normalize(camera_pos - surface_point)
reflect_dir = reflect(-light_dir, normal)
spec_intensity = pow(max(0, dot(view_dir, reflect_dir)), shininess)
specular = light_color * spec_intensity
Higher shininess = smaller, sharper highlights.
Full Phong lighting model:
final_color = ambient + diffuse + specular
= ka * ambient_light
+ kd * (N·L) * diffuse_color
+ ks * (R·V)^n * specular_color
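That full model as one per-pixel function; a sketch that returns a scalar intensity (multiply by your material and light colors per channel) and assumes unit-length normal, light, and view vectors plus the Vec3 helpers from the earlier sketches:
#include <math.h>

float phong_intensity(Vec3 n, Vec3 l, Vec3 v,
                      float ka, float kd, float ks, float shininess) {
    float ndotl = v_dot(n, l);
    if (ndotl <= 0.0f) return ka;                 /* facing away: ambient only */
    Vec3 r = { 2.0f * ndotl * n.x - l.x,          /* reflect l about n */
               2.0f * ndotl * n.y - l.y,
               2.0f * ndotl * n.z - l.z };
    float rdotv = v_dot(r, v);
    float spec = rdotv > 0.0f ? powf(rdotv, shininess) : 0.0f;
    return ka + kd * ndotl + ks * spec;           /* ambient + diffuse + specular */
}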
Questions to explore:
- Why does Gouraud shading fail for specular highlights?
- Why must interpolated normals be renormalized?
- What’s the Blinn-Phong modification and why is it often preferred?
- How many more calculations does Phong require vs Gouraud?
Learning milestones:
- Gouraud produces smooth color gradients → You understand vertex interpolation
- Phong shows crisp specular highlights → Per-pixel lighting works
- You notice the performance difference → You understand the trade-off
- Shininess parameter changes highlight size → You understand specular exponent
Project 9: Texture Mapping
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Zig
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: 3D Graphics / Texturing
- Software or Tool: Your Phong-shaded renderer from Project 8
- Main Book: Real-Time Rendering by Akenine-Möller, Haines, Hoffman
What you’ll build: Add texture mapping to your renderer—load images (PNG/BMP), map them onto 3D surfaces using UV coordinates, with perspective-correct interpolation.
Why it teaches graphics: Texture mapping is what makes 3D graphics look real. Instead of simple colors, surfaces can have complex patterns, details, and images. Understanding UV mapping and perspective correction is essential for game development.
Core challenges you’ll face:
- UV coordinate interpolation → maps to perspective-correct texturing
- Texture sampling → maps to nearest neighbor vs bilinear filtering
- Loading image files → maps to image formats, color channels
- UV wrapping modes → maps to repeat, clamp, mirror
Key Concepts:
- Texture Mapping: Real-Time Rendering Chapter 6 - Akenine-Möller et al.
- Perspective-Correct Interpolation: Computer Graphics from Scratch Chapter 14 - Gabriel Gambetta
- Bilinear Filtering: Fundamentals of Computer Graphics Chapter 11 - Marschner & Shirley
- Image Loading in C: stb_image.h - Sean Barrett (single-header library)
Resources for key challenges:
- stb_image.h - Simple image loading for C
- TinyRenderer Texture Lesson - Great walkthrough
Difficulty: Advanced Time estimate: 1-2 weeks Prerequisites: Project 8 (Smooth Shading), understanding of perspective-correct interpolation
Real world outcome:
$ ./textured_3d
[A cube with different textures on each face (wood, brick, stone)]
[A textured sphere (Earth map wrapped around it)]
[Press N/B to toggle nearest/bilinear filtering - see quality difference]
[Textures look correct at all angles - no "swimming" or distortion]
Implementation Hints:
UV coordinates map 2D texture onto 3D surface:
- (0,0) = bottom-left of texture
- (1,1) = top-right of texture
For each vertex, store (x, y, z, u, v).
Critical: You must do perspective-correct interpolation!
Wrong (affine):
u = u1 * w1 + u2 * w2 + u3 * w3 // This distorts!
Correct (perspective):
// Interpolate u/z, v/z, and 1/z
u_over_z = (u1/z1)*w1 + (u2/z2)*w2 + (u3/z3)*w3
v_over_z = (v1/z1)*w1 + (v2/z2)*w2 + (v3/z3)*w3
one_over_z = (1/z1)*w1 + (1/z2)*w2 + (1/z3)*w3
// Then recover u and v
u = u_over_z / one_over_z
v = v_over_z / one_over_z
Texture sampling:
// Convert (u, v) in [0,1] to pixel coordinates
tex_x = (int)(u * texture_width) % texture_width
tex_y = (int)(v * texture_height) % texture_height
color = texture[tex_y * texture_width + tex_x]
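For the nearest vs. bilinear comparison, here is a bilinear sampler sketch with repeat wrapping, assuming an RGBA texture stored row-major as packed 32-bit texels:
#include <math.h>
#include <stdint.h>

typedef struct { int w, h; const uint32_t *texels; } Texture;

static uint32_t texel(const Texture *t, int x, int y) {
    x = ((x % t->w) + t->w) % t->w;               /* repeat wrap, safe for negatives */
    y = ((y % t->h) + t->h) % t->h;
    return t->texels[y * t->w + x];
}

/* Bilinearly filtered sample at (u, v) in [0,1] x [0,1]. */
uint32_t sample_bilinear(const Texture *t, float u, float v) {
    float fx = u * t->w - 0.5f, fy = v * t->h - 0.5f;
    int x0 = (int)floorf(fx), y0 = (int)floorf(fy);
    float tx = fx - x0, ty = fy - y0;
    uint32_t c00 = texel(t, x0, y0),     c10 = texel(t, x0 + 1, y0);
    uint32_t c01 = texel(t, x0, y0 + 1), c11 = texel(t, x0 + 1, y0 + 1);
    uint32_t out = 0;
    for (int shift = 0; shift < 32; shift += 8) { /* blend each 8-bit channel */
        float a = ((c00 >> shift) & 0xFF) * (1 - tx) + ((c10 >> shift) & 0xFF) * tx;
        float b = ((c01 >> shift) & 0xFF) * (1 - tx) + ((c11 >> shift) & 0xFF) * tx;
        out |= (uint32_t)(a * (1 - ty) + b * ty) << shift;
    }
    return out;
}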
Questions to explore:
- Why does affine interpolation cause texture “swimming”?
- What’s the visual difference between nearest and bilinear filtering?
- How are UV values outside [0,1] handled (wrapping modes)?
- What are mipmaps and why do they help with distant textures?
Learning milestones:
- Textures appear on surfaces → UV mapping works
- No texture distortion at angles → Perspective-correct interpolation works
- Bilinear filtering smooths texels → Sampling is correct
- Textures tile correctly (repeating patterns) → Wrap modes work
Project 10: Camera System (FPS-Style Controls)
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Zig
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 2: Intermediate
- Knowledge Area: 3D Graphics / Camera Mathematics
- Software or Tool: Your textured renderer from Project 9
- Main Book: 3D Math Primer for Graphics and Game Development by Fletcher Dunn & Ian Parberry
What you’ll build: A proper camera system with WASD movement, mouse look, and the math to correctly position the view matrix—just like a first-person shooter.
Why it teaches graphics: The view matrix is fundamental to 3D graphics. Understanding how camera position/orientation translates to the “view” of the scene is essential. This project bridges rendering and interactive applications.
Core challenges you’ll face:
- View matrix construction → maps to “look-at” matrix derivation
- Euler angles and gimbal lock → maps to pitch, yaw, roll limitations
- Mouse input to rotation → maps to sensitivity, smoothing
- Forward/right vectors → maps to camera-relative movement
Key Concepts:
- View Matrix (LookAt): 3D Math Primer Chapter 7 - Dunn & Parberry
- Euler Angles: 3D Math Primer Chapter 8 - Dunn & Parberry
- Camera Implementation: Game Programming Patterns - Robert Nystrom
- Input Handling: SDL2 documentation
Difficulty: Intermediate Time estimate: 3-5 days Prerequisites: Project 5+ (any 3D renderer), matrix math understanding
Real world outcome:
$ ./fps_camera
[You're inside a 3D scene with textured walls and objects]
[WASD moves you forward/back/left/right relative to where you're looking]
[Mouse movement looks around (FPS-style)]
[You can walk through the scene, exploring from any angle]
[No gimbal lock - looking straight up/down works correctly]
Implementation Hints:
The camera has position and orientation (pitch, yaw):
struct Camera {
float pos[3]; // x, y, z
float pitch; // up/down rotation
float yaw; // left/right rotation
};
Camera’s forward direction from angles:
forward[0] = cos(pitch) * sin(yaw);
forward[1] = sin(pitch);
forward[2] = cos(pitch) * cos(yaw);
Right vector (for strafing):
right = cross(forward, world_up) // world_up = (0, 1, 0)
View matrix using LookAt:
target = pos + forward
view_matrix = lookAt(pos, target, up)
Movement (camera-relative):
if (W pressed) pos += forward * speed * dt;
if (A pressed) pos -= right * speed * dt;
// etc.
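A lookAt sketch in the OpenGL-style convention (right-handed, column-major, camera looking down -Z); it reuses the assumed Vec3 helpers and adds a normalize:
#include <math.h>
#include <string.h>

static Vec3 v_normalize(Vec3 a) {
    float len = sqrtf(v_dot(a, a));
    return (Vec3){ a.x / len, a.y / len, a.z / len };
}

void look_at(Vec3 eye, Vec3 target, Vec3 up, float out[16]) {
    Vec3 f = v_normalize(v_sub(target, eye));     /* forward */
    Vec3 r = v_normalize(v_cross(f, up));         /* right */
    Vec3 u = v_cross(r, f);                       /* corrected up */
    float m[16] = {
        r.x, u.x, -f.x, 0.0f,
        r.y, u.y, -f.y, 0.0f,
        r.z, u.z, -f.z, 0.0f,
        -v_dot(r, eye), -v_dot(u, eye), v_dot(f, eye), 1.0f
    };
    memcpy(out, m, sizeof m);
}
If your renderer uses a different handedness or row-vector convention, the signs and the transpose change accordingly.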
Questions to explore:
- What is gimbal lock and when does it occur?
- Why use LookAt instead of directly constructing the matrix?
- How would you implement a third-person camera?
- What’s the difference between Euler angles and quaternions?
Learning milestones:
- WASD moves in look direction → Forward vector is correct
- Mouse look feels natural → Sensitivity and axes are correct
- No weird flipping at vertical limits → Pitch is clamped
- Movement is frame-rate independent → Delta time is used
Project 11: Ray Tracer - Basic Spheres and Planes
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Go
- Coolness Level: Level 5: Pure Magic
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: Ray Tracing / Computer Graphics
- Software or Tool: Framebuffer from Project 1 (no GPU)
- Main Book: Ray Tracing in One Weekend by Peter Shirley
What you’ll build: A ray tracer that renders spheres and planes with proper lighting, shadows, and reflections—outputting beautiful images pixel by pixel.
Why it teaches graphics: Ray tracing is the “physically correct” approach to rendering. While rasterization is fast, ray tracing naturally handles reflections, refractions, and shadows. Understanding both gives you complete graphics knowledge.
Core challenges you’ll face:
- Ray-sphere intersection → maps to solving quadratic equations
- Ray generation from camera → maps to projecting rays through pixels
- Shadow rays → maps to testing visibility to light sources
- Recursive reflections → maps to bouncing rays, recursion depth
Key Concepts:
- Ray-Sphere Intersection: Ray Tracing in One Weekend Chapter 5 - Peter Shirley
- Ray Generation: Ray Tracing in One Weekend Chapter 4 - Peter Shirley
- Shadow Rays: Fundamentals of Computer Graphics Chapter 4 - Marschner & Shirley
- Recursive Ray Tracing: Scratchapixel - Introduction to Ray Tracing
Resources for key challenges:
- Ray Tracing in One Weekend - Free book, exceptional tutorial
- Ray Tracing in One Weekend in C - C implementation guide
Difficulty: Advanced Time estimate: 1-2 weeks Prerequisites: Project 1 (Framebuffer), solid understanding of vectors and geometry
Real world outcome:
$ ./raytracer
Rendering 800x600 image...
[Progress bar updates]
Done! Saved to output.ppm
$ open output.ppm
[Beautiful image: shiny spheres on a checkerboard plane]
[Shadows cast onto the plane]
[Spheres reflect each other]
Implementation Hints:
A ray is: P(t) = origin + t * direction
For each pixel:
- Generate a ray from camera through the pixel
- Test intersection with all objects
- Take the closest hit
- Compute lighting at that point
- For reflective surfaces, cast a new ray and recurse
Ray-sphere intersection (sphere at center C, radius r):
(P - C)·(P - C) = r²
Substitute P = O + t*D:
(O + tD - C)·(O + tD - C) = r²
This is a quadratic in t: at² + bt + c = 0
a = D·D
b = 2 * D·(O - C)
c = (O - C)·(O - C) - r²
discriminant = b² - 4ac
if discriminant < 0: no intersection
else: t = (-b - sqrt(discriminant)) / (2a) [take smaller positive t]
Shadow test: from hit point, cast ray toward light. If it hits something, point is in shadow.
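The quadratic above as code; a sketch that returns the nearest positive hit distance and uses a small epsilon so shadow and reflection rays do not immediately re-hit their own surface:
#include <math.h>

/* Returns the nearest t > epsilon along origin + t*dir, or -1.0f on a miss. */
float hit_sphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = v_sub(origin, center);
    float a = v_dot(dir, dir);
    float b = 2.0f * v_dot(dir, oc);
    float c = v_dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return -1.0f;                /* ray misses the sphere */
    float s = sqrtf(disc);
    float t = (-b - s) / (2.0f * a);              /* smaller root first */
    if (t > 0.001f) return t;
    t = (-b + s) / (2.0f * a);                    /* we may be inside the sphere */
    return (t > 0.001f) ? t : -1.0f;
}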
Questions to explore:
- Why does ray tracing naturally produce soft shadows (with area lights)?
- How do you prevent “shadow acne” (surface self-shadowing)?
- What’s the relationship between recursion depth and realism?
- Why is ray tracing historically slow compared to rasterization?
Learning milestones:
- Spheres render with correct perspective → Ray generation works
- Shading varies across surface → Normal calculation works
- Shadows appear correctly → Shadow rays work
- Spheres reflect each other → Recursive ray tracing works
Project 12: Ray Tracer - Materials (Metal, Glass, Diffuse)
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Go
- Coolness Level: Level 5: Pure Magic
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: Ray Tracing / Material Science
- Software or Tool: Your basic ray tracer from Project 11
- Main Book: Ray Tracing in One Weekend by Peter Shirley
What you’ll build: Extend your ray tracer with different material types—perfect mirrors (metal), glass with refraction (dielectric), and matte surfaces (Lambertian diffuse).
Why it teaches graphics: Materials define how light interacts with surfaces. Understanding reflection, refraction, and diffuse scattering is fundamental physics that applies to all rendering, including physically-based rendering (PBR) in modern games.
Core challenges you’ll face:
- Reflection vector calculation → maps to angle of incidence = angle of reflection
- Refraction (Snell’s Law) → maps to bending light at material boundaries
- Total internal reflection → maps to critical angle physics
- Diffuse (Lambertian) scattering → maps to random hemisphere sampling
Key Concepts:
- Reflection and Refraction: Ray Tracing in One Weekend Chapters 9-10 - Peter Shirley
- Snell’s Law: Fundamentals of Computer Graphics Chapter 13 - Marschner & Shirley
- Fresnel Equations: Physically Based Rendering Chapter 8 - Pharr & Humphreys
- Monte Carlo Sampling: Ray Tracing: The Next Week Chapter 2 - Peter Shirley
Difficulty: Advanced Time estimate: 1-2 weeks Prerequisites: Project 11 (Basic Ray Tracer)
Real world outcome:
$ ./raytracer_materials
[Image with three spheres:]
- Left: Matte red sphere (soft, no reflections)
- Center: Chrome sphere (perfect mirror reflection)
- Right: Glass sphere (see-through with refraction)
[Glass sphere shows the world behind it, distorted]
[Chrome sphere reflects the scene including other spheres]
Implementation Hints:
Reflection:
// r = d - 2(d·n)n
reflect(d, n) = d - 2 * dot(d, n) * n
Refraction (Snell’s Law):
n1 * sin(θ1) = n2 * sin(θ2)
For a ray entering glass (air→glass): n1 = 1.0, n2 = 1.5.
For a ray exiting glass (glass→air): n1 = 1.5, n2 = 1.0.
When n1/n2 * sin(θ1) > 1, total internal reflection occurs (no refraction possible).
Schlick’s approximation for Fresnel (how much reflects vs refracts):
r0 = ((n1-n2)/(n1+n2))²
schlick = r0 + (1-r0) * (1-cos(θ))^5
Diffuse (Lambertian):
- Pick a random direction in the hemisphere above the surface
- Bounce the ray in that direction
- This naturally creates soft, matte appearance
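Sketches of the reflection and refraction operations above, assuming unit-length incident direction d and a normal n that opposes d (flip n when the ray is inside the object); refract_vec returns 0 on total internal reflection:
#include <math.h>

Vec3 reflect_vec(Vec3 d, Vec3 n) {                /* r = d - 2(d·n)n */
    float k = 2.0f * v_dot(d, n);
    return (Vec3){ d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
}

int refract_vec(Vec3 d, Vec3 n, float eta, Vec3 *out) {   /* eta = n1 / n2 */
    float cos_i = -v_dot(d, n);
    float sin2_t = eta * eta * (1.0f - cos_i * cos_i);    /* Snell's law, squared */
    if (sin2_t > 1.0f) return 0;                  /* total internal reflection */
    float cos_t = sqrtf(1.0f - sin2_t);
    float k = eta * cos_i - cos_t;
    out->x = eta * d.x + k * n.x;
    out->y = eta * d.y + k * n.y;
    out->z = eta * d.z + k * n.z;
    return 1;
}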
Questions to explore:
- Why does glass have both reflection and refraction?
- What determines the “shininess” of metal (perfect vs fuzzy)?
- Why do Lambertian surfaces appear the same brightness from all angles?
- How does the index of refraction affect glass appearance?
Learning milestones:
- Chrome sphere reflects the scene → Reflection works
- Glass sphere distorts the background → Refraction works
- Glass has no dark edges → Total internal reflection handled
- Matte spheres look soft and realistic → Diffuse scattering works
Project 13: Ray Tracer - Anti-Aliasing and Soft Shadows
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Go
- Coolness Level: Level 5: Pure Magic
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: Ray Tracing / Image Quality
- Software or Tool: Your materials ray tracer from Project 12
- Main Book: Ray Tracing in One Weekend by Peter Shirley
What you’ll build: Add anti-aliasing (multiple samples per pixel) and area lights for soft shadows, dramatically improving image quality.
Why it teaches graphics: Aliasing (jagged edges, noise) is the enemy of realistic images. Understanding multi-sampling and Monte Carlo integration is essential for production-quality rendering.
Core challenges you’ll face:
- Multi-sampling per pixel → maps to Monte Carlo integration
- Random sampling strategies → maps to uniform vs stratified sampling
- Area lights → maps to soft shadow penumbras
- Noise reduction → maps to sample count vs render time trade-off
Key Concepts:
- Anti-Aliasing: Ray Tracing in One Weekend Chapter 7 - Peter Shirley
- Monte Carlo Integration: Physically Based Rendering Chapter 13 - Pharr & Humphreys
- Stratified Sampling: Ray Tracing: The Rest of Your Life - Peter Shirley
- Area Lights: Fundamentals of Computer Graphics Chapter 14 - Marschner & Shirley
Difficulty: Advanced Time estimate: 1 week Prerequisites: Project 12 (Materials)
Real world outcome:
$ ./raytracer_aa samples=1
[Jagged edges on sphere silhouettes, noisy diffuse surfaces]
$ ./raytracer_aa samples=100
[Smooth edges, clean diffuse surfaces, soft shadow edges]
$ ./raytracer_aa samples=100 --area-light
[Shadows have soft penumbras, not hard edges]
[Render shows: "100 samples/pixel, 45 seconds"]
Implementation Hints:
Basic anti-aliasing: Instead of one ray per pixel, shoot N rays with slight random offsets:
color = (0, 0, 0)
for i in 0..samples_per_pixel:
u = (x + random()) / width // Random offset within pixel
v = (y + random()) / height
ray = camera_ray(u, v)
color += trace(ray)
color /= samples_per_pixel
Stratified sampling (better than pure random): Divide the pixel into a grid, take one sample from each cell:
for sx in 0..sqrt(samples):
for sy in 0..sqrt(samples):
u = (x + (sx + random())/sqrt(samples)) / width
v = (y + (sy + random())/sqrt(samples)) / height
// ...
Soft shadows with area lights:
- Area light = many point lights distributed on a surface
- For shadow test, pick random point on light surface
- Average many samples → soft shadow
Questions to explore:
- Why does doubling the sample count only reduce noise by a factor of √2 (about 1.41×)?
- What’s the visual difference between 16 and 256 samples?
- How do production renderers achieve clean images in reasonable time?
- What is importance sampling and why does it help?
Learning milestones:
- Edges become smooth with more samples → Anti-aliasing works
- Diffuse surfaces become less noisy → Sampling converges
- Soft shadows appear with area lights → Light sampling works
- You understand the time/quality trade-off → Monte Carlo intuition
Project 14: Clipping and View Frustum Culling
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Zig
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 1. The “Resume Gold”
- Difficulty: Level 3: Advanced
- Knowledge Area: 3D Graphics / Optimization
- Software or Tool: Your rasterization renderer (Projects 5-9)
- Main Book: Real-Time Rendering by Akenine-Möller, Haines, Hoffman
What you’ll build: Implement proper 3D clipping against all 6 frustum planes (near, far, left, right, top, bottom) and frustum culling to skip objects entirely outside the view.
Why it teaches graphics: Without clipping, triangles behind the camera or partially off-screen cause rendering artifacts or crashes. Frustum culling is a critical optimization—why process triangles you can’t see?
Core challenges you’ll face:
- Sutherland-Hodgman clipping → maps to clipping polygons against planes
- Near plane clipping → maps to preventing divide-by-zero in projection
- Bounding volume tests → maps to sphere/box vs frustum intersection
- Handling triangle splitting → maps to one triangle → multiple after clip
Key Concepts:
- Sutherland-Hodgman Algorithm: Computer Graphics: Principles and Practice Chapter 8 - Foley et al.
- View Frustum: Real-Time Rendering Chapter 22 - Akenine-Möller et al.
- Frustum Culling: Real-Time Rendering Chapter 19 - Akenine-Möller et al.
- Bounding Volumes: 3D Math Primer Chapter 12 - Dunn & Parberry
Difficulty: Advanced Time estimate: 1-2 weeks Prerequisites: Project 6+ (Z-buffer renderer), solid geometry understanding
Real world outcome:
$ ./clipped_renderer
[Objects passing through the camera don't cause glitches]
[Objects partially off-screen render correctly (clipped parts invisible)]
[Moving camera close to objects: no visual artifacts]
[Console shows: "Culled 450/500 objects - 90% reduction!"]
Implementation Hints:
The view frustum is 6 planes forming a truncated pyramid:
- Near plane (z = near)
- Far plane (z = far)
- Left, Right, Top, Bottom (defined by field of view)
Sutherland-Hodgman clipping (clip polygon against one plane):
For each edge (V1, V2):
if V1 inside and V2 inside: output V2
if V1 inside and V2 outside: output intersection
if V1 outside and V2 inside: output intersection, then V2
if V1 outside and V2 outside: output nothing
Apply to all 6 planes sequentially.
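That per-plane loop as C; a sketch assuming the Vec3 helpers, planes given as (normal, d) with dot(normal, p) + d >= 0 meaning inside, and a small fixed vertex budget (a triangle clipped by six planes stays well under it):
#define MAX_POLY 16

typedef struct { Vec3 v[MAX_POLY]; int count; } Poly;

static float plane_dist(Vec3 n, float d, Vec3 p) { return v_dot(n, p) + d; }

Poly clip_against_plane(Poly in, Vec3 n, float d) {
    Poly out = { .count = 0 };
    for (int i = 0; i < in.count; i++) {
        Vec3 a = in.v[i];
        Vec3 b = in.v[(i + 1) % in.count];
        float da = plane_dist(n, d, a);
        float db = plane_dist(n, d, b);
        if (da >= 0 && db >= 0) {                 /* both inside: keep b */
            out.v[out.count++] = b;
        } else if (da >= 0 && db < 0) {           /* leaving: keep the intersection */
            float t = da / (da - db);
            out.v[out.count++] = (Vec3){ a.x + t * (b.x - a.x),
                                         a.y + t * (b.y - a.y),
                                         a.z + t * (b.z - a.z) };
        } else if (da < 0 && db >= 0) {           /* entering: intersection, then b */
            float t = da / (da - db);
            out.v[out.count++] = (Vec3){ a.x + t * (b.x - a.x),
                                         a.y + t * (b.y - a.y),
                                         a.z + t * (b.z - a.z) };
            out.v[out.count++] = b;
        }                                         /* both outside: keep nothing */
    }
    return out;
}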
Frustum culling (optimization before clipping):
For each object:
if bounding_sphere_outside_frustum(object): skip entirely
else: clip and render triangles
Sphere vs plane test:
distance = dot(sphere_center, plane_normal) - plane_distance
if distance < -sphere_radius: sphere is entirely outside
Questions to explore:
- Why must clipping happen in 3D (before perspective division)?
- What happens if you don’t clip against the near plane?
- How do bounding volume hierarchies (BVH) accelerate culling?
- What’s the difference between clipping and culling?
Learning milestones:
- No crashes when objects pass through camera → Near clipping works
- Partial triangles render correctly → Sutherland-Hodgman works
- Significant speedup in complex scenes → Frustum culling works
- Edge cases handled (triangles split into many) → Robust implementation
Project 15: Simple OpenGL Renderer
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: C++, Rust (with bindings)
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 2: Intermediate
- Knowledge Area: GPU Programming / Graphics APIs
- Software or Tool: OpenGL 3.3+, GLFW, GLAD
- Main Book: LearnOpenGL by Joey de Vries (learnopengl.com)
What you’ll build: Recreate your software renderer using OpenGL—VAOs, VBOs, shaders, uniforms—understanding how GPUs accelerate everything you built by hand.
Why it teaches graphics: After building a software renderer, using OpenGL reveals how GPUs parallelize the exact same concepts. You’ll understand shaders as the GPU equivalent of your per-vertex and per-pixel code.
Core challenges you’ll face:
- OpenGL state machine → maps to binding buffers, setting state
- Shader compilation → maps to writing GLSL, debugging errors
- Buffer objects → maps to uploading data to GPU
- Uniforms and attributes → maps to passing data to shaders
Key Concepts:
- OpenGL Basics: LearnOpenGL - Getting Started
- Shaders (GLSL): LearnOpenGL - Shaders
- Vertex Buffer Objects: LearnOpenGL - Hello Triangle
- OpenGL Context: OpenGL Tutorial
Difficulty: Intermediate Time estimate: 1-2 weeks Prerequisites: Any software rendering project, basic understanding of what GPU does
Real world outcome:
$ ./opengl_renderer
[Same textured, lit 3D scene as your software renderer]
[But running at 500+ FPS instead of 30]
[You can load much more complex meshes]
[Press V to toggle between vertex-lit and pixel-lit]
Implementation Hints:
OpenGL follows a pattern:
- Generate an object (glGenBuffers, glGenVertexArrays)
- Bind it (glBindBuffer, glBindVertexArray)
- Configure/Upload data (glBufferData, glVertexAttribPointer)
- Use it in draw calls (glDrawArrays, glDrawElements)
Minimal vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main() {
gl_Position = projection * view * model * vec4(aPos, 1.0);
}
Minimal fragment shader:
#version 330 core
out vec4 FragColor;
void main() {
FragColor = vec4(1.0, 0.5, 0.2, 1.0); // orange
}
Your software renderer’s per-vertex code → vertex shader.
Your software renderer’s per-pixel code → fragment shader.
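A small helper for the shader-compilation step; a sketch using only core OpenGL 3.3 calls, assuming a loader such as GLAD has already been initialized:
#include <glad/glad.h>
#include <stdio.h>

GLuint compile_shader(GLenum type, const char *src) {    /* GL_VERTEX_SHADER or GL_FRAGMENT_SHADER */
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);
    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof log, NULL, log);
        fprintf(stderr, "shader compile error: %s\n", log);
    }
    return shader;
}
Attach the compiled vertex and fragment shaders to a program with glAttachShader and glLinkProgram, then check GL_LINK_STATUS the same way.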
Questions to explore:
- Why is OpenGL called a “state machine”?
- What’s the difference between VBO and VAO?
- How do uniforms differ from vertex attributes?
- Why must you compile shaders at runtime?
Learning milestones:
- Triangle appears on screen → You understand the pipeline setup
- 3D transformations work → Uniform matrices are correct
- Phong lighting in shaders → You can write GLSL
- You appreciate the speed difference → GPU parallelization is real
Project 16: 2.5D DOOM-Style Raycaster
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++
- Coolness Level: Level 5: Pure Magic
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 3: Advanced
- Knowledge Area: Retro Graphics / Game Development
- Software or Tool: SDL2 for display
- Main Book: Game Engine Black Book: Wolfenstein 3D by Fabien Sanglard
What you’ll build: A Wolfenstein 3D/DOOM-style raycasting engine—walking through textured corridors, looking left and right, all without true 3D math. Pure 2D map with column-by-column raycasting.
Why it teaches graphics: Before full 3D was feasible, games used clever tricks. Raycasting is not ray tracing—it’s a 2D algorithm that creates the illusion of 3D. Understanding this history shows how constraints drive innovation.
Core challenges you’ll face:
- DDA (Digital Differential Analyzer) for grid traversal → maps to efficient ray-grid intersection
- Wall texture mapping → maps to column-based texture slices
- Fisheye correction → maps to why raw distance looks wrong
- Floor/ceiling casting → maps to horizontal raycasting
Key Concepts:
- Raycasting Algorithm: Lodev’s Raycasting Tutorial - Lode Vandevenne
- DDA Algorithm: Game Engine Black Book: Wolfenstein 3D Chapter 4 - Fabien Sanglard
- Texture Mapping in Raycasters: Game Engine Black Book: Wolfenstein 3D Chapter 5 - Fabien Sanglard
- History of 2.5D Engines: Masters of Doom by David Kushner
Resources for key challenges:
- DOOM-like Game Engine Tutorial - Yuriy Georgiev
- Lodev’s Raycasting Tutorial - The definitive resource
Difficulty: Advanced Time estimate: 2-3 weeks Prerequisites: Project 1-3 (2D rendering basics), trigonometry
Real world outcome:
$ ./raycaster
[First-person view of textured corridors]
[WASD to move, mouse/arrows to look]
[Walk through a maze of walls with different textures]
[Smooth movement and rotation]
[Floors and ceilings with textures (if implemented)]
Implementation Hints:
The world is a 2D grid map:
1111111111
1000000001
1000110001
1000110001
1000000001
1111111111
1 = wall, 0 = empty
For each screen column (x = 0 to width):
- Calculate ray direction based on player angle + x offset
- Use DDA to step through grid cells until hitting a wall
- Calculate distance to wall
- Calculate wall height: height = screen_height / distance
- Draw a vertical slice of the wall texture
Fisheye correction:
// Wrong (causes fisheye):
distance = raw_distance_to_wall
// Correct:
distance = raw_distance * cos(ray_angle - player_angle)
DDA for grid traversal:
- Don’t check every pixel along the ray
- Jump from grid cell boundary to boundary
- Much faster than stepping pixel by pixel
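A sketch of one DDA ray, assuming a hypothetical MAP_H x MAP_W grid map[][] with nonzero cells as walls, a solid border so the loop always terminates, and a unit-length ray direction; apply the cosine fisheye correction above to the returned distance:
#include <math.h>

#define MAP_W 10
#define MAP_H 6
extern int map[MAP_H][MAP_W];                     /* hypothetical level grid */

/* Returns the Euclidean distance to the first wall hit along (rdx, rdy). */
float cast_ray(float px, float py, float rdx, float rdy) {
    int mx = (int)px, my = (int)py;               /* current grid cell */
    float ddx = (rdx == 0.0f) ? 1e30f : fabsf(1.0f / rdx);  /* ray length per x-cell */
    float ddy = (rdy == 0.0f) ? 1e30f : fabsf(1.0f / rdy);  /* ray length per y-cell */
    int sx = rdx < 0.0f ? -1 : 1, sy = rdy < 0.0f ? -1 : 1;
    float sdx = (rdx < 0.0f ? px - mx : mx + 1.0f - px) * ddx;  /* to first x boundary */
    float sdy = (rdy < 0.0f ? py - my : my + 1.0f - py) * ddy;  /* to first y boundary */
    int side = 0;
    while (map[my][mx] == 0) {                    /* jump from boundary to boundary */
        if (sdx < sdy) { sdx += ddx; mx += sx; side = 0; }
        else           { sdy += ddy; my += sy; side = 1; }
    }
    return (side == 0) ? sdx - ddx : sdy - ddy;   /* distance to the wall that was hit */
}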
Questions to explore:
- Why can’t you look up/down in Wolfenstein 3D?
- How did DOOM add sloped floors and variable height?
- Why is this called “2.5D” instead of 3D?
- How did these games run on 386 processors?
Learning milestones:
- Walls render at correct heights → Basic raycasting works
- No fisheye distortion → Correction applied correctly
- Textured walls → Column slicing works
- Smooth movement through the level → Player controls work
Project 17: Particle System
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: C++, Rust, Go
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 2: Intermediate
- Knowledge Area: Visual Effects / Simulation
- Software or Tool: SDL2 or your OpenGL renderer
- Main Book: Real-Time Rendering by Akenine-Möller, Haines, Hoffman
What you’ll build: A particle system that simulates fire, smoke, sparks, or explosions—thousands of particles with velocity, acceleration, color changes, and fading.
Why it teaches graphics: Particle systems are used everywhere: games (explosions, magic), movies (dust, debris), and data visualization. Understanding particles teaches you about simulation, blending, and managing thousands of objects efficiently.
Core challenges you’ll face:
- Particle pooling → maps to efficient memory management
- Physics simulation → maps to velocity, acceleration, forces
- Blending/transparency → maps to additive vs alpha blending
- Billboard rendering → maps to sprites always facing camera
Key Concepts:
- Particle Systems: Real-Time Rendering Chapter 13 - Akenine-Möller et al.
- Alpha Blending: Computer Graphics from Scratch Chapter 15 - Gabriel Gambetta
- Object Pooling: Game Programming Patterns - Robert Nystrom
- Physics Integration: Game Physics Engine Development Chapter 3 - Ian Millington
Difficulty: Intermediate Time estimate: 1 week Prerequisites: Project 1-3 (2D rendering) or Project 15 (OpenGL basics)
Real world outcome:
$ ./particles
[Click to spawn an explosion - hundreds of particles fly outward]
[Particles change color from yellow → orange → red → dark → fade out]
[Press F for fire: continuous upward flame effect]
[Press S for smoke: slow-rising, expanding gray particles]
[10,000 particles at 60 FPS - smooth and performant]
Implementation Hints:
Each particle has:
struct Particle {
float x, y, z; // position
float vx, vy, vz; // velocity
float ax, ay, az; // acceleration
float life; // remaining lifetime (0-1)
float size;
uint32_t color;
bool active;
};
Use a pool (fixed array) to avoid malloc/free per particle:
Particle pool[MAX_PARTICLES];
To spawn: find inactive particle, initialize it. To update:
for each particle:
if (active):
vx += ax * dt
vy += ay * dt
vz += az * dt
x += vx * dt
y += vy * dt
z += vz * dt
life -= decay * dt
if life <= 0: active = false
Additive blending (for fire, sparks):
final = background + particle_color
Alpha blending (for smoke):
final = background * (1-alpha) + particle_color * alpha
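A per-channel sketch of that alpha blend, assuming packed 0xAARRGGBB pixels (any channel order works as long as it matches your framebuffer); for additive blending, clamp dst + src to 255 per channel instead:
#include <stdint.h>

uint32_t alpha_blend(uint32_t dst, uint32_t src, float alpha) {
    uint32_t out = 0;
    for (int shift = 0; shift < 24; shift += 8) {           /* blend B, G, R channels */
        float d = (float)((dst >> shift) & 0xFF);
        float s = (float)((src >> shift) & 0xFF);
        float c = d * (1.0f - alpha) + s * alpha;
        out |= (uint32_t)(c + 0.5f) << shift;
    }
    return out | 0xFF000000u;                               /* keep the result opaque */
}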
Questions to explore:
- Why use a pool instead of dynamic allocation?
- How do you sort particles for correct transparency?
- What’s the performance difference between CPU and GPU particles?
- How do emitters control particle spawning patterns?
Learning milestones:
- Particles spawn and move → Basic simulation works
- Particles fade out and die → Lifecycle management works
- Fire looks like fire → Color/size over lifetime tuned
- 10,000+ particles at 60 FPS → Performance is good
Project 18: Font Rendering
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++
- Coolness Level: Level 3: Genuinely Clever
- Business Potential: 3. The “Service & Support” Model
- Difficulty: Level 3: Advanced
- Knowledge Area: Typography / Rasterization
- Software or Tool: FreeType library (or stb_truetype)
- Main Book: The Raster Tragedy at Low Resolution by Beat Stamm (see Key Concepts below)
What you’ll build: A text rendering system that loads TrueType fonts and renders high-quality text to the screen, with proper kerning, subpixel positioning, and anti-aliasing.
Why it teaches graphics: Text rendering is deceptively complex. Understanding glyph rasterization, font metrics, and anti-aliasing teaches you about the intersection of vector graphics, bitmaps, and perception.
Core challenges you’ll face:
- Bezier curve rasterization → maps to how TrueType fonts define shapes
- Glyph caching → maps to texture atlases, performance
- Font metrics → maps to baseline, ascender, kerning
- Anti-aliased edges → maps to coverage computation
Key Concepts:
- TrueType Format: Apple TrueType Reference Manual
- Font Rasterization: The Raster Tragedy - Beat Stamm
- stb_truetype: Sean Barrett’s stb libraries
- FreeType: FreeType Tutorial
Difficulty: Advanced Time estimate: 1-2 weeks Prerequisites: Projects 1-3 (2D rendering), understanding of curves
Real world outcome:
$ ./font_renderer
[Window displays: "Hello, World!" in a nice font]
[Text is smooth and anti-aliased, not jagged]
[Different sizes: 12pt, 24pt, 48pt all look crisp]
[Press K to toggle kerning - see "AV" spacing improve]
[Type to see text rendered in real-time]
Implementation Hints:
Using stb_truetype (simpler than FreeType):
#define STB_TRUETYPE_IMPLEMENTATION
#include "stb_truetype.h"
// Load font file
unsigned char *font_buffer = load_file("arial.ttf");
stbtt_fontinfo font;
stbtt_InitFont(&font, font_buffer, 0);
// Rasterize a glyph
int width, height, xoff, yoff;
unsigned char *bitmap = stbtt_GetCodepointBitmap(&font,
    0,                                      // scale_x = 0 means "use scale_y"
    stbtt_ScaleForPixelHeight(&font, 32),   // 32px pixel height
    'A', &width, &height, &xoff, &yoff);
// 'bitmap' is now grayscale glyph (alpha values)
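To get that glyph onto the screen, treat each grayscale value as per-pixel alpha. A minimal blit sketch, assuming a 32-bit framebuffer laid out like the one from Project 1 (the function and parameter names here are illustrative):
#include <stdint.h>

/* Blend a grayscale glyph bitmap onto the framebuffer at (dst_x, dst_y).
   Each coverage value (0..255) acts as the alpha for text_color. */
void draw_glyph(uint32_t *framebuffer, int fb_width, int fb_height,
                const unsigned char *bitmap, int w, int h,
                int dst_x, int dst_y, uint32_t text_color) {
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int px = dst_x + x, py = dst_y + y;
            if (px < 0 || px >= fb_width || py < 0 || py >= fb_height) continue;
            unsigned a = bitmap[y * w + x];               // glyph coverage = alpha
            uint32_t *dst = &framebuffer[py * fb_width + px];
            uint32_t out = *dst & 0xFF000000u;            // keep the framebuffer's own alpha byte
            for (int shift = 0; shift <= 16; shift += 8) {
                unsigned bg = (*dst >> shift) & 0xFFu;
                unsigned fg = (text_color >> shift) & 0xFFu;
                out |= (((bg * (255u - a) + fg * a) / 255u) & 0xFFu) << shift;  // out = bg*(1-a) + fg*a
            }
            *dst = out;
        }
    }
}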
For performance, build a texture atlas:
- Render all glyphs to one large texture
- Store UV coordinates for each glyph
- Drawing text = drawing textured quads
Font metrics matter:
- Baseline: where the text “sits”
- Ascender: height above baseline (top of ‘d’)
- Descender: depth below baseline (bottom of ‘g’)
- Advance width: how far to move after each glyph
- Kerning: adjustment between specific pairs (AV, To)
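Those metrics come straight out of stb_truetype. A sketch of laying out one line of ASCII text with advance widths and kerning, assuming the font was initialized as in the snippet above (the draw callback and its names are illustrative):
/* Walk a string left to right, calling 'draw' at each glyph's pen position.
   When blitting, add the xoff/yoff that stbtt_GetCodepointBitmap returned so
   each glyph sits correctly relative to the baseline. */
void layout_text(stbtt_fontinfo *font, const char *text, float px_height,
                 float start_x, float baseline_y,
                 void (*draw)(int codepoint, float pen_x, float pen_y)) {
    float scale = stbtt_ScaleForPixelHeight(font, px_height);
    float x = start_x;
    for (const char *c = text; *c; c++) {
        int advance, lsb;
        stbtt_GetCodepointHMetrics(font, *c, &advance, &lsb);  // unscaled font units
        draw(*c, x, baseline_y);                                // lsb is already baked into the glyph's xoff
        x += advance * scale;                                   // advance width
        if (c[1])                                               // kerning pair with the next glyph
            x += stbtt_GetCodepointKernAdvance(font, c[0], c[1]) * scale;
    }
}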
Questions to explore:
- Why does text at small sizes need hinting?
- What’s subpixel rendering (ClearType) and why does it help?
- Why do fonts store curves instead of bitmaps?
- How does kerning differ from fixed-width spacing?
Learning milestones:
- Single glyphs render correctly → Rasterization works
- Words space properly → Advance widths applied
- Text looks smooth at all sizes → Anti-aliasing works
- Kerning pairs look professional → “AV” looks better
Project 19: 2D Game with Custom Graphics Engine
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Zig
- Coolness Level: Level 4: Hardcore Tech Flex
- Business Potential: 2. The “Micro-SaaS / Pro Tool”
- Difficulty: Level 3: Advanced
- Knowledge Area: Game Development / Graphics Integration
- Software or Tool: Your accumulated graphics code (Projects 1-18)
- Main Book: Game Programming Patterns by Robert Nystrom
What you’ll build: A complete 2D game (platformer, shooter, or puzzle) using your own graphics code—sprites, animation, tilemaps, collision, particles, text—everything you’ve built.
Why it teaches graphics: Integration is where the learning happens. Making all your graphics code work together for a real game reveals gaps in your understanding and forces you to think about architecture.
Core challenges you’ll face:
- Game loop structure → maps to fixed timestep, interpolation
- Sprite batching → maps to performance when drawing many sprites
- Animation systems → maps to sprite sheets, state machines
- Tilemap rendering → maps to efficient large-world rendering
Key Concepts:
- Game Loop: Game Programming Patterns Chapter 9 - Robert Nystrom
- Component Systems: Game Programming Patterns Chapter 14 - Robert Nystrom
- Sprite Animation: 2D Game Engine Programming with C - innercomputing.com
- Collision Detection: Real-Time Collision Detection - Christer Ericson
Resources for key challenges:
- Game Programming Patterns - Free online book
- 2D Game Engine Programming with C - Comprehensive guide
Difficulty: Advanced Time estimate: 2-4 weeks Prerequisites: Most of Projects 1-18, game design concepts
Real world outcome:
$ ./my_game
[Title screen with your game name, rendered in your font system]
[Press ENTER to play]
[Your character (animated sprite) runs and jumps through levels]
[Particle effects when you shoot/jump]
[Parallax scrolling backgrounds]
[Score and lives displayed with your text renderer]
[Game is playable and fun - not just a tech demo]
Implementation Hints:
Structure your game around a fixed-timestep loop:
while (running) {
    input = poll_input();
    frame_time = seconds_since_last_frame();  // real elapsed time, however you measure it
    // Fixed timestep for physics
    accumulator += frame_time;
    while (accumulator >= dt) {
        update(dt);
        accumulator -= dt;
    }
    render(accumulator / dt); // interpolation factor for smooth drawing
}
Sprite batching:
- Don’t call “draw sprite” 1000 times
- Collect all sprite data into one buffer
- Draw once with many sprites
Tilemap:
- Only draw visible tiles (camera culling)
- Tiles are just texture atlas coordinates
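A sketch of that culling step, assuming square tiles of TILE_SIZE pixels, a camera rectangle in world pixels, and a draw_tile() helper from your sprite code (all names illustrative):
#define TILE_SIZE 32

/* Draw only the tiles that overlap the camera rectangle. */
void draw_tilemap(const int *tiles, int map_w, int map_h,
                  int cam_x, int cam_y, int cam_w, int cam_h) {
    int first_col = cam_x / TILE_SIZE;
    int first_row = cam_y / TILE_SIZE;
    int last_col  = (cam_x + cam_w) / TILE_SIZE;
    int last_row  = (cam_y + cam_h) / TILE_SIZE;
    if (first_col < 0) first_col = 0;
    if (first_row < 0) first_row = 0;
    if (last_col >= map_w) last_col = map_w - 1;
    if (last_row >= map_h) last_row = map_h - 1;
    for (int row = first_row; row <= last_row; row++) {
        for (int col = first_col; col <= last_col; col++) {
            int tile = tiles[row * map_w + col];         // which atlas entry to use
            draw_tile(tile, col * TILE_SIZE - cam_x,     // world position minus camera = screen position
                            row * TILE_SIZE - cam_y);
        }
    }
}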
Animation:
struct Animation {
int frames[MAX_FRAMES];
int frame_count;
float frame_duration;
float timer;
int current_frame;
};
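Advancing that struct each frame takes only a few lines; a sketch, assuming dt is the frame time in seconds:
/* Accumulate time and wrap to the next frame whenever a full frame_duration has passed. */
void animation_update(struct Animation *anim, float dt) {
    anim->timer += dt;
    while (anim->timer >= anim->frame_duration) {
        anim->timer -= anim->frame_duration;
        anim->current_frame = (anim->current_frame + 1) % anim->frame_count;
    }
}

/* The sprite-sheet index to draw this frame. */
int animation_current_sprite(const struct Animation *anim) {
    return anim->frames[anim->current_frame];
}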
Questions to explore:
- How do you handle collision with the tilemap?
- What’s the difference between update rate and frame rate?
- How do you implement smooth camera following?
- When do you use physics engines vs custom collision?
Learning milestones:
- Character moves and animates → Core loop works
- Multiple levels with tilemaps → Data-driven design works
- 60 FPS with many sprites/particles → Performance is acceptable
- It’s actually fun to play → You’re now a game developer
Project 20: Mini 3D Game Engine
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++, Zig
- Coolness Level: Level 5: Pure Magic
- Business Potential: 4. The “Open Core” Infrastructure
- Difficulty: Level 4: Expert
- Knowledge Area: Game Engines / Systems Architecture
- Software or Tool: Your 3D renderer + OpenGL (optional)
- Main Book: Game Engine Architecture by Jason Gregory
What you’ll build: A minimal but complete 3D game engine—scene graph, resource loading, entity system, camera controls, and a playable 3D demo (walk around an environment, interact with objects).
Why it teaches graphics: A game engine ties together everything: rendering, input, audio, physics, AI. Building one shows you how professional engines (Unity, Unreal) are structured and why certain patterns exist.
Core challenges you’ll face:
- Scene graph design → maps to hierarchical transformations
- Resource management → maps to loading/unloading assets
- Entity-component systems → maps to composition over inheritance
- Hot-reloading → maps to rapid iteration workflows
Key Concepts:
- Game Engine Architecture: Game Engine Architecture - Jason Gregory
- Entity-Component-System: Game Programming Patterns Chapter 14 - Robert Nystrom
- Scene Graphs: Real-Time Rendering Chapter 19 - Akenine-Möller et al.
- Resource Management: Game Engine Architecture Chapter 6 - Jason Gregory
Resources for key challenges:
- Kohi Game Engine by Travis Vroman - Excellent video series
- PRDeving’s Game Engine in Pure C - Pure C approach
Difficulty: Expert Time estimate: 1-3 months Prerequisites: All previous projects, strong systems programming
Real world outcome:
$ ./my_engine
[3D environment loads: room with furniture, outdoor area with trees]
[WASD + mouse to navigate in first/third person]
[Press E to interact with objects (open doors, pick up items)]
[Press TAB to see debug overlay (FPS, draw calls, entities)]
[Modify a config file: changes apply without restart (hot reload)]
[Load different levels/scenes from data files]
Implementation Hints:
Entity-Component-System (ECS):
// Entities are just IDs
typedef uint32_t Entity;
// Components are data
struct Transform { float pos[3], rot[3], scale[3]; };
struct MeshRenderer { Mesh *mesh; Material *material; };
struct Collider { float bounds[3]; };
// Systems operate on components
void render_system(World *world) {
for each entity with (Transform, MeshRenderer):
draw_mesh(transform.pos, mesh_renderer.mesh);
}
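One simple way to realize the "for each entity with (Transform, MeshRenderer)" loop above is a component bitmask per entity; a sketch under that assumption (the World layout shown is illustrative, not the only viable design):
#include <stdint.h>

#define MAX_ENTITIES 4096
#define COMP_TRANSFORM     (1u << 0)
#define COMP_MESH_RENDERER (1u << 1)

typedef struct {
    uint32_t component_mask[MAX_ENTITIES];        // which components each entity owns
    struct Transform    transforms[MAX_ENTITIES]; // dense arrays indexed by entity id
    struct MeshRenderer renderers[MAX_ENTITIES];
} World;

void render_system(World *world) {
    uint32_t required = COMP_TRANSFORM | COMP_MESH_RENDERER;
    for (Entity e = 0; e < MAX_ENTITIES; e++) {
        if ((world->component_mask[e] & required) != required) continue;
        draw_mesh(world->transforms[e].pos, world->renderers[e].mesh);   // your renderer's entry point
    }
}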
Scene graph for hierarchies (parent-child transforms):
struct Node {
    struct Transform local_transform;
    struct Transform world_transform;   // computed from the parent's world transform
    struct Node *parent;
    struct Node **children;             // array of child pointers
    int child_count;
};
Resource management:
- Reference counting or handles
- Async loading for large assets
- Unload unused resources
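A sketch of the handle + reference-count idea for meshes, assuming a fixed resource table and an unload_mesh() cleanup function (both illustrative):
#include <stdint.h>

typedef struct { uint32_t index; uint32_t generation; } ResourceHandle;

typedef struct {
    Mesh     *mesh;          // the loaded asset
    int       ref_count;     // how many live users hold a handle
    uint32_t  generation;    // bumped on unload so stale handles can be detected
} MeshSlot;

#define MAX_MESHES 1024
static MeshSlot mesh_table[MAX_MESHES];

/* Resolve a handle; returns NULL if the slot has been unloaded or reused. */
Mesh *mesh_get(ResourceHandle h) {
    MeshSlot *slot = &mesh_table[h.index];
    if (slot->generation != h.generation || slot->ref_count == 0) return NULL;
    return slot->mesh;
}

void mesh_release(ResourceHandle h) {
    MeshSlot *slot = &mesh_table[h.index];
    if (slot->generation != h.generation) return;        // stale handle, ignore
    if (--slot->ref_count == 0) {
        unload_mesh(slot->mesh);                          // your loader's cleanup (assumed)
        slot->mesh = NULL;
        slot->generation++;                               // invalidate any outstanding handles
    }
}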
Questions to explore:
- What’s the difference between scene graph and ECS?
- How do commercial engines handle thousands of entities?
- What does “data-oriented design” mean for game engines?
- Why is hot-reloading so important for game development?
Learning milestones:
- Scene loads from file → Resource system works
- Multiple entities with components → ECS works
- Parent-child transforms work → Scene graph works
- You can prototype a game quickly → Engine is usable
Project Comparison Table
| # | Project | Difficulty | Time | Depth of Understanding | Fun Factor |
|---|---|---|---|---|---|
| 1 | Framebuffer/Pixels | Beginner | Weekend | ★★☆☆☆ | ★★☆☆☆ |
| 2 | Line Drawing (Bresenham) | Beginner | 2-3 days | ★★★☆☆ | ★★☆☆☆ |
| 3 | Shape Rasterization | Intermediate | 1 week | ★★★★☆ | ★★★☆☆ |
| 4 | 2D Transformations | Intermediate | 1 week | ★★★★☆ | ★★★☆☆ |
| 5 | 3D Wireframe | Advanced | 1-2 weeks | ★★★★☆ | ★★★★☆ |
| 6 | Z-Buffer Filled 3D | Advanced | 1-2 weeks | ★★★★★ | ★★★★☆ |
| 7 | Flat Shading | Advanced | 1 week | ★★★★☆ | ★★★★☆ |
| 8 | Gouraud/Phong Shading | Advanced | 1-2 weeks | ★★★★★ | ★★★★☆ |
| 9 | Texture Mapping | Advanced | 1-2 weeks | ★★★★★ | ★★★★★ |
| 10 | FPS Camera | Intermediate | 3-5 days | ★★★☆☆ | ★★★★★ |
| 11 | Ray Tracer Basic | Advanced | 1-2 weeks | ★★★★★ | ★★★★★ |
| 12 | Ray Tracer Materials | Advanced | 1-2 weeks | ★★★★★ | ★★★★★ |
| 13 | Anti-Aliasing/Soft Shadows | Advanced | 1 week | ★★★★☆ | ★★★★★ |
| 14 | Clipping/Culling | Advanced | 1-2 weeks | ★★★★☆ | ★★★☆☆ |
| 15 | OpenGL Renderer | Intermediate | 1-2 weeks | ★★★★☆ | ★★★★☆ |
| 16 | DOOM-Style Raycaster | Advanced | 2-3 weeks | ★★★★★ | ★★★★★ |
| 17 | Particle System | Intermediate | 1 week | ★★★☆☆ | ★★★★★ |
| 18 | Font Rendering | Advanced | 1-2 weeks | ★★★★☆ | ★★★☆☆ |
| 19 | 2D Game | Advanced | 2-4 weeks | ★★★★★ | ★★★★★ |
| 20 | 3D Game Engine | Expert | 1-3 months | ★★★★★ | ★★★★★ |
Recommended Learning Path
Based on building deep understanding while keeping motivation high:
Phase 1: 2D Foundations (2-3 weeks)
Start here: Projects 1 → 2 → 3 → 4
This builds your fundamental understanding of how pixels become images. You’ll have a solid 2D graphics library when done.
Phase 2: 3D Software Rendering (4-6 weeks)
The core journey: Projects 5 → 6 → 7 → 8 → 9 → 10
This is where the magic happens. By the end, you’ll have built a complete 3D renderer—without any GPU help. This knowledge is invaluable.
Phase 3: Choose Your Adventure
Option A: Ray Tracing Path (3-4 weeks) Projects 11 → 12 → 13
Beautiful, physically-based images. Great for understanding light transport. Results look amazing.
Option B: Real-Time Path (4-6 weeks) Projects 14 → 15 → 16 → 17
GPU acceleration, classic game tech, visual effects. Great for game development.
Phase 4: Integration & Application (4-8 weeks)
Projects 18 → 19 → 20
Put it all together. Build something playable. Become a game developer.
Final Capstone Project: Build a Complete 3D Game
- File: LEARN_COMPUTER_GRAPHICS_FROM_SCRATCH_IN_C.md
- Main Programming Language: C
- Alternative Programming Languages: Rust, C++
- Coolness Level: Level 5: Pure Magic
- Business Potential: 4. The “Open Core” Infrastructure
- Difficulty: Level 5: Master
- Knowledge Area: Full-Stack Game Development
- Software or Tool: Everything you’ve built
- Main Book: Game Engine Architecture by Jason Gregory
What you’ll build: A polished, playable 3D game—whether an FPS, adventure, puzzle, or exploration game—using your engine, demonstrating every concept you’ve learned: 3D rendering, lighting, textures, particles, text, audio, and game logic.
Why it teaches graphics: The final integration forces you to understand how everything fits together. You’ll encounter real problems that textbooks don’t cover. You’ll make trade-offs. You’ll optimize. You’ll ship.
Core challenges you’ll face:
- Performance profiling → maps to finding and fixing bottlenecks
- Content pipeline → maps to getting assets into your engine
- Polish and juice → maps to making it feel good
- Scoping and finishing → maps to actually completing a project
Key Concepts:
- Performance Optimization: Game Engine Architecture Chapter 15 - Jason Gregory
- Game Feel: Game Feel by Steve Swink
- Content Pipelines: Game Engine Architecture Chapter 6 - Jason Gregory
- Shipping Games: The Art of Game Design by Jesse Schell
Difficulty: Master Time estimate: 2-6 months Prerequisites: All previous projects
Real world outcome:
$ ./my_3d_game
[Professional-looking title screen]
[3D gameplay with smooth controls]
[Enemies, puzzles, or challenges]
[Sound effects and music]
[Win/lose conditions]
[Something you're proud to show anyone]
Implementation Hints:
Choose scope wisely:
- A small, polished game > a large, broken one
- 5 minutes of great gameplay > 60 minutes of mediocre gameplay
- Finish something. Then iterate.
What “polish” means:
- Screen shake when things explode
- Particle effects for feedback
- Sound effects for every action
- Smooth transitions between states
- No crashes, no soft-locks
Content is king:
- Placeholder art → final art
- Level design matters as much as code
- Playtest with real people
Learning milestones:
- Playable prototype exists → Core loop works
- Other people can play it → It’s stable
- People want to keep playing → It’s fun
- You’re proud of it → You’re a game developer
Essential Resources Summary
Books
| Book | Author | Best For |
|---|---|---|
| Computer Graphics from Scratch | Gabriel Gambetta | Beginners, theory + practice |
| Ray Tracing in One Weekend | Peter Shirley | Ray tracing introduction |
| Real-Time Rendering | Akenine-Möller et al. | Reference, advanced topics |
| 3D Math Primer | Dunn & Parberry | Mathematics foundations |
| Game Engine Architecture | Jason Gregory | Engine development |
| Game Programming Patterns | Robert Nystrom | Architecture patterns |
| Fundamentals of Computer Graphics | Marschner & Shirley | Academic, comprehensive |
Online Resources
- Scratchapixel - Outstanding free tutorials
- LearnOpenGL - Best OpenGL tutorial
- TinyRenderer - Software renderer walkthrough
- Lodev’s Tutorials - Raycasting and more
- Pikuma Courses - Paid but excellent (3D graphics, game physics)
Code References
- stb libraries - Single-file C libraries (image loading, font rendering)
- SDL2 - Cross-platform window/input
- GLFW - OpenGL window/context
- GLAD - OpenGL loader
Summary
| # | Project | Main Language |
|---|---|---|
| 1 | Framebuffer and Pixel Plotting | C |
| 2 | 2D Line Drawing (Bresenham’s Algorithm) | C |
| 3 | 2D Shape Rasterization (Circles, Triangles, Polygons) | C |
| 4 | 2D Transformations and Animation | C |
| 5 | Software 3D Renderer - Wireframe | C |
| 6 | Software 3D Renderer - Filled Triangles with Z-Buffer | C |
| 7 | Flat Shading and Diffuse Lighting | C |
| 8 | Gouraud and Phong Shading | C |
| 9 | Texture Mapping | C |
| 10 | Camera System (FPS-Style Controls) | C |
| 11 | Ray Tracer - Basic Spheres and Planes | C |
| 12 | Ray Tracer - Materials (Metal, Glass, Diffuse) | C |
| 13 | Ray Tracer - Anti-Aliasing and Soft Shadows | C |
| 14 | Clipping and View Frustum Culling | C |
| 15 | Simple OpenGL Renderer | C |
| 16 | 2.5D DOOM-Style Raycaster | C |
| 17 | Particle System | C |
| 18 | Font Rendering | C |
| 19 | 2D Game with Custom Graphics Engine | C |
| 20 | Mini 3D Game Engine | C |
| Final | Complete 3D Game (Capstone) | C |
Sources
- Pikuma - 3D Computer Graphics Programming
- Scratchapixel 4.0
- Computer Graphics from Scratch by Gabriel Gambetta
- TinyRenderer by ssloy
- zauonlok/renderer - Software Renderer in C89
- Ray Tracing in One Weekend
- Ray Tracing in One Weekend in C
- LearnOpenGL.com
- OpenGL Tutorial
- Kohi Game Engine by Travis Vroman
- PRDeving’s Game Engine in Pure C
- DOOM-like Game Engine Tutorial
- Lodev’s Raycasting Tutorial
- stb libraries by Sean Barrett