Project 6: HTTP/1.1 Server with Request Pooling (Capstone)

Build a minimal HTTP/1.1 server that parses requests safely, uses a request-scoped allocator, and enforces strict invariants.

Quick Reference

| Attribute | Value |
|-----------|-------|
| Difficulty | Expert |
| Time Estimate | 2-4 weeks |
| Main Programming Language | C |
| Alternative Programming Languages | Rust, Zig |
| Coolness Level | Level 4 (Hardcore Tech Flex) |
| Business Potential | Level 4 (Service foundation) |
| Prerequisites | Sockets, buffers, parsing, memory allocation |
| Key Topics | HTTP parsing, buffer invariants, request pooling |

1. Learning Objectives

By completing this project, you will:

  1. Implement a safe HTTP/1.1 request parser with strict limits.
  2. Manage partial reads and buffering without overflow or data loss.
  3. Use a per-request arena allocator for predictable lifetimes.
  4. Enforce invariants for parser state, buffers, and ownership.
  5. Produce deterministic server behavior and error responses.

2. All Theory Needed (Per-Concept Breakdown)

2.1 HTTP/1.1 Request Parsing and Grammar

Fundamentals

HTTP/1.1 requests consist of a request line, headers, and an optional body. Each line is terminated by CRLF (\r\n). The request line has three parts: method, path, and version. Headers are Name: Value pairs. A parser must enforce these rules strictly and reject invalid requests. The invariants are that the request line has exactly three tokens, headers are terminated by an empty line, and header sizes do not exceed configured limits. If these rules are not enforced, the server can misinterpret requests, leading to security bugs or crashes. Even small deviations, like accepting a missing HTTP version, can open the door to ambiguous parsing and request smuggling.

Deep Dive into the Concept

HTTP/1.1 is a text protocol with strict grammar. A robust server must parse it as a state machine with explicit states: reading request line, reading headers, reading body. The request line must match METHOD SP PATH SP HTTP/VERSION CRLF. If the request line is malformed, the server should reject the request before reading headers. This is an important invariant because it prevents partially valid requests from advancing the parser into inconsistent states.
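
The three-token invariant above can be sketched as a small C function. This is a minimal illustration, not the project's prescribed API; the name `parse_request_line` and its signature are assumptions for the example:

```c
#include <string.h>

/* Sketch: split "METHOD SP PATH SP HTTP/VERSION" into three tokens,
 * rejecting anything that deviates from the grammar.
 * Returns 0 on success, -1 on a malformed request line. */
static int parse_request_line(const char *line,
                              char *method, size_t mcap,
                              char *path, size_t pcap,
                              char *version, size_t vcap)
{
    const char *sp1 = strchr(line, ' ');
    if (!sp1) return -1;                      /* no first space */
    const char *sp2 = strchr(sp1 + 1, ' ');
    if (!sp2) return -1;                      /* no second space */
    if (strchr(sp2 + 1, ' ')) return -1;      /* more than three tokens */

    size_t mlen = (size_t)(sp1 - line);
    size_t plen = (size_t)(sp2 - sp1 - 1);
    size_t vlen = strlen(sp2 + 1);
    if (mlen == 0 || plen == 0 || vlen == 0) return -1;          /* empty token */
    if (mlen >= mcap || plen >= pcap || vlen >= vcap) return -1; /* size limits */

    /* Enforce the version invariant: must start with "HTTP/". */
    if (strncmp(sp2 + 1, "HTTP/", 5) != 0) return -1;

    memcpy(method, line, mlen);     method[mlen] = '\0';
    memcpy(path, sp1 + 1, plen);    path[plen] = '\0';
    memcpy(version, sp2 + 1, vlen); version[vlen] = '\0';
    return 0;
}
```

Note that the function rejects malformed input before copying anything, so a failed parse never leaves partially written output buffers.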

Headers are also constrained. Each header line must contain a colon separating the name from the value. Header names are case-insensitive and must not contain whitespace or control characters. Values may contain leading spaces but must not contain CR or LF. This is an injection risk: if you accept raw CR or LF in header values, you enable request smuggling or header splitting. Therefore, your parser must validate that header lines contain only allowed characters. This is not about being pedantic; it is about security and correctness.
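
The header-validation rules can be sketched like this; the function name and the out-parameter design are illustrative assumptions, not a required interface:

```c
#include <string.h>

/* Sketch: validate one header line (without its CRLF) and split it at the
 * first colon. Rejects a missing colon, an empty name, whitespace or
 * control characters in the name, and CR/LF or other control bytes in the
 * value -- the injection risks described above. */
static int parse_header_line(const char *line, const char **name_end,
                             const char **value_start)
{
    const char *colon = strchr(line, ':');
    if (!colon || colon == line) return -1;       /* no colon or empty name */

    for (const char *p = line; p < colon; p++) {  /* validate name bytes */
        unsigned char c = (unsigned char)*p;
        if (c <= ' ' || c == 127) return -1;      /* whitespace/control forbidden */
    }

    const char *v = colon + 1;
    while (*v == ' ' || *v == '\t') v++;          /* skip optional leading space */
    for (const char *p = v; *p; p++) {            /* validate value bytes */
        unsigned char c = (unsigned char)*p;
        if ((c < ' ' && c != '\t') || c == 127) return -1; /* CR/LF/control rejected */
    }

    *name_end = colon;
    *value_start = v;
    return 0;
}
```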

HTTP/1.1 allows persistent connections, which means multiple requests may be sent on the same TCP connection. Even if you implement only a single-request-per-connection server, you must still parse the request correctly and know when it ends. The end of headers is indicated by an empty line (CRLF). For requests with bodies, you must use Content-Length to know how many bytes to read. If you ignore Content-Length, you may mis-handle POST requests. For this project, you can implement GET-only with no body, but you should still parse and ignore bodies deterministically or explicitly reject them. The rule must be documented.

Parsing must handle partial reads. The request line and headers might arrive in multiple recv calls. Therefore, the parser must accept a buffer that may contain incomplete lines and preserve the partial data until the next read. This implies a critical invariant: the buffer must always contain a prefix of the request that is consistent with the parser state. For example, if you have read half of a header line, the parser should not advance to the next state. This is why a state machine design is mandatory.

Another crucial invariant is maximum size limits. Without limits, a client can send extremely long headers or a very long request line and cause memory exhaustion. You must choose limits (e.g., max request line length, max total headers size) and enforce them. If the limits are exceeded, you must return a 413 or 414 error and close the connection. This is a deterministic safety rule that protects your server.
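
A deterministic limit check can be expressed in a few lines. The constant and the CR-only scan here are simplifying assumptions for the sketch (a full implementation would look for the complete CRLF):

```c
#include <string.h>

enum { MAX_REQ_LINE = 4096 };   /* illustrative limit, choose your own */

/* Sketch: if MAX_REQ_LINE bytes have arrived with no CR, the request line
 * can never become valid, so reject (e.g. with 414) instead of buffering
 * forever. Returns 1 when the limit is definitively exceeded. */
static int req_line_over_limit(const char *buf, size_t len)
{
    if (len < MAX_REQ_LINE) return 0;                 /* still within budget */
    return memchr(buf, '\r', MAX_REQ_LINE) == NULL;   /* no CR: cannot complete */
}
```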

HTTP parsing also includes normalization of the path. At minimum, you should reject paths containing .. or NUL bytes to prevent directory traversal when serving static files. This is a security invariant: the path must resolve within the document root. Even for a minimal server, you should enforce this rule because it is a common vulnerability.
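
A minimal version of this security invariant might look like the following. It conservatively rejects any occurrence of `..` (stricter than segment-wise normalization, which would also reject names like `a..b`); the function name is an assumption:

```c
#include <string.h>

/* Sketch: reject unsafe paths before touching the filesystem. `plen` is
 * passed explicitly because a path with an embedded NUL cannot be checked
 * with str* functions alone. Returns 1 if the path is safe to resolve. */
static int path_is_safe(const char *path, size_t plen)
{
    if (plen == 0 || path[0] != '/') return 0;   /* must be absolute */
    if (memchr(path, '\0', plen)) return 0;      /* embedded NUL byte */
    for (size_t i = 0; i + 1 < plen; i++)        /* any ".." occurrence */
        if (path[i] == '.' && path[i + 1] == '.') return 0;
    return 1;
}
```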

Finally, the parser must be deterministic in error handling. For any malformed request, you should return a specific status code and log the exact error. This is not only for debugging; it prevents ambiguous behavior and makes tests reliable. A good approach is to define error codes for parser failures and map them to HTTP responses.

How this fits into the project

This concept defines the HTTP parser and the invariants that keep it correct. The server’s correctness depends on it.

Definitions & key terms

  • Request line: METHOD SP PATH SP VERSION.
  • CRLF: Carriage return and line feed, \r\n.
  • Header section: Lines until an empty CRLF line.
  • Content-Length: Header indicating body size.

Mental model diagram (ASCII)

STATE: REQ_LINE -> HEADERS -> BODY -> DONE

How it works (step-by-step, with invariants and failure modes)

  1. Read bytes into buffer.
  2. Parse request line if CRLF found.
  3. Parse headers line by line until CRLF CRLF.
  4. If Content-Length > 0, read body bytes.
  5. Validate invariants and respond.

Failure modes: missing CRLF, header overflow, invalid method, path traversal.

Minimal concrete example

if (!find_crlf(buf)) return NEED_MORE_DATA;

Common misconceptions

  • “HTTP is just strings; parsing is easy.” (It is easy to get wrong.)
  • “You can ignore limits.” (That invites DoS attacks.)
  • “If the request line is valid, headers are safe.” (Headers can still be malicious.)

Check-your-understanding questions

  1. Why must you enforce maximum header sizes?
  2. What marks the end of headers?
  3. Why is path normalization necessary?

Check-your-understanding answers

  1. To prevent memory exhaustion and DoS attacks.
  2. A blank line: \r\n\r\n.
  3. To prevent directory traversal and unsafe file access.

Real-world applications

  • Web servers, proxies, and load balancers.
  • HTTP libraries in embedded devices.

Where you will apply it

  • This project: See §3.2 Functional Requirements and §4.4 Algorithm Overview.
  • Also used in: P04 JSON Parser for parsing techniques.

References

  • RFC 9112 (HTTP/1.1 message syntax).
  • “UNIX Network Programming” Vol 1 by Stevens.

Key insights

HTTP parsing is simple in theory but demands strict invariants to be safe.

Summary

A correct HTTP parser is defined by strict grammar, limits, and deterministic error handling.

Homework/Exercises to practice the concept

  1. Write a function that splits a request line into three tokens.
  2. Implement a header parser that rejects lines without a colon.
  3. Add a maximum header size and test with an oversized input.

Solutions to the homework/exercises

  1. Find two spaces and split method/path/version.
  2. Search for ‘:’ and validate both name and value.
  3. If buffer exceeds limit, return a 413 error.

2.2 Streaming I/O, Buffers, and Partial Reads

Fundamentals

TCP is a stream protocol: data arrives in arbitrary chunks. This means you may receive half a request line, an entire request, or multiple requests in a single recv. A server must handle partial reads by buffering data and parsing only when complete lines are available. The buffer invariants are that you never read beyond the buffer length, you never discard unread data, and you always know how many bytes are valid. If these invariants are broken, the server will misparse requests or crash. Thinking in terms of a stream, not packets, is the mental shift that makes these invariants intuitive.

Deep Dive into the Concept

Partial reads are one of the most common sources of bugs in network servers. recv returns the number of bytes available at that moment, which may be less than a full request line or header block. If you assume a full line is present, you will parse incomplete data and either reject valid requests or corrupt state. Therefore, your parser must operate on a buffer with explicit length tracking. A typical pattern is to maintain a buffer and a len indicating how many bytes are valid. When you read more data, you append it to the buffer and update len. When you consume data (e.g., after parsing a line), you either move the remaining bytes down or maintain an offset pointer into the buffer.

The invariant is that len always reflects the number of valid bytes in the buffer. If you update len incorrectly, you will either lose data or parse garbage. Another invariant is that the buffer must not overflow. If the incoming data would exceed the buffer capacity, you must either expand the buffer (dynamic growth) or reject the request. For this project, you should define a maximum buffer size and reject if it is exceeded. This is a safety decision and should be enforced consistently.

Parsing with partial reads typically requires a state machine. For example, in the REQ_LINE state, you scan for CRLF. If not found, you return and read more data. When found, you parse the line and move to the HEADERS state, leaving any extra data in the buffer. Similarly, for headers, you scan for CRLF CRLF. The key invariant is that you only advance states when the required delimiter is present. This ensures that the parser is never out of sync with the buffer contents.
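
The "advance only on a complete delimiter" rule reduces to a scan like this (name illustrative):

```c
#include <stddef.h>

/* Sketch: scan the valid region [buf, buf+len) for CRLF. Returns the number
 * of bytes the line occupies including its CRLF, or 0 if the line is still
 * incomplete and the caller must read more data. */
static size_t find_line(const char *buf, size_t len)
{
    for (size_t i = 0; i + 1 < len; i++) {
        if (buf[i] == '\r' && buf[i + 1] == '\n')
            return i + 2;   /* full line present: consume through the CRLF */
    }
    return 0;               /* incomplete: keep the bytes and read more */
}
```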

Buffer management also includes compaction. After consuming some bytes, you may want to move the remaining bytes to the start of the buffer to make room for more data. This is an overlapping copy operation and must be done with memmove. If you use memcpy, you may corrupt data. This is similar to the gap buffer concept: overlapping memory movement must be safe. Additionally, you must adjust offsets correctly after compaction. If you forget to adjust offsets, your parser will read the wrong bytes.
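
Compaction itself is a two-line operation once the invariants are clear; this helper is a sketch with an assumed name:

```c
#include <string.h>

/* Sketch: after `consumed` bytes have been parsed, move the remaining
 * len - consumed bytes to the front of the buffer. memmove is required
 * because the source and destination ranges may overlap. Returns the new
 * valid length. */
static size_t compact(char *buf, size_t len, size_t consumed)
{
    size_t remaining = len - consumed;
    memmove(buf, buf + consumed, remaining);  /* overlap-safe copy */
    return remaining;                         /* caller sets len = remaining */
}
```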

A subtle point is that HTTP headers are line-based, but bodies can be binary. If you decide to support request bodies, you must treat body data as raw bytes, not as text. This means you should not search for CRLF in the body; you should read exactly the number of bytes specified by Content-Length. This is another invariant: once you enter the BODY state, you must count bytes rather than scan for delimiters. Mixing these approaches leads to parsing errors and security issues.
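
The count-don't-scan invariant for the BODY state can be captured in a tiny helper (illustrative name):

```c
#include <stddef.h>

/* Sketch: `remaining` starts at Content-Length; each read consumes up to
 * `avail` body bytes. Returns the bytes still owed; 0 means the body is
 * complete. No delimiter scanning is ever done here. */
static size_t body_consume(size_t remaining, size_t avail)
{
    size_t take = avail < remaining ? avail : remaining;
    return remaining - take;
}
```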

Finally, streaming I/O requires deterministic timeouts and limits. If a client sends data too slowly, the server could hang forever. For this project, you can ignore timeouts, but you should at least design the buffer logic so that it can handle stalled reads without corrupting state. This means the parser state must be preserved between reads and must be resumable.

How this fits into the project

This concept drives the buffering strategy and the state machine that handles partial requests. It is central to the server’s correctness.

Definitions & key terms

  • Partial read: A read that returns fewer bytes than a complete message.
  • Buffer length: The count of valid bytes in the buffer.
  • Compaction: Moving unconsumed bytes to the start of the buffer.

Mental model diagram (ASCII)

buffer: [parsed][unparsed..............]
         ^ consumed        ^ len
compaction -> move unparsed to start

How it works (step-by-step, with invariants and failure modes)

  1. Read into buffer at offset len.
  2. Update len by bytes read.
  3. Attempt to parse based on current state.
  4. If a full line is consumed, advance state and adjust buffer.
  5. If not enough data, read again.

Failure modes: dropping bytes, misaligned offsets, buffer overflow.

Minimal concrete example

ssize_t n = recv(fd, buf + len, cap - len, 0);
if (n <= 0) return -1; /* 0: peer closed, <0: error; never add these to len */
len += (size_t)n;

Common misconceptions

  • “recv returns a full line.” (It does not.)
  • “You can parse as soon as any data arrives.” (You need complete delimiters.)
  • “Buffers can be arbitrarily large.” (They must be bounded.)

Check-your-understanding questions

  1. Why must you track len explicitly?
  2. What happens if you parse without finding CRLF?
  3. Why should you use memmove for compaction?

Check-your-understanding answers

  1. Because only part of the buffer is valid at any time.
  2. You may parse incomplete data and corrupt state.
  3. Because source and destination ranges overlap.

Real-world applications

  • Network servers, proxies, and parsers.
  • Any streaming protocol handling partial frames.

Where you will apply it

  • This project: See §3.2 Functional Requirements and §5.10 Phase 2.
  • Also used in: P02 Gap Buffer for memmove patterns.

References

  • Stevens, “UNIX Network Programming” Vol 1.
  • RFC 9112 examples on message framing.

Key insights

Streaming I/O requires stateful parsing and strict buffer invariants.

Summary

If you do not handle partial reads correctly, your server will misparse requests. Buffer invariants and state machines are the solution.

Homework/Exercises to practice the concept

  1. Write a small program that reads from stdin and prints complete lines only.
  2. Implement buffer compaction with memmove.
  3. Test with inputs split across multiple reads.

Solutions to the homework/exercises

  1. Accumulate data until newline is found, then print.
  2. Use memmove(buf, buf + consumed, len - consumed).
  3. Simulate reads of 1 byte at a time and verify output.

2.3 Request-Scoped Ownership and Pooling

Fundamentals

A request-scoped allocator (arena or pool) ensures that all memory allocated during request handling is freed at once when the request completes. This simplifies ownership: the request owns everything allocated for it, and the server resets the pool after responding. The invariant is that no pointer allocated for one request is used after the request ends. This prevents leaks and simplifies cleanup, but requires strict lifetime boundaries. It also encourages a clean separation between request data and long-lived server state, which makes reasoning about memory far easier. In practice, this means request objects should never be cached globally and should be destroyed or reset in a single, explicit step.

Deep Dive into the Concept

Request pooling is a practical application of the arena allocator from Project 3. Each incoming request gets a pool (either per-connection or per-request). When the request is complete, you reset the pool, reclaiming all memory at once. This is efficient because you avoid per-allocation frees and reduce fragmentation. The trade-off is that you must ensure no pointer escapes the request scope. This is a strict invariant that should be documented and enforced. It also means you should avoid caching request data unless you explicitly copy it into longer-lived storage.

In the server, the request pool is used for parsing headers, storing temporary strings, and constructing the response. For example, you might store header names and values as slices or copies allocated in the pool. The pool owns them until the response is sent, then they are invalid. This is safe because the request is done. But you must ensure that you do not store these pointers in any long-lived global structures. This is where ownership rules intersect with architecture: keep request data within a request context struct and never store it elsewhere.

Pooling also affects error handling. If a parse error occurs, you can simply reset the pool and close the connection or send an error response. You do not need to free each header or buffer manually. This simplifies error paths and reduces the risk of leaks. However, you must ensure that the pool reset happens exactly once per request, even on error. This is another invariant: each request ends with a pool reset, and no request uses a pool that has been reset.

There are design decisions here: you can allocate a pool per connection and reset it after each request, or you can allocate a pool per request. Per-connection pools avoid allocation overhead but require careful reset timing if multiple requests are pipelined. For this project, you can simplify by handling one request at a time per connection and resetting after the response. This keeps invariants simple and avoids pipelining complexity.

A pool allocator also provides deterministic memory usage. Because the pool has a fixed size, you can enforce a maximum request memory footprint. If the pool fills up, you can reject the request with a 413 or 500 response. This is a safety invariant that prevents memory exhaustion. It also makes testing easier: you can inject a small pool size to test failure paths deterministically.
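
A minimal bump-pool sketch, with illustrative names (the project's Arena from Project 3 may differ in detail), shows the overflow and reset invariants together:

```c
#include <stddef.h>

/* Sketch: allocate from a fixed request-scoped buffer; on overflow return
 * NULL so the caller can send a 413/500; reset reclaims everything at once
 * when the request ends. */
typedef struct {
    char  *base;
    size_t cap;
    size_t used;
} Pool;

static void *pool_alloc(Pool *p, size_t size)
{
    size = (size + 7) & ~(size_t)7;            /* keep allocations aligned */
    if (p->used + size > p->cap) return NULL;  /* overflow: reject request */
    void *ptr = p->base + p->used;
    p->used += size;
    return ptr;
}

static void pool_reset(Pool *p) { p->used = 0; } /* frees everything at once */
```

Injecting a small `cap` makes the overflow path trivially testable, which is exactly the deterministic failure testing described above.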

Finally, request-scoped ownership aligns with testing and debugging. You can validate that memory leaks do not occur by confirming that the pool’s used returns to zero after each request. This is a simple invariant that is easy to test and log. It provides a sanity check that your server is not leaking per-request memory.

One more subtle point is that pooling changes how you structure APIs. Functions that need temporary buffers should take a request context so they can allocate from the pool, instead of calling malloc directly. This keeps ownership consistent and prevents mixed allocation models that are hard to free correctly. By making the request context explicit, you make the lifetime boundary visible in your code, which reduces accidental pointer escapes.

How this fits into the project

This concept ties Project 3 (arena allocator) into the HTTP server and defines the memory ownership rules for request data.

Definitions & key terms

  • Request scope: The lifetime of a single request.
  • Pool reset: Reclaiming all request memory at once.
  • Request context: Struct holding per-request data and pool.

Mental model diagram (ASCII)

Request begins -> allocate in pool -> respond -> reset pool
All request pointers invalid after reset

How it works (step-by-step, with invariants and failure modes)

  1. Initialize request pool at request start.
  2. Allocate headers, buffers, and temporary strings in pool.
  3. Build response and send it.
  4. Reset pool and clear request state.

Failure modes: using pointers after reset, forgetting to reset on error, pool overflow.

Minimal concrete example

req->pool_used = 0; // reset

Common misconceptions

  • “Pooling hides leaks.” (It only hides them if you never reset.)
  • “Request data can be stored globally.” (That violates request lifetime.)
  • “Pool overflow can be ignored.” (It must be handled deterministically.)

Check-your-understanding questions

  1. Why is a request pool safer than per-node malloc/free?
  2. What happens if you keep a header pointer after reset?
  3. How do you handle pool overflow?

Check-your-understanding answers

  1. It centralizes cleanup and reduces error paths.
  2. The pointer becomes invalid and may point to reused memory.
  3. Return an error and reject the request.

Real-world applications

  • Web servers use per-request pools for headers and routing data.
  • HTTP proxies use pools for parsing and buffering.

Where you will apply it

  • This project: See §3.2 Functional Requirements and §4.3 Data Structures.
  • Also used in: P03 Memory Arena.

References

  • Arena allocation patterns in web servers.
  • Stevens, “UNIX Network Programming” Vol 1.

Key insights

Pooling makes memory management predictable, but only if you respect request lifetimes.

Summary

Request pools simplify cleanup and enforce ownership boundaries. They are the key to safe request handling.

Homework/Exercises to practice the concept

  1. Add a pool size limit and test overflow handling.
  2. Log pool usage before and after reset.
  3. Allocate request headers in the pool and verify they are freed after reset.

Solutions to the homework/exercises

  1. Return error when used + size > capacity.
  2. Assert that used == 0 after reset.
  3. Reset the pool and confirm header pointers are invalidated.

3. Project Specification

3.1 What You Will Build

A minimal HTTP/1.1 server that handles one request per connection, parses the request line and headers, serves static files, and uses a request-scoped memory pool.

Included:

  • TCP listener
  • Request parser with limits
  • Static file response
  • Request pool allocator

Excluded:

  • TLS
  • HTTP/2
  • Concurrent request handling

3.2 Functional Requirements

  1. Listen: Bind to a port and accept connections.
  2. Parse: Parse request line and headers safely.
  3. Serve: Serve files from a document root.
  4. Pool: Use per-request pool for allocations.
  5. Errors: Return correct HTTP error responses.

3.3 Non-Functional Requirements

  • Performance: Handle requests with minimal allocation overhead.
  • Reliability: No buffer overflows or use-after-free.
  • Usability: Clear configuration and deterministic behavior.

3.4 Example Usage / Output

./httpserver 8080 ./www
curl -v http://localhost:8080/index.html

3.5 Data Formats / Schemas / Protocols

Error response format:

HTTP/1.1 <code> <reason>
Content-Length: <n>
Content-Type: text/plain

<message>

3.6 Edge Cases

  • Request line too long.
  • Headers exceed limit.
  • Missing Host header (HTTP/1.1 requirement).
  • Path traversal attempts.

3.7 Real World Outcome

3.7.1 How to Run (Copy/Paste)

make
./httpserver 8080 ./www

3.7.2 Golden Path Demo (Deterministic)

Serve a fixed index.html and return 200 OK.

3.7.3 CLI Terminal Transcript (Exact)

$ ./httpserver 8080 ./www
[server] listening on :8080
[conn 1] GET /index.html
[conn 1] 200 OK (128 bytes)
$ curl -v http://localhost:8080/index.html
> GET /index.html HTTP/1.1
> Host: localhost:8080
>
< HTTP/1.1 200 OK
< Content-Length: 128
< Content-Type: text/html
<
<html>...</html>

3.7.4 Failure Demo (Deterministic)

$ printf 'GET /../../etc/passwd HTTP/1.1\r\nHost: x\r\n\r\n' | nc localhost 8080
HTTP/1.1 400 Bad Request
Content-Length: 24
Content-Type: text/plain

invalid path traversal

3.7.5 Exit Codes

  • 0: clean shutdown
  • 2: bind/listen failure
  • 3: parse error
  • 4: file not found

4. Solution Architecture

4.1 High-Level Design

client -> socket -> buffer -> parser -> request -> response -> socket
                          ^            |
                          |            v
                     request pool    file read

4.2 Key Components

| Component | Responsibility | Key Decisions |
|-----------|----------------|---------------|
| Listener | Accept connections | Blocking I/O |
| Parser | Parse request line/headers | State machine |
| Pool | Allocate request data | Reset per request |
| Responder | Build and send response | Static files only |

4.3 Data Structures (No Full Code)

typedef struct {
    char method[8];
    char path[256];
    size_t content_length;
    Header *headers;
    Arena pool;
} Request;

4.4 Algorithm Overview

Key Algorithm: Parse Loop

  1. Read bytes into buffer.
  2. If request line complete, parse it.
  3. Parse headers until CRLF CRLF.
  4. Validate request and serve file.

Complexity Analysis:

  • Time: O(n) in request size.
  • Space: O(n) within pool limits.

5. Implementation Guide

5.1 Development Environment Setup

cc --version
make --version

5.2 Project Structure

httpserver/
├── include/http.h
├── src/http.c
├── src/parser.c
├── src/server.c
├── tests/http_test.c
└── Makefile

5.3 The Core Question You’re Answering

“How do I parse and serve HTTP requests safely in C without memory bugs?”

5.4 Concepts You Must Understand First

  1. HTTP request grammar and parsing.
  2. Streaming I/O with partial reads.
  3. Request-scoped ownership via pooling.

5.5 Questions to Guide Your Design

  1. What limits will you enforce for request lines and headers?
  2. How will you buffer partial reads without overflow?
  3. How will you ensure path traversal is rejected?

5.6 Thinking Exercise

Simulate receiving a request line split across two recv calls. How does your parser resume?

5.7 The Interview Questions They’ll Ask

  1. Why must HTTP parsers handle partial reads?
  2. How do you prevent header buffer overflow?
  3. What is a safe maximum request size?

5.8 Hints in Layers

Hint 1: Start with blocking I/O and one request per connection.
Hint 2: Implement a strict state machine parser.
Hint 3: Use the arena from Project 3 for request memory.

5.9 Books That Will Help

| Topic | Book | Chapter |
|-------|------|---------|
| HTTP | RFC 9112 | All |
| Sockets | “UNIX Network Programming” Vol 1 | Ch. 4-6 |
| Allocators | “C Interfaces and Implementations” | Ch. 5 |

5.10 Implementation Phases

Phase 1: Core Server (4-5 days)

Goals: Accept connections and serve a fixed response.

Tasks:

  1. Implement socket setup and accept loop.
  2. Send a fixed 200 OK response.
  3. Add logging.

Checkpoint: curl receives a 200 OK response.

Phase 2: Parser and Buffering (6-8 days)

Goals: Parse request line and headers with limits.

Tasks:

  1. Implement buffered reads and state machine.
  2. Parse request line and validate method/path/version.
  3. Parse headers with size limits.

Checkpoint: Valid and invalid requests behave as expected.

Phase 3: Pooling and Static Files (5-7 days)

Goals: Serve files and manage request memory.

Tasks:

  1. Integrate request pool allocator.
  2. Implement path normalization and file serving.
  3. Add error responses.

Checkpoint: Static file served correctly and pool resets.

5.11 Key Implementation Decisions

| Decision | Options | Recommendation | Rationale |
|----------|---------|----------------|-----------|
| Request handling | Single or multiple per connection | Single | Simpler invariants |
| Buffer size | Fixed or dynamic | Fixed + limit | Deterministic safety |
| Memory model | Arena or malloc/free | Arena | Simpler cleanup |


6. Testing Strategy

6.1 Test Categories

| Category | Purpose | Examples |
|----------|---------|----------|
| Parser Tests | Valid/invalid requests | malformed request line |
| Buffer Tests | Partial reads | split CRLF |
| Security Tests | Path traversal | “../” in path |

6.2 Critical Test Cases

  1. Request line split across two reads.
  2. Headers exceeding maximum size.
  3. Path traversal attempt returns 400.

6.3 Test Data

GET /index.html HTTP/1.1\r\nHost: x\r\n\r\n

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

| Pitfall | Symptom | Solution |
|---------|---------|----------|
| Assuming full line in buffer | Parse errors | Use state machine |
| No header size limit | Memory blow-up | Enforce limits |
| Path traversal | Security issue | Normalize and reject |

7.2 Debugging Strategies

  • Log parser state transitions.
  • Use nc to send malformed requests.

7.3 Performance Traps

Reading files with small buffers can be slow. Use a moderate buffer size for file sending.


8. Extensions & Challenges

8.1 Beginner Extensions

  • Add Content-Type detection by file extension.
  • Add support for HEAD requests.

8.2 Intermediate Extensions

  • Add keep-alive support.
  • Add simple logging to a file.

8.3 Advanced Extensions

  • Add a minimal HTTP router.
  • Implement non-blocking I/O with select/poll.

9. Real-World Connections

9.1 Industry Applications

  • Web servers and reverse proxies.
  • Embedded HTTP interfaces.
  • tinyhttpd, civetweb (compare design choices).

9.2 Interview Relevance

  • Network programming and parsing questions.
  • Memory ownership and safety in server design.

10. Resources

10.1 Essential Reading

  • RFC 9112 (HTTP/1.1 message syntax).
  • “UNIX Network Programming” Vol 1.

10.2 Video Resources

  • Network programming lectures.

10.3 Tools & Documentation

  • curl and nc for manual testing.

11. Self-Assessment Checklist

11.1 Understanding

  • I can explain HTTP request grammar.
  • I can describe partial read handling.
  • I can explain request-scoped ownership.

11.2 Implementation

  • Parser tests pass.
  • Static file serving works.
  • Error responses are deterministic.

11.3 Growth

  • I can explain this server design in an interview.
  • I can extend it to keep-alive if needed.

12. Submission / Completion Criteria

Minimum Viable Completion:

  • Server accepts connections and parses request line/headers.
  • Deterministic error responses on invalid requests.

Full Completion:

  • Static file serving with path normalization.
  • Request pool reset after each request.

Excellence (Going Above & Beyond):

  • Keep-alive support and router.

13. Additional Content Rules (Hard Requirements)

13.1 Determinism

All demos use fixed inputs and produce deterministic responses.

13.2 Outcome Completeness

  • Success and failure demos included.
  • Exit codes specified in §3.7.5.

13.3 Cross-Linking

Each concept section links forward to where it is applied via its “Where you will apply it” entry.

13.4 No Placeholder Text

All content is complete and explicit.