Project 9: HTTP/API Testing Suite

An HTTP testing suite that reports response headers, timing phases, and TLS certificate info.

Quick Reference

Attribute | Value
Difficulty | Level 2: Intermediate
Time Estimate | 1 week
Main Programming Language | Bash
Alternative Programming Languages | Python, Go
Coolness Level | Level 2: Practical
Business Potential | 2. Micro-SaaS / Pro tool
Prerequisites | Basic Linux CLI
Key Topics | HTTP and TLS Observability; DNS Resolution and Name System Behavior; Transport, Sockets, and Connection State

1. Learning Objectives

By completing this project, you will:

  1. Build the core tool described in the project and validate output against a golden transcript.
  2. Explain how the tool maps to the Linux networking layer model.
  3. Diagnose at least one real or simulated failure using the tool’s output.

2. All Theory Needed (Per-Concept Breakdown)

This section includes every concept required to implement this project successfully.

HTTP and TLS Observability

Fundamentals HTTP is a request/response protocol built on top of TCP (or QUIC). Every interaction has a method, a URL, headers, and a response status. TLS secures HTTP by encrypting traffic and authenticating the server with certificates. Tools like curl and wget expose the HTTP exchange and timing phases: DNS lookup, TCP connect, TLS handshake, time to first byte (TTFB), and content transfer. Understanding these phases lets you pinpoint where an HTTP request is slow or failing. This concept focuses on interpreting HTTP and TLS behavior as observable system evidence rather than as opaque application behavior.

Deep Dive HTTP observability begins by treating each request as a pipeline. The client resolves the hostname (DNS), opens a TCP connection, negotiates TLS (if HTTPS), sends the HTTP request, and waits for the response. Each phase can fail or add latency. A slow DNS resolver can add hundreds of milliseconds before the connection even starts. A congested network can slow the TCP handshake. A misconfigured TLS certificate can cause a hard failure even if the network is healthy. By breaking the request into phases, you can isolate the bottleneck.

TLS adds its own structure. The client sends a ClientHello with supported cipher suites and extensions, including SNI (Server Name Indication). The server chooses a cipher suite and returns its certificate chain. The client validates the chain against trusted roots and verifies the hostname. Any mismatch or missing intermediate certificate can fail the handshake. From an operator standpoint, this means “HTTPS is broken” could be a certificate issue, not a network issue. Tools like curl -v expose the certificate details and handshake outcome.

HTTP itself has semantics that affect performance. Redirects add additional round trips, often across different hosts. Persistent connections and HTTP/2 multiplexing reduce the cost of repeated requests, while disabling keep-alives forces expensive new connections. Headers like Cache-Control determine whether responses can be reused or must be revalidated. Authentication headers add another layer of failure modes (expired tokens, missing credentials, or incorrect scopes). Observability here means reading the response headers and status codes as evidence: 301/302 show redirects, 401 indicates auth problems, 429 indicates rate limiting.
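
This evidence is visible from the shell. A minimal sketch (api.example.com is a placeholder target): curl's num_redirects and url_effective write-out variables report how many hops were followed and where the chain ended.

$ curl -sIL -o /dev/null \
    -w 'status=%{http_code} redirects=%{num_redirects} final=%{url_effective}\n' \
    https://api.example.com/status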

Timing is the most practical diagnostic signal. curl can output time_namelookup, time_connect, time_appconnect (TLS), time_starttransfer (TTFB), and time_total. A high TTFB suggests a slow upstream application; a high time_connect suggests network latency; a high time_appconnect suggests TLS negotiation issues or deep certificate chains. This breakdown lets you identify whether the problem is the network, the TLS layer, or the application itself. It turns “slow API” into “DNS adds 200ms, server adds 1.2s.”
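
A sketch of that timing breakdown (the URL is a placeholder):

$ curl -s -o /dev/null \
    -w 'dns=%{time_namelookup} connect=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
    https://api.example.com/status

Note that these timers are cumulative from the start of the request, so the TLS phase alone is time_appconnect minus time_connect; for plain HTTP, time_appconnect stays 0.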

Finally, HTTP observability intersects with security. Certificate expiry, weak ciphers, or hostname mismatches can cause clients to refuse connections. TLS versions also matter: older servers with outdated protocols may fail modern clients. For diagnostics, the operator needs to verify the certificate subject, issuer, validity period, and alternative names. That is why this concept belongs in a networking tools guide: HTTP and TLS are application-visible layers, but the evidence is collected by networking tools.
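
Those certificate fields can be pulled from the shell. A sketch using openssl (the host is a placeholder; the -ext option needs OpenSSL 1.1.1 or newer):

# Fetch the server certificate (SNI set via -servername) and print the
# subject, issuer, validity window, and subject alternative names.
$ echo | openssl s_client -connect api.example.com:443 -servername api.example.com 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates -ext subjectAltName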

How this fits into the projects

  • HTTP/API Testing Suite (Project 9)
  • Network Troubleshooting Wizard (Project 13)

Definitions & key terms

  • TTFB: Time to first byte; time until the first response byte arrives.
  • SNI: Server Name Indication; TLS extension indicating the requested hostname.
  • TLS handshake: Cryptographic negotiation and certificate verification process.

Mental model diagram

DNS -> TCP connect -> TLS handshake -> HTTP request -> HTTP response

How it works (step-by-step, invariants, failure modes)

  1. Resolve hostname to IP.
  2. Establish TCP connection.
  3. Negotiate TLS and validate certificate.
  4. Send HTTP request with headers.
  5. Receive status + headers + body.

Invariants: TLS must validate for HTTPS; status codes reflect server state. Failure modes: bad DNS, TLS validation failure, auth errors, redirect loops.

Minimal concrete example Protocol transcript (simplified):

GET /status HTTP/1.1
Host: api.example.com

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: max-age=60

Common misconceptions

  • “HTTPS failure means the network is down.” (It may be a certificate issue.)
  • “TTFB equals network latency.” (TTFB includes server processing time.)

Check-your-understanding questions

  1. What does time_appconnect represent in curl timing?
  2. Why might a redirect chain cause slow responses?
  3. How does SNI affect TLS validation?

Check-your-understanding answers

  1. Time spent completing the TLS handshake.
  2. Each redirect adds another DNS/TCP/TLS cycle.
  3. SNI tells the server which certificate to present for the hostname.

Real-world applications

  • API monitoring, CDN debugging, and TLS certificate audits.

Where you’ll apply it Projects 9 and 13.

References

  • RFC 9110 (HTTP semantics)
  • TLS 1.3 (RFC 8446)

Key insights HTTP performance issues are often phase-specific; timing breakdowns reveal the true bottleneck.

Summary You can now interpret HTTP/TLS behavior as a sequence of measurable phases and diagnose where failures occur.

Homework/Exercises to practice the concept

  • Capture curl timing for three endpoints and compare phase bottlenecks.
  • Inspect TLS certificates for two sites and explain differences in chains.

Solutions to the homework/exercises

  • Identify which phase dominates for each endpoint; DNS-heavy, TLS-heavy, or server-heavy.
  • Certificate chains differ by issuer and intermediates; missing intermediates cause validation errors.

DNS Resolution and Name System Behavior

Fundamentals DNS is the internet’s naming system: it maps human-friendly names to resource records such as A, AAAA, MX, and TXT. A client (stub resolver) typically asks a recursive resolver to answer. If the recursive resolver does not have the answer cached, it follows the hierarchy: root servers point to TLD servers, which point to authoritative servers for the domain. RFC 1034 defines the conceptual model and RFC 1035 defines the protocol and message format. The root zone is served by 13 named authorities (A through M) with many anycast instances worldwide. On Linux, name resolution is often mediated by systemd-resolved; resolvectl shows which upstream servers are in use, whether DNSSEC validation is enabled, and which interface supplied the configuration. This chapter teaches you to treat DNS as a multi-stage system with caches, delegation, and failure modes rather than as a simple lookup table.

Deep Dive DNS resolution is a distributed, cached workflow with explicit authority boundaries. The stub resolver (part of glibc, systemd-resolved, or another resolver component) forwards a query to a recursive resolver. The recursive resolver answers from cache if possible, or performs iterative resolution: it asks a root server for the TLD delegation, asks the TLD server for the domain’s authoritative server, and then asks the authoritative server for the actual record. Each response contains referrals and glue records, and the resolver follows them until it obtains an authoritative answer. This delegation chain explains why DNS failures can occur in specific segments: a root server issue affects only the first step, while a broken authoritative server affects only its zone.

Caching is central to DNS correctness. Every answer has a TTL, and resolvers cache both positive and negative responses. A short TTL allows rapid changes but increases load and latency; a long TTL increases stability but delays recovery from mistakes. Negative caching (caching NXDOMAIN) can cause failures to persist longer than expected. When you troubleshoot DNS, you must distinguish between the authoritative truth and the cached reality. This is why comparing multiple resolvers is such a powerful technique: if one resolver is wrong, it is usually a cache or policy issue; if all resolvers are wrong, the authoritative zone is likely at fault.

Linux introduces an additional layer of complexity: multiple components can manage resolver configuration. systemd-resolved may serve a local stub address (often 127.0.0.53), NetworkManager may set per-interface DNS servers, and VPN clients may override DNS settings. resolvectl surfaces the runtime state, revealing which upstreams are actually being used and which interface contributed them. This is essential when you see “DNS works sometimes,” because the system might be switching between upstreams or applying split DNS rules. Without this visibility you might debug the wrong resolver entirely.
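
resolvectl makes that runtime state directly visible; for example:

# Which upstream servers are in use, and which interface supplied them?
$ resolvectl status
# Resolve one name through the system resolver, exactly as applications would.
$ resolvectl query example.com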

DNSSEC adds cryptographic integrity. It uses signatures (RRSIG) and chain-of-trust records (DS, DNSKEY) to allow a validating resolver to verify that an answer has not been tampered with. If validation fails, the resolver can return a “bogus” result, which is functionally a failure even if the record exists. This is not a DNSSEC bug; it is the intended protection. The important mental model is: DNSSEC provides integrity, not availability. A missing signature or a broken chain can cause resolution failure even when the authoritative server is reachable.

Failure modes map cleanly to the resolution chain. NXDOMAIN can be legitimate or a poisoned response. SERVFAIL can indicate upstream outages, misconfigured DNSSEC, or authoritative server errors. Inconsistent answers across resolvers point to caching, geo-based responses, or split-horizon DNS. The proper diagnostic approach is layered: query the system resolver (what applications see), query a public recursive resolver (what the internet sees), then query authoritative servers directly (the truth for the zone). If those disagree, you have located the fault boundary. This is exactly the diagnostic muscle the DNS Deep Dive Tool project will train.
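
That layered approach maps directly onto dig commands. A minimal sketch (1.1.1.1 is one public resolver among many, and ns1.example.net stands in for whatever the NS query actually returns):

# 1. What applications see: the system resolver.
$ dig example.com A +short
# 2. What the internet sees: a public recursive resolver.
$ dig @1.1.1.1 example.com A +short
# 3. The truth for the zone: ask an authoritative server directly, no recursion.
$ dig NS example.com +short
$ dig @ns1.example.net example.com A +norecurse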

Finally, remember that DNS is a dependency for nearly all applications. A slow or inconsistent resolver adds latency to every request. That means “network is slow” can be a DNS problem even if packets are flowing perfectly. By treating DNS as a system with hierarchies, caches, and validation, you gain the ability to diagnose outages that look random but are actually deterministic.

How this fits into the projects

  • DNS Deep Dive Tool (Project 3)
  • Connectivity Diagnostic Suite (Project 2)

Definitions & key terms

  • Resolver: Client or service that performs DNS lookups for applications.
  • Authoritative server: DNS server that hosts the original records for a zone.
  • TTL: Time a record can be cached.

Mental model diagram

App -> Stub Resolver -> Recursive Resolver
                       |-> Root -> TLD -> Authoritative
                       |-> Cache

How it works (step-by-step, invariants, failure modes)

  1. App asks stub resolver for name.
  2. Stub asks recursive resolver.
  3. Recursive uses cache or queries root/TLD/authoritative.
  4. Answer returned, cached for TTL.

Invariants: DNS is hierarchical; records are cached with TTL. Failure modes: wrong resolver, DNSSEC validation failure, stale cache.

Minimal concrete example Protocol transcript (simplified):

Query: A example.com
Root -> referral to .com
TLD -> referral to example.com authoritative
Auth -> A 93.184.216.34 TTL 86400

Common misconceptions

  • “DNS is just a file.” (It is a distributed, cached system.)
  • “If one resolver works, DNS is fine.” (Different resolvers can have different caches.)

Check-your-understanding questions

  1. What does a recursive resolver do that a stub resolver does not?
  2. Why can two users see different DNS answers for the same name?
  3. Why can DNSSEC cause lookups to fail even if records exist?

Check-your-understanding answers

  1. It performs iterative queries and caching on behalf of the client.
  2. Caches and different upstream resolvers yield different answers.
  3. Missing or invalid signatures cause validation failure.

Real-world applications

  • Debugging website outages, email misrouting, and CDN propagation issues.

Where you’ll apply it Projects 2 and 3.

References

  • DNS conceptual and protocol standards (RFC 1034/1035).
  • Root servers and 13 named authorities (IANA).
  • resolvectl description (systemd-resolved interface).

Key insights DNS failures are often cache or resolver-path problems, not record problems.

Summary You now know the DNS chain of responsibility and how Linux exposes its resolver state.

Homework/Exercises to practice the concept

  • Draw the resolution path for a domain with a CNAME that points to a CDN.
  • Explain how TTL affects incident recovery timelines.

Solutions to the homework/exercises

  • The resolver must follow the CNAME to its target and query that name’s authoritative servers.
  • Short TTLs speed recovery but increase query load; long TTLs delay changes.

Transport, Sockets, and Connection State

Fundamentals Transport protocols are where application intent becomes network behavior. TCP provides reliable, ordered streams with connection state; UDP provides connectionless datagrams with minimal overhead. Linux exposes the kernel’s view of these endpoints as sockets, and ss is the modern tool that surfaces socket state, queues, and ownership. The TCP state machine (LISTEN, SYN_RECV, ESTABLISHED, TIME_WAIT, CLOSE_WAIT) is the lens through which you interpret what ss shows. If you can read that state correctly, you can diagnose whether the issue is in the app (not accepting connections), the network (packets lost or blocked), or the kernel (resource limits, backlogs, or port exhaustion).

Deep Dive Sockets are kernel objects that bind an application to a local address and port (or a Unix path), and they encapsulate the transport protocol’s lifecycle. For TCP, the lifecycle is explicit: LISTEN indicates a server is waiting; SYN_SENT and SYN_RECV indicate the handshake is in progress; ESTABLISHED indicates data transfer; FIN_WAIT and TIME_WAIT indicate closure; CLOSE_WAIT indicates the peer closed while the local app has not. Each state corresponds to a specific point in the TCP state machine, and ss exposes these states along with queue depths, timers, and owning processes. This is why ss is the foundation of serious network debugging: it shows what the kernel believes about every connection, not what you think should be happening.

Queue depths (Recv-Q and Send-Q) are among the most underused diagnostics. A high Recv-Q typically means the application is not reading fast enough; a high Send-Q can mean congestion, a slow receiver, or a blocked network path. These counters let you distinguish “network slow” from “application slow” in seconds. Combine this with state counts and you can identify issues like SYN floods (many SYN_RECV), port exhaustion (large numbers of TIME_WAIT or open file limits), or application bugs (CLOSE_WAIT buildup because the app never closes its sockets).
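
Both signals are one ss invocation away; a sketch:

# Count sockets by TCP state; spikes in SYN-RECV or CLOSE-WAIT stand out here.
$ ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn
# Established sockets with backed-up queues (first two columns are Recv-Q/Send-Q).
$ ss -tan state established | awk 'NR>1 && ($1 > 0 || $2 > 0)'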

Understanding TIME_WAIT is critical. TIME_WAIT is not a broken connection; it is a safety mechanism that ensures late packets from an old connection cannot corrupt a new one that reuses the same 4-tuple. At scale, TIME_WAIT is normal. It only becomes a problem when it exhausts ephemeral ports or indicates inefficient connection patterns (e.g., many short-lived connections without reuse). That distinction matters in incident response: you should not “fix” TIME_WAIT without first proving that it is causing a real resource limit.
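
Before "fixing" TIME_WAIT, measure it against the actual resource limit:

# How many TIME_WAIT sockets exist right now?
$ ss -tan state time-wait | wc -l
# The ephemeral port range those sockets draw from.
$ cat /proc/sys/net/ipv4/ip_local_port_range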

UDP requires a different interpretation. UDP sockets do not have a state machine; they are simply endpoints that may send or receive datagrams. Seeing a UDP socket in ss does not imply a session exists. For local IPC, Unix domain sockets appear alongside network sockets, which means you must be able to distinguish them to avoid false assumptions about external connectivity. A “port in use” error might be a Unix socket, not a TCP socket. That difference changes your fix.
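
Listing both families side by side avoids that false assumption:

# Listening TCP/UDP sockets with owning processes.
$ ss -ltnup
# Listening Unix domain sockets; a "port in use" culprit may live here.
$ ss -lxp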

Modern ss output also includes timers and TCP internal statistics that hint at congestion control and retransmissions. While you do not need to tune these in this guide, you should learn to interpret them: persistent retransmissions suggest path loss; a growing send queue suggests the receiver is slow or the path is constrained; a backlog of pending connections suggests the server is overloaded or under-provisioned. These are operational signals, not academic trivia.
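
Those internals are exposed by the -i flag:

# Per-connection TCP internals: rtt, cwnd, retransmissions, timers.
$ ss -tin state established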

Finally, always correlate socket state with application behavior. If a server reports LISTEN on the expected port, but clients cannot connect, you now know the failure is between the network and the socket. If the server shows no LISTEN state, the issue is in the application or configuration. That correlation is the essence of socket-level troubleshooting. This chapter gives you the vocabulary and mental model to move from “it times out” to “the kernel never completed the handshake because SYNs were dropped at the firewall,” which is exactly what senior operators do in production.
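
The LISTEN check itself is a one-liner (sudo may be needed to see processes owned by other users):

# Is anything listening on 443, and which process owns it?
$ sudo ss -ltnp '( sport = :443 )'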

How this fits into the projects

  • Socket State Analyzer (Project 4)
  • Network Troubleshooting Wizard (Project 13)
  • Real-Time Network Security Monitor (Project 15)

Definitions & key terms

  • Socket: Kernel object representing a network endpoint.
  • TIME_WAIT: TCP state that prevents old packets from interfering with new connections.
  • Recv-Q / Send-Q: Kernel buffers for received and outgoing data.

Mental model diagram

LISTEN -> SYN_RECV -> ESTABLISHED -> FIN_WAIT -> TIME_WAIT
   ^                                          |
   |------------------ new connection --------|

How it works (step-by-step, invariants, failure modes)

  1. Server socket enters LISTEN.
  2. Client sends SYN -> server SYN_RECV.
  3. ACK completes handshake -> ESTABLISHED.
  4. FIN/ACK close -> TIME_WAIT.

Invariants: handshake required for TCP; TIME_WAIT exists for safety. Failure modes: excessive TIME_WAIT, CLOSE_WAIT due to app not closing.

Minimal concrete example Socket state excerpt (conceptual):

LISTEN 0.0.0.0:443 -> nginx
ESTAB  192.168.1.10:443 <-> 203.0.113.5:52341
TIME_WAIT 192.168.1.10:443 <-> 203.0.113.7:52388

Common misconceptions

  • “TIME_WAIT means the connection is stuck.” (It often means it closed correctly.)
  • “UDP has a state machine.” (It does not in the TCP sense.)

Check-your-understanding questions

  1. What does CLOSE_WAIT imply about the application?
  2. Why can TIME_WAIT grow large under load?
  3. What does a high Recv-Q indicate?

Check-your-understanding answers

  1. The application has not closed its side after the peer closed.
  2. Many short-lived connections create many TIME_WAIT sockets.
  3. The application is not reading fast enough.

Real-world applications

  • Diagnosing web server overload, port exhaustion, and connection leaks.

Where you’ll apply it Projects 4, 13, 15.

References

  • ss(8) description and purpose.

Key insights Socket state is the most direct evidence of application-level networking health.

Summary You can now interpret socket state to distinguish network, kernel, and application failures.

Homework/Exercises to practice the concept

  • Sketch the TCP state machine with the states you see in ss output.
  • Explain how a SYN flood would appear in socket state counts.

Solutions to the homework/exercises

  • The state machine includes LISTEN, SYN_RECV, ESTABLISHED, FIN_WAIT, TIME_WAIT, CLOSE_WAIT.
  • A SYN flood appears as elevated SYN_RECV and possibly backlog overflow.

3. Project Specification

3.1 What You Will Build

An HTTP testing suite that reports response headers, timing phases, and TLS certificate info.

3.2 Functional Requirements

  1. Core data collection: Gather the required system/network data reliably.
  2. Interpretation layer: Translate raw outputs into human-readable insights.
  3. Deterministic output: Produce stable, comparable results across runs.
  4. Error handling: Detect missing privileges, tools, or unsupported interfaces.

3.3 Non-Functional Requirements

  • Performance: Runs in under 5 seconds for baseline mode.
  • Reliability: Handles missing data sources gracefully.
  • Usability: Output is readable without post-processing.

3.4 Example Usage / Output

$ ./httptest.sh https://api.example.com/status

HTTP ANALYSIS
Status: 200 OK
Headers:
  Content-Type: application/json
  Cache-Control: max-age=60
Timing:
  DNS: 23 ms
  Connect: 45 ms
  TLS: 89 ms
  TTFB: 52 ms
Total: 221 ms

3.5 Data Formats / Schemas / Protocols

  • Input: target URL(s) on the command line, plus raw curl output (headers and -w timing values).
  • Output: A structured report with sections and summarized metrics.

3.6 Edge Cases

  • Missing tool binaries or insufficient permissions.
  • Interfaces or hosts that return no data.
  • Transient states (link flaps, intermittent loss).

3.7 Real World Outcome

$ ./httptest.sh https://api.example.com/status

HTTP ANALYSIS
Status: 200 OK
Headers:
  Content-Type: application/json
  Cache-Control: max-age=60
Timing:
  DNS: 23 ms
  Connect: 45 ms
  TLS: 89 ms
  TTFB: 52 ms
Total: 221 ms

3.7.1 How to Run (Copy/Paste)

$ ./httptest.sh https://api.example.com/status

3.7.2 Golden Path Demo (Deterministic)

Run the tool against a known-good target and verify every section of the output matches the expected format.

3.7.3 Exact Terminal Transcript (CLI)

$ ./httptest.sh https://api.example.com/status

HTTP ANALYSIS
Status: 200 OK
Headers:
  Content-Type: application/json
  Cache-Control: max-age=60
Timing:
  DNS: 23 ms
  Connect: 45 ms
  TLS: 89 ms
  TTFB: 52 ms
Total: 221 ms

4. Solution Architecture

4.1 High-Level Design

[Collector] -> [Parser] -> [Analyzer] -> [Reporter]

4.2 Key Components

Component | Responsibility | Key Decisions
Collector | Gather raw tool output | Which tools to call and with what flags
Parser | Normalize raw text/JSON | Text vs JSON parsing strategy
Analyzer | Compute insights | Thresholds and heuristics
Reporter | Format output | Stable layout and readability

4.3 Data Structures (No Full Code)

  • RequestRecord: URL, method, status, response headers
  • TimingRecord: dns, connect, tls, ttfb, total (milliseconds)
  • Observation: timestamp, source, severity, message

4.4 Algorithm Overview

Key Algorithm: Evidence Aggregation

  1. Collect raw outputs from tools.
  2. Parse into normalized records.
  3. Apply interpretation rules and thresholds.
  4. Render the final report.

Complexity Analysis:

  • Time: O(n) over number of records
  • Space: O(n) to hold parsed records

5. Implementation Guide

5.1 Development Environment Setup

# Install required tools with your distro package manager, for example:
sudo apt install curl        # Debian/Ubuntu; curl is the core dependency

5.2 Project Structure

project-root/
├── src/
│   ├── main
│   ├── collectors/
│   └── formatters/
├── tests/
└── README.md

5.3 The Core Question You’re Answering

“Is this HTTP endpoint working correctly, and where is the time spent?”

5.4 Concepts You Must Understand First

  1. HTTP request/response
    • Methods, headers, status codes.
    • Book Reference: “HTTP: The Definitive Guide” - Ch. 1-5
  2. TLS basics
    • Handshake, cert validation.
    • Book Reference: “Serious Cryptography” - Ch. 14
  3. Timing phases
    • DNS vs connect vs transfer.
    • Book Reference: “High Performance Browser Networking” - Ch. 4

5.5 Questions to Guide Your Design

  1. How will you parse timing metrics from curl?
  2. Which headers should always be highlighted?
  3. How will you handle redirects and auth?

5.6 Thinking Exercise

Interpret this curl format:

DNS: %{time_namelookup}
Connect: %{time_connect}
TTFB: %{time_starttransfer}
Total: %{time_total}

Question: Which value includes TLS?

5.7 The Interview Questions They’ll Ask

  1. “How do you test an API endpoint from CLI?”
  2. “What is TTFB and why does it matter?”
  3. “How do you follow redirects with curl?”
  4. “How do you send JSON with POST?”
  5. “When would you use wget over curl?”

5.8 Hints in Layers

  • Hint 1: Use curl -w with a custom format file.
  • Hint 2: Use -I for headers only.
  • Hint 3: Use -L to follow redirects.
  • Hint 4: Use --connect-timeout for failure diagnosis.
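
Pulling these hints together, below is a minimal sketch of the golden-path flow. It assumes only curl; the format string, header selection, and layout are illustrative, not the required implementation.

#!/usr/bin/env bash
# httptest.sh (sketch): report status, key headers, and timing phases for one URL.
set -euo pipefail

url="${1:?usage: httptest.sh <url>}"

hdr="$(mktemp)"
trap 'rm -f "$hdr"' EXIT

# One request: response headers go to a file (-D), timing variables to stdout (-w).
read -r code dns conn tls ttfb total < <(
  curl -sS -o /dev/null -D "$hdr" --connect-timeout 5 \
       -w '%{http_code} %{time_namelookup} %{time_connect} %{time_appconnect} %{time_starttransfer} %{time_total}\n' \
       "$url"
)

echo "HTTP ANALYSIS"
echo "Status: $code"
echo "Headers:"
tr -d '\r' < "$hdr" | grep -iE '^(content-type|cache-control):' | sed 's/^/  /' || true

# curl timers are cumulative, so subtract neighbors to get per-phase durations.
# For plain HTTP, time_appconnect is 0; a real tool should guard for that.
awk -v d="$dns" -v c="$conn" -v a="$tls" -v s="$ttfb" -v t="$total" 'BEGIN {
    printf "Timing:\n"
    printf "  DNS: %.0f ms\n", d * 1000
    printf "  Connect: %.0f ms\n", (c - d) * 1000
    printf "  TLS: %.0f ms\n", (a - c) * 1000
    printf "  TTFB: %.0f ms\n", (s - a) * 1000
    printf "Total: %.0f ms\n", t * 1000
}'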

5.9 Books That Will Help

Topic | Book | Chapter
HTTP | "HTTP: The Definitive Guide" | Ch. 1-5
TLS | "Serious Cryptography" | Ch. 14

5.10 Implementation Phases

Phase 1: Foundation (1-2 days)

  • Define outputs and parse a single tool.
  • Produce a minimal report.

Phase 2: Core Functionality (3-5 days)

  • Add remaining tools and interpretation logic.
  • Implement stable formatting and summaries.

Phase 3: Polish & Edge Cases (2-3 days)

  • Handle missing data and failure modes.
  • Add thresholds and validation checks.

5.11 Key Implementation Decisions

Decision | Options | Recommendation | Rationale
Parsing format | Text vs JSON | JSON where available | More stable parsing
Output layout | Table vs sections | Sections | Readability for humans
Sampling | One-shot vs periodic | One-shot + optional loop | Predictable runtime

6. Testing Strategy

6.1 Test Categories

Category | Purpose | Examples
Unit Tests | Validate parsing | Parse fixed tool output samples
Integration Tests | Validate tool calls | Run against a lab host
Edge Case Tests | Handle failures | Missing tool, no permissions

6.2 Critical Test Cases

  1. Reference run: Output matches golden transcript (see the test sketch after this list).
  2. Missing tool: Proper error message and partial report.
  3. Permission denied: Clear guidance for sudo or capabilities.
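
A sketch of the reference-run check, assuming the golden transcript lives at tests/golden.txt (a hypothetical fixture path):

$ ./httptest.sh https://api.example.com/status > /tmp/out.txt
$ diff -u tests/golden.txt /tmp/out.txt && echo PASS

In practice, normalize the millisecond values (or replay captured fixtures) before diffing, since live timings vary between runs.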

6.3 Test Data

Input: captured command output
Expected: normalized report with correct totals

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

Pitfall | Symptom | Solution
Wrong interface | Empty output | Verify interface names
Missing privileges | Permission errors | Use sudo or capabilities
Misparsed output | Wrong stats | Prefer JSON parsing

7.2 Debugging Strategies

  • Re-run each tool independently to compare raw output.
  • Add a verbose mode that dumps raw data sources.

7.3 Performance Traps

  • Avoid tight loops without sleep intervals.

8. Extensions & Challenges

8.1 Beginner Extensions

  • Add colored status markers.
  • Export report to a file.

8.2 Intermediate Extensions

  • Add JSON output mode.
  • Add baseline comparison.

8.3 Advanced Extensions

  • Add multi-host aggregation.
  • Add alerting thresholds.

9. Real-World Connections

9.1 Industry Applications

  • SRE runbooks and on-call diagnostics.
  • Network operations monitoring.

9.2 Related Tools

  • tcpdump / iproute2 / nftables
  • mtr / iperf3

9.3 Interview Relevance

  • Demonstrates evidence-based debugging and tool mastery.

10. Resources

10.1 Essential Reading

  • Primary book listed in the main guide.
  • Relevant RFCs and tool manuals.

10.2 Video Resources

  • Conference talks on Linux networking and troubleshooting.