Project 13: Network Troubleshooting Wizard
An automated troubleshooting assistant that selects tools based on symptoms.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Level 3: Advanced |
| Time Estimate | 1 week |
| Main Programming Language | Bash + Python |
| Alternative Programming Languages | Go |
| Coolness Level | Level 3: Clever |
| Business Potential | 3. Service & Support |
| Prerequisites | Basic Linux CLI |
| Key Topics | Layered Troubleshooting Methodology, Packet Flow and Netfilter Hooks, IP Addressing, Routing, and Path Discovery |
1. Learning Objectives
By completing this project, you will:
- Build the core tool described in the project and validate output against a golden transcript.
- Explain how the tool maps to the Linux networking layer model.
- Diagnose at least one real or simulated failure using the tool’s output.
2. All Theory Needed (Per-Concept Breakdown)
This section includes every concept required to implement this project successfully.
Layered Troubleshooting Methodology
Fundamentals Network troubleshooting is most effective when done in layers: physical, link, network, transport, and application. Each layer has a small set of tools and questions. The core discipline is to test from the bottom up, using evidence at each stage to decide whether to proceed. This avoids chasing application bugs when the cable is unplugged, or assuming a firewall problem when DNS is broken. A repeatable decision tree turns troubleshooting into a deterministic process rather than intuition.
Deep Dive Layered troubleshooting is a method, not a checklist. You start with physical and link evidence: interface state, carrier, and neighbor resolution. If those are healthy, you move to network-layer evidence: routes, gateways, and path probes. Only then do you test transport: TCP handshakes, socket state, and port reachability. Finally, you test application behavior: HTTP status codes, authentication, or service health checks. At each step, you generate a hypothesis and collect evidence to confirm or refute it.
This method prevents cognitive bias. Without structure, it is easy to jump to the most familiar tool or assume the most recent failure mode. Layered troubleshooting enforces a disciplined sequence: verify the prerequisites of each layer before diagnosing above it. For example, you should not debug TLS if DNS fails; you should not debug routing if the interface has no carrier. Each layer depends on the layer below.
Evidence stacking is another key principle. A single tool can mislead: traceroute can show asterisks even when the destination is reachable, and tcpdump can show packets that never reach the socket. Combining tools reduces false conclusions. If ping fails but ip route get shows a valid route, the issue may be in the path, not in the local table. If tcpdump shows SYNs but ss shows no SYN_RECV, the issue is likely firewall or policy. The methodology teaches you to cross-validate across layers.
Time and variability matter. Many issues are intermittent: a link flaps, a DNS resolver times out only under load, or a firewall rule applies only after a reload. A layered approach includes sampling over time and capturing a baseline. If the system is normally healthy, you need to know what “normal” looks like before you can declare an anomaly. This is why the troubleshooting wizard project emphasizes repeatable checks, consistent sampling, and explicit evidence in the output.
Finally, the method must produce actionable recommendations. A diagnosis without a next step is not useful in operations. That means your workflow should end with a verification step: after you change a rule or restart a service, rerun the relevant tests and confirm the issue is resolved. The method is complete only when it validates the fix.
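The bottom-up discipline described above can be sketched as a small driver that runs one check per layer and stops at the first failure. This is a minimal Python sketch: the check callables are stand-ins for real tool invocations (ip, dig, nc, curl), and the layer names and evidence strings are illustrative.

```python
# Sketch: run checks bottom-up, stop at the first failing layer.
# The checks below are stand-ins; a real wizard would shell out to
# ip, dig, nc, curl, etc. and parse their output.

def run_layers(checks):
    """checks: list of (layer_name, callable) where callable returns (ok, evidence)."""
    report = []
    for layer, check in checks:
        ok, evidence = check()
        report.append((layer, "OK" if ok else "FAIL", evidence))
        if not ok:
            break  # layers above depend on this one; stop here
    return report

# Simulated run: interface and route healthy, ICMP fails.
checks = [
    ("physical/link", lambda: (True, "eth0 state UP, carrier present")),
    ("network", lambda: (True, "default via 192.168.1.1 dev eth0")),
    ("icmp", lambda: (False, "100% packet loss to 203.0.113.10")),
    ("transport", lambda: (True, "not reached")),
]
for layer, status, evidence in run_layers(checks):
    print(f"{layer:>14}: {status:4} {evidence}")
```

Note that the transport check never runs: stopping at the first failure is exactly the "verify prerequisites before diagnosing above" rule in executable form.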
How this fits into the projects
- Network Troubleshooting Wizard (Project 13)
- Connectivity Diagnostic Suite (Project 2)
Definitions & key terms
- Layered troubleshooting: Diagnosing issues by moving up the stack only after lower layers are validated.
- Evidence stacking: Using multiple tools to confirm the same hypothesis.
- Baseline: Known-good reference state used for comparison.
Mental model diagram
Physical -> Link -> Network -> Transport -> Application
verify each layer before proceeding upward
How it works (step-by-step, invariants, failure modes)
- Verify interface and carrier.
- Verify routes and gateway reachability.
- Verify transport handshake and ports.
- Verify application behavior.
Invariants: higher layers depend on lower layers.
Failure modes: skipping layers, relying on one tool, ignoring intermittency.
Minimal concrete example Decision transcript (simplified):
Interface UP -> Route OK -> Ping fails -> traceroute shows loss at hop 4 -> upstream issue
Common misconceptions
- “If ping works, the app must work.” (Transport and application can still fail.)
- “One tool is enough.” (Single tools can mislead; correlate evidence.)
Check-your-understanding questions
- Why is a layered approach more reliable than ad hoc troubleshooting?
- What is an example of evidence stacking?
- Why should you validate a fix with the same tests?
Check-your-understanding answers
- It prevents skipping prerequisites and reduces false conclusions.
- Using tcpdump plus ss to confirm whether SYNs reached the socket.
- It confirms causality and prevents regressions.
Real-world applications
- Incident response, on-call diagnostics, and runbook design.
Where you’ll apply it Projects 2 and 13.
References
- SRE and operations troubleshooting guides
- Practical runbook methodologies
Key insights Good troubleshooting is a repeatable method, not a collection of tricks.
Summary You can now structure diagnostics as a layer-by-layer workflow that produces evidence and validates fixes.
Homework/Exercises to practice the concept
- Build a decision tree for “cannot reach HTTPS site.”
- Identify the minimum set of tests that prove each layer is healthy.
Solutions to the homework/exercises
- Example tree: interface -> route -> DNS -> TCP -> HTTP.
- Minimum tests: ip link, ip route get, dig, nc/curl.
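The minimum test set can be expressed as a layer-to-command table. This Python sketch only builds the command lines and does not execute them; the exact flags, the `dig +short` form, the 3-second `nc` timeout, and the example host and address are illustrative choices, not requirements.

```python
# Sketch: one minimal proving command per layer.
# Builds argv lists only; running and parsing them is the wizard's job.

def minimal_tests(host, addr, port=443):
    return [
        ("link",      ["ip", "link", "show"]),
        ("route",     ["ip", "route", "get", addr]),   # expects an IP address
        ("dns",       ["dig", "+short", host]),
        ("transport", ["nc", "-zv", "-w", "3", host, str(port)]),
    ]

for layer, cmd in minimal_tests("example.com", "203.0.113.10"):
    print(f"{layer:>9}: {' '.join(cmd)}")
```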
Packet Flow and Netfilter Hooks
Fundamentals Linux processes packets in a predictable sequence, and Netfilter is the framework that inserts decision points into that sequence. A frame arrives on a NIC, the kernel parses it, and the packet passes through well-defined hooks: PREROUTING, INPUT, FORWARD, OUTPUT, and POSTROUTING. Firewall rules are not “global”; they attach to specific hooks (via chains), so placement is as important as the rule itself. Netfilter also provides connection tracking (state), NAT, and packet mangling. That means a rule can match on the connection state (NEW, ESTABLISHED), translate addresses, or simply allow/deny. nftables is the modern rule engine that programs these hooks with a unified syntax and richer data structures. If you internalize where each hook sits and which packets pass it, you can predict what a rule will see, why a packet was dropped, and which tool (tcpdump, ss, ip route) will reflect the outcome.
Deep Dive Think of packet processing as a conveyor belt with checkpoints, each checkpoint exposing different metadata. The packet enters through the driver and reaches the IP layer. At PREROUTING, the kernel knows the packet’s ingress interface, source and destination IP, and L4 headers, but it has not yet decided where the packet will go. This is why destination NAT (DNAT) belongs here: changing the destination before routing ensures the kernel routes the translated address, not the original. After PREROUTING, the routing decision determines whether the packet is for the local machine or must be forwarded. That single branch splits the path: local traffic goes to INPUT, forwarded traffic goes to FORWARD, and both eventually pass through POSTROUTING before transmission. Locally generated traffic starts at the socket layer, passes OUTPUT (where filtering and local policy apply), then POSTROUTING, and finally leaves the NIC.
Netfilter organizes rules into tables and chains. Tables group rule intent (filter, nat, mangle, raw), while chains define hook attachment. A base chain is bound to a hook, which means packets enter it automatically; a regular chain is only entered by an explicit jump. The order of chains and the order of rules inside a chain is the actual execution path. That is why “rule order matters” is more than a cliché: a DROP near the top of INPUT can shadow every later rule, and a NAT rule in the wrong hook may never execute. Understanding policy defaults is just as important: a default DROP in INPUT means only explicitly allowed traffic enters, while a default ACCEPT means all traffic enters unless explicitly blocked. These defaults set the baseline security posture.
Connection tracking is the other pillar. Netfilter tracks flows and labels packets as NEW, ESTABLISHED, or RELATED. This lets you write rules like “allow established connections” without enumerating ephemeral ports. It also enables NAT to be symmetric: once a flow is translated, conntrack remembers the mapping so replies are translated back. If conntrack is disabled or bypassed, those stateful expectations break. Many real-world bugs come from misunderstanding this state: for example, blocking NEW connections but forgetting to allow ESTABLISHED, or assuming a DNAT rule will automatically permit forwarding when the FORWARD chain still drops packets.
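The state labels can be illustrated with a toy conntrack table. This Python sketch is a deliberate simplification: real conntrack also tracks TCP state transitions, timeouts, and RELATED flows, all omitted here. The key idea shown is that both directions of a flow normalize to a single entry, which is what makes "allow ESTABLISHED" and symmetric NAT possible.

```python
# Sketch: how conntrack labels packets, in miniature.
# A flow is keyed by its 5-tuple; the reverse direction shares the entry.

def flow_key(proto, src, sport, dst, dport):
    # Normalize so both directions map to one conntrack entry.
    a, b = (src, sport), (dst, dport)
    return (proto,) + (a + b if a < b else b + a)

conntrack = {}

def classify(proto, src, sport, dst, dport):
    key = flow_key(proto, src, sport, dst, dport)
    if key in conntrack:
        return "ESTABLISHED"
    conntrack[key] = True
    return "NEW"

print(classify("tcp", "10.0.0.5", 40000, "10.0.0.10", 443))  # NEW
print(classify("tcp", "10.0.0.10", 443, "10.0.0.5", 40000))  # reply: ESTABLISHED
```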
nftables modernizes rule evaluation. Rather than relying on multiple legacy tools (iptables, ip6tables, arptables, ebtables), nftables provides a single syntax and a kernel-side virtual machine. It supports sets and maps, which makes complex policies efficient: instead of a hundred “allow” rules, you can express a set of allowed IPs or ports and match in a single rule. For an operator, this changes how you reason about performance and correctness. The same logical policy can be expressed in fewer rules, with fewer ordering traps, and with clearer auditability. But the hook placement logic remains identical, because nftables still attaches to Netfilter hooks.
The critical troubleshooting mindset is to separate “where did the packet enter?” from “where did it die?” A SYN visible in tcpdump on the NIC but absent in ss indicates it was dropped before the socket layer — likely INPUT or an earlier hook. A connection that establishes locally but fails to reach another host suggests a FORWARD or POSTROUTING issue. If outbound traffic fails only after a NAT rule is applied, your mistake is probably hook placement or state. When you combine this mental model with evidence from tools, you can answer the exact question operators care about: “Which rule, in which chain, at which hook, dropped or modified this packet?” That is the difference between a fix and a guess.
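As a quick self-check on hook placement, the traversal order for each scenario can be encoded directly. The hook names in this Python sketch are the real Netfilter hooks; the routing decision is simulated by a boolean flag rather than a real table lookup.

```python
# Sketch: which Netfilter hooks a packet traverses, by scenario.

def hooks_for(direction, local_destination=True):
    if direction == "inbound":
        path = ["PREROUTING"]                       # DNAT happens here
        if local_destination:
            path += ["INPUT"]                       # delivered to a local socket
        else:
            path += ["FORWARD", "POSTROUTING"]      # routed onward; SNAT in POSTROUTING
        return path
    # locally generated traffic starts at the socket layer
    return ["OUTPUT", "POSTROUTING"]

print(hooks_for("inbound", local_destination=True))   # ['PREROUTING', 'INPUT']
print(hooks_for("inbound", local_destination=False))  # ['PREROUTING', 'FORWARD', 'POSTROUTING']
print(hooks_for("outbound"))                          # ['OUTPUT', 'POSTROUTING']
```

Reading the three outputs side by side makes the two common misconceptions visible: forwarded packets never touch INPUT, and locally generated packets never touch PREROUTING.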
How this fits into the projects
- Firewall Rule Auditor (Project 6)
- Network Troubleshooting Wizard (Project 13)
- Real-Time Network Security Monitor (Project 15)
Definitions & key terms
- Netfilter hook: A defined point in the kernel packet path where filtering or mangling can occur.
- Base chain: An nftables chain attached to a hook; it is entered by packets automatically.
- Connection tracking (conntrack): Kernel subsystem that tracks flows to enable stateful filtering and NAT.
Mental model diagram
INBOUND:
NIC -> PREROUTING -> routing decision -> INPUT -> socket -> app
\-> FORWARD -> POSTROUTING -> NIC
OUTBOUND:
app -> socket -> OUTPUT -> routing decision -> POSTROUTING -> NIC
How it works (step-by-step, invariants, failure modes)
- Packet arrives at NIC and is handed to the IP layer.
- PREROUTING runs (DNAT possible).
- Routing decision selects local delivery vs forward.
- INPUT or FORWARD hook runs.
- POSTROUTING runs (SNAT possible).
- Packet is delivered locally or transmitted.
Invariants: hooks run in order; rule order matters; DNAT before routing, SNAT after routing.
Failure modes: rule in wrong chain, missing conntrack state, policy drop on wrong hook.
Minimal concrete example Protocol transcript (simplified):
Packet: TCP SYN to 10.0.0.10:443
PREROUTING: DNAT 10.0.0.10 -> 192.168.1.10
Routing: destination is local host
INPUT: allow 443 -> ACCEPT
Socket: delivered to nginx
Common misconceptions
- “A DROP in FORWARD blocks inbound traffic to my host.” (It does not; traffic destined to the local host traverses INPUT, not FORWARD.)
- “NAT happens after routing.” (Destination NAT must happen before routing.)
Check-your-understanding questions
- Where must DNAT occur to affect the routing decision?
- Which chain sees locally generated packets?
- Why might a rule in INPUT never match forwarded packets?
Check-your-understanding answers
- PREROUTING.
- OUTPUT (then POSTROUTING).
- Forwarded packets go through FORWARD, not INPUT.
Real-world applications
- Server firewalls, NAT gateways, and container networking.
Where you’ll apply it Projects 6, 13, 15.
References
- netfilter.org project overview and nftables documentation.
- iptables tables and built-in chains (man page).
Key insights Correct firewalling is about hook placement as much as rule logic.
Summary You now know the kernel checkpoints where packets can be seen and controlled, and why firewall debugging starts with hook placement.
Homework/Exercises to practice the concept
- Draw the packet path for (a) inbound SSH, (b) outbound HTTPS, (c) forwarded NAT traffic.
- Mark where DNAT and SNAT would occur.
Solutions to the homework/exercises
- Inbound SSH: NIC -> PREROUTING -> INPUT -> socket.
- Outbound HTTPS: socket -> OUTPUT -> POSTROUTING -> NIC.
- Forwarded NAT: NIC -> PREROUTING (DNAT) -> FORWARD -> POSTROUTING (SNAT) -> NIC.
IP Addressing, Routing, and Path Discovery
Fundamentals Routing is the decision process that answers, “Where should this packet go next?” Linux chooses routes using longest-prefix match and attaches that choice to an egress interface and, if needed, a next-hop gateway. The ip tool exposes both the routing tables and the policy rules that choose which table to consult, and it can ask the kernel directly which route would be used for a given destination. Path discovery tools translate that decision into evidence: tracepath probes the path and reports Path MTU (PMTU), while mtr repeatedly probes hops to surface loss and latency patterns. Together, these tools let you move from assumptions (“the route is fine”) to proof (“the kernel will use this gateway, and hop 6 drops 30% of probes”). That shift from inference to evidence is the central skill in routing diagnostics.
Deep Dive Linux routing is a policy engine, not a single static table. Before any prefix matching occurs, the kernel consults routing policy rules. These rules can select a routing table based on source address, incoming interface, firewall mark, or user-defined priority. Once a table is chosen, the kernel performs longest-prefix match: the most specific prefix wins, and metrics break ties among equally specific routes. The final selection yields an egress interface, a next hop (if the destination is not directly connected), and a preferred source IP. This explains many “route exists but traffic fails” scenarios: the route might exist in a table that is never selected for that traffic, or the preferred source IP might not be reachable on the chosen path.
The most important command in this domain is ip route get <destination>. It queries the kernel’s decision engine and returns exactly what would happen if a packet were sent: the chosen route, interface, and source address. It is your truth oracle because it reflects the kernel’s actual behavior, not your interpretation of the routing table. But a routing decision alone does not guarantee reachability. The next hop must still be reachable at Layer 2, and the path beyond the next hop must accept and forward the packet. That is why route diagnosis always includes neighbor resolution and path probing.
Path discovery tools provide that second half. tracepath sends probes with increasing TTL values and reports where ICMP responses are generated. It also discovers PMTU by observing “Packet Too Big” responses and tracking the smallest MTU on the path. mtr adds repetition, showing latency and loss over time rather than a single snapshot. This matters because routing problems often manifest as intermittent congestion or packet loss at specific hops. A static traceroute might miss a transient spike; a rolling mtr report reveals it. The pairing of ip route get (decision evidence) with mtr (path behavior) is a powerful diagnostic habit.
PMTU is a classic foot-gun. The path MTU is the smallest MTU on the path between two hosts. If you send packets larger than the PMTU and fragmentation is disabled (as it often is for modern networks), routers will drop them and send ICMP “Packet Too Big.” If those ICMP messages are blocked, the sender never learns the correct size. The result is the infamous symptom: small packets work, large packets hang. Linux tools surface this in multiple ways: tracepath reports PMTU directly; tcpdump reveals ICMP errors; and iperf3 shows throughput collapse when MTU mismatches cause retransmissions. Understanding PMTU shifts your diagnosis from “the server is slow” to “the path is constrained.”
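The "small packets work, large packets hang" symptom falls out of a few lines of simulation. This Python sketch models a path as a list of per-link MTUs; the sizes and return strings are illustrative, and real PMTU discovery involves the DF bit and per-destination caching, which are omitted here.

```python
# Sketch: PMTU is the minimum link MTU on the path, and a too-large
# packet with fragmentation disabled only succeeds if ICMP feedback
# reaches the sender so it can resend at the smaller size.

def send(packet_size, link_mtus, icmp_allowed=True):
    pmtu = min(link_mtus)
    if packet_size <= pmtu:
        return "delivered"
    # A router drops the packet; feedback arrives only if ICMP is allowed.
    return "icmp-too-big (resend at %d)" % pmtu if icmp_allowed else "silent drop"

path = [1500, 1500, 1400, 1500]              # a 1400-byte bottleneck mid-path
print(send(1200, path))                       # delivered
print(send(1500, path))                       # icmp-too-big (resend at 1400)
print(send(1500, path, icmp_allowed=False))   # silent drop: the classic hang
```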
Advanced routing problems often involve policy routing and multiple interfaces. VPNs, source-based routing, and multi-homed hosts can send different destinations through different uplinks. The kernel may choose a route based on source address or marks assigned by firewall rules. If you only look at the main table, you will miss the true behavior. The correct workflow is: inspect ip rule, identify which table is in use for the traffic in question, use ip route get with a source address when needed, and then validate with path probes. This discipline separates a correct, reproducible diagnosis from a lucky guess.
Finally, remember that routing is only one layer. A correct route can still fail if neighbor resolution fails or if the next-hop router is down. That is why routing diagnosis must be layered: (1) What route does the kernel choose? (2) Can the next hop be resolved at L2? (3) What does the path beyond the next hop look like? The tools in this guide map directly to those questions, and the projects will force you to practice that sequence until it is reflexive.
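Longest-prefix match with metric tie-breaking is compact enough to sketch with the standard-library `ipaddress` module. The routes and addresses below are illustrative, and a real kernel also considers scope, preferred source address, and policy rules before this step, all omitted here.

```python
import ipaddress

# Sketch: longest-prefix match over a single table; metric breaks ties.
# Routes are (prefix, gateway, metric) triples.

ROUTES = [
    ("0.0.0.0/0",    "192.168.1.1", 100),
    ("10.0.0.0/16",  "10.0.0.1",    100),
    ("10.0.42.0/24", "10.0.42.1",   100),
]

def lookup(dest):
    ip = ipaddress.ip_address(dest)
    best = None
    for prefix, gw, metric in ROUTES:
        net = ipaddress.ip_network(prefix)
        if ip in net:
            key = (net.prefixlen, -metric)  # longer prefix wins, then lower metric
            if best is None or key > best[0]:
                best = (key, gw)
    return best[1] if best else None

print(lookup("10.0.42.7"))  # 10.0.42.1   (/24 beats /16 and the default)
print(lookup("10.0.7.7"))   # 10.0.0.1    (/16 beats the default)
print(lookup("8.8.8.8"))    # 192.168.1.1 (only the default matches)
```

Comparing this against the output of `ip route get` for the same destinations is a good exercise: when they disagree, a policy rule or a more specific table is usually the reason.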
How this fits into the projects
- Connectivity Diagnostic Suite (Project 2)
- Routing Table Explorer (Project 7)
- Bandwidth Monitor (Project 8)
Definitions & key terms
- Longest-prefix match: Route selection rule where the most specific prefix wins.
- PMTU: Path MTU, the smallest MTU along a path.
- Policy routing: Selecting a routing table based on metadata, not just destination.
Mental model diagram
Destination IP
|
v
Policy rules -> Routing table -> Best prefix -> Next hop + egress
|
v
tracepath / mtr validate path and MTU
How it works (step-by-step, invariants, failure modes)
- Select routing table based on policy rules.
- Find best prefix match.
- Choose next hop and source IP.
- Resolve next hop at L2.
- Probe path for PMTU and latency.
Invariants: the most specific prefix wins; PMTU equals the smallest link MTU on the path.
Failure modes: wrong table, blackhole route, ICMP blocked, PMTUD failure.
Minimal concrete example Route lookup transcript:
Destination: 203.0.113.10
Selected: 0.0.0.0/0 via 192.168.1.1 dev eth0 src 192.168.1.100
Common misconceptions
- “The default route is used for everything.” (Only when no more specific prefix matches.)
- “Traceroute proves connectivity.” (It only proves ICMP/TTL handling, not application reachability.)
Check-your-understanding questions
- Why can a /24 override a /16 route?
- What does tracepath report that traceroute might not?
- How can policy routing send the same destination via different paths?
Check-your-understanding answers
- Longest-prefix match chooses the most specific route.
- Path MTU changes along the path.
- Different tables can be selected based on source or marks.
Real-world applications
- VPN split-tunneling, multi-homed servers, and performance debugging.
Where you’ll apply it Projects 2, 7, 8.
References
- ip(8) description and routing functionality.
- tracepath description (path + MTU discovery).
- mtr description (combines traceroute and ping).
- PMTU discovery standards.
Key insights Routing is a choice plus a constraint; you must verify both the chosen path and its MTU limits.
Summary You can now predict route selection and validate the path end-to-end using tracepath and mtr.
Homework/Exercises to practice the concept
- Given a routing table with overlapping prefixes, predict which route is chosen for five destinations.
- Use a diagram to show how PMTU failures cause “large packet” hangs.
Solutions to the homework/exercises
- The most specific prefix always wins; ties go to lowest metric.
- Large packets drop when the path MTU is smaller; ICMP “Packet Too Big” feedback is required to adapt.
3. Project Specification
3.1 What You Will Build
An automated troubleshooting assistant that selects tools based on symptoms.
3.2 Functional Requirements
- Core data collection: Gather the required system/network data reliably.
- Interpretation layer: Translate raw outputs into human-readable insights.
- Deterministic output: Produce stable, comparable results across runs.
- Error handling: Detect missing privileges, tools, or unsupported interfaces.
3.3 Non-Functional Requirements
- Performance: Runs in under 5 seconds for baseline mode.
- Reliability: Handles missing data sources gracefully.
- Usability: Output is readable without post-processing.
3.4 Example Usage / Output
$ sudo ./netwizard.sh "Cannot reach https://example.com"
Phase 1: Interface OK
Phase 2: DNS OK
Phase 3: ICMP OK
Phase 4: TCP 443 FAIL
Phase 5: Firewall OUTPUT DROP on 443
Diagnosis: Local firewall blocking HTTPS
3.5 Data Formats / Schemas / Protocols
- Input: CLI tool output, kernel state, or service logs.
- Output: A structured report with sections and summarized metrics.
3.6 Edge Cases
- Missing tool binaries or insufficient permissions.
- Interfaces or hosts that return no data.
- Transient states (link flaps, intermittent loss).
3.7 Real World Outcome
$ sudo ./netwizard.sh "Cannot reach https://example.com"
Phase 1: Interface OK
Phase 2: DNS OK
Phase 3: ICMP OK
Phase 4: TCP 443 FAIL
Phase 5: Firewall OUTPUT DROP on 443
Diagnosis: Local firewall blocking HTTPS
3.7.1 How to Run (Copy/Paste)
$ sudo ./netwizard.sh "<symptom description>"
3.7.2 Golden Path Demo (Deterministic)
Run the tool against a known-good target and verify every section of the output matches the expected format.
3.7.3 Exact Terminal Transcript (CLI)
$ sudo ./netwizard.sh "Cannot reach https://example.com"
Phase 1: Interface OK
Phase 2: DNS OK
Phase 3: ICMP OK
Phase 4: TCP 443 FAIL
Phase 5: Firewall OUTPUT DROP on 443
Diagnosis: Local firewall blocking HTTPS
4. Solution Architecture
4.1 High-Level Design
[Collector] -> [Parser] -> [Analyzer] -> [Reporter]
4.2 Key Components
| Component | Responsibility | Key Decisions |
|---|---|---|
| Collector | Gather raw tool output | Which tools to call and with what flags |
| Parser | Normalize raw text/JSON | Text vs JSON parsing strategy |
| Analyzer | Compute insights | Thresholds and heuristics |
| Reporter | Format output | Stable layout and readability |
4.3 Data Structures (No Full Code)
- InterfaceRecord: name, state, addresses, stats
- RouteRecord: prefix, gateway, interface, metric
- Observation: timestamp, source, severity, message
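One way to render these records, sketched with Python dataclasses. The field types and defaults are illustrative, not a fixed schema; the fields follow the outline above.

```python
from dataclasses import dataclass, field

@dataclass
class InterfaceRecord:
    name: str
    state: str                               # e.g. "UP", "DOWN"
    addresses: list = field(default_factory=list)
    stats: dict = field(default_factory=dict)

@dataclass
class RouteRecord:
    prefix: str
    gateway: str
    interface: str
    metric: int = 0

@dataclass
class Observation:
    timestamp: float
    source: str                              # which tool produced it
    severity: str                            # "info" | "warn" | "fail"
    message: str

eth0 = InterfaceRecord("eth0", "UP", ["192.168.1.100/24"])
print(eth0.name, eth0.state)
```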
4.4 Algorithm Overview
Key Algorithm: Evidence Aggregation
- Collect raw outputs from tools.
- Parse into normalized records.
- Apply interpretation rules and thresholds.
- Render the final report.
Complexity Analysis:
- Time: O(n) over number of records
- Space: O(n) to hold parsed records
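The four stages can be sketched end to end with canned tool output standing in for the Collector. The parsing heuristic here (matching "failed"/"DOWN" in raw text) is deliberately naive and purely illustrative; a real Analyzer would use per-tool parsers and explicit thresholds. Each stage is linear in the number of records, matching the complexity notes above.

```python
# Sketch of the pipeline: collect -> parse -> analyze -> report.
# RAW stands in for Collector output; keys and text are illustrative.

RAW = {
    "ip link": "2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> state UP",
    "nc -z":   "connect to 93.184.216.34 port 443 (tcp) failed",
}

def parse(raw):
    records = []
    for source, text in raw.items():
        severity = "fail" if ("failed" in text or "DOWN" in text) else "info"
        records.append({"source": source, "severity": severity, "text": text})
    return records

def analyze(records):
    failures = [r for r in records if r["severity"] == "fail"]
    return failures[0]["source"] + " check failed" if failures else "all checks passed"

def report(records, diagnosis):
    lines = [f"[{r['severity']:4}] {r['source']}: {r['text']}" for r in records]
    return "\n".join(lines + ["Diagnosis: " + diagnosis])

records = parse(RAW)
print(report(records, analyze(records)))
```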
5. Implementation Guide
5.1 Development Environment Setup
# Install required tools with your distro package manager.
# Debian/Ubuntu example (package names vary by distro):
#   sudo apt install iproute2 dnsutils tcpdump mtr-tiny netcat-openbsd
5.2 Project Structure
project-root/
├── src/
│ ├── main
│ ├── collectors/
│ └── formatters/
├── tests/
└── README.md
5.3 The Core Question You’re Answering
“Given a symptom, how do I systematically find the root cause?”
5.4 Concepts You Must Understand First
- Layered troubleshooting
- Physical -> Link -> Network -> Transport -> App.
- Tool selection
- Which tool corresponds to each layer.
- Correlation logic
- How to combine evidence for a conclusion.
5.5 Questions to Guide Your Design
- What checks are mandatory for every symptom?
- When do you stop and declare root cause?
- How will you avoid false positives?
5.6 Thinking Exercise
Design a decision tree for “SSH drops after 5 minutes”.
5.7 The Interview Questions They’ll Ask
- “How do you approach network troubleshooting systematically?”
- “Ping works but HTTP fails: what next?”
- “How do you distinguish DNS vs routing failures?”
- “How do you diagnose intermittent loss?”
- “How do you validate a fix?”
5.8 Hints in Layers
Hint 1: Start with interface and route checks.
Hint 2: Only run deeper tests if earlier stages pass.
Hint 3: Rank possible causes by evidence strength.
Hint 4: Always include a validation step.
5.9 Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Ops practice | “The Practice of System and Network Administration” | Ch. 7-9 |
| Troubleshooting | “Linux Firewalls” | Ch. 10 |
5.10 Implementation Phases
Phase 1: Foundation (1-2 days)
- Define outputs and parse a single tool.
- Produce a minimal report.
Phase 2: Core Functionality (3-5 days)
- Add remaining tools and interpretation logic.
- Implement stable formatting and summaries.
Phase 3: Polish & Edge Cases (2-3 days)
- Handle missing data and failure modes.
- Add thresholds and validation checks.
5.11 Key Implementation Decisions
| Decision | Options | Recommendation | Rationale |
|---|---|---|---|
| Parsing format | Text vs JSON | JSON where available | More stable parsing |
| Output layout | Table vs sections | Sections | Readability for humans |
| Sampling | One-shot vs periodic | One-shot + optional loop | Predictable runtime |
6. Testing Strategy
6.1 Test Categories
| Category | Purpose | Examples |
|---|---|---|
| Unit Tests | Validate parsing | Parse fixed tool output samples |
| Integration Tests | Validate tool calls | Run against a lab host |
| Edge Case Tests | Handle failures | Missing tool, no permissions |
6.2 Critical Test Cases
- Reference run: Output matches golden transcript.
- Missing tool: Proper error message and partial report.
- Permission denied: Clear guidance for sudo or capabilities.
6.3 Test Data
Input: captured command output
Expected: normalized report with correct totals
7. Common Pitfalls & Debugging
7.1 Frequent Mistakes
| Pitfall | Symptom | Solution |
|---|---|---|
| Wrong interface | Empty output | Verify interface names |
| Missing privileges | Permission errors | Use sudo or capabilities |
| Misparsed output | Wrong stats | Prefer JSON parsing |
7.2 Debugging Strategies
- Re-run each tool independently to compare raw output.
- Add a verbose mode that dumps raw data sources.
7.3 Performance Traps
- Avoid tight loops without sleep intervals.
8. Extensions & Challenges
8.1 Beginner Extensions
- Add colored status markers.
- Export report to a file.
8.2 Intermediate Extensions
- Add JSON output mode.
- Add baseline comparison.
8.3 Advanced Extensions
- Add multi-host aggregation.
- Add alerting thresholds.
9. Real-World Connections
9.1 Industry Applications
- SRE runbooks and on-call diagnostics.
- Network operations monitoring.
9.2 Related Open Source Projects
- tcpdump / iproute2 / nftables
- mtr / iperf3
9.3 Interview Relevance
- Demonstrates evidence-based debugging and tool mastery.
10. Resources
10.1 Essential Reading
- Primary book listed in the main guide.
- Relevant RFCs and tool manuals.
10.2 Video Resources
- Conference talks on Linux networking and troubleshooting.