Project 6: Firewall Rule Auditor
A firewall auditor that inventories rulesets and simulates packet evaluation.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Level 3: Advanced |
| Time Estimate | 1 week |
| Main Programming Language | Bash |
| Alternative Programming Languages | Python, Go |
| Coolness Level | Level 3: Clever |
| Business Potential | 3. Service & Support |
| Prerequisites | Basic Linux CLI |
| Key Topics | Packet Flow and Netfilter Hooks |
1. Learning Objectives
By completing this project, you will:
- Build the core tool described in the project and validate output against a golden transcript.
- Explain how the tool maps to the Linux networking layer model.
- Diagnose at least one real or simulated failure using the tool’s output.
2. All Theory Needed (Per-Concept Breakdown)
This section includes every concept required to implement this project successfully.
Packet Flow and Netfilter Hooks
Fundamentals
Linux processes packets in a predictable sequence, and Netfilter is the framework that inserts decision points into that sequence. A frame arrives on a NIC, the kernel parses it, and the packet passes through well-defined hooks: PREROUTING, INPUT, FORWARD, OUTPUT, and POSTROUTING. Firewall rules are not “global”; they attach to specific hooks (via chains), so placement is as important as the rule itself. Netfilter also provides connection tracking (state), NAT, and packet mangling. That means a rule can match on the connection state (NEW, ESTABLISHED), translate addresses, or simply allow/deny. nftables is the modern rule engine that programs these hooks with a unified syntax and richer data structures. If you internalize where each hook sits and which packets pass it, you can predict what a rule will see, why a packet was dropped, and which tool (tcpdump, ss, ip route) will reflect the outcome.
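To make “rules attach to hooks” concrete, here is a minimal nft sketch that binds a base chain to the INPUT hook; the table and chain names are illustrative, not part of the project:
# Create a table and a base chain bound to the INPUT hook.
# The "hook input" clause is what places this chain on the packet path.
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
# Rules added to this chain are evaluated only for packets addressed to the local host.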
Deep Dive
Think of packet processing as a conveyor belt with checkpoints, each checkpoint exposing different metadata. The packet enters through the driver and reaches the IP layer. At PREROUTING, the kernel knows the packet’s ingress interface, source and destination IP, and L4 headers, but it has not yet decided where the packet will go. This is why destination NAT (DNAT) belongs here: changing the destination before routing ensures the kernel routes the translated address, not the original. After PREROUTING, the routing decision determines whether the packet is for the local machine or must be forwarded. That single branch splits the path: local traffic goes to INPUT, forwarded traffic goes to FORWARD, and both eventually pass through POSTROUTING before transmission. Locally generated traffic starts at the socket layer, passes OUTPUT (where filtering and local policy apply), then POSTROUTING, and finally leaves the NIC.
Netfilter organizes rules into tables and chains. Tables group rule intent (filter, nat, mangle, raw), while chains define hook attachment. A base chain is bound to a hook, which means packets enter it automatically; a regular chain is only entered by an explicit jump. The order of chains and the order of rules inside a chain is the actual execution path. That is why “rule order matters” is more than a cliché: a DROP near the top of INPUT can shadow every later rule, and a NAT rule in the wrong hook may never execute. Understanding policy defaults is just as important: a default DROP in INPUT means only explicitly allowed traffic enters, while a default ACCEPT means all traffic enters unless explicitly blocked. These defaults set the baseline security posture.
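A short sketch of the base chain / regular chain distinction, continuing the illustrative inet filter table from the previous sketch:
# The input base chain is entered automatically at its hook and owns the default policy.
# A regular chain has no hook and is reached only through an explicit jump.
nft add chain inet filter web_rules
nft add rule inet filter input tcp dport '{ 80, 443 }' jump web_rules
nft add rule inet filter web_rules ip saddr 10.0.0.0/8 accept
# If web_rules does not accept the packet, evaluation returns to input and, with no
# further rules there, the base chain's DROP policy applies.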
Connection tracking is the other pillar. Netfilter tracks flows and labels packets as NEW, ESTABLISHED, or RELATED. This lets you write rules like “allow established connections” without enumerating ephemeral ports. It also enables NAT to be symmetric: once a flow is translated, conntrack remembers the mapping so replies are translated back. If conntrack is disabled or bypassed, those stateful expectations break. Many real-world bugs come from misunderstanding this state: for example, blocking NEW connections but forgetting to allow ESTABLISHED, or assuming a DNAT rule will automatically permit forwarding when the FORWARD chain still drops packets.
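The stateful pattern that avoids the “forgot ESTABLISHED” bug looks like this in nft syntax (chain names continue the illustrative table above):
# Accept packets that belong to flows conntrack already knows about.
nft add rule inet filter input ct state established,related accept
# Then admit specific NEW traffic (here: SSH).
nft add rule inet filter input ct state new tcp dport 22 accept
# With a default-drop INPUT policy and no ESTABLISHED rule, even replies to
# connections this host initiated would be dropped.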
nftables modernizes rule evaluation. Rather than relying on multiple legacy tools (iptables, ip6tables, arptables, ebtables), nftables provides a single syntax and a kernel-side virtual machine. It supports sets and maps, which makes complex policies efficient: instead of a hundred “allow” rules, you can express a set of allowed IPs or ports and match in a single rule. For an operator, this changes how you reason about performance and correctness. The same logical policy can be expressed in fewer rules, with fewer ordering traps, and with clearer auditability. But the hook placement logic remains identical, because nftables still attaches to Netfilter hooks.
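A small sketch of sets (the addresses are documentation-range examples):
# One named set plus one rule replaces a long run of per-address accepts.
nft add set inet filter mgmt_hosts '{ type ipv4_addr; }'
nft add element inet filter mgmt_hosts '{ 192.0.2.10, 192.0.2.11, 192.0.2.12 }'
nft add rule inet filter input ip saddr @mgmt_hosts tcp dport 22 accept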
The critical troubleshooting mindset is to separate “where did the packet enter?” from “where did it die?” A SYN visible in tcpdump on the NIC but absent in ss indicates it was dropped before the socket layer — likely INPUT or an earlier hook. A connection that establishes locally but fails to reach another host suggests a FORWARD or POSTROUTING issue. If outbound traffic fails only after a NAT rule is applied, your mistake is probably hook placement or state. When you combine this mental model with evidence from tools, you can answer the exact question operators care about: “Which rule, in which chain, at which hook, dropped or modified this packet?” That is the difference between a fix and a guess.
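In practice that question pair maps onto two commands; the interface name and port below are illustrative:
# Evidence at the wire: is the SYN even arriving on this interface?
sudo tcpdump -ni eth0 'tcp port 443 and tcp[tcpflags] & tcp-syn != 0'
# Evidence at the socket layer: is anything listening, and do connections appear?
ss -tlnp 'sport = :443'
# SYNs visible in tcpdump with nothing reflected by ss means the packet died
# between the NIC and the socket layer, i.e. at PREROUTING or INPUT.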
How this fits into the projects
- Firewall Rule Auditor (Project 6)
- Network Troubleshooting Wizard (Project 13)
- Real-Time Network Security Monitor (Project 15)
Definitions & key terms
- Netfilter hook: A defined point in the kernel packet path where filtering or mangling can occur.
- Base chain: An nftables chain attached to a hook; it is entered by packets automatically.
- Connection tracking (conntrack): Kernel subsystem that tracks flows to enable stateful filtering and NAT.
Mental model diagram
INBOUND:
NIC -> PREROUTING -> routing decision -> INPUT -> socket -> app
                                      \-> FORWARD -> POSTROUTING -> NIC
OUTBOUND:
app -> socket -> OUTPUT -> routing decision -> POSTROUTING -> NIC
How it works (step-by-step, invariants, failure modes)
- Packet arrives at NIC and is handed to the IP layer.
- PREROUTING runs (DNAT possible).
- Routing decision selects local delivery vs forward.
- INPUT runs for locally destined traffic; FORWARD runs for routed traffic.
- Forwarded (and locally generated) packets then pass POSTROUTING (SNAT possible); locally delivered packets go straight to the socket.
- The packet is delivered locally or transmitted.
Invariants: hooks run in a fixed order; rule order within a chain matters; DNAT happens before the routing decision, SNAT after it.
Failure modes: rule placed in the wrong chain, missing conntrack state, default-policy drop at an unexpected hook.
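When a prediction disagrees with reality, nftables can report the actual path. A minimal tracing sketch (rule placement is illustrative; in practice tracing is usually enabled as early as possible, e.g. in a raw-priority prerouting chain, so the whole path is recorded):
# Mark matching packets for tracing, then watch the per-hook trace events.
sudo nft add rule inet filter input tcp dport 22 meta nftrace set 1
sudo nft monitor trace
# Each trace event names the table, chain, rule handle, and verdict — the ground
# truth a packet-simulation feature should reproduce.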
Minimal concrete example
Protocol transcript (simplified):
Packet: TCP SYN to 10.0.0.10:443
PREROUTING: DNAT 10.0.0.10 -> 192.168.1.10
Routing: destination is local host
INPUT: allow 443 -> ACCEPT
Socket: delivered to nginx
Common misconceptions
- “A DROP in FORWARD blocks inbound traffic to my host.” (It does not; INPUT is for local host.)
- “NAT happens after routing.” (Destination NAT must happen before routing.)
Check-your-understanding questions
- Where must DNAT occur to affect the routing decision?
- Which chain sees locally generated packets?
- Why might a rule in INPUT never match forwarded packets?
Check-your-understanding answers
- PREROUTING.
- OUTPUT (then POSTROUTING).
- Forwarded packets go through FORWARD, not INPUT.
Real-world applications
- Server firewalls, NAT gateways, and container networking.
Where you’ll apply it
Projects 6, 13, 15.
References
- netfilter.org project overview and nftables documentation.
- iptables tables and built-in chains (man page).
Key insights
Correct firewalling is about hook placement as much as rule logic.
Summary
You now know the kernel checkpoints where packets can be seen and controlled, and why firewall debugging starts with hook placement.
Homework/Exercises to practice the concept
- Draw the packet path for (a) inbound SSH, (b) outbound HTTPS, (c) forwarded NAT traffic.
- Mark where DNAT and SNAT would occur.
Solutions to the homework/exercises
- Inbound SSH: NIC -> PREROUTING -> INPUT -> socket.
- Outbound HTTPS: socket -> OUTPUT -> POSTROUTING -> NIC.
- Forwarded NAT: NIC -> PREROUTING (DNAT) -> FORWARD -> POSTROUTING (SNAT) -> NIC.
3. Project Specification
3.1 What You Will Build
A firewall auditor that inventories rulesets and simulates packet evaluation.
3.2 Functional Requirements
- Core data collection: Gather the required system/network data reliably.
- Interpretation layer: Translate raw outputs into human-readable insights.
- Deterministic output: Produce stable, comparable results across runs.
- Error handling: Detect missing privileges, tools, or unsupported interfaces.
3.3 Non-Functional Requirements
- Performance: Runs in under 5 seconds for baseline mode.
- Reliability: Handles missing data sources gracefully.
- Usability: Output is readable without post-processing.
3.4 Example Usage / Output
$ sudo ./fwaudit.sh
FIREWALL AUDIT
Backend: nftables (via iptables-nft)
INPUT policy: DROP
Rules: 12
Issues:
- SSH open to 0.0.0.0/0
- No IPv6 rules detected
Simulated packet:
TCP 203.0.113.50:54321 -> 192.168.1.10:5432
Result: DROP at rule 7
3.5 Data Formats / Schemas / Protocols
- Input: CLI tool output, kernel state, or service logs.
- Output: A structured report with sections and summarized metrics.
3.6 Edge Cases
- Missing tool binaries or insufficient permissions.
- Interfaces or hosts that return no data.
- Transient states (link flaps, intermittent loss).
3.7 Real World Outcome
$ sudo ./fwaudit.sh
FIREWALL AUDIT
Backend: nftables (via iptables-nft)
INPUT policy: DROP
Rules: 12
Issues:
- SSH open to 0.0.0.0/0
- No IPv6 rules detected
Simulated packet:
TCP 203.0.113.50:54321 -> 192.168.1.10:5432
Result: DROP at rule 7
3.7.1 How to Run (Copy/Paste)
$ sudo ./fwaudit.sh [options]
3.7.2 Golden Path Demo (Deterministic)
Run the tool against a known-good target and verify every section of the output matches the expected format.
3.7.3 If CLI: provide an exact terminal transcript
$ sudo ./fwaudit.sh
FIREWALL AUDIT
Backend: nftables (via iptables-nft)
INPUT policy: DROP
Rules: 12
Issues:
- SSH open to 0.0.0.0/0
- No IPv6 rules detected
Simulated packet:
TCP 203.0.113.50:54321 -> 192.168.1.10:5432
Result: DROP at rule 7
4. Solution Architecture
4.1 High-Level Design
[Collector] -> [Parser] -> [Analyzer] -> [Reporter]
4.2 Key Components
| Component | Responsibility | Key Decisions |
|---|---|---|
| Collector | Gather raw tool output | Which tools to call and with what flags |
| Parser | Normalize raw text/JSON | Text vs JSON parsing strategy |
| Analyzer | Compute insights | Thresholds and heuristics |
| Reporter | Format output | Stable layout and readability |
4.3 Data Structures (No Full Code)
- ChainRecord: table, hook, default policy, ordered rules
- RuleRecord: chain, position, match predicates (protocol, source, destination, port, state), verdict
- Finding: severity, rule/chain reference, message
4.4 Algorithm Overview
Key Algorithm: Evidence Aggregation
- Collect raw outputs from tools.
- Parse into normalized records.
- Apply interpretation rules and thresholds.
- Render the final report.
Complexity Analysis:
- Time: O(n) over number of records
- Space: O(n) to hold parsed records
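A minimal Bash sketch of the simulation step, assuming rules have already been normalized into tab-separated lines of the form position, protocol, source CIDR, destination port, verdict; the field layout and the cidr_match helper are illustrative assumptions, not a required design:
# simulate_packet PROTO SRC_IP DPORT RULES_FILE
# Walks the normalized rules in order and stops at the first matching verdict.
simulate_packet() {
  local proto="$1" src="$2" dport="$3" rules_file="$4"
  local pos r_proto r_src r_dport r_verdict
  while IFS=$'\t' read -r pos r_proto r_src r_dport r_verdict; do
    [ "$r_proto" = "$proto" ] || [ "$r_proto" = "any" ] || continue
    [ "$r_dport" = "$dport" ] || [ "$r_dport" = "any" ] || continue
    cidr_match "$src" "$r_src" || continue   # cidr_match: hypothetical helper you implement
    printf 'Result: %s at rule %s\n' "$r_verdict" "$pos"
    return 0
  done < "$rules_file"
  printf 'Result: no rule matched; chain policy applies\n'
}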
5. Implementation Guide
5.1 Development Environment Setup
# Install required tools with your distro package manager (Debian/Ubuntu example):
sudo apt-get install -y nftables iptables jq
5.2 Project Structure
project-root/
├── src/
│ ├── main
│ ├── collectors/
│ └── formatters/
├── tests/
└── README.md
5.3 The Core Question You’re Answering
“What traffic is allowed or denied, and why?”
5.4 Concepts You Must Understand First
- Netfilter hooks
- Where INPUT/OUTPUT/FORWARD run.
- Book Reference: “Linux Firewalls” - Ch. 1-2
- nftables vs iptables
- How nftables supersedes the legacy iptables family, and how iptables-nft translates legacy commands onto nftables.
- Book Reference: “Linux Firewalls” - Ch. 3
- Default policy semantics
- Why policy DROP changes everything.
- Book Reference: “Linux Firewalls” - Ch. 4
5.5 Questions to Guide Your Design
- How will you detect shadowed or unreachable rules?
- How will you represent chains and policy order visually?
- How will you simulate a packet through rules?
5.6 Thinking Exercise
Given these rules:
1 ACCEPT all -- lo
2 ACCEPT state ESTABLISHED,RELATED
3 DROP tcp -- 0.0.0.0/0 dpt:80
4 ACCEPT tcp -- 10.0.0.0/8 dpt:80
Question: What happens to traffic from 10.0.0.5 to port 80?
5.7 The Interview Questions They’ll Ask
- “Explain INPUT vs FORWARD chains.”
- “What is the difference between DROP and REJECT?”
- “Why is rule order important?”
- “How do you make rules persistent?”
- “What is the relationship between nftables and iptables?”
5.8 Hints in Layers
Hint 1: Use iptables-save and nft list ruleset as inputs.
Hint 2: Model rules as ordered lists with match predicates.
Hint 3: Simulate by evaluating rules in order until a verdict.
Hint 4: Flag IPv6 as a separate audit surface.
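A sketch of how Hint 1 could look in the collector; backend labels and error text are illustrative:
# Prefer the native nftables view; fall back to iptables-save if nft is absent.
if command -v nft >/dev/null 2>&1 && sudo nft list ruleset >/dev/null 2>&1; then
  backend="nftables"
  ruleset="$(sudo nft list ruleset)"
elif command -v iptables-save >/dev/null 2>&1; then
  backend="$(iptables -V)"   # e.g. "iptables v1.8.7 (nf_tables)" or "... (legacy)"
  ruleset="$(sudo iptables-save)"
else
  echo "ERROR: no supported firewall backend found" >&2
  exit 1
fi
echo "Backend: $backend"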
5.9 Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Netfilter basics | “Linux Firewalls” | Ch. 1-3 |
| nftables | netfilter.org docs | Online |
5.10 Implementation Phases
Phase 1: Foundation (1-2 days)
- Define outputs and parse a single tool.
- Produce a minimal report.
Phase 2: Core Functionality (3-5 days)
- Add remaining tools and interpretation logic.
- Implement stable formatting and summaries.
Phase 3: Polish & Edge Cases (2-3 days)
- Handle missing data and failure modes.
- Add thresholds and validation checks.
5.11 Key Implementation Decisions
| Decision | Options | Recommendation | Rationale |
|---|---|---|---|
| Parsing format | Text vs JSON | JSON where available | More stable parsing |
| Output layout | Table vs sections | Sections | Readability for humans |
| Sampling | One-shot vs periodic | One-shot + optional loop | Predictable runtime |
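For the “JSON where available” recommendation, nft has a machine-readable output mode; the jq filter is one illustrative way to pull out chain definitions, not a required schema:
# Emit the ruleset as JSON and extract the chain objects (hook, policy, priority).
sudo nft -j list ruleset | jq '.nftables[] | select(.chain) | .chain'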
6. Testing Strategy
6.1 Test Categories
| Category | Purpose | Examples |
|---|---|---|
| Unit Tests | Validate parsing | Parse fixed tool output samples |
| Integration Tests | Validate tool calls | Run against a lab host |
| Edge Case Tests | Handle failures | Missing tool, no permissions |
6.2 Critical Test Cases
- Reference run: Output matches golden transcript.
- Missing tool: Proper error message and partial report.
- Permission denied: Clear guidance for sudo or capabilities.
6.3 Test Data
Input: captured command output
Expected: normalized report with correct totals
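The expected output is easiest to enforce as a golden transcript. A minimal sketch, assuming a recorded reference file at tests/golden/baseline.txt (an illustrative path) and a run on the reference lab host:
# Fail loudly if the report drifts from the recorded reference output.
diff -u tests/golden/baseline.txt <(sudo ./fwaudit.sh) && echo PASS || { echo FAIL; exit 1; }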
7. Common Pitfalls & Debugging
7.1 Frequent Mistakes
| Pitfall | Symptom | Solution |
|---|---|---|
| Wrong interface | Empty output | Verify interface names |
| Missing privileges | Permission errors | Use sudo or capabilities |
| Misparsed output | Wrong stats | Prefer JSON parsing |
7.2 Debugging Strategies
- Re-run each tool independently to compare raw output.
- Add a verbose mode that dumps raw data sources.
7.3 Performance Traps
- Avoid tight loops without sleep intervals.
8. Extensions & Challenges
8.1 Beginner Extensions
- Add colored status markers.
- Export report to a file.
8.2 Intermediate Extensions
- Add JSON output mode.
- Add baseline comparison.
8.3 Advanced Extensions
- Add multi-host aggregation.
- Add alerting thresholds.
9. Real-World Connections
9.1 Industry Applications
- SRE runbooks and on-call diagnostics.
- Network operations monitoring.
9.2 Related Open Source Projects
- tcpdump / iproute2 / nftables
- mtr / iperf3
9.3 Interview Relevance
- Demonstrates evidence-based debugging and tool mastery.
10. Resources
10.1 Essential Reading
- Primary book listed in the main guide.
- Relevant RFCs and tool manuals.
10.2 Video Resources
- Conference talks on Linux networking and troubleshooting.