Project 16: Persistence Atlas

Map persistence points and prioritize detection coverage.

Quick Reference

Attribute Value
Difficulty Level 2
Time Estimate 1-2 weeks
Main Programming Language Python
Alternative Programming Languages PowerShell
Coolness Level Level 3
Business Potential Level 2
Prerequisites OS internals basics, CLI usage, logging familiarity
Key Topics persistence mapping, risk ranking

1. Learning Objectives

By completing this project, you will:

  1. Build a repeatable workflow for mapping and ranking persistence points.
  2. Generate reports with deterministic outputs.
  3. Translate findings into actionable recommendations.

2. All Theory Needed (Per-Concept Breakdown)

Persistence Mapping and Prioritization

Fundamentals Persistence mechanisms allow adversaries to survive reboots and maintain access. For rootkits, persistence can occur in boot loaders, kernel modules, firmware, or user-space autostart mechanisms. A persistence atlas is a structured inventory of these mechanisms with detection and mitigation notes. Prioritization matters: some persistence points are high risk because they operate below the OS or are difficult to inspect. Mapping and ranking persistence points helps defenders focus their hunting and monitoring on the most critical areas.

Deep Dive into the concept Persistence exists across layers. Boot-level persistence includes EFI bootloaders, boot configuration data, and firmware variables. Kernel-level persistence includes drivers or modules loaded at boot. User-level persistence includes startup scripts, scheduled tasks, launch agents, and registry run keys. Rootkits prefer deeper layers because they are harder to detect and can lie to higher layers.

A persistence atlas should catalog each mechanism with attributes: privilege required to install, visibility to standard tools, and likely detection sources. A bootloader modification requires high privilege and has low visibility; a user-space startup entry requires little privilege and is easy to spot with standard tools. By scoring each mechanism on these dimensions, you can prioritize which checks to automate and which alerts to treat as critical.

Mapping persistence also supports response. If you detect a rootkit, you need to know which persistence points to inspect. The atlas becomes a checklist for containment and remediation. It also supports defensive design: if you can close or monitor high-risk points, you reduce the rootkit’s ability to persist.

The atlas should map to MITRE ATT&CK techniques for consistency with industry frameworks. This makes it easier to communicate coverage and gaps to stakeholders. It also enables you to track improvements over time. The final output should be a structured document and a machine-readable dataset that can be used by tools.
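
A sketch of what one machine-readable atlas entry might contain. The field names and the ATT&CK mapping shown are illustrative choices, not a required schema.

import json

# One hypothetical atlas entry; field names are illustrative, not a required schema.
atlas = [
    {
        "id": "PA-001",
        "name": "EFI bootloader modification",
        "layer": "boot",
        "attack_technique": "T1542.003",  # MITRE ATT&CK: Pre-OS Boot: Bootkit
        "privilege_required": "root/admin",
        "visibility": "low",
        "detection_sources": ["EFI partition hashing", "Secure Boot audit logs"],
        "mitigations": ["Secure Boot", "firmware update policy"],
    },
]

# Emit the machine-readable dataset alongside the written document.
with open("atlas.json", "w") as fh:
    json.dump(atlas, fh, indent=2, sort_keys=True)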

How this fits in the project You will apply this in Section 3.2 (Functional Requirements), Section 8.2 (Intermediate Extensions), and Section 9.2 (Related Open Source Projects). Also used in: P16-persistence-atlas.

Definitions & key terms

  • Persistence: Techniques used to maintain access across reboots or logouts.
  • Atlas: A structured inventory of persistence points with metadata.
  • Depth: How low in the system stack a persistence point resides.
  • Visibility: How likely a persistence point is to appear in standard tools.

Mental model diagram

[Firmware] -> [Bootloader] -> [Kernel] -> [User Space]
   ^            ^              ^            ^
 deepest       deep          medium      shallow   (deeper layers: harder to inspect, higher risk)

How it works (step-by-step)

  1. Enumerate persistence points per OS layer.
  2. Assign scores for depth, visibility, and impact.
  3. Map each entry to detection sources and mitigations.
  4. Produce a ranked atlas and update it regularly.

Minimal concrete example

persistence_point, depth, visibility, mitigation
EFI bootloader, high, low, Secure Boot + baseline
Startup script, low, high, auditd + file integrity
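
A short Python sketch that parses a tidied copy of the sample above (whitespace trimmed) and ranks the rows so deep, low-visibility points come first. The numeric mappings are illustrative assumptions.

import csv
import io

# The same sample as above, embedded for a self-contained run.
SAMPLE = """persistence_point,depth,visibility,mitigation
EFI bootloader,high,low,Secure Boot + baseline
Startup script,low,high,auditd + file integrity
"""

DEPTH = {"high": 3, "medium": 2, "low": 1}
VISIBILITY = {"low": 3, "medium": 2, "high": 1}  # harder to see means higher risk

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
rows.sort(key=lambda r: DEPTH[r["depth"]] + VISIBILITY[r["visibility"]], reverse=True)
for rank, row in enumerate(rows, start=1):
    print(rank, row["persistence_point"], "->", row["mitigation"])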

Common misconceptions

  • “Persistence is only user-space.” Rootkits often persist in boot or kernel layers.
  • “All persistence points are equal.” Risk varies by depth and visibility.
  • “Once mapped, the atlas is done.” It must be updated with new techniques.

Check-your-understanding questions

  • Why are boot-level persistence points higher risk?
  • What factors should you score in an atlas?
  • How does MITRE mapping help?

Check-your-understanding answers

  • They run before the OS and can subvert higher-layer defenses.
  • Depth, visibility, impact, and exploitability.
  • It provides a standard language for coverage and gaps.

Real-world applications

  • Threat hunting programs that prioritize high-risk persistence points.
  • Security control coverage assessments.

Where you’ll apply it You will apply this in Section 3.2 (Functional Requirements), Section 8.2 (Intermediate Extensions), and Section 9.2 (Related Open Source Projects). Also used in: P16-persistence-atlas.

References

  • MITRE ATT&CK persistence techniques
  • Practical Malware Analysis - persistence chapters

Key insights Persistence mapping turns an overwhelming landscape into a prioritized checklist.

Summary Catalog persistence points, score risk, and map to detection sources.

Homework/Exercises to practice the concept

  • Create a top-10 list of persistence points for your primary OS.
  • Score each on depth and visibility.

Solutions to the homework/exercises

  • Your list should include boot, kernel, and user-space points.
  • Scores should highlight boot and kernel points as higher risk.

3. Project Specification

3.1 What You Will Build

A tool or document that maps persistence points and prioritizes detection coverage.

3.2 Functional Requirements

  1. Collect required system artifacts for the task.
  2. Normalize data and produce a report output.
  3. Provide a deterministic golden-path demo.
  4. Include explicit failure handling and exit codes.

3.3 Non-Functional Requirements

  • Performance: Complete within a typical maintenance window.
  • Reliability: Outputs must be deterministic and versioned.
  • Usability: Clear CLI output and documentation.

3.4 Example Usage / Output

$ ./P16-persistence-atlas.py --report
[ok] report generated

3.5 Data Formats / Schemas / Protocols

Report JSON schema with fields: timestamp, host, findings, severity, remediation.
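
An illustrative instance of that schema (placeholder values; carrying severity and remediation inside each finding is one reasonable interpretation of the field list):

{
  "timestamp": "2024-01-01T00:00:00Z",
  "host": "demo-host",
  "findings": [
    {
      "id": "PA-001",
      "description": "Unsigned EFI bootloader detected",
      "severity": "high",
      "remediation": "Restore the vendor bootloader and re-enable Secure Boot"
    }
  ]
}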

3.6 Edge Cases

  • Missing permissions or insufficient privileges.
  • Tooling not installed (e.g., missing sysctl or OS query tools).
  • Empty data sets (no drivers/modules found).

3.7 Real World Outcome

A deterministic report output stored in a case directory with hashes.

3.7.1 How to Run (Copy/Paste)

./P16-persistence-atlas.py --out reports/P16-persistence-atlas.json

3.7.2 Golden Path Demo (Deterministic)

  • Report file exists and includes findings with severity.

3.7.3 Failure Demo

$ ./P16-persistence-atlas.py --out /readonly/report.json
[error] cannot write report file
exit code: 2

Exit Codes:

  • 0 success
  • 2 output error
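
A minimal sketch of how this contract could look in the script itself. The flag name and messages mirror the demos above; everything else is a placeholder.

#!/usr/bin/env python3
"""Minimal skeleton matching the exit-code contract above; collection logic is omitted."""
import argparse
import json
import sys

def main() -> int:
    parser = argparse.ArgumentParser()
    parser.add_argument("--out", required=True, help="path to the report JSON")
    args = parser.parse_args()
    report = {"host": "demo", "findings": []}  # placeholder until the collector exists
    try:
        with open(args.out, "w") as fh:
            json.dump(report, fh, indent=2, sort_keys=True)
    except OSError:
        print("[error] cannot write report file", file=sys.stderr)
        return 2  # output error
    print("[ok] report generated")
    return 0

if __name__ == "__main__":
    sys.exit(main())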

4. Solution Architecture

4.1 High-Level Design

[Collector] -> [Analyzer] -> [Report]

4.2 Key Components

Component Responsibility Key Decisions
Collector Collects raw artifacts Prefer OS-native tools
Analyzer Normalizes and scores findings Deterministic rules
Reporter Outputs report JSON + Markdown

4.3 Data Structures (No Full Code)

finding = { id, description, severity, evidence, remediation }

4.4 Algorithm Overview

Key Algorithm: Normalize and Score

  1. Collect artifacts.
  2. Normalize fields.
  3. Apply scoring rules.
  4. Output report.
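
A sketch of steps 2 and 3 in Python. The field names and the layer-to-severity rule are illustrative assumptions, not the required implementation.

def normalize(raw: dict) -> dict:
    """Map a raw artifact into the common finding shape (field names are assumptions)."""
    return {
        "id": raw.get("id", "unknown"),
        "description": raw.get("desc", "").strip(),
        "layer": raw.get("layer", "user"),
        "evidence": raw.get("source", ""),
    }

def score(finding: dict) -> str:
    """Deterministic rule: deeper layers map to higher severity."""
    severity_by_layer = {"firmware": "critical", "boot": "high", "kernel": "high", "user": "medium"}
    return severity_by_layer.get(finding["layer"], "low")

def analyze(raw_artifacts: list) -> list:
    findings = [normalize(a) for a in raw_artifacts]
    for f in findings:
        f["severity"] = score(f)
    return sorted(findings, key=lambda f: f["id"])  # stable ordering keeps reports diffable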

Complexity Analysis:

  • Time: O(n) for n artifacts.
  • Space: O(n) for report.

5. Implementation Guide

5.1 Development Environment Setup

python3 -m venv .venv && source .venv/bin/activate
# install OS-specific tools as needed

5.2 Project Structure

project/
|-- src/
|   `-- main.py
|-- reports/
`-- README.md

5.3 The Core Question You’re Answering

“Where can rootkits persist across reboots?”

This project turns theory into a repeatable, auditable workflow.

5.4 Concepts You Must Understand First

  • Relevant OS security controls
  • Detection workflows
  • Evidence handling

5.5 Questions to Guide Your Design

  1. What data sources are trusted for this task?
  2. How will you normalize differences across OS versions?
  3. What is a high-confidence signal vs noise?

5.6 Thinking Exercise

Sketch a pipeline from data collection to report output.

5.7 The Interview Questions They’ll Ask

  1. What is the main trust boundary in this project?
  2. How do you validate findings?
  3. What would you automate in production?

5.8 Hints in Layers

Hint 1: Start with a small, deterministic dataset.

Hint 2: Normalize output fields early.

Hint 3: Add a failure path with clear exit codes.


5.9 Books That Will Help

Topic Book Chapter
Rootkit defense Practical Malware Analysis Rootkit chapters
OS internals Operating Systems: Three Easy Pieces Processes and files

5.10 Implementation Phases

Phase 1: Data Collection (3-4 days)

Goals: Collect raw artifacts reliably.

Tasks:

  1. Identify OS-native tools.
  2. Capture sample data.

Checkpoint: Raw dataset stored.

Phase 2: Analysis & Reporting (4-5 days)

Goals: Normalize and score findings.

Tasks:

  1. Build analyzer.
  2. Generate report.

Checkpoint: Deterministic report generated.

Phase 3: Validation (2-3 days)

Goals: Validate rules and handle edge cases.

Tasks:

  1. Add failure tests.
  2. Document runbook.

Checkpoint: Failure cases documented.

5.11 Key Implementation Decisions

Decision Options Recommendation Rationale
Report format JSON, CSV JSON Structured and diffable
Scoring Simple, Weighted Weighted Prioritize high risk findings
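
If you take the weighted option, a small sketch of what that might look like; the weight values are assumptions to tune against your own data.

# Depth dominates so boot/firmware findings rise to the top of the ranking.
WEIGHTS = {"depth": 3, "visibility": 2, "impact": 1}

def weighted_score(depth: int, visibility: int, impact: int) -> int:
    return (WEIGHTS["depth"] * depth
            + WEIGHTS["visibility"] * visibility
            + WEIGHTS["impact"] * impact)

print(weighted_score(depth=3, visibility=3, impact=3))  # bootloader implant -> 18
print(weighted_score(depth=1, visibility=1, impact=1))  # startup script     -> 6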

6. Testing Strategy

6.1 Test Categories

Category Purpose Examples
Unit Tests Parser logic Sample data parsing
Integration Tests End-to-end run Generate report
Edge Case Tests Missing permissions Error path

6.2 Critical Test Cases

  1. Report generated with deterministic ordering.
  2. Exit code indicates failure on invalid output path.
  3. At least one high-risk finding is flagged in test data.
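
A pytest sketch of those three cases. It assumes the script path and --out flag from Section 3.7; comparing findings rather than raw bytes sidesteps the timestamp field.

import json
import subprocess

TOOL = "./P16-persistence-atlas.py"  # assumes the script is executable in the repo root

def test_report_is_deterministic(tmp_path):
    """Two runs over the same fixture should yield identical findings."""
    out1, out2 = tmp_path / "a.json", tmp_path / "b.json"
    for out in (out1, out2):
        subprocess.run([TOOL, "--out", str(out)], check=True)
    findings = [json.loads(p.read_text())["findings"] for p in (out1, out2)]
    assert findings[0] == findings[1]

def test_invalid_output_path_exits_2():
    result = subprocess.run([TOOL, "--out", "/readonly/report.json"])
    assert result.returncode == 2

def test_high_risk_finding_flagged(tmp_path):
    out = tmp_path / "report.json"
    subprocess.run([TOOL, "--out", str(out)], check=True)
    assert any(f["severity"] == "high" for f in json.loads(out.read_text())["findings"])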

6.3 Test Data

Provide a small fixture file with one known suspicious artifact.
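
For example (hypothetical content; the field names match the analyzer sketch in Section 4.4):

[
  {"id": "fx-001", "layer": "user", "desc": "Login banner script", "source": "/etc/profile.d/motd.sh"},
  {"id": "fx-002", "layer": "boot", "desc": "Unsigned EFI binary", "source": "EFI/Boot/bootx64.efi"}
]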

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

Pitfall Symptom Solution
Noisy results Too many alerts Add normalization and thresholds
Missing permissions Script fails Detect and warn early

7.2 Debugging Strategies

  • Log raw inputs before normalization.
  • Add verbose mode to show rule evaluation.

7.3 Performance Traps

Scanning large datasets without filtering can be slow; restrict scope to critical paths.
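
One way to keep the scope tight is to enumerate only known autostart locations instead of walking the whole filesystem. The paths below are common Linux examples and an assumption, not a complete list.

from pathlib import Path

# Assumed Linux-centric scope; extend per target OS.
AUTOSTART_DIRS = ["/etc/cron.d", "/etc/systemd/system", "/etc/init.d", "~/.config/autostart"]

for d in AUTOSTART_DIRS:
    p = Path(d).expanduser()
    if p.is_dir():
        for entry in sorted(p.iterdir()):  # sorted keeps collection order deterministic
            print(entry)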


8. Extensions & Challenges

8.1 Beginner Extensions

  • Add a Markdown summary report.

8.2 Intermediate Extensions

  • Add a JSON schema validator for output.
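
For instance, with the third-party jsonschema package (a sketch; the schema shown is a minimal subset of the Section 3.5 fields):

import json

import jsonschema  # third-party: pip install jsonschema

# Minimal subset of the Section 3.5 schema, shown for illustration only.
REPORT_SCHEMA = {
    "type": "object",
    "required": ["timestamp", "host", "findings"],
    "properties": {
        "timestamp": {"type": "string"},
        "host": {"type": "string"},
        "findings": {"type": "array"},
    },
}

with open("reports/P16-persistence-atlas.json") as fh:
    report = json.load(fh)

jsonschema.validate(instance=report, schema=REPORT_SCHEMA)  # raises ValidationError on mismatch
print("[ok] report matches schema")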

8.3 Advanced Extensions

  • Integrate with a SIEM or ticketing system.

9. Real-World Connections

9.1 Industry Applications

  • Security operations audits and detection validation.

9.2 Related Open Source Projects

  • osquery - endpoint inventory

9.3 Interview Relevance

  • Discussing detection workflows and auditability.

10. Resources

10.1 Essential Reading

  • Practical Malware Analysis - rootkit detection chapters

10.2 Video Resources

  • Conference talks on rootkit detection

10.3 Tools & Documentation

  • OS-native logging and audit tools

11. Self-Assessment Checklist

11.1 Understanding

  • I can describe the trust boundary for this task.

11.2 Implementation

  • Report generation is deterministic.

11.3 Growth

  • I can explain how to operationalize this check.

12. Submission / Completion Criteria

Minimum Viable Completion:

  • Report created and contains at least one finding.

Full Completion:

  • Findings are categorized with remediation guidance.

Excellence (Going Above & Beyond):

  • Integrated into a broader toolkit or pipeline.