Project 13: Bootkit Response Playbook

Create a response playbook for boot-level compromise.

Quick Reference

Attribute | Value
Difficulty | Level 2
Time Estimate | 1-2 weeks
Main Programming Language | Python
Alternative Programming Languages | PowerShell
Coolness Level | Level 3
Business Potential | Level 2
Prerequisites | OS internals basics, CLI usage, logging familiarity
Key Topics | IR playbooks, boot integrity

1. Learning Objectives

By completing this project, you will:

  1. Build a repeatable response workflow for boot-level compromise.
  2. Generate reports with deterministic outputs.
  3. Translate findings into actionable recommendations.

2. All Theory Needed (Per-Concept Breakdown)

Boot Chain, Secure Boot, and Measured Trust

Fundamentals The boot chain is the sequence of components that initialize a system from firmware to kernel. Secure Boot establishes trust by verifying digital signatures on boot components, while Measured Boot records hashes of those components into a TPM for later attestation. Rootkits that target the boot chain (bootkits) aim to execute before the operating system, which lets them subvert the kernel before it can defend itself. A defender must know exactly which files and firmware stages participate in boot, which signatures are expected, and where trust transitions occur. Without this map, you cannot know what to baseline or where to look for tampering.

Deep Dive into the concept Modern systems rely on a multi-stage boot process. On UEFI systems, firmware verifies a bootloader image using Platform Key (PK), Key Exchange Keys (KEKs), and signature databases (db, dbx). The bootloader then loads the OS kernel and early drivers. Secure Boot prevents unsigned or revoked components from loading, but it does not guarantee the integrity of already-signed components if an attacker can replace them with other signed-but-malicious binaries or abuse vulnerable signed drivers. Measured Boot complements Secure Boot by recording hashes of each stage into TPM PCRs. This does not block boot; it enables post-boot validation by comparing PCR values to a known-good baseline.

Trust boundaries in the boot chain exist at each handoff: firmware trusts the bootloader, the bootloader trusts the kernel, and the kernel trusts early drivers. Attackers target these boundaries because a single compromised stage can persist across reboots and hide within normal boot flows. Bootkits often target the EFI System Partition (ESP), replacing or modifying bootloaders, or they modify boot configuration data to load a malicious component early. On legacy BIOS/MBR systems, the first sectors of disk are the attack surface. Because boot components are rarely observed by routine host tools, a defender must explicitly inventory them and measure them.

Practical defense requires three activities: mapping, baselining, and verification. Mapping is enumerating the exact files, partitions, and signatures involved in boot. Baselining is recording hashes and signature metadata for those components and storing the baseline offline. Verification is continuously comparing current boot components to the baseline and alerting on drift. When updates occur, the baseline must be updated in a controlled, audited workflow.
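
A minimal sketch of the baseline-and-verify loop in Python; the component paths are illustrative and must be replaced with the inventory you mapped for your own platform (ESP-resident files require the ESP to be mounted first):

import hashlib
import json
import pathlib

# Illustrative component map; adapt to the boot components you enumerated
COMPONENTS = {
    "kernel": r"C:\Windows\System32\ntoskrnl.exe",
    "elam_driver": r"C:\Windows\System32\drivers\elam.sys",
}

def sha256(path):
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def record_baseline(out_file="baseline.json"):
    # Store the result offline, not on the host being monitored
    hashes = {name: sha256(p) for name, p in COMPONENTS.items()}
    pathlib.Path(out_file).write_text(json.dumps(hashes, indent=2, sort_keys=True))

def verify(baseline_file="baseline.json"):
    # Return the components whose current hash differs from the stored baseline
    known = json.loads(pathlib.Path(baseline_file).read_text())
    current = {name: sha256(p) for name, p in COMPONENTS.items()}
    return {name: h for name, h in current.items() if h != known.get(name)}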

Secure Boot policy is only as strong as the enforcement of signature databases. If dbx revocations are outdated or if a platform allows custom keys without governance, attackers can introduce their own trusted components. Measured Boot adds accountability: if PCRs change unexpectedly, you know the boot chain differs. But measuring is not detecting; you must actually retrieve and compare measurements. Rootkit defense therefore depends on operationalizing those checks, not just enabling Secure Boot in firmware.
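
Retrieving measurements for comparison can be as simple as shelling out to existing utilities. A sketch assuming a Linux host with tpm2-tools and mokutil installed; parsing the output and comparing it to a baseline is left to the reader:

import subprocess

def pcr_values():
    # tpm2_pcrread ships with tpm2-tools; limit the selection to the PCRs your policy covers
    return subprocess.run(["tpm2_pcrread", "sha256:0,2,4,7"],
                          capture_output=True, text=True, check=True).stdout

def secure_boot_state():
    # mokutil reports whether Secure Boot is enabled on many Linux distributions
    return subprocess.run(["mokutil", "--sb-state"],
                          capture_output=True, text=True, check=True).stdout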

How this fits into the project You will apply this in Section 3.1 (What You Will Build), Section 3.5 (Data Formats), and Section 4.1 (High-Level Design). Also used in: P02-boot-chain-map, P08-boot-integrity-monitor, P19-secure-boot-policy-review, P13-bootkit-response-playbook.

Definitions & key terms

  • Boot chain: Ordered sequence of firmware, bootloader, kernel, and early drivers that start the OS.
  • Secure Boot: Signature verification that blocks untrusted boot components from loading.
  • Measured Boot: Recording hashes of boot components into TPM PCRs for later attestation.
  • Bootkit: Rootkit that compromises boot components to execute before the OS.

Mental model diagram

[UEFI Firmware]
   |  (verifies)
   v
[Bootloader] --(loads)--> [Kernel] --(loads)--> [Early Drivers]
   |
   v
[TPM PCR Measurements]
   |
   v
[Attestation / Baseline Compare]

How it works (step-by-step)

  1. Firmware verifies bootloader signature using platform keys.
  2. Bootloader loads kernel and early drivers; hashes are measured into TPM.
  3. OS starts and reads boot configuration data and driver lists.
  4. Defender tool compares current hashes and PCR values to a trusted baseline.
  5. Any mismatch triggers investigation or containment.

Minimal concrete example

boot_component, path, signer, sha256
bootloader, \EFI\Microsoft\Boot\bootmgfw.efi, Microsoft, 9f...
kernel, C:\Windows\System32\ntoskrnl.exe, Microsoft, 4a...
boot_driver, C:\Windows\System32\drivers\elam.sys, Microsoft, c1...

Common misconceptions

  • “Secure Boot means no bootkits.” It reduces risk but does not prevent signed malicious components.
  • “Measured Boot blocks tampering.” It only measures; you must compare measurements.
  • “Boot integrity is a one-time check.” Updates and configuration changes require re-baselining.

Check-your-understanding questions

  • What is the difference between Secure Boot and Measured Boot?
  • Why is the ESP a common bootkit target?
  • What evidence proves a boot chain is unchanged?

Check-your-understanding answers

  • Secure Boot blocks untrusted components; Measured Boot records hashes for later validation.
  • The ESP contains bootloaders and configuration that execute before the OS; modifying it enables early execution.
  • Matching hashes or PCR measurements against a known-good baseline is strong evidence.

Real-world applications

  • Enterprise boot integrity baselining and compliance checks.
  • Incident response for suspected boot-level compromise.

Where you’ll apply it You will apply this in Section 3.1 (What You Will Build), Section 3.5 (Data Formats), and Section 4.1 (High-Level Design). Also used in: P02-boot-chain-map, P08-boot-integrity-monitor, P19-secure-boot-policy-review, P13-bootkit-response-playbook.

References

  • Microsoft Secure Boot documentation
  • NIST SP 800-147 (BIOS protection guidelines)
  • UEFI specification sections on Secure Boot

Key insights Boot integrity is a chain; the weakest or unmeasured link decides trust.

Summary Secure Boot verifies; Measured Boot records. You need both, plus baselines and monitoring.

Homework/Exercises to practice the concept

  • Enumerate the boot components on your OS and note their signature status.
  • Compare boot hashes before and after a system update.

Solutions to the homework/exercises

  • Your list should include firmware, bootloader, kernel, and early drivers with signer names.
  • After updates, at least one boot component hash should change; document it and update the baseline.

Incident Response Decisioning for Boot and Kernel Compromise

Fundamentals Incident response for rootkits is different from ordinary malware response because the trust boundary itself may be compromised. If the kernel or boot chain is untrusted, in-host remediation is unreliable. Decisioning focuses on evidence thresholds: when to contain, when to collect, and when to rebuild from trusted media. A good decision tree reduces ambiguity by defining measurable triggers and approvals. The goal is to balance operational continuity with integrity and safety.

Deep Dive into the concept Rootkit response begins with evidence. You must collect volatile data early: memory images, process lists, and network state. If you wait, the evidence may be lost or altered. The decision tree should define what constitutes “high confidence” of kernel compromise: for example, mismatched boot hashes, unsigned drivers loaded, or memory forensics indicating hidden kernel objects. These triggers should be measurable so responders are not forced to improvise.

Containment decisions depend on risk. A suspected bootkit on a domain controller is higher risk than on a non-critical workstation. The decision tree should include asset criticality, data sensitivity, and business impact. It should also define who approves destructive actions like reimaging. This reduces delays when time matters.

Rebuild vs remediation is the central choice. For many boot or kernel compromises, rebuild from trusted media is the safest path. Live remediation may be possible in some cases, but it should be the exception. The decision tree should include evidence capture requirements before rebuild, because rebuilding destroys evidence. You must also define post-rebuild validation: verifying Secure Boot, restoring baselines, and confirming that suspicious indicators are resolved.
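
A sketch of how measurable triggers, asset criticality, approvals, and evidence requirements could be encoded; every threshold and field name here is illustrative:

def decide(signals, asset_criticality):
    # Strong evidence of boot or kernel compromise: rebuild from trusted media
    if signals.get("boot_hash_mismatch") or signals.get("hidden_kernel_objects"):
        return {
            "action": "rebuild",
            "approval_required": True,  # destructive action needs sign-off
            "evidence_before_action": ["memory image", "disk image", "boot configuration"],
            "post_rebuild_validation": ["Secure Boot enabled", "baseline hashes match"],
        }
    # Weaker signals: contain and investigate, escalating on critical assets
    if signals.get("unsigned_driver_loaded") or signals.get("suspicious_process"):
        return {
            "action": "contain + investigate",
            "approval_required": asset_criticality == "critical",
            "evidence_before_action": ["memory image", "process and network state"],
        }
    return {"action": "monitor", "approval_required": False}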

Finally, communication is a required output. Rootkit incidents require clear reporting to stakeholders. A decision tree should include escalation paths and reporting artifacts: what was observed, what was done, and what remains unknown. This is how you maintain trust in the response process.

How this fits into the project You will apply this in Section 3.7 (Real World Outcome), Section 5.10 (Implementation Phases), and Section 11 (Self-Assessment). Also used in: P13-bootkit-response-playbook, P17-incident-response-decision-tree.

Definitions & key terms

  • Containment: Actions that limit spread or impact of a compromise.
  • Rebuild: Reimage or reinstall from trusted media to restore integrity.
  • Evidence threshold: Measured criteria that justify a response decision.
  • Post-rebuild validation: Checks that confirm restored integrity after remediation.

Mental model diagram

[Detection Signal] -> [Evidence Capture] -> [Decision Threshold]
     |                         |
     v                         v
[Contain]  <-------->  [Rebuild or Remediate]

How it works (step-by-step)

  1. Capture volatile evidence and preserve it externally.
  2. Evaluate signals against defined thresholds.
  3. Decide containment actions based on asset criticality.
  4. Rebuild from trusted media when kernel integrity is in doubt.
  5. Validate post-rebuild state and update baselines.

Minimal concrete example

# Illustrative triggers; the boolean signal names are placeholders for your own checks
if boot_hash_mismatch and unsigned_driver_loaded:
    action = 'rebuild'
elif suspicious_process and no_kernel_evidence:
    action = 'contain + investigate'
else:
    action = 'monitor'

Common misconceptions

  • “You can always clean a rootkit in-place.” Kernel compromise undermines trust in local tools.
  • “Evidence can be collected later.” Volatile data disappears quickly.
  • “Rebuild is overkill.” It is often the only high-confidence remediation.

Check-your-understanding questions

  • What signals justify a rebuild?
  • Why must evidence be captured before remediation?
  • Who should approve destructive actions?

Check-your-understanding answers

  • Boot hash mismatches, hidden kernel objects, or unsigned drivers are strong triggers.
  • Remediation can destroy volatile evidence needed for attribution or learning.
  • Asset owners and security leadership should approve to balance risk and impact.

Real-world applications

  • Enterprise incident response playbooks for bootkits.
  • High-assurance environments where integrity is critical.

Where you’ll apply it You will apply this in Section 3.7 (Real World Outcome), Section 5.10 (Implementation Phases), and Section 11 (Self-Assessment). Also used in: P13-bootkit-response-playbook, P17-incident-response-decision-tree.

References

  • NIST SP 800-61 (Computer Security Incident Handling Guide)
  • SANS Incident Response resources

Key insights Rootkit response is about trust: when trust is broken, rebuild is the safest path.

Summary Define thresholds, capture evidence early, and prioritize integrity over convenience.

Homework/Exercises to practice the concept

  • Draft a decision tree for boot integrity violations.
  • List the evidence you must collect before reimaging.

Solutions to the homework/exercises

  • Your decision tree should include at least three measurable triggers.
  • Evidence should include memory image, disk image, and boot configuration.

3. Project Specification

3.1 What You Will Build

A tool or document that delivers a response playbook for boot-level compromise.

3.2 Functional Requirements

  1. Collect required system artifacts for the task.
  2. Normalize data and produce a report output.
  3. Provide a deterministic golden-path demo.
  4. Include explicit failure handling and exit codes.

3.3 Non-Functional Requirements

  • Performance: Complete within a typical maintenance window.
  • Reliability: Outputs must be deterministic and versioned.
  • Usability: Clear CLI output and documentation.

3.4 Example Usage / Output

$ ./P13-bootkit-response-playbook.py --report
[ok] report generated

3.5 Data Formats / Schemas / Protocols

Report JSON schema with fields: timestamp, host, findings, severity, remediation.
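
An illustrative report that matches this schema, serialized deterministically (all values are examples only):

import json
from datetime import datetime, timezone

report = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "host": "workstation-01",  # illustrative hostname
    "findings": [
        {"id": "F001", "description": "Boot hash mismatch", "severity": "high",
         "evidence": "bootmgfw.efi sha256 differs from baseline",
         "remediation": "Rebuild from trusted media"},
    ],
    "severity": "high",  # highest severity across findings
    "remediation": "Follow the bootkit response playbook",
}
print(json.dumps(report, indent=2, sort_keys=True))  # sort_keys keeps the output diffable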

3.6 Edge Cases

  • Missing permissions or insufficient privileges.
  • Tooling not installed (e.g., missing sysctl or OS query tools).
  • Empty data sets (no drivers/modules found).

3.7 Real World Outcome

A deterministic report output stored in a case directory with hashes.

3.7.1 How to Run (Copy/Paste)

./P13-bootkit-response-playbook.py --out reports/P13-bootkit-response-playbook.json

3.7.2 Golden Path Demo (Deterministic)

  • Report file exists and includes findings with severity.

3.7.3 Failure Demo

$ ./P13-bootkit-response-playbook.py --out /readonly/report.json
[error] cannot write report file
exit code: 2

Exit Codes:

  • 0 success
  • 2 output error
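
A minimal CLI sketch that produces the outputs and exit codes shown above; the --out flag comes from the usage examples, everything else is illustrative:

import argparse
import json
import sys

def main():
    parser = argparse.ArgumentParser(description="Bootkit response report generator")
    parser.add_argument("--out", default="reports/P13-bootkit-response-playbook.json")
    args = parser.parse_args()
    report = {"findings": [], "severity": "info"}  # populate from the collector and analyzer stages
    try:
        with open(args.out, "w") as fh:
            json.dump(report, fh, indent=2, sort_keys=True)
    except OSError as exc:
        print(f"[error] cannot write report file: {exc}", file=sys.stderr)
        return 2  # documented output-error exit code
    print("[ok] report generated")
    return 0

if __name__ == "__main__":
    sys.exit(main())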

4. Solution Architecture

4.1 High-Level Design

[Collector] -> [Analyzer] -> [Report]

4.2 Key Components

Component | Responsibility | Key Decisions
Collector | Collects raw artifacts | Prefer OS-native tools
Analyzer | Normalizes and scores findings | Deterministic rules
Reporter | Outputs report | JSON + Markdown

4.3 Data Structures (No Full Code)

finding = { id, description, severity, evidence, remediation }

4.4 Algorithm Overview

Key Algorithm: Normalize and Score

  1. Collect artifacts.
  2. Normalize fields.
  3. Apply scoring rules.
  4. Output report.

Complexity Analysis:

  • Time: O(n) for n artifacts.
  • Space: O(n) for report.
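
A sketch of the normalize-and-score pass; the rule predicates, field names, and remediation text are illustrative:

# Deterministic scoring: fixed rules, fixed severities, sorted output
RULES = [
    ("unsigned boot component", lambda a: a.get("signer") in (None, "", "Unknown"), "high"),
    ("hash differs from baseline", lambda a: a.get("hash_matches_baseline") is False, "high"),
]

def normalize(raw):
    # Map collector-specific field names onto one shared shape
    return {
        "path": raw.get("path", "").lower(),
        "signer": raw.get("signer"),
        "hash_matches_baseline": raw.get("hash_matches_baseline"),
    }

def score(artifacts):
    findings = []
    for art in map(normalize, artifacts):
        for description, predicate, severity in RULES:
            if predicate(art):
                findings.append({"id": art["path"], "description": description,
                                 "severity": severity, "evidence": art,
                                 "remediation": "Investigate and re-baseline"})
    return sorted(findings, key=lambda f: (f["severity"], f["id"]))  # deterministic ordering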

5. Implementation Guide

5.1 Development Environment Setup

python3 -m venv .venv && source .venv/bin/activate
# install OS-specific tools as needed

5.2 Project Structure

project/
|-- src/
|   `-- main.py
|-- reports/
`-- README.md

5.3 The Core Question You’re Answering

“How do you respond when boot integrity is compromised?”

This project turns theory into a repeatable, auditable workflow.

5.4 Concepts You Must Understand First

  • Relevant OS security controls
  • Detection workflows
  • Evidence handling

5.5 Questions to Guide Your Design

  1. What data sources are trusted for this task?
  2. How will you normalize differences across OS versions?
  3. What is a high-confidence signal vs noise?

5.6 Thinking Exercise

Sketch a pipeline from data collection to report output.

5.7 The Interview Questions They’ll Ask

  1. What is the main trust boundary in this project?
  2. How do you validate findings?
  3. What would you automate in production?

5.8 Hints in Layers

Hint 1: Start with a small, deterministic dataset.

Hint 2: Normalize output fields early.

Hint 3: Add a failure path with clear exit codes.


5.9 Books That Will Help

Topic | Book | Chapter
Rootkit defense | Practical Malware Analysis | Rootkit chapters
OS internals | Operating Systems: Three Easy Pieces | Processes and files

5.10 Implementation Phases

Phase 1: Data Collection (3-4 days)

Goals: Collect raw artifacts reliably.

Tasks:

  1. Identify OS-native tools.
  2. Capture sample data.

Checkpoint: Raw dataset stored.

Phase 2: Analysis & Reporting (4-5 days)

Goals: Normalize and score findings.

Tasks:

  1. Build analyzer.
  2. Generate report.

Checkpoint: Deterministic report generated.

Phase 3: Validation (2-3 days)

Goals: Validate rules and handle edge cases.

Tasks:

  1. Add failure tests.
  2. Document runbook.

Checkpoint: Failure cases documented.

5.11 Key Implementation Decisions

Decision | Options | Recommendation | Rationale
Report format | JSON, CSV | JSON | Structured and diffable
Scoring | Simple, Weighted | Weighted | Prioritize high-risk findings

6. Testing Strategy

6.1 Test Categories

Category | Purpose | Examples
Unit Tests | Parser logic | Sample data parsing
Integration Tests | End-to-end run | Generate report
Edge Case Tests | Missing permissions | Error path

6.2 Critical Test Cases

  1. Report generated with deterministic ordering.
  2. Exit code indicates failure on invalid output path.
  3. At least one high-risk finding is flagged in test data.

6.3 Test Data

Provide a small fixture file with one known suspicious artifact.
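
Two pytest sketches covering the critical cases above; the entry-point path and --out flag follow Sections 3.7 and 5.2 and should be adjusted to your actual layout:

import json
import subprocess
import sys

SCRIPT = "src/main.py"  # entry point from the suggested project structure

def test_golden_path_report_fields(tmp_path):
    out = tmp_path / "report.json"
    result = subprocess.run([sys.executable, SCRIPT, "--out", str(out)], capture_output=True)
    assert result.returncode == 0
    report = json.loads(out.read_text())
    for field in ("timestamp", "host", "findings", "severity", "remediation"):
        assert field in report

def test_unwritable_output_path_exits_2(tmp_path):
    ro_dir = tmp_path / "ro"
    ro_dir.mkdir()
    ro_dir.chmod(0o500)  # drop write permission to force the output-error path
    result = subprocess.run([sys.executable, SCRIPT, "--out", str(ro_dir / "report.json")],
                            capture_output=True)
    assert result.returncode == 2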

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

Pitfall | Symptom | Solution
Noisy results | Too many alerts | Add normalization and thresholds
Missing permissions | Script fails | Detect and warn early

7.2 Debugging Strategies

  • Log raw inputs before normalization.
  • Add verbose mode to show rule evaluation.

7.3 Performance Traps

Scanning large datasets without filtering can be slow; restrict scope to critical paths.


8. Extensions & Challenges

8.1 Beginner Extensions

  • Add a Markdown summary report.

8.2 Intermediate Extensions

  • Add a JSON schema validator for output.
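
A minimal sketch of this extension using the jsonschema package (assumed installed via pip); the schema mirrors the fields listed in Section 3.5:

import json
from jsonschema import validate  # pip install jsonschema

REPORT_SCHEMA = {
    "type": "object",
    "required": ["timestamp", "host", "findings", "severity", "remediation"],
    "properties": {
        "timestamp": {"type": "string"},
        "host": {"type": "string"},
        "findings": {"type": "array"},
        "severity": {"type": "string"},
        "remediation": {"type": "string"},
    },
}

def validate_report(path):
    with open(path) as fh:
        validate(instance=json.load(fh), schema=REPORT_SCHEMA)  # raises ValidationError on mismatch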

8.3 Advanced Extensions

  • Integrate with a SIEM or ticketing system.

9. Real-World Connections

9.1 Industry Applications

  • Security operations audits and detection validation.
  • Endpoint inventory and baselining with tools such as osquery.

9.2 Interview Relevance

  • Discussing detection workflows and auditability.

10. Resources

10.1 Essential Reading

  • Practical Malware Analysis - rootkit detection chapters

10.2 Video Resources

  • Conference talks on rootkit detection

10.3 Tools & Documentation

  • OS-native logging and audit tools

11. Self-Assessment Checklist

11.1 Understanding

  • I can describe the trust boundary for this task.

11.2 Implementation

  • Report generation is deterministic.

11.3 Growth

  • I can explain how to operationalize this check.

12. Submission / Completion Criteria

Minimum Viable Completion:

  • Report created and contains at least one finding.

Full Completion:

  • Findings are categorized with remediation guidance.

Excellence (Going Above & Beyond):

  • Integrated into a broader toolkit or pipeline.