Project 11: SELinux Kernel Module Inspector

Build a kernel tracing toolkit that observes SELinux LSM hooks and AVC cache behavior in real time.

Quick Reference

Attribute                          Value
Difficulty                         Level 5: Master
Time Estimate                      1-2 months
Main Programming Language          C (with bpftrace scripts)
Alternative Programming Languages  Rust (kernel bindings)
Coolness Level                     Level 5
Business Potential                 1
Prerequisites                      Projects 1-3, kernel tracing basics, root access
Key Topics                         LSM hooks, AVC cache, kernel tracing, performance analysis

1. Learning Objectives

By completing this project, you will:

  1. Identify key SELinux LSM hooks in the kernel.
  2. Trace SELinux decisions with ftrace or bpftrace.
  3. Observe AVC cache hit/miss behavior.
  4. Correlate user actions with kernel enforcement.
  5. Build safe tracing scripts that minimize system impact.

2. All Theory Needed (Per-Concept Breakdown)

LSM Hooks and SELinux Enforcement Path

Fundamentals

The Linux Security Modules (LSM) framework provides hook points where security modules like SELinux enforce access control decisions. When a process performs actions like opening a file, connecting a socket, or creating a process, the kernel invokes an LSM hook. SELinux implements these hooks and consults its policy to allow or deny the action. Understanding the hook architecture is required to build a kernel inspector that traces enforcement decisions.

Deep Dive into the concept

LSM hooks are called at critical points in kernel code. For example, security_file_permission is invoked during file access, and SELinux provides an implementation like selinux_file_permission that evaluates the policy. Hooks exist for filesystem operations, process management, IPC, networking, and more. The hook call order includes other security modules, such as capabilities, which are often evaluated before SELinux. This layered enforcement means that a denial may be caused by a different module, and your tracer should distinguish SELinux hooks from other security checks.

SELinux enforcement is centered around the security server and the Access Vector Cache (AVC). When a hook is called, SELinux uses the subject and object contexts to evaluate policy rules. If the decision is cached, the result is returned quickly. If not, the security server performs a rule lookup. The hook then returns an allow/deny decision to the kernel. This pipeline is deterministic given the policy and contexts. Your tracer will observe the hook call and, optionally, the decision result. This gives a direct view into how SELinux decisions are made, which is valuable for understanding performance and correctness.

To trace hooks, you need to identify the function names and use kernel tracing tools to attach probes. For example, with bpftrace you can attach to kprobe:selinux_file_permission or kprobe:selinux_socket_bind. These probes can capture arguments like the permission mask or the inode pointer. However, these functions are not always stable across kernel versions, so your tool should include a mapping layer or an autodetection step that checks available symbols. This reduces fragility.
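The mapping layer and symbol autodetection described above can be sketched as a small preflight step. This is an illustrative sketch, not the project's API: the helper names and the candidate table are assumptions. It parses /proc/kallsyms-style text and keeps the first candidate symbol that actually exists on the running kernel.

```python
# Sketch of a preflight symbol-mapping layer (helper names and the
# candidate table are illustrative assumptions, not a fixed API).
HOOK_CANDIDATES = {
    # logical hook name -> candidate kernel symbols, best first;
    # alternatives can be added as kernel versions diverge
    "file_permission": ["selinux_file_permission"],
    "socket_bind": ["selinux_socket_bind"],
}

def parse_kallsyms(text):
    """Collect the symbol-name column from /proc/kallsyms-style lines."""
    symbols = set()
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 3:
            symbols.add(parts[2])
    return symbols

def resolve_hooks(available):
    """Map each logical hook to the first candidate symbol that exists."""
    resolved = {}
    for hook, candidates in HOOK_CANDIDATES.items():
        for sym in candidates:
            if sym in available:
                resolved[hook] = sym
                break
    return resolved

sample = """ffffffff81400000 t selinux_file_permission
ffffffff81400100 t selinux_socket_bind
ffffffff81400200 t avc_has_perm"""
print(resolve_hooks(parse_kallsyms(sample)))
```

Hooks missing from the result can be reported by the preflight check rather than producing a silent attach failure later.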

Another important concept is the relationship between hooks and syscalls. A single syscall can trigger multiple hooks. For example, open() may trigger path resolution checks, inode permission checks, and file permission checks. Your tracer should be prepared to see multiple hook invocations for a single user action. This is why correlation is important: you can use timestamps and PID/comm to group events and explain the decision path. This is also why the output must be rate-limited and filtered; tracing every hook for every process can overwhelm the system.
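The PID/timestamp correlation idea can be sketched with a tiny grouping function. The event tuple shape and the 1 ms window are assumptions for illustration: hook events that share a PID and arrive within the window are treated as one user action.

```python
# Illustrative correlation sketch: the event tuple shape and the 1 ms
# window are assumptions, not the project's wire format.
WINDOW_NS = 1_000_000  # 1 ms between hook events of one syscall

def group_events(events):
    """events: (timestamp_ns, pid, comm, hook) tuples sorted by time.
    Returns groups of events that likely belong to one user action."""
    groups = []
    last_by_pid = {}  # pid -> (last_ts, group list)
    for ev in events:
        ts, pid = ev[0], ev[1]
        prev = last_by_pid.get(pid)
        if prev is not None and ts - prev[0] <= WINDOW_NS:
            prev[1].append(ev)
            last_by_pid[pid] = (ts, prev[1])
        else:
            group = [ev]
            groups.append(group)
            last_by_pid[pid] = (ts, group)
    return groups

events = [
    (100, 1234, "cat", "selinux_inode_permission"),
    (200, 1234, "cat", "selinux_file_permission"),
    (5_000_000, 1234, "cat", "selinux_file_permission"),
]
print(len(group_events(events)))  # -> 2: first two merge, third is separate
```

Grouping like this is also a natural place to rate-limit output: emit one summary line per group instead of one line per hook event.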

Additional operational notes on LSM Hooks and SELinux Enforcement Path: In real systems, this concept interacts with policy versions, distribution defaults, and local overrides. Always record the exact policy version and runtime toggles when diagnosing behavior, because the same action can be allowed on one host and denied on another. When you change configuration related to this concept, capture before/after evidence (labels, logs, and outcomes) so you can justify the change, detect regressions, and roll it back if needed. Treat every tweak as a hypothesis: change one variable, re-run the same action, and compare results against a known baseline. This makes debugging repeatable and keeps your fixes defensible.

From a design perspective, treat LSM Hooks and SELinux Enforcement Path as an invariant: define what success means, which data proves it, and what failure looks like. Build tooling that supports dry-run mode and deterministic fixtures so you can validate behavior without risking production. This also makes the concept teachable to others. Finally, connect the concept to security and performance trade-offs: overly broad changes reduce security signal, while overly strict changes create operational friction. Good designs surface these trade-offs explicitly so operators can make safe decisions.

How this fits into the projects

Hook tracing is central to §3.2 and §3.7. It also supports P01-selinux-context-explorer-visualizer.md by showing context use in the kernel.

Definitions & key terms

  • LSM -> Linux Security Modules framework
  • hook -> kernel callback for security decisions
  • selinux_file_permission -> SELinux hook for file access
  • security server -> SELinux policy engine in kernel

Mental model diagram

syscall -> LSM hook -> SELinux policy check -> allow/deny

How it works (step-by-step, with invariants and failure modes)

  1. Syscall triggers kernel path.
  2. LSM hook calls SELinux function.
  3. SELinux checks AVC cache or policy.
  4. Decision returned to kernel.

Invariants: hook is called for relevant operations; decisions are deterministic. Failure modes: missing symbols, tracing overload.

Minimal concrete example

$ sudo bpftrace -e 'kprobe:selinux_file_permission { printf("%s\n", comm); }'

Common misconceptions

  • “SELinux is a user-space daemon.” -> It is kernel code invoked through LSM hooks.
  • “Hooks fire only on denials.” -> Hooks fire on all access checks.

Check-your-understanding questions

  1. What triggers an LSM hook?
  2. Why might a single syscall generate multiple hook events?
  3. What is the role of the security server?

Check-your-understanding answers

  1. Security-relevant kernel operations like file access or socket bind.
  2. Multiple internal checks occur during syscall processing.
  3. It evaluates policy rules and returns allow/deny decisions.

Real-world applications

  • Kernel debugging and performance tuning.
  • Security auditing of enforcement paths.

Where you’ll apply it

Hook tracing drives the Hook Tracing requirement in §3.2 and Phase 1 of §5.10.

References

  • Linux kernel LSM documentation
  • “Linux Kernel Development” (security chapters)

Key insights

SELinux enforcement is a kernel pipeline; tracing hooks reveals the real decision path.

Summary

LSM hooks are the entry points for SELinux enforcement; tracing them exposes how decisions are made.

Homework/Exercises to practice the concept

  1. List available SELinux kernel symbols with nm or /proc/kallsyms.
  2. Trace one hook for a single process.
  3. Correlate hook events with a file access.

Solutions to the homework/exercises

  1. Use grep selinux /proc/kallsyms.
  2. Filter by PID or comm in bpftrace.
  3. Run cat on a file and observe hook events.

Access Vector Cache (AVC) Internals

Fundamentals

The AVC caches SELinux decisions to avoid repeated policy lookups. It stores recent allow/deny results keyed by subject, target, class, and permissions. This cache is crucial for performance, but it adds complexity to tracing because cache hits bypass full policy evaluation. Your tool should inspect AVC statistics and, if possible, correlate cache hits with traced events.

Deep Dive into the concept

SELinux policy evaluation can be expensive if performed for every access. The AVC caches decisions to reduce overhead. Each cache entry is keyed by source context, target context, class, and permission set. When a hook is invoked, SELinux first checks the AVC. If a matching entry is found (hit), the decision is returned immediately. If not (miss), the security server performs a policy lookup and the result is stored in the cache.

AVC statistics are available via /sys/fs/selinux/avc/cache_stats on the selinuxfs mount. These stats include per-CPU counts of lookups, hits, misses, allocations, reclaims, and frees. A high hit rate indicates good performance; a low hit rate may indicate frequent policy changes or highly varied access patterns. Your tracer can periodically read these stats and include them in the report. You can also use kernel probes to log when cache misses occur, but this requires deeper kernel instrumentation.
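A stats reader stays robust if it keys columns off the header line instead of fixed positions. A hedged sketch, assuming the common layout of a header line such as `lookups hits misses allocations reclaims frees` followed by one row per CPU:

```python
# Hedged sketch of a cache_stats reader; header-keyed parsing keeps it
# working even if column order differs on a given kernel.
def parse_cache_stats(text):
    """Sum per-CPU rows under the header line into {column: total}."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    header = lines[0].split()
    totals = {name: 0 for name in header}
    for row in lines[1:]:
        for name, value in zip(header, row.split()):
            totals[name] += int(value)
    return totals

def hit_rate(stats):
    """Fraction of lookups served from the cache (0.0 if no lookups)."""
    return stats["hits"] / stats["lookups"] if stats["lookups"] else 0.0

sample = """lookups hits misses allocations reclaims frees
1000 950 50 50 0 0
2000 1900 100 100 0 0"""
stats = parse_cache_stats(sample)
print(stats["lookups"], round(hit_rate(stats), 3))  # -> 3000 0.95
```

In the real tool the `sample` text would come from reading the stats file; parsing from a string also makes the reader trivially unit-testable against fixtures.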

Cache invalidation occurs when policy changes, such as when a boolean is toggled or a module is loaded. This flushes the cache and temporarily reduces performance. This is a normal behavior but should be noted in your tool because it affects the interpretation of hit/miss stats. If you trace during a policy update, you may see a spike in misses and hook latency.

For correctness, it’s important to understand that the AVC does not bypass policy; it stores the results of policy evaluation. Therefore, a cached allow is still an allow decision under the current policy. When policy changes, the cache must be invalidated to avoid stale decisions. Your tool can verify this by toggling a boolean and observing the cache stats reset. This is a useful demonstration in the lab.
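One way to make that lab demonstration measurable is to diff two stats snapshots taken around the boolean toggle. A minimal sketch; the dict shape and the 20% threshold are illustrative assumptions:

```python
# Illustrative snapshot diff (dict shape and 20% threshold are assumed):
# take one snapshot before and one after a policy change, then check
# whether the interval shows a burst of misses.
def stats_delta(before, after):
    """Per-counter difference between two snapshots."""
    return {key: after[key] - before[key] for key in before}

def miss_spike(delta, threshold=0.2):
    """True if misses exceed `threshold` of lookups in the interval."""
    lookups = delta.get("lookups", 0)
    return lookups > 0 and delta.get("misses", 0) / lookups > threshold

before = {"lookups": 1000, "hits": 950, "misses": 50}
after = {"lookups": 1400, "hits": 1150, "misses": 250}
delta = stats_delta(before, after)
print(delta["misses"], miss_spike(delta))  # -> 200 True
```

Working on deltas rather than raw counters keeps the check correct whether or not a given kernel resets counters on policy reload.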

Operational expansion for Access Vector Cache (AVC) Internals: In real systems, the behavior you observe is the product of policy, labels, and runtime state. That means your investigation workflow must be repeatable. Start by documenting the exact inputs (contexts, paths, users, domains, ports, and the action attempted) and the exact outputs (audit events, error codes, and any policy query results). Then, replay the same action after each change so you can attribute cause and effect. When the concept touches multiple subsystems, isolate variables: change one label, one boolean, or one rule at a time. This reduces confusion and prevents accidental privilege creep. Use staging environments or fixtures to test fixes before deploying them widely, and always keep a rollback path ready.

To deepen understanding, connect Access Vector Cache (AVC) Internals to adjacent concepts: how it affects policy decisions, how it appears in logs, and how it changes operational risk. Build small verification scripts that assert the expected outcome and fail loudly if the outcome diverges. Over time, these scripts become a regression suite for your SELinux posture. Finally, treat the concept as documentation-worthy: write down the invariants it guarantees, the constraints it imposes, and the exact evidence that proves it works. This makes future debugging faster and creates a shared mental model for teams.

Supplemental note for Access Vector Cache (AVC) Internals: ensure your documentation includes a minimal reproducible example and a known-good output snapshot. Pair this with a small checklist of preconditions and postconditions so anyone rerunning the exercise can validate the result quickly. This turns the concept into a repeatable experiment rather than a one-off intuition.

How this fits into the projects

AVC cache analysis is central to §3.7 and §6.2, and is referenced in P02-avc-denial-analyzer-auto-fixer.md.

Further depth on Access Vector Cache (AVC) Internals: In production environments, this concept is shaped by policy versions, automation layers, and distro-specific defaults. To keep reasoning consistent, capture a minimal evidence bundle every time you analyze behavior: the policy name/version, the exact labels or contexts involved, the command that triggered the action, and the resulting audit event. If the same action yields different decisions on two hosts, treat that as a signal that a hidden variable changed (boolean state, module priority, label drift, or category range). This disciplined approach prevents trial-and-error debugging and makes your conclusions defensible.

Operationally, build a short checklist for Access Vector Cache (AVC) Internals: verify prerequisites, verify labels or mappings, verify policy query results, then run the action and confirm the expected audit outcome. Track metrics that reflect stability, such as the count of denials per hour, the number of unique denial keys, or the fraction of hosts in compliance. When you must change behavior, apply the smallest change that can be verified (label fix before boolean, boolean before policy). Document the rollback path and include a post-change validation step so the system returns to a known-good state.

Definitions & key terms

  • AVC -> Access Vector Cache
  • hit -> cache entry found
  • miss -> cache entry not found, policy lookup required
  • invalidation -> cache flush after policy change

Mental model diagram

hook -> AVC lookup -> [hit] allow/deny
                   -> [miss] policy lookup -> cache insert

How it works (step-by-step, with invariants and failure modes)

  1. Hook triggers AVC lookup.
  2. If hit, return decision.
  3. If miss, evaluate policy and insert cache entry.
  4. Invalidate cache when policy changes.

Invariants: cache mirrors policy decisions; invalidation on policy change. Failure modes: selinuxfs not mounted, stats unavailable.

Minimal concrete example

$ cat /sys/fs/selinux/avc/cache_stats

Common misconceptions

  • “Cache hits mean SELinux is not enforcing.” -> It still enforces, just faster.
  • “Cache misses indicate errors.” -> Misses are normal after policy changes.

Check-your-understanding questions

  1. What triggers an AVC cache miss?
  2. Why must the cache be invalidated on policy changes?
  3. How can you observe cache behavior?

Check-your-understanding answers

  1. No matching entry for the subject/target/class/permissions.
  2. To avoid stale decisions under new policy.
  3. Read cache stats and trace hook latency.

Real-world applications

  • Performance tuning on high-throughput systems.
  • Diagnosing SELinux overhead.

Where you’ll apply it

AVC stats reporting in §3.2 and the stats command shown in §3.7.3.

References

  • SELinux kernel documentation
  • “SELinux by Example” (architecture chapters)

Key insights

AVC caching is the key to SELinux performance; you must account for it in tracing.

Summary

The AVC caches SELinux decisions; cache stats provide insight into enforcement performance.

Homework/Exercises to practice the concept

  1. Read AVC stats before and after a policy change.
  2. Trigger repeated file accesses and observe cache hits.
  3. Explain how cache behavior affects tracing.

Solutions to the homework/exercises

  1. Toggle a boolean and note the change in the stats.
  2. Use cat in a loop and watch the hit count increase.
  3. Cache hits reduce hook latency and policy lookups.

Kernel Tracing with ftrace and bpftrace

Fundamentals

Kernel tracing tools such as ftrace and bpftrace allow you to attach probes to kernel functions and capture runtime events. This is how you observe SELinux hooks without modifying the kernel. Your project will use these tools to trace selinux_* functions and report decisions. Understanding their capabilities and limitations is essential to avoid system instability.

Deep Dive into the concept

ftrace is a built-in kernel tracer that can trace function calls with low overhead. It is useful for lightweight tracing of SELinux hooks. bpftrace is a high-level language for eBPF programs that attach to kprobes, uprobes, and tracepoints. It is more flexible and allows you to filter by PID, process name, or arguments. For SELinux tracing, bpftrace is often the best tool because it can capture hook arguments and print them.

However, tracing can be expensive. Hook functions are called very frequently, and tracing them for all processes can overwhelm the system. You should always include filters, such as restricting to a specific process or domain. For example, filter by comm == "httpd" or by PID. You should also limit the output rate or aggregate counts rather than printing every event. This is part of safe tracing design.

Another challenge is symbol availability. Some kernels do not expose all SELinux symbols to tracing, or have different function names. Your tool should include a preflight check that confirms required symbols exist, and it should degrade gracefully if not. This can be done by checking /proc/kallsyms or using bpftrace -l 'kprobe:selinux_*' to list available probes.
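Filters and preflight checks come together if the driver assembles the bpftrace program text instead of hard-coding it. A sketch (the function name is hypothetical): the comm/PID predicate is placed inside the probe so unmatched events are dropped in-kernel rather than in user space.

```python
# Sketch of a probe builder (function name is hypothetical): predicates
# go inside the bpftrace probe so unmatched events are dropped in-kernel.
def build_probe(hook, comm=None, pid=None):
    """Return a bpftrace program for one kprobe with optional filters."""
    conditions = []
    if comm:
        conditions.append(f'comm == "{comm}"')
    if pid:
        conditions.append(f"pid == {pid}")
    predicate = f"/{' && '.join(conditions)}/ " if conditions else ""
    return (f"kprobe:{hook} {predicate}"
            '{ printf("%d %s\\n", pid, comm); }')

print(build_probe("selinux_file_permission", comm="httpd"))
```

The resulting string can be handed to `sudo bpftrace -e`, but only after the preflight step has confirmed the symbol exists on the running kernel.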

Finally, tracing must be reproducible. For deterministic demos, you should include fixed scenarios: for example, run a specific file access test while tracing a known hook. Capture the output and compare it to a fixture. This ensures that your lab output is stable and reviewable. It also prevents users from tracing the entire system without understanding the performance implications.

Operational expansion for Kernel Tracing with ftrace and bpftrace: In real systems, the behavior you observe is the product of policy, labels, and runtime state. That means your investigation workflow must be repeatable. Start by documenting the exact inputs (contexts, paths, users, domains, ports, and the action attempted) and the exact outputs (audit events, error codes, and any policy query results). Then, replay the same action after each change so you can attribute cause and effect. When the concept touches multiple subsystems, isolate variables: change one label, one boolean, or one rule at a time. This reduces confusion and prevents accidental privilege creep. Use staging environments or fixtures to test fixes before deploying them widely, and always keep a rollback path ready.

To deepen understanding, connect Kernel Tracing with ftrace and bpftrace to adjacent concepts: how it affects policy decisions, how it appears in logs, and how it changes operational risk. Build small verification scripts that assert the expected outcome and fail loudly if the outcome diverges. Over time, these scripts become a regression suite for your SELinux posture. Finally, treat the concept as documentation-worthy: write down the invariants it guarantees, the constraints it imposes, and the exact evidence that proves it works. This makes future debugging faster and creates a shared mental model for teams.

Supplemental note for Kernel Tracing with ftrace and bpftrace: ensure your documentation includes a minimal reproducible example and a known-good output snapshot. Pair this with a small checklist of preconditions and postconditions so anyone rerunning the exercise can validate the result quickly. This turns the concept into a repeatable experiment rather than a one-off intuition.

How this fits into the projects

Safe kernel tracing underpins the entire toolkit: it drives the hook tracing requirements in §3.2, the safety requirements in §3.3, and the trace sessions in §3.7.

Further depth on Kernel Tracing with ftrace and bpftrace: In production environments, this concept is shaped by policy versions, automation layers, and distro-specific defaults. To keep reasoning consistent, capture a minimal evidence bundle every time you analyze behavior: the policy name/version, the exact labels or contexts involved, the command that triggered the action, and the resulting audit event. If the same action yields different decisions on two hosts, treat that as a signal that a hidden variable changed (boolean state, module priority, label drift, or category range). This disciplined approach prevents trial-and-error debugging and makes your conclusions defensible.

Operationally, build a short checklist for Kernel Tracing with ftrace and bpftrace: verify prerequisites, verify labels or mappings, verify policy query results, then run the action and confirm the expected audit outcome. Track metrics that reflect stability, such as the count of denials per hour, the number of unique denial keys, or the fraction of hosts in compliance. When you must change behavior, apply the smallest change that can be verified (label fix before boolean, boolean before policy). Document the rollback path and include a post-change validation step so the system returns to a known-good state.

Definitions & key terms

  • kprobe -> dynamic probe attached to a kernel function entry
  • tracepoint -> static instrumentation point in kernel code
  • eBPF -> in-kernel virtual machine that runs tracing programs safely
  • /proc/kallsyms -> list of kernel symbols visible to tracing tools

Mental model diagram

bpftrace script -> eBPF program -> kprobe on selinux_* -> filtered events -> output

How it works (step-by-step, with invariants and failure modes)

  1. Preflight confirms the target symbols exist.
  2. Probes attach to the selected hooks with filters.
  3. Events stream to the formatter, rate-limited or aggregated.
  4. Probes detach and the session is summarized.

Invariants: probes observe without changing kernel behavior; filters bound event volume. Failure modes: missing symbols, unfiltered tracing overload.

Minimal concrete example

$ sudo bpftrace -l 'kprobe:selinux_*'

Common misconceptions

  • “Tracing is free.” -> Frequently called hooks make unfiltered tracing expensive.
  • “Symbol names are stable.” -> selinux_* function names can change across kernel versions.

Check-your-understanding questions

  1. Why should tracing default to filters?
  2. How do you confirm a probe target exists before attaching?
  3. What makes a tracing demo reproducible?

Check-your-understanding answers

  1. Hook functions fire constantly; unfiltered tracing can overwhelm the system.
  2. Check /proc/kallsyms or run bpftrace -l for the symbol.
  3. A fixed workload, fixed filters, and fixture output to compare against.

Real-world applications

  • Production-safe performance investigations.
  • Observability tooling for kernel security subsystems.

Where you’ll apply it

The trace command in §3.4, the preflight checks in §3.2, and the golden path demo in §3.7.2.

References

  • bpftrace reference guide
  • “BPF Performance Tools” (eBPF tracing)

Key insights

Tracing tools are powerful enough to hurt the system they observe; filters, preflight checks, and fixtures keep them safe.

Summary

ftrace and bpftrace let you observe SELinux hooks without modifying the kernel; filtered, preflighted sessions keep tracing safe and reproducible.

Homework/Exercises to practice the concept

  1. List available selinux_* probes with bpftrace -l.
  2. Write a probe that fires only for one process name.
  3. Compare event rates with and without a filter.

Solutions to the homework/exercises

  1. sudo bpftrace -l 'kprobe:selinux_*'.
  2. Add a /comm == "name"/ predicate to the probe.
  3. Count events per second in each run; the filtered run should be far lower.

3. Project Specification

3.1 What You Will Build

A tracing toolkit named selinux-inspector that:

  • Attaches to key SELinux hooks
  • Reports AVC cache statistics
  • Provides filtered, deterministic tracing output

3.2 Functional Requirements

  1. Hook Tracing: Trace file and socket hooks.
  2. AVC Stats: Report cache stats periodically.
  3. Filters: Support PID and process name filters.
  4. Output: Produce structured logs with timestamps.
  5. Safety: Include preflight checks for symbols.

3.3 Non-Functional Requirements

  • Safety: Avoid system destabilization.
  • Determinism: Provide fixture runs for demos.
  • Performance: Default filters to reduce overhead.

3.4 Example Usage / Output

$ sudo ./selinux-inspector trace --comm httpd --hook file_permission
[00:00:01] httpd (pid 1234) selinux_file_permission

3.5 Data Formats / Schemas / Protocols

Trace log schema (v1):

{
  "ts": "2026-01-01T00:00:01Z",
  "pid": 1234,
  "comm": "httpd",
  "hook": "selinux_file_permission"
}
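A consumer of this schema can validate each log line before use. A minimal sketch; the strict type requirements are an assumption about the schema's intent rather than something the spec states:

```python
# Minimal validator for the v1 trace log schema; the strict type checks
# are an assumption about intent, not stated in the spec.
import json

REQUIRED = {"ts": str, "pid": int, "comm": str, "hook": str}

def validate_event(line):
    """Parse one JSON log line and verify required fields and types."""
    event = json.loads(line)
    for field, ftype in REQUIRED.items():
        if not isinstance(event.get(field), ftype):
            raise ValueError(f"bad or missing field: {field}")
    return event

ok = ('{"ts": "2026-01-01T00:00:01Z", "pid": 1234, '
      '"comm": "httpd", "hook": "selinux_file_permission"}')
print(validate_event(ok)["hook"])  # -> selinux_file_permission
```

Running the validator over fixture logs makes the schema itself testable, which supports the determinism requirement in §3.3.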

3.6 Edge Cases

  • Kernel lacks required symbols
  • selinuxfs (/sys/fs/selinux) unavailable
  • Running as non-root

3.7 Real World Outcome

3.7.1 How to Run (Copy/Paste)

sudo ./selinux-inspector trace --comm httpd --hook file_permission

3.7.2 Golden Path Demo (Deterministic)

Run a fixed file access scenario while tracing a single hook and compare output to fixture logs.

3.7.3 CLI Transcript (Success and Failure)

$ sudo ./selinux-inspector stats
AVC lookups: 12345 hits: 12001 misses: 344
Exit code: 0

$ ./selinux-inspector trace --comm httpd
ERROR: must be root
Exit code: 2

4. Solution Architecture

4.1 High-Level Design

Tracer CLI -> Probe Manager -> Output Formatter
                 |
                 v
            AVC Stats Reader

4.2 Key Components

Component      Responsibility        Key Decisions
Probe Manager  Attach/detach probes  Use bpftrace scripts
Filters        Limit events          PID/comm filters
Stats Reader   Read AVC cache stats  selinuxfs path
Formatter      Output logs           JSON + text

4.3 Data Structures (No Full Code)

TraceEvent = {
  "ts": "...",
  "pid": 1234,
  "comm": "httpd",
  "hook": "selinux_file_permission"
}

4.4 Algorithm Overview

Key Algorithm: Trace Session

  1. Validate root and symbol availability.
  2. Attach kprobes with filters.
  3. Stream events to output.
  4. Detach and summarize.
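The four steps above can be sketched as a session skeleton with injectable stages, so the flow is testable without root or real probes. All stage names here are illustrative, not the project's API:

```python
# Session skeleton with injectable stages (names illustrative); stubs
# stand in for real probe management so the flow runs without root.
def run_session(preflight, attach, stream, summarize):
    """Validate, attach, stream events, always detach, then summarize."""
    if not preflight():
        return {"ok": False, "error": "preflight failed"}
    handle = attach()
    try:
        events = list(stream(handle))
    finally:
        handle["attached"] = False  # detach even if streaming fails
    return {"ok": True, "summary": summarize(events)}

result = run_session(
    preflight=lambda: True,
    attach=lambda: {"attached": True},
    stream=lambda handle: [{"hook": "selinux_file_permission"}] * 3,
    summarize=lambda events: {"count": len(events)},
)
print(result)  # -> {'ok': True, 'summary': {'count': 3}}
```

The `try/finally` encodes the safety invariant: probes are always detached, even when streaming raises.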

Complexity Analysis:

  • Time: O(n) events
  • Space: O(1) streaming

5. Implementation Guide

5.1 Development Environment Setup

sudo dnf install -y bpftrace kernel-devel

5.2 Project Structure

selinux-inspector/
├── scripts/
│   ├── file_permission.bt
│   └── socket_bind.bt
├── selinux_inspector.py
└── README.md

5.3 The Core Question You’re Answering

“What happens inside the kernel when SELinux makes a decision?”

5.4 Concepts You Must Understand First

  1. LSM hook architecture.
  2. AVC cache behavior.
  3. Kernel tracing tools.

5.5 Questions to Guide Your Design

  1. Which hooks provide the most insight for users?
  2. How will you keep tracing safe on production systems?
  3. How will you correlate events with user actions?

5.6 Thinking Exercise

Pick a syscall (like open) and list which hooks might fire.

5.7 The Interview Questions They’ll Ask

  1. “Where does SELinux enforce access?”
  2. “What is the AVC cache?”
  3. “How do you trace SELinux decisions safely?”

5.8 Hints in Layers

Hint 1: Start with ftrace or bpftrace on one hook

Hint 2: Add PID filters to reduce noise

Hint 3: Add a stats command to read AVC cache

5.9 Books That Will Help

Topic            Book                        Chapter
Kernel security  “Linux Kernel Development”  Security chapters
Tracing          “BPF Performance Tools”     eBPF tracing

5.10 Implementation Phases

Phase 1: Foundation (2-3 weeks)

Goals:

  • Identify key hooks and write basic probes.

Tasks:

  1. List SELinux symbols.
  2. Write a bpftrace script for a file hook.

Checkpoint: Hook events captured for a known process.

Phase 2: Core Functionality (2-3 weeks)

Goals:

  • Add filters and structured output.

Tasks:

  1. Add PID/comm filters.
  2. Emit JSON logs.

Checkpoint: Tracing output is deterministic for a fixed scenario.

Phase 3: Polish & Edge Cases (2-4 weeks)

Goals:

  • Add AVC stats and safety checks.

Tasks:

  1. Implement AVC stats reader.
  2. Add symbol availability checks.

Checkpoint: Tool fails gracefully when symbols are missing.

5.11 Key Implementation Decisions

Decision      Options                      Recommendation     Rationale
Tracing tool  ftrace vs bpftrace           bpftrace           flexible filters
Output        text vs JSON                 both               operator and automation use
Safety        filter by default vs opt-in  filter by default  reduce overhead

6. Testing Strategy

6.1 Test Categories

Category           Purpose         Examples
Unit Tests         config parsing  filter logic
Integration Tests  trace session   known hook scenario
Edge Case Tests    missing root    error handling

6.2 Critical Test Cases

  1. Tool fails with clear error if not root.
  2. Tool detects missing symbols and reports them.
  3. AVC stats output matches expected format.

6.3 Test Data

fixtures/trace_output.json

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

Pitfall                  Symptom          Solution
Tracing without filters  System slowdown  Default to filters
selinuxfs unavailable    No AVC stats     Check that /sys/fs/selinux is mounted
Wrong hook names         No events        List symbols and update scripts

7.2 Debugging Strategies

  • Start with a simple hook and known workload.
  • Use bpftrace -l to confirm probes.

7.3 Performance Traps

  • Tracing every file permission check can overwhelm the system; filter by PID.

8. Extensions & Challenges

8.1 Beginner Extensions

  • Add a --list-hooks command.
  • Add CSV export of events.

8.2 Intermediate Extensions

  • Add per-domain statistics.
  • Integrate with AVC log parsing.

8.3 Advanced Extensions

  • Add eBPF maps for aggregation.
  • Visualize hook latency distributions.

9. Real-World Connections

9.1 Industry Applications

  • Kernel-level auditing for security teams.
  • Performance analysis for SELinux-heavy workloads.
9.2 Tools You’ll Use

  • bpftrace, perf, and ftrace tooling.

9.3 Interview Relevance

  • Kernel tracing and SELinux internal architecture.

10. Resources

10.1 Essential Reading

  • “Linux Kernel Development” (security)
  • “BPF Performance Tools” (eBPF tracing)

10.2 Video Resources

  • Kernel tracing conference talks

10.3 Tools & Documentation

  • bpftrace, ftrace, /proc/kallsyms

11. Self-Assessment Checklist

11.1 Understanding

  • I can explain LSM hooks and SELinux enforcement path.
  • I can explain AVC cache behavior.
  • I can use bpftrace safely.

11.2 Implementation

  • Tracing output is stable and filtered.
  • AVC stats are reported correctly.
  • Error handling works for missing symbols.

11.3 Growth

  • I can explain performance impact of tracing.
  • I documented safe usage patterns.

12. Submission / Completion Criteria

Minimum Viable Completion:

  • Trace one SELinux hook with filters.

Full Completion:

  • Include multiple hooks and AVC stats.

Excellence (Going Above & Beyond):

  • Aggregate events and visualize latency.

13. Additional Content Rules (Hard Requirements)

13.1 Determinism

  • Use fixed trace scenarios and filters.

13.2 Outcome Completeness

  • Provide success and failure demos with exit codes.

13.3 Cross-Linking

  • Link to P01 and P02 for context and AVC analysis.

13.4 No Placeholder Text

  • All sections are fully specified.