Project 8: SELinux Policy Diff Tool
Build a tool that compares two SELinux policy versions and highlights security-relevant changes.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Level 3: Advanced |
| Time Estimate | 2-3 weeks |
| Main Programming Language | Python |
| Alternative Programming Languages | Go |
| Coolness Level | Level 4 |
| Business Potential | 2 |
| Prerequisites | Projects 2-3, policy tooling basics |
| Key Topics | policy structure, rule normalization, neverallow analysis |
1. Learning Objectives
By completing this project, you will:
- Understand how SELinux policies are structured and layered.
- Normalize policy rules to enable reliable diffs.
- Detect high-risk changes like removed neverallow rules.
- Build a report that summarizes policy impact.
- Learn to interpret policy tool outputs (sesearch, seinfo).
2. All Theory Needed (Per-Concept Breakdown)
SELinux Policy Structure and Module Priority
Fundamentals
SELinux policies are built from multiple modules layered into a single policy store. Each module can define types, attributes, and rules. Module priority determines which definitions win when conflicts exist. Understanding this structure is essential for a diff tool because changes can come from different modules and priorities, and the effective policy is what matters. Your tool must compare effective policy outputs rather than raw source modules when possible.
Deep Dive into the concept
The SELinux policy store is a compiled representation of multiple modules. Modules are compiled .pp packages that include types, rules, and file contexts. The system loads these modules with priorities, typically defaulting to 100. A higher priority module can override definitions from lower priorities, enabling local customizations. This layering means that two policies with different modules may still produce the same effective rules if overrides neutralize changes. Therefore, a diff tool should focus on effective rules, not just module files.
Tools like seinfo and sesearch query the effective policy store. When you use these tools on a policy file (.pp or .cil), they operate on the compiled policy. A policy diff tool can run sesearch -A on each policy and compare the resulting allow rules. However, sesearch output order is not stable, so normalization is required. Furthermore, policy rules can include type attributes and macros that expand into many rules. Your tool should resolve attributes and macros into explicit rule sets before diffing to avoid missing changes.
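As a sketch of this extraction step (the regular expression and the canonical tuple shape are illustrative design choices, not the only option), a single `sesearch -A` output line can be normalized like this:

```python
import re

# Hypothetical parser for one line of `sesearch -A` output, e.g.
#   allow httpd_t http_port_t:tcp_socket { name_bind name_connect };
# Returns a canonical (source, target, class, perms) tuple, or None
# if the line is not an allow rule.
RULE_RE = re.compile(
    r"^allow\s+(?P<src>\S+)\s+(?P<tgt>\S+):(?P<cls>\S+)\s+"
    r"(?:\{\s*(?P<perms>[^}]+)\}|(?P<perm>\S+?));?\s*$"
)

def parse_allow(line):
    m = RULE_RE.match(line.strip())
    if not m:
        return None
    perms = m.group("perms") or m.group("perm")
    # Canonical form: sorted, de-duplicated permission tuple,
    # so { read write } and { write read } compare equal.
    canon = tuple(sorted(set(perms.replace(";", "").split())))
    return (m.group("src"), m.group("tgt"), m.group("cls"), canon)
```

In the real tool you would feed this parser from `subprocess.run(["sesearch", "-A", policy_path], ...)`; the parser itself stays testable against fixture text.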
Module priorities also affect policy diff. Suppose a custom module overrides a boolean or rule from the base policy. The diff tool should report that override as a change, but also note which module introduced it. This is why capturing module metadata (name, priority, version) is useful. You can parse module lists with semodule -lfull and include it in the report.
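A minimal sketch of capturing module metadata, assuming the common three-column `semodule -lfull` layout (`<priority> <name> <lang>`; exact columns vary by distribution, so treat this as an assumption):

```python
def parse_module_list(text):
    """Parse `semodule -lfull` style output into (priority, name, lang)
    tuples, e.g. '100 abrt pp' or '400 mylocal cil'."""
    modules = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0].isdigit():
            prio, name = int(parts[0]), parts[1]
            lang = parts[2] if len(parts) > 2 else ""
            modules.append((prio, name, lang))
    return modules

def effective_overrides(modules):
    """Group entries by module name; the highest priority wins, which is
    exactly the override behavior the diff report should record."""
    best = {}
    for prio, name, lang in modules:
        if name not in best or prio > best[name][0]:
            best[name] = (prio, lang)
    return best
```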
Finally, policy structure includes constraints and neverallow rules, which are not always visible in standard allow rule output. A robust diff should include both allow and neverallow changes, because the removal of a neverallow can open security holes even if allow rules remain unchanged. Your tool should therefore include a dedicated section for neverallow changes.
Additional operational notes on SELinux Policy Structure and Module Priority: In real systems, this concept interacts with policy versions, distribution defaults, and local overrides. Always record the exact policy version and runtime toggles when diagnosing behavior, because the same action can be allowed on one host and denied on another. When you change configuration related to this concept, capture before/after evidence (labels, logs, and outcomes) so you can justify the change, detect regressions, and roll it back if needed. Treat every tweak as a hypothesis: change one variable, re-run the same action, and compare results against a known baseline. This makes debugging repeatable and keeps your fixes defensible.
From a design perspective, treat SELinux Policy Structure and Module Priority as an invariant: define what success means, which data proves it, and what failure looks like. Build tooling that supports dry-run mode and deterministic fixtures so you can validate behavior without risking production. This also makes the concept teachable to others. Finally, connect the concept to security and performance trade-offs: overly broad changes reduce security signal, while overly strict changes create operational friction. Good designs surface these trade-offs explicitly so operators can make safe decisions.
Further depth on SELinux Policy Structure and Module Priority: In production environments, this concept is shaped by policy versions, automation layers, and distro-specific defaults. To keep reasoning consistent, capture a minimal evidence bundle every time you analyze behavior: the policy name/version, the exact labels or contexts involved, the command that triggered the action, and the resulting audit event. If the same action yields different decisions on two hosts, treat that as a signal that a hidden variable changed (boolean state, module priority, label drift, or category range). This disciplined approach prevents trial-and-error debugging and makes your conclusions defensible.
Operationally, build a short checklist for SELinux Policy Structure and Module Priority: verify prerequisites, verify labels or mappings, verify policy query results, then run the action and confirm the expected audit outcome. Track metrics that reflect stability, such as the count of denials per hour, the number of unique denial keys, or the fraction of hosts in compliance. When you must change behavior, apply the smallest change that can be verified (label fix before boolean, boolean before policy). Document the rollback path and include a post-change validation step so the system returns to a known-good state.
How this fits into projects
Policy structure is central to §3.2 and §4.2. It also ties to P03-custom-application-policy-module-builder.md.
Definitions & key terms
- module -> compiled policy package
- priority -> module override order
- effective policy -> final rules after module merge
- attribute expansion -> resolving type groups into explicit rules
Mental model diagram
modules + priorities -> compiled policy -> sesearch output
How it works (step-by-step, with invariants and failure modes)
- Load modules into policy store.
- Apply module priorities to resolve conflicts.
- Compile to a single effective policy.
- Query effective rules via sesearch.
Invariants: effective policy is the kernel-enforced policy. Failure modes: diffing raw module files instead of effective rules.
Minimal concrete example
$ semodule -lfull | head -3
Common misconceptions
- “Comparing module files is enough.” -> Effective policy may differ.
- “Module priority is irrelevant.” -> It can override important rules.
Check-your-understanding questions
- Why should a diff tool compare effective policy rules?
- What is module priority used for?
- Why are attributes significant in policy diffs?
Check-your-understanding answers
- Because the kernel enforces the effective policy, not raw modules.
- To resolve conflicts when multiple modules define the same items.
- Attributes expand into many rules and can hide changes.
Real-world applications
- Auditing policy changes across OS updates.
- Compliance reporting for security baselines.
Where you’ll apply it
- This project: §3.2, §3.7, §4.2, §6.2.
- Also used in: P12-enterprise-selinux-security-platform.md.
References
- “SELinux Notebook” (policy structure)
- Red Hat SELinux Guide (policy modules)
Key insights
Effective policy is the only policy that matters for enforcement; diff that, not source.
Summary
Policies are modular and layered. Module priorities determine the effective rules you must compare.
Homework/Exercises to practice the concept
- List installed modules with priorities.
- Identify a local module and note its priority.
- Explain how a higher priority module can override a rule.
Solutions to the homework/exercises
- List them with semodule -lfull.
- Look for modules with pp packages from local builds.
- Higher priority wins in conflicts, changing effective rules.
Rule Normalization and Diffing Strategy
Fundamentals
Policy rule outputs are not stable across runs: order can change, and attributes can expand differently. To produce a reliable diff, you must normalize rules. This includes sorting, removing duplicates, and rewriting rules into a canonical form. Without normalization, your diff tool will produce noisy results and miss meaningful changes.
Deep Dive into the concept
sesearch output is text-based and not guaranteed to be stable. It may list rules in different orders, and it may collapse multiple permissions into a single line. Additionally, types in rules can be attributes (groups) rather than explicit types. For a reliable diff, you must expand attributes into their member types. The seinfo tool can list attribute memberships. Your tool should resolve each rule into explicit source_type -> target_type:class perms tuples, sort them, and then diff the sets. This yields deterministic, meaningful diffs.
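Attribute expansion can be sketched as follows; here the attribute-to-members mapping is supplied directly, though in practice you would build it from `seinfo` output:

```python
def expand_rule(rule, attr_members):
    """Expand attribute source/target names into explicit types.

    `rule` is a (source, target, cls, perms) tuple; `attr_members` maps
    an attribute name to its set of member types. Names that are not
    attributes pass through unchanged. Yields one explicit tuple per
    (source, target) combination, in sorted (deterministic) order.
    """
    src, tgt, cls, perms = rule
    sources = attr_members.get(src, {src})
    targets = attr_members.get(tgt, {tgt})
    for s in sorted(sources):
        for t in sorted(targets):
            yield (s, t, cls, perms)
```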
Normalization also includes canonicalizing permissions. For example, { read write } and { write read } should be treated as the same set. Sorting permissions alphabetically and converting to a canonical string ensures stable comparisons. Similarly, some rules may include * or all to indicate all permissions; these should be expanded if possible or marked explicitly.
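A minimal canonicalization sketch showing that permission order no longer matters after normalization (a `*` wildcard would still need expansion against the class definition, which this sketch does not attempt):

```python
def canonical_perms(perms):
    """Deduplicate and sort permissions into a stable canonical tuple."""
    return tuple(sorted(set(perms)))

def same_rule(a, b):
    """Two rules are identical when their canonical forms match."""
    (s1, t1, c1, p1), (s2, t2, c2, p2) = a, b
    return (s1, t1, c1, canonical_perms(p1)) == (s2, t2, c2, canonical_perms(p2))
```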
When diffing two policies, you should compute added, removed, and changed rules. For changed rules, it is useful to show the old and new permissions for the same (source, target, class) tuple. This helps identify subtle changes, such as a new write permission added to an existing rule. A good tool also groups changes by domain or by target type to make the report easier to understand.
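The added/removed/changed computation can be sketched as set operations over rules keyed by (source, target, class); the return shape here is one possible design:

```python
def diff_rules(old, new):
    """Compare two iterables of canonical (source, target, cls, perms)
    tuples. Returns added keys, removed keys, and changed keys with
    their old/new permission lists, all in sorted deterministic order."""
    old_map = {(s, t, c): set(p) for s, t, c, p in old}
    new_map = {(s, t, c): set(p) for s, t, c, p in new}
    added = sorted(k for k in new_map if k not in old_map)
    removed = sorted(k for k in old_map if k not in new_map)
    changed = {
        k: (sorted(old_map[k]), sorted(new_map[k]))
        for k in old_map.keys() & new_map.keys()
        if old_map[k] != new_map[k]
    }
    return added, removed, changed
```

A "changed" entry such as `(["read"], ["read", "write"])` is exactly the subtle case described above: a write permission quietly added to an existing rule.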
Finally, you should handle noise from booleans. Boolean states can change the effective policy by enabling or disabling conditional rules. If you compare two policies with different boolean states, you may report changes that are not actually due to policy updates. Your tool should therefore include a boolean snapshot and note any differences, or allow the user to supply a fixed boolean state for both policies. This makes the diff accurate and repeatable.
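A boolean snapshot can be sketched as a parser for `getsebool -a` style lines (`name --> on|off`); the drift helper flags booleans whose state differs between two snapshots:

```python
def parse_boolean_snapshot(text):
    """Parse `getsebool -a` style output ('name --> on|off') into a dict."""
    snapshot = {}
    for line in text.splitlines():
        if " --> " in line:
            name, state = line.split(" --> ", 1)
            snapshot[name.strip()] = state.strip() == "on"
    return snapshot

def boolean_drift(a, b):
    """Booleans whose state differs between two snapshots; these are
    likely sources of diff noise rather than real policy changes."""
    return sorted(k for k in a.keys() & b.keys() if a[k] != b[k])
```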
Operational expansion for Rule Normalization and Diffing Strategy: In real systems, the behavior you observe is the product of policy, labels, and runtime state. That means your investigation workflow must be repeatable. Start by documenting the exact inputs (contexts, paths, users, domains, ports, and the action attempted) and the exact outputs (audit events, error codes, and any policy query results). Then, replay the same action after each change so you can attribute cause and effect. When the concept touches multiple subsystems, isolate variables: change one label, one boolean, or one rule at a time. This reduces confusion and prevents accidental privilege creep. Use staging environments or fixtures to test fixes before deploying them widely, and always keep a rollback path ready.
To deepen understanding, connect Rule Normalization and Diffing Strategy to adjacent concepts: how it affects policy decisions, how it appears in logs, and how it changes operational risk. Build small verification scripts that assert the expected outcome and fail loudly if the outcome diverges. Over time, these scripts become a regression suite for your SELinux posture. Finally, treat the concept as documentation-worthy: write down the invariants it guarantees, the constraints it imposes, and the exact evidence that proves it works. This makes future debugging faster and creates a shared mental model for teams.
Supplemental note for Rule Normalization and Diffing Strategy: ensure your documentation includes a minimal reproducible example and a known-good output snapshot. Pair this with a small checklist of preconditions and postconditions so anyone rerunning the exercise can validate the result quickly. This turns the concept into a repeatable experiment rather than a one-off intuition.
How this fits into projects
Rule normalization drives the deterministic diff requirements in §3.2 and §3.7 and is exercised by the test cases in §6.2.
Further depth on Rule Normalization and Diffing Strategy: In production environments, this concept is shaped by policy versions, automation layers, and distro-specific defaults. To keep reasoning consistent, capture a minimal evidence bundle every time you analyze behavior: the policy name/version, the exact labels or contexts involved, the command that triggered the action, and the resulting audit event. If the same action yields different decisions on two hosts, treat that as a signal that a hidden variable changed (boolean state, module priority, label drift, or category range). This disciplined approach prevents trial-and-error debugging and makes your conclusions defensible.
Operationally, build a short checklist for Rule Normalization and Diffing Strategy: verify prerequisites, verify labels or mappings, verify policy query results, then run the action and confirm the expected audit outcome. Track metrics that reflect stability, such as the count of denials per hour, the number of unique denial keys, or the fraction of hosts in compliance. When you must change behavior, apply the smallest change that can be verified (label fix before boolean, boolean before policy). Document the rollback path and include a post-change validation step so the system returns to a known-good state.
Neverallow Rules and Risk Assessment
Fundamentals
neverallow rules are policy constraints that forbid certain accesses, even if allow rules exist. Removing or weakening a neverallow is a high-risk change because it can open access to sensitive resources. Your diff tool must detect neverallow changes and flag them explicitly. It should also classify risk based on domains and target types involved.
Deep Dive into the concept
neverallow rules act as guardrails. They are evaluated against allow rules and ensure that certain permissions are impossible. For example, many policies include neverallow rules preventing unconfined domains from accessing shadow_t. If such a rule is removed, it indicates a critical policy change that could undermine security. Unfortunately, many diff tools ignore neverallow rules because they are less visible than allow rules. Your tool should treat them as first-class diff items.
To detect changes, you can use sesearch --neverallow on both policies and compare outputs. As with allow rules, you must normalize and canonicalize the output. Then, identify any removed or modified neverallow rules and mark them as high risk. You should also detect additions, which may indicate stricter policy. Changes in neverallow rules should be highlighted at the top of the report.
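A sketch of neverallow diffing over pre-collected `sesearch --neverallow` output lines; the regex and return shape are illustrative assumptions:

```python
import re

# Hypothetical shape for one neverallow line, e.g.
#   neverallow unconfined_t shadow_t:file { read write };
NEVERALLOW_RE = re.compile(
    r"^neverallow\s+(\S+)\s+(\S+):(\S+)\s+\{?\s*([^};]+?)\s*\}?\s*;?\s*$"
)

def diff_neverallow(old_lines, new_lines):
    """Diff neverallow rules from two `sesearch --neverallow` outputs.
    Removed rules deserve a high-risk flag in the report: dropping a
    neverallow can silently open access that was previously impossible."""
    def canon(lines):
        rules = set()
        for line in lines:
            m = NEVERALLOW_RE.match(line.strip())
            if m:
                src, tgt, cls, perms = m.groups()
                rules.add((src, tgt, cls, tuple(sorted(perms.split()))))
        return rules
    old, new = canon(old_lines), canon(new_lines)
    return {"removed": sorted(old - new), "added": sorted(new - old)}
```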
Risk assessment should consider both the sensitivity of target types and the privileges of source domains. For example, allowing access to shadow_t or security_t is high risk. Allowing a confined domain to read its own log files is low risk. A simple risk model can assign a score based on type categories (system, user, kernel, etc.) and permission types (write/execute higher risk than read). The report should present a summary of high-risk changes to guide human review.
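One possible heuristic risk model following the idea above; the type lists, weights, and thresholds below are illustrative placeholders, not a standard:

```python
# Hypothetical weights: sensitive target types and dangerous permission
# kinds dominate the score. Tune these lists for your own policy set.
SENSITIVE_TYPES = {"shadow_t": 10, "security_t": 10, "etc_t": 5}
PERM_WEIGHTS = {"write": 3, "execute": 3, "read": 1}

def risk_score(rule):
    """Score a (source, target, cls, perms) tuple."""
    _src, tgt, _cls, perms = rule
    target_weight = SENSITIVE_TYPES.get(tgt, 1)
    perm_weight = max((PERM_WEIGHTS.get(p, 1) for p in perms), default=1)
    return target_weight * perm_weight

def risk_label(score):
    """Map a numeric score onto report buckets (thresholds are arbitrary)."""
    return "HIGH" if score >= 15 else "MEDIUM" if score >= 5 else "LOW"
```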
Finally, the tool should provide evidence for each high-risk change, including the exact rule difference and the module or policy version where it appeared. This helps teams investigate the cause and decide whether it is acceptable.
Operational expansion for Neverallow Rules and Risk Assessment: In real systems, the behavior you observe is the product of policy, labels, and runtime state. That means your investigation workflow must be repeatable. Start by documenting the exact inputs (contexts, paths, users, domains, ports, and the action attempted) and the exact outputs (audit events, error codes, and any policy query results). Then, replay the same action after each change so you can attribute cause and effect. When the concept touches multiple subsystems, isolate variables: change one label, one boolean, or one rule at a time. This reduces confusion and prevents accidental privilege creep. Use staging environments or fixtures to test fixes before deploying them widely, and always keep a rollback path ready.
To deepen understanding, connect Neverallow Rules and Risk Assessment to adjacent concepts: how it affects policy decisions, how it appears in logs, and how it changes operational risk. Build small verification scripts that assert the expected outcome and fail loudly if the outcome diverges. Over time, these scripts become a regression suite for your SELinux posture. Finally, treat the concept as documentation-worthy: write down the invariants it guarantees, the constraints it imposes, and the exact evidence that proves it works. This makes future debugging faster and creates a shared mental model for teams.
Supplemental note for Neverallow Rules and Risk Assessment: ensure your documentation includes a minimal reproducible example and a known-good output snapshot. Pair this with a small checklist of preconditions and postconditions so anyone rerunning the exercise can validate the result quickly. This turns the concept into a repeatable experiment rather than a one-off intuition.
How this fits into projects
Neverallow extraction is specified in §3.2 and its high-risk flags are tested in §6.2; like allow rules, neverallow output must be normalized (§3.7) before diffing.
Further depth on Neverallow Rules and Risk Assessment: In production environments, this concept is shaped by policy versions, automation layers, and distro-specific defaults. To keep reasoning consistent, capture a minimal evidence bundle every time you analyze behavior: the policy name/version, the exact labels or contexts involved, the command that triggered the action, and the resulting audit event. If the same action yields different decisions on two hosts, treat that as a signal that a hidden variable changed (boolean state, module priority, label drift, or category range). This disciplined approach prevents trial-and-error debugging and makes your conclusions defensible.
Operationally, build a short checklist for Neverallow Rules and Risk Assessment: verify prerequisites, verify labels or mappings, verify policy query results, then run the action and confirm the expected audit outcome. Track metrics that reflect stability, such as the count of denials per hour, the number of unique denial keys, or the fraction of hosts in compliance. When you must change behavior, apply the smallest change that can be verified (label fix before boolean, boolean before policy). Document the rollback path and include a post-change validation step so the system returns to a known-good state.
Final depth note on Neverallow Rules and Risk Assessment: tie the concept back to verification. Define a single, repeatable action that proves the rule works, and capture the exact artifact that proves it (a log line, a label comparison, or a policy query). If the proof changes between runs, treat that as a defect in the workflow, not a mystery. This habit prevents subtle regressions and makes audits far easier.
Definitions & key terms
- canonical form -> normalized representation of a rule
- attribute expansion -> replacing attribute with member types
- rule tuple -> (source, target, class, permissions)
- boolean snapshot -> fixed set of boolean states for comparison
Mental model diagram
raw rules -> normalize -> sorted set -> diff
How it works (step-by-step, with invariants and failure modes)
- Extract rules from both policies.
- Expand attributes into explicit types.
- Sort permissions and convert to canonical form.
- Compare sets to compute added/removed/changed.
Invariants: canonical form is stable; sorting makes output deterministic. Failure modes: missing attribute expansion or boolean mismatch.
Minimal concrete example
allow httpd_t http_port_t:tcp_socket { name_bind name_connect }
Common misconceptions
- “Text diff is enough.” -> Order and macros cause noise.
- “Booleans don’t affect diffs.” -> They change effective rules.
Check-your-understanding questions
- Why is attribute expansion important?
- How do you handle permission ordering differences?
- Why should you snapshot booleans?
Check-your-understanding answers
- Attributes represent many types; without expansion you miss rule changes.
- Sort permissions into a canonical order.
- Different boolean states can create false diffs.
Real-world applications
- Auditing policy updates during OS upgrades.
- Compliance reports for regulated environments.
Where you’ll apply it
- This project: §3.2, §3.7, §6.2.
- Also used in: P12-enterprise-selinux-security-platform.md.
References
- “SELinux Notebook” (policy tooling)
- setools documentation
Key insights
A policy diff is only as good as its normalization pipeline.
Summary
Normalize rules into canonical sets to produce meaningful diffs.
Homework/Exercises to practice the concept
- Normalize a set of rules by sorting permissions.
- Expand an attribute into its member types.
- Compare two rule sets and list changes.
Solutions to the homework/exercises
- Alphabetically sort permissions and join into a canonical string.
- Use seinfo -a <attr> -x to list members.
- Compare sorted lists and output added/removed entries.
3. Project Specification
3.1 What You Will Build
A CLI tool named seldiff that compares two SELinux policy snapshots and produces a structured report of allow/neverallow changes.
Included features:
- Load two policy files or current vs baseline
- Extract allow and neverallow rules
- Normalize and diff rule sets
- Risk scoring and summary
3.2 Functional Requirements
- Policy Loading: Accept .pp files or policy store paths.
- Rule Extraction: Pull allow and neverallow rules.
- Normalization: Expand attributes and canonicalize permissions.
- Diff Engine: Compute added/removed/changed rules.
- Reporting: Summarize high-risk changes.
3.3 Non-Functional Requirements
- Determinism: Output must be stable for identical inputs.
- Usability: Provide human-readable and JSON outputs.
- Performance: Process large policies in under 30 seconds.
3.4 Example Usage / Output
$ seldiff old.pp new.pp
Added allow rules: 3
Removed allow rules: 1
Removed neverallow rules: 1 (HIGH RISK)
3.5 Data Formats / Schemas / Protocols
JSON output schema (v1):
{
"added": 3,
"removed": 1,
"neverallow_removed": 1,
"high_risk": ["neverallow unconfined_t shadow_t:file { read }"]
}
3.6 Edge Cases
- Boolean state differences causing noise
- Policies missing neverallow sections
- Large attribute expansions
3.7 Real World Outcome
3.7.1 How to Run (Copy/Paste)
./seldiff baseline.pp current.pp --format text
3.7.2 Golden Path Demo (Deterministic)
Use a frozen baseline policy file and a modified copy with known changes; ensure rule output is sorted.
3.7.3 CLI Transcript (Success and Failure)
$ ./seldiff baseline.pp current.pp
Summary written to report.txt
Exit code: 0
$ ./seldiff missing.pp current.pp
ERROR: policy file not found
Exit code: 2
3.7.4 Exit Codes
- 0: success
- 1: differences found
- 2: invalid input
4. Solution Architecture
4.1 High-Level Design
Policy A -> Rule Extractor -> Normalizer -> Diff Engine -> Report
Policy B -> Rule Extractor -> Normalizer -> Diff Engine -> Report
4.2 Key Components
| Component | Responsibility | Key Decisions |
|---|---|---|
| Loader | Read policy files | Use sesearch on policy paths |
| Normalizer | Expand attributes | Use seinfo for memberships |
| Diff Engine | Compare canonical sets | Sort for determinism |
| Reporter | Risk summary | Highlight neverallow changes |
4.3 Data Structures (No Full Code)
Rule = (source, target, cls, perms)
4.4 Algorithm Overview
Key Algorithm: Policy Diff
- Extract rules from both policies.
- Normalize to canonical tuples.
- Compute set differences.
- Produce risk summary.
Complexity Analysis:
- Time: O(n) rules
- Space: O(n) rules
5. Implementation Guide
5.1 Development Environment Setup
sudo dnf install -y setools-console
5.2 Project Structure
seldiff/
├── seldiff/
│ ├── cli.py
│ ├── extract.py
│ ├── normalize.py
│ ├── diff.py
│ └── report.py
└── tests/
5.3 The Core Question You’re Answering
“What changed in policy, and does it weaken security?”
5.4 Concepts You Must Understand First
- Policy structure and module priority.
- Rule normalization and attribute expansion.
- Neverallow rules and risk assessment.
5.5 Questions to Guide Your Design
- How will you make output deterministic?
- How will you isolate boolean-related differences?
- How will you score risk?
5.6 Thinking Exercise
Given two rule sets, identify which changes are most security-sensitive and why.
5.7 The Interview Questions They’ll Ask
- “What is a neverallow rule?”
- “Why can’t you diff policies with plain text tools?”
- “How do module priorities affect effective policy?”
5.8 Hints in Layers
Hint 1: Start with sesearch -A outputs
Hint 2: Normalize by sorting and splitting permissions
Hint 3: Add neverallow extraction and risk scoring
5.9 Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Policy structure | “SELinux Notebook” | Modules section |
| Policy language | “SELinux by Example” | Policy chapters |
| Security analysis | “Security Engineering” | Access control chapters |
5.10 Implementation Phases
Phase 1: Foundation (4-5 days)
Goals:
- Extract and normalize allow rules.
Tasks:
- Use sesearch to extract allow rules.
- Build canonical rule tuples.
Checkpoint: Rule count matches expected fixture.
Phase 2: Core Functionality (1 week)
Goals:
- Add diff engine and reporting.
Tasks:
- Compute added/removed/changed rules.
- Output summary report.
Checkpoint: Report matches golden path.
Phase 3: Polish & Edge Cases (4-5 days)
Goals:
- Add neverallow diffing and risk scoring.
Tasks:
- Extract neverallow rules.
- Flag high-risk changes.
Checkpoint: Neverallow changes highlighted correctly.
5.11 Key Implementation Decisions
| Decision | Options | Recommendation | Rationale |
|---|---|---|---|
| Rule extraction | parse CIL vs use sesearch | sesearch | simpler, stable output |
| Boolean handling | ignore vs snapshot | snapshot | reduce noise |
| Risk model | heuristic vs manual | heuristic | scalable reporting |
6. Testing Strategy
6.1 Test Categories
| Category | Purpose | Examples |
|---|---|---|
| Unit Tests | normalization | permission sorting |
| Integration Tests | policy diff | baseline vs modified |
| Edge Case Tests | missing files | error handling |
6.2 Critical Test Cases
- Removed neverallow rule flagged as high risk.
- Added allow rule reported as medium risk.
- Same rules in different order produce no diff.
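The third case above (order independence) can be expressed as a small self-contained test; `canonicalize` here is a stand-in for your normalizer:

```python
def canonicalize(rules):
    """Normalize a rule list into an order-independent, comparable set."""
    return {(src, tgt, cls, tuple(sorted(perms)))
            for src, tgt, cls, perms in rules}

def test_order_independence():
    a = [("httpd_t", "etc_t", "file", ["read", "write"]),
         ("sshd_t", "etc_t", "file", ["read"])]
    # Same rules, reversed rule order and reversed permission order.
    b = [(src, tgt, cls, list(reversed(perms)))
         for src, tgt, cls, perms in reversed(a)]
    assert canonicalize(a) == canonicalize(b)

test_order_independence()
```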
6.3 Test Data
fixtures/policy_baseline.pp
fixtures/policy_modified.pp
7. Common Pitfalls & Debugging
7.1 Frequent Mistakes
| Pitfall | Symptom | Solution |
|---|---|---|
| Comparing raw module files | noisy diffs | compare effective rules |
| Ignoring booleans | false positives | snapshot boolean state |
| Not expanding attributes | missing changes | expand types via seinfo |
7.2 Debugging Strategies
- Compare a single domain’s rules first.
- Use small fixture policies for testing.
7.3 Performance Traps
- Expanding large attributes without caching can be slow.
8. Extensions & Challenges
8.1 Beginner Extensions
- Add CSV export.
- Highlight top changed domains.
8.2 Intermediate Extensions
- Add module metadata diff (version/priority).
- Include boolean differences in report.
8.3 Advanced Extensions
- Build a web UI for policy diff browsing.
- Integrate with CI pipelines for policy regression checks.
9. Real-World Connections
9.1 Industry Applications
- OS upgrade policy regression audits.
- Compliance and security change management.
9.2 Related Open Source Projects
- setools (sesearch, seinfo).
9.3 Interview Relevance
- Policy analysis and risk reasoning.
10. Resources
10.1 Essential Reading
- “SELinux Notebook” (policy tooling)
- “SELinux by Example” (policy language)
10.2 Video Resources
- Policy analysis talks
10.3 Tools & Documentation
sesearch, seinfo, semodule
10.4 Related Projects in This Series
- P02-avc-denial-analyzer-auto-fixer.md
- P03-custom-application-policy-module-builder.md
- P12-enterprise-selinux-security-platform.md
11. Self-Assessment Checklist
11.1 Understanding
- I can explain module priorities and effective policy.
- I can normalize policy rules for diffs.
- I can assess risk from neverallow changes.
11.2 Implementation
- Diff output is deterministic.
- Report highlights high-risk changes.
- JSON output matches schema.
11.3 Growth
- I can justify my risk scoring model.
- I documented policy diff methodology.
12. Submission / Completion Criteria
Minimum Viable Completion:
- Compare two policies and list added/removed allow rules.
Full Completion:
- Include neverallow changes and risk summary.
Excellence (Going Above & Beyond):
- Integrate into CI for automated policy regression checks.
13 Additional Content Rules (Hard Requirements)
13.1 Determinism
- Normalize rule order and permissions.
13.2 Outcome Completeness
- Provide success and failure CLI demos with exit codes.
13.3 Cross-Linking
- Link to P03 and P12 where policy analysis is reused.
13.4 No Placeholder Text
- All sections are fully specified.