Project 6: File Context Integrity Checker
Build a scanning tool that detects SELinux label drift and generates safe relabel plans.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Level 2: Intermediate |
| Time Estimate | 1-2 weeks |
| Main Programming Language | Python |
| Alternative Programming Languages | Go, Rust |
| Coolness Level | Level 3 |
| Business Potential | 2 |
| Prerequisites | Project 1, file system basics, SELinux tools |
| Key Topics | file contexts, label drift, xattrs, compliance scanning |
1. Learning Objectives
By completing this project, you will:
- Detect mismatches between actual file labels and policy defaults.
- Build a deterministic scanning pipeline for large directory trees.
- Generate safe relabel plans and dry-run commands.
- Understand how labels are stored and how they drift over time.
- Create reports suitable for compliance and audits.
2. All Theory Needed (Per-Concept Breakdown)
File Context Rules and Default Labels
Fundamentals
SELinux assigns default labels to files based on path patterns defined in policy. These file context rules are stored in the policy and can be queried with semanage fcontext -l or matchpathcon. When a file is created, its label is typically inherited from the parent directory, but policy rules override defaults when they match. To detect label drift, you compare a file’s actual label to the expected label from policy. This concept is the basis of your integrity checker.
Deep Dive into the concept
File context rules are the canonical source of truth for expected labels. They are regex-based mappings from paths to types. The policy includes thousands of such rules, and the most specific match wins. This is why a generic rule like /var(/.*)? is overridden by more specific rules such as /var/www(/.*)? or /var/log(/.*)?. The matchpathcon tool evaluates these rules and tells you what the label should be. A drift checker should call matchpathcon for each path and compare it to the actual label from getfilecon or ls -Z.
In practice, label drift happens for several reasons: copying files without preserving xattrs, mounting filesystems without SELinux support, restoring from backups, or manually using chcon. Drift can be subtle: the file may still be accessible, but the wrong label can cause unexpected denials later. For compliance, you need a repeatable scan that reports these discrepancies. This scan must be efficient: calling matchpathcon for every file is expensive, so you should cache results for common path prefixes and avoid redundant work.
Another nuance is that not all mismatches are errors. Some files are intentionally labeled differently using semanage fcontext rules, which are persistent overrides. Your tool must respect these overrides, which is why you should query the effective policy mapping rather than hardcoding expectations. The correct logic is: policy is the truth, not the file system. If the file label does not match policy, it is drift and needs a fix or an updated policy rule.
Finally, note that the expected label includes the full context, including MLS/MCS levels. If your system uses categories, matchpathcon will show s0 by default. Your tool should compare at least the type, and optionally the full context. The choice is a key design decision: comparing full contexts may produce false positives in MCS environments, while comparing types may miss category mismatches. For this project, support both modes.
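The two comparison modes can be sketched in a few lines of Python. `parse_context` and `labels_match` are illustrative helper names for this project, not part of libselinux:

```python
# Illustrative helpers (not a real libselinux API): compare two SELinux
# contexts either by type only or by the full context string.

def parse_context(ctx: str) -> dict:
    """Split user:role:type[:level] into named fields."""
    user, role, type_, *level = ctx.split(":", 3)
    return {"user": user, "role": role, "type": type_,
            "level": level[0] if level else None}

def labels_match(expected: str, actual: str, mode: str = "type") -> bool:
    """mode='type' compares only the type field (tolerant of MCS category
    differences); mode='full' requires the whole context to match."""
    if mode == "full":
        return expected == actual
    return parse_context(expected)["type"] == parse_context(actual)["type"]
```

Note how an MCS host with extra categories (`...:s0:c5`) still matches in type-only mode but fails in full mode, which is exactly the false-positive trade-off described above.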
Additional operational notes on File Context Rules and Default Labels: In real systems, this concept interacts with policy versions, distribution defaults, and local overrides. Always record the exact policy version and runtime toggles when diagnosing behavior, because the same action can be allowed on one host and denied on another. When you change configuration related to this concept, capture before/after evidence (labels, logs, and outcomes) so you can justify the change, detect regressions, and roll it back if needed. Treat every tweak as a hypothesis: change one variable, re-run the same action, and compare results against a known baseline. This makes debugging repeatable and keeps your fixes defensible.
From a design perspective, treat File Context Rules and Default Labels as an invariant: define what success means, which data proves it, and what failure looks like. Build tooling that supports dry-run mode and deterministic fixtures so you can validate behavior without risking production. This also makes the concept teachable to others. Finally, connect the concept to security and performance trade-offs: overly broad changes reduce security signal, while overly strict changes create operational friction. Good designs surface these trade-offs explicitly so operators can make safe decisions.
How this fits into the project
Label rules drive §3.2 Functional Requirements and §3.7 Real World Outcome. This concept is also referenced in P01-selinux-context-explorer-visualizer.md.
Definitions & key terms
- file context rule -> mapping from path pattern to label
- matchpathcon -> tool to get expected label
- drift -> mismatch between expected and actual label
- override -> custom rule added via `semanage fcontext`
Mental model diagram
policy path regex -> expected label
filesystem xattr -> actual label
compare -> drift report
How it works (step-by-step, with invariants and failure modes)
- For each path, compute the expected label with `matchpathcon`.
- Read the actual label via `getfilecon`.
- Compare labels (type-only or full context).
- Record mismatches and generate relabel commands.
Invariants: policy is the source of truth; matchpathcon output is deterministic. Failure modes: missing xattrs, permission errors, mislabeled policy.
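The steps and failure modes above can be folded into a single classification function. This is a testable sketch: the two label resolvers are injected as callables, and in the real tool they would wrap `matchpathcon` and `getfilecon`; the function name and return values are hypothetical, not a fixed API.

```python
# Sketch: classify one path as 'ok', 'drift', 'unlabeled', or 'error'.
# expected_of/actual_of are injected resolvers (real tool: matchpathcon
# and getfilecon wrappers); actual_of returns None for missing labels.

def check_path(path, expected_of, actual_of, mode="type"):
    try:
        expected = expected_of(path)   # policy default for this path
        actual = actual_of(path)       # on-disk xattr, or None
    except PermissionError:
        return "error"                 # report separately, keep scanning
    if actual is None:
        return "unlabeled"             # xattr missing or unsupported
    if mode == "full":
        return "ok" if expected == actual else "drift"
    # type-only: third field of user:role:type:level
    return "ok" if expected.split(":")[2] == actual.split(":")[2] else "drift"
```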
Minimal concrete example
$ matchpathcon -V /srv/www
/srv/www has context system_u:object_r:httpd_sys_content_t:s0, should be system_u:object_r:default_t:s0
Common misconceptions
- “If it works, the label is fine.” -> Drift can be latent and still risky.
- “chcon fixes drift permanently.” -> It does not survive relabels.
Check-your-understanding questions
- Why should you use `matchpathcon` instead of guessing labels?
- What is the difference between expected and actual labels?
- Why might full-context comparison be too strict?
Check-your-understanding answers
- It uses the policy-defined mapping and reflects real expectations.
- Expected is policy default; actual is the xattr on disk.
- MCS categories may differ even when the type is correct.
Real-world applications
- Compliance scans for production systems.
- Label drift detection after migrations.
Where you’ll apply it
- This project: §3.2, §3.7, §5.10 Phase 2, §6.2.
- Also used in: P09-ansible-selinux-hardening-role.md.
References
- “SELinux System Administration” (labeling)
- Red Hat SELinux Guide, file contexts
Key insights
Policy is the source of truth; drift is any deviation from it.
Summary
File context rules define expected labels. Drift detection compares those rules to live filesystem labels.
Homework/Exercises to practice the concept
- Use `matchpathcon -V` on three files and interpret the output.
- Create a custom fcontext rule and observe how `matchpathcon` changes.
- Compare type-only vs full-context comparison on an MCS system.
Solutions to the homework/exercises
- The `-V` output shows expected vs actual labels.
- Run `semanage fcontext -a -t <type> "/path(/.*)?"`, then rerun `matchpathcon`.
- Full-context includes categories; type-only ignores them.
Extended Attributes and Label Lifecycle
Fundamentals
SELinux labels on files are stored as extended attributes (security.selinux) on the inode. This means labels persist across moves within the same filesystem but can be lost when copying without preserving xattrs or when using filesystems that do not support them. Understanding this lifecycle explains why label drift happens and how to fix it. Your tool needs to detect when labels are missing and classify such cases separately.
Deep Dive into the concept
Extended attributes are metadata stored alongside the inode. SELinux uses the security.selinux xattr to store the full context label. When you move a file within the same filesystem, the inode stays the same and the label stays attached. When you copy a file, a new inode is created, and the label will only be preserved if the copy tool supports xattrs and is invoked with the correct flags (cp -a or cp --preserve=xattr). Many backup or deployment tools do not preserve xattrs by default, leading to labels being reset to default_t or lost entirely. A drift checker should identify these cases, because the fix is often to relabel entire directories after a deployment.
Mount options also affect labels. If a filesystem is mounted with context= or fscontext=, the kernel applies a fixed label to all files, which may hide the actual xattrs. This can be intentional (for removable media) but often indicates misconfiguration. Filesystems mounted without seclabel support cannot store labels, causing them to appear as unlabeled_t. Your tool should detect these conditions and warn that relabeling will not solve the problem if the filesystem cannot store labels.
Relabel operations (restorecon, setfiles) write labels back to xattrs based on policy. They are often used after system installs or large file transfers. But relabeling is not magic: it cannot fix missing policy rules or incorrect path mappings. The correct workflow is to ensure policy has the right mappings, then relabel. Your tool should therefore generate a plan that includes semanage fcontext changes before relabeling when needed.
Understanding label lifecycle also informs performance decisions. When scanning large directories, repeatedly reading xattrs can be expensive. You can optimize by reading labels in batches or by limiting scans to directories known to be important. But for compliance, you may need a full scan. Your tool should support both incremental and full scans, with clear documentation of the trade-offs.
Operational expansion for Extended Attributes and Label Lifecycle: In real systems, the behavior you observe is the product of policy, labels, and runtime state. That means your investigation workflow must be repeatable. Start by documenting the exact inputs (contexts, paths, users, domains, ports, and the action attempted) and the exact outputs (audit events, error codes, and any policy query results). Then, replay the same action after each change so you can attribute cause and effect. When the concept touches multiple subsystems, isolate variables: change one label, one boolean, or one rule at a time. This reduces confusion and prevents accidental privilege creep. Use staging environments or fixtures to test fixes before deploying them widely, and always keep a rollback path ready.
To deepen understanding, connect Extended Attributes and Label Lifecycle to adjacent concepts: how it affects policy decisions, how it appears in logs, and how it changes operational risk. Build small verification scripts that assert the expected outcome and fail loudly if the outcome diverges. Over time, these scripts become a regression suite for your SELinux posture. Finally, treat the concept as documentation-worthy: write down the invariants it guarantees, the constraints it imposes, and the exact evidence that proves it works. This makes future debugging faster and creates a shared mental model for teams.
How this fits into the project
Label rules drive §3.2 Functional Requirements and §3.7 Real World Outcome. This concept is also referenced in P01-selinux-context-explorer-visualizer.md.
Further depth on Extended Attributes and Label Lifecycle: In production environments, this concept is shaped by policy versions, automation layers, and distro-specific defaults. To keep reasoning consistent, capture a minimal evidence bundle every time you analyze behavior: the policy name/version, the exact labels or contexts involved, the command that triggered the action, and the resulting audit event. If the same action yields different decisions on two hosts, treat that as a signal that a hidden variable changed (boolean state, module priority, label drift, or category range). This disciplined approach prevents trial-and-error debugging and makes your conclusions defensible.
Operationally, build a short checklist for Extended Attributes and Label Lifecycle: verify prerequisites, verify labels or mappings, verify policy query results, then run the action and confirm the expected audit outcome. Track metrics that reflect stability, such as the count of denials per hour, the number of unique denial keys, or the fraction of hosts in compliance. When you must change behavior, apply the smallest change that can be verified (label fix before boolean, boolean before policy). Document the rollback path and include a post-change validation step so the system returns to a known-good state.
Definitions & key terms
- file context rule -> mapping from path pattern to label
- matchpathcon -> tool to get expected label
- drift -> mismatch between expected and actual label
- override -> custom rule added via
semanage fcontext
Mental model diagram
policy path regex -> expected label
filesystem xattr -> actual label
compare -> drift report
How it works (step-by-step, with invariants and failure modes)
- For each path, compute expected label with
matchpathcon. - Read actual label via
getfilecon. - Compare labels (type-only or full context).
- Record mismatches and generate relabel commands.
Invariants: policy is the source of truth; matchpathcon output is deterministic. Failure modes: missing xattrs, permission errors, mislabeled policy.
Minimal concrete example
$ matchpathcon -V /srv/www
/srv/www system_u:object_r:default_t:s0 actual system_u:object_r:httpd_sys_content_t:s0
Common misconceptions
- “If it works, the label is fine.” -> Drift can be latent and still risky.
- “chcon fixes drift permanently.” -> It does not survive relabels.
Check-your-understanding questions
- Why should you use
matchpathconinstead of guessing labels? - What is the difference between expected and actual labels?
- Why might full-context comparison be too strict?
Check-your-understanding answers
- It uses the policy-defined mapping and reflects real expectations.
- Expected is policy default; actual is the xattr on disk.
- MCS categories may differ even when the type is correct.
Real-world applications
- Compliance scans for production systems.
- Label drift detection after migrations.
Where you’ll apply it
- This project: §3.2, §3.7, §5.10 Phase 2, §6.2.
- Also used in: P09-ansible-selinux-hardening-role.md.
References
- “SELinux System Administration” (labeling)
- Red Hat SELinux Guide, file contexts
Key insights
Policy is the source of truth; drift is any deviation from it.
Summary
File context rules define expected labels. Drift detection compares those rules to live filesystem labels.
Homework/Exercises to practice the concept
- Use
matchpathcon -Von three files and interpret the output. - Create a custom fcontext rule and observe how
matchpathconchanges. - Compare type-only vs full-context comparison on an MCS system.
Solutions to the homework/exercises
- The
-Voutput shows expected vs actual labels. semanage fcontext -a -t <type> "/path(/.*)?"then rerunmatchpathcon.- Full-context includes categories; type-only ignores them.
Drift Scanning at Scale and Reporting
Fundamentals
Scanning a full filesystem for label drift can be expensive. A practical tool must support incremental scanning, path filtering, and deterministic reporting. It should also produce a relabel plan that is safe to apply. This requires careful design: avoiding repeated matchpathcon calls, handling permission errors gracefully, and producing stable output for audits.
Deep Dive into the concept
Large-scale drift scanning is a data engineering problem. You may need to scan millions of files across multiple mount points. Each scan involves reading xattrs and computing expected labels. The naive approach is to call matchpathcon for each file, which is slow. A better approach is to build a cache keyed by directory prefixes or by regex rules. For example, if you know that all files under /var/log share the same expected type, you can cache that and reduce repeated calls. However, be careful: some directories have more specific rules, so your cache must respect rule precedence.
Reporting must be deterministic. If you scan in filesystem order, results can vary between runs. A compliance tool should sort output by path and include a stable summary (counts per label type, counts per top-level directory). This makes reports diffable and suitable for audits. The tool should also provide a “dry-run” mode that only reports differences without changing anything. If you choose to include an optional apply mode, it should only generate restorecon commands or a batch plan, not execute arbitrary relabels silently.
Error handling is another important part of scaling. Some files may be unreadable due to permissions or broken links. Your tool should log these errors separately and continue scanning. These errors should not mask drift findings. A good design is to have three output sections: drifted files, errors, and summary counts.
Finally, scanning across mount points requires awareness of filesystem type. Some mounts may not support SELinux labels. Your tool should detect this (e.g., by checking for unlabeled_t patterns or by reading mount options) and report it separately. This is a key insight for operations: sometimes the fix is not relabeling but remounting with proper options.
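Mount-awareness can be sketched by parsing mount options for the `seclabel` flag. This assumes the classic `/proc/self/mounts` layout (device, mountpoint, fstype, options, dump, pass) and takes the text as a parameter so the parser is testable off-host:

```python
# Sketch: find mount points whose options lack "seclabel", i.e. mounts
# where relabeling cannot persist labels. Assumes /proc/self/mounts
# field layout: device mountpoint fstype options dump pass.

def unlabeled_mounts(mounts_text):
    """Return mount points whose option list lacks 'seclabel'."""
    missing = []
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 4:
            continue
        mountpoint, options = fields[1], fields[3].split(",")
        if "seclabel" not in options:
            missing.append(mountpoint)
    return missing

def scan_mounts(path="/proc/self/mounts"):
    with open(path) as f:
        return unlabeled_mounts(f.read())
```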
Operational expansion for Drift Scanning at Scale and Reporting: In real systems, the behavior you observe is the product of policy, labels, and runtime state. That means your investigation workflow must be repeatable. Start by documenting the exact inputs (contexts, paths, users, domains, ports, and the action attempted) and the exact outputs (audit events, error codes, and any policy query results). Then, replay the same action after each change so you can attribute cause and effect. When the concept touches multiple subsystems, isolate variables: change one label, one boolean, or one rule at a time. This reduces confusion and prevents accidental privilege creep. Use staging environments or fixtures to test fixes before deploying them widely, and always keep a rollback path ready.
To deepen understanding, connect Drift Scanning at Scale and Reporting to adjacent concepts: how it affects policy decisions, how it appears in logs, and how it changes operational risk. Build small verification scripts that assert the expected outcome and fail loudly if the outcome diverges. Over time, these scripts become a regression suite for your SELinux posture. Finally, treat the concept as documentation-worthy: write down the invariants it guarantees, the constraints it imposes, and the exact evidence that proves it works. This makes future debugging faster and creates a shared mental model for teams.
How this fits into the project
Label lifecycle is part of §3.6 Edge Cases and §7.1 Pitfalls. It is also used in P05-container-selinux-sandbox-lab.md for volume labeling.
Further depth on Drift Scanning at Scale and Reporting: In production environments, this concept is shaped by policy versions, automation layers, and distro-specific defaults. To keep reasoning consistent, capture a minimal evidence bundle every time you analyze behavior: the policy name/version, the exact labels or contexts involved, the command that triggered the action, and the resulting audit event. If the same action yields different decisions on two hosts, treat that as a signal that a hidden variable changed (boolean state, module priority, label drift, or category range). This disciplined approach prevents trial-and-error debugging and makes your conclusions defensible.
Operationally, build a short checklist for Drift Scanning at Scale and Reporting: verify prerequisites, verify labels or mappings, verify policy query results, then run the action and confirm the expected audit outcome. Track metrics that reflect stability, such as the count of denials per hour, the number of unique denial keys, or the fraction of hosts in compliance. When you must change behavior, apply the smallest change that can be verified (label fix before boolean, boolean before policy). Document the rollback path and include a post-change validation step so the system returns to a known-good state.
Definitions & key terms
- xattr -> extended attribute storing `security.selinux`
- seclabel -> mount option enabling labels
- unlabeled_t -> label used when xattrs are missing
- restorecon -> tool that rewrites labels according to policy
Mental model diagram
copy without xattrs -> new inode -> label lost -> default_t
How it works (step-by-step, with invariants and failure modes)
- File created or copied.
- If xattrs preserved, label retained; otherwise default label applied.
- Drift checker reads xattr and compares to policy.
- Relabeling writes correct xattr.
Invariants: labels live in xattrs; file moves preserve labels. Failure modes: xattr unsupported, mount options override labels.
Minimal concrete example
$ cp file /tmp/file2
$ ls -Z /tmp/file2
# label may differ if xattrs not preserved
Common misconceptions
- “Relabel fixes everything.” -> It cannot fix unsupported filesystems.
- “Labels are based on file ownership.” -> Labels are independent of ownership.
Check-your-understanding questions
- Why do labels disappear after some copy operations?
- What does `unlabeled_t` usually indicate?
- How does the `context=` mount option affect labels?
Check-your-understanding answers
- The new inode is created without preserved xattrs.
- The filesystem lacks SELinux label support or labels are missing.
- It forces a fixed label for all files on that mount.
Real-world applications
- Detecting label loss after backups or rsync.
- Fixing mislabeled NFS mounts.
Where you’ll apply it
- This project: §3.6, §7.1, §7.2.
- Also used in: P05-container-selinux-sandbox-lab.md.
References
- Linux xattr documentation
- Red Hat SELinux Guide, labeling section
Key insights
Labels are metadata; when metadata is lost, policy enforcement breaks.
Summary
Understanding xattr lifecycle explains why labels drift and why relabeling is required.
Homework/Exercises to practice the concept
- Copy a file with and without `-a` and compare labels.
- Mount a filesystem with `context=` and observe labels.
- Run `restorecon` and verify xattrs.
Solutions to the homework/exercises
- `cp -a` preserves labels; plain `cp` often does not.
- `context=` overrides labels so all files show the same context.
- `restorecon` rewrites the label to match policy.
3. Project Specification
3.1 What You Will Build
A CLI tool named selctxcheck that scans directories for SELinux label drift and produces a relabel plan.
Included features:
- Path scanning with filters
- Drift detection using `matchpathcon`
- Dry-run reports and JSON output
- Relabel command generation
Excluded features:
- Automatic relabel execution (optional in an extension)
3.2 Functional Requirements
- Scan Paths: Accept one or more root paths to scan.
- Drift Detection: Compare expected vs actual labels.
- Reporting: Output drifted files and summary counts.
- Relabel Plan: Generate `restorecon` commands.
- Filters: Exclude paths via patterns.
3.3 Non-Functional Requirements
- Performance: Scan 100k files in under 1 minute on a VM.
- Reliability: Continue on errors and report them.
- Usability: Provide clear output and exit codes.
3.4 Example Usage / Output
$ selctxcheck scan /srv/www --dry-run
DRIFT: /srv/www/app.conf
current: system_u:object_r:default_t:s0
expected: system_u:object_r:httpd_sys_content_t:s0
Summary:
scanned: 4321
drifted: 12
errors: 3
3.5 Data Formats / Schemas / Protocols
JSON output schema (v1):
{
"scanned": 4321,
"drifted": 12,
"errors": 3,
"items": [
{"path": "/srv/www/app.conf", "current": "...", "expected": "..."}
]
}
3.6 Edge Cases
- Files with no labels (`unlabeled_t`)
- Mount points without SELinux support
- Huge directories with symlink loops
3.7 Real World Outcome
3.7.1 How to Run (Copy/Paste)
./selctxcheck scan /srv/www --dry-run --output report.json
3.7.2 Golden Path Demo (Deterministic)
Use a fixture directory tree with known labels and freeze the scan order by sorting paths.
3.7.3 CLI Transcript (Success and Failure)
$ ./selctxcheck scan /srv/www
Report: ./report.txt
Exit code: 0
$ ./selctxcheck scan /missing
ERROR: path not found
Exit code: 2
3.7.4 Exit Codes
- 0: success, no drift
- 1: success, drift found
- 2: invalid input
- 3: SELinux labels unavailable
4. Solution Architecture
4.1 High-Level Design
Scanner -> Label Comparator -> Report Generator -> Relabel Plan
4.2 Key Components
| Component | Responsibility | Key Decisions |
|---|---|---|
| Scanner | Walk filesystem | Use iterative walk to avoid recursion limits |
| Comparator | Compare expected vs actual labels | Cache expected labels |
| Reporter | Output drift and errors | Deterministic sort |
| Plan Generator | Build restorecon commands | Group by directory |
4.3 Data Structures (No Full Code)
DriftItem = {
"path": "/srv/www/app.conf",
"current": "...",
"expected": "..."
}
4.4 Algorithm Overview
Key Algorithm: Drift Scan
- Walk directory tree and collect paths.
- For each path, compute expected label (cached).
- Compare with actual label and record mismatch.
- Sort results and generate report.
Complexity Analysis:
- Time: O(n) paths
- Space: O(n) results
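The four algorithm steps above can be sketched end to end. The label lookups are injected so the pipeline can run against fixtures (in the real tool they would wrap `matchpathcon` and `getfilecon`); `scan` and its return shape are illustrative, not a fixed interface:

```python
# Sketch of the drift-scan algorithm: deterministic walk, per-path
# comparison, errors collected without aborting, sorted output.
import os

def scan(root, expected_of, actual_of, mode="type"):
    drift, errors = [], []
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()            # deterministic traversal order
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            try:
                exp, act = expected_of(path), actual_of(path)
            except OSError as e:
                errors.append({"path": path, "error": str(e)})
                continue           # one bad path must not abort the scan
            same = (exp == act) if mode == "full" else \
                   (exp.split(":")[2] == act.split(":")[2])
            if not same:
                drift.append({"path": path, "current": act, "expected": exp})
    drift.sort(key=lambda d: d["path"])   # stable, diffable output
    return drift, errors
```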
5. Implementation Guide
5.1 Development Environment Setup
sudo dnf install -y policycoreutils-python-utils
5.2 Project Structure
selctxcheck/
├── selctxcheck/
│ ├── cli.py
│ ├── scan.py
│ ├── compare.py
│ └── report.py
├── fixtures/
└── tests/
5.3 The Core Question You’re Answering
“Are my file labels still aligned with policy expectations?”
5.4 Concepts You Must Understand First
- File context rules and `matchpathcon`.
- Label lifecycle and xattrs.
- Scanning at scale with deterministic output.
5.5 Questions to Guide Your Design
- How will you keep scans fast on large directories?
- How will you present results for compliance?
- Should you compare full contexts or just types?
5.6 Thinking Exercise
Design a relabel plan that minimizes the number of restorecon calls while covering all drifted files.
5.7 The Interview Questions They’ll Ask
- “What is label drift and why does it matter?”
- “Why do we use `matchpathcon`?”
- “How do you handle unlabeled files?”
5.8 Hints in Layers
Hint 1: Start with a single directory scan
Hint 2: Add caching for expected labels
Hint 3: Add a dry-run and JSON report
5.9 Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| SELinux labeling | “SELinux System Administration” | Labeling chapter |
| Filesystems | “The Linux Programming Interface” | File systems |
5.10 Implementation Phases
Phase 1: Foundation (3-4 days)
Goals:
- Implement scanner and label parser.
Tasks:
- Walk directory tree and read labels.
- Compare to expected labels via `matchpathcon`.
Checkpoint: Drift results match fixture dataset.
Phase 2: Core Functionality (4-5 days)
Goals:
- Add reporting and relabel plan generation.
Tasks:
- Sort and format report output.
- Generate `restorecon` commands grouped by directory.
Checkpoint: Report and plan match golden path.
Phase 3: Polish & Edge Cases (2-3 days)
Goals:
- Add JSON output and error handling.
Tasks:
- Write JSON schema and output.
- Add exit codes for drift and errors.
Checkpoint: All tests pass with deterministic output.
5.11 Key Implementation Decisions
| Decision | Options | Recommendation | Rationale |
|---|---|---|---|
| Label comparison | full vs type-only | configurable | supports MCS systems |
| Output format | text only vs text + JSON | text + JSON | automation-friendly |
| Plan generation | per-file vs per-dir | per-dir | fewer commands |
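The per-dir plan recommendation in the table above can be sketched as grouping drifted paths by parent directory and emitting one batched `restorecon -v` command per group. Commands are generated as strings, never executed, matching the dry-run-only scope; `relabel_plan` is an illustrative name:

```python
# Sketch: relabel plan generation. Group drifted files by directory and
# emit batched restorecon commands (generated, not executed).
import os
import shlex

def relabel_plan(drifted_paths, batch=100):
    groups = {}
    for p in sorted(drifted_paths):                    # deterministic plan
        groups.setdefault(os.path.dirname(p), []).append(p)
    cmds = []
    for directory in sorted(groups):
        files = groups[directory]
        for i in range(0, len(files), batch):          # cap argv length
            cmds.append("restorecon -v " +
                        " ".join(shlex.quote(f) for f in files[i:i + batch]))
    return cmds
```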
6. Testing Strategy
6.1 Test Categories
| Category | Purpose | Examples |
|---|---|---|
| Unit Tests | Parse and compare labels | matchpathcon parser |
| Integration Tests | Full scan | fixture directory tree |
| Edge Case Tests | unreadable paths | permission errors |
6.2 Critical Test Cases
- Drift detected when expected label differs.
- Unlabeled file reported separately.
- Scan continues when encountering errors.
6.3 Test Data
fixtures/tree/
fixtures/labels.json
7. Common Pitfalls & Debugging
7.1 Frequent Mistakes
| Pitfall | Symptom | Solution |
|---|---|---|
| Comparing only text output | False positives | Use canonical label parsing |
| No caching | Slow scans | Cache expected labels |
| Ignoring errors | Incomplete reports | Report errors separately |
7.2 Debugging Strategies
- Use small fixtures for deterministic tests.
- Compare `matchpathcon` and `ls -Z` outputs for a single path.
7.3 Performance Traps
- Scanning `/` without filters can be extremely slow.
8. Extensions & Challenges
8.1 Beginner Extensions
- Add CSV export.
- Add path exclusion patterns.
8.2 Intermediate Extensions
- Add parallel scanning.
- Support scanning multiple hosts via SSH.
8.3 Advanced Extensions
- Integrate with a compliance dashboard.
- Add policy rule diff detection.
9. Real-World Connections
9.1 Industry Applications
- Compliance auditing for SELinux labeling.
- Post-migration label verification.
9.2 Related Open Source Projects
- setools for label queries.
- oscap for compliance reporting.
9.3 Interview Relevance
- Knowledge of SELinux labeling and drift detection.
- File system metadata handling.
10. Resources
10.1 Essential Reading
- “SELinux System Administration” (labeling)
- Red Hat SELinux Guide (file contexts)
10.2 Video Resources
- SELinux labeling tutorials
10.3 Tools & Documentation
- `matchpathcon`, `restorecon`, `semanage fcontext`
10.4 Related Projects in This Series
- P01-selinux-context-explorer-visualizer.md
- P05-container-selinux-sandbox-lab.md
- P09-ansible-selinux-hardening-role.md
11. Self-Assessment Checklist
11.1 Understanding
- I can explain file context rules and drift.
- I understand xattrs and label lifecycle.
- I can design a deterministic scan report.
11.2 Implementation
- Scans complete with expected results.
- Reports are deterministic and sorted.
- Exit codes are documented and correct.
11.3 Growth
- I can explain my caching strategy.
- I documented trade-offs between full and type-only comparison.
12. Submission / Completion Criteria
Minimum Viable Completion:
- Scan a directory and report label drift.
Full Completion:
- Generate relabel plans and JSON output.
Excellence (Going Above & Beyond):
- Multi-host scanning and compliance dashboard integration.
13 Additional Content Rules (Hard Requirements)
13.1 Determinism
- Sort paths and freeze timestamps in reports.
13.2 Outcome Completeness
- Provide success and failure CLI demos with exit codes.
13.3 Cross-Linking
- Link to P01 and P09 where label drift is reused.
13.4 No Placeholder Text
- All sections are fully populated.