Project 4: System Inventory & Audit Tool

Build a CLI that inventories files, flags risky permissions, and produces a security-focused audit report.

Quick Reference

Attribute Value
Difficulty Level 2: Intermediate
Time Estimate 1-2 weeks
Main Programming Language Shell + find + awk
Alternative Programming Languages Python, Go
Coolness Level Level 3: Security-Useful
Business Potential Level 3: Compliance / Audit
Prerequisites Basic Linux permissions, find usage, awk basics
Key Topics filesystem metadata, permissions, safe traversal, reporting

1. Learning Objectives

By completing this project, you will:

  1. Interpret Unix permissions and identify high-risk patterns (world-writable, setuid).
  2. Use find predicates to scan large trees safely and efficiently.
  3. Handle filenames safely with null-delimited output.
  4. Summarize results into a deterministic audit report.
  5. Design a CLI that supports common audit flags and produces actionable output.

2. All Theory Needed (Per-Concept Breakdown)

2.1 Unix Permissions, Ownership, and Risk Models

Fundamentals

Unix permissions are a compact representation of who can read, write, or execute a file. Each file has an owner, group, and a set of permission bits for user, group, and others. Some permission combinations are risky: world-writable files (-rw-rw-rw-) can be modified by any user, and setuid binaries run with the permissions of their owner, which can be dangerous if misconfigured. An audit tool must understand these semantics to flag risks correctly. Permissions are also intertwined with file types, mount points, and system policies. Understanding the basics of read/write/execute bits and special bits (setuid, setgid, sticky) is the foundation of security auditing.

Deep Dive into the concept

Permissions are not just strings like -rwxr-xr-x; they are bit fields in the inode. The three standard permission triads (user/group/other) control read (r), write (w), and execute (x) access. For files, execute means the file can be run as a program; for directories, execute means the directory can be traversed. This distinction is critical in audits: a world-writable directory with execute permission allows anyone to create or modify files within it. The sticky bit (t) on directories (like /tmp) allows only the file owner or root to delete files, which mitigates some risk. Your audit tool should be aware of this and not flag sticky world-writable directories as severely as non-sticky ones.

The setuid and setgid bits (s on the owner or group execute position) change how executables run. A setuid root binary runs with root privileges, which makes it a potential privilege escalation target. Many systems rely on setuid for legitimate functionality (e.g., passwd), so your audit must differentiate between expected and unexpected setuid binaries. For a learning project, you can flag all setuid/setgid files and allow users to whitelist known-safe paths. This is a common real-world pattern: detection plus a human review step.
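The detect-then-review pattern can be sketched with a plain allowlist file filtered out of the findings (a sketch; the paths are illustrative, not a vetted safe list):

```shell
# Hypothetical allowlist: known-safe setuid paths are removed from the findings.
allow=$(mktemp)
printf '%s\n' '/usr/bin/passwd' '/usr/bin/sudo' > "$allow"
# Simulated scan results piped through the allowlist filter:
printf '%s\n' '/usr/bin/passwd' '/tmp/strange-suid' |
  grep -vxF -f "$allow"        # -x exact line, -F fixed strings, -v invert
rm -f "$allow"
```

Only /tmp/strange-suid survives the filter; the known-safe path is suppressed, leaving a human to review what remains.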

Ownership is another axis. A world-writable file owned by root is often more concerning than one owned by a normal user. Your tool can classify risk levels based on ownership and permissions. For example, a root-owned setuid binary in /usr/bin is expected, but a setuid binary in /tmp is highly suspicious. You can encode these heuristics in a report severity column.
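One way to encode such location heuristics is a small case statement (a sketch; the directory lists and severity labels are illustrative, not a complete policy):

```shell
# Hypothetical severity rule for a setuid finding based on its location.
setuid_severity() {
  case "$1" in
    /tmp/*|/var/tmp/*|/home/*)             echo "HIGH" ;;    # setuid outside system dirs
    /usr/bin/*|/usr/sbin/*|/bin/*|/sbin/*) echo "MEDIUM" ;;  # expected, still review
    *)                                     echo "MEDIUM" ;;
  esac
}
setuid_severity /tmp/suspect      # prints HIGH
setuid_severity /usr/bin/passwd   # prints MEDIUM
```

A real tool would combine this with ownership checks, but even this sketch shows how the heuristics become an explicit, explainable rule table.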

Permissions are also affected by umask and default ACLs. Many systems use ACLs to grant additional permissions beyond the mode bits. Your tool may not parse ACLs, but you should acknowledge this limitation. A file may appear safe by mode bits but be exposed via ACL. Documenting this limitation is important for honesty and for setting user expectations.

Another subtlety is symbolic links. A symlink’s permissions are not typically used; the target’s permissions matter. When auditing, you should either skip symlinks or record them separately, because following symlinks can lead out of scope or create loops. For a simple audit tool, treat symlinks as informational and do not follow them unless explicitly requested.
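Recording symlinks separately is a one-predicate pass (sketched on a throwaway tree):

```shell
# List symlinks without following them; report them as informational entries.
root=$(mktemp -d)
touch "$root/real"
ln -s /etc/passwd "$root/link"   # points outside the scan root
find "$root" -type l -print      # prints only the symlink, never its target
rm -rf "$root"
```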

Finally, permissions are not the only risk signal. File metadata like modification time, ownership changes, or unusual file names can indicate suspicious activity. For example, recently modified files in /etc might warrant review. Your tool can incorporate optional flags for -mtime or -ctime to highlight recent changes. This broadens the audit beyond just permission bits and makes it more realistic.

It is also useful to understand numeric modes. Permissions can be expressed in octal (e.g., 644 or 755), which makes it easier to reason about bitwise checks in find -perm. When debugging, use stat rather than ls to get exact numeric modes and ownership data. This helps validate that your predicates match what you think they match and avoids surprises from symbolic formatting.
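As a quick check of the above, stat reports the exact octal mode (a sketch assuming GNU coreutils; BSD/macOS stat uses -f format strings instead of -c):

```shell
# Compare symbolic ls output with the exact numeric mode from stat.
tmp=$(mktemp)
chmod 4755 "$tmp"          # setuid + rwxr-xr-x on a file we own
ls -l "$tmp"               # symbolic form: -rwsr-xr-x
stat -c '%a %U %n' "$tmp"  # numeric mode, owner, path
rm -f "$tmp"
```

The %a field prints 4755 here, including the setuid bit that ls folds into the symbolic s character.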

How this fits into the project

The entire audit is based on interpreting permissions and classifying risks. Without this concept, the tool cannot produce meaningful results.

Definitions & key terms

  • Mode bits: Permission bits stored in the inode.
  • setuid/setgid: Special bits that change effective user/group when running.
  • Sticky bit: Directory bit that restricts delete operations.
  • World-writable: Files or directories writable by “others”.
  • ACL: Access control list providing extended permissions.

Mental model diagram (ASCII)

Permissions: rwx r-x r--
             ^   ^   ^
           user group other
Special bits: setuid, setgid, sticky

How it works (step-by-step, with invariants and failure modes)

  1. Read permission bits from file metadata.
  2. Classify file based on risk rules (world-writable, setuid, etc.).
  3. Adjust severity based on location and ownership.
  4. Emit result in report.

Invariant: Every flagged item has a clear rule that triggered it. Failure modes: missing ACL awareness, symlink misinterpretation, false positives in /tmp.

Minimal concrete example

# Find world-writable files
find /etc -type f -perm -0002 -print

Common misconceptions

  • “Execute bit is only for files” -> For directories, execute means traversal.
  • “World-writable always equals critical” -> Sticky directories can be acceptable.
  • “Setuid is always malicious” -> Some are legitimate system tools.

Check-your-understanding questions

  1. What is the difference between execute permission on a file vs a directory?
  2. Why is a world-writable directory with sticky bit less risky?
  3. Why might you want to flag setuid binaries separately from world-writable files?
  4. What is a limitation of relying only on mode bits?

Check-your-understanding answers

  1. File execute allows running; directory execute allows traversal.
  2. Sticky bit prevents users from deleting others’ files.
  3. Setuid implies privilege escalation risk, which is a different threat category.
  4. ACLs can grant additional permissions not reflected in mode bits.

Real-world applications

  • Compliance audits (CIS benchmarks).
  • Incident response investigations.
  • Hardening server configurations.

Where you’ll apply it

  • See §3.2 for audit rules and severity classification.
  • See §3.7 for the deterministic report output.
  • Also used in: P06 Personal DevOps Toolkit as a subcommand.

References

  • “The Linux Command Line” by William Shotts, permissions chapters
  • chmod(1) and stat(1) manual pages

Key insights

Permissions are a policy language. Your audit tool is translating that language into risk signals.

Summary

Understanding permissions, ownership, and special bits is the foundation of any file-based security audit. Correct risk classification depends on it.

Homework/Exercises to practice the concept

  1. Create a world-writable file and observe its permission bits with ls -l.
  2. Find all setuid binaries on your system and count them.
  3. Compare a sticky directory (/tmp) with a non-sticky world-writable directory.

Solutions to the homework/exercises

  1. Use chmod 666 file and check ls -l output.
  2. Run find / -perm -4000 -type f 2>/dev/null | wc -l.
  3. Use ls -ld /tmp and create a test directory with chmod 777.

2.2 find Predicates, Pruning, and Safe Traversal

Fundamentals

find evaluates predicates against files and directories. For audit tools, you need to combine predicates like -perm, -type, -name, and -mtime to locate risky files. Pruning is critical to avoid scanning virtual filesystems like /proc and /sys, which can produce errors or infinite loops. Safe traversal also means handling filenames robustly with null-delimited output. In auditing, the find step is where you define the scope of your report, so its correctness directly determines the trustworthiness of the audit. Ordering of predicates and careful use of -prune or -xdev can make the difference between a safe scan and a noisy, misleading one.

Deep Dive into the concept

find predicates are evaluated left to right, and the order determines behavior. A typical audit command might look like: find / -path /proc -prune -o -path /sys -prune -o -type f -perm -0002 -print. The -prune -o pattern is a short-circuit: if a path matches /proc, -prune returns true and -o prevents further evaluation. Without this, find still descends into /proc and produces errors. This pattern is the core of safe traversal.
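The same -prune -o short-circuit, demonstrated on a disposable tree so the pruned branch is visibly skipped:

```shell
# Prune one subtree the way /proc and /sys are pruned in a real scan.
root=$(mktemp -d)
mkdir -p "$root/keep" "$root/skip"
touch "$root/keep/a" "$root/skip/b"
chmod 666 "$root/keep/a" "$root/skip/b"
find "$root" -path "$root/skip" -prune -o -type f -perm -0002 -print
# prints only $root/keep/a; skip/b is never examined
rm -rf "$root"
```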

Predicates like -perm -0002 match any file with the world-writable bit set. The leading dash means “all of these bits must be set” (other bits may also be set), while an exact mode like -perm 0002 means “exactly these bits”; GNU find also offers -perm /MODE, which matches if any of the listed bits are set. For a single bit such as 0002, the dash form behaves like an “any file with this bit” match, which is why it is the right choice here. For example, -perm -4000 matches any setuid file regardless of its other permission bits, while -perm 4000 matches only files whose mode is exactly 4000. Audit tools generally want the dash form to find any file with the risky bit set.

Traversal scope matters for performance and safety. Scanning / can be slow and may require root privileges. For a learning project, your tool should accept a --root parameter and default to the current directory, while documenting that system-wide scans may need elevated privileges. The tool should also detect permission errors and continue (while logging them), rather than aborting.

Safe handling of filenames is crucial. Use -print0 and process with xargs -0 or a while read -d '' loop. This ensures you can scan files with spaces, tabs, and newlines in names. Audits often run on messy filesystems, so this is important for reliability.
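A minimal null-delimited loop looks like this (read -d '' is a bashism, so this assumes bash rather than POSIX sh):

```shell
# Count files safely even when names contain spaces; records are null-delimited.
root=$(mktemp -d)
touch "$root/plain" "$root/name with spaces"
count=$(find "$root" -type f -print0 |
  { c=0; while IFS= read -r -d '' f; do c=$((c + 1)); done; echo "$c"; })
echo "$count"    # prints 2
rm -rf "$root"
```

The braces keep the counter in the same subshell as the loop, so the total survives the pipeline.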

Another important concept is filtering file types. Directory permissions are different from file permissions, and your audit may need to check both. For example, world-writable directories are risky only if the sticky bit is absent. You may want to handle directories separately. Similarly, you might exclude sockets, device files, or symlinks from the audit. Use -type f for files and -type d for directories, and consider separate passes if needed.

Finally, ensure deterministic output. find does not guarantee ordering. Sorting the output before processing ensures consistent reports. This also aids testing: you can compare reports line-by-line against a golden file.

In larger systems, you may also want to restrict scans to a single filesystem to avoid crossing into mounted network drives or removable media. The -xdev predicate limits traversal to one filesystem, which can be a valuable safety and performance feature. Another performance pattern is to group predicates so that cheap tests happen first, minimizing expensive metadata lookups. For example, filtering by -type f and -name before checking -perm can reduce work in deep trees. For audits that must run regularly, these optimizations can turn a slow nightly scan into a fast, practical check.

When you need more metadata, consider -printf (GNU find) or a stat call to capture mode and ownership in a single pass. This reduces repeated filesystem lookups and keeps the scanner efficient.
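A single -printf pass captures mode, owner, and path together (GNU find; on BSD find, pair -print0 with a stat call instead):

```shell
# One metadata lookup per file: octal mode, owner, and path on one line.
root=$(mktemp -d)
touch "$root/tool"
chmod 4755 "$root/tool"
find "$root" -type f -printf '%m %u %p\n'   # e.g. "4755 alice /tmp/.../tool"
rm -rf "$root"
```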

You should also design the tool to handle permission errors gracefully. Instead of failing, capture errors into a list and report them at the end. This ensures the audit completes and gives the user actionable information about what was not scanned.
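Graceful error capture can be as simple as redirecting find’s stderr to a log while the scan keeps going (note: when run as root there are no permission errors, so the log may be empty):

```shell
# Keep scanning past unreadable directories; collect errors for the report.
root=$(mktemp -d)
errlog=$(mktemp)
mkdir -p "$root/open" "$root/locked"
touch "$root/open/a"
chmod 000 "$root/locked"
find "$root" -type f -print 2> "$errlog" || true   # errors logged, scan continues
cat "$errlog"                                      # "Permission denied" lines, if any
chmod 755 "$root/locked"
rm -rf "$root" "$errlog"
```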

How this fits into the project

This concept defines how the audit tool discovers targets. The correctness of predicates and pruning is directly tied to the accuracy of your report.

Definitions & key terms

  • Predicate: A test or condition in find.
  • Prune: Skip descending into a directory tree.
  • -perm -MODE: Match files with all the bits in MODE set (other bits may also be set).
  • -print0: Output filenames with null separators.
  • Scope: The directory tree that the audit covers.

Mental model diagram (ASCII)

root/
  proc/   -> pruned
  sys/    -> pruned
  etc/    -> scanned
  home/   -> scanned

How it works (step-by-step, with invariants and failure modes)

  1. Start at root path.
  2. Apply prune rules to skip virtual or irrelevant dirs.
  3. Apply type and permission predicates.
  4. Emit null-delimited filenames.

Invariant: Only files in scope are considered. Failure modes: missing prune rules, permission errors, unsafe filename parsing.

Minimal concrete example

find /etc -type f -perm -0002 -print0

Common misconceptions

  • “-perm 0002 matches world-writable” -> Use -perm -0002 instead.
  • “find output is ordered” -> It is not.
  • “Errors should abort scan” -> Audits should continue with warnings.

Check-your-understanding questions

  1. Why is -perm -0002 different from -perm 0002?
  2. What does -prune do, and why is it needed for /proc?
  3. Why use -print0?
  4. How can you keep output deterministic?

Check-your-understanding answers

  1. -perm -0002 matches any file with that bit set, regardless of other bits.
  2. It stops traversal into virtual filesystems that can cause errors.
  3. It prevents word-splitting on weird filenames.
  4. Sort the output before processing.

Real-world applications

  • Compliance audits of /etc and /usr/bin.
  • Inventory of risky permissions in shared servers.

Where you’ll apply it

  • See §3.2 for the permission and directory checks that define scan targets.
  • See §5.10 Phase 1 for implementing the scan.

References

  • find(1) manual page
  • “The Linux Command Line” by William Shotts

Key insights

Safe traversal is as important as the permission logic; without it, audits are incomplete or dangerous.

Summary

find predicates and pruning allow you to scan large systems safely. Correct use of -perm, -prune, and -print0 is the foundation of an audit tool.

Homework/Exercises to practice the concept

  1. Build a find command that lists world-writable directories without sticky bit.
  2. Exclude .git and node_modules from a scan.
  3. Sort the output and compare runs.

Solutions to the homework/exercises

  1. Use -type d -perm -0002 ! -perm -1000.
  2. Use -path './.git' -prune -o -path './node_modules' -prune -o ....
  3. Pipe through sort and verify stable ordering.

2.3 Reporting, Severity, and Deterministic Output

Fundamentals

An audit tool is only useful if its output is readable and actionable. Reporting involves summarizing findings, classifying severity, and providing enough context for remediation. Deterministic output means that the same scan produces the same report order and formatting, which is critical for automated checks. A useful report includes counts by category (world-writable files, setuid binaries, risky directories), a detailed list of findings, and a summary section. Severity classification can be simple (high/medium/low) based on risk rules. Clear exit codes and consistent formatting make it possible to automate audits in CI without manual interpretation. Reports should also be easy to skim.

Deep Dive into the concept

Reporting is the bridge between technical data and operational action. A long list of files is not enough; you need structure. A good report starts with a summary: total files scanned, number of findings per category, and total errors (like permission denied). This tells the reader the scope and the health of the system at a glance. Next, it presents findings grouped by category, with each entry including file path, permissions, owner, and reason. This is the minimum necessary to act.

Severity classification is inherently heuristic. You can define rules such as: setuid binaries in non-system directories are high severity, world-writable files in /etc are high, world-writable directories with sticky bit are low, and so on. These rules should be explicit in your documentation so the user can interpret the results. For a learning project, a simple scoring system (high/medium/low) is sufficient. In real tools, severity can also consider file age, ownership, or known allowlists.
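Such rules can be prototyped as a small awk table over “mode path” lines (a sketch; the regexes implement only the illustrative rules above, not a full policy):

```shell
# Toy classifier: octal mode + path in, severity + path out.
printf '%s\n' '4755 /tmp/suid' '0666 /etc/motd' '1777 /tmp' |
awk '{
  mode = $1; path = $2; sev = "LOW"
  if (mode ~ /^4/ && path ~ /^\/tmp\//)           sev = "HIGH"  # setuid outside system dirs
  else if (mode ~ /[2367]$/ && path ~ /^\/etc\//) sev = "HIGH"  # world-writable under /etc
  print sev, path
}'
```

The sticky /tmp directory falls through to LOW, the stray setuid binary and the world-writable file in /etc both come out HIGH, which matches the heuristics described above.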

Deterministic output is essential for automation and testing. Filesystem traversal order is not stable, so you must sort results (by path or by severity then path). If you include timestamps, make them optional or fixed for tests. A deterministic report enables diff-based checks, which are common in compliance automation. In this project, the golden output uses fixed timestamps and a sorted list of findings.
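Deterministic ordering is then just a sort on severity and path before emission:

```shell
# Stable output: sort findings by severity label, then path.
printf '%s\n' 'MEDIUM /etc/c' 'HIGH /etc/b' 'HIGH /etc/a' |
sort -k1,1 -k2,2
# HIGH /etc/a
# HIGH /etc/b
# MEDIUM /etc/c
```

Note that HIGH sorting before MEDIUM here is an accident of the alphabet; if your labels do not sort in risk order, prepend a numeric rank column, sort on it, and strip it before output.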

Another important aspect is error handling. Scans often encounter permission denied errors. These should not abort the scan; instead, capture them in an “errors” section with the path and the error message. This transparency is crucial for audits: if some directories couldn’t be scanned, the report should say so. You may also want to provide a --strict flag that causes the tool to exit with a non-zero status if any errors occurred, which is useful in CI pipelines.

Finally, report formats matter. Human-readable text is great for ad-hoc audits, but machine-readable output (CSV or JSON) can be consumed by other tools. For this project, provide a plain text report by default and an optional CSV summary. This gives learners exposure to both use cases without expanding the scope too far.

Reporting also supports historical comparison. If you store a baseline report, you can diff the latest run against it to detect new risky files. This turns a one-off audit into a continuous monitoring practice. Even if you do not implement diffing in this project, structuring your output to be stable and machine-readable makes it easy to add later. When the same file appears repeatedly across reports, include a stable identifier (the path) so that external tools can correlate findings across time.

You can make severity more actionable by tying it to suggested remediations. For example, a HIGH severity finding might include a note like "remove world-writable bit" or "verify setuid requirement." This turns the report into a checklist rather than a passive list. Even a simple remediation hint field improves usability, and it encourages the user to think about why the finding matters, not just that it exists.

How this fits into the project

Reporting is how the audit’s value is delivered. It turns raw findings into a structured, actionable document.

Definitions & key terms

  • Severity: Classification of risk (high/medium/low).
  • Deterministic output: Stable ordering and formatting across runs.
  • Finding: A file or directory that matches a risk rule.
  • Audit report: Structured summary of findings and errors.

Mental model diagram (ASCII)

scan results -> [classify severity] -> [sort] -> report
      |                                   |
      v                                   v
  errors list                         summary stats

How it works (step-by-step, with invariants and failure modes)

  1. Collect findings into categories.
  2. Assign severity based on rules.
  3. Sort findings for deterministic output.
  4. Emit summary and detail sections.

Invariant: Each finding includes path, permission, owner, and reason. Failure modes: unsorted output, missing error reporting, ambiguous severity.

Minimal concrete example

# Example summary output line
HIGH  /etc/shadow  mode=644  reason=world-readable sensitive file

Common misconceptions

  • “A list of files is enough” -> Audits need classification and context.
  • “Ordering doesn’t matter” -> Determinism is crucial for automation.
  • “Errors can be ignored” -> Missing scan coverage invalidates results.

Check-your-understanding questions

  1. Why is deterministic ordering important for audit reports?
  2. What information should each finding include?
  3. How should permission errors be reported?
  4. What is a simple severity rule for setuid binaries?

Check-your-understanding answers

  1. It allows stable diff comparisons and automated checks.
  2. Path, permissions, owner, and reason.
  3. As a separate errors section with path and error message.
  4. High severity if setuid outside system directories.

Real-world applications

  • Compliance reporting for regulatory audits.
  • Continuous security monitoring.

Where you’ll apply it

  • See §3.7 for report format and golden output.
  • See §5.10 Phase 3 for report generation.
  • Also used in: P01 Log Analyzer for deterministic outputs.

References

  • CIS Benchmarks (reporting style)
  • “The Practice of System and Network Administration”

Key insights

An audit tool is only as useful as its report. Structure, severity, and determinism matter as much as detection.

Summary

Reporting turns scan data into actionable insight. Sorting, severity classification, and error transparency are essential for trust.

Homework/Exercises to practice the concept

  1. Design a report format with summary and detail sections.
  2. Implement deterministic sorting by severity then path.
  3. Add an errors section that lists permission denied paths.

Solutions to the homework/exercises

  1. Use headings like “Summary”, “Findings”, “Errors”.
  2. Sort with sort -k1,1 -k2,2 after labeling severity.
  3. Capture errors from find stderr and include them in the report.

3. Project Specification

3.1 What You Will Build

A CLI tool called audit that:

  • Scans a given root directory for risky permissions.
  • Flags world-writable files, world-writable directories without sticky bit, and setuid/setgid files.
  • Produces a deterministic report with severity levels.
  • Logs scan errors (permission denied) separately.

Included:

  • Severity classification.
  • Optional CSV output.
  • Configurable root path.

Excluded:

  • Full ACL parsing.
  • Network scanning.

3.2 Functional Requirements

  1. Scanning scope: Accept --root and default to current directory.
  2. Permission checks: Detect world-writable, setuid, setgid.
  3. Directory checks: Detect world-writable directories without sticky bit.
  4. Reporting: Output summary and detailed findings.
  5. Error handling: Log permission errors separately.
  6. Exit codes: Indicate success and failure.

3.3 Non-Functional Requirements

  • Performance: Handle at least 100k files within a reasonable time.
  • Reliability: Continue scanning despite permission errors.
  • Usability: Clear CLI flags and readable output.

3.4 Example Usage / Output

$ ./audit.sh --root /etc --world-writable --setuid
Scanning /etc ...
World-writable files: 3
Setuid files: 2
Report written to: audit-report.txt

3.5 Data Formats / Schemas / Protocols

Report format (text):

Summary:
  scanned: 4212
  world_writable_files: 3
  world_writable_dirs: 1
  setuid_files: 2
Findings:
  HIGH /etc/sudoers.d/legacy  mode=666  owner=root reason=world-writable

3.6 Edge Cases

  • Permission denied on system directories.
  • Symlinks that point outside root.
  • Filenames with newlines.

3.7 Real World Outcome

3.7.1 How to Run (Copy/Paste)

./audit.sh --root ./sample --world-writable --setuid --output audit-report.txt

3.7.2 Golden Path Demo (Deterministic)

$ ./audit.sh --root ./sample --world-writable --setuid
Scanning ./sample ...
World-writable files: 1
Setuid files: 0
Report written to: audit-report.txt

3.7.3 Failure Demo (Deterministic)

$ ./audit.sh --root ./missing --world-writable
ERROR: root directory not found: ./missing
exit code: 2

3.7.4 If CLI: exact terminal transcript

$ ./audit.sh --root ./sample --world-writable
Scanning ./sample ...
World-writable files: 1
$ echo $?
0

Exit codes:

  • 0: Success.
  • 1: Findings detected (non-empty report).
  • 2: Invalid arguments or missing root.

4. Solution Architecture

4.1 High-Level Design

root -> [find scan] -> [classify] -> [sort] -> report
           |
           v
        errors.log

4.2 Key Components

Component Responsibility Key Decisions
Scanner Traverse filesystem prune /proc and /sys
Classifier Assign severity rule-based mapping
Reporter Output summary and details deterministic ordering
Error Logger Capture permission errors separate section

4.3 Data Structures (No Full Code)

findings[] = {severity, path, mode, owner, reason}
errors[] = {path, error}
counts[category] = n

4.4 Algorithm Overview

Key Algorithm: Risk Classification

  1. Run find with predicates for each risk category.
  2. For each match, gather metadata (owner, mode, path).
  3. Assign severity based on rules.
  4. Append to report list.
  5. Sort and output.
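The five steps above, condensed into one pipeline on a fixture tree (GNU find’s -printf assumed; classification here is a single hard-coded rule, not the full rule set):

```shell
# scan -> classify -> sort over a throwaway tree with one risky file.
root=$(mktemp -d)
touch "$root/ww" "$root/ok"
chmod 666 "$root/ww"
chmod 644 "$root/ok"
find "$root" -type f -perm -0002 -printf '%m %u %p\n' |
awk '{ print "HIGH", $3, "mode=" $1, "owner=" $2, "reason=world-writable" }' |
sort -k2,2
rm -rf "$root"
```

Only the world-writable file survives the find predicate, so a single report line comes out, already in the finding format used in §3.5.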

Complexity Analysis:

  • Time: O(n) traversal.
  • Space: O(m) for findings list.

5. Implementation Guide

5.1 Development Environment Setup

# No additional tools required

5.2 Project Structure

audit/
├── audit.sh
├── lib/
│   ├── scan.sh
│   ├── classify.sh
│   └── report.sh
└── tests/
    └── golden-report.txt

5.3 The Core Question You’re Answering

“How can I systematically inventory and flag risky files using only filesystem metadata?”

5.4 Concepts You Must Understand First

  1. Permission bits and special bits
  2. find predicates and prune rules
  3. Deterministic reporting

5.5 Questions to Guide Your Design

  1. Which directories should always be excluded from scans?
  2. How will you classify severity?
  3. What should exit code 1 mean (findings present)?

5.6 Thinking Exercise

If you scan / as a non-root user, what categories of files will you miss? How will your report communicate that limitation?

5.7 The Interview Questions They’ll Ask

  1. What is the difference between setuid and setgid?
  2. Why is a sticky bit important on world-writable directories?
  3. How do you avoid broken results from filenames with spaces?

5.8 Hints in Layers

Hint 1: Find world-writable files

find /etc -type f -perm -0002 -print

Hint 2: Find setuid files

find / -type f -perm -4000 -print 2>/dev/null

Hint 3: Handle spaces safely

find /etc -type f -perm -0002 -print0 | xargs -0 ls -l

Hint 4: Sort output Pipe results through sort to keep order stable.

5.9 Books That Will Help

Topic Book Chapter
Permissions “The Linux Command Line” Permissions chapter
Security “Practical Unix & Internet Security” file permissions
find “The Linux Command Line” Ch. 17

5.10 Implementation Phases

Phase 1: Foundation (2-3 days)

Goals: Basic scan and detection.

Tasks:

  1. Implement world-writable and setuid scans.
  2. Output raw list of findings.

Checkpoint: Tool lists correct files in a small sample tree.

Phase 2: Reporting (3-4 days)

Goals: Add severity and summary report.

Tasks:

  1. Build classification rules.
  2. Add summary counts and sorted output.

Checkpoint: Report matches golden output.

Phase 3: Robustness (2-3 days)

Goals: Error handling and optional CSV output.

Tasks:

  1. Capture permission errors in a log.
  2. Add CSV output option.

Checkpoint: Errors are reported separately without stopping the scan.

5.11 Key Implementation Decisions

Decision Options Recommendation Rationale
Output format text vs CSV text + optional CSV human + machine use
Severity rules simple vs complex simple easier to explain
Scan scope default / vs current current safe by default

6. Testing Strategy

6.1 Test Categories

Category Purpose Examples
Unit Tests Permission classification mock file metadata
Integration Tests End-to-end scan sample tree fixtures
Edge Case Tests weird filenames spaces, newlines

6.2 Critical Test Cases

  1. World-writable file in /etc flagged as high.
  2. Sticky world-writable directory flagged as low.
  3. Permission denied directories logged to errors.

6.3 Test Data

/tmp/world.txt (mode 666)
/sample/suid.bin (mode 4755)
/sample/public (mode 777, sticky)

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

Pitfall Symptom Solution
Wrong perm flag missing findings use -perm -0002 and -4000
Unpruned /proc errors or hangs add prune rules
Unsorted output flaky tests sort findings

7.2 Debugging Strategies

  • Test on a small fixture tree before scanning real systems.
  • Print the raw find command for inspection.
  • Add a --debug flag to show matched paths.

7.3 Performance Traps

System-wide scans can be slow; recommend scoping to specific roots.


8. Extensions & Challenges

8.1 Beginner Extensions

  • Add --mtime to find recently modified files.
  • Add --owner to filter by owner.

8.2 Intermediate Extensions

  • Add allowlist file to suppress known-safe findings.
  • Add JSON report output.

8.3 Advanced Extensions

  • Parse ACLs for deeper permission analysis.
  • Integrate with CI for continuous audits.

9. Real-World Connections

9.1 Industry Applications

  • Security compliance checks.
  • System hardening audits.

9.2 Tools to Study

  • lynis: Security auditing tool for Unix systems.
  • osquery: System inventory queries.

9.3 Interview Relevance

  • Filesystem permissions and security basics.
  • Safe traversal and reporting.

10. Resources

10.1 Essential Reading

  • “The Linux Command Line” by William Shotts
  • “Practical Unix & Internet Security”

10.2 Video Resources

  • “Unix permissions explained” (tutorial)

10.3 Tools & Documentation

  • find(1) and stat(1) manual pages

11. Self-Assessment Checklist

11.1 Understanding

  • I can explain setuid and sticky bits.
  • I can interpret permission strings correctly.
  • I can explain my severity rules.

11.2 Implementation

  • Scan completes with errors reported separately.
  • Report is deterministic.
  • Exit codes follow specification.

11.3 Growth

  • I can explain how this tool supports compliance work.
  • I can describe limitations (ACLs, symlinks).

12. Submission / Completion Criteria

Minimum Viable Completion:

  • Detect world-writable and setuid files.
  • Produce a deterministic report.
  • Exit codes implemented.

Full Completion:

  • Severity classification and error logging.
  • Optional CSV output.
  • Golden tests pass.

Excellence (Going Above & Beyond):

  • ACL parsing.
  • Integration with CI.
  • Allowlist/denylist support.