Project 2: The “Skeleton” Node (C++ No-Boilerplate)
A talker/listener pair built without the rclcpp::Node convenience methods. You will manually initialize the context, create publishers and subscriptions, and manage the executor loop.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Level 2: Intermediate |
| Time Estimate | 1-2 weeks |
| Main Programming Language | C++ |
| Alternative Programming Languages | Rust, C |
| Coolness Level | Level 3: Genuinely Clever |
| Business Potential | 1. The “Resume Gold” |
| Prerequisites | C++17 basics, CMake, ROS 2 install, Understanding of callbacks |
| Key Topics | rcl vs rclcpp, Executors and Callback Groups, C++ Object Lifetimes and Shared Ownership |
1. Learning Objectives
By completing this project, you will:
- Explain how rcl vs rclcpp affects ROS 2 behavior in this project.
- Implement the core pipeline for Project 2 and validate it with a deterministic demo.
- Measure and document performance or correctness under at least one stress condition.
- Produce artifacts (configs, logs, scripts) that make the system reproducible.
2. All Theory Needed (Per-Concept Breakdown)
rcl vs rclcpp
Fundamentals
rcl vs rclcpp describes the layered ROS 2 client library stack: rcl is the C library that implements nodes, publishers, and subscriptions on top of rmw, while rclcpp is the C++ convenience layer that wraps those handles with RAII and shared ownership. At a minimum you should be able to name the primary entities involved (context, node, publisher, subscription), identify where configuration lives, and explain which behavior comes from rcl and which from rclcpp. When you debug a system, you will often inspect the rmw or type-support layer first, because mismatches surface there early. The practical goal is to build a mental map that connects the API calls you make to the wire-level or runtime effects you observe. If you can explain this layering without naming a single ROS 2 command, you know it as a systems principle rather than a tooling trick, which is exactly what you need for production robotics.
Deep Dive into the concept
A deeper look at rcl vs rclcpp starts by tracing data from the API surface to the middleware. Every call you make through rclcpp eventually reaches rcl, which expresses the intent in the rmw layer; rmw then maps that intent into DDS-RTPS structures. The mapping is not always one-to-one: a single policy or field can affect multiple runtime behaviors, including buffering, matching, and timing. This is why changing the rmw implementation can cause a subscriber to stop receiving data, or why two vendors can discover each other but never exchange payloads. The useful diagnostic strategy is to observe the graph (who matched), then the transport (what packets appear), and finally the runtime state (queues, deadlines, timers).
Failure modes cluster around mismatched assumptions. If type support is configured incorrectly, you may see data on one machine but not another, or discover that messages arrive but are rejected silently. If context is too restrictive, you will observe a graph that looks healthy but never transitions into active data flow. In embedded settings, this can appear as missed deadlines or watchdog resets rather than explicit errors. A robust design therefore includes explicit validation: log the effective policy, emit version identifiers, and test a known-good baseline before you change parameters. This project forces that discipline because you will create repeatable experiments and capture deterministic outputs, so you can explain not only what happened but why it happened.
How this fits into the project
This concept directly shapes how you implement and validate Project 2. You will configure it, observe it, and stress it under controlled conditions.
Definitions & key terms
- rcl API: the C client library that implements core ROS 2 concepts (context, node, publisher, subscription) on top of rmw, using explicit handles and error codes.
- rclcpp Node: the C++ convenience class that wraps rcl handles in RAII objects with shared ownership.
- rmw: the middleware abstraction layer that maps rcl entities onto a specific DDS implementation.
- type support: generated code that tells the middleware how to serialize and deserialize each message type.
- context: the object created by rcl_init that holds global initialization state (arguments, allocator, shutdown status).
Mental model diagram (ASCII)
[User Code] -> [rcl vs rclcpp] -> [rmw/DDS] -> [Wire/Runtime Effects]
| | | |
Config/API Policies Entities Observability
How it works (step-by-step, with invariants and failure modes)
- A node configures the concept through API calls or config files.
- The rmw layer translates the settings into DDS/RTPS structures.
- Peers evaluate compatibility, matching, or timing using rmw and type support.
- The runtime queues or state machines enforce the policy and emit data.
- Observability tools (logs, CLI, packet capture) confirm context behavior.
Minimal concrete example
rcl_init(); rcl_node_init(); rcl_publisher_init();
Common misconceptions
- Assuming defaults are identical across vendors.
- Believing that discovery implies data flow without validating compatibility.
Check-your-understanding questions
- Explain how rcl vs rclcpp changes runtime behavior in ROS 2.
- Predict what happens if rcl API conflicts with rclcpp Node.
- Why might two nodes discover each other but still exchange no data?
Check-your-understanding answers
- It alters matching, buffering, or timing constraints expressed via DDS/RTPS.
- The endpoints fail to match or drop messages due to incompatible policy/encoding.
- QoS or policy mismatch prevents writer-reader matching or delivery.
Real-world applications
- writing thin wrappers
- debugging when rclcpp hides details
Where you’ll apply it
- You will apply it in Section 5.4 (Concepts You Must Understand First), Section 5.10 (Implementation Phases), and Section 6.2 (Critical Test Cases).
- Also used in: P03-the-discovery-server-scaling-beyond-multicast.md and other projects in this series.
References
- ROS 2 internal interfaces docs
- ROS 2 design: rmw
Key insights
- rcl vs rclcpp is the lever that connects configuration to observable system behavior.
Summary
This concept is the bridge between theory and runtime evidence. Mastery means you can predict outcomes, not just observe them.
Homework/Exercises to practice the concept
- Capture or log a minimal trace where this concept is visible.
- Change one policy/setting and predict the system impact before running it.
- Explain the failure mode you expect if the configuration is wrong.
Solutions to the homework/exercises
- The trace should show the concept-specific fields or events you expect.
- Your prediction should name which endpoints match and how latency/loss changes.
- A wrong configuration should lead to mismatch, dropped data, or timeouts.
Executors and Callback Groups
Fundamentals
Executors and callback groups form the scheduling model that determines when ROS 2 callbacks run and how much concurrency is allowed. Unlike QoS, this machinery is entirely process-local: the executor blocks on a wait set, collects ready entities (subscriptions, timers, services), and dispatches their callbacks on one thread (SingleThreadedExecutor) or a pool of threads (MultiThreadedExecutor). Callback groups constrain that dispatch: a mutually exclusive group allows at most one of its callbacks to run at a time, while a reentrant group permits parallel execution. When you debug stalls, missed timers, or deadlocks, inspect which executor is spinning and which callback group each entity belongs to, because those two choices determine every concurrency property of the node. If you can explain this model without naming a single ROS 2 command, you know it as a scheduling principle rather than a tooling trick, which is exactly what you need for production robotics.
Deep Dive into the concept
A deeper look at executors and callback groups starts at the wait set. On each iteration of spin, the executor asks rcl which entities are ready, filters them through their callback groups, and invokes the eligible callbacks. "Ready" and "running" are not the same thing: a mutually exclusive group can hold back a ready callback while a sibling is still executing, and a single-threaded executor serializes everything regardless of grouping. This is why one blocking callback can silently stop timers from firing, or why a synchronous service call made inside a callback can deadlock the whole node. The useful diagnostic strategy is to observe callback timing (when it started, how long it ran), then the executor state (which threads are busy), and finally the group assignment of each entity.
Failure modes cluster around blocking and starvation. If a callback performs long computation or a synchronous wait, a SingleThreadedExecutor starves every other callback, including the timer that feeds your watchdog. If a service response is awaited synchronously from inside a callback in the same mutually exclusive group, the response callback can never run and the node deadlocks. In embedded settings, this can appear as missed deadlines or watchdog resets rather than explicit errors. A robust design therefore includes explicit validation: log callback durations, name your callback groups deliberately, and test a known-good single-threaded baseline before adding concurrency. This project forces that discipline because you will create repeatable experiments and capture deterministic outputs, so you can explain not only what happened but why it happened.
How this fits into the project
This concept directly shapes how you implement and validate Project 2. You will configure it, observe it, and stress it under controlled conditions.
Definitions & key terms
- SingleThreadedExecutor: runs all ready callbacks sequentially on one thread; one blocking callback stalls everything.
- MultiThreadedExecutor: dispatches ready callbacks across a pool of threads, subject to callback-group rules.
- callback group: a label attached to entities that tells the executor which of their callbacks may run concurrently.
- mutual exclusion: the default group type; at most one callback from the group executes at any time.
- spin: the loop that blocks on the wait set, collects ready entities, and dispatches their callbacks until shutdown.
Mental model diagram (ASCII)
[Wait Set (rcl)] -> [Executor] -> [Callback Groups] -> [User Callbacks]
        |               |                |                   |
  Ready entities    Thread(s)     Concurrency rules     Side effects
How it works (step-by-step, with invariants and failure modes)
- The executor is created and nodes are added to it explicitly.
- On each spin iteration, the executor blocks on an rcl wait set until an entity (subscription, timer, service) is ready.
- Ready callbacks are filtered through their callback groups: a mutually exclusive group releases at most one at a time.
- Eligible callbacks are dispatched on the executor's thread or thread pool.
- Callback-duration logs and timer-jitter measurements confirm spin behavior.
Minimal concrete example
auto cbg = node->create_callback_group(rclcpp::CallbackGroupType::MutuallyExclusive);
Common misconceptions
- Assuming callbacks run the moment data arrives, rather than when the executor dispatches them.
- Believing a MultiThreadedExecutor alone adds concurrency; mutually exclusive groups still serialize their callbacks.
Check-your-understanding questions
- Explain how the choice of executor changes runtime behavior in ROS 2.
- Predict what happens if a blocking callback runs inside a SingleThreadedExecutor.
- Why can a synchronous service call made from inside a callback deadlock a node?
Check-your-understanding answers
- It determines when, in what order, and on which threads callbacks run, which changes latency and the potential for starvation or deadlock.
- Every other callback, including timers, is starved until the blocking callback returns.
- The response callback needs the executor thread, but that thread is blocked waiting for the response.
Real-world applications
- real-time pipelines
- preventing deadlocks in services/actions
Where you’ll apply it
- You will apply it in Section 5.4 (Concepts You Must Understand First), Section 5.10 (Implementation Phases), and Section 6.2 (Critical Test Cases).
- Also used in: P03-the-discovery-server-scaling-beyond-multicast.md and other projects in this series.
References
- ROS 2 executor design articles
- rclcpp API docs
Key insights
- The executor and callback groups are the lever that connects code structure to observable timing behavior.
Summary
This concept is the bridge between theory and runtime evidence. Mastery means you can predict outcomes, not just observe them.
Homework/Exercises to practice the concept
- Capture or log a minimal trace where this concept is visible.
- Change one policy/setting and predict the system impact before running it.
- Explain the failure mode you expect if the configuration is wrong.
Solutions to the homework/exercises
- The trace should show the concept-specific fields or events you expect.
- Your prediction should name which endpoints match and how latency/loss changes.
- A wrong configuration should lead to mismatch, dropped data, or timeouts.
C++ Object Lifetimes and Shared Ownership
Fundamentals
C++ object lifetimes and shared ownership describe how shared_ptr, weak_ptr, and RAII interact with ROS 2 node lifetimes and callbacks. rclcpp hands out nodes, publishers, and subscriptions as shared_ptr because the executor and your code both need them to stay alive; destruction order is therefore a property you must design, not an accident. At a minimum you should be able to say who owns each object, what each callback captures, and what happens if the captured object is destroyed while the callback is still registered. When you debug crashes at shutdown, inspect RAII destruction order and dangling callbacks first, because use-after-free surfaces there. If you can explain this concept without naming a single ROS 2 command, you know it as a systems principle rather than a tooling trick, which is exactly what you need for production robotics.
Deep Dive into the concept
A deeper look at C++ object lifetimes starts by tracing ownership from main() down to the rcl handles. Creating a node allocates rcl structures whose finalizers run in the node's destructor; every publisher and subscription shares ownership of the pieces it needs, so handles are finalized only when the last owner releases them. The mapping is not one-to-one: a subscription keeps parts of the node alive, an executor keeps the node alive while it is added, and a lambda that captures a raw this pointer keeps nothing alive at all. This is why destroying a node while its callbacks are still registered can crash, and why capturing a shared_ptr inside a callback that the node itself owns creates a reference cycle that leaks. The useful diagnostic strategy is to draw the ownership graph, then check every callback capture against it.
Failure modes cluster around captures and destruction order. If a callback captures this raw and the owning object is destroyed first, the executor invokes freed memory; the fix is to capture a weak_ptr and lock() it at the top of the callback. If two objects own each other through shared_ptr, neither destructor ever runs and rcl handles leak past shutdown. In embedded settings, this can appear as a crash during SIGINT handling rather than an explicit error. A robust design therefore includes explicit validation: log constructor and destructor order, run under AddressSanitizer, and test clean shutdown before adding features. This project forces that discipline because you will create repeatable experiments and capture deterministic outputs, so you can explain not only what happened but why it happened.
How this fits into the project
This concept directly shapes how you implement and validate Project 2. You will configure it, observe it, and stress it under controlled conditions.
Definitions & key terms
- shared_ptr: reference-counted ownership; the object stays alive while any owner holds a reference.
- weak_ptr: a non-owning observer of a shared_ptr-managed object; it must be lock()ed into a shared_ptr before use.
- RAII: tying resource lifetime to object scope so destructors release resources deterministically.
- dangling callbacks: callbacks that outlive the object they capture, so invoking them touches freed memory.
- ownership: the explicit decision about which object keeps another alive and when it is destroyed.
Mental model diagram (ASCII)
[main()] -> [shared_ptr<Node>] -> [Entities (pub/sub/timer)] -> [Callbacks]
    |               |                        |                      |
 Ownership    Reference counts         rcl handles         weak_ptr captures
How it works (step-by-step, with invariants and failure modes)
- main() creates the node as a shared_ptr; adding it to an executor creates another owner.
- Each publisher, subscription, and timer shares ownership of the rcl handles it needs.
- Callbacks capture weak_ptr (or objects with guaranteed lifetime) so they can detect destruction.
- On shutdown, destructors run in reverse construction order, finalizing rcl handles via RAII.
- Destructor logs and AddressSanitizer runs confirm ownership behavior.
Minimal concrete example
auto node = std::make_shared<rclcpp::Node>("n");
Common misconceptions
- Assuming a subscription's callback can never fire after its target object is destroyed.
- Believing shared_ptr everywhere is safe; ownership cycles leak, and capturing a shared_ptr in a node-owned callback can keep the node alive forever.
Check-your-understanding questions
- Explain how shared ownership determines when ROS 2 entities are destroyed.
- Predict what happens if a timer callback captures this and the owning object is destroyed.
- Why does rclcpp hand out nodes and entities as std::shared_ptr?
Check-your-understanding answers
- Entities are finalized only when the last shared_ptr owner releases them, so destruction order follows the ownership graph rather than scope alone.
- The executor invokes the callback on freed memory; capture a weak_ptr and lock() it instead.
- Because the executor and user code both need the objects to stay alive, ownership must be shared between them.
Real-world applications
- safe shutdown of nodes
- composable node containers
Where you’ll apply it
- You will apply it in Section 5.4 (Concepts You Must Understand First), Section 5.10 (Implementation Phases), and Section 6.2 (Critical Test Cases).
- Also used in: P03-the-discovery-server-scaling-beyond-multicast.md and other projects in this series.
References
- Effective Modern C++
- ROS 2 C++ best practices
Key insights
- Ownership decisions are the lever that connects code structure to safe startup and shutdown behavior.
Summary
This concept is the bridge between theory and runtime evidence. Mastery means you can predict outcomes, not just observe them.
Homework/Exercises to practice the concept
- Capture or log a minimal trace where this concept is visible.
- Change one policy/setting and predict the system impact before running it.
- Explain the failure mode you expect if the configuration is wrong.
Solutions to the homework/exercises
- The trace should show the concept-specific fields or events you expect.
- Your prediction should name which endpoints match and how latency/loss changes.
- A wrong configuration should lead to mismatch, dropped data, or timeouts.
3. Project Specification
3.1 What You Will Build
A talker/listener pair without using rclcpp::Node convenience methods. You will manually initialize context, create publishers/subscribers, and manage the executor loop.
Included features:
- Deterministic startup with explicit configuration.
- Observability (logs/CLI output) that exposes discovery/data flow.
- A reproducible demo and a failure case.
Excluded on purpose:
- Full robot control stacks or SLAM pipelines.
- Custom GUIs beyond CLI output.
3.2 Functional Requirements
- **Manual initialization:** Understanding rclcpp::init vs context objects.
- **Executor management:** Adding nodes and controlling spin.
- **Shutdown correctness:** Handling SIGINT and cleanup.
- Deterministic startup: The project must start with a reproducible, logged configuration.
- Observability: Provide CLI or log output that confirms each major component is working.
3.3 Non-Functional Requirements
- Performance: Must meet the throughput/latency targets documented in the benchmark.
- Reliability: Must handle common network or runtime failures gracefully.
- Usability: CLI flags and logs must make configuration and diagnosis obvious.
3.4 Example Usage / Output
$ ros2 run skeleton_node minimal_node --ros-args -r __node:=skeleton
[INFO] node ready, publishing /chatter
$ ros2 topic echo /chatter
3.5 Data Formats / Schemas / Protocols
Node config (YAML)
name: skeleton
namespace: /demo
queue_depth: 10
3.6 Edge Cases
- Node name collision
- Executor starvation with blocking callback
- Shutdown during callback
3.7 Real World Outcome
By the end of this project you will have a reproducible system that produces the same observable signals every time you run it. You will be able to point to console output, captured packets, or bag files and explain exactly why the result is correct. You will also be able to force a failure and demonstrate a clean error path.
3.7.1 How to Run (Copy/Paste)
# Build
colcon build --packages-select project_2
# Run
source install/setup.bash
# Start the main node/tool
./run_project_2.sh
3.7.2 Golden Path Demo (Deterministic)
$ ros2 run skeleton_node minimal_node --ros-args -r __node:=skeleton
[INFO] node ready, publishing /chatter
$ ros2 topic echo /chatter
3.7.3 Failure Demo (Deterministic)
$ ros2 run skeleton_node minimal_node --ros-args -r __node:=/bad/name
[ERROR] Invalid node name: /bad/name
4. Solution Architecture
4.1 High-Level Design
[Input/Config] -> [Core Engine] -> [ROS 2/DDS] -> [Observability Output]
4.2 Key Components
| Component | Responsibility | Key Decisions |
|---|---|---|
| Manual Node Setup | Initialize rcl and construct a minimal node | Avoid rclcpp convenience APIs |
| Custom Executor | Wire callbacks into an executor explicitly | Control threading model |
| Introspection Hooks | Log graph state and QoS on startup | Deterministic startup logs |
4.3 Data Structures (No Full Code)
struct NodeConfig {
std::string name;
std::string ns;
size_t queue_depth;
};
4.4 Algorithm Overview
Key Algorithm: Core Pipeline
- Initialize rcl
- Create node and publisher/subscriber
- Spin custom executor
Complexity Analysis:
- Time: O(n) over messages/events processed
- Space: O(1) to O(n) depending on buffering
5. Implementation Guide
5.1 Development Environment Setup
# Install ROS 2 and dependencies
sudo apt-get update
sudo apt-get install -y ros-$ROS_DISTRO-ros-base python3-colcon-common-extensions
5.2 Project Structure
project-root/
|-- src/
| |-- main.cpp
| |-- config.yaml
| `-- utils.cpp
|-- scripts/
| `-- run_project.sh
|-- tests/
| `-- test_core.py
`-- README.md
5.3 The Core Question You’re Answering
“What actually happens when a ROS 2 node starts, spins, and shuts down?”
5.4 Concepts You Must Understand First
Stop and research these before coding:
- rcl vs rclcpp
- What breaks if this is misconfigured?
- How will you observe it?
- Executors and Callback Groups
- What breaks if this is misconfigured?
- How will you observe it?
- C++ Object Lifetimes and Shared Ownership
- What breaks if this is misconfigured?
- How will you observe it?
5.5 Questions to Guide Your Design
- How will you explicitly create publishers and subscriptions?
- How will you handle shutdown without leaking resources?
- Will you use a single-threaded or multi-threaded executor?
5.6 Thinking Exercise
Draw the call sequence from main() to the first published message. Where does the DDS writer get created?
5.7 The Interview Questions They’ll Ask
- “What is rcl and why does ROS 2 use it?”
- “How does an executor work?”
- “What happens if you forget to call rclcpp::shutdown()?”
5.8 Hints in Layers
Hint 1: Build a minimal context
rclcpp::init(argc, argv);
Hint 2: Use a simple executor
rclcpp::executors::SingleThreadedExecutor exec;
Hint 3: Add the node explicitly
exec.add_node(node);
5.9 Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| C++ Basics | “A Tour of C++” | Ch. 1-3 |
| System Architecture | “Clean Architecture” | Ch. 1 |
5.10 Implementation Phases
Phase 1: Foundation (1-2 days)
Goals:
- Reproduce the baseline example from the original project outline.
- Validate toolchain, dependencies, and environment variables.
Tasks:
- Create the repository and baseline project structure.
- Run a minimal example to confirm discovery/data flow.
Checkpoint: You can reproduce the minimal example and collect logs.
Phase 2: Core Functionality (1-2 weeks)
Goals:
- Implement the full feature set from the requirements.
- Instrument key metrics and logs.
Tasks:
- Implement each component and integrate them.
- Add CLI/config flags for core parameters.
Checkpoint: Golden path demo succeeds with deterministic output.
Phase 3: Polish & Edge Cases (3-5 days)
Goals:
- Handle failure scenarios and document them.
- Create a short report/README describing results.
Tasks:
- Add error handling, timeouts, and validation.
- Capture failure demo output and metrics.
Checkpoint: Failure demo yields the expected errors and exit codes.
5.11 Key Implementation Decisions
| Decision | Options | Recommendation | Rationale |
|---|---|---|---|
| Transport | UDP, shared memory, serial | UDP for baseline | Simplest to observe and debug |
| QoS | Default, tuned | Default then tune | Establish baseline before optimization |
6. Testing Strategy
6.1 Test Categories
| Category | Purpose | Examples |
|---|---|---|
| Unit Tests | Validate parsers and helpers | Packet decoder, config parser |
| Integration Tests | End-to-end ROS 2 flow | Publisher -> Subscriber -> Metrics |
| Edge Case Tests | Failures & mismatches | Wrong domain ID, missing config |
6.2 Critical Test Cases
- Test 1: Baseline message flow works end-to-end.
- Test 2: Configuration mismatch produces a clear, actionable error.
- Test 3: Performance/latency stays within documented bounds.
6.3 Test Data
Use a fixed dataset or fixed random seed to make metrics reproducible.
7. Common Pitfalls & Debugging
7.1 Frequent Mistakes
| Pitfall | Symptom | Solution |
|---|---|---|
| QoS mismatch | Discovery works but no data | Align policies explicitly |
| Misconfigured env vars | No nodes discovered | Print and validate env on startup |
| Network filtering | Intermittent data | Check firewall and multicast settings |
7.2 Debugging Strategies
- Start from the graph: confirm discovery before tuning QoS.
- Capture packets: validate that RTPS traffic appears on expected ports.
7.3 Performance Traps
If throughput is low, check for unnecessary serialization, small history depth, or lack of shared memory.
8. Extensions & Challenges
8.1 Beginner Extensions
- Add verbose logging and a dry-run mode.
- Add a simple configuration file parser.
8.2 Intermediate Extensions
- Add metrics export to CSV or JSON.
- Add automated regression tests.
8.3 Advanced Extensions
- Implement cross-vendor compatibility validation.
- Add chaos testing with randomized loss/latency patterns.
9. Real-World Connections
9.1 Industry Applications
- Fleet robotics where reliability must be guaranteed under lossy Wi-Fi.
- Industrial systems that require deterministic startup and clear failure modes.
9.2 Related Open Source Projects
- ROS 2 core repositories (rcl, rmw, rosidl)
- DDS vendors: Fast DDS, Cyclone DDS
9.3 Interview Relevance
- Explain QoS compatibility and discovery failures.
- Describe how to debug why nodes discover but do not communicate.
10. Resources
10.1 Essential Reading
- “A Concise Introduction to Robot Programming with ROS 2” (focus on the sections related to rcl vs rclcpp)
- ROS 2 official docs for the specific APIs used in this project
10.2 Video Resources
- ROS 2 community talks on middleware and DDS
- Vendor tutorials on discovery and QoS
10.3 Tools & Documentation
- ROS 2 CLI and rclcpp/rclpy docs
- Wireshark or tcpdump for network visibility
10.4 Related Projects in This Series
- Project 1: Builds prerequisite concepts
- Project 3: Extends the middleware layer
11. Self-Assessment Checklist
11.1 Understanding
- I can explain rcl vs rclcpp without notes
- I can explain how QoS and discovery interact
- I understand why the system fails when policies mismatch
11.2 Implementation
- All functional requirements are met
- Golden path demo succeeds
- Failure demo produces expected errors
11.3 Growth
- I can explain this project in a technical interview
- I documented lessons learned and configs
- I can reproduce the results on another machine
12. Submission / Completion Criteria
Minimum Viable Completion:
- Golden path demo output matches documentation
- At least one failure scenario is documented
- Metrics or logs demonstrate correct behavior
Full Completion:
- All minimum criteria plus:
- Compatibility verified across at least two QoS settings
- Results written to a short report
Excellence (Going Above & Beyond):
- Automated regression tests for discovery/QoS behavior
- Clear compatibility matrix or benchmark chart