Project 14: RTOS Micro-Clinic - Task Scheduling Study

Build small RTOS tasks and measure scheduling jitter, latency, and synchronization costs.

Quick Reference

| Attribute | Value |
|-----------|-------|
| Difficulty | Level 4: Expert |
| Time Estimate | 20-30 hours |
| Main Programming Language | C++ |
| Alternative Programming Languages | C |
| Coolness Level | Level 4: Hardcore Tech Flex |
| Business Potential | 3. The “Platform” Model |
| Prerequisites | C/C++ basics, Teensyduino setup, basic electronics, ability to use a multimeter/logic analyzer |
| Key Topics | RTOS tasks, scheduling, synchronization |

1. Learning Objectives

By completing this project, you will:

  1. Explain the core question for this project in your own words.
  2. Implement the main workflow and validate it with measurements.
  3. Handle at least two failure modes and document recovery.
  4. Produce a deterministic report that matches hardware behavior.

2. All Theory Needed (Per-Concept Breakdown)

RTOS Scheduling, Priorities, and Synchronization

Fundamentals

An RTOS provides tasks, priorities, and scheduling policies for concurrent work. It can preempt tasks to meet deadlines, but it adds overhead and complexity. Synchronization primitives like mutexes and queues are required to protect shared resources. This project makes scheduling tradeoffs measurable.

Deep Dive into the concept

Tasks are independent contexts with their own stacks. The scheduler selects the highest-priority ready task, preempting lower-priority tasks. Priority inversion happens when a low-priority task holds a resource needed by a high-priority task; priority inheritance mitigates it but requires correct primitive usage. Mutexes protect exclusive access; queues decouple producers and consumers. Measuring scheduling requires timestamps or GPIO toggles in each task. Context switches consume CPU and memory. This project builds a micro-clinic with controlled tasks, then measures jitter and latency under load. You will see how priority and synchronization choices shape real-time behavior.

How this fits into the project

This concept directly drives the implementation choices and validation steps in this project.

Definitions & key terms

  • Task: Independent execution context managed by the RTOS.
  • Preemption: Higher-priority task interrupts lower-priority task.
  • Mutex: Mutual exclusion lock for shared resources.
  • Jitter: Variation in timing of task execution.

Mental model diagram (ASCII)

Tasks -> Scheduler -> CPU -> Interrupts

How it works (step-by-step)

  1. Create tasks with different priorities.
  2. Protect shared data with a mutex.
  3. Measure task latency under load.
  4. Adjust priorities and observe jitter changes.
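
The four steps above can be sketched in FreeRTOS-style C++ for the Teensy. Treat this as a sketch rather than drop-in code: header names vary by FreeRTOS port, and the task names, pin number, priorities, and `digitalToggleFast` usage are illustrative choices for this project, not fixed requirements.

```cpp
#include <Arduino.h>
#include <FreeRTOS.h>   // header names vary by FreeRTOS port
#include <task.h>
#include <semphr.h>

static SemaphoreHandle_t gDataMutex;      // protects gShared
static volatile uint32_t gShared = 0;

// High-priority periodic task: toggles a pin so a logic analyzer can measure jitter.
static void taskHigh(void*) {
  TickType_t last = xTaskGetTickCount();
  for (;;) {
    digitalToggleFast(2);                        // timing marker on pin 2
    if (xSemaphoreTake(gDataMutex, pdMS_TO_TICKS(5)) == pdTRUE) {
      gShared++;                                 // exclusive access to shared data
      xSemaphoreGive(gDataMutex);
    }
    vTaskDelayUntil(&last, pdMS_TO_TICKS(10));   // 10 ms period
  }
}

// Low-priority background load that contends for the same mutex.
static void taskLow(void*) {
  for (;;) {
    xSemaphoreTake(gDataMutex, portMAX_DELAY);
    for (volatile int i = 0; i < 1000; ++i) {}   // hold the mutex briefly
    xSemaphoreGive(gDataMutex);
    vTaskDelay(pdMS_TO_TICKS(1));
  }
}

void setup() {
  pinMode(2, OUTPUT);
  gDataMutex = xSemaphoreCreateMutex();          // FreeRTOS mutexes use priority inheritance
  xTaskCreate(taskHigh, "high", 512, nullptr, 3, nullptr);
  xTaskCreate(taskLow,  "low",  512, nullptr, 1, nullptr);
  vTaskStartScheduler();
}

void loop() {}  // unused once the scheduler runs
```

Swapping the mutex for a plain binary semaphore (no priority inheritance) is one way to provoke the priority-inversion demo this project asks for.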

Minimal concrete example

xTaskCreate(taskA, "A", 512, NULL, 2, NULL);  // entry fn, name, stack depth in words, no arg, priority 2, no handle
vTaskDelay(pdMS_TO_TICKS(10));                // block the calling task for ~10 ms (converts ms to ticks)

Common misconceptions

  • RTOS automatically solves timing problems.
  • More tasks always means better structure.
  • Mutexes are free.

Check-your-understanding questions

  • What causes priority inversion?
  • When should you use a queue vs a mutex?
  • How do you measure scheduling jitter?

Check-your-understanding answers

  • A low-priority task holds a resource needed by a high-priority one.
  • Use queues for data transfer; mutexes for exclusive access.
  • Toggle GPIO in each task and measure intervals.

Real-world applications

  • Robotics
  • Industrial automation
  • Real-time communications

Where you’ll apply it

References

  • FreeRTOS documentation
  • RTOS textbooks

Key insights

An RTOS makes timing explicit, but you must design for it.

Summary

Scheduling policies define latency and jitter in real systems.

Homework/Exercises to practice the concept

  • Create a low-priority task that blocks a high-priority task and fix it.
  • Measure context switch overhead with a GPIO toggle.

Solutions to the homework/exercises

  • Use priority inheritance or restructure resource access.
  • Toggle a pin at task entry/exit and compute delta.

3. Project Specification

3.1 What You Will Build

Build small RTOS tasks and measure scheduling jitter, latency, and synchronization costs.

3.2 Functional Requirements

  1. Create at least three tasks with priorities.
  2. Measure task jitter under load.
  3. Demonstrate priority inversion and mitigation.
  4. Document scheduling results.

3.3 Non-Functional Requirements

  • Performance: Meet the target timing/throughput for the project.
  • Reliability: Detect errors and recover without undefined behavior.
  • Usability: Provide clear logs and a repeatable workflow.

3.4 Example Usage / Output

./P14-rtos-micro-clinic-task-scheduling-study --run

3.5 Data Formats / Schemas / Protocols

CSV with columns: task, period_ms, jitter_us, cpu_pct
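
A hypothetical report matching this schema (task names and values invented for illustration):

```
task,period_ms,jitter_us,cpu_pct
sensor,10,42,12.5
control,5,18,22.0
logger,100,210,3.1
```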

3.6 Edge Cases

  • Deadlock due to mutex misuse
  • Stack overflow in task
  • High-priority task starving others

3.7 Real World Outcome

You will run the project and see deterministic logs and measurements that match physical hardware behavior.

3.7.1 How to Run (Copy/Paste)

cd project-root
make
./P14-rtos-micro-clinic-task-scheduling-study --run

3.7.2 Golden Path Demo (Deterministic)

Use a fixed input configuration and a known test signal. Capture output for 60 seconds and verify it matches expected values.

3.7.3 If CLI: exact terminal transcript

$ ./P14-rtos-micro-clinic-task-scheduling-study --run --seed 42
[INFO] RTOS Micro-Clinic - Task Scheduling Study starting
[INFO] Report saved to data/report.csv
[INFO] Status: OK
$ echo $?
0

Failure Demo (Deterministic)

$ ./P14-rtos-micro-clinic-task-scheduling-study --run --missing-device
[ERROR] Device not detected
$ echo $?
2

4. Solution Architecture

4.1 High-Level Design

Inputs -> Acquisition -> Processing -> Output/Log

4.2 Key Components

| Component | Responsibility | Key Decisions |
|-----------|----------------|---------------|
| Acquisition | Configure peripherals and capture data | Use stable clock settings |
| Processing | Convert raw data to meaningful values | Apply calibration/filters |
| Output/Log | Emit reports and logs | CSV for reproducibility |

4.3 Data Structures (No Full Code)

struct Sample {
    uint32_t timestamp_us;
    uint32_t value;
    uint32_t flags;
};

4.4 Algorithm Overview

Key Algorithm: Measurement + Report

  1. Initialize hardware and verify configuration.
  2. Capture data and record timestamps.
  3. Compute metrics and write report.

Complexity Analysis:

  • Time: O(n) in samples
  • Space: O(n) for log storage

5. Implementation Guide

5.1 Development Environment Setup

# Arduino IDE + Teensyduino must be installed
# Optional CLI workflow
arduino-cli core update-index
arduino-cli core install teensy:avr

5.2 Project Structure

project-root/
├── src/
│   ├── main.ino
│   ├── hw_config.h
│   └── measurements.cpp
├── tools/
│   └── analyze.py
├── data/
│   └── samples.csv
└── README.md

5.3 The Core Question You’re Answering

“How do scheduling policies change real-time behavior?”

5.4 Concepts You Must Understand First

Stop and research these before coding:

  1. RTOS tasks, scheduling, synchronization
  2. Data logging and measurement techniques
  3. Basic timing math and error analysis

5.5 Questions to Guide Your Design

  1. Which tasks require highest priority?
  2. What synchronization primitive fits each case?
  3. How will you measure jitter?

5.6 Thinking Exercise

Design a task set and predict which task will miss its deadline.

5.7 The Interview Questions They’ll Ask

  1. What is priority inversion?
  2. Difference between semaphore and mutex?
  3. How do you measure scheduler overhead?

5.8 Hints in Layers

  • Start with two tasks before adding more.
  • Use GPIO toggles for timing.
  • Check stack usage for each task.

5.9 Books That Will Help

| Topic | Book | Chapter |
|-------|------|---------|
| RTOS concepts | Zephyr RTOS Embedded C Programming | Ch. 3-5 |
| Scheduling | Real-Time Concepts | Ch. 2 |
| Concurrency | Making Embedded Systems | Ch. 5 |

5.10 Implementation Phases

Phase 1: Foundation (6 hours)

Goals:

  • Create tasks
  • Measure baseline jitter

Tasks:

  1. Create tasks
  2. Measure baseline jitter

Checkpoint: Tasks running

Phase 2: Core Functionality (10 hours)

Goals:

  • Add synchronization
  • Inject load

Tasks:

  1. Add synchronization
  2. Inject load

Checkpoint: Jitter measured

Phase 3: Polish (6 hours)

Goals:

  • Priority inversion demo
  • Report results

Tasks:

  1. Priority inversion demo
  2. Report results

Checkpoint: Scheduling report

5.11 Key Implementation Decisions

| Decision | Options | Recommendation | Rationale |
|----------|---------|----------------|-----------|
| Buffering | Single buffer, double buffer | Double buffer | Avoids data loss during processing |
| Logging format | CSV, binary | CSV | Human-readable while still scriptable |
| Clock speed | Default, overclock | Default | Keeps peripherals in spec |


6. Testing Strategy

6.1 Test Categories

| Category | Purpose | Examples |
|----------|---------|----------|
| Unit Tests | Validate math, parsing, and conversions | Timer math, CRC checks |
| Integration Tests | Verify peripherals and pipelines | DMA -> buffer -> log |
| Edge Case Tests | Handle boundary conditions | Brownout, missing sensor |

6.2 Critical Test Cases

{test_cases}

6.3 Test Data

Use a fixed test input pattern and record outputs to data/report.csv

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

| Pitfall | Symptom | Solution |
|---------|---------|----------|
{pitfalls}

7.2 Debugging Strategies

{debug_strats}

7.3 Performance Traps

Large buffers improve stability but increase latency. Measure both throughput and jitter to choose the right size.


8. Extensions & Challenges

8.1 Beginner Extensions

{ex_begin}

8.2 Intermediate Extensions

{ex_inter}

8.3 Advanced Extensions

{ex_adv}


9. Real-World Connections

9.1 Industry Applications

{industry_apps}

9.2 Open Source Examples

{open_source}

9.3 Interview Relevance

{interview_rel}


10. Resources

10.1 Essential Reading

{resources}

10.2 Video Resources

  • Embedded systems timing walkthrough (YouTube)
  • Teensy hardware deep dive (Conference talk)

10.3 Tools & Documentation

  • Teensyduino: Toolchain for Teensy boards
  • Logic Analyzer: Timing verification
  • Multimeter: Voltage and current measurement

10.4 Related Projects

{related_projects}


11. Self-Assessment Checklist

11.1 Understanding

  • I can explain the main concept without notes.
  • I can explain why the measurements match (or do not match) expectations.
  • I understand at least one tradeoff made in this project.

11.2 Implementation

  • All functional requirements are met.
  • All critical test cases pass.
  • Logs and reports are reproducible.
  • Edge cases are handled.

11.3 Growth

  • I documented lessons learned.
  • I can explain this project in a job interview.
  • I identified one improvement for next iteration.

12. Submission / Completion Criteria

Minimum Viable Completion: {comp_min}

Full Completion: {comp_full}

Excellence (Going Above & Beyond): {comp_ex}