Project 7: Message Queue and ISR Deferral
Build an ISR-safe message queue and a deferred-processing task that consumes sensor data without blocking interrupts.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Advanced |
| Time Estimate | 1-2 weeks |
| Main Programming Language | C |
| Alternative Programming Languages | Rust, Ada |
| Coolness Level | High |
| Business Potential | High |
| Prerequisites | Projects 1-6, interrupts, preemptive scheduler |
| Key Topics | ISR-safe queues, ring buffers, deferred interrupts, task wake-up |
1. Learning Objectives
By completing this project, you will:
- Implement a fixed-size message queue with ISR-safe operations.
- Move data from ISR to a processing task deterministically.
- Define queue overflow behavior and measure dropped messages.
- Verify task wake-up correctness under interrupt load.
2. All Theory Needed (Per-Concept Breakdown)
2.1 Ring Buffers and ISR-Safe Queue Operations
Fundamentals
A ring buffer is a fixed-size circular queue with head and tail indices. It provides constant-time enqueue and dequeue operations, which is ideal for real-time systems. In ISR contexts, operations must be non-blocking and short, so ring buffers are a common choice. Correctness depends on careful handling of head/tail updates and proper use of atomic or critical sections to prevent race conditions.
Additional fundamentals: A ring buffer is ideal for real-time data flow because it has constant-time operations and no dynamic allocation. This predictability is exactly what you want in ISRs. The cost of that predictability is a fixed capacity, which forces you to make explicit decisions about overflow behavior.
Deep Dive into the concept
A ring buffer uses a fixed array of N elements and two indices: head (write) and tail (read). When head advances to the end of the array, it wraps to 0. The buffer is full when the next head position equals tail, and empty when head equals tail. This design is fast and predictable but requires correct handling of wraparound. For power-of-two sizes, you can use bit masks to avoid modulo operations, further improving efficiency.
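For reference, here is a minimal sketch of the structure and the full/empty tests described above. The field names are illustrative (they match the minimal example later in this section) and not a fixed API:
#include <stdbool.h>
#include <stdint.h>
#define Q_SIZE 64u                              /* must be a power of two */
typedef struct {
    uint8_t buf[Q_SIZE];
    volatile uint32_t head;                     /* next write slot; written only by the ISR */
    volatile uint32_t tail;                     /* next read slot; written only by the task */
    uint32_t mask;                              /* Q_SIZE - 1, replaces the modulo operation */
} queue_t;
static inline bool q_is_empty(const queue_t *q) { return q->head == q->tail; }
static inline bool q_is_full(const queue_t *q)  { return ((q->head + 1) & q->mask) == q->tail; }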
ISR safety adds constraints. If an ISR enqueues data while a task dequeues, you have a concurrent access problem. For single-producer (ISR) and single-consumer (task) cases, you can sometimes avoid full locks by ensuring that head is only written by the ISR and tail only by the task, with careful memory ordering. However, on Cortex-M without cache, a short critical section around index updates is usually simpler and safe. The ISR should never block; if the queue is full, you must decide whether to drop the new data, overwrite the oldest, or set an error flag. This decision is application-specific and should be explicit.
Another subtlety is data coherence. If the ISR writes data into the buffer and then updates head, the task must not observe the new head value before the data itself is visible. On simple cacheless Cortex-M cores, accesses to internal SRAM complete in program order in practice, but it is still good practice to write the data first and publish head last. If you later port to a system with caches or a weaker memory model, you will need memory barriers. For this project, a critical section is enough.
Ring buffers are often used to buffer UART input, sensor readings, or ADC samples. They are the backbone of deferred interrupt handling: the ISR captures minimal data, puts it into the queue, and signals a task to process it. This separation keeps ISRs short and deterministic while preserving data integrity. It also provides a clean boundary for testing: you can inject data into the queue and verify the task output independently of the ISR.
Additional depth: Ring buffer correctness depends on clear ownership of the head and tail indices. In the simplest ISR-producer / task-consumer model, the ISR writes head and the task writes tail. This separation allows you to avoid full locking if you are careful, but you must still ensure that reads and writes are ordered correctly. On simple cacheless Cortex-M cores this ordering holds in practice, but if you ever use DMA or caches, you may need memory barriers. A robust educational implementation can simply wrap enqueue and dequeue operations in short critical sections, which guarantees correctness even if you later change concurrency patterns.
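One way to implement those short critical sections on Cortex-M is to save and restore PRIMASK around the queue operations. The helpers below assume the standard CMSIS intrinsics provided by your device header:
#include <stdint.h>
/* Assumes CMSIS intrinsics (__get_PRIMASK, __disable_irq, __set_PRIMASK),
   normally pulled in by your device header. */
static inline uint32_t crit_enter(void) {
    uint32_t primask = __get_PRIMASK();   /* remember whether interrupts were already masked */
    __disable_irq();
    return primask;
}
static inline void crit_exit(uint32_t primask) {
    __set_PRIMASK(primask);               /* restore the saved state so nested use stays correct */
}
Wrapping each enqueue and dequeue in crit_enter()/crit_exit() costs a few cycles per call but removes any reliance on the single-producer/single-consumer assumption.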
Sizing the ring buffer is also a design exercise. You should calculate the worst-case burst size from the interrupt source and the worst-case consumer latency. For example, if an ISR can fire at 1 kHz and the consumer task might be delayed for 10 ms, you need room for at least 10 samples. Add margin for safety. This type of reasoning is essential in real-time systems because it links timing assumptions to memory usage. Including this calculation in your documentation makes your design defensible.
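That sizing reasoning can live directly in the code so the assumptions are visible and checked at compile time. The rates below are the example figures from this paragraph, not requirements:
#define ISR_RATE_HZ             1000u   /* worst-case producer rate */
#define WORST_CONSUMER_LAT_MS     10u   /* worst-case delay before the consumer runs */
#define WORST_BURST ((ISR_RATE_HZ * WORST_CONSUMER_LAT_MS) / 1000u)   /* = 10 samples */
#define Q_SIZE                    64u   /* power of two, generous margin over the burst */
_Static_assert((Q_SIZE & (Q_SIZE - 1u)) == 0u, "Q_SIZE must be a power of two");
_Static_assert(Q_SIZE > WORST_BURST, "queue cannot absorb the worst-case burst");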
Finally, ring buffers can be extended with metadata such as timestamps or sequence numbers. This helps detect lost samples or out-of-order processing. In this project, consider adding a sequence counter that increments on each enqueue. If the consumer sees a gap, it knows data was dropped. This is a simple technique that provides valuable insight during testing and later performance evaluation.
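A sketch of that technique, assuming the sample_t from Sec. 4.3 is extended with a sequence field. The seq counter increments for every sample the ISR produces, including ones dropped on overflow, so gaps seen by the consumer reveal loss:
#include <stdint.h>
typedef struct {
    uint32_t seq;        /* producer-side counter, bumped for every sample produced */
    uint16_t sample;
    uint32_t tick;
} sample_t;
static uint32_t expected_seq;
static uint32_t lost_total;
static void check_sequence(const sample_t *s) {
    if (s->seq != expected_seq)
        lost_total += s->seq - expected_seq;   /* unsigned arithmetic handles counter wrap */
    expected_seq = s->seq + 1u;
}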
How this fits into the projects
The ring buffer queue is implemented in Sec. 3.1 and used in Sec. 3.7 to demonstrate ISR-to-task data flow. It is also relevant to event flag patterns in Project 8.
Definitions & key terms
- Ring buffer: Circular queue using fixed-size array.
- Head/Tail: Indices for write and read positions.
- Overflow policy: Behavior when queue is full.
- ISR-safe: Operations safe to call from an interrupt.
Mental model diagram (ASCII)
[0][1][2][3][4]
    ^     ^
  tail   head (next write)
How it works (step-by-step, with invariants and failure modes)
- ISR writes data at head position.
- ISR advances head only if the queue is not full; invariant: head never advances onto tail, so a full queue is never mistaken for an empty one.
- Task reads data at tail position.
- Task advances tail.
- Failure mode: concurrent updates without protection -> corrupted indices.
Minimal concrete example
bool q_push_isr(queue_t *q, uint8_t v) {
    uint32_t next = (q->head + 1) & q->mask;  /* mask works because size is a power of two */
    if (next == q->tail) return false;        /* full: drop newest (overflow policy) */
    q->buf[q->head] = v;                      /* write the data first... */
    q->head = next;                           /* ...then publish by advancing head */
    return true;
}
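A matching task-side dequeue, assuming the same queue_t layout (a sketch; wrapping the index update in a short critical section is an equally valid choice):
bool q_pop_task(queue_t *q, uint8_t *out) {
    if (q->head == q->tail) return false;   /* empty */
    *out = q->buf[q->tail];                 /* read the data before releasing the slot */
    q->tail = (q->tail + 1) & q->mask;      /* only the task writes tail */
    return true;
}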
Common misconceptions
- “Ring buffers are always lock-free” -> depends on concurrency model.
- “Overflow cannot happen” -> it will under bursty interrupts.
- “ISR can block” -> it must never block.
Check-your-understanding questions
- How do you detect full vs empty in a ring buffer?
- Why should ISR never block on a full queue?
- What is the safest overflow policy for sensor data?
Check-your-understanding answers
- Empty when head == tail, full when next head == tail.
- It would stall all interrupts and break real-time guarantees.
- Depends on application: drop newest for integrity or overwrite for freshness.
Real-world applications
- UART receive buffers.
- ADC sample collection.
- Event logging in embedded systems.
Where you’ll apply it
- This project: Sec. 3.1, Sec. 3.7.2, Sec. 5.10 Phase 1.
- Also used in: Project 8.
References
- “Making Embedded Systems” by Elecia White, Ch. 5-6.
- Embedded ring buffer patterns in RTOS literature.
Key insights
Ring buffers give you deterministic, constant-time queues that fit ISR constraints.
Summary
A ring buffer is the most practical ISR-safe queue for moving data from interrupts to tasks.
Homework/Exercises to practice the concept
- Implement a power-of-two ring buffer and test wraparound.
- Add an overflow counter and log it over UART.
- Stress test with a synthetic ISR pushing at high rate.
Solutions to the homework/exercises
- Use `mask = size - 1` and `(idx + 1) & mask`.
- Increment a counter when full; print it every second.
- Simulate ISR in a loop and measure drop rate.
2.2 Deferred Interrupt Handling and Task Wake-Up
Fundamentals
Deferred interrupt handling moves heavy processing out of ISRs into tasks. The ISR captures minimal data, pushes it to a queue, and signals a task to run. This keeps ISR latency low while ensuring data is eventually processed. The task typically blocks on the queue and is made READY when new data arrives.
Additional fundamentals: Deferring work from ISR to task is a standard real-time pattern. It keeps interrupt latency bounded and makes complex processing testable. This separation also makes it easier to apply synchronization primitives that are unsafe in ISR context, which leads to more robust code. It also keeps the timing model simple because long operations happen in task time, not interrupt time.
Deep Dive into the concept
ISRs must be short and deterministic. However, many operations like filtering sensor data, formatting logs, or performing calculations are too heavy to run at interrupt level. Deferred interrupt handling solves this by splitting work into two phases: the ISR (fast capture) and a task (slow processing). The ISR enqueues data and then wakes the consumer task. This wake-up can be implemented by directly setting the task state to READY and triggering PendSV, or by using a semaphore/event flag that the task waits on.
The wake-up mechanism must be safe across interrupt boundaries. If the task is already READY, you should avoid double-waking or re-queuing. If the task is blocked, you must ensure the scheduler sees the new state promptly. Typically, the ISR sets a flag or uses an API like queue_send_from_isr, which returns a boolean indicating whether a higher-priority task was woken. If so, the ISR triggers PendSV to preempt the current task. This preserves responsiveness for high-priority consumers.
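A sketch of that decision in code. All kernel names here (task_t, task_wake_from_isr, consumer_task, sensor_read) are placeholders for your own kernel and driver from the earlier projects, not an existing library:
extern bool task_wake_from_isr(task_t *t);    /* hypothetical: BLOCKED -> READY; true if it outranks the interrupted task */
extern task_t *consumer_task;
extern queue_t q;
extern volatile uint32_t dropped;
extern uint8_t sensor_read(void);
void SENSOR_IRQHandler(void) {
    bool higher_prio_woken = false;
    uint8_t v = sensor_read();                /* fast capture only */
    if (q_push_isr(&q, v))
        higher_prio_woken = task_wake_from_isr(consumer_task);
    else
        dropped++;                            /* overflow policy: drop newest, count it */
    if (higher_prio_woken)
        SCB->ICSR = SCB_ICSR_PENDSVSET_Msk;   /* request PendSV only when a switch is needed */
}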
Deferred handling also needs to consider queue overflow. If the ISR produces data faster than the task can consume, the queue will fill. You must decide whether to drop data, overwrite, or backpressure. In a real-time system, dropping may be acceptable if you only need the most recent sample. If every sample matters (e.g., logging), you need to size the queue appropriately or reduce ISR rates. This project encourages you to quantify overflow with counters so you can measure whether your design is sufficient.
Finally, deferred handling is a correctness boundary. By moving processing into tasks, you make it easier to test and debug the logic without worrying about ISR timing. It also allows you to apply mutexes and other primitives that are not safe in ISRs. This is why most RTOS drivers use ISR + task patterns for communication peripherals.
Additional depth: Deferred interrupt handling also introduces a subtle scheduling consideration: if the consumer task has lower priority than other ready tasks, it may not run in time to drain the queue, leading to overflow. This is not a bug in the queue itself, but a design issue. The consumer task’s priority should be chosen based on the criticality of the data and the acceptable loss rate. In some cases, you may want the consumer to be higher priority than most tasks to ensure timely processing. This is especially true for sensors or communication protocols with tight deadlines.
Another consideration is the interaction between ISR frequency and scheduler tick. If the ISR produces data faster than the scheduler tick, the consumer may only be scheduled once per tick, which can limit throughput. You can mitigate this by unblocking the consumer and triggering PendSV immediately when new data arrives, allowing it to run before the next tick if its priority is high enough. This demonstrates how interrupts and scheduling interact to produce real-time behavior.
Finally, the deferred processing task should be designed to handle bursts efficiently. If the queue contains multiple items, the task can drain several per run before blocking again. However, you must be careful not to monopolize the CPU by draining too aggressively. A balanced approach is to process up to a fixed number of items per cycle or to yield after each item if the system has many competing tasks. These design decisions determine how the system behaves under load and are part of what makes RTOS engineering interesting.
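A consumer loop that applies that balance might look like the sketch below; task_block_on and process_sample are hypothetical names, and the drain budget is the tunable discussed above:
#define DRAIN_BUDGET 8u                        /* max items handled per wake-up */
void consumer_task_fn(void *arg) {
    (void)arg;
    for (;;) {
        task_block_on(&q);                     /* hypothetical: sleep until the ISR signals data */
        uint8_t v;
        uint32_t n = 0;
        while (n < DRAIN_BUDGET && q_pop_task(&q, &v)) {
            process_sample(v);                 /* the heavy work, now in task time */
            n++;
        }
        /* Anything still queued is handled on the next wake-up,
           so a burst cannot starve other tasks. */
    }
}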
How this fits into the projects
You will build the deferred ISR path in Sec. 3.1 and demonstrate ISR-to-task flow in Sec. 3.7.2. This pattern reappears in timers and event flags in Project 8.
Definitions & key terms
- Deferred handling: Splitting ISR work into capture + task processing.
- From-ISR API: Special RTOS API for use in interrupts.
- Wake-up: Transitioning a task from BLOCKED to READY.
Mental model diagram (ASCII)
ISR -> enqueue -> wake task -> task processes -> repeat
How it works (step-by-step, with invariants and failure modes)
- ISR captures data and enqueues.
- ISR signals/wakes consumer task.
- Task wakes, dequeues, processes.
- Failure mode: missing wake -> data stuck in queue.
Minimal concrete example
void ADC_IRQHandler(void) {
    uint16_t sample = ADC->DR;                /* fast capture only */
    if (queue_push_isr(&q, sample)) {
        task_wake(consumer_task);             /* BLOCKED -> READY */
        SCB->ICSR = SCB_ICSR_PENDSVSET_Msk;   /* request a context switch via PendSV */
    }
    /* else: queue full -- apply the overflow policy (e.g. count the drop) */
}
Common misconceptions
- “ISR should do everything” -> it should do minimal work.
- “Queue alone is enough” -> task must be woken reliably.
- “Overflow cannot happen” -> it will under bursts.
Check-your-understanding questions
- Why is deferred processing better than heavy ISR work?
- When should an ISR trigger PendSV?
- How do you handle queue overflow?
Check-your-understanding answers
- It reduces ISR latency and keeps timing deterministic.
- When a higher-priority task was woken by the ISR.
- Drop, overwrite, or count losses based on application needs.
Real-world applications
- UART receive and packet parsing.
- ADC sampling and filtering.
- GPIO interrupt event processing.
Where you’ll apply it
- This project: Sec. 3.1, Sec. 3.7.2, Sec. 5.10 Phase 2.
- Also used in: Project 10.
References
- FreeRTOS “FromISR” APIs documentation.
- “Real-Time Concepts for Embedded Systems” by Qing Li, Ch. 7.
Key insights
Deferring work from ISR to task preserves real-time responsiveness and simplifies correctness.
Summary
Deferred interrupt handling is the standard pattern for safe, deterministic ISR design.
Homework/Exercises to practice the concept
- Implement a queue and a consumer task; log throughput.
- Increase ISR rate and measure overflow count.
- Add a GPIO pulse in ISR and measure latency.
Solutions to the homework/exercises
- Use UART logs to verify no data loss at low rates.
- Observe overflow counter growth under high rates.
- Measure pulse width on a scope for ISR timing.
3. Project Specification
3.1 What You Will Build
An ISR-safe message queue with a producer ISR and a consumer task, including overflow handling and deterministic wake-up behavior.
3.2 Functional Requirements
- Queue Operations: ISR can enqueue without blocking.
- Task Wake-Up: Consumer task unblocks when data arrives.
- Overflow Policy: Defined behavior when queue is full.
3.3 Non-Functional Requirements
- Performance: ISR enqueue < 5 us.
- Reliability: No data corruption after 10,000 messages.
- Usability: Overflow counts visible via UART.
3.4 Example Usage / Output
[queue] enq=1024 deq=1020 dropped=4
3.5 Data Formats / Schemas / Protocols
- Queue entry: fixed-size struct containing sensor sample and timestamp.
3.6 Edge Cases
- Queue full under burst load.
- Consumer task lower priority than producer.
- ISR firing while queue update in progress.
3.7 Real World Outcome
Sensor data flows reliably from ISR to task without long ISR execution time, and overflow behavior is predictable and measured.
3.7.1 How to Run (Copy/Paste)
make clean all
make flash
Exit codes:
- make: 0 success, 2 build failure.
- openocd: 0 flash success, 1 connection failure.
3.7.2 Golden Path Demo (Deterministic)
- Configure ISR to enqueue samples at 100 Hz.
- Consumer task prints counts every second.
- No drops; enqueue/dequeue counts match.
3.7.3 Failure Demo (Deterministic)
- Increase ISR rate to 5 kHz.
- Queue fills and overflow counter increments.
- Reduce rate or increase queue size to fix.
4. Solution Architecture
4.1 High-Level Design
ISR -> ring buffer -> wake task -> process data
4.2 Key Components
| Component | Responsibility | Key Decisions |
|---|---|---|
| `queue.c` | Ring buffer operations | Overflow policy |
| `isr.c` | Producer ISR | Minimal work |
| `consumer.c` | Data processing task | Priority vs producer |
4.3 Data Structures (No Full Code)
typedef struct {
uint16_t sample;
uint32_t tick;
} sample_t;
4.4 Algorithm Overview
Key Algorithm: ISR Enqueue
- Capture sample in ISR.
- Enqueue into ring buffer.
- If consumer woken, trigger PendSV.
Complexity Analysis:
- Time: O(1) enqueue/dequeue.
- Space: O(n) buffer.
5. Implementation Guide
5.1 Development Environment Setup
brew install arm-none-eabi-gcc openocd
5.2 Project Structure
rtos-p07/
|-- src/
| |-- queue.c
| |-- isr.c
| |-- consumer.c
| `-- main.c
|-- include/
`-- Makefile
5.3 The Core Question You’re Answering
“How do I move data from an interrupt to a task without breaking determinism?”
5.4 Concepts You Must Understand First
- Ring buffers and ISR-safe queue operations.
- Deferred interrupt handling and task wake-up.
5.5 Questions to Guide Your Design
- What should happen when the queue is full?
- How will you signal the consumer task safely?
- Which task priority should the consumer have?
5.6 Thinking Exercise
Design a burst scenario and compute required queue depth to avoid drops.
5.7 The Interview Questions They’ll Ask
- “Why should ISRs be short?”
- “How do you make a queue ISR-safe?”
- “What is deferred interrupt handling?”
5.8 Hints in Layers
Hint 1 (Power-of-two size): Use a power-of-two buffer for fast masking.
Hint 2 (Minimal ISR): Enqueue and exit; no prints.
Hint 3 (Log drops): Add a counter to quantify overflow.
5.9 Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Message queues | “Real-Time Concepts for Embedded Systems” | Ch. 7 |
| Interrupt handling | “Making Embedded Systems” | Ch. 5-6 |
5.10 Implementation Phases
Phase 1: Queue Implementation (3-4 days)
Goals: Build ring buffer and APIs. Tasks: enqueue/dequeue, overflow logic. Checkpoint: Queue passes wraparound tests.
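A possible host-side check for the Phase 1 checkpoint, built on the q_push_isr/q_pop_task sketches from Sec. 2.1 (compile with your host compiler; no target hardware needed):
#include <assert.h>
#include <stdio.h>
int main(void) {
    queue_t q = { .head = 0, .tail = 0, .mask = Q_SIZE - 1u };
    for (uint32_t i = 0; i < 3u * Q_SIZE; i++) {   /* cross the wrap boundary several times */
        bool pushed = q_push_isr(&q, (uint8_t)i);
        assert(pushed);
        uint8_t out = 0;
        bool popped = q_pop_task(&q, &out);
        assert(popped && out == (uint8_t)i);       /* FIFO order survives wraparound */
    }
    assert(q.head == q.tail);                      /* queue ends empty */
    puts("wraparound test passed");
    return 0;
}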
Phase 2: ISR Producer (2-3 days)
Goals: Integrate ISR capture. Tasks: capture sample, enqueue in ISR. Checkpoint: No data loss at low rates.
Phase 3: Consumer Task (2-3 days)
Goals: Process data in task context. Tasks: block on queue, process data. Checkpoint: UART logs show consistent flow.
5.11 Key Implementation Decisions
| Decision | Options | Recommendation | Rationale |
|---|---|---|---|
| Overflow behavior | Drop newest/oldest | Drop newest | Preserve existing queue content |
| Queue size | Small vs large | Medium | Balance memory and burst tolerance |
6. Testing Strategy
6.1 Test Categories
| Category | Purpose | Examples |
|---|---|---|
| Unit Tests | Queue correctness | Wraparound and full conditions |
| Integration Tests | ISR-to-task pipeline | Samples processed in order |
| Edge Case Tests | Overflow under burst | Drop counter increments |
6.2 Critical Test Cases
- Enqueue/dequeue order preserved.
- Queue overflow increments drop counter.
- Task wake-up occurs immediately on enqueue.
6.3 Test Data
Queue depth: 64
Expected drop count: 0 at 100 Hz
7. Common Pitfalls & Debugging
7.1 Frequent Mistakes
| Pitfall | Symptom | Solution |
|---|---|---|
| No critical section | Random queue corruption | Use short critical sections |
| ISR does too much work | Jitter and missed ticks | Defer processing to task |
| Overflow ignored | Silent data loss | Add drop counter |
7.2 Debugging Strategies
- Inspect head/tail indices in GDB.
- Log drop count and queue depth periodically.
7.3 Performance Traps
- Large queue scanning in ISR; keep O(1) operations.
8. Extensions & Challenges
8.1 Beginner Extensions
- Add a “peek” API for debugging.
- Add a queue reset function.
8.2 Intermediate Extensions
- Implement overwrite-oldest mode.
- Add per-message timestamps.
8.3 Advanced Extensions
- Add multi-producer support with atomic ops.
- Add DMA-based enqueue for high-rate sensors.
9. Real-World Connections
9.1 Industry Applications
- UART receive pipelines.
- Sensor fusion systems.
9.2 Related Open Source Projects
- Zephyr: ISR-safe FIFO APIs.
- FreeRTOS: xQueueSendFromISR patterns.
9.3 Interview Relevance
- ISR-safe queues and deferred interrupts are common interview topics.
10. Resources
10.1 Essential Reading
- “Real-Time Concepts for Embedded Systems” by Qing Li.
10.2 Video Resources
- Embedded queue and ISR design walkthroughs.
10.3 Tools & Documentation
- Logic analyzer for ISR timing.
10.4 Related Projects in This Series
11. Self-Assessment Checklist
11.1 Understanding
- I can explain ring buffer full/empty conditions.
- I can describe why deferred handling is necessary.
11.2 Implementation
- ISR enqueues without blocking.
- Consumer task processes all data in order.
11.3 Growth
- I can reason about queue sizing for burst traffic.
12. Submission / Completion Criteria
Minimum Viable Completion:
- ISR-safe queue implemented and verified.
- Consumer task processes messages with no corruption.
Full Completion:
- Overflow behavior defined and tested.
- ISR-to-task wake-up timing measured.
Excellence (Going Above & Beyond):
- DMA-based data ingestion added.
- Multi-producer queue support implemented.