Project 8: Event Flags and Software Timers

Implement event flags for multi-condition synchronization and a software timer subsystem for periodic callbacks.

Quick Reference

| Attribute | Value |
|-----------|-------|
| Difficulty | Advanced |
| Time Estimate | 1-2 weeks |
| Main Programming Language | C |
| Alternative Programming Languages | Rust, Ada |
| Coolness Level | High |
| Business Potential | High |
| Prerequisites | Projects 1-7, sleep services, queue wake-ups |
| Key Topics | Event flags, bitmasks, wait-any/wait-all, software timers |

1. Learning Objectives

By completing this project, you will:

  1. Implement event flags with wait-any and wait-all semantics.
  2. Create a software timer service driven by SysTick.
  3. Prevent missed events with sticky flags and clear-on-read rules.
  4. Demonstrate periodic timer callbacks and event-driven tasks.

2. All Theory Needed (Per-Concept Breakdown)

2.1 Event Flags and Bitmask Synchronization

Fundamentals

Event flags are synchronization primitives that use a bitmask to represent multiple independent events. Tasks can wait until any flag in a mask is set (wait-any) or until all required flags are set (wait-all). This is efficient because multiple conditions are stored in a single integer and can be manipulated atomically. Event flags are especially useful when tasks need to react to combinations of events, such as “sensor data ready” and “communication link available”.

Additional fundamentals: Event flags compress multiple conditions into a single word, which makes them efficient and composable. This is particularly useful in embedded systems where CPU cycles and memory are limited. The key design choice is how flags are cleared so that events are not lost.

Deep Dive into the concept

An event flag group maintains a 32-bit (or 64-bit) mask of flags. Setting a flag is a bitwise OR; clearing is a bitwise AND with the inverted mask. Wait-any semantics means a task unblocks when (flags & wait_mask) != 0; wait-all semantics means it unblocks when (flags & wait_mask) == wait_mask. The key design choice is what happens to flags after a task wakes: clear-on-read (consume) or keep (sticky). Sticky flags are safer when multiple tasks may need to observe the same event; clear-on-read is useful when only one consumer should react. Many RTOSes let the caller choose. For this project, you should implement both behaviors or clearly specify one and justify it.
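
To make these semantics concrete, here is a minimal sketch of the core mask operations; the names (flag_group_t, wait_satisfied) are illustrative, not taken from any particular RTOS API:

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t flags;              /* one bit per independent event */
} flag_group_t;

/* Wait-all: every bit in mask must be set; wait-any: at least one. */
static bool wait_satisfied(uint32_t flags, uint32_t mask, bool wait_all)
{
    return wait_all ? (flags & mask) == mask : (flags & mask) != 0;
}

static void flag_set(flag_group_t *g, uint32_t bits)   { g->flags |= bits;  }
static void flag_clear(flag_group_t *g, uint32_t bits) { g->flags &= ~bits; }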

Race conditions are a key risk. If a task checks the flag mask and then blocks, an ISR could set the flag between these steps, causing the event to be missed. To prevent this, the wait operation must be atomic: either the task blocks and the event remains set (sticky), or the wait checks and blocks in a critical section. Another strategy is to always leave flags set until explicitly cleared by the waiting task, ensuring no event is lost. This is the simplest and most robust approach for a learning RTOS.
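
One way to close this race is sketched below, building on the previous sketch and assuming simple enter_critical()/exit_critical() kernel helpers plus a hypothetical block_on_flags() that atomically releases the critical section while the task sleeps:

extern void enter_critical(void);   /* assumed kernel helpers */
extern void exit_critical(void);
extern void block_on_flags(flag_group_t *g, uint32_t mask, bool wait_all);

/* Check-and-block under one critical section: an ISR cannot set a flag
   between the test and the block. block_on_flags() is assumed to release
   and re-acquire the critical section atomically around the sleep. */
uint32_t flags_wait(flag_group_t *g, uint32_t mask, bool wait_all)
{
    enter_critical();
    while (!wait_satisfied(g->flags, mask, wait_all))
        block_on_flags(g, mask, wait_all);
    uint32_t satisfied = g->flags & mask;   /* snapshot while still atomic */
    exit_critical();
    return satisfied;
}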

Event flags are distinct from queues and semaphores. A queue carries data; a flag carries only the fact that something happened. This makes flags ideal for lightweight signaling. They also compose well: a task can wait on multiple conditions without needing multiple queues. In real systems, event flags are often used for state machines, power management events, and protocol signaling.

Implementing event flags requires a list of tasks waiting on the flag group, each with its own wait mask and mode (any/all). When flags are set, the kernel must check which tasks can be released. This can be done immediately in the setter (possibly from an ISR) or deferred. If the setter runs in an ISR, avoid heavy iteration; you may instead set a bit and let a task handle the release. For simplicity, a small waiter list and a short critical section are fine.
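
A sketch of the release check a setter might perform, assuming each waiter record carries its mask and mode (waiter_t and make_ready() are hypothetical names):

typedef struct waiter {
    struct waiter *next;
    uint32_t       mask;      /* bits this task is waiting for */
    bool           wait_all;  /* wait-all vs wait-any          */
    /* task handle omitted for brevity */
} waiter_t;

extern void make_ready(waiter_t *w);   /* assumed: moves the task to the ready list */

/* Called under a critical section after new flag bits are ORed in. */
static void release_waiters(flag_group_t *g, waiter_t **list)
{
    waiter_t **pp = list;
    while (*pp) {
        waiter_t *w = *pp;
        if (wait_satisfied(g->flags, w->mask, w->wait_all)) {
            *pp = w->next;    /* unlink the satisfied waiter */
            make_ready(w);
        } else {
            pp = &w->next;
        }
    }
}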

Additional depth: Event flags can also be combined with timeouts. A task might wait for a flag but only up to a certain time, after which it proceeds with a fallback action. This pattern is common in communication protocols where you wait for a response but cannot block indefinitely. Implementing timeouts requires integrating flag waits with the sleep or timer system. In a simple RTOS, you can implement this by storing both the wait mask and a wake tick in the task control block. If the timeout occurs first, the task wakes and can detect that the flags were not set.
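
A hedged sketch of the timeout variant, extending flags_wait() with a deadline; block_on_flags_until() is hypothetical, and g_tick and time_reached() are assumed from the Project 5 tick arithmetic:

extern volatile uint32_t g_tick;   /* assumed global tick counter */
extern bool time_reached(uint32_t now, uint32_t deadline);
extern void block_on_flags_until(flag_group_t *g, uint32_t mask,
                                 bool wait_all, uint32_t deadline);

/* Returns the satisfied bits, or 0 if the timeout expired first. */
uint32_t flags_wait_timeout(flag_group_t *g, uint32_t mask,
                            bool wait_all, uint32_t timeout_ticks)
{
    enter_critical();
    uint32_t deadline = g_tick + timeout_ticks;   /* wrap-safe with time_reached() */
    while (!wait_satisfied(g->flags, mask, wait_all)) {
        if (time_reached(g_tick, deadline)) {     /* woke by timeout, not flags */
            exit_critical();
            return 0;
        }
        block_on_flags_until(g, mask, wait_all, deadline);
    }
    uint32_t satisfied = g->flags & mask;
    exit_critical();
    return satisfied;
}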

Another subtlety is the choice between sticky and auto-clear semantics. Sticky flags are simpler and safer, but they can accumulate if no task clears them. Auto-clear can be more efficient but risks missed events if multiple tasks are waiting. Some RTOSes allow a hybrid: clear only the bits that satisfied the wait condition. This provides a balance between safety and efficiency. If you implement this, be careful with race conditions where multiple tasks wake at the same time; you must decide which task consumes which bits.
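
The hybrid policy reduces to one line: when a waiter is released, clear only the intersection of its mask and the current flags. A sketch, reusing the earlier hypothetical waiter record:

/* Hybrid clear: consume only the bits that satisfied this waiter.
   Bits outside w->mask remain visible to other waiters.
   Must run in the same critical section as the release. */
static void consume_satisfied_bits(flag_group_t *g, const waiter_t *w)
{
    g->flags &= ~(g->flags & w->mask);
}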

Event flags are also a good fit for ISR signaling. An ISR can set a flag and wake a task without needing to allocate a message. This reduces memory usage and avoids queue overflow. However, the tradeoff is that you lose the data payload and only signal that “something happened.” In many systems, this is enough: the task can read data directly from a hardware register or from a shared buffer protected by a mutex. This pattern should be explained in your documentation so you can choose the right primitive for each situation.
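
A minimal sketch of the ISR side; the vector name, g_uart_flags, FLAG_RX_READY, flags_set_from_isr(), and request_reschedule() are all illustrative assumptions, not a prescribed API:

extern flag_group_t g_uart_flags;          /* hypothetical flag group   */
#define FLAG_RX_READY (1u << 0)            /* hypothetical event bit    */
extern void flags_set_from_isr(flag_group_t *g, uint32_t bits);
extern void request_reschedule(void);      /* run scheduler on ISR exit */

void UART_IRQHandler(void)                 /* example vector name */
{
    /* Acknowledge the hardware and stash data in a shared buffer first;
       then signal with one flag set instead of allocating a message. */
    flags_set_from_isr(&g_uart_flags, FLAG_RX_READY);
    request_reschedule();
}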

How this fits into the projects

Event flags are implemented in Sec. 3.1 and used in Sec. 3.7 for multi-task coordination. They build on sleep and queue wake-up logic from Project 5 and Project 7.

Definitions & key terms

  • Event flags: Bitmask-based synchronization primitive.
  • Wait-any: Unblock when any bit in mask is set.
  • Wait-all: Unblock when all bits in mask are set.
  • Sticky flag: Flag remains set until explicitly cleared.

Mental model diagram (ASCII)

Flags: 0b0000_0101
Task A waits any 0b0001 -> wakes
Task B waits all 0b0101 -> wakes

How it works (step-by-step, with invariants and failure modes)

  1. Task calls wait with mask and mode.
  2. Kernel checks flags atomically.
  3. If condition met, task continues; else block.
  4. When flag set, kernel reevaluates waiters.
  5. Failure mode: missing atomicity -> missed events.

Minimal concrete example

if (flags & mask) {                    /* wait-any shown; wait-all compares == mask */
    if (clear_on_exit) flags &= ~mask; /* consume the event bits */
    return;
}
/* Not satisfied: block. The check and the block must happen inside
   one critical section, or an ISR can set the flag in between. */
block_task_on_flags(mask, mode);

Common misconceptions

  • “Flags carry data” -> they only signal events.
  • “Clearing flags immediately is always safe” -> it can cause missed events.
  • “Wait-all is same as wait-any” -> semantics are different.

Check-your-understanding questions

  1. When would you use wait-all instead of wait-any?
  2. Why can flags be missed without atomic operations?
  3. What is the difference between sticky and clear-on-read flags?

Check-your-understanding answers

  1. When multiple conditions must be true before proceeding.
  2. Because a flag can be set between check and block.
  3. Sticky stays set until cleared; clear-on-read consumes the event.

Real-world applications

  • Protocol state machines waiting for multiple events.
  • Power management signals (battery low + idle).

Where you’ll apply it

  • This project: Sec. 3.1, Sec. 3.7.2, Sec. 5.10 Phase 1.
  • Also used in: Project 10.

References

  • CMSIS-RTOS event flags documentation.
  • “Real-Time Concepts for Embedded Systems” by Qing Li, Ch. 8.

Key insights

Event flags enable efficient multi-condition synchronization with minimal overhead.

Summary

Event flags provide a compact, efficient way to signal multiple conditions to tasks without moving data.

Homework/Exercises to practice the concept

  1. Implement a flag group and test wait-any vs wait-all.
  2. Add clear-on-read and verify no missed events.
  3. Simulate two tasks waiting on overlapping masks.

Solutions to the homework/exercises

  1. Set different bits and observe which tasks wake.
  2. Use sticky flags to ensure no missed events.
  3. Verify both tasks can wake depending on set bits.

2.2 Software Timers and Timer Management

Fundamentals

Software timers are kernel-managed timers that trigger callbacks or signals at future times. They are built on the tick counter and allow you to schedule one-shot or periodic events without dedicating hardware timers to each task. A timer service typically maintains a list of timers sorted by expiration time and checks it on each tick.

Additional fundamentals: Software timers are time-based events managed by the kernel. They scale far better than dedicating hardware timers to every task, and they integrate naturally with tick-based scheduling. Their correctness depends on accurate tick arithmetic and careful handling of periodic drift.

Deep Dive into the concept

Software timers are crucial for scalable timing. Instead of each task implementing its own delay loop or polling, the kernel provides a timer API. Each timer has a period, a next-expiration tick, and a callback or signal mechanism. A one-shot timer fires once and then stops; a periodic timer reschedules itself by adding its period to its expiration time. Like sleep services, timers must handle tick wraparound and avoid drift. The best practice is to use absolute time for the next expiration: next += period rather than next = now + period. This keeps periodic timers stable even if callbacks are delayed.
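
A sketch of the timer record and the drift-free reschedule, with illustrative names (sw_timer_t is not a standard API):

#include <stdint.h>

typedef struct sw_timer {
    struct sw_timer *next_timer;   /* sorted-list link              */
    uint32_t next;                 /* absolute expiration tick      */
    uint32_t period;               /* 0 marks a one-shot timer      */
    void (*callback)(void *arg);
    void *arg;
} sw_timer_t;

/* Drift-free reschedule: anchor to the old deadline, not to "now". */
static void timer_reschedule(sw_timer_t *t)
{
    t->next += t->period;          /* next += period: phase stays stable   */
    /* NOT t->next = g_tick + t->period, which drifts under load */
}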

Timer management can be implemented with different data structures: sorted linked lists, min-heaps, or timing wheels. A sorted list is simple and works well for small numbers of timers. Each tick, you check the head of the list; if it has expired, you fire it and move to the next. This keeps per-tick overhead low when few timers expire. The tradeoff is O(n) insertion cost when adding a timer. For this project, a sorted list is perfectly acceptable.
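
Insertion is the O(n) part; a sketch, reusing the sw_timer_t record above:

/* Insert keeping the list ordered by expiration, earliest at the head.
   Callers are assumed to hold a critical section around list updates. */
static void timer_insert(sw_timer_t **head, sw_timer_t *t)
{
    sw_timer_t **pp = head;
    /* (int32_t)(a - b) <= 0 means a expires no later than b, wrap-safe */
    while (*pp && (int32_t)((*pp)->next - t->next) <= 0)
        pp = &(*pp)->next_timer;
    t->next_timer = *pp;
    *pp = t;
}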

Timer callbacks should be short and deterministic. Running long callbacks in the timer ISR (if you process timers in an ISR) is a mistake; instead, many RTOSes defer timer callbacks to a timer task. For a learning RTOS, you can implement timers as part of the SysTick ISR and only set event flags or enqueue messages, letting tasks do the heavy work. This keeps ISR time bounded. Another choice is to run timers in a dedicated timer task that wakes each tick. This is safer but consumes more CPU. The key is to decide and document your choice.
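
If you choose the timer-task model, the loop can be as small as the sketch below; tick_wait_next() is a hypothetical blocking call that wakes the task once per tick:

extern void tick_wait_next(void);   /* hypothetical: blocks until the next tick */
extern void timer_tick(void);       /* walks expired timers; see the example in Sec. 2.2 */

/* Dedicated timer task: the ISR stays short, and callbacks run at task
   level where they may safely take mutexes or allocate memory. */
void timer_task(void *arg)
{
    (void)arg;
    for (;;) {
        tick_wait_next();
        timer_tick();
    }
}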

Timer drift is a subtle bug. If you set next = now + period, and the timer callback is delayed by some time, your timer will slowly drift later and later. Using next += period avoids drift because it anchors the schedule to the original timeline. This is critical for periodic control loops and heartbeats.

Finally, software timers interact with other RTOS primitives. A timer callback might set an event flag or give a semaphore, waking tasks. This creates a clean separation between time-based events and task-level processing. In this project, you will combine timers with event flags to build multi-condition behavior.

Additional depth: Software timers can be implemented in different execution contexts. If you run timer callbacks directly in the SysTick ISR, you get immediate execution but increase ISR latency and jitter. If you defer callbacks to a timer task, you keep ISRs short but introduce a small scheduling delay. Many RTOSes choose the timer task model because it keeps timing predictable and makes callbacks safer (they can use mutexes, allocate memory, etc.). For this project, you can implement either model but should document the tradeoff and show that callbacks remain deterministic.

Timer management also interacts with power management. If you implement tickless idle, the timer service must compute the next expiration time and program a hardware timer or adjust SysTick accordingly. This is more advanced but illustrates why timer structures are important: you need to know the earliest expiration to decide how long the system can sleep. Even if you do not implement tickless idle, you can still design your timer list to make it possible later.
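
With a sorted list the earliest deadline is simply the head, so the query a tickless design would need is O(1). A minimal sketch, assuming the sw_timer_t record above:

#include <stdint.h>

/* How long may the system sleep? UINT32_MAX stands in for "no timer pending". */
static uint32_t ticks_until_next_expiry(const sw_timer_t *head, uint32_t now)
{
    if (!head)
        return UINT32_MAX;
    int32_t delta = (int32_t)(head->next - now);   /* wrap-safe difference */
    return delta > 0 ? (uint32_t)delta : 0;
}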

Another subtle issue is timer cancellation and rescheduling. If a timer is active and you want to stop it, you must remove it safely from the timer list. If you do this from an ISR while the timer list is being updated, you can corrupt the list. The safe approach is to disable interrupts or use a dedicated timer task to serialize operations. This project is a good place to practice that discipline. The more complex your timer operations become, the more important it is to have clear ownership rules and atomic updates.
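
A sketch of a safe cancel, serializing against the tick handler with the same enter_critical()/exit_critical() helpers assumed earlier:

/* Unlink a timer without racing the tick handler: interrupts are masked
   for the short list walk. From an ISR, defer this to the timer task. */
void timer_stop(sw_timer_t **head, sw_timer_t *t)
{
    enter_critical();
    for (sw_timer_t **pp = head; *pp; pp = &(*pp)->next_timer) {
        if (*pp == t) {
            *pp = t->next_timer;
            break;
        }
    }
    exit_critical();
}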

How this fits into the projects

Software timers are implemented in Sec. 3.1 and validated in Sec. 3.7.2. They build directly on tick arithmetic from Project 5.

Definitions & key terms

  • Software timer: Timer managed by the kernel using ticks.
  • One-shot: Fires once and stops.
  • Periodic: Fires repeatedly at fixed intervals.
  • Timer drift: Gradual shift in timing due to delayed callbacks.

Mental model diagram (ASCII)

Tick -> check timer list -> expired? -> fire -> reschedule

How it works (step-by-step, with invariants and failure modes)

  1. Add timer with expiration tick.
  2. Each tick, check the earliest timer.
  3. If expired, fire callback or signal.
  4. If periodic, set next = next + period.
  5. Failure mode: using now + period introduces drift.

Minimal concrete example

void timer_tick(void) {
    /* timer_list is the sorted-list head (sw_timer_t *, earliest first) */
    while (timer_list && time_reached(g_tick, timer_list->next)) {
        sw_timer_t *t = timer_list;
        timer_list = t->next_timer;          /* pop the expired head       */
        fire_timer(t);                       /* runs or defers t->callback */
        if (t->period) {                     /* periodic: period != 0      */
            t->next += t->period;            /* drift-free reschedule      */
            timer_insert(&timer_list, t);    /* keep the list sorted       */
        }
        /* one-shot timers are simply left unlinked */
    }
}

Common misconceptions

  • “Timers should run inside ISR callbacks” -> heavy work should be deferred.
  • “Now + period is fine” -> it drifts under load.
  • “Hardware timers are always better” -> software timers scale better for many events.

Check-your-understanding questions

  1. Why should periodic timers use next += period?
  2. What is the tradeoff between sorted list and timer wheel?
  3. Why should timer callbacks be short?

Check-your-understanding answers

  1. It prevents drift and keeps a stable phase.
  2. Sorted list is simple but O(n) insert; wheel is faster but more complex.
  3. Long callbacks increase jitter and can delay other timers.

Real-world applications

  • Periodic heartbeats and watchdog kicks.
  • Timeouts in communication protocols.

Where you’ll apply it

  • This project: Sec. 3.1, Sec. 3.7.2, Sec. 5.10 Phase 2.
  • Also used in: Project 10.

References

  • “Real-Time Concepts for Embedded Systems” by Qing Li, Ch. 11.
  • CMSIS-RTOS timer APIs.

Key insights

Software timers provide scalable, deterministic time events without consuming hardware timers.

Summary

A timer service built on the tick counter enables periodic and one-shot events with predictable behavior.

Homework/Exercises to practice the concept

  1. Implement two periodic timers and verify their phase stability.
  2. Add a one-shot timer that triggers an event flag.
  3. Measure drift when using now + period vs next += period.

Solutions to the homework/exercises

  1. Use GPIO toggles to visualize timing.
  2. Confirm that the waiting task wakes exactly once.
  3. Expect drift with now + period under load.

3. Project Specification

3.1 What You Will Build

An event flag subsystem with wait-any/wait-all semantics and a software timer service that fires periodic events, both integrated into your RTOS.

3.2 Functional Requirements

  1. Event Flags: Support set, clear, wait-any, wait-all.
  2. Software Timers: One-shot and periodic timers driven by SysTick.
  3. Task Wake-Up: Tasks wake on flags or timer events.

3.3 Non-Functional Requirements

  • Performance: Flag checks and timer updates < 50 us per tick.
  • Reliability: No missed events under load.
  • Usability: Events visible via UART or GPIO.

3.4 Example Usage / Output

  • LED1 toggles every 100 ms via timer.
  • Task waits on event flags and logs “READY” when multiple events occur.

3.5 Data Formats / Schemas / Protocols

  • Event flag group: uint32_t flags.
  • Timer struct: {next, period, callback}.

3.6 Edge Cases

  • Flags set while task is checking.
  • Multiple timers expiring on same tick.
  • Timer drift under load.

3.7 Real World Outcome

You can coordinate tasks based on multiple conditions and drive periodic actions with software timers, all without missing events.

3.7.1 How to Run (Copy/Paste)

make clean all
make flash

Exit codes:

  • make: 0 success, 2 build failure.
  • openocd: 0 flash success, 1 connection failure.

3.7.2 Golden Path Demo (Deterministic)

  1. Timer A sets FLAG_A every 100 ms.
  2. Timer B sets FLAG_B every 1 s.
  3. Task waits on wait-all for A and B and logs each second.
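
A minimal sketch of how this demo might be wired together; FLAG_A/FLAG_B, timer_start_periodic(), and uart_puts() are illustrative names, not a prescribed interface, and the periods assume a 1 ms tick:

#define FLAG_A (1u << 0)
#define FLAG_B (1u << 1)

static flag_group_t g_demo_flags;
static sw_timer_t timer_a, timer_b;

extern void timer_start_periodic(sw_timer_t *t, uint32_t period_ticks,
                                 void (*cb)(void *), void *arg);
extern void uart_puts(const char *s);

static void set_a(void *arg) { (void)arg; flags_set_from_isr(&g_demo_flags, FLAG_A); }
static void set_b(void *arg) { (void)arg; flags_set_from_isr(&g_demo_flags, FLAG_B); }

void demo_init(void)
{
    timer_start_periodic(&timer_a, 100,  set_a, NULL);   /* every 100 ms */
    timer_start_periodic(&timer_b, 1000, set_b, NULL);   /* every 1 s    */
}

void demo_task(void *arg)
{
    (void)arg;
    for (;;) {
        /* Wait-all: unblocks only when both sticky bits are set. */
        flags_wait(&g_demo_flags, FLAG_A | FLAG_B, true);
        flag_clear(&g_demo_flags, FLAG_A | FLAG_B);   /* consume for next round */
        uart_puts("READY\n");                         /* once per second */
    }
}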

3.7.3 Failure Demo (Deterministic)

  1. Clear flags immediately in ISR without sticky behavior.
  2. Task occasionally misses the combined event.
  3. Enable sticky flags and verify no missed wake-ups.

4. Solution Architecture

4.1 High-Level Design

SysTick -> timer service -> set flags -> tasks wait on flags

4.2 Key Components

| Component | Responsibility | Key Decisions |
|-----------|----------------|---------------|
| flags.c | Event flag operations | Sticky vs clear-on-read |
| timers.c | Timer list management | Sorted list vs wheel |
| sched.c | Wake tasks on flag/timer events | Immediate vs deferred wake |

4.3 Data Structures (No Full Code)

typedef struct {
    uint32_t flags;
} flag_group_t;

4.4 Algorithm Overview

Key Algorithm: Timer Tick

  1. Check earliest timer.
  2. If expired, set flag or enqueue callback.
  3. Reschedule periodic timers using next += period.

Complexity Analysis:

  • Time: O(k) where k is number of expired timers.
  • Space: O(n) timers.

5. Implementation Guide

5.1 Development Environment Setup

brew install arm-none-eabi-gcc openocd

5.2 Project Structure

rtos-p08/
|-- src/
|   |-- flags.c
|   |-- timers.c
|   `-- main.c
|-- include/
`-- Makefile

5.3 The Core Question You’re Answering

“How do I coordinate multiple events and periodic actions without missing signals?”

5.4 Concepts You Must Understand First

  1. Event flags and wait-any/wait-all semantics.
  2. Software timers and drift-free scheduling.

5.5 Questions to Guide Your Design

  1. Should flags be sticky or clear-on-read by default?
  2. Where should timer callbacks run (ISR vs task)?
  3. How will you prevent timer drift?

5.6 Thinking Exercise

Design a scenario where two timers trigger and a task waits for both flags.

5.7 The Interview Questions They’ll Ask

  1. “What is the difference between event flags and queues?”
  2. “How do you implement a periodic timer without drift?”
  3. “Why might timer callbacks be deferred to a task?”

5.8 Hints in Layers

Hint 1: Start with sticky flags. Sticky behavior avoids missed events while you are learning.

Hint 2: Use absolute time. Reschedule timers with next += period.

Hint 3: Keep callbacks small. Set flags in the ISR and do the processing in tasks.


5.9 Books That Will Help

| Topic | Book | Chapter |
|-------|------|---------|
| Kernel objects | “Real-Time Concepts for Embedded Systems” | Ch. 8 |
| Timers | “Real-Time Concepts for Embedded Systems” | Ch. 11 |

5.10 Implementation Phases

Phase 1: Event Flags (3-4 days)

Goals: Implement the flag group and wait logic.
Tasks: set/clear, wait-any, wait-all.
Checkpoint: Tasks wake on the correct mask.

Phase 2: Timer Service (3-4 days)

Goals: Implement the timer list and callbacks.
Tasks: add/remove timers, handle expirations.
Checkpoint: Periodic timers fire on schedule.

Phase 3: Integration (2-3 days)

Goals: Combine timers and flags.
Tasks: use timers to set flags; tasks wait on flags.
Checkpoint: Multi-condition coordination works.

5.11 Key Implementation Decisions

| Decision | Options | Recommendation | Rationale |
|----------|---------|----------------|-----------|
| Flag clearing | Sticky vs consume | Sticky | Avoid missed events |
| Timer structure | Sorted list vs wheel | Sorted list | Simpler for small timer counts |

6. Testing Strategy

6.1 Test Categories

| Category | Purpose | Examples |
|-------------------|----------------------------|------------------------------|
| Unit Tests | Flag and timer correctness | Wait-any/all scenarios |
| Integration Tests | Timer-driven flags | Heartbeat + multi-condition |
| Edge Case Tests | Multiple timer expiry | Two timers same tick |

6.2 Critical Test Cases

  1. Task waiting on wait-all wakes only when all flags set.
  2. Periodic timer maintains phase over 1000 cycles.
  3. Flags set during wait do not get missed.

6.3 Test Data

Timer A: 100 ms
Timer B: 1000 ms

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

| Pitfall | Symptom | Solution |
|--------------------------|------------------------|---------------------|
| Clearing flags too early | Missed events | Use sticky flags |
| Timer drift | Phase shift over time | Use next += period |
| Long callbacks in ISR | Increased jitter | Defer work to task |

7.2 Debugging Strategies

  • Toggle GPIO when flags set to visualize timing.
  • Log timer expirations and compare with expected schedule.

7.3 Performance Traps

  • Scanning entire timer list each tick; check only expired timers.

8. Extensions & Challenges

8.1 Beginner Extensions

  • Add flag wait timeouts.
  • Add a debug view of flag states.

8.2 Intermediate Extensions

  • Implement a timer wheel for O(1) insert.
  • Add a timer task rather than ISR-based callbacks.

8.3 Advanced Extensions

  • Support 64-bit timers for very long delays.
  • Add hierarchical flag groups for complex systems.

9. Real-World Connections

9.1 Industry Applications

  • Protocol stacks waiting on multiple events.
  • Power management systems that rely on timer events.

9.2 Industry Examples

  • CMSIS-RTOS2: Event flags and timers.
  • Zephyr: k_timer and event flags.

9.3 Interview Relevance

  • Event flags and timers are common RTOS interview questions.

10. Resources

10.1 Essential Reading

  • “Real-Time Concepts for Embedded Systems” by Qing Li.

10.2 Video Resources

  • RTOS event flags and timers tutorials.

10.3 Tools & Documentation

  • Logic analyzer for periodic signal verification.

11. Self-Assessment Checklist

11.1 Understanding

  • I can explain wait-any vs wait-all semantics.
  • I understand how to prevent timer drift.

11.2 Implementation

  • Tasks wake correctly on flag conditions.
  • Timers fire with consistent periods.

11.3 Growth

  • I can describe when to use event flags vs queues.

12. Submission / Completion Criteria

Minimum Viable Completion:

  • Event flags implemented with wait-any/wait-all.
  • Software timers fire at correct periods.

Full Completion:

  • Integration between timers and flags verified.
  • No missed events under load.

Excellence (Going Above & Beyond):

  • Timer wheel implementation.
  • Timer task decoupled from ISR.